Thread: very bad network performance

    Dear experts,

    we are struggling with poor network performance on a SLES 11 SP4 installation running as a PowerLinux LPAR. The LPARs have 10 Gbit/s network adapters attached. Currently, our project communicates with AIX sandbox systems that only have 1 Gbit/s. We were able to increase throughput between those systems by 12 MB/s simply by upgrading the Linux kernel from 3.0.101-63-ppc64 to 3.0.101-77-ppc64 via zypper: ethtool -k eth0 now shows that TSO and GRO are on, which gave us the boost.
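    A quick way to verify those offload flags programmatically is to parse the ethtool -k output. The sketch below uses a hard-coded sample string that mimics the tool's output format (the feature names are real ethtool feature names; the sample values are illustrative) — on a live system you would capture the text with subprocess instead:

    ```python
    # Minimal sketch: parse `ethtool -k <iface>` output and check that
    # TSO and GRO offloads are enabled. SAMPLE mimics ethtool's output
    # format; on a real box, capture it via subprocess.run(["ethtool", "-k", "eth0"], ...).

    SAMPLE = """\
    Features for eth0:
    tcp-segmentation-offload: on
    generic-receive-offload: on
    generic-segmentation-offload: off
    """

    def offload_flags(ethtool_output):
        """Return a dict mapping feature name -> True if the feature is on."""
        flags = {}
        for line in ethtool_output.splitlines():
            # skip the "Features for eth0:" header (ends with a bare colon)
            if ":" in line and not line.rstrip().endswith(":"):
                name, _, value = line.partition(":")
                flags[name.strip()] = value.strip().startswith("on")
        return flags

    flags = offload_flags(SAMPLE)
    print(flags["tcp-segmentation-offload"])  # True
    print(flags["generic-receive-offload"])   # True
    ```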

    However, after the kernel upgrade we tested the network connection between SLES on 10 Gbit/s and a productive AIX machine on 10 Gbit/s. Both reside in the same data center, on different IBM Power servers, but in the same subnet, so traceroute shows no hops between them. We measured only 1.2 Gbit/s, whereas between two AIX machines on 10 Gbit/s we measure > 7 Gbit/s on the same network.
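    For anyone wanting to reproduce this kind of measurement, here is a minimal raw-TCP throughput test, similar in spirit to what iperf does. It is self-contained and runs over loopback only as an illustration; the post does not say which tool produced the figures above, and a real test would of course run between the two hosts:

    ```python
    # Minimal sketch: push a fixed amount of data through a TCP socket
    # and compute the achieved throughput. Loopback-only illustration.
    import socket
    import threading
    import time

    PAYLOAD = b"x" * 65536        # 64 KiB per send
    TOTAL = 16 * 1024 * 1024      # transfer 16 MiB in total

    def server(listener, result):
        conn, _ = listener.accept()
        received = 0
        while received < TOTAL:
            chunk = conn.recv(1 << 20)
            if not chunk:
                break
            received += len(chunk)
        conn.close()
        result.append(received)

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))   # any free port
    listener.listen(1)
    port = listener.getsockname()[1]

    result = []
    t = threading.Thread(target=server, args=(listener, result))
    t.start()

    client = socket.create_connection(("127.0.0.1", port))
    start = time.monotonic()
    sent = 0
    while sent < TOTAL:
        client.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    client.close()
    t.join()
    elapsed = time.monotonic() - start

    gbit_per_s = result[0] * 8 / elapsed / 1e9
    print(f"transferred {result[0]} bytes in {elapsed:.3f}s "
          f"({gbit_per_s:.2f} Gbit/s)")
    ```

    Loopback numbers only sanity-check the TCP stack itself; to compare against the AIX figures, the sender and receiver halves would run on the two machines under test.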

    When I query the devices on the SLES machine via ethtool eth0 and ethtool eth1, the supported ports are detected as FIBRE, the supported link modes are 1000baseT/Full, the advertised link modes are 1000baseT/Full, auto-negotiation is on, duplex is full, and the speed is only 1000 Mb/s. However, that can't really be the case, because as stated above we measured 1.2 Gbit/s between SLES (10 Gbit/s) and AIX (10 Gbit/s).
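    To pull those fields out of the ethtool output for comparison across interfaces, a small parser like the following works. The sample string mirrors the settings described above for eth0 (field names are real ethtool output labels); on a live system you would feed it the captured command output:

    ```python
    # Minimal sketch: extract a named setting (e.g. Speed, Advertised
    # link modes) from `ethtool <iface>` output. SAMPLE mirrors the
    # values reported in the post for eth0.

    SAMPLE = """\
    Settings for eth0:
    	Supported ports: [ FIBRE ]
    	Supported link modes:   1000baseT/Full
    	Advertised link modes:  1000baseT/Full
    	Speed: 1000Mb/s
    	Duplex: Full
    	Auto-negotiation: on
    """

    def setting(output, key):
        """Return the value of the first line starting with 'key:', or None."""
        for line in output.splitlines():
            line = line.strip()
            if line.startswith(key + ":"):
                return line.split(":", 1)[1].strip()
        return None

    print(setting(SAMPLE, "Speed"))                  # 1000Mb/s
    print(setting(SAMPLE, "Advertised link modes"))  # 1000baseT/Full
    ```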

    My question is: shouldn't SLES be able to advertise a speed of 10000 Mb/s? Why does it only advertise 1000 Mb/s?
    Last edited by dafrk; 30-Jun-2016 at 11:28.
