Allan,

The switch reports 10GbE (see below, ports "1/xg49" and "1/xg50" at "10G Full").
Each server has an Intel X520-DA1 (I gather that is one of the preferred cards
for 10GbE on Illumos). One is connected to the NetGear switch with a Cisco SFP+
direct-attach cable, the other over fiber with Cisco SFP+ SR modules. The switch
reports no errors on either port, so I think physical connectivity is OK.
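
From the host side, the per-link counters can be double-checked during a
transfer with something along these lines (IERRORS/OERRORS should stay at zero
if the physical layer really is clean):

    dladm show-link -s -i 1 ixgbe0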

I can perfectly well understand that I may not get 1 GByte/s throughput from
server to server over a single stream without some tuning, but I do expect to
see more than 100 MByte/s; I get that much just as easily between the Intel
E1000 ports.
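
(For scale: assuming the 10gb test file really is about 10 GB, the single-stream
netcat run quoted below moves it in roughly 95 s, i.e. about 105 MByte/s, or
just under 1 Gbit/s, essentially 1GbE line rate.)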

Yes, aggregate throughput is higher, which is OK for the individual clients, 
but I do want to run backups between the two servers at 10GbE speeds.

FWIW, running 12 streams in parallel, the first one finishes after 3:14, which 
corresponds to roughly 50 MByte/s throughput. Most of the others take 5-6 
minutes to complete. Total throughput might be 400-500 MByte/s, still pretty 
weak.
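
Something along these lines should reproduce the parallel test (the port range
is arbitrary, and each port needs a matching listener on the far side, e.g.
nc -l <port> > /dev/null, with exact flags depending on the nc variant):

    p=8888
    while [ $p -lt 8900 ]; do
        cat /zones/test/10gb | nc -n 192.168.168.5 $p &
        p=$((p + 1))
    done
    wait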

Thanks,
Chris


Am 20.07.2014 um 14:33 schrieb Allan McAleavy via smartos-discuss 
<[email protected]>:

> Check that your network switch is running at the same rate as your NIC. Also
> run more streams; the card can only do what is asked of it.
> 
> On 19 Jul 2014 23:41, "Nick Perry via smartos-discuss" 
> <[email protected]> wrote:
> Some interesting suggestions here: 
> http://www.solarisinternals.com/wiki/index.php/Networks
> 
> 
> On 19 July 2014 22:16, Nick Perry <[email protected]> wrote:
> Hi Chris.
> 
> How much improvement do you get with jumbo frames?
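> 
> For example, something like this (it may fail while vnics are plumbed on top,
> and the switch ports appear to report a 1518-byte frame size, so jumbo frames
> would need enabling on the switch side as well):
> 
>     dladm set-linkprop -p mtu=9000 ixgbe0
>     dladm show-linkprop -p mtu ixgbe0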
> 
> Can you achieve significantly higher output if you try multiple streams in 
> parallel?
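> 
> If iperf is available (e.g. from pkgsrc), parallel streams are easy to test;
> the target address below is the same one used in the netcat test, and the
> stream count is arbitrary:
> 
>     iperf -s                               # on the receiving host
>     iperf -c 192.168.168.5 -P 8 -t 30      # sender, 8 parallel streams, 30 s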
> 
> During the test are there any CPU cores with very low idle time?
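> 
> For example, while a transfer is running (the idl column per CPU is the
> interesting one, and intrstat shows whether all ixgbe interrupts land on a
> single CPU):
> 
>     mpstat 1
>     intrstat 1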
> 
> Depending on the answers to the above it might be interesting to see if there 
> is any improvement by increasing rx_queue_number and tx_queue_number on the 
> ixgbe driver.
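> 
> A rough sketch of what that might look like; the values are only illustrative,
> and on SmartOS /kernel/drv lives in the ramdisk platform image, so making the
> change stick may need extra steps:
> 
>     # /kernel/drv/ixgbe.conf
>     rx_queue_number = 8;
>     tx_queue_number = 8;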
> 
> Regards,
> 
> Nick
> 
> 
> On 19 July 2014 14:42, Chris Ferebee via smartos-discuss 
> <[email protected]> wrote:
> 
> I'm trying to debug a network performance issue.
> 
> I have two servers running SmartOS (20140613T024634Z and 20140501T225642Z), 
> one is a Supermicro dual Xeon E5649 (64 GB RAM) and the other is a dual Xeon 
> E5-2620v2 (128 GB RAM). Each has an Intel X520-DA1 10GbE card, and they are 
> both connected to 10GbE ports on a NetGear GS752TXS switch.
> 
> The switch reports 10GbE links:
> 
> 1/xg49   Enable   10G Full   10G Full   Link Up   Enable   1518   20:0C:C8:46:C8:3E   49   49
> 1/xg50   Enable   10G Full   10G Full   Link Up   Enable   1518   20:0C:C8:46:C8:3E   50   50
> 
> as do both hosts:
> 
> [root@90-e2-ba-00-2a-e2 ~]# dladm show-phys
> LINK     MEDIA      STATE   SPEED   DUPLEX   DEVICE
> igb0     Ethernet   down    0       half     igb0
> igb1     Ethernet   down    0       half     igb1
> ixgbe0   Ethernet   up      10000   full     ixgbe0
> 
> [root@00-1b-21-bf-e1-b4 ~]# dladm show-phys
> LINK     MEDIA      STATE   SPEED   DUPLEX   DEVICE
> igb0     Ethernet   down    0       half     igb0
> ixgbe0   Ethernet   up      10000   full     ixgbe0
> igb1     Ethernet   down    0       half     igb1
> 
> Per dladm show-linkprop, maxbw is not set on either of the net0 vnic 
> interfaces.
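> 
> For reference, that was checked with something like:
> 
>     dladm show-linkprop -p maxbw net0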
> 
> And yet, as measured via netcat, throughput is just below 1 Gbit/s:
> 
> [root@90-e2-ba-00-2a-e2 ~]# time cat /zones/test/10gb | nc -v -v -n 192.168.168.5 8888
> Connection to 192.168.168.5 8888 port [tcp/*] succeeded!
> 
> real            1m34.662s
> user            0m11.422s
> sys             1m53.957s
> 
> (In this test, 10gb is a test file that is warm in RAM and transfers via dd 
> to /dev/null at approx. 2.4 GByte/s.)
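> 
> The warm-cache figure comes from a plain dd read along these lines (the block
> size is an assumption):
> 
>     dd if=/zones/test/10gb of=/dev/null bs=1M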
> 
> What could be causing the slowdown, and how might I go about debugging this?
> 
> FTR, disk throughput, while not an issue here, appears to be perfectly 
> reasonable, approx. 900 MB/s read performance.
> 
> Thanks for any pointers!
> 
> Chris
> 

