Garrett,
Thanks for your comments. I did perform a number of tests with multiple
streams, e.g. with iperf, which show total throughput limited to about 3 Gbit/s
with one CPU core maxed out and the others idle. Please see
<https://www.mail-archive.com/[email protected]/msg00949.html>
Do you think this might be limited by the default value of 1 for
rx_queue_number and tx_queue_number in the ixgbe driver, or can the workload be
parallelized more effectively in some other way?
Best,
chris
On 09.08.2014 at 00:34, Garrett D'Amore <[email protected]> wrote:
> Generally, to get good performance with many NICs, you will need multiple
> streams. The problem is that the dispatch latencies and round trip times
> mean that without parallelization you won't get the best possible numbers.
> Additionally, with some products, there are hardware rings for
> parallelization, and getting full performance requires engaging multiple
> rings, which you can't do with a single stream. Try running 4-5 of those
> netcats in parallel and see what happens.
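> For example, something along these lines (an untested sketch; the host, the
> first port, and the file path are from your test, the extra ports are assumed
> free, and a matching nc listener is needed on each port on the receiving
> side):
>
>     for p in 8888 8889 8890 8891; do
>         cat /zones/test/10gb | nc -n 192.168.168.5 $p &
>     done
>     wait
>
> Or do the equivalent with iperf and its -P option for parallel streams.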
>
> Also please include your file size when posting times. I was concerned that
> perhaps you were mixing bytes and bits. (Probably not, but it comes up often
> enough that I prefer to check.)
>
>
>
>
> On Sat, Jul 19, 2014 at 6:42 AM, Chris Ferebee via smartos-discuss
> <[email protected]> wrote:
>
> I'm trying to debug a network performance issue.
>
> I have two servers running SmartOS (20140613T024634Z and 20140501T225642Z),
> one is a Supermicro dual Xeon E5649 (64 GB RAM) and the other is a dual Xeon
> E5-2620v2 (128 GB RAM). Each has an Intel X520-DA1 10GbE card, and they are
> both connected to 10GbE ports on a NetGear GS752TXS switch.
>
> The switch reports 10GbE links:
>
> 1/xg49  Enable  10G Full  10G Full  Link Up  Enable  1518  20:0C:C8:46:C8:3E  49  49
> 1/xg50  Enable  10G Full  10G Full  Link Up  Enable  1518  20:0C:C8:46:C8:3E  50  50
>
> as do both hosts:
>
> [root@90-e2-ba-00-2a-e2 ~]# dladm show-phys
> LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
> igb0         Ethernet             down       0      half      igb0
> igb1         Ethernet             down       0      half      igb1
> ixgbe0       Ethernet             up         10000  full      ixgbe0
>
> [root@00-1b-21-bf-e1-b4 ~]# dladm show-phys
> LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
> igb0         Ethernet             down       0      half      igb0
> ixgbe0       Ethernet             up         10000  full      ixgbe0
> igb1         Ethernet             down       0      half      igb1
>
> Per dladm show-linkprop, maxbw is not set on either of the net0 vnic
> interfaces.
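> (Checked with something along the lines of:
>
>     dladm show-linkprop -p maxbw net0
>
> on each host.)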
>
> And yet, as measured via netcat, throughput is just below 1 Gbit/s:
>
> [root@90-e2-ba-00-2a-e2 ~]# time cat /zones/test/10gb | nc -v -v -n 192.168.168.5 8888
> Connection to 192.168.168.5 8888 port [tcp/*] succeeded!
>
> real 1m34.662s
> user 0m11.422s
> sys 1m53.957s
>
> (In this test, 10gb is a test file that is warm in RAM and transfers via dd
> to /dev/null at approx. 2.4 GByte/s.)
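>
> (Back of the envelope: assuming 10gb is a 10 GB file, that works out to
> 10e9 bytes / 94.7 s ≈ 106 MB/s ≈ 0.85 Gbit/s; if it is 10 GiB, ≈ 113 MB/s
> ≈ 0.9 Gbit/s. Either way, roughly a tenth of the link rate.)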
>
> What could be causing the slowdown, and how might I go about debugging this?
>
> FTR, disk throughput, while not an issue here, appears to be perfectly
> reasonable, approx. 900 MB/s read performance.
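> (Measured the same way, with a dd read to /dev/null of a file that is not
> already cached, e.g. dd if=<some large uncached file> of=/dev/null bs=1024k.)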
>
> Thanks for any pointers!
>
> Chris
>