On 11/24/2009 06:07 PM, Chris K. wrote:
> Hello,
>     I'm writing regarding the performance of open-iscsi on a
> 10 GbE network. On your website you posted performance results
> indicating that you reached read and write speeds of 450 MB/s.
> 
> In our environment we use Myricom dual-channel 10 GbE network cards on
> a Gentoo Linux system, connected via fiber to a SAN with a 10 GbE
> interface and a RAID 0 volume built from four 15,000 rpm SAS drives.

That is the iSCSI target machine, right?
What is the SW environment of the initiator box?

> Unfortunately, the maximum speed we are achieving is 94 MB/s. We know
> that the network interfaces can stream data at 822 MB/s (results
> obtained with netperf), and that local read performance on the disks
> is 480 MB/s. When using netcat or a direct TCP/IP connection we get
> speeds in this range; however, when we connect a volume via the iSCSI
> protocol using the open-iscsi initiator we drop to 94 MB/s (best
> result, obtained with bonnie++ and dd).
> 

What iSCSI target are you using?
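
Also note that a single dd stream with the default 512-byte blocks and one
outstanding I/O usually can't come close to saturating a 10 GbE link. Something
like the following (device name is a placeholder, assuming GNU dd) gives a more
realistic upper bound for sequential reads over the session:

  dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct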

Mike, is it still best to use the noop I/O scheduler on the initiator?
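
(For reference, and not specific to this setup: the scheduler can be checked
and switched per block device through sysfs; sdX below is a placeholder for
the iSCSI disk on the initiator.)

  # show the available schedulers; the active one is in brackets
  cat /sys/block/sdX/queue/scheduler
  # switch this device to noop (not persistent across reboots)
  echo noop > /sys/block/sdX/queue/scheduler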

Boaz
> We were wondering whether you have any recommendations for configuring
> the initiator, or perhaps the Linux system itself, to achieve higher
> throughput.
> We have set the interfaces on both ends to jumbo frames (MTU 9000) and
> modified the sysctl parameters to look as follows:
> 
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> net.ipv4.tcp_rmem = 4096 87380 16777216
> net.ipv4.tcp_wmem = 4096 65536 16777216
> net.core.netdev_max_backlog = 250000
> 
> Any help would be greatly appreciated.
> Thank you for your time and your work.
> 

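It is also worth double-checking that the jumbo MTU actually holds end to end
and that the sysctl values are applied on both sides. Roughly (interface name
and target address are placeholders):

  # verify a 9000-byte path MTU without fragmentation (8972 + 28 bytes of headers)
  ping -M do -s 8972 <target-ip>
  # apply the settings from /etc/sysctl.conf
  sysctl -p
  # set the MTU on the 10 GbE interface if not already done via the distro's network config
  ip link set dev ethX mtu 9000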