Thank you for your response. The SAN is a 10GbE Nimbus, which I believe
uses iscsitarget as its target.
The switch is a Cisco Nexus 5010 set to jumbo frames and flow control.
Working with Cisco, we have run TCP/IP performance tests that prove
this setup works. Furthermore, using netcat and dd together we have
achieved speeds of around 200 MB/s. That is far from the 822 MB/s
shown in our testing with netperf and in Cisco's performance tests,
but it is well above the 94 MB/s we are getting with iSCSI, which is
effectively gigabit speed, not 10GbE.
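For what it's worth, a quick back-of-envelope check (my arithmetic, not a measurement) shows why 94 MB/s looks like gigabit wire speed:

```shell
# Raw line rate in decimal MB/s = bits per second / 8 bits-per-byte / 10^6
echo "1 GbE:  $(( 1000000000 / 8 / 1000000 )) MB/s"    # 125 MB/s ceiling
echo "10GbE: $(( 10000000000 / 8 / 1000000 )) MB/s"    # 1250 MB/s ceiling
```

After TCP/IP and iSCSI header overhead, a fully saturated 1 GbE link lands in the ~110 MB/s range, so 94 MB/s is suspiciously close to a single-gigabit ceiling.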

I am not familiar with the noop I/O scheduler. Where exactly is this
set, and what are its implications?
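From what I have found so far, it looks like the elevator is selected per block device through sysfs; a sketch (the device name sdb is just an example):

```shell
# List the elevator for each disk; the active one is shown in brackets
cat /sys/block/sd*/queue/scheduler

# Switch an example device (sdb here) to noop; requires root
echo noop > /sys/block/sdb/queue/scheduler
```

Please correct me if the initiator should be configured differently.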

Thank you once again for your help.

On Wed, Nov 25, 2009 at 4:11 AM, Boaz Harrosh <> wrote:
> On 11/24/2009 06:07 PM, Chris K. wrote:
>> Hello,
>>     I'm writing in regards to the performance with open-iscsi on a
>> 10gbe network. On your website you posted performance results
>> indicating you reached read and write speeds of 450 MegaBytes per
>> second.
>> In our environment we use Myricom dual channel 10gbe network cards on
>> a gentoo linux system connected via fiber to a 10gbe interfaced SAN
>> with a raid 0 volume mounted with 4 15000rpm SAS drives.
> That is the iscsi-target machine, right?
> What is the SW environment of the initiator box?
>> Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
>> know that the network interfaces can stream data at 822 MB/s (results
>> obtained with netperf). We know that local read performance on the
>> disks is 480 MB/s. When using netcat or a direct TCP/IP connection we
>> get speeds in this range; however, when we connect a volume via the
>> iSCSI protocol using the open-iscsi initiator, we drop to 94 MB/s
>> (best result, obtained with bonnie++ and dd).
> What iscsi target are you using?
> Mike, is it still best to use no-op-io-scheduler on initiator?
> Boaz
>> We were wondering if you would have any recommendations for
>> configuring the initiator, or perhaps the Linux system, to achieve
>> higher throughput.
>> We have set the interfaces on both ends to jumbo frames (mtu 9000).
>> We have also modified the sysctl parameters to look as follows:
>> net.core.rmem_max = 16777216
>> net.core.wmem_max = 16777216
>> net.ipv4.tcp_rmem = 4096 87380 16777216
>> net.ipv4.tcp_wmem = 4096 65536 16777216
>> net.core.netdev_max_backlog = 250000
>> Any help would be greatly appreciated.
>> Thank you for your time and your work.
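
For anyone trying the same tuning, the sysctl values quoted above can be applied at runtime like this (requires root; add the same lines to /etc/sysctl.conf to persist across reboots):

```shell
# Enlarge TCP buffers for a high-bandwidth 10GbE path
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
sysctl -w net.core.netdev_max_backlog=250000
```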

