On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
> Hello,
>     I'm writing regarding the performance of open-iscsi on a
> 10GbE network. On your website you posted performance results
> indicating you reached read and write speeds of 450 megabytes per
> second.
> 
> In our environment we use Myricom dual-channel 10GbE network cards on
> a Gentoo Linux system connected via fiber to a 10GbE-interfaced SAN
> exporting a RAID 0 volume built from 4 15000rpm SAS drives.
> Unfortunately, the maximum speed we are achieving is 94 MB/s. We know
> that the network interfaces can stream data at 822 MB/s (results
> obtained with netperf), and we know that local read performance on the
> disks is 480 MB/s. When using netcat or a direct TCP/IP connection we
> get speeds in this range; however, when we connect a volume via the
> iSCSI protocol using the open-iscsi initiator we drop to 94 MB/s (best
> result, obtained with bonnie++ and dd).
>

What block size are you using with dd? 
Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768
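If you want to rule out page-cache effects on the initiator, GNU dd also
takes iflag=direct (the device name here is just a placeholder for whatever
your iSCSI LUN shows up as, e.g. /dev/sdb):

dd if=/dev/foo of=/dev/null bs=1024k count=32768 iflag=direct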

How's the CPU usage on both the target and the initiator when you run
that? Is there iowait?
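A quick way to check is to leave something like this running in another
terminal while the dd is going (vmstat ships with procps, iostat needs the
sysstat package); the 'wa' column in vmstat and %iowait in iostat tell you
how much time is being spent waiting on I/O:

vmstat 1
iostat -x 1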

Did you try with a nullio LUN on the target?
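
If the target software is IET (iSCSI Enterprise Target), a nullio LUN is
just a line in ietd.conf, roughly like this (syntax from memory, check
ietd.conf(5); the IQN and sector count below are made up):

Target iqn.2009-11.test:nullio
        Lun 0 Sectors=20971520,Type=nullio

That takes the disks and the backing storage completely out of the picture,
so you can see what the network + iSCSI path alone can do.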

-- Pasi
