On 18.06.2012 17:38, Ross S. W. Walker wrote:
On Jun 18, 2012, at 7:02 AM, "George Shuklin"<george.shuk...@gmail.com>  wrote:

Good day.

I'm trying to get rid of a bottleneck in a SAN environment. After some tests
I've come to the conclusion that the bottleneck is in open-iscsi or IET.

Here is a simple test to check it:

1) Set up a relatively fast array of disks (in my case about 20 SATA
drives in RAID10)
2) Set up IET in blockio mode.
3) Discover/log in to it locally (no network, no switches, just lo0); example commands are sketched after this list.
4) Run fio with the following config:
[test]
blocksize=4k
filename=/dev/sdal  #iscsi disk
rw=randwrite
direct=1
buffered=0
ioengine=libaio
iodepth=32
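
(For reference, a minimal sketch of steps 3-4 with the standard open-iscsi and fio command-line tools; the portal 127.0.0.1 and the job file name test.fio are placeholder assumptions, not values from my setup:)

# discover and log in to the local IET target over loopback
iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm -m node -p 127.0.0.1 --login
# note which /dev/sdX node the new iSCSI disk gets, put it into the
# filename= line of the job file, then run it
fio test.fio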

What I see:

sdal (open-iscsi disk) utilization is 100%, all other disks are below
50% (about 35-45%)

Changing filename from the iSCSI disk to the raw RAID device (the one
exported by IET) raises performance (in my case, with 20 SATA disks, from
4.5k IOPS to 5.4k IOPS).
I don't quite understand this; do you mean the performance of going direct to 
the native RAID was 5400 IOPS?

You could try disabling rx/tx checksums on the loopback if enabled.
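
(A minimal sketch of how to check and toggle that with ethtool; whether the loopback interface actually exposes these offloads depends on the kernel, so treat this as an assumption:)

ethtool -k lo                  # show current offload settings
ethtool -K lo rx off tx off    # disable rx/tx checksum offload, if supported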


Ok, sorry, this is not related to open-iscsi. I did some more testing and found the following data:

direct test: 5.4k IOPS
scst/vdisk_blockio: 5.3k IOPS
iet: 4.5k IOPS.

So this is definitely a problem with IET, not open-iscsi.
