So we are logging in from 3 interfaces on the initiator side to 1 on the target side? Is the target-side portal a 1 gig or 10 gig interface?

And you are getting a total of 80 MB/s throughput, right? And if it is a 1 gig link on the target, are you expecting to get closer to 1 gig throughput (maybe around 110 MB/s)?

I think if we are sending IO down to just a single 1 gig link on the target, then that is going to be the bottleneck. But even then we should get closer to 110 MB/s.



Yes. The portal side is composed of three 1Gb interfaces, as is the client side. I'm getting around 80MB/s total. I'm expecting to get about 300MB/s on the client side (given the three 1Gb interfaces on each side of the link).
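
(For reference, the raw arithmetic: 1Gb/s divided by 8 bits per byte is 125MB/s per link, or roughly 110-115MB/s once TCP and iSCSI overhead are accounted for, so three links should top out around 330MB/s.)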


It looks like the scsi/iscsi device queues are not very full, right? Only 2-6 requests? Could you try setting the IO scheduler at the scsi level for the iscsi devices to noop?
echo noop > /sys/block/sdX/queue/scheduler
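
For example, applied to all of the iSCSI paths at once (a sketch; substitute your actual sdX device names):

for d in sda sdb sde; do
    echo noop > /sys/block/$d/queue/scheduler
    cat /sys/block/$d/queue/scheduler    # the active scheduler is shown in [brackets]
done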


Also, at the same time, try increasing nr_requests for the dm device.
echo X > /sys/block/dm-0/queue/nr_requests
It is probably at 128 already, right? If so, try increasing it to 256.
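
For example (dm-0 as above; use whatever your multipath device actually is):

cat /sys/block/dm-0/queue/nr_requests        # check the current value first
echo 256 > /sys/block/dm-0/queue/nr_requests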

And try running a workload that will generate more IO. It looks like the dm device is only getting 32 requests/IOs, so there is not much to spread across the 3 paths and keep them filled.
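
Something like fio, if you have it installed, can keep a much deeper queue in flight than a plain sequential read; the bs/iodepth values below are only a starting guess, and --rw=read keeps it read-only so it won't write to the volume:

fio --name=mpath-test --filename=/dev/dm-0 --rw=read --bs=64k \
    --ioengine=libaio --iodepth=64 --direct=1 --runtime=60 --time_based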

I've set my IO scheduler (for the dm-1 device) to noop. Here's some output from multipath -ll and hdparm using the raw /dev/sdX devices:

iscsitest-squeeze:~/san# multipath -ll
santest (36090a03830e6bacf89c664e8d0007042) dm-1 EQLOGIC,100E-00
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=3 status=active
  |- 0:0:0:0 sde  8:64   active ready  running
  |- 1:0:0:0 sdb  8:16   active ready  running
  `- 2:0:0:0 sda  8:0    active ready  running

iscsitest-squeeze:~/san# hdparm -t /dev/sde

/dev/sde:
 Timing buffered disk reads:  204 MB in  3.01 seconds =  67.78 MB/sec

iscsitest-squeeze:~/san# hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads:  238 MB in  3.02 seconds =  78.71 MB/sec
iscsitest-squeeze:~/san# hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads:  244 MB in  3.02 seconds =  80.83 MB/sec
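
Note that hdparm -t only exercises one path at a time, so each number above reflects a single link. As a rough check of the aggregate, the same test can be run against all three paths in parallel:

for d in sde sdb sda; do hdparm -t /dev/$d & done; wait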


After setting nr_requests for the dm-1 device to 256, bonnie++ reports the following:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
iscsitest-squeez 4G   104  99 131137  95 132971  97   163  99 516178  99  3367 139
Latency             83758us    316ms    119ms   52234us     594us   13421us
Version  1.96       ------Sequential Create------ --------Random Create--------
iscsitest-squeeze.g -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 26887  54 +++++ +++ +++++ +++
Latency              8265us    1499us    1600us    8764us     220us      57us
1.96,1.96,iscsitest-squeeze,1,1288145329,4G,,104,99,131137,95,132971,97,163,99,516178,99,3367,139,16,,,,,+++++,+++,+++++,+++,+++++,+++,26887,54,+++++,+++,+++++,+++,83758us,316ms,119ms,52234us,594us,13421us,8265us,1499us,1600us,8764us,220us,57us

Better, but still not quite what I'd expect from 3x1Gb links. If I down one of the interfaces, the traffic appears to balance across the remaining two interfaces and I'm left with the same total throughput. Where could the bottleneck lie?
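
For reference, watching per-path throughput while the test runs (iostat is from the sysstat package; device names are from the multipath output above) shows how the round-robin is actually spreading the IO:

iostat -m -d 2 sda sdb sde dm-1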



Mike
