What is the kernel version?

On 10/26/2010 10:28 PM, Mike Bordignon (GMI) wrote:
It looks like the scsi/iscsi device queues are not very full, right?
Only 2 - 6 requests? Could you try setting the IO scheduler at the scsi
level for the iscsi devices to noop?
echo noop > /sys/block/sdX/queue/scheduler
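For example, assuming the iscsi disks are sda, sdb and sde as in the multipath -ll output below, something like this would switch all three at once:

# switch each iscsi disk's elevator to noop
for dev in sda sdb sde; do
    echo noop > /sys/block/$dev/queue/scheduler
done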


Also, at the same time, try increasing the request queue size (nr_requests) for the dm device.
echo X > /sys/block/dm-0/queue/nr_requests
I think it is probably at 128 already, right? If so, try increasing it
to 256.
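A rough sketch, assuming the multipath device node is dm-0 (substitute dm-1 if that is what multipath created on your system):

# check the current value first, then bump it
cat /sys/block/dm-0/queue/nr_requests
echo 256 > /sys/block/dm-0/queue/nr_requests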

And try running a workload that is going to create more IO. It looks
like the dm device is only getting 32 requests/IOs, so there is not
that much to spread across 3 paths and keep them full.
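For example, a few concurrent large sequential readers against the multipath device would keep more IO in flight than a single hdparm run. This is only a sketch; dm-1 and the sizes are placeholders:

# four parallel direct-IO readers, each reading a different 4 GB region
for i in 0 1 2 3; do
    dd if=/dev/dm-1 of=/dev/null bs=1M count=4096 iflag=direct skip=$((i * 4096)) &
done
wait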

I've set my IO scheduler (for the dm-1 device) to noop.


You normally have to switch all queues (scsi and dm ones). But since we're not even getting throughput equal to the iscsi disks, that can wait.
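When you do switch them all, it's worth verifying that both layers took the change; the active scheduler is shown in brackets. A quick check, assuming the device names below:

# print filename:scheduler for the scsi disks and the dm device
grep . /sys/block/sd[abe]/queue/scheduler /sys/block/dm-1/queue/scheduler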

Here's some output from multipath -ll and hdparm using the raw /dev/sdX devices:

iscsitest-squeeze:~/san# multipath -ll
santest (36090a03830e6bacf89c664e8d0007042) dm-1 EQLOGIC,100E-00
size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=3 status=active
|- 0:0:0:0 sde 8:64 active ready running
|- 1:0:0:0 sdb 8:16 active ready running
`- 2:0:0:0 sda 8:0 active ready running

iscsitest-squeeze:~/san# hdparm -t /dev/sde

/dev/sde:
Timing buffered disk reads: 204 MB in 3.01 seconds = 67.78 MB/sec

iscsitest-squeeze:~/san# hdparm -t /dev/sdb

/dev/sdb:
Timing buffered disk reads: 238 MB in 3.02 seconds = 78.71 MB/sec

iscsitest-squeeze:~/san# hdparm -t /dev/sda

/dev/sda:
Timing buffered disk reads: 244 MB in 3.02 seconds = 80.83 MB/sec

After setting the queue depth (nr_requests) for the dm-1 device to 256, bonnie++
reports the following:


You are running this on the disk and not on a FS, right?

Did you also increase the multipath.conf rr_min_io?
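For reference, rr_min_io (the number of IOs sent down a path before round-robin moves to the next one) is set in /etc/multipath.conf, e.g. in a device section; the vendor/product below are taken from your multipath -ll output and the value is only a placeholder:

devices {
    device {
        vendor  "EQLOGIC"
        product "100E-00"
        # IOs per path before switching; placeholder value
        rr_min_io 100
    }
}

After editing, reload the maps (e.g. with multipath -r).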


Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
iscsitest-squeez 4G 104 99 131137 95 132971 97 163 99 516178 99 3367 139
Latency 83758us 316ms 119ms 52234us 594us 13421us
Version 1.96 ------Sequential Create------ --------Random Create--------
iscsitest-squeeze.g -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ 26887 54 +++++ +++ +++++ +++
Latency 8265us 1499us 1600us 8764us 220us 57us
1.96,1.96,iscsitest-squeeze,1,1288145329,4G,,104,99,131137,95,132971,97,163,99,516178,99,3367,139,16,,,,,+++++,+++,+++++,+++,+++++,+++,26887,54,+++++,+++,+++++,+++,83758us,316ms,119ms,52234us,594us,13421us,8265us,1499us,1600us,8764us,220us,57us


Better, but still not quite what I'd expect for 3x1Gb links. If I down
one of the interfaces, the traffic appears to balance across the
remaining (two) interfaces and I'm left with the same throughput. Where
could the bottleneck lie?


It could be the disk. If you just run the 3 hdparms at the same time, does the combined throughput equal the 67+78+80 MB/s, or is it closer to what you get with the multipath disk IO test?
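Something along these lines (device names taken from the multipath -ll output above) would kick off the three reads concurrently:

for dev in sda sdb sde; do
    hdparm -t /dev/$dev &
done
wait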
