Hello list!

I have an LVM volume (on a single SATA disk) exported with IET like this:

Target iqn.2001-04.com.test:vol1
        Lun 0 Path=/dev/vg1/test1,Type=fileio
        InitialR2T              No
        ImmediateData           Yes

I have not modified read-ahead or other disk settings on the target server.
The target is using the deadline elevator/scheduler.

The open-iscsi initiator (CentOS 5.1, kernel 2.6.18-53.1.14.el5PAE, with the
default open-iscsi that comes with the distro) sees that volume as /dev/sda.

I'm testing performance with different read-ahead settings on the initiator.

Can somebody explain why I see these throughput changes? 

# blockdev --setra 256 /dev/sda
# dd if=/dev/sda of=/dev/null bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 45.0601 seconds, 47.7 MB/s

# blockdev --setra 512 /dev/sda
# dd if=/dev/sda of=/dev/null bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 50.7867 seconds, 42.3 MB/s

# blockdev --setra 1024 /dev/sda
# dd if=/dev/sda of=/dev/null bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 36.0101 seconds, 59.6 MB/s

# blockdev --setra 2048 /dev/sda
# dd if=/dev/sda of=/dev/null bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 56.8964 seconds, 37.7 MB/s

# blockdev --setra 4096 /dev/sda
# dd if=/dev/sda of=/dev/null bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 147.388 seconds, 14.6 MB/s
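As a sanity check on dd's reported rates (dd uses decimal megabytes, i.e.
bytes / seconds / 1e6), the figures can be recomputed from the byte count
and the elapsed times above:

```shell
# Recompute dd's MB/s figures from the runs above.
# dd reports decimal megabytes: bytes / seconds / 1e6.
for t in 45.0601 50.7867 36.0101 56.8964 147.388; do
    awk -v b=2147483648 -v s="$t" 'BEGIN { printf "%.1f MB/s\n", b / s / 1e6 }'
done
```

This reproduces the 47.7, 42.3, 59.6, 37.7 and 14.6 MB/s that dd printed,
so the numbers are at least internally consistent.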


I repeated all of the tests above multiple times and the results were
reproducible.

I also tested with 4k and 1024k block sizes; the results were about the same
with both.


# blockdev --setra 1024 /dev/sda
# dd if=/dev/sda of=/dev/null bs=4k count=524288
524288+0 records in
524288+0 records out
2147483648 bytes (2.1 GB) copied, 35.3623 seconds, 60.7 MB/s

# blockdev --setra 4096 /dev/sda
# dd if=/dev/sda of=/dev/null bs=4k count=524288
524288+0 records in
524288+0 records out
2147483648 bytes (2.1 GB) copied, 146.793 seconds, 14.6 MB/s


Between test runs I dropped the caches on both the initiator and the target
with: "echo 3 > /proc/sys/vm/drop_caches"
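For reference, the whole sweep (set read-ahead, drop caches, timed read) can
be driven from one small script. A sketch, assuming the volume shows up as
/dev/sda and this runs as root (note that --setra takes 512-byte sectors):

```shell
#!/bin/sh
# Sweep initiator read-ahead values and time a 2 GB sequential read.
# /dev/sda and the value list are assumptions -- adjust for your setup;
# remember to also drop caches on the target between runs.
DEV=/dev/sda
for ra in 256 512 1024 2048 4096; do
    blockdev --setra "$ra" "$DEV"
    sync
    echo 3 > /proc/sys/vm/drop_caches   # drop initiator page cache
    echo "read-ahead: $ra sectors"
    dd if="$DEV" of=/dev/null bs=1024k count=2048 2>&1 | tail -1
done
```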

I'm using the noop elevator/scheduler on /dev/sda, but changing it to deadline
or cfq didn't make any difference. nr_requests and queue_depth are at their
defaults.
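In case it helps anyone reproduce this, the queue settings I mention can be
inspected (and changed) through sysfs; the paths assume the iSCSI disk shows
up as sda:

```shell
cat /sys/block/sda/queue/scheduler      # active scheduler shown in brackets
echo noop > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/nr_requests    # block-layer request queue size
cat /sys/block/sda/device/queue_depth   # SCSI device queue depth
```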

The initiator box had about 50% CPU free during all the tests.

One thing I noticed: when throughput was better, there was less iowait and
more time spent in "si" and "sy" (from top output). When throughput was worse,
there was more iowait and less time in "si" and "sy".


top output with read-ahead of 4096:
Cpu(s):  0.0%us,  1.5%sy,  0.0%ni, 49.5%id, 47.0%wa,  0.0%hi,  2.0%si, 0.0%st

top output with read-ahead of 1024:
Cpu(s):  0.5%us,  3.0%sy,  0.0%ni, 49.8%id, 36.8%wa,  0.0%hi, 10.0%si, 0.0%st


The target box has 2G of RAM and the initiator has 4G.
Both are CentOS 5.1 32-bit with the latest updates installed.
The target software is IETD v0.4.16.

Comments are very welcome :) I'd like to understand why this happens and
what the limiting factor is.

-- Pasi

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~----------~----~----~----~------~----~------~--~---