You are using dd to test it. Rather than dd, hdparm, or iostat, I would suggest
tools like fio, bonnie, iozone, or iometer. Please repeat your experiment with
one of these tools and let us know the results. I wouldn't expect iSCSI to give
better throughput, because it has TCP/IP overhead.
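
For example, a minimal fio run for sequential throughput against one of the
exported devices might look like the sketch below. The device path and job
parameters are only placeholders for your setup, and the write job is
destructive (it overwrites the start of the device), just like the dd writes
in your script.

# hypothetical fio sequential test; adjust --filename and --size as needed
dev=/dev/etherd/e0.0

# sequential read, 1 GiB, direct I/O so the page cache is bypassed
fio --name=seqread --filename=${dev} --rw=read --bs=1M --size=1g \
    --ioengine=libaio --iodepth=4 --direct=1

# sequential write, 1 GiB (destructive!)
fio --name=seqwrite --filename=${dev} --rw=write --bs=1M --size=1g \
    --ioengine=libaio --iodepth=4 --direct=1

If you do stay with dd, adding iflag=direct for reads and oflag=direct for
writes at least keeps the page cache out of the measurement, similar to what
the drop_caches calls in your script are trying to do.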

On Sat, Apr 11, 2009 at 4:34 AM, Billy Crook <billycr...@gmail.com> wrote:

> I wrote a small script to compare a LW16800 (e0.0) and vblade (e9.0)
> and iscsi to the same disk.  It reads/writes a GB from two WDC
> WD1001FALS-00J7B0 drives ten times.  Both drives are sitting out of
> chassis on the same shelf next to each other, with direct cooling.  I
> drop_caches on the initiator and on the vblade and iscsi target just
> before each test.
>
> iscsi and vblade export the same disk, /dev/sdb.  Both the initiator and
> target are otherwise idle when performing these tests.  The initiator,
> and targets are connected to the same gigabit switch.  Here is my
> script.
>
> #! /bin/bash
>
> size=1024
> iters=10
>
> for target in /dev/etherd/e0.0 /dev/etherd/e9.0 \
>         /dev/disk/by-path/ip-192.168.171.180:3260-iscsi-testingiscsi-lun-1
> do
>        # wake up disks in case they went to sleep
>        dd if=/dev/zero of=${target} bs=1M count=1 2> /dev/null
>
>        for op in reading writing
>        do
>                echo ${op} ${size}M on ${target} ${iters} times
>                for (( iter=0 ; iter < iters ; iter++ ))
>                do
>                        sync
>                        echo 3 > /proc/sys/vm/drop_caches
>                        su bcrook -c 'ssh r...@192.168.171.180 sync'
>                        su bcrook -c 'ssh r...@192.168.171.180 sh -c "echo 3 > /proc/sys/vm/drop_caches"'
>                        dd if=$( if [ ${op} == "reading" ] ; then echo ${target}; else echo /dev/zero; fi ) \
>                                of=$( if [ ${op} == "writing" ] ; then echo ${target}; else echo /dev/null; fi ) \
>                                bs=1M count=${size} 2>&1 | grep -v " records "
>                done
>        done
> done
>
>
>
> And here are the results.
>
>
>
> [r...@zero ~]# ./aoebenchtest
> reading 1024M on /dev/etherd/e0.0 10 times
> 1073741824 bytes (1.1 GB) copied, 24.8476 s, 43.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.6179 s, 43.6 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.9608 s, 43.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.7051 s, 43.5 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.7361 s, 43.4 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.8583 s, 43.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.89 s, 43.1 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.728 s, 43.4 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.6888 s, 43.5 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.9267 s, 43.1 MB/s
> writing 1024M on /dev/etherd/e0.0 10 times
> 1073741824 bytes (1.1 GB) copied, 20.6834 s, 51.9 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.7419 s, 51.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.661 s, 52.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.6506 s, 52.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.5523 s, 52.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.5728 s, 52.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.5807 s, 52.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.6596 s, 52.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.6352 s, 52.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.6348 s, 52.0 MB/s
> reading 1024M on /dev/etherd/e9.0 10 times
> 1073741824 bytes (1.1 GB) copied, 20.5144 s, 52.3 MB/s
> 1073741824 bytes (1.1 GB) copied, 19.0654 s, 56.3 MB/s
> 1073741824 bytes (1.1 GB) copied, 19.4408 s, 55.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 18.9759 s, 56.6 MB/s
> 1073741824 bytes (1.1 GB) copied, 21.4898 s, 50.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.7371 s, 51.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.9078 s, 51.4 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.7329 s, 51.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.8393 s, 51.5 MB/s
> 1073741824 bytes (1.1 GB) copied, 20.4598 s, 52.5 MB/s
> writing 1024M on /dev/etherd/e9.0 10 times
> 1073741824 bytes (1.1 GB) copied, 58.7555 s, 18.3 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.2597 s, 18.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.1691 s, 18.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.3913 s, 18.7 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.3032 s, 18.7 MB/s
> 1073741824 bytes (1.1 GB) copied, 58.9277 s, 18.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.3344 s, 18.7 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.1933 s, 18.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 57.3323 s, 18.7 MB/s
> 1073741824 bytes (1.1 GB) copied, 58.9522 s, 18.2 MB/s
> reading 1024M on
> /dev/disk/by-path/ip-192.168.171.180:3260-iscsi-testingiscsi-lun-1 10
> times
> 1073741824 bytes (1.1 GB) copied, 21.4272 s, 50.1 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.6282 s, 45.4 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.0691 s, 44.6 MB/s
> 1073741824 bytes (1.1 GB) copied, 24.6449 s, 43.6 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.9868 s, 44.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.9808 s, 44.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.573 s, 45.5 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.8478 s, 45.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.7493 s, 45.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 23.7931 s, 45.1 MB/s
> writing 1024M on
> /dev/disk/by-path/ip-192.168.171.180:3260-iscsi-testingiscsi-lun-1 10
> times
> 1073741824 bytes (1.1 GB) copied, 17.38 s, 61.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.1432 s, 62.6 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.2576 s, 62.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.5517 s, 61.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.6609 s, 60.8 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.2669 s, 62.2 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.3945 s, 61.7 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.6162 s, 61.0 MB/s
> 1073741824 bytes (1.1 GB) copied, 17.7097 s, 60.6 MB/s
> 1073741824 bytes (1.1 GB) copied, 18.1907 s, 59.0 MB/s
>
>
> It looks like iSCSI is outperforming both vblade and the hardware AoE
> board, with the exception of a roughly 5 MB/s read advantage that vblade
> has over iSCSI on the same disk in the same target host.  Is there some
> flaw in my testing?  Can I improve AoE performance somehow?  What could
> make writes so slow with vblade?  I'll switch the drives around and test
> again just to make sure one of them isn't slower than the other.
_______________________________________________
Aoetools-discuss mailing list
Aoetools-discuss@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/aoetools-discuss
