I'm trying to test iSCSI under VMware.

   I've been told to first test the raw disk using disktest:

   disktest -PT -T30 -K32 -B128k -ID /dev/sdXYZ -h1

   disktest -PT -T30 -K32 -B128k -ID /dev/sdXYZ -h1 -D 0:100

  With this configuration I get around 90 MB/s for writes and 1900
MB/s for reads. If I add the -If flag, which is meant to fsync after each
write (this disables the cache, right?), then performance drops to
5 MB/s for writes and 34 MB/s for reads.

  For comparison, on the local drive (not the iSCSI drive) I get
374.60 MB/s for writes and 2290.25 MB/s for reads with the cache, and
8.10 MB/s for writes and 30.12 MB/s for reads without it.
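The same buffered-vs-flushed gap can be reproduced with dd alone. A minimal sketch, using a temporary file rather than the paths from the post, and assuming GNU dd (conv=fdatasync is a GNU coreutils option):

```shell
#!/bin/sh
# Compare buffered vs flushed write throughput with dd.
# File path and sizes here are illustrative, not from the original post.
TMPFILE=$(mktemp)

# Buffered write: data may land only in the page cache, so the
# reported rate can far exceed what the disk can actually sustain.
dd if=/dev/zero of="$TMPFILE" bs=128k count=256

# Flushed write: conv=fdatasync makes dd flush data to stable storage
# before reporting, so the rate is much closer to real disk speed.
dd if=/dev/zero of="$TMPFILE" bs=128k count=256 conv=fdatasync

rm -f "$TMPFILE"
```

The first run should report something in the range of your cached numbers, the second something near your fsync numbers.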

  When I use dd for filesystem benchmarking

  for writes:

  dd if=/dev/zero of=/mnt/raid1/zero bs=4096 count=1572864


  dd if=/dev/zero of=/mnt/raid1/zero bs=128k count=52428

  and reads:

  dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096


  dd if=/mnt/raid1/zero of=/dev/null bs=128k
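One caveat with the read tests: if the file was just written, dd may read it straight back out of the page cache. A sketch of dropping the cache first, assuming root access and the standard Linux /proc/sys/vm/drop_caches knob (/mnt/raid1/zero is the file from the post):

```shell
# Flush dirty pages, then drop the page cache so the read test
# hits the disk instead of RAM. Requires root.
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/raid1/zero of=/dev/null bs=128k
```

Without this step, read numbers can look like memory bandwidth rather than disk or iSCSI throughput.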

  I enabled noop scheduler:

  echo noop > /sys/block/sdc/queue/scheduler
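To confirm the change took effect, the scheduler files can be read back; the active scheduler is shown in brackets. A small sketch looping over all block devices (the sdc name above is from the post; yours may differ):

```shell
# List the I/O scheduler for every block device; the active one
# appears in brackets, e.g. "[noop] deadline cfq".
for q in /sys/block/*/queue/scheduler; do
    [ -e "$q" ] && printf '%s: %s\n' "$q" "$(cat "$q")"
done
```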

  The results I get from dd are much closer to the disktest results
with fsync. I've also enabled noatime in the mount options, but I don't
get better results.
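For reference, noatime can be toggled without a reboot via a remount; a sketch, assuming /mnt/raid1 is the mount point in question and that you have root:

```shell
# Remount with noatime so reads don't trigger atime metadata writes.
mount -o remount,noatime /mnt/raid1
# Confirm noatime now appears in the mount options.
grep /mnt/raid1 /proc/mounts
```

noatime mainly helps read-heavy metadata workloads, so it's plausible it makes little difference in a pure streaming dd test.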

  Am I getting reasonable results?


You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
For more options, visit this group at http://groups.google.com/group/open-iscsi
