  I've been advised by Mike Christie to run raw benchmarks with disktest;

  for reads:

  disktest -PT -T30  -K32 -B128k -ID /dev/sdXYZ -h1

  for writes:

  disktest -PT -T30  -K32 -B128k -ID /dev/sdXYZ -h1 -D 0:100

  I'm testing virtual machines that use iSCSI. Virtual Server has a 
bottleneck in its network emulation; it works fine for the rest of the 
services, but not for iSCSI.

  I've tried with VMware, and honestly, the best performance I get is 
by using the Windows iSCSI initiator and creating the virtual hard 
drive on the Windows partition that I create over iSCSI.

  However, I'm still trying to compare direct iSCSI from Linux against 
virtual hard drives created on iSCSI-backed Windows partitions, this 
time going through the filesystem. I've tried:

  dd if=/dev/zero of=/mnt/virtualdata1/zero bs=4096 count=1572864

  dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096

  and then I get very low performance (around 10-14 MB/s), or even less.
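
  One thing I suspect (just a guess on my part) is that the page cache 
is skewing those dd numbers: without a sync the write figure includes 
data still sitting in RAM, and the read back may be served from cache. 
Something along these lines should give more honest numbers (same path 
and sizes as above; conv=fdatasync, iflag=direct and drop_caches are 
the standard knobs for this):

  dd if=/dev/zero of=/mnt/virtualdata1/zero bs=4096 count=1572864 conv=fdatasync
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096 iflag=direct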

  When testing raw performance, the difference is huge:

   disk on iSCSI Windows   Reads: 1759.08 MB/s   Writes: 93.70 MB/s
   disk using open-iscsi   Reads:   35.84 MB/s   Writes: 57.39 MB/s

    I've tried changing the MTU, and only switching the I/O scheduler 
to noop gives me better read performance.
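
    For reference, this is roughly how I'm switching the scheduler and 
the MTU (sdX and ethX here are just placeholders for the actual iSCSI 
disk and network interface):

  echo noop > /sys/block/sdX/queue/scheduler
  ifconfig ethX mtu 9000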

    But when it comes to the filesystem, performance is apparently 
much the same on both. Maybe dd is not very reliable? (I've run it, 
killed the process after about a minute, and the numbers vary a lot.)
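
    To cross-check what dd reports, I'm also planning to watch the 
block device directly while the test runs, with something like iostat 
from the sysstat package:

  iostat -x 1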



