Benny Halevy wrote:
> On Feb. 02, 2008, 19:18 +0200, Miguel Gonzalez Castaños <[EMAIL PROTECTED]> 
> wrote:
>   
>> Miguel Gonzalez Castaños wrote:
>>     
>>> Hi,
>>>
>>>   Mike Christie recommended that I run raw benchmarks with 
>>> disktest:
>>>  
>>>   for reads:
>>>
>>>   disktest -PT -T30  -K32 -B128k -ID /dev/sdXYZ -h1
>>>
>>>   for writes:
>>>
>>>   disktest -PT -T30  -K32 -B128k -ID /dev/sdXYZ -h1 -D 0:100
>>>
>>>   I'm testing virtual machines using iSCSI. Virtual Server has a 
>>> bottleneck with the network emulation. It works fine for the rest of the 
>>> services but not for iSCSI.
>>>
>>>   I've tried with VMware, and honestly, the best performance I get is 
>>> by using the iSCSI Windows initiator and creating the virtual hard drive 
>>> on the Windows partition that I create through iSCSI.
>>>
>>>   However, I'm still trying to compare direct iSCSI from Linux against 
>>> virtual hard drives created on iSCSI Windows partitions, this time going 
>>> through the filesystem. I've tried using:
>>>
>>>   dd if=/dev/zero of=/mnt/virtualdata1/zero bs=4096 count=1572864
>>>
>>>   dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096
>>>
>>>   and then I get very low performance (around 10-14 MB/s) or even less.
>>>
>>>   While testing raw performance the difference is huge:
>>>
>>>    disk on iSCSI Windows Reads: 1759.08MB/s Writes: 93.70MB/s
>>>       
>
> Read performance seems suspiciously high here, as if you're reading
> from cache and not from the device.
>   
How can I avoid that? By unmounting and mounting again?
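Or would something like this do the trick? Just a guess on my part: dropping 
the page cache before the read, or bypassing it with direct I/O (assuming a 
2.6.16+ kernel for drop_caches and a GNU dd that supports iflag=direct):

   sync
   echo 3 > /proc/sys/vm/drop_caches
   dd if=/mnt/virtualdata1/zero of=/dev/null bs=4096 iflag=direct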
>   
>>>  
>>>    disk using open-iscsi  Reads: 35.84MB/s Writes: 57.39MB/s
>>>
>>>     I've tried changing the MTU, and only by changing the scheduler to 
>>> noop do I get better read performance.
>>>
>>>     But when it comes to the filesystem, performance is apparently much 
>>> the same. Maybe dd is not very reliable? (I've tested it, killed the 
>>> process after about one minute, and the numbers vary a lot.)
>>>       
>
> dd is very reliable, and you should not kill it; let it finish.
> This is especially important for writes, where you want the time spent
> closing the file to be taken into account.  The bulk of the time may be
> spent flushing the data to disk on close (i.e. dd may fill the cache
> quickly if you write little enough data and then wait for it to be
> flushed when the file is either fsync'ed or closed).
>   
I'm creating a zero-filled file of about 6 GB.
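For what it's worth, here is roughly how I could time the write so that the 
flush is counted as well (just a sketch, assuming a GNU dd that supports 
conv=fsync):

   time dd if=/dev/zero of=/mnt/virtualdata1/zero bs=4096 count=1572864 conv=fsync

If conv=fsync isn't available, running sync right after dd and timing the two 
together should be close enough.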
>   
>>>   
>>>       
>> I've found that changing the mount options of ext3 to noatime,nodiratime 
>> improves performance a lot. Does anyone know if it is a good idea to use 
>> these settings?
>>     
>
> Keeping (persistent) track of file/dir access times causes writes
> even during read-only access, and that is probably what causes the
> performance drop.
>
> Many users do not need to keep track of file and directory access times so
> these mount options are acceptable.
>
> However, filesystem writes are typically cached so they can be
> optimized and amortized over a larger number of operations.
> Can you please try (much) bigger block sizes? 64k, 256k, 1024k maybe?
>   
It is weird: I have tried different block sizes and I still get similar 
results. Also, the gap between read and write performance varies a lot 
(from 90 MB/s for reads down to 5 MB/s for writes), while with disktest I 
got around 90 MB/s for writes and more for reads (which you say is probably 
cached).
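For reference, this is the kind of sweep I've been running (just a sketch, 
with the count chosen so that each run still writes roughly 6 GB):

   for bs in 64k 256k 1024k; do
       dd if=/dev/zero of=/mnt/virtualdata1/zero bs=$bs count=$((6*1024*1024/${bs%k}))
   done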

Also, I haven't been able to reproduce the increase in performance with 
the noatime settings. Is it really noticeable with dd filling up a big 
zero file?
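For completeness, this is roughly how I'm toggling the option between runs, 
assuming the filesystem is already mounted at /mnt/virtualdata1:

   mount -o remount,noatime,nodiratime /mnt/virtualdata1

and then repeating the dd tests above.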

I'm really puzzled.

Miguel


