Hello Grant,

Friday, May 25, 2007, 6:54:04 PM, you wrote:

>> It would also be worthwhile doing something like the following to
>> determine the max throughput the H/W RAID is giving you:
>> # time dd of=<raw disk> if=/dev/zero bs=1048576 count=1000
>> For a 2Gbps 6140 with 300GB/10K drives, we get ~46MB/s on a
>> single-drive RAID-0 array, ~83MB/s on a 4-disk RAID-0 array w/128k
>> stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.
>> 
>> -- 
>> albert chin ([EMAIL PROTECTED])

GK> Well the Solaris kernel is telling me that it doesn't understand
GK> zfs_nocacheflush, but the array sure is acting like it!
GK> I ran the dd example, but increased the count for a longer running time.

GK> 5-disk RAID5 with UFS: ~79 MB/s
GK> 5-disk RAID5 with ZFS: ~470 MB/s

GK> I'm assuming there's some caching going on with ZFS that's really helping out?

How did you measure the performance?

When you set up RAID-5 on the array and then put ZFS on top of it,
it's possible to get much better performance for some workloads than
with UFS. ZFS 'converts' most write I/Os into sequential writes and
bundles the I/Os from roughly the last 5 seconds into one transaction
group, so the array ends up doing mostly full-stripe writes.
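
If you want to watch that bundling happen, run the dd again and, in
another terminal, try something like the following (the pool name is
just an example - use your own) - you should see the writes hit the
array in bursts every few seconds:

# zpool iostat tank 1     (replace 'tank' with your pool name)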

However, in your case, with just a single dd command, that shouldn't
be the explanation - I suspect you have plenty of RAM and are mostly
measuring how quickly dd hands the data to the cache, not how fast it
actually reaches the disks.
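
To get a number that isn't dominated by caching, either write a file
several times larger than RAM or make sure the data is on disk before
the clock stops - something along these lines (the path and count are
just placeholders):

# ptime sh -c 'dd if=/dev/zero of=/tank/fs/bigfile bs=1048576 count=20000; sync'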

It's better to use iostat to see what your actual throughput is in
both cases.
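
For example, leave dd running and in another window do something like:

# iostat -xnz 5

The kr/s and kw/s columns will show what the disks are actually doing.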

-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
