On Fri, May 25, 2007 at 09:54:04AM -0700, Grant Kelly wrote:
> > It would also be worthwhile doing something like the following to
> > determine the max throughput the H/W RAID is giving you:
> > # time dd of=<raw disk> if=/dev/zero bs=1048576 count=1000
> > For a 2Gbps 6140 with 300GB/10K drives, we get ~46MB/s on a
> > single-drive RAID-0 array, ~83MB/s on a 4-disk RAID-0 array w/128k
> > stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.
> 
> Well, the Solaris kernel is telling me that it doesn't understand
> zfs_nocacheflush, but the array sure is acting like it!
> I ran the dd example, but increased the count for a longer running time.

I don't think a longer running time is going to give you a more
accurate measurement.

> 5-disk RAID5 with UFS: ~79 MB/s

What about against a raw RAID-5 device?
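That is, the same dd as before, just pointed at the character device for
the RAID-5 LUN (destructive, so only run it against a LUN whose contents
you don't care about):

# time dd of=<raw RAID-5 device> if=/dev/zero bs=1048576 count=1000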

> 5-disk RAID5 with ZFS: ~470 MB/s

I don't think you want to use if=/dev/zero on ZFS. There's probably some
optimization going on. Better to use /dev/urandom, or concatenate several
files of random bits.
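As a rough sketch (the 1GB size and the /testpool/fs path below are just
placeholders): generate a file of random bits once, since /dev/urandom
itself is slow and shouldn't be part of the timed run, then time writing
that file to the ZFS filesystem under test:

# dd if=/dev/urandom of=/var/tmp/rand.1g bs=8192 count=131072
# time dd if=/var/tmp/rand.1g of=/testpool/fs/bigfile bs=1048576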

> I'm assuming there's some caching going on with ZFS that's really
> helping out?

Yes.
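ZFS buffers async writes in memory and flushes them out in transaction
groups, so dd can report a much higher rate than the array is actually
sustaining. If you want to see what's really hitting the disks while dd
runs, watch the pool with something like (pool name is a placeholder):

# zpool iostat testpool 5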

> Also, no SANtricity, just Sun's Common Array Manager. Is it possible
> to use both without completely confusing the array?

I think both are ok. CAM is free. Dunno about SANtricity.

-- 
albert chin ([EMAIL PROTECTED])