On 6/26/07, Roshan Perera <[EMAIL PROTECTED]> wrote:
> 25K, 12 CPU dual core x 1800MHz, with ZFS on 8TB of SAN storage (compressed & RaidZ), Solaris 10.
RAID-Z is a poor choice for database apps, in my opinion.  Because of
the way it handles checksums on RAID-Z stripes, it must read every
disk in the stripe to satisfy small reads that traditional RAID-5
could serve from a single disk.  RAID-Z doesn't have the terrible
write performance of RAID-5, because it can gather small writes
together and do full-stripe writes, but by the same token it must do
full-stripe reads, all the time.  That's how I understand it, anyway.
Thus, RAID-Z is a poor choice for a database application, which tends
to do a lot of small reads.
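As a rough back-of-the-envelope illustration (made-up numbers, not
your actual layout): a single 6-disk RAID-Z vdev delivers roughly the
small-random-read IOPS of one disk, because every block read touches
all the data disks, whereas three 2-way mirrors built from the same
six disks can service several independent small reads in parallel.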

Using mirrors (at the ZFS level, not the SAN level) would probably
help with this.  Each side of a mirror holds its own complete copy of
the data, each with its own checksum, so a small block can be read by
touching only one disk.

What is your vdev setup like right now?  'zpool list', in other words.
How wide are your stripes?  Is the SAN doing RAID-1-ish things with
the disks, or something else?
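For reference, a minimal sketch of the commands that would show this,
assuming a pool named 'dbpool':

    zpool status dbpool    # prints the vdev tree: raidz/mirror groups and member devices
    zpool list dbpool      # pool-level size/used/available summary

'zpool status' is the one that actually shows how wide the stripes are
and whether the vdevs are raidz or mirrors.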

> 2. Unfortunately we are using RAID twice (SAN-level RAID and RaidZ) to overcome
> the panic problem described in my previous blog (for which I had a good response).
Can you convince the customer to give ZFS a chance to do things its
way?  Let the SAN export raw disks, and make two- or three-way
mirrored vdevs out of them.
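A minimal sketch of what that could look like; the pool name and the
c#t#d# device names are made up, standing in for whatever LUNs the SAN
exports:

    # three two-way mirrored vdevs, striped together by the pool
    zpool create dbpool mirror c2t0d0 c2t1d0 \
                        mirror c2t2d0 c2t3d0 \
                        mirror c2t4d0 c2t5d0

Small random reads can then be satisfied from either side of any one
mirror, instead of dragging in a whole raidz stripe.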

> 3. Any way of monitoring ZFS performance other than iostat?
In a word, yes.  What are you interested in?  DTrace or 'zpool iostat'
(which, with -v, reports the activity of individual disks within the
pool) may prove interesting.
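For example, something like this (pool name made up) prints per-vdev
and per-disk activity every five seconds:

    zpool iostat -v dbpool 5

Plain 'iostat -xn 5' on the underlying devices, or DTrace (e.g. the io
provider), can fill in whatever that doesn't cover.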

Will