JS writes:
-when using highly available SAN storage, export the disks as LUNs and
use zfs to do your redundancy - using array redundancy (say, 5 mirrors
that you will zpool together as a stripe) will cause the machine to crap
out and die if any of those mirrored devices, say, gets too much io and

The big problem is that if you don't do your redundancy in the zpool, then the
loss of a single device flatlines the system. This occurs in single-device
pools, stripes, or concats. Sun support has said in support calls and SunSolve
docs that this is by design, but I've never seen the loss of
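
To make the distinction concrete, here's a sketch (the pool name and the
cXtYdZ device names are made up):

  # redundancy in the zpool: ZFS mirrors the SAN LUNs itself, so the pool
  # survives (and can self-heal after) the loss of one side of a mirror
  zpool create orapool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

  # redundancy only in the array: each LUN is an array-side mirror, but to
  # ZFS this is a plain stripe - if any one LUN drops or times out, the
  # whole pool goes down, and likely the host with it
  zpool create orapool c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0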
JS wrote:
General Oracle zpool/zfs tuning, from my tests with Oracle 9i, the APS
Memory Based Planner, and filebench. All tests were done on Solaris 10
Update 2 and Update 3:
-use zpools with 8k blocksize for data (this matches Oracle's default 8k
db_block_size; example commands below)

definitely!

-don't use zfs for redo logs - use ufs with directio and noatime. Building
redo
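
Concretely, that pair of tips comes out to something like this (a sketch;
the pool, dataset, mount point, and device names are made up, and I'm
reading "8k blocksize" as the ZFS recordsize property):

  # data files: ZFS dataset whose recordsize matches Oracle's 8k blocks
  zpool create orapool mirror c2t0d0 c2t1d0
  zfs create -o recordsize=8k orapool/oradata

  # redo logs: a UFS filesystem mounted with directio and noatime
  newfs /dev/rdsk/c2t5d0s0
  mount -o forcedirectio,noatime /dev/dsk/c2t5d0s0 /oralog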
I'm sorry dude, I can't make head or tail of your post. What is your point?