On Wed, 26 Nov 2008, Paul Sobey wrote:

> Hello,
>
> We have a new Thor here with 24TB of disk (the first of many, hopefully).
> We are trying to determine best practices for file system management and
> sizing. Previously, we kept each file system to a maximum size of 500GB to
> make sure it would fit on a single tape, and to minimise restore times and
> impact should we experience some kind of volume corruption. With zfs, we
> are re-evaluating our working practices.
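>
> (Purely as a sketch of how the same cap could be carried over: zfs lets
> you set a quota per dataset, so something like the following - pool and
> dataset names invented for illustration - would enforce the old 500GB
> limit:
>
>     # assuming a pool called "tank" (hypothetical name)
>     zfs create tank/data01
>     zfs set quota=500G tank/data01
> )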
>
> A few questions, then. Apologies if these have been asked recently; I went
> back through a month's worth of posts and couldn't see anything. I've also
> read the best practices guide here:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_Administration_Considerations
>
> Pointers to additional info are most welcome!
>
> 1. Do these kinds of self-imposed limitations make any sense in a zfs
> world?
>
> 2. What is the 'logical corruption boundary' for a zfs system - the
> filesystem or the zpool?
>
> 3. Are there scenarios, apart from latency-sensitive applications (e.g.
> Oracle logs), that warrant separate zpools?
>
> One of our first uses for our shiny new server is to hold about 5TB of
> data which logically belongs on one share/mount, but which has
> historically been partitioned into 500GB pieces. The owner of the data is
> keen to see it available in one place, and we (as the infrastructure team)
> are debating whether that is a sensible thing to allow.
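>
> (If we did allow it, a single dataset capped at 5TB and shared out as one
> mount might look roughly like the sketch below; again, the pool and
> dataset names are invented:
>
>     # assuming a pool called "tank" (hypothetical name)
>     zfs create tank/projectdata
>     zfs set quota=5T tank/projectdata
>     zfs set sharenfs=on tank/projectdata
> )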
>
> Thanks for any advice/wisdom you may have...

Apologies to all - I sent this to opensolaris.com and then bounced it from 
Pine to the correct address, hoping the headers would sort themselves out. 
Apparently not. Replying to this one should work...

Paul
