Well, the Sun-supported way is to separate /var from the common root.
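
If memory serves, with a ZFS root you can ask the installer (or a JumpStart 
profile) for that separate /var dataset with something like the lines below - 
syntax quoted from memory and the pool/BE names are just placeholders, so 
double-check the install docs:

  pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
  bootenv installbe bename myBE dataset /var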

In our systems we use a more fine-grained hierarchy: /usr, /opt and /var are 
separate datasets in each BE, with sub-separated /var/adm, /var/log, 
/var/cores, /var/crash and /var/mail shared between boot environments. This 
requires quite a few tricks to set up and is often a pain to maintain with 
e.g. LiveUpgrade (often the system doesn't come up on the first reboot after 
an update because something got mixed up - luckily these boxes have remote 
consoles). However, this also allows setting quotas on specific datasets, e.g. 
so that core dumps don't eat up the whole root FS.
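
For illustration, the shared part can be set up roughly like this (the 
pool/dataset names and quota sizes are only examples, not our exact layout):

  zfs create -o mountpoint=none rpool/SHARED
  zfs create -o mountpoint=/var/cores -o quota=4g rpool/SHARED/var-cores
  zfs create -o mountpoint=/var/crash -o quota=8g rpool/SHARED/var-crash
  zfs create -o mountpoint=/var/log   -o quota=2g rpool/SHARED/var-log

coreadm(1M) and dumpadm(1M) can then be pointed at /var/cores and /var/crash 
respectively.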

In your case it might make sense to separate the application software's paths 
(e.g. /opt/programname/ and /var/opt/programname/) into individual datasets 
with quotas and migrate the data via cpio...
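
A rough sketch of such a migration (the dataset name, quota and paths are 
hypothetical; do this with the application stopped):

  zfs create -o quota=10g -o mountpoint=/mnt/newopt datapool/opt-programname
  cd /opt/programname && find . -depth -print | cpio -pdm /mnt/newopt
  mv /opt/programname /opt/programname.old
  zfs set mountpoint=/opt/programname datapool/opt-programname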

LU does not fetch system information straight from the system itself; it 
consults its own configuration (with copies in each boot environment). See 
/etc/lutab and the /etc/lu/ICF.* files (and the rest of /etc/lu/*), but beware 
that editing these by hand is not supported by Sun. Not that it doesn't work 
or help in some cases ;)
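
To see what LU thinks it knows (read-only inspection, no unsupported editing):

  lustatus            # boot environments as LU sees them
  cat /etc/lutab      # LU's own registry of BEs
  ls /etc/lu/ICF.*    # one ICF file per BE, listing its filesystems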

A common approach is to have a separate root pool (a slice, or better a mirror 
of two slices); depending on your base OS installation footprint, anywhere 
from 4GB (no graphics) to 20GB (enough for several BE revisions) would do. The 
remainder of the disk is given to a separate data pool, where our local zone 
roots live, as well as distros, backup data, etc. Thus there is very little 
third-party software installed in the global zones, which act more like 
hypervisors to many local zones running the actual applications and server 
software.
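
A minimal sketch of that layout (device and dataset names are illustrative, 
and the root pool is normally created by the installer anyway):

  zpool create rpool    mirror c0t0d0s0 c0t1d0s0
  zpool create datapool mirror c0t0d0s7 c0t1d0s7
  zfs create -o mountpoint=/zones datapool/zones    # local zone roots
  zfs create datapool/distros
  zfs create datapool/backup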

You may or may not want to keep the swap and dump volumes in the root pool - 
if you do, its size requirement grows with your RAM size. For example, on 
boxes with 4 disks we can use one 2*20GB mirror as the root pool and another 
2*20GB mirror as a pool for swap and dump, leaving equal amounts of disk for 
the separate data pool.
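
Moving swap and dump out of the root pool could look roughly like this (the 
pool name, devices and volume sizes are placeholders - scale them to the 
box's RAM):

  zpool create swapdumppool mirror c0t2d0s0 c0t3d0s0
  zfs create -V 8g swapdumppool/swap
  zfs create -V 8g swapdumppool/dump
  swap -a /dev/zvol/dsk/swapdumppool/swap
  dumpadm -d /dev/zvol/dsk/swapdumppool/dump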

With 4 slices for the data pool you can choose between RAID10, RAIDZ1 and 
RAIDZ2 - with different processing overhead/performance, different redundancy 
levels and different available space. For demo boxes with no critical 
performance requirements we use RAIDZ2, as it protects against the failure of 
any two disks, while RAID10 only protects against the failure of two specific 
disks (one from each mirror); we use RAIDZ1 when we need more space available.
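
For reference, the three layouts over four slices (device names are just 
examples):

  zpool create datapool mirror c0t0d0s7 c0t1d0s7 mirror c0t2d0s7 c0t3d0s7  # RAID10
  zpool create datapool raidz1 c0t0d0s7 c0t1d0s7 c0t2d0s7 c0t3d0s7         # RAIDZ1
  zpool create datapool raidz2 c0t0d0s7 c0t1d0s7 c0t2d0s7 c0t3d0s7         # RAIDZ2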

HTH,
//Jim Klimov