Hello David,

Friday, June 2, 2006, 4:03:45 AM, you wrote:

DJO> ----- Original Message -----
DJO> From: Robert Milkowski <[EMAIL PROTECTED]>
DJO> Date: Thursday, June 1, 2006 1:17 pm
DJO> Subject: Re[2]: [zfs-discuss] question about ZFS performance for webserving/java

>> Hello David,
>> 
>> The system itself won't take too much space.
>> You can create one large slice from the rest of the first disk and the
>> same slices on the rest of the disks. Then you can create one large
>> pool from 8 such slices. Remaining space on the rest of the disks
>> could be used for swap, for example, or for another smaller pool.
DJO> Ok, sorry I'm not up to speed on Solaris/software raid types. So
DJO> you're saying create a couple slices on each disk. One set of
DJO> slices I'll use to make a raid of some sort (maybe 10) and use
DJO> UFS on that (for the initial install - can this be done on
DJO> installation??), and then use the rest of the slices on the disks
DJO> to do the zfs/raid for everything else?
>> 

Exactly. And you can do it during installation.
I do it with JumpStart on a server with 6 disks: on the first two disks
the system is installed on a mirror (SVM+UFS, configured automatically
by the JumpStart profile), plus one additional slice covering the rest
of each disk. Then I create a zpool on these slices.
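
For reference, the relevant part of such a JumpStart profile could look
roughly like this (sizes, metadevice names, and replica counts are just
examples, adjust them to your disks):

   install_type    initial_install
   partitioning    explicit
   filesys         mirror:d10 c0t0d0s1 c0t1d0s1 8192 /
   filesys         mirror:d30 c0t0d0s3 c0t1d0s3 8192 /var
   filesys         mirror:d40 c0t0d0s4 c0t1d0s4 8192 /opt
   metadb          c0t0d0s6 size 8192 count 3
   metadb          c0t1d0s6 size 8192 count 3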

So it can look like:

   c0t0d0s1    c0t1d0s1      SVM mirror, UFS      /
   c0t0d0s3    c0t1d0s3      SVM mirror, UFS      /var
   c0t0d0s4    c0t1d0s4      SVM mirror, UFS      /opt
   c0t0d0s6    c0t1d0s6      SVM metadb

   c0t2d0s1    c0t3d0s1      SVM mirror, swap     s1 size = sizeof(/ + /var + /opt)

   zpool create local mirror c0t0d0s0 c0t1d0s0 \
                      mirror c0t2d0s0 c0t3d0s0 \
                      mirror c0t4d0s0 c0t5d0s0

   or any other raid supported by ZFS.
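
   For example, if you prefer RAID-Z over mirrors (more usable capacity
   at the cost of random-read performance), roughly the equivalent
   command would be:

      zpool create local raidz c0t0d0s0 c0t1d0s0 c0t2d0s0 \
                               c0t3d0s0 c0t4d0s0 c0t5d0s0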

   I intentionally put swap on different disks than / so disk space
   isn't wasted and we have as much space for s0 as possible.

   
   I use the s0 slice here to emphasize that it would be a good idea to
   have the s0 slice start at cylinder 0 (or 1), i.e. at the beginning
   of the disks, as we can expect that the ZFS pool will get most of
   the I/Os, not /, /var, or /opt. In that config I would probably skip
   creating a separate /opt altogether.

   Then on such a pool I would put zones (each in its own ZFS
   filesystem under a common hierarchy like local/zones) and other
   filesystems. I would also consider setting atime=off for most of
   them.
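
   A rough sketch of that layout (the zone names here are just
   examples):

      zfs create local/zones
      zfs set atime=off local/zones
      zfs create local/zones/web1
      zfs create local/zones/db1

   The child filesystems inherit atime=off from local/zones, and each
   zone can later get its own quota or reservation.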

   




-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
