> I got a reply from another person about how to lay
> out my disc:
> 
> Instead, I would create two 10 GB slices: one for /
> and one for an alternate / (to multiboot a different
> Solaris release), and use all remaining space (~73 GB?)
> as a ZFS pool with these filesystems inside:
> - /export/home (don't use /home which is
> automounted)
> - /opt
> - /opt (alternate)
> - /export
> - /usr/local

That would also work, and comes close to using the disk space efficiently, 
although not as efficiently as the "whole disk root" approach.

Consider the above scheme carefully, for it has both strengths and weaknesses:

weaknesses:

- you will basically end up with a fixed-size root, which will be either 
undersized or oversized (inefficient)

- since you have one physical disk, UFS and ZFS will have to work on slices 
rather than whole physical devices; this is "business as usual" for UFS, but 
ZFS works best when you give it the whole disk (as stated in the 
documentation); unfortunately Solaris isn't *readily* bootable from ZFS yet, 
or we wouldn't even be having this discussion

strengths:

- ZFS protects data and makes sure it arrives unaltered from disk to the OS

- separate filesystems may be created and their sizes controlled flexibly by 
setting a quota (see the sketch after this list)

- you can create and use separate filesystems which share the disk space 
optimally (each takes only as much as it needs) without their sizes being 
explicitly fixed (flexibility again)
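
For example, a quota is just a property on the filesystem, and it can be set, 
raised, or removed at any time. A minimal sketch, with the pool name "tank" 
used purely as a placeholder:

  # create a filesystem for home directories inside the pool
  zfs create tank/export
  zfs create tank/export/home

  # cap it at 10 GB for now; changing your mind later is one command
  zfs set quota=10g tank/export/home
  zfs set quota=20g tank/export/home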

Once installation and booting from ZFS become the norm you'll have only two 
slices, or even just one: s2, which is the whole partition or the whole disk, 
depending on how you partitioned it. Creating a file system will be like 
creating a directory. All filesystems will inherently use the space 
efficiently, and this will no longer be an issue.

> It seems difficult to do?

ZFS is easy. It only has two commands, `zpool` and `zfs`. Step-by-step 
documentation, as always, is on http://docs.sun.com/
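
For the layout quoted above it boils down to a handful of commands. A rough 
sketch, assuming the leftover ~73 GB ends up in slice 7 of the disk (the 
device name c0d0s7 and the pool name "tank" are placeholders, substitute your 
own):

  # build the pool on the leftover slice (a whole disk is preferred,
  # but a slice is what you get when the UFS root shares the same disk)
  zpool create tank c0d0s7

  # carve out filesystems; they all draw from the pool's free space
  zfs create tank/export
  zfs set mountpoint=/export tank/export
  zfs create tank/export/home

  zfs create tank/local
  zfs set mountpoint=/usr/local tank/local   # mountpoint must be empty

  # see what you have
  zpool list
  zfs list

There is no newfs and no /etc/vfstab editing involved; the filesystems are 
mounted automatically as soon as they are created.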


> Before, I would
> install all the programs I would use most first,
> because they would be close to the File Allocation
> Table, and stuff.

It doesn't matter how "far" or "near" data is on a FS that uses file allocation 
tables; what matters is that data is written sequentially and not strewn all 
over the disk.

> But now it seems not really
> necessary with all that stuff.

Not on UNIX. Everything is designed to give maximum performance, and that 
includes filesystems. Just look at writes: I/O is buffered in the FS cache, 
reordered in the UFS log, and only then flushed out to disk, all to minimize 
the time the system spends waiting. The whole stack is built that way.

Remember, UNIX is an OS designed for serving the masses, and that means 
moving large amounts of data as fast as possible, "pedal-to-the-metal" style.
 
 