Lori Alt wrote:

> In designing the changes to the install software, we had to
> decide whether to be all things to all people or make some
> default choices.  Being all things to all people makes the
> interface a lot more complicated and takes a lot more
> engineering effort (we'd still be developing it and zfs boot
> would not be available if we'd taken that path).  We
> erred on the "make default choices" side (although with
> some opportunities for customization), and leaned toward
> the "move the system toward zfs" side in our choices for
> those defaults.  We leaned a little too far in that direction in
> our selection of default choices for swap/dump space in
> the interactive install and so we're fixing that.

Great, thanks :-)  Is this just a case of letting people use the -m flag 
with the ZFS filesystem type, and allowing the dump device to be 
specified with -m, or is there more to it than that?
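Just guessing at what that might look like - the swap form of -m already 
exists for slice-based BEs, so presumably something along these lines, 
where the zvol forms are pure speculation on my part:

  # existing syntax, swap on a slice:
  lucreate -n zfsBE -p rpool -m -:/dev/dsk/c0t0d0s1:swap

  # speculative syntax for zvol-backed swap and dump:
  lucreate -n zfsBE -p rpool -m -:rpool/swap:swap -m -:rpool/dump:dump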

I also notice that there doesn't appear to be any way to specify the 
size of the swap & dump areas when migrating - I thought I saw somewhere 
in the documentation that swap is sized to 1/2 physmem.  That might be 
problematic on machines with large amounts of memory.
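If the default sizing does bite, I assume it can at least be adjusted 
after the fact by resizing the zvol - something like this, assuming the 
swap zvol is rpool/swap:

  swap -d /dev/zvol/dsk/rpool/swap    # stop swapping on the zvol
  zfs set volsize=4G rpool/swap       # pick a more sensible size
  swap -a /dev/zvol/dsk/rpool/swap    # and start using it again

Being able to say so at install/lucreate time would still be nicer though.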

> In this case, LU does move the system toward using
> swap and dump zvols within the root pool.  If you really
> don't want that, you can still use your existing swap and
> dump slice and delete the swap/dump zvol.   I know it's
> not ideal because it requires some manual steps, and maybe
> you'll have to repeat those manual actions with subsequent
> lucreates (or maybe not, I'm actually not sure how that works).

Seems to work, at least if you create the new BE in the same pool as the 
source BE.  I can't get it to work if I try to use a different pool though.
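For reference, the manual steps look roughly like this (the slice name 
is just a placeholder for whatever your existing swap/dump slice is):

  swap -d /dev/zvol/dsk/rpool/swap    # stop using the swap zvol
  swap -a /dev/dsk/c0t0d0s1           # swap on the slice again
  dumpadm -d swap                     # dump to the swap slice, the old default
  zfs destroy rpool/swap
  zfs destroy rpool/dump

plus an edit to /etc/vfstab so the slice is picked up again at boot.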

> But is there any really good reason NOT to move to the
> use of swap/dump zvols?  If your existing swap/dump slice
> is contiguous with your root pool, you can grow the root
> pool into that space (using format to merge the slices.
> A reboot or re-import of the pool will cause it to grow into
> the newly-available space).

For some reason, when I initially installed I ended up with this:

Part      Tag    Flag     Cylinders         Size            Blocks
   0       root    wm    1046 -  2351       10.00GB    (1306/0/0)   20980890
   1       swap    wu       1 -  1045        8.01GB    (1045/0/0)   16787925

So swap physically comes before root on the disk, which means I can't do 
the trick you suggested.  I also have a second root slice just after the 
first one, for my other current LU environment.  Really I want to 
collapse those 3 slices into 1.  What I'm planning to do is evacuate 
everything else off my first disk onto a USB disk, then re-lay out the 
disk.  Everything else on the machine bar the boot slices is already 
ZFS, so I can create a ZFS BE in the pool on my 2nd disk, boot into 
that, then re-lay out the first disk.
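In command form the plan is roughly this ('tank' standing in for the 
pool on my 2nd disk):

  lucreate -n zfsbe -p tank    # new ZFS BE in the pool on the 2nd disk
  luactivate zfsbe
  init 6                       # reboot into it
  format                       # then re-lay out the first disk at leisure

and then migrate back once the first disk has a sane layout.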

> Keep these comments coming!  We've tried to make the
> best choices, balancing all the many considerations, but
> as in the case of swap, I'm sure we made some choices
> that were wrong or at least non-optimal and we want to
> continue to refine how zfs works as a root file system.

I'm really liking what I see so far; it's just a question of getting my 
head around the best way of setting things up and figuring out the 
easiest way to migrate.

-- 
Alan Burlison