Alan Burlison wrote:

Lori Alt wrote:

In designing the changes to the install software, we had to
decide whether to be all things to all people or make some
default choices.  Being all things to all people makes the
interface a lot more complicated and takes a lot more
engineering effort (we'd still be developing it and zfs boot
would not be available if we'd taken that path).  We
erred on the "make default choices" side (although with
some opportunities for customization), and leaned toward
the "move the system toward zfs" side in our choices for
those defaults.  We leaned a little too far in that direction in
our selection of default choices for swap/dump space in
the interactive install and so we're fixing that.

Great, thanks :-) Is this just a case of letting people use the -m flag with the ZFS filesystem type, and allowing the dump device to be specified with -m, or is there more to it than that?

The changes to which I was referring are to the interactive initial install program,
not LU.

I'm checking into how swap and dump zvols are sized in LU.

The two changes we made to the interactive initial install are:

1) Change the default size of the swap and dump zvols to 1/2 of
  physmem, but no more than 2 GB and no less than 512 MB.
  Previously, we were allowing swap and dump to go as high
  as 32 GB, which is just way too much for some systems.  Not
  only were we setting the size too high, we didn't give the user
  the opportunity to change it, which led to the second change:
2) Allow the user to change the swap and dump size to anything
  from 0 (i.e. no swap or dump zvol at all) up to the maximum size
  that will fit in the pool.  We don't recommend the smaller sizes,
  especially for dump (a system that can't take a crash dump is
  a system with a serviceability problem), but we won't prevent
  users from setting it up that way.  (A sketch of adjusting these
  sizes after install follows below.)
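
For anyone who wants to tune these sizes on an already-installed
system, here's a minimal sketch.  It assumes the rpool/swap and
rpool/dump dataset names the installer conventionally creates;
substitute your own pool name and sizes:

   # The swap zvol must be taken out of service before resizing:
   swap -d /dev/zvol/dsk/rpool/swap
   zfs set volsize=2g rpool/swap
   swap -a /dev/zvol/dsk/rpool/swap

   # The dump zvol can be resized in place and re-registered:
   zfs set volsize=2g rpool/dump
   dumpadm -d /dev/zvol/dsk/rpool/dump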


I also notice that there doesn't appear to be any way to specify the size of the swap & dump areas when migrating - I thought I saw somewhere in the documentation that swap is sized to 1/2 physmem. That might be problematic on machines with large amounts of memory.

In this case, LU does move the system toward using
swap and dump zvols within the root pool.  If you really
don't want that, you can still use your existing swap and
dump slice and delete the swap/dump zvol.   I know it's
not ideal because it requires some manual steps, and maybe
you'll have to repeat those manual actions with subsequent
lucreates (or maybe not, I'm actually not sure how that works).
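
For the record, those manual steps would look roughly like this
(a sketch only; c0t0d0s1 stands in for your existing swap/dump
slice, and rpool/swap and rpool/dump are the conventional zvol
names):

   # Point swap and dump back at the existing slice:
   swap -a /dev/dsk/c0t0d0s1
   dumpadm -d /dev/dsk/c0t0d0s1

   # Then retire and destroy the zvols:
   swap -d /dev/zvol/dsk/rpool/swap
   zfs destroy rpool/swap
   zfs destroy rpool/dump

You'd also want to restore the swap entry in /etc/vfstab so the
slice is picked up again at the next boot.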

Seems to work, at least if you create the new BE in the same pool as the source BE. I can't get it to work if I try to use a different pool though.

But is there any really good reason NOT to move to the
use of swap/dump zvols?  If your existing swap/dump slice
is contiguous with your root pool, you can grow the root
pool into that space: use format to merge the slices, and
a reboot or re-import of the pool will cause it to grow into
the newly available space.
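
The slice merge itself is done interactively in format, so there's
no command to show for that part, but the before/after check is
simple (rpool here is just a placeholder pool name):

   # Note the pool's size before the merge:
   zpool list rpool

   # Merge the freed slice into the pool's slice with format(1M),
   # then reboot so the pool is re-imported:
   init 6

   # After the reboot, the pool should show the larger size:
   zpool list rpool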

For some reason, when I initially installed I ended up with this:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm    1046 -  2351       10.00GB    (1306/0/0)   20980890
  1       swap    wu       1 -  1045        8.01GB    (1045/0/0)   16787925

So physically swap comes before root, and I can't do the trick you
suggested.  I also have a second root slice just after the first one,
for my second current LU environment.  Really I want to collapse those
three slices into one.  What I'm planning to do is evacuate everything
else off my first disk onto a USB disk, then re-lay out the disk.
Everything else on the machine bar the boot slices is already ZFS, so I
can create a ZFS BE in the pool on my 2nd disk, boot into that, then
re-lay out the first disk.

What if you turned slice 1 into a pool (a new one), migrated your
BE into it, then grew that pool to soak up the space in the slices
that follow it?  You might still need to save some stuff elsewhere
while you're doing the transition.
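
Roughly, with hypothetical pool and BE names (and assuming s1 is
released from swap use first):

   # Turn the old swap slice into a new root pool:
   swap -d /dev/dsk/c0t0d0s1
   zpool create newpool c0t0d0s1

   # Migrate the current BE into it and make it bootable:
   lucreate -n zfsBE -p newpool
   luactivate zfsBE
   init 6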

Just a suggestion.  It sounds like you're working out a plan.

Lori