Richard Elling wrote:
Hi Jan, comments below...

jan damborsky wrote:
Hi folks,

I am a member of the Solaris Install team, and I am currently working
on making the Slim installer compliant with the ZFS boot design specification:

http://opensolaris.org/os/community/arc/caselog/2006/370/commitment-materials/spec-txt/

After the ZFS boot project was integrated into Nevada and support
for installation on a ZFS root was delivered in the legacy installer,
some differences arose between how the Slim installer implements
ZFS root and how it is done in the legacy installer.

One thing we need to change in the Slim installer is to create
swap & dump on ZFS volumes instead of using a UFS slice for this,
as defined in the design spec and implemented in the SXCE installer.

When reading through the specification and looking at the SXCE
installer source code, I realized that some points are not quite
clear to me.

Could I please ask you to help me clarify them, so that I can
implement these features the right way?

Thank you very much,
Jan


[i] Formula for calculating dump & swap size
--------------------------------------------

I have gone through the specification and found that the
following formulas should be used for calculating the default
sizes of swap & dump during installation:

o size of dump: 1/4 of physical memory

This is a non-starter for systems with 1-4 TBytes of physical
memory.  There must be a reasonable maximum cap, most
likely based on the size of the pool, given that we regularly
boot large systems from modest-sized disks.
Actually, starting with build 90, the legacy installer sets the default size of the
swap and dump zvols to half the size of physical memory, but no more
than 32 GB and no less than 512 MB.  Those are just the defaults.
Administrators can use the zfs command to modify the volsize
property of both the swap and dump zvols (to any value, including
values larger than 32 GB).
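
For example, to grow the dump zvol after installation (the 64 GB size
here is purely illustrative; note that an in-use swap zvol must first
be removed with swap -d before its volsize can be changed, then
re-added):

# /usr/sbin/zfs set volsize=64g rpool/dump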



o size of swap: max of (512MiB, 1% of rpool size)

However, looking at the source code, the SXCE installer
calculates default sizes using a slightly different
algorithm:

size_of_swap = size_of_dump = MAX(512 MiB, MIN(physical_memory/2, 32 GiB))
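
(For example, on a machine with 8 GiB of physical memory this works out
to MAX(512 MiB, MIN(4 GiB, 32 GiB)) = 4 GiB for each zvol; a machine
with 128 GiB of memory hits the 32 GiB cap, and one with 768 MiB of
memory gets the 512 MiB floor.)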

Is there a preference for which one should be used, or is
there any other possibility we might take into account?

zero would make me happy :-)  But there are some cases where swap
space is preferred.  Again, there needs to be a reasonable cap.  In
general, the larger the system, the less use for swap during normal
operations, so for most cases there is no need for really large swap
volumes.  These can also be adjusted later, so the default can be
modest.  One day perhaps it will be fully self-adjusting like it is
with other UNIX[-like] implementations.

[ii] Procedure of creating dump & swap
--------------------------------------

Looking at the SXCE source code, I have discovered that the following
commands should be used for creating swap & dump:

o swap
# /usr/sbin/zfs create -b $(/usr/bin/pagesize) -V <size_in_mb>m rpool/swap
# /usr/sbin/swap -a /dev/zvol/dsk/rpool/swap

o dump
# /usr/sbin/zfs create -b 128k -V <size_in_mb>m rpool/dump
# /usr/sbin/dumpadm -d /dev/zvol/dsk/rpool/dump

The above commands for creating the swap and dump zvols match
what the legacy installer does, as of build 90.
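
Once both zvols are in place, the configuration can be sanity-checked
with the usual tools (a suggested verification step, not something the
installer itself runs):

# /usr/sbin/swap -l
# /usr/sbin/dumpadm
# /usr/sbin/zfs list -t volume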

Could you please let me know if my observations are correct,
or if I should use a different approach?

As far as setting the volume block size is concerned (the -b option),
how are those numbers determined?  Will they be the same in
different scenarios, or are there plans to tune them in some way
in the future?
There are no plans to tune this.  The block sizes are appropriate
for the way the zvols are to be used.
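
For reference, the chosen block size can be read back from the
volblocksize property, which is fixed once the zvol is created:

# /usr/sbin/zfs get volblocksize rpool/swap rpool/dump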


Setting the swap blocksize to pagesize is interesting, but should be
ok for most cases.  The reason I say it is interesting is because it
is optimized for small systems, but not for larger systems which
typically see more use of large page sizes.  OTOH larger systems
should not swap, so it is probably a non-issue for them.  Small
systems should see this as the best solution.

Dump just sets the blocksize to the default, so it is a no-op.
 -- richard

[iii] Is there anything else I should be aware of ?
---------------------------------------------------

Installation should *not* fail due to running out of space because
of large dump or swap allocations.  I think the algorithm should
first take into account the space available in the pool after accounting
for the OS.
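
A minimal sketch of such a capped calculation (hypothetical shell, not
installer code; the 10% ceiling on pool free space is an arbitrary
illustration of the idea, not a recommended policy):

#!/bin/ksh
# Hypothetical sizing sketch -- not the actual installer algorithm.
physmb=$(/usr/sbin/prtconf | /usr/bin/awk '/^Memory size/ {print $3}')
availmb=$(( $(/usr/sbin/zfs get -Hp -o value available rpool) / 1024 / 1024 ))
size=$(( physmb / 2 ))                      # start at half of physical memory
[ $size -gt 32768 ] && size=32768           # cap at 32 GB
[ $size -lt 512 ] && size=512               # floor at 512 MB
cap=$(( availmb / 10 ))                     # never take more than 10% of free space
[ $size -gt $cap ] && size=$cap
/usr/sbin/zfs create -b $(/usr/bin/pagesize) -V ${size}m rpool/swap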


The Caiman team can make their own decision here, but we
decided to be more hard-nosed about disk space requirements in the
legacy install.  If the pool is too small to accommodate the recommended
swap and dump zvols, then maybe this system isn't a good candidate for
a zfs root pool.  Basically, we decided that since you almost
can't buy disks smaller than 60 GB these days, it's not worth much
effort to facilitate the setup of zfs root pools on disks that are smaller
than that.  If you really need to do so, Jumpstart can be used to
set the dump and swap sizes to whatever you like, at the time
of initial install.
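
For example, a profile along these lines (a sketch; the device name and
1 GB sizes are illustrative) fixes both sizes via the swapsize and
dumpsize fields of the pool keyword:

install_type initial_install
pool rpool auto 1g 1g c0t0d0s0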

Lori
