Ethan,
Ethan Quach wrote:
> William,
>
> A couple of comments below ....
>
>
> William Schumann wrote:
>> The Automated Solaris Installer (AI) client should support selection
>> and formatting of more than one disk. Additional considerations
>> include mirroring and additional zfs pools.
>>
>> This document attempts to lay out an approach to the task.
>> OpenSolaris Community feedback is desired.
>>
>> Currently, only one disk is supported - in
>> <ai_target_device>...</ai_target_device>. Slice and partition
>> definitions for multiple devices can simply be specified by moving
>> them inside each ai_target_device.
>>
>> Target devices can then have symbolic names. The symbolic name can
>> be used in vdevs in zfs pool definitions. Examples follow below.
>>
>> The default disk reference names are of the form "deviceN", where
>> N = 1...number of devices. A custom name for a disk can be created
>> using the element ai_target_device/reference_name, and must be a
>> unique string of alphanumeric characters and underscores.
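>>
>> For example, two target devices, the first given a custom reference
>> name and the second presumably falling back to the default name
>> "device2" (a minimal sketch - the exact element nesting is subject to
>> the schema to be posted):
>>
>> <ai_target_device>
>> <reference_name>bootdisk</reference_name>
>> </ai_target_device>
>> <ai_target_device>
>> <!-- no reference_name given, so this disk would be referenced as "device2" -->
>> <target_device_overwrite_disk/> <!-- erase disk, use whole disk for slice 0 -->
>> </ai_target_device>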
>>
>> Implementing root zfs pool (rpool)
>>
>> The manifest currently allows specification of rpool.
>> ai_target_device/install_slice_number indicates the root pool slice.
>> If not specified, slice 0 of the first disk in the list is assumed
>> to hold the root pool.
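>>
>> For example, to select the boot disk and put the root pool on slice 1
>> rather than the default slice 0 (a sketch; the element nesting shown
>> is illustrative):
>>
>> <ai_target_device>
>> <target_device_select_boot_disk/>
>> <install_slice_number>1</install_slice_number>
>> </ai_target_device>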
>
> With the support for multiple disks,
>
>> A mirror slice vdev can be declared within the ai_target_device:
>> - unique device identifier (ctds, mpxio, /device node, or reference
>> name of a selected disk)
>> - slice number
>> This results in the command:
>> zpool create <poolname> <install slice>
>> or, when a mirror vdev is declared:
>> zpool create <poolname> mirror <install slice> <mirror vdev>
>> If the pool already exists, the "zpool create" will overwrite the
>> existing pool.
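>>
>> Concretely, with the install slice on c0t0d0s0 and a mirror slice on
>> c1t0d0s0 (device names here are purely illustrative):
>>
>> zpool create rpool c0t0d0s0                    # no mirror vdev declared
>> zpool create rpool mirror c0t0d0s0 c1t0d0s0    # with a mirror vdev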
>>
>> The target_device_option_overwrite_root_zfs_pool option can be handled
>> as follows:
>> - import the named rpool
>> - delete the datasets that comprise the rpool, using "zfs destroy <dataset>"
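>>
>> Sketched as commands (the dataset names illustrate a typical rpool
>> layout; the actual datasets would be discovered at install time):
>>
>> zpool import rpool
>> zfs destroy -r rpool/ROOT
>> zfs destroy rpool/dump
>> zfs destroy rpool/swap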
>
> Doing this sounds like you're really reusing an existing pool.
> Is that the intent of this parameter? If not, why wouldn't
> we destroy the pool, and recreate it? If I'm reinstalling, I
> don't want to see crufty attributes on the pool from my previous
> install.
Well, we could reuse an existing pool as it was defined. The case
covered here would allow the user to use the existing pool definition,
saving the user from having to redefine the entire pool or from having
to know any details about how the pool was defined in the first place.
>
>> - proceed with installation as usual
>>
>> zfs pool creation:
>> A pool consists of a set of vdevs. At this time, the vdevs are
>> slices, so they consist of a unique disk identifier (can be ctds,
>> mpxio, /device, or reference name) plus a slice number.
>>
>> Mirrors consist of a list of vdevs and can be listed in the same way.
>>
>> General definition for a zfs pool (not the root pool):
>> ai_zfs_pool
>> name
>> id (used to reference an existing pool)
>> vdevs (1 or more vdev definitions or a set)
>> mirror_type (regular mirror, raid, or raid2)
>> mirror_vdevs (0 or more mirror definitions, each a list of vdevs)
>> mountpoint (for consideration)
>
> mountpoint should be optional; if provided, it should be honored.
>
>> /ai_zfs_pool
>>
>> Format for vdev:
>> disk_name - real (ctds, mpxio, /device node) or reference name of
>> selected disk
>> slice - valid slice number 0,1,3-7
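>>
>> A sketch of a mirrored (non-root) pool built from the elements above
>> (the pool name, the mirror_type value, and the nesting of the mirror
>> vdevs here are illustrative, pending the schema):
>>
>> <ai_zfs_pool>
>> <name>tank</name>
>> <mountpoint>/tank</mountpoint>
>> <mirror_type>regular</mirror_type>
>> <mirror_vdevs>
>> <vdev>device2.s0</vdev>
>> <vdev>device3.s0</vdev>
>> </mirror_vdevs>
>> </ai_zfs_pool>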
>>
>> Example: install on the boot disk, use a selected disk as a raid2
>> mirror, and use another selected disk of over 30GB for the zfs pool
>> newpool, mounted at /export1:
>> <ai_target_device>
>> <target_device_select_boot_disk/>
>> <mirror>mirrordev.s0</mirror> <!-- mirror on selected disk named "mirrordev", slice 0 -->
>> <mirror_type>raid2</mirror_type>
>> </ai_target_device>
>> <ai_target_device>
>> <reference_name>newpooldisk</reference_name>
>> <target_select_min_size>30</target_select_min_size> <!-- minimum size in GB -->
>> <target_device_overwrite_disk/> <!-- erase disk, use whole disk for slice 0 -->
>> </ai_target_device>
>
> For this second ai_target_device, it's basically going to be
> selecting *any* disk, used or unused, that's over 30GB?
In this example, yes, it would be any disk over 30GB.
>
>> <ai_target_device>
>> <reference_name>mirrordev</reference_name>
>> <!-- assume that disk is appropriate for raid2 mirror -->
>> <target_device_overwrite_disk/> <!-- erase disk, use whole disk for slice 0 -->
>> </ai_target_device>
>
> This third ai_target_device here seems to be what's defining
> the 'mirrordev' reference name, and also its usage definition
> (the fact that it should be erased and relaid out using s0),
> but then the first ai_target_device also seems to be defining
> (or maybe just assuming) the usage definition of 'mirrordev'
> by saying 'mirrordev.s0'.
The first device is using it, the third is defining it.
>
> I know this is just an example, and see below that you will
> be posting a schema for what you're proposing, so I'll wait
> for that for more comments. In general, it will be easier to
> comment by looking at something that defines the objects in
> play here.
I have a nearly complete schema for this and will send it out later
today (CET).
>
>> <ai_zfs_pool>
>> <name>newpool</name>
>> <mountpoint>/export1</mountpoint>
>> <vdev>
>> newpooldisk.s0 <!-- use selected disk named "newpooldisk", slice 0 -->
>> </vdev>
>> </ai_zfs_pool>
>>
>> For further consideration:
>> rpool deletion:
>> is there a use case for this? Should this be defined?
>> zfs pool deletion:
>> is there a use case for this? Should this be defined?
>
> In the context of deletion, I don't see a distinction between
> rpool deletion vs. (non-rpool?) pool deletion.
No, there doesn't appear to be any difference. It seems that the
non-rpool version could be used to delete rpools as well.
William
>
> The only use case I can think of is for when I'm the type
> of user that does not trust AI to *pick* a disk for me.
> I.e. I will never use the 'overwrite' flag. In that case,
> if there are existing pools that I know I don't care about,
> I can specify to delete them, thereby allowing AI to
> pick those disks since they're unused.
>
>
> thanks,
> -ethan
>
>>
>> Not addressed:
>> - reusability issues - if a manifest specifying non-root zfs pools is
>> re-used, what happens to the existing pools? Are they verified in
>> any manner?
>> - use of /var as a separate zfs volume
>>
>> A proposed updated RNG schema, ai_manifest.rng, will be posted with
>> examples.
>>
>> Again, comments from the OpenSolaris community are desired.
>> _______________________________________________
>> caiman-discuss mailing list
>> caiman-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss