Ethan, I published an RNG schema, as you requested:
http://www.opensolaris.org/os/project/caiman/auto_install/ai_manifest_schema.xml

I also added a quick example (though not yet a good real-world example; no reference names yet):
http://www.opensolaris.org/os/project/caiman/auto_install/ai_manifest_example.xml
In response to your issues with reference naming (at the bottom):

Semantic validation - algorithm:
A single pass can be made to find reference names, which are listed. Then device_name can be semantically validated as being one of:
- a reference name
- ctds
- mpxio
A vdev can be semantically validated as being one of:
- a reference name, optionally followed by a slice or partition (sN or pN)
- ctds
- mpxio

Run time - algorithm:
- A pass to find reference names
- Disk selection - a reference name is defined and resolved here:
-- from top to bottom, find a disk device according to the criteria. If the device has a reference name, map the reference name to the device name when the device is found
- zpool vdev definition:
-- when a reference name is encountered in a vdev, substitute the real disk name for the reference name
- zfs file system definition:
-- same as for zpool vdev definition

Potential problems:
- If, after reference name substitution, the slice or partition doesn't exist:
-- the user is still responsible for making sure that slices and partitions are created. An error message with instructions on how to create the missing slice or partition would be helpful.
- Forward references - a reference name is referenced before it is defined:
-- At run time, since the use of a reference name (as the location of a vdev or zfs file system) always comes after disk selection, where it is defined, there is no problem with the ordering here.
- Your "peer specification" problem:
-- I think the previous point addresses this. They might be peers, but the disk is always selected before its reference name is used.

I don't see any prohibitive complications yet.
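The run-time substitution described above can be sketched as follows. This is a hypothetical illustration, not the installer's actual code: the data structures, the `collect_reference_names` and `resolve_device` helpers, and the sample ctds names are all made up for the example.

```python
import re

def collect_reference_names(target_devices):
    """Pass 1: gather every reference_name defined by a target device."""
    return {d["reference_name"] for d in target_devices if "reference_name" in d}

def resolve_device(name, ref_map):
    """Run time: substitute the selected disk's real name for a reference
    name, preserving any trailing slice or partition suffix (sN or pN)."""
    m = re.match(r"([^.]+)(?:\.([sp]\d+))?$", name)
    base, part = m.group(1), m.group(2)
    if base in ref_map:
        base = ref_map[base]          # e.g. mirrordev -> c1t0d0
    return base + (part or "")

# Disk selection (top to bottom) has mapped mirrordev to a real disk:
ref_map = {"mirrordev": "c1t0d0"}
print(resolve_device("mirrordev.s0", ref_map))  # c1t0d0s0
print(resolve_device("c0t0d0s0", ref_map))      # plain ctds name: unchanged
```

Because disk selection runs first, `ref_map` is fully populated before any vdev or zfs file system definition consults it, which is why ordering is not a problem at run time.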
William Ethan Quach wrote:
>
> William Schumann wrote:
>>>> A mirror slice vdev can be declared within the ai_target_device:
>>>> - unique device identifier (ctds, mpxio, /device node, reference
>>>> name of a selected disk)
>>>> - slice number
>>>> This results in the command:
>>>> zpool create <poolname> <install slice> [<mirror vdev>]
>>>> If the pool exists, doing the "zpool create" will overwrite the
>>>> existing pool.
>>>>
>>>> The target_device_option_overwite_root_zfs_pool can be done as
>>>> follows:
>>>> - import the named rpool
>>>> - delete the datasets that comprise the rpool, using "zfs destroy
>>>> <dataset>"
>>>
>>> Doing this sounds like you're really reusing an existing pool.
>>> Is that the intent of this parameter? If not, why wouldn't
>>> we destroy the pool and recreate it? If I'm reinstalling, I
>>> don't want to see crufty attributes on the pool from my previous
>>> install.
>> Well, we could reuse an existing pool as it was defined. The case
>> covered here would allow the user to use the existing pool
>> definition, saving the user from having to redefine the entire pool
>> or from having to know any details about how the pool was defined in
>> the first place.
>
> So then there should be a use case defined for wanting to create a
> pool from scratch, even if one by that name already exists.
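For reference, the "zpool create" invocation described in the quote could be composed roughly like this. The helper name and manifest fields are hypothetical, used only to show how an install slice and a mirror slice combine into one mirror vdev:

```python
def zpool_create_cmd(poolname, install_slice, mirror_slices=()):
    """Return the zpool create command line for the root pool.

    If mirror slices are given, the install slice and its mirrors are
    joined into a single mirror vdev (standard zpool syntax)."""
    argv = ["zpool", "create", poolname]
    if mirror_slices:
        argv += ["mirror", install_slice, *mirror_slices]
    else:
        argv.append(install_slice)
    return " ".join(argv)

print(zpool_create_cmd("rpool", "c0t0d0s0"))
# zpool create rpool c0t0d0s0
print(zpool_create_cmd("rpool", "c0t0d0s0", ["c1t0d0s0"]))
# zpool create rpool mirror c0t0d0s0 c1t0d0s0
```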
>
>>>
>>>> <ai_target_device>
>>>>   <reference_name>mirrordev</reference_name>
>>>>   <!-- assume that disk is appropriate for raid2 mirror -->
>>>>   <target_device_overwrite_disk/> <!-- erase disk, use whole disk
>>>>   for slice 0 -->
>>>> </ai_target_device>
>>>
>>> This third ai_target_device here seems to be what's defining
>>> the 'mirrordev' reference name, and also its usage definition
>>> (the fact that it should be erased and relaid out using s0),
>>> but then the first ai_target_device seems to also be defining
>>> (or maybe just assuming) the usage definition of 'mirrordev'
>>> by saying 'mirrordev.s0'
>> The first device is using it, the third is defining it.
>
> So then there's a dependency there, in that the first specification
> depends on the third (the s0 part), but the specifications are peers.
> This will be a nightmare for semantic validation, and if not done
> there, then a nightmare for the program consuming this wad of data.
>
>
> -ethan
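The peer-specification concern can be handled with a first pass that collects all definitions before any uses are checked, so a use like 'mirrordev.s0' validates even when it appears before (or as a peer of) its defining ai_target_device. This is a hypothetical sketch: the dict layout, the `uses` field, and the ctds regex (mpxio names are omitted for brevity) are illustrative assumptions, not the real validator:

```python
import re

# Simplified ctds matcher; a real validator would also accept mpxio
# and /device node names.
CTDS = re.compile(r"^c\d+t\d+d\d+(?:[sp]\d+)?$")

def validate_device_names(devices):
    """Pass 1 collects definitions; pass 2 checks every use resolves."""
    defined = {d["reference_name"] for d in devices if "reference_name" in d}
    errors = []
    for d in devices:
        for name in d.get("uses", []):
            base = name.split(".")[0]
            if base in defined or CTDS.match(name):
                continue
            errors.append(f"unresolved device name: {name}")
    return errors

devices = [
    {"uses": ["c0t0d0s0", "mirrordev.s0"]},   # first spec uses mirrordev
    {"reference_name": "mirrordev"},          # peer spec defines it
]
print(validate_device_names(devices))  # []
```

What this pass cannot check statically is whether s0 will actually exist on the selected disk; as noted earlier in the thread, that remains a run-time check with a helpful error message.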