Hi Alok,

This is very useful - thanks for writing this up.

I have a few general comments, below.


On 01/13/11 20:32, Alok Aggarwal wrote:
This email is required reading for: Darren, Dermot, Matt, Niall, Karen, Jean and Drew. Optional for others.

The following changes/clarifications are proposed to
the current TI/TD design.

General flow of an app:
----------------------

The app or the target controller will call into TD to do the device
discovery. TD populates everything it finds in the Target.DISCOVERED
tree in the DOC.

When the app intends to make changes to the devices or the layout of those devices, it (or the target controller) will make a copy of Target.DISCOVERED and call it Target.DESIRED. The controller will then call partition.add_partition/delete_partition, slice.add_slice/delete_slice, etc. on behalf of the app. These calls in turn will trigger validation of the requested change using

In practice, I think it will be both the app and the TargetController that
make these calls.  The original design was for the app to do everything
through the TargetController, but we then realized this just resulted in
passing target objects around as parameters, rather than using proper object-
oriented programming.

So now, the TargetController sets up the Target.DESIRED tree structure
when a disk is initially selected or another disk is selected or added.
Then the app directly operates on the objects in the Target.DESIRED tree
to make the user's requested changes.

Shadow Lists. If the requested operation yields no validation errors, the change is carried through to Target.DESIRED. If any validation errors occur, they will be stored in the error service, and at the same time the change will still be carried through to Target.DESIRED. A complete list of the calls to make to change Target.DESIRED is below [1].

Do you also have a list of the errors that can be raised for the
different validation issues?  eg I imagine something like:

   partition1 = Partition("1")
   partition1.action = "create"
   partition1.type = "solaris"
   partition1.size = ...
   try:
       disk.insert_children([partition1])
   except TOO_MANY_SOLARIS_PARTITIONS_ERROR:
       # this is OK while install targets are being configured
       pass
   except PARTITIONS_TOO_BIG_FOR_DISK_ERROR:
       # we can't allow this to stand
       display_error(...)


etc

The app/target controller decides whether the reported errors are "soft" (interim state change is okay) or "hard" (interim state is a hard failure). After the app/target controller has made all the necessary changes to Target.DESIRED, it calls ti.final_validation() which validates Target.DESIRED completely and stores any errors thus found in the error service.
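To make the soft/hard distinction concrete, here is a minimal sketch of how an app might classify errors coming back from the error service. The error class names and the classify() helper are assumptions for illustration, not the real error-service API:

```python
# Hypothetical sketch: classifying validation errors as "soft"
# (tolerable while targets are still being configured) or "hard"
# (must be fixed before final_validation()/TI can succeed).
# The class names below are assumptions, not the real API.

class ValidationError(Exception):
    pass

class TooManySolarisPartitionsError(ValidationError):
    """Interim state: acceptable while targets are being configured."""

class PartitionsTooBigForDiskError(ValidationError):
    """Hard failure: can never be committed to TI."""

# Errors the app is willing to tolerate until final_validation()
SOFT_ERRORS = (TooManySolarisPartitionsError,)

def classify(error):
    """Return "soft" if the interim state change is okay,
    "hard" if it must be corrected before calling TI."""
    return "soft" if isinstance(error, SOFT_ERRORS) else "hard"
```

The key design point is that the same error can be soft for one consumer (an interactive installer mid-edit) and hard for another (AI applying a completed manifest), which is why the app/target controller, not TD/TI, makes the call.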

I assumed that final_validation() would be a method of the Target
class, not the ti library?  Am I incorrect?

The app/target controller then has to ensure that the errors in the error service are appropriately addressed and that final_validation() succeeds prior to calling TI.

Once Target.DESIRED is fully valid, the app/target controller calls TI
to lay out the targets as indicated in Target.DESIRED.

Target Validation
-----------------
Target Validation will be performed via the use of Shadow Lists
on the backend. The following is a list of all the validation
checks that will be made for the respective target entities,
not counting the checks specific to GPT.

Partition
- Only one Solaris2 partition on a given disk should exist
- A maximum of 4 primary partitions can be present
- A maximum of 1 extended partition, with at most 32 logical
  partitions within it (one of which may be the Solaris2
  partition), can be present
- Partitions should not overlap
- Partitions should not be too small for a given architecture
- Solaris2 partition must be <2TB for VTOC
- FAT32 partition must be <4GB
- Extended partition must be greater than a certain size (to
  account for 63 reserved sectors, etc.)
- A partition must not be in-use for something else (eg: part
  of a zpool)

Slice
- Number of slices must not exceed MAX_NSLICES
- Slice number must be at most MAX_NSLICES-1
- Slice size must be less than the size of the disk
- S2 must be the backup slice
- S1 must be swap slice (is this really even needed?)
- Slices should not overlap
- A slice must not be in-use for something else (eg: part of
  a zpool)

Zpool
- Pool name must be unique
- Pool should not have vdevs that are part of another pool
- Pool mountpoint, if specified, should be unique
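As an illustration of what one of these checks looks like, here is a minimal sketch of the partition-overlap check, written against plain (start, size) tuples rather than the real shadow-list classes, whose interfaces I'm not assuming here:

```python
# Hypothetical sketch of the "Partitions should not overlap" check.
# Uses plain (start_sector, size_in_sectors) tuples instead of the
# real shadow-list objects.

def partitions_overlap(partitions):
    """Return True if any two partitions in the iterable overlap.

    partitions: iterable of (start_sector, size_in_sectors) tuples.
    """
    # Sort by start sector, then compare each span with its successor.
    spans = sorted((start, start + size) for start, size in partitions)
    for (s1, e1), (s2, e2) in zip(spans, spans[1:]):
        if s2 < e1:  # next partition starts before the previous ends
            return True
    return False
```

A shadow list would run a check like this incrementally on every insert_children() call, raising the corresponding error rather than returning a boolean.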

Size
----

TD/TI will internally always operate on sectors (base 2).
A 'Size' class will be provided to assist in converting
the size in sectors to bytes/KB/MB/GB/TB.

The classmethods supported by this class are:

def bytes(cls, sectors, cylsize=512)
def KB(cls, sectors, cylsize=512)
def MB(cls, sectors, cylsize=512)
def GB(cls, sectors, cylsize=512)
def TB(cls, sectors, cylsize=512)
def sectors(cls, value, units, cylsize=512)
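A minimal sketch of how these classmethods might be implemented, assuming binary (base 2) units as stated above; the actual class may differ, and the cylsize parameter name is taken from the signatures as given:

```python
# Hypothetical sketch of the Size class: converts between sectors
# and bytes/KB/MB/GB/TB using base-2 units, per the design above.

class Size(object):

    @classmethod
    def bytes(cls, sectors, cylsize=512):
        return sectors * cylsize

    @classmethod
    def KB(cls, sectors, cylsize=512):
        return sectors * cylsize / 1024.0

    @classmethod
    def MB(cls, sectors, cylsize=512):
        return sectors * cylsize / 1024.0 ** 2

    @classmethod
    def GB(cls, sectors, cylsize=512):
        return sectors * cylsize / 1024.0 ** 3

    @classmethod
    def TB(cls, sectors, cylsize=512):
        return sectors * cylsize / 1024.0 ** 4

    @classmethod
    def sectors(cls, value, units, cylsize=512):
        # units is assumed to be one of "b", "kb", "mb", "gb", "tb"
        factor = {"b": 1, "kb": 1024, "mb": 1024 ** 2,
                  "gb": 1024 ** 3, "tb": 1024 ** 4}[units.lower()]
        return int(value * factor // cylsize)
```

For example, Size.GB(2097152) is 1.0 (2097152 sectors x 512 bytes = 1 GiB), and Size.sectors(1, "kb") is 2.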

Holey partition/slices
----------------------

Holes within the partition/slice table will not be reported
by TD. The following functions will be provided to allow
a consumer to get a list of holes within partitions/slices.

def Partition.get_gaps(self, size_units=Target.SIZE_UNITS_GB)
    '''
       Returns a tuple containing HoleyPartition objects corresponding
       to the spaces adjacent to the partition. The HoleyPartition
       objects are available to the partition to use when increasing
       the size. The sum of the sizes of both HoleyPartitions
       determines the maximum additional size this partition can
       grow by.
       size_units indicates the default human readable size units
       format used in each HoleyPartition object in the returned tuple.
       If HoleyPartitions exist on both sides of the partition
       then the returned tuple will be of the form:
       (Before, After)

       If no adjacent space exists on a given side of the partition
       then the corresponding element of the tuple will be None:
       (None, After) or (Before, None).

       If no adjacent space exists on either side of the partition
       then the returned tuple will be (None, None).
    '''

def Slice.get_gaps(self, size_units=Target.SIZE_UNITS_GB)
    '''
       Same as for Partition object
    '''
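To show how a consumer might use the (Before, After) tuple described above, here is a small sketch. HoleyPartition is mocked as a bare object with a size attribute, which is an assumption; only the tuple shape comes from the docstring:

```python
# Hypothetical consumer of get_gaps(); HoleyPartition is mocked as a
# simple object with a .size attribute (an assumption for illustration).

class HoleyPartition(object):
    def __init__(self, size):
        self.size = size

def max_growth(gaps):
    """Return the maximum additional size a partition can grow by.

    gaps: the (Before, After) tuple returned by get_gaps();
    either element may be None if no adjacent space exists.
    """
    return sum(hp.size for hp in gaps if hp is not None)
```

So for a partition with a 2GB gap before it and a 3GB gap after it, max_growth((HoleyPartition(2), HoleyPartition(3))) yields 5.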

[1] Target.DESIRED needs to reflect the state of the targets
    in the DOC as they should be laid out by TI. So, if

Right.  I think we need a precise definition of what layouts are
acceptable to TI, which I think are a subset of the possible layouts
supported by the DTD.  For example, and using XML notation,
*before* the proposed "device referencing" changes to the schema
are enacted, I think your example code below would result in
something like the following objects in the DOC.

Will TI run successfully if it finds the following?


<!-- start -->
<target name="desired targets">
   <target_device>
       <zpool name="mypool" action="create" is_root="True">
           <vdev>
               <disk>
                   <disk_name name="c0t0d0" type="ctd"/>
                   <partition action="use-existing" part_type="191"/>
                   <partition name="1" action="create" part_type="primary">
                       <slice action="preserve" name="1"/>
                       <slice action="preserve" name="2"/>
                       <size val="..."/>
                   </partition>
               </disk>
           </vdev>
           <dataset>
               <filesystem name="mypool/user1" action="create"/>
           </dataset>
       </zpool>
   </target_device>
</target>
<!-- end -->


    modifications to physical and logical devices need to
    be made, they can be done as follows:

    target = Target.DESIRED
    disk = target.get_descendants( .. )
    slice1 = Slice("1")
    slice1.action = "preserve"
    slice2 = Slice("2")
    slice2.action = "preserve"
    partition1 = Partition("1")
    partition1.action = "create"
    partition1.type = "primary"
    partition1.bootid = 0x80
    partition1.size = ..
    partition1.insert_children([slice1, slice2])
    disk.insert_children([partition1])

    zpool = Zpool("mypool")
    zpool.action = "create"

    vdev = Vdev("vdev")

    dataset = Dataset("dataset")

    fs = Filesystem("mypool/user1")
    fs.action = "create"

    dataset.insert_children([fs])
    vdev.insert_children([disk])
    zpool.insert_children([vdev, dataset])

    Note that the logical stuff needs to change a LOT
    once the target schema changes are made. Also note that

Agreed.

    more examples of doing something like the above can be found
    in the cud_ti gate under the 'install_target/test' directory.

Thanks,
- Dermot



Alok
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
