On 02/18/11 04:55, Darren Kenny wrote:
On 17/02/2011 19:31, Ethan Quach wrote:
Darren,

Thanks for the comments ...
Thanks to you for looking at this side of things ;)

On 02/16/11 04:24, Darren Kenny wrote:
Hi Ethan,

I took a look at your document, in general it looks pretty good...

Naturally though, there are some comments:

5.1.1 <configuration> element

      You mention here that there are two URI schemes supported; while file://
      doesn't present a large problem, I'm wondering about http:// URIs, since
      right now the AI client doesn't do any fetching from remote URLs - it's
      the manifest/locator service that does that.

      Is there the expectation that this will be handled by the AI client
      itself, or will the manifest/locator service be enhanced to do the
      fetching of the URIs by extracting them from the manifest?
The AI client will (via a checkpoint that the AI client runs).  Do you
foresee an issue with that?
I'm not saying that it's not possible, it's just that it's a divergence from what
we're currently doing, and as such I think it might be confusing to people.

But given that it's unlikely that manifest/locator will actually be run in the
case of a zones install - especially on a live system, I'm not sure we have
any other choice.

Note, this part of the design isn't run on the live system. This is the part that supports zone configurations specified in the AI manifest of the global system install. So in theory, manifest-locator could be used, but I don't see an issue with the AI client itself processing this.
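
Just to sketch what that checkpoint's fetch step might look like - the function
and names below are purely illustrative, not actual checkpoint code:

     import shutil
     import urllib2
     from urlparse import urlparse

     def fetch_zone_config(uri, dest_path):
         """Copy the zone config file named by a file:// or http:// URI."""
         scheme = urlparse(uri).scheme
         if scheme in ("http", "https"):
             # Remote config - the AI client fetches it itself rather than
             # relying on the manifest/locator service.
             response = urllib2.urlopen(uri)
             with open(dest_path, "wb") as dest:
                 shutil.copyfileobj(response, dest)
         elif scheme in ("file", ""):
             # Local config - a plain copy is enough.
             shutil.copyfile(urlparse(uri).path or uri, dest_path)
         else:
             raise ValueError("unsupported URI scheme: %s" % uri)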

5.1.2 Structure of zone configuration files

      ai_manifest.xml
          Based on the comments in 5.2.1, and given that there is a difference
          in the AI manifest tags when applying them to zones, should we
          consider having a separate DTD such as ai_zones.dtd?
I don't think so, because we actually want a manifest that is used to install
a global zone to also be usable to install a non-global zone.  That
transparency makes managing AI manifests much easier on the user.  In the
design spec this was alluded to as a future enhancement, but I've gotten
comments from Dave that this might be desirable now.

So I think we want to keep the definition one and the same for this
purpose instead of separating them.
OK, I just thought that it might make things more transparent to the user
providing the manifest: if it doesn't even validate against the DTD then
it's unlikely to work at all.

Using the same manifest as AI will mean that it could be misunderstood what
is supported and what is not.
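
(FWIW, the validation step itself is the same whichever DTD we point it at;
a rough lxml sketch, with hypothetical paths, just to show it's cheap to do:)

     from lxml import etree

     def manifest_validates(manifest_path, dtd_path):
         """Return True if the manifest validates against the given DTD."""
         dtd = etree.DTD(open(dtd_path))
         tree = etree.parse(manifest_path)
         return dtd.validate(tree)
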
5.1.3 Processing the <configuration> elements

      NIT: In the list of AI->CUD checkpoints, the "Transfer IPS" should really be
      one or more Transfer (not specifically IPS) checkpoints, depending on the
      contents of the AI Manifest.

      Ginnie's document on ICTs has a more complete list of ICT checkpoints now,
      obsoleting the list in the AI document. After "Transfer Logs" there is now
      a "Create Snapshot" checkpoint (to create the @install snapshot).
The ICT to Checkpoint document lists all ICTs, but it doesn't seem to
note which ones were applicable to AI and which ones were not.  Other
than the Create Snapshot checkpoint, are there any others?

One question I actually had regarding the ICT conversion was that the
SetFlushIPSContentFlag checkpoint, which is in the AI list, is grouped
into the CleanupLiveCD ICT checkpoint.  Does AI not need that ICT anymore?
AI does still require it, and it's a change in the ICT design that I missed.
Ginnie has created it as a separate checkpoint now for us.

OK.

      The zone_rpool_dataset that is passed with the -Z option should be placed
      into the Data Object Cache so that checkpoints can find it when required.
Yep, this is noted in 5.2.2

      I notice that there isn't a TargetSelection checkpoint being used - it's
      at this point that things are moved from the Target.DISCOVERED tree and
      the imported Manifest into the Target.DESIRED tree that TI requires its
      information to be in. This would be where you would add the new structure
      of BE and datasets for TI to create for you.
I'll add TargetSelection to the zone checkpoint list.  It doesn't sound
like it's going to differ from global zone usage.
Oh, it will probably behave slightly differently since it's a zone install -
e.g. we won't be telling TI to create a new rpool, but I think it can switch
the behaviour based on the fact that a zone_rpool_dataset was provided.

Actually, maybe it will need to be a separate "TargetSelectionZone" checkpoint
on that basis - don't want to risk TI actually damaging an existing rpool,
disks, etc.

I'll see how this shakes out during implementation...
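
Roughly the shape a separate zone variant might take (sketch only - the DOC
access and Target tree handling below are stand-ins for the real interfaces):

     class TargetSelectionZone(object):
         """Sketch: build Target.DESIRED for a zone root install."""

         def __init__(self, doc):
             # 'doc' is treated as a plain dict here, standing in for the
             # real Data Object Cache.
             self.doc = doc

         def execute(self, dry_run=False):
             zone_dataset = self.doc.get("zone_rpool_dataset")
             if zone_dataset is None:
                 raise RuntimeError("zone install requires -Z <dataset>")

             # Unlike the global-zone case, never describe new rpools or
             # disks - only datasets beneath the dataset passed via -Z.
             self.doc["Target.DESIRED"] = {
                 "zone_be": zone_dataset + "/ROOT/zbe",
                 "datasets": [zone_dataset + "/ROOT"],
             }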

5.2.1 Manifest Validation Zone

      I think that this checkpoint wouldn't necessarily be different if the AI
      manifest for a zones install simply used a different DTD.

      Where there is overlap between the DTDs, maybe we could re-organise
      things such that the AI and AI_Zones DTDs share common elements.
[...]

5.2.2 Target Instantiation Zone

      I'm not sure why this is different to the standard TI checkpoint.

      I can't see a good reason that things can't be common if the information
      that is passed to TI doesn't tell it to create a new rpool (i.e. the
      action != "create"), but instead just tells it to add datasets.

      Yes, it's possible that the TI code will need changes to do this
      perfectly, but I'm not sure it warrants a new checkpoint so much as a
      refinement of the existing one.

      As mentioned in 5.2 above, you will likely need some checkpoint to create
      the Target.DESIRED tree for TI to get its instructions from.
5.2.2.1 libbe::be_init_nested_be()

      I think that the standard TI above could handle this fairly simply if the
      BE to be created is a zone BE as opposed to a boot BE. It could even
      toggle its behaviour based on locating the zone_zpool_dataset in the
      DOC.
What I was trying to avoid is littering our lower level checkpoint code
with the following in *random* places:

     if installing_zone_root():
        ...
        ..
     else:
        ...
        ..


Hence the subclassing (or separation of the class, as in the
ManifestValidation case) of certain classes to make it more apparent
and, in the long run, easier to maintain.  But perhaps to reconcile
this, I could just augment the affected classes' initialization to
accept an 'installingZoneRoot' optional parameter.  Code in the class
would still have the if/else clauses, but at least it's identified at
the class level.  For example, ManifestValidation would look something
like the following:

     def ManifestValidationCheckpoint(..., ..., installingZoneRoot=False)
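
Fleshed out slightly, the flag approach might look something like this (the
method names below are made up just to show the shape, not actual code):

     class ManifestValidationCheckpoint(object):
         """Sketch: one checkpoint, with optional zone-specific rules."""

         def __init__(self, name, manifest, installingZoneRoot=False):
             self.name = name
             self.manifest = manifest
             self.installing_zone_root = installingZoneRoot

         def execute(self, dry_run=False):
             self._validate_common()           # checks shared by both paths
             if self.installing_zone_root:
                 self._validate_zone_rules()   # e.g. reject disk targets
             else:
                 self._validate_global_rules() # e.g. require disk targets

         def _validate_common(self):
             pass   # placeholder for DTD/schema validation

         def _validate_zone_rules(self):
             pass   # placeholder for zone-only restrictions

         def _validate_global_rules(self):
             pass   # placeholder for global-install-only checks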


Thoughts?
I think that could work...

I do understand the desire to separate out the work into different classes - I
guess the amount of separation really is the deciding factor - if there isn't
a lot of shared code, then it probably does make sense to have different class
implementations.

One thing I don't want to do though is have a mix in the implementation. I'd rather keep it all consistent one way or another.

5.2.3 Initialise SMF Zone

      Again, I'm not sure if there is a need for another checkpoint - the
      difference is very small - and could hinge on the specification of the
      zone_zpool_dataset in the DOC.
[...]

5.2.4 Transfer Logs Zone

      I think this requirement could be mitigated by the standard CUD checkpoint
      just doing the same thing and using /var/run/auto_install.<pid>/* for its
      logs during an install.
      It would be interesting to know if people think this would be wrong for a
      standard install to use...
Yes, my intention was to make this change as part of the standard
Transfer Logs checkpoint, not just for zone root install.  This actually
needs to apply to *any* temporary work files that the AI client, and any
of its checkpoints, need to use.  So the change actually spans beyond
just the Transfer Logs checkpoint.

Perhaps the main AI program code needs to set some WORKDIR variable in
the DOC, and all checkpoints need to base off that in creating temp files.
Hmm, that's certainly a possibility, I'll take a look at doing this.

So just to be clear, you would expect something to be in the DOC like:

     class ApplicationParameters(DataObject):
         application_name = "auto-install"
         work_dir = "/var/run/install.PID"

This is possibly something generic that we could create in install_common, and
then anyone that wants to know this type of information could access it via
class methods like:

     work_dir = ApplicationParameters.get_work_dir(doc)
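
Roughly, such a class might look like the following (the DOC lookup below is
just a stand-in for however install_common would actually find the object):

     import os

     class ApplicationParameters(object):
         """Sketch: per-application parameters stored once in the DOC."""

         def __init__(self, application_name, work_dir=None):
             self.application_name = application_name
             if work_dir is None:
                 # Default to a per-process work area under /var/run.
                 work_dir = "/var/run/%s.%d" % (application_name, os.getpid())
             self.work_dir = work_dir

         @classmethod
         def get_work_dir(cls, doc):
             # Stand-in for a real DOC lookup by class; returns the
             # work_dir of the first ApplicationParameters object found.
             for obj in doc:
                 if isinstance(obj, cls):
                     return obj.work_dir
             return None

     # Hypothetical usage - AI inserts this once at start-up:
     #     doc.append(ApplicationParameters("auto_install"))
     # and any checkpoint can then call:
     #     work_dir = ApplicationParameters.get_work_dir(doc)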

How does that sound?

This sounds promising. Do you think you can include this as part of AI->CUD? If not, this is something I could probably include.


thanks,
-ethan

Thanks,

Darren.

_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
