Hi Alok,

On 06/ 2/10 12:51 PM, Alok Aggarwal wrote:
Hi Sarah,

On Wed, 2 Jun 2010, Sarah Jelinek wrote:

On 06/ 1/10 03:59 PM, Alok Aggarwal wrote:
Hi Sarah,

On Tue, 1 Jun 2010, Sarah Jelinek wrote:

So, at a high level I see DC doing something like:

Target instantiation - including boot archive and pkg image directories
Transfer - pkg image population
Image modification - the boot archive image modifications that need to be done
Transfer - populate boot archive area
Post Install - boot archive compression, generate CD contents, create images

Would this not work?
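To make the proposed sequence concrete, here is a minimal sketch of five checkpoints run in order against a shared DOC. All class and checkpoint names are illustrative stand-ins, not the real DC/Caiman API; the DOC is modeled as a plain dict.

```python
# Illustrative sketch only: a tiny checkpoint pipeline in the shape proposed
# above. Real DC checkpoints and the real DOC look nothing like this.

class Checkpoint:
    def __init__(self, name, work):
        self.name = name
        self.work = work              # callable that does the step's work

    def execute(self, doc):
        # Each checkpoint reads and writes shared state through the DOC.
        return self.work(doc)

def run_pipeline(checkpoints, doc):
    completed = []
    for cp in checkpoints:
        cp.execute(doc)
        completed.append(cp.name)
    return completed

# The five steps from the proposal, as ordered checkpoints:
doc = {}
pipeline = [
    Checkpoint("target-instantiation",  lambda d: d.update(areas=["boot_archive", "pkg_image"])),
    Checkpoint("transfer-pkg-image",    lambda d: d.update(pkg_image="populated")),
    Checkpoint("image-modification",    lambda d: d.update(ba_mods="done")),
    Checkpoint("transfer-boot-archive", lambda d: d.update(boot_archive="populated")),
    Checkpoint("post-install",          lambda d: d.update(media="created")),
]
order = run_pipeline(pipeline, doc)
```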

Karen responded pretty well as to why this won't work.
In brief, we don't quite have the information
needed to instantiate the boot_archive UFS until much
later in the DC process when the pkg_image area as well
as the boot_archive have been populated.

So, really it seems that with the new architecture,
the boot_archive_archive finalizer needs to be broken
into a number of different checkpoints.
Yes, I agree. But I think these are instances of additional Transfer checkpoints and TI checkpoints, and that the post-installation processing of the boot archive can be done as one post-install checkpoint. The computing of the various boot archive properties should be done in the client, imo, not in a checkpoint: the client can gather the data from the DOC, compute the values, then call TI to instantiate the boot archive.
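The sizing computation described here can be sketched roughly as follows: walk the populated boot_archive staging area, total the file sizes, pad, and round up to whole blocks. The function name, padding factor, and block size are assumptions for illustration, not values from the real DC code.

```python
# Hedged sketch: compute a boot_archive UFS size from its populated contents.
# The 20% padding and 512-byte block size are illustrative assumptions.
import os

def boot_archive_size(staging_dir, padding=1.2, block=512):
    total = 0
    for root, _dirs, files in os.walk(staging_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    # Pad the total, then round up to a whole number of blocks.
    padded = int(total * padding)
    return ((padded + block - 1) // block) * block
```

The caller (the client, in this view) would then hand the computed size to TI to instantiate the boot archive.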

I don't think the DC client app should be in the
business of computing values that are then used
by some of the DC checkpoints.

A better architecture would be to have the DC app simply set up the
manifest parser, engine, DOC, and logging, as well as the checkpoints,
and let the bulk of the actual "image construction work" happen
within the checkpoints themselves -- that's a much more dynamic model
than one where the DC app itself does a bunch of processing.
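The "thin app" model being argued for can be sketched like this: the client only wires up shared services and registers checkpoints, and does no image work of its own. Again, all names here (Engine, register, the checkpoint callables) are illustrative assumptions, not the real DC interfaces.

```python
# Illustrative sketch of a thin DC client: it sets up logging, a DOC
# stand-in, and an engine, registers checkpoints, and runs them. All the
# image-construction logic lives inside the checkpoints.
import logging

class Engine:
    def __init__(self, doc, log):
        self.doc = doc
        self.log = log
        self.checkpoints = []

    def register(self, name, func):
        self.checkpoints.append((name, func))

    def run(self):
        for name, func in self.checkpoints:
            self.log.info("executing checkpoint %s", name)
            func(self.doc)

def main(manifest):
    doc = {"manifest": manifest}          # stand-in for the real DOC
    log = logging.getLogger("dc")
    engine = Engine(doc, log)
    # The app registers checkpoints but does no image work itself.
    engine.register("target-instantiation", lambda d: d.update(target="ready"))
    engine.register("transfer", lambda d: d.update(pkg_image="populated"))
    engine.run()
    return doc
```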

Fair enough.

What is the post processing exactly for SPARC?

Post processing for SPARC includes fiocompress'ing
the boot_archive contents and installing the UFS
boot blocks.
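As a rough sketch, a post-processing checkpoint could assemble those two steps as command lines like the following. The exact fiocompress and installboot arguments, and the bootblk/device paths in the usage, are assumptions for illustration, not the real DC invocation.

```python
# Hedged sketch: build (but do not run) the SPARC post-processing commands
# named above. Flags and argument order are assumptions, not the real usage.
import os

def sparc_post_process_cmds(boot_archive_dir, rdsk_device, bootblk_path):
    cmds = []
    # fiocompress each regular file in the boot_archive in place
    # (flags here are an assumption).
    for root, _dirs, files in os.walk(boot_archive_dir):
        for name in files:
            path = os.path.join(root, name)
            cmds.append(["fiocompress", "-mc", path, path])
    # Install the UFS boot blocks on the archive's raw device.
    cmds.append(["installboot", bootblk_path, rdsk_device])
    return cmds
```

A checkpoint would then execute each command list, e.g. via `subprocess.check_call`.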

So, back to the original question: should TI/Transfer -

a) provide alternate interfaces so that they can be called
   directly from within a Checkpoint? Or,
b) should a consumer (that is, a Checkpoint) of TI/Transfer
   instead use the Checkpoint interfaces?

The advantage of (a) seems to be that it flows much better
for DC as the app, whereas the disadvantage is that it can be viewed
as a violation of the CUD architecture in some ways.

The advantage of (b) seems to be that it conforms to CUD, whereas
the disadvantage is that, for some DC checkpoints that need to call
into TI/Transfer, the processing becomes a little tedious.
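The two options can be contrasted in a few lines. Under (a) a DC checkpoint calls a TI entry point directly; under (b) the TI work is itself a checkpoint driven through the common checkpoint interface and parameterized via the DOC. Every name here is an illustrative stand-in.

```python
# Hedged sketch contrasting options (a) and (b); not the real TI interfaces.

def ti_instantiate(doc, target):
    # Stand-in for a TI entry point that instantiates a target.
    doc.setdefault("targets", []).append(target)

# Option (a): a DC checkpoint reaches into TI directly.
def ba_checkpoint_direct(doc):
    ti_instantiate(doc, "boot_archive")

# Option (b): TI is itself a checkpoint, taking its target from the DOC.
def ti_checkpoint(doc):
    ti_instantiate(doc, doc["ti_target"])
```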

I didn't realize this was the original question. It isn't clear to me why we need to make an allowance for DC in this area.

Each checkpoint is a functional boundary, so it isn't clear to me why a Checkpoint would have to call directly into, say, TI or Transfer. If there really are separate functional boundaries for DC, then they need to be separate Checkpoints.

I am arguing that some of these are just instances of TI and Transfer. So, as part of the TI Checkpoint for a boot archive, it has to calculate the various properties required to do this instantiation. The TI Checkpoint for a boot archive is different from the TI Checkpoint for a zpool, dataset, UFS, etc. As part of an instance of this checkpoint type, it may have to do some calculation to correctly instantiate its target.
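The "instances of one checkpoint type" argument can be sketched as TI checkpoint subclasses that share an execute interface, where each instance computes its own properties before instantiating. Class names, the padding rule, and the DOC keys are all illustrative assumptions.

```python
# Hedged sketch: TI checkpoint instances per target type, each computing
# the properties it needs. Not the real TI checkpoint hierarchy.

class TICheckpoint:
    def execute(self, doc):
        props = self.compute_properties(doc)
        doc[self.target] = props          # record the instantiated target

class ZpoolTI(TICheckpoint):
    target = "zpool"
    def compute_properties(self, doc):
        return {"name": doc.get("pool_name", "rpool")}

class BootArchiveTI(TICheckpoint):
    target = "boot_archive"
    def compute_properties(self, doc):
        # This instance derives its UFS size from the populated contents,
        # padded by 20% (integer math; the factor is an assumption).
        size = doc.get("ba_contents_size", 0)
        return {"fs": "ufs", "size": size * 12 // 10}
```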

sarah
****
Alok

_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
