Karen,

A few comments, mostly fairly high-level, since others have already commented on the more detailed issues I spotted.

I think there's an over-emphasis on singletons in this design. We've already discussed the DOC and I think resolved that it won't be one, but I would prefer not to see any singletons other than the InstallEngine. It is a natural singleton, whereas the others seem to be a case of the design dictating behavior where it isn't actually necessary. For example, if a checkpoint wants or needs to do alternate logging for some reason, or if the application needs multiple logs to satisfy some other integration requirement, there's no reason not to allow use of a different logging instance.

A further comment on the logging: why not allow the application to specify an alternate logger instance instead of the one that you would instantiate automatically? That seems more flexible than merely allowing a log level. Beyond that, I don't quite understand the reason that each checkpoint gets a "sub-logger"; what benefit does this provide?
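
To make that concrete, something along these lines is what I have in mind. The class name, constructor signature, and default logger name below are just placeholders I made up, not anything from the design doc:

import logging

DEFAULT_LOG_NAME = "InstallationLogger"   # placeholder name

class InstallEngine(object):
    def __init__(self, logger=None, log_level=logging.INFO):
        if logger is not None:
            # Application-supplied logger: use it as-is, including
            # whatever handlers and level the app already configured.
            self.logger = logger
        else:
            # No logger passed in, so fall back to the engine's own
            # default logger, configured the way the design describes.
            self.logger = logging.getLogger(DEFAULT_LOG_NAME)
            self.logger.setLevel(log_level)

The point is simply that an application with its own logging setup could pass its logger in, while everyone else gets the default behavior unchanged.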

Most, if not all, applications that use the engine will be privileged apps and as such should not be using /tmp for storage of any data. /var/run, please, perhaps falling back to /tmp if you find that you don't have write access there; and use the TMPDIR environment variable if you need to provide flexibility for developers or support personnel.
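
As a rough illustration of the ordering I'm suggesting (the function and its name are invented for this email; only the directory names and TMPDIR itself are real):

import os
import tempfile

def select_state_dir():
    """Prefer /var/run for a privileged app, fall back to /tmp if
    /var/run isn't writable, and let TMPDIR override either when a
    developer or support person needs to redirect the output."""
    candidates = []
    tmpdir = os.environ.get("TMPDIR")
    if tmpdir:
        candidates.append(tmpdir)
    candidates.extend(["/var/run", "/tmp"])
    for path in candidates:
        if os.path.isdir(path) and os.access(path, os.W_OK):
            return path
    # Last resort: whatever the system considers its default temp dir.
    return tempfile.gettempdir()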

In section 7.4, there's a reference to an "installation target", which seems to make the engine dependent on a specific checkpoint (Target Instantiation) and elements of its schema, but you don't really say so here or list it as an imported interface. This seems to be an exception (at least in part) to the principle of the engine treating all checkpoints equally. Wouldn't it make more sense to define a method on InstallEngine that the application or a checkpoint could call to set the ZFS dataset the engine should use for its storage requirements?
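
Roughly what I'm picturing, with set_install_dataset() and the example dataset name being things I just made up for illustration:

class InstallEngine(object):
    def __init__(self):
        self._dataset = None   # ZFS dataset for engine storage, if any

    def set_install_dataset(self, dataset):
        """Called by the application or by a checkpoint (e.g. Target
        Instantiation) to tell the engine which ZFS dataset to use for
        its own storage, rather than the engine reaching into one
        particular checkpoint's schema to find it."""
        self._dataset = dataset

engine = InstallEngine()
engine.set_install_dataset("rpool/install")   # hypothetical dataset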

I'm disappointed that the methodology for determining checkpoint progress weighting is still TBD. I'd thought this was one of the things we were trying to sort out in prototyping, but perhaps I assumed too much. When can we expect this to be specified?

In section 11, we seem to be eliminating the ability that DC currently has to continue in spite of errors in a particular checkpoint. Has this been discussed with the DC team?

Finally, a moderately out-there question that is admittedly not part of the existing requirements list: what if we wanted to make checkpoints executable in parallel at some point in the future? Would we look at a tree model, rather than a linear list, for checkpoint specification, or something else? Is there anything else in this design that would hinder that possibility (or that we could easily modify now to allow it later)? An existing case where I might want to do this right now is generating USB images at the same time as an ISO in DC, rather than always generating the USB image from an ISO (the current behavior). It's also the case that many of the existing ICTs could probably be run in parallel, since they generally wouldn't conflict.
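
Just to sketch the shape of the idea (the checkpoint names, the dependency map, and the little scheduler are all invented; a real implementation would obviously live inside the engine):

# A dependency map instead of a linear list; anything whose
# dependencies are satisfied could, in principle, run concurrently.
deps = {
    "pkg-image-assembly": [],
    "create-iso": ["pkg-image-assembly"],
    "create-usb": ["pkg-image-assembly"],   # no longer derived from the ISO
}

def parallel_groups(deps):
    """Yield sets of checkpoints whose dependencies are already done;
    everything within one set could be run in parallel."""
    done = set()
    remaining = dict(deps)
    while remaining:
        ready = set(name for name, needs in remaining.items()
                    if all(n in done for n in needs))
        if not ready:
            raise ValueError("cyclic checkpoint dependencies")
        yield ready
        done.update(ready)
        for name in ready:
            del remaining[name]

for group in parallel_groups(deps):
    print(group)

With something like that, create-iso and create-usb would run side by side once pkg-image-assembly finishes, instead of one waiting on the other.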

Dave