Hi Alok,
Here are my comments:
Section 1.1:
-------------
A general comment about this section: it needs a lot more detail about
what is and is not changed by this work. There are bits and pieces of
what is different throughout the document, but I think it would be
useful to summarize all the similarities and differences in a single
location.
For example, some of the things that I personally would find helpful are:
* User experience: is there any change? Will everything that's currently
supported continue to be supported? Will the user notice any difference
in using the DC app?
* A definitions section that maps the old DC terminology to the
new terminology would also be helpful.
Section 1.2.3:
----------------
last sentence of the 2nd paragraph:
"The DOC will also be used to rollback to a checkpoint and
resume the build process from there.".
I believe that's how the engine will use the DOC to provide the stop/resume
functionality. DC will not do that, so I don't think it should
be mentioned here.
Section 2.2
----------------
Since this is the specification for what DC will EVENTUALLY be, I think
this section should have a more specific title, such as "non-goals for
the first release of DC" or something similar. The way things are
specified right now makes it look like those are things that will
never be done.
Section 2.3, The Data Object Cache bullet
-------------------------------------------
"interfaces to rollback and snapshot": Will those be directly used by DC?
If so, how, and for what purpose?
Section 2.3, The Install Engine bullet
-------------------------------------------
What about the functionality to list the checkpoints available for
resume, and to stop and resume?
Section 3.1, 2nd paragraph, where it talks about the loggers
-------------------------------------------------------------
The current DC also displays the content that would go into the simple
log by using a console logger. Is the new design going to provide that
functionality?
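To make the question concrete, here is a minimal sketch of the
console-logger behavior I am referring to: the simple-log content is
mirrored to the console with a StreamHandler. The logger name
"dc.simple" and the message text are my own illustrative assumptions,
not names from the spec.

```python
import io
import logging

# Logger that would normally feed the simple log; mirror it to the
# console as well. "dc.simple" is a hypothetical logger name.
logger = logging.getLogger("dc.simple")
logger.setLevel(logging.INFO)

console = io.StringIO()  # stand-in for the real console stream
handler = logging.StreamHandler(console)
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

logger.info("transfer_ips checkpoint completed")
output = console.getvalue()
```

The question is simply whether the new design will attach an
equivalent console handler alongside the file-based simple log.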
Section 3.1, 4th paragraph
---------------------------
After the manifest is parsed and stored in the DOC, and before or after
DC registers the list of checkpoints, DC also needs to set the "dataset"
property in the engine so the engine knows which dataset it should
snapshot to provide the pause/resume capability.
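Something along these lines is what I have in mind; the InstallEngine
class, the set_dataset() method, the dataset name, and the checkpoint
names are all hypothetical stand-ins, not the documented API:

```python
# Hypothetical sketch of the ordering described above.

class InstallEngine:
    """Minimal stand-in for the install engine."""

    def __init__(self):
        self.dataset = None
        self.checkpoints = []

    def set_dataset(self, dataset):
        # The engine snapshots this dataset to support pause/resume.
        self.dataset = dataset

    def register_checkpoint(self, name):
        self.checkpoints.append(name)


def setup_build(engine):
    # 1) parse the manifest and store it in the DOC (elided here)
    # 2) tell the engine which dataset it should snapshot
    engine.set_dataset("rpool/dc/build")
    # 3) register the checkpoints listed in the manifest
    for name in ("target_instantiation", "transfer_ips"):
        engine.register_checkpoint(name)


engine = InstallEngine()
setup_build(engine)
```

The spec should say explicitly where in this sequence the dataset
property gets set.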
Section 3.1, 6th paragraph
---------------------------
The DC app's role is to drive the image creation process. It should not
be aware of which checkpoints are being executed or what they do.
So, I don't think this paragraph belongs here, since section 3.1
is about what the DC app does, not what the checkpoints do.
Section 3.1, 8th paragraph
---------------------------
Two distinct topics are discussed in this paragraph. One is how
DC and the checkpoints retrieve data from the DOC; the other is how
DC handles interrupts. I think this paragraph should be split into two.
Section 3.1
-------------
- It is not discussed in this section, but I think a discussion of what
DC will do after it calls engine.execute() is needed. Will it block
and wait for that call to return? Will it provide a callback function
for the engine, etc.?
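The two calling styles I am asking about look roughly like this; the
Engine class and the callback parameter name are assumptions of mine,
not the engine's real signature:

```python
# Toy model of the two possible calling conventions.

class Engine:
    def execute(self, callback=None):
        # Run the registered checkpoints (elided), then report status.
        status = "success"
        if callback is not None:
            callback(status)  # non-blocking style: notify the caller
        return status         # blocking style: caller waits for return


results = []
engine = Engine()

# Style 1: DC blocks until execute() returns.
results.append(engine.execute())

# Style 2: DC supplies a callback and the engine reports completion.
engine.execute(callback=results.append)
```

The spec should state which of these (or what combination) DC will use,
since it affects how interrupts and progress reporting are handled.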
Section 3.1, the diagram:
--------------------------
As I have commented elsewhere, VMC is not an app; it is just one of
the output types of the DC app. Also, the "core application classes"
should be listed in their actual order.
general comments for DC checkpoint sections: (section 3.5)
------------------------------
1) Some of the checkpoints are "internal" to the DC implementation,
   such as manifest_parser and target_instantiation. Others are
   user-specified checkpoints via the manifest. I think we should
   make a distinction between these checkpoints and document how the
   DC app treats them. For example, will the internal checkpoints
   show up when you do "distro_const -l"? Will people be able to
   resume from them, etc.?
2) For each of the checkpoints, I think it would be useful
   to list the following information:
   * checkpoint name - the name users use for the -r and -p arguments,
   and the checkpoint name registered with the engine, if different.
   * the name of the Checkpoint class that will implement the checkpoint.
   * whether the checkpoint reads info from or writes info to the DOC,
   and if so, what.
   * if the checkpoint takes arguments, all the required and
   optional arguments it takes.
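As an illustration of the kind of entry I mean, here is a hypothetical
skeleton for one checkpoint; the class name, DOC inputs/outputs, and
arguments shown are made up for the example, not taken from the spec:

```python
# Hypothetical documentation skeleton for one checkpoint.

class TransferIPS:
    """Checkpoint: transfer_ips (name used with -r/-p, and the name
    registered with the engine, if different).

    Implementing class: TransferIPS
    DOC input:  publisher and package-list nodes (illustrative)
    DOC output: none (illustrative)
    Arguments:  pkg_img_path (required), publisher (optional)
    """

    name = "transfer_ips"

    def __init__(self, pkg_img_path, publisher=None):
        self.pkg_img_path = pkg_img_path
        self.publisher = publisher


cp = TransferIPS("/rpool/dc/pkg_image")
```

Having each checkpoint section follow a fixed template like this would
make the spec much easier to review.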
3) In many checkpoint descriptions, you mention that XXXX will have a
   mod_name of XXXXXX. For example, section 3.5.3 says that
   "The checkpoint named 'transfer_ips' will have a mod_name
   of the CUD transfer module and will....". What is this "mod_name"
   value for?
4) Quite a few checkpoints try to put all the general work into
   a "parent" checkpoint, and then have the media-specific modifications
   in a child checkpoint that calls the parent checkpoint's execute()
   method. I am concerned that a checkpoint is trying
   to act like an engine by calling another checkpoint's execute() directly.
   Just like our previous discussion of calling the transfer
   module's execute() directly, a checkpoint that calls another
   checkpoint's execute() directly might make progress reporting inaccurate.
   For example, if PrePkgImgMod.execute() reports progress to the
   logger, the logger will call the engine to normalize the progress
   information, and the engine won't know that the PrePkgImgMod checkpoint
   is running.
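A toy model of the failure mode I am worried about; all of the class
and method names here are illustrative stand-ins, not the real API:

```python
# The engine can only normalize progress for the checkpoint it knows
# is currently running; a checkpoint invoked directly by another
# checkpoint is invisible to it.

class Engine:
    def __init__(self, checkpoints):
        self.checkpoints = checkpoints
        self.current = None
        self.reports = []

    def run(self):
        for cp in self.checkpoints:
            self.current = cp
            cp.execute(self)
        self.current = None

    def report_progress(self, checkpoint, pct):
        if checkpoint is not self.current:
            # The engine does not know this checkpoint is running,
            # so its normalized overall progress will be wrong.
            self.reports.append(("unknown", pct))
        else:
            self.reports.append((checkpoint.name, pct))


class Checkpoint:
    def __init__(self, name):
        self.name = name

    def execute(self, engine):
        engine.report_progress(self, 100)


class PrePkgImgMod(Checkpoint):
    pass


class MediaSpecificMod(Checkpoint):
    def execute(self, engine):
        # Acts like an engine: calls the parent checkpoint's execute()
        # directly, so its progress report is misattributed.
        PrePkgImgMod("pre_pkg_img_mod").execute(engine)
        engine.report_progress(self, 100)


engine = Engine([MediaSpecificMod("media_specific_mod")])
engine.run()
```

In this sketch the engine records the parent checkpoint's progress as
coming from an unknown source, which is exactly the inaccuracy I am
concerned about.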
Section 3.5.2
---------------
Currently, the boot_archive_archive script also calls libti to release
the mount point for the root archive that libti created. What component
will be responsible for that in the new design?
3.5.5 boot_archive_initialize
------------------------------
The current boot_archive_initialize.py script also creates a few
directories after the cpio is done. Where will that code be moved to?
3.5.7 plat_setup
------------------
Why not merge this script into the general boot archive configuration
script? Just check for SPARC, since this always has to be done for SPARC.
3.7 User supplied checkpoints
-----------------------------
There's no discussion in the document of where all the checkpoints
we supply will be stored. By the same token, I think we should discuss
whether there's any recommended spot for people to store user-supplied
checkpoints. The engine provides the functionality to issue a warning
if it is loading checkpoints from any directory outside of sys.path.
Is the DC app going to utilize that functionality?
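For clarity, the engine behavior I am referring to is roughly the
following; the function name and the example path are my assumptions,
not the engine's real API:

```python
import sys
import warnings

# Sketch: warn when a checkpoint directory lies outside sys.path.
def load_checkpoint_dir(path):
    if path not in sys.path:
        warnings.warn("loading checkpoints from %s, which is outside "
                      "sys.path" % path)
    return path


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    load_checkpoint_dir("/usr/share/distro_const/checkpoints")

warned = len(caught) == 1
```

The document should state whether DC enables this warning, and if so,
which directories it considers "blessed" for checkpoint code.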
3.10 Logging
--------------
It is not discussed, but is it implied that the simple and detail logs
will have the same file names that are used now? Also, will new simple
and detail logs be created for each run of the DC app?
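One possible answer to the per-run question, just to make the
alternatives concrete: timestamped log file names. The naming scheme
and directory below are assumptions of mine, not what the spec says.

```python
import time

# Sketch: derive per-run simple/detail log paths from a timestamp.
def run_log_paths(log_dir, timestamp=None):
    ts = timestamp or time.strftime("%Y%m%dT%H%M%S")
    return {kind: "%s/%s-log.%s" % (log_dir, kind, ts)
            for kind in ("simple", "detail")}


logs = run_log_paths("/rpool/dc/logs", timestamp="20100708T120000")
```

Whether the design keeps the current fixed names or switches to
something per-run like this should be spelled out in section 3.10.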
Thanks,
--Karen
On 07/ 7/10 03:52 PM, Alok Aggarwal wrote:
I have posted the first version of the DC re-design
spec. The section on VMC is still TBD pending implementation details
that are being worked out.
The document can be found here (Thanks, Dave) -
http://src.opensolaris.org/source/xref/caiman/caiman-docs/distro_constructor/dc_cud_design.pdf
It can also be obtained by cloning the caiman-docs
repo -
ssh://[email protected]/hg/caiman/caiman-docs
and can be found under distro_constructor/dc_cud_design.[pdf, odt]
Please review and provide feedback by July 16. If you
plan on reviewing the document, please let me know that
offline.
Thanks,
Alok
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss