On 05/10/10 03:43 PM, Karen Tung wrote:
Hi Jean,
First, I want to make sure I understand the problem this proposal solves.
Are we trying to solve the problem of explicitly "tagging" certain data
in the data cache so that multiple invocations of the *same* checkpoint
would know which data is meant for which invocation?
Yes. It is so a checkpoint knows which data is explicitly meant for it.
Assuming the above is true, please see my other
questions/comments inline.
On 05/10/10 10:21, jean.mccormack wrote:
During the last prototype meeting, Dave and I were tasked with
figuring out the DOC interface to pass data to the checkpoint
modules. We met to discuss this last week with Evan and Sanjay
attending for portions of the meeting to help.
Input on this proposal is requested from Dermot, Darren, Karen and
Sarah but anyone else is welcome to respond.
There will be a class CheckpointNode that will be an ABC and will
inherit from DataObject. It will have a name attribute.
Each type of checkpoint (TI, TD, Transfer, etc.) will have its own
subclass of CheckpointNode with attributes specific to that type of
checkpoint.
The XML that will be generated would look like this:
<checkpoint>
<name>"name of checkpoint"</name>
<checkpoint specific attributes to be defined by each checkpoint>
</checkpoint>
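For the record, a minimal Python sketch of what this class layout might look like. This is illustrative only: DataObject here is a stand-in for the real DOC base class, and the "dst" attribute on TD_ChkptNode is an assumed example, not the final interface.

```python
# Illustrative sketch only; DataObject stands in for the real DOC base
# class, and all attribute names are assumptions from this thread.
from abc import ABC


class DataObject:
    """Stand-in for the real DOC DataObject base class."""
    pass


class CheckpointNode(DataObject, ABC):
    """Abstract base for all checkpoint nodes; carries the name."""
    def __init__(self, name):
        self.name = name

    def to_xml(self):
        # Subclasses would append their checkpoint-specific attributes.
        return "<checkpoint>\n  <name>%s</name>\n</checkpoint>" % self.name


class TD_ChkptNode(CheckpointNode):
    """Target Discovery node; 'dst' names where discovered data goes."""
    def __init__(self, name, dst=None):
        CheckpointNode.__init__(self, name)
        self.dst = dst
```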
So, the CheckpointNode object is intended to be the central location
for putting all the input parameters for the "named" checkpoint? What
about the case where two different invocations of the same checkpoint
are adding data to the cache; will each add its data to its "own"
CheckpointNode area?
They don't add data to their node. Where they add their data would be
something passed in via the checkpoint node.
For example, TD needs to add data to the DOC. So the checkpoint node for
TD would have an attribute that
would specify where to dump its discovered data.
To further explain this, I'll use an example of a client that does TD,
TI, TI, Transfer, Transfer. Note this is not meant
to be any real code; it's more pseudocode than anything.
Client()
td_node = TD_ChkptNode("TargetDiscovery")
ti_node1 = TI_ChkptNode("TI_IPS")
ti_node2 = TI_ChkptNode("TI_CPIO")
xfer_node1 = Xfer_ChkptNode("XFER_IPS")
xfer_node2 = Xfer_ChkptNode("XFER_CPIO")
...
This just creates the objects, who will add them to the cache?
That should be done by the client. I believe we would want a method on
each subclass that would actually write the node out to the cache.
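That per-subclass write method might look something like this minimal sketch. The dict-based cache and the method name write_to_doc are assumptions for illustration, not the real DOC API:

```python
# Hypothetical sketch of the per-subclass "write to cache" method;
# the cache is just a dict keyed by node name, standing in for the
# real DOC insert interface.
class CheckpointNode:
    def __init__(self, name):
        self.name = name

    def write_to_doc(self, doc):
        # Each subclass would serialize its own attributes; here we
        # simply register the node object under its name.
        doc[self.name] = self


doc = {}
td_node = CheckpointNode("TargetDiscovery")
td_node.write_to_doc(doc)
```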
TD = register_checkpoint(td.discover, td_node)
TI1 = register_checkpoint(ti.instantiate, ti_node1)
TI2 = register_checkpoint(ti.instantiate, ti_node2)
Xfer1 = register_checkpoint(xfer.transfer, xfer_node1)
Xfer2 = register_checkpoint(xfer.transfer, xfer_node2)
I assume here the engine just passes the node as an argument to
the checkpoint, and it is up to the checkpoint to do whatever is
necessary with it?
Yes.
# Because we bounce out of the engine after TD runs, we may want to
# tell TD where to put things.
td_node.dst = "Discovered Targets"  # TD_ChkptNode has property "dst",
                                    # which is just a name: the name it
                                    # will give the root node of the tree
                                    # of Physical and Logical nodes it
                                    # discovers.
td_node.start = "..."  # maybe for DC we don't want it to do physical
                       # target discovery...
# Now execute engine just running TD:
execute(TD)
# And now we need to add information to the other nodes:
ti_node1.create = ...  # root node of some tree of nodes that the App wants
ti_node2.create = ...  # root node of some tree of nodes that the App wants
xfer_node1.src = "http://some/ips/repo:port"
xfer_node1.dst = "rpool/jean/pkg_imag"  # image area for IPS to install to
xfer_node2.src = "/"
xfer_node2.dst = "rpool/jean/whatever"  # area to cpio to
# And execute the remaining checkpoints:
execute(TI1, TI2, Xfer1, Xfer2)
The above works well since the application created all the
CheckpointNodes. If one checkpoint wants to give some input to another
checkpoint, how will it look up the other checkpoint's CheckpointNode?
If checkpoint 1 searches the cache for the other checkpoint's
CheckpointNode by name, that means checkpoint 1 would have to know
about the other checkpoints, which the architecture does not allow. My
understanding of the architecture is that each checkpoint should
operate using data from the cache independently, without knowing about
other checkpoints.
My understanding is also that the checkpoints shouldn't know about each
other, so you wouldn't want one checkpoint writing to another's node.
If data that checkpoint A provides is needed by checkpoint B, the app
would take the data that checkpoint A wrote to the DOC and put it into
the node for checkpoint B.
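As a toy illustration of that hand-off (all names and data here are invented for the example): checkpoint A's output lands in the DOC under the name the app chose, and the app itself copies it into checkpoint B's node; the checkpoints never look at each other's nodes.

```python
# Toy illustration of the app-mediated hand-off; the dict-based DOC,
# node class, and data values are all stand-ins.
doc = {}

# Checkpoint A (TD) writes its discovered tree into the DOC under the
# name the app configured (td_node.dst in the earlier pseudocode):
doc["Discovered Targets"] = ["disk0", "disk1"]  # pretend discovery result


class TI_ChkptNode:
    """Illustrative stand-in for the TI checkpoint node."""
    def __init__(self, name):
        self.name = name
        self.create = None


ti_node1 = TI_ChkptNode("TI_IPS")

# The app, which knows about both checkpoints, does the copy:
ti_node1.create = doc["Discovered Targets"]
```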
I have the following suggestion that can solve the problem. It is also
consistent with the architecture. Taking your previous checkpoints as
an example:
The application registers the checkpoints.
engine.register_checkpoint("discovery", td.discover)
engine.register_checkpoint("ti-1", ti.instantiate)
engine.register_checkpoint("ti-2", ti.instantiate)
engine.register_checkpoint("xfer-1", xfer.transfer)
engine.register_checkpoint("xfer-2", xfer.transfer)
The checkpoints ti.instantiate and xfer.transfer will define public
interfaces describing what input they expect and where in the cache
they expect to find that input.
For example, the ti.instantiate checkpoint implementation defines that
it will look for the value(s) referenced by the predefined key
"DIRS-TO-CREATE". The xfer.transfer checkpoint implementation defines
that it will look for the value(s) under the predefined keys SRC and
DST.
Since the application knows all about the different checkpoints, and it
is in "central-command", after it registers all the checkpoints, it can
first set the DIRS-TO-CREATE value in the cache to a particular value.
Then it executes the checkpoints discovery and ti-1.
Then it resets DIRS-TO-CREATE to a different value that will be
consumed by ti-2, and executes ti-2. Now the SRC and DST for xfer-1 can
be set, if they haven't been set already; then execute xfer-1, and
pause. Reset the values for SRC and DST and execute xfer-2.
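A hedged sketch of that flow, where the cache, the checkpoint bodies, and the values are stand-ins and only the key names (DIRS-TO-CREATE, SRC, DST) come from the suggestion above:

```python
# Toy rendering of the predefined-key scheme: checkpoints read fixed
# keys from a shared cache, and the app resets the keys between runs.
cache = {}
log = []


def ti_instantiate():
    # The checkpoint only knows its predefined key, not other checkpoints.
    log.append(("ti", cache["DIRS-TO-CREATE"]))


def xfer_transfer():
    log.append(("xfer", cache["SRC"], cache["DST"]))


cache["DIRS-TO-CREATE"] = "/a"
ti_instantiate()                        # consumed by ti-1

cache["DIRS-TO-CREATE"] = "/b"          # app resets the key
ti_instantiate()                        # consumed by ti-2

cache["SRC"], cache["DST"] = "repo", "image"
xfer_transfer()                         # xfer-1
```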
Sort of what we were talking about, except that you set these values in
each checkpoint's node. There isn't a central spot.
So to change what you are saying...
Since the application knows all about the different checkpoints, and it
is in "central-command", after it registers all the checkpoints, it can
first set the ti1/DIRS-TO-CREATE value in the cache to a particular
value. It also sets ti2/DIRS-TO-CREATE to a different value.
Then it executes the checkpoints discovery, ti-1, and ti-2.
This means you don't have to return control to the app between
checkpoints. Just let it rip!
Same idea, just less bouncing in and out of execute.
Jean
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss