On 06/11/10 04:40 PM, Karen Tung wrote:
Hi Darren,
More responses inline. I also removed all the sections
where I have no further comment.
On 06/10/10 12:26 AM, Darren Kenny wrote:
Hi Karen,
More responses, with some parts cut out where I've no more comment...
On 06/10/10 12:17 AM, Karen Tung wrote:
On 06/09/10 02:24, Darren Kenny wrote:
Section 4, page 5:
I'm wondering if the programming language to use is really a choice for
this project since it's been decided at the higher level of the CUD
design
that Python is the language of choice for new implementations.
I didn't see any specific mention of Python in the architecture design doc.
It only talks about using an Object Oriented language. So, I thought I would
be complete and talk about other potential choices here.
That's true, but I feel that this is something that should be fixed in the
architecture document rather than here since it's certainly the assumption
that the majority of people are working on.
I agree with you that it should be specified in the architecture document.
Let me check with Sarah again on this. When we were reviewing the
architecture document, I actually mentioned this. She said at
the time that she didn't want to put any implementation language
info there.
If the application wants to do something else while execute_checkpoint() is
running, it should run execute_checkpoint() in a thread.
Ok, but I wonder if this is an extra level of threading that really shouldn't
be needed? For example, right now, the GUI doesn't need to run liborchestrator
in a separate thread when doing TD, it's something that's hidden from the GUI,
a change like this means that the GUI needs to add another level of threading.
(It's more of an example really, since the GUI is probably going to need
rework anyway to work with the new CUD.)
It just seems that the Engine would be the logical place to manage the
threading of things, since it's likely to be doing threading anyway...
What you said makes sense. Let me think about that as I add
the threading section.
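For reference, a minimal sketch of the application-side approach being
discussed, assuming a hypothetical engine object (its name and the
execute_checkpoint() argument here are placeholders, not the actual
Engine API):

    import threading

    # Run the long-running engine call in a worker thread so the
    # application (e.g. a GUI) stays responsive while it executes.
    def start_checkpoint(engine, checkpoint_name):
        worker = threading.Thread(target=engine.execute_checkpoint,
                                  args=(checkpoint_name,))
        worker.start()
        return worker  # caller can join() or poll is_alive() later

Whether this thread lives in the application or inside the Engine itself
is exactly the design question above.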
Section 6.3.1, page 8:
Maybe the use of the term "dangerous" is too severe? It might be better
to be specific about the risk, and how it might be able to be prevented.
I can talk about risk, but is there really a way to prevent it?
We do allow checkpoints to exist anywhere. Do we want to restrict them
to loading from a certain location?
I think it's more about how people implementing checkpoints and using the
Engine could mitigate the risk, if at all possible - e.g. we could treat
the use of a directory path as high-risk, while a module name - to be
loaded from sys.path - would be considered safer. Maybe produce a warning
if it's seen to be a full path, or something...
That sounds like a good suggestion. I will add that as a recommendation
in the description of the register_checkpoint() function. Since we only
throw a warning if the checkpoint is outside our "recommended area",
people will still be able to continue, so we are not limiting their
ability to have their checkpoint implementation anywhere.
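To make the recommendation concrete, here is a rough sketch of what such
a check inside register_checkpoint() could look like (the signature and
warning text are assumptions, not the designed interface):

    import os
    import warnings

    def register_checkpoint(name, module_path):
        # Treat an absolute filesystem path as higher risk than a
        # module name that will be resolved via sys.path.
        if os.path.isabs(module_path):
            warnings.warn("checkpoint %s loaded from full path %s; "
                          "a module name on sys.path is recommended"
                          % (name, module_path))
        # ... continue with normal registration; the warning does
        # not stop the checkpoint from being registered.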
Section 6.7.1, page 12:
How are you going to guarantee that removal of snapshots from /tmp will
not remove snapshots from other processes - e.g. are you using a dir for
each run (/tmp/install_snapshots.PID/)? It just might be worth spelling out.
Section 7.4, the second bullet point did talk about using PID for DOC
snapshots that are stored in /tmp. Do you think that's sufficient?
I think that's fine, but I would prefer to see things put in a sub-directory -
especially in the case of DC - to avoid lots of messy files in /tmp.
(Dave has mentioned the possible mis-use of /tmp).
Based on Dave's recommendation, I will use /var/tmp.
What you said about putting checkpoint-related data for a given
run in a directory makes sense. I don't even need to use the
PID to guarantee a unique name. I can use Python's tempfile
module to generate a temp directory name that's unique. So,
each process will have its own directory in /var/tmp like this:
/var/tmp/<tmp_dir_name>/
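As a minimal sketch of that approach (the base directory and prefix here
are illustrative only; the same call works with a different base dir):

    import tempfile

    # mkdtemp() creates a uniquely named, private directory, so no
    # PID is needed to avoid collisions between concurrent runs.
    snapshot_dir = tempfile.mkdtemp(prefix="install_snapshots.",
                                    dir="/var/tmp")
    # e.g. /var/tmp/install_snapshots.Ab3dEf/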
I did *not* recommend /var/tmp, but /var/run. They are distinctly
different.
Dave