Dave,
On 07/08/10 14:01, Dave Miner wrote:
> Ethan,
> A little later than requested, but comments on this update.
> An overall nit is that this could use some spell-checking. More typos
> than I'm used to :-)
Interesting. I just reran spell-check on the document and it reports no
errors. I intentionally misspelled a word and the check still reports no
errors. I'll figure out what's broken, but I suppose I really should have
known I wasn't a perfect speller :-)
> 5.2.6: I'm concerned about the "supported scripting types" statement.
> This would seem to imply that we'll keep perl 5 (some variant thereof)
> as part of the installation media. Imminent package refactoring will
> remove its historical entanglement with the system core, I believe, so
> its presence isn't a given, and I'm somewhat reluctant to commit here.
> Python's mildly problematic because of our dependency on it; an upgrade
> to some later version may put 2.6 on the undesirable list, but I guess
> that's something we'd have to address at a larger scale. Perhaps a
> more limited commitment initially, one that excludes perl and is explicit
> that ksh93 is the only supported shell, would be advisable?
I agree; an initial, limited commitment would probably be the more
prudent thing to do. I am ok with excluding perl and limiting the shell
to ksh93. (I would actually be fine with excluding python initially as
well, for that matter.)
> 5.2.6.2: It would be nice to be able to do some testing without having
> to boot an AI client. Would it be possible (and sufficient) to
> package up the aimanifest command, the aiuser account, and some sort
> of aitest command that could run the script in the same sort of
> environment (including the environment variables from 5.2.7) as
> auto-install would? Seems like this would help you with development,
> actually :-)
Having such a command could help with testing to some degree, but it
wouldn't cover all aspects.
1. A set of env variables can be mocked up by such an aitest command
to test that the script executes properly (but the values of the environment
variables obviously won't be what they are when the script runs on real
clients). A rough sketch of this kind of mock-up follows below.
2. For cases where the script calls additional commands, it would be
hard to test outside of the AI image it's intended to run in. AI images
may differ from release to release with respect to what commands are
available in them, and the aiuser account as well, so an 'aitest' command
run on the server, or wherever, won't quite capture this aspect of
validation.
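For point 1, here's a minimal sketch of what such an aitest-style wrapper
might do. The aitest command doesn't exist today and every value below is
made up; SI_MANIFEST_SCRIPT is the only variable named in 5.2.7 in this
thread, and the other variable names are just assumptions about what that
section would provide:

    #!/bin/ksh93
    # Hypothetical aitest wrapper: fake the 5.2.7 environment variables
    # with placeholder values, then run the derivation script under the
    # aiuser account, roughly the way auto-install would.
    export SI_MANIFEST_SCRIPT="http://aiserver.example.com/scripts/derive.ksh"
    export SI_HOSTADDRESS="10.0.0.10"        # placeholder; assumed variable
    export SI_NETWORK="10.0.0.0"             # placeholder; assumed variable
    export AIM_MANIFEST="/tmp/derived.xml"   # assumed output location for aimanifest

    su aiuser -c /var/tmp/derivation-script.ksh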
> 5.2.7: In the SI_MANIFEST_SCRIPT description, is the location the URL?
Yes. If you think URL is clearer, I can change the description.
(I didn't want to use URL at first since, in the prompt case, a file path
may be supported; but thinking about it now, file:// can be used in the
value for that case.)
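For example (the hostname and paths here are made up), the value could
then take either form:

    SI_MANIFEST_SCRIPT=http://aiserver.example.com/scripts/derive.ksh
    SI_MANIFEST_SCRIPT=file:///export/scripts/derive.ksh   # prompt case: local path expressed as a URL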
> 5.3: Just idly thinking, but would it be helpful to the user to sort
> out a way to return both a script and a manifest to the client based
> on criteria and thus provide both automatically, so you could do
> partial derivation more easily without the "aimanifest load" having to
> figure out a URL? Not sure if this creates more problems than it
> solves, but thought I'd bring it up.
The manifest file that gets loaded doesn't necessarily have to come from
the same place the script comes from, so I thought it would be more flexible
just to leave that to the script. Also, if the starting manifest received
on the client could vary from what the script expects, that may cause
unexpected results.
As we discussed offline, making additional bundles of data (or files)
available to this derivation process, or to the AI client in general, is
something we could look to move toward in the future, and that usage
would not be precluded if the script does the work of figuring out which
file to load.
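To make that concrete, here's a rough sketch of the kind of thing a
derivation script could do, choosing and fetching its own manifest and then
handing it to aimanifest. The hostname, paths, the SI_MEMSIZE variable, and
the use of wget are illustrative assumptions, and the aimanifest load
invocation follows my reading of the draft rather than a final interface:

    #!/bin/ksh93
    # Pick a starting manifest based on a client attribute, fetch it,
    # then load it as the base manifest to derive from.
    if [[ ${SI_MEMSIZE:-0} -ge 4096 ]]; then
        url=http://aiserver.example.com/manifests/bigmem.xml
    else
        url=http://aiserver.example.com/manifests/default.xml
    fi
    wget -O /tmp/base.xml "$url" || exit 1
    /usr/bin/aimanifest load /tmp/base.xml || exit 1
    # ...further aimanifest calls would customize the loaded manifest here...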
thanks,
-ethan
> Dave