On 01/18/11 12:48 PM, William Schumann wrote:
Dave,
On 01/18/11 04:23 PM, Dave Miner wrote:
On 01/17/11 11:18 AM, William Schumann wrote:
Dave,
On 01/14/11 05:46 PM, Dave Miner wrote:
On 01/14/11 09:45 AM, William Schumann wrote:
Dave,
Responding to particular questions inline:
On 01/12/11 09:51 PM, Dave Miner wrote:
On 12/21/10 11:32 AM, William Schumann wrote:
Requesting code review for enhancements for service configuration
profiles in the Automated Installer.
http://cr.opensolaris.org/~wmsch/profile/
...
create_profile.py
214: Is continuing here the right thing to do when we're creating
multiple profiles? Does the user have a straightforward way to
clean up in the face of a partially completed "transaction"?
The user will be informed of profiles that fail to be added. There
will be no cleanup necessary. The user will be expected to take
note of profiles that were not created, correct the problem for those
in error, and resubmit them.
So what happens if I correct the ones that failed, and then re-submit
the same command line with the ones that were added? What I'm trying
to understand is whether the user ends up with a tedious experience or
one that is easily resolved.
The profiles that were already in the database would generate error
messages, and the profiles would be skipped.
So, if I had also modified one of the profiles that had been previously added,
then it wouldn't be re-added?
The forthcoming 'installadm update-profile/update-manifest' is slated to handle
modifications.
The concern I have is that, in some sense, the user may be expecting the
profiles that are expressed in a single command to be
treated as a single "transaction", and you've created a "partial commit".
Some users will expect this. The idea is that the user will see messages
reporting exactly what happened. The user will be expected
to read the messages.
I'm wondering if it wouldn't be better to just handle profiles singly (i.e.
remove support for handling multiple profiles in a single
installadm command invocation) unless we can provide a better transaction model.
I still don't see anything wrong with this model. If you think it could use
more consideration before committing, then that's
fine. There isn't a substantial benefit in supporting multiples, anyway.
Originally, I was thinking of using wildcarding in a
shell; e.g., create_profile *.xml (which was voted out some time ago).
These sorts of questions are where it's usually helpful to do a little
bit of usability testing. I'm not necessarily opposed to the multiple
manifest handling, but would like its results to be consistent,
predictable and understandable to users in the face of failures. If we
can meet that, then it's fine to keep it in. If not, then I think a
more conservative approach is advisable.
Other failures in this loop (such as at 261, 266) will cause the
transaction to be aborted, so I'm concerned that inconsistent
semantics here will be challenging to support.
Will correct these to print an error message for the input profile in
question and processing will continue to the next input profile.
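To make the intended semantics concrete, here is a minimal sketch of that per-profile error handling: each input profile is processed independently, failures are reported, and processing continues with the next one. The names (add_profiles, add_one) are illustrative, not the actual installadm code:

```python
import sys

def add_profiles(profile_paths, add_one):
    """Try to add each profile; report failures and keep going.

    add_one is whatever per-profile operation applies (e.g. validate
    and insert into the AI database); returns the list of failures.
    """
    failed = []
    for path in profile_paths:
        try:
            add_one(path)
        except Exception as err:
            # Report the failure for this profile and continue with the rest.
            print("Failed to add profile %s: %s" % (path, err),
                  file=sys.stderr)
            failed.append(path)
    return failed
```

The same pattern would apply to delete-profile: one bad name produces an error message for that name only, and the remaining names are still processed.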
delete_profile.py
93: Assuming I understand this right, providing a name that doesn't
exist will abort the transaction at that point, but other
failures later on (111, 119) will continue. As with create, I think
consistent error semantics would be a good idea when handling
multiple objects.
This is like the errors just above. Will correct in the same manner.
locate_profile.py
285: I'm doubtful this validation is the right thing to do. It seems
to pre-suppose that the server will have the same DTD the
client would have in the boot service, but that seems to impose a
requirement on the server that it be updated to the latest
possible version of the DTD, which is generally undesirable to
require from an operational point of view. Or am I
misunderstanding how this would work? This presumably could impact
the validate_profile_string() function in common_profile.py, too.
This is a service_bundle DTD validation by the CGI, taking place
after XML validation. Its original purpose was to validate a
dynamic profile completely (files can be in the user's space), since
the user would be free to break it after submitting it with
'installadm create-profile'. The same DTD validation occurs for all
profiles in create-profile. If a DTD-level validation does not
occur on the server, it could be quite some time before the user
discovers the error - some time during or after the installation
(or upon rebooting if not careful). It assumes that the AI server has
a copy of the service_bundle DTD with the correct version for
the service in question. If the user does not update the package
delivering service_bundle, it will have to be obtained by other
means. It is assumed that versions of service_bundle will be
differentiated by the version indicator at the end of the file name,
currently :1, so that it can be copied to /usr/lib. It is expected
that under normal circumstances, the AI server is upgraded to
the level of the OS being installed.
To summarize the last paragraph:
- the correct service_bundle DTD would be required for 'installadm
create-profile', too
- it is better to validate against the service_bundle DTD and require
operational adjustments in the odd case, which should mean
only copying the DTD of the needed version.
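The server-side check described above might look roughly like this. This is a sketch under stated assumptions: the versioned-DTD naming convention (service_bundle.dtd.N) is taken from the description above, dtd_path_for_version is an illustrative helper, and the validation uses lxml, which may differ from the shipped code:

```python
import os

def dtd_path_for_version(version, dtd_dir="/usr/share/lib/xml/dtd"):
    """Build the path of the versioned service_bundle DTD,
    e.g. /usr/share/lib/xml/dtd/service_bundle.dtd.1."""
    return os.path.join(dtd_dir, "service_bundle.dtd.%d" % version)

def validate_profile(profile_xml, dtd_file):
    """Validate a profile string against the given DTD.

    Returns a list of validation error strings (empty if valid).
    """
    from lxml import etree  # third-party library; assumed by this sketch
    dtd = etree.DTD(dtd_file)
    root = etree.fromstring(profile_xml)
    if dtd.validate(root):
        return []
    return [str(err) for err in dtd.error_log.filter_from_errors()]
```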
I don't believe this is workable; it requires a great deal of manual
fiddling by the user to achieve correct results (seemingly including
installing packages that are mis-aligned with the server release) in
cases where the server and clients are skewed, which are *very*
common. Unless we can come up with a better answer here, this
can't remain.
As I stated, it could be resolved simply by copying the appropriate
version of /usr/share/lib/xml/dtd/service_bundle.dtd.NNN
Assuming that doing so might be more difficult than I think it would be, I
would propose reducing a missing service_bundle DTD in create-profile to
a warning only, since we don't know that the profile in question is
invalid if we don't have the DTD. So if the profile were valid XML, it
would be accepted into the database with only a warning.
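A sketch of that proposed fallback, with illustrative names (check_profile and the validate_with_dtd callback are not the actual code): if the matching DTD is absent on the server, DTD validation is downgraded to a warning and the profile is accepted as long as it is well-formed XML.

```python
import os
import sys
import xml.etree.ElementTree as ET

def check_profile(profile_xml, dtd_file, validate_with_dtd):
    """Return True if the profile should be accepted.

    Well-formedness is always required; DTD validation is skipped
    with a warning when the versioned DTD is not present.
    """
    try:
        ET.fromstring(profile_xml)  # well-formedness check
    except ET.ParseError as err:
        print("Profile is not well-formed XML: %s" % err, file=sys.stderr)
        return False
    if not os.path.exists(dtd_file):
        print("Warning: DTD %s not found; skipping DTD validation"
              % dtd_file, file=sys.stderr)
        return True
    return validate_with_dtd(profile_xml, dtd_file)
```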
The service bundle DTD is a packaged portion of the system. We do not want to
ever be recommending that the user copy random
versions of files from the later versions of the system/packages to earlier
ones as a standard procedure, as it breaks our general
support model for the OS. So, yes, I think this will be more difficult than you
think; to do this properly involves back-porting
the DTD to earlier versions of the OS that would need to be able to consume
that DTD, and that takes real time for someone, and
doesn't work well for releases that have gone out of support but might
otherwise work OK as an AI server.
I can perhaps go along with reducing it to a warning, though I'm concerned that
this will also generate support calls and their
accompanying costs.
Yes, this will result in some calls. However, I keep in mind difficulties in
hand-coding XML, which keeps me pushing for as much
validation help as possible, since these problems are far more common for users.
I understand why you want to provide it, but we need to ensure that it
doesn't create more problems than it solves.
Dave
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss