Stephen Hahn writes:
> This project seeks to develop a software packaging system that is able
> to update, in a safe and predictable fashion, a variety of software
> components across a range of system install contexts. The image
> packaging system is intended to solve a collection of software
> update-related problems, as outlined in [1 - 5].
I very much like the idea of getting rid of scripting within the
packaging mechanism; it's the source of a lot of trouble.
However, I have questions about this proposal:
- To what extent would we be forcing others to adopt this now-
[Open]Solaris-proprietary packaging system? If the project
guarantees a shim layer of unbroken SysV packaging support, then
that's probably a good thing. Otherwise, I think the proposal
  should address the costs forced onto others: those who end up
  having to deliver software twice (once for older environments,
  once for newer), and who will most likely not bother with at
  least one of the two.
- What of existing solutions? I see a lot of discussion in the blog
postings about the problems we've had with our constrained use of
System V packaging, but is the team completely dedicated to
creating yet another packaging system? What about other systems
that already exist? Were they evaluated or is that work still to
be done? I'd expect that we're in better shape in terms of
layered applications (package managers) if we can adopt an
  already-implemented solution, even an "imperfect" one, rather
  than designing a new, albeit perfect, one.
- What becomes of the versioning problem? We have some really hard
problems here that are neatly illustrated by our current patching
problems:
1. We don't always know what bits depend on what other bits.
If we did, then patch contents would always be "correct"
and wouldn't require manual error-prone tweaking. They're
not always correct, and patch generation is a human-
mediated process.
This means that "version 2 of package A depends on version
1 of package B" is an incomplete mechanism. We're going to
need tools and processes that allow us to identify those
dependencies in a fine-grained way. This is a particularly
difficult problem when the dependencies are not of the
trivial symbol-table type, but are rather of the behavior
  type. Does this project include developing the practices that
  would make individual package versioning safe enough to use?
2. We currently rely on the fact that consolidations are all
built all at once in a coordinated action, and then
delivered as a matched set. This lets us ignore some
  complicated problems that occur when interfaces used across
  projects are allowed to change in arbitrary and not-necessarily-
  compatible ways.
One way around this would be to say that each consolidation
delivers just one new-fangled package: ON is a single lump of
stuff, at least until we take pains (over a long period of time,
probably) to refactor ON into smaller parts. Is that part of the
future plan?
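To make the fine-grained dependency point concrete, here is a toy sketch (all package names, versions, and symbols are made up for illustration) of the "easy" symbol-table case: given which package defines which symbols, package-level dependencies can be computed mechanically rather than declared by hand. Behavioral dependencies have no such mechanical source, which is exactly the hard part.

```python
# Toy sketch: deriving package dependencies from symbol tables.
# All package names and symbol data below are hypothetical.

packages = {
    # package: (symbols it defines, symbols it references)
    "libc@2":      ({"malloc", "free", "printf"}, set()),
    "libnsl@1":    ({"yp_match"}, {"malloc", "free"}),
    "in.routed@5": (set(), {"printf", "yp_match"}),
}

def infer_deps(packages):
    """Map each defined symbol to its defining package, then
    resolve every package's referenced symbols to package deps."""
    defined_in = {}
    for pkg, (defines, _) in packages.items():
        for sym in defines:
            defined_in[sym] = pkg
    deps = {}
    for pkg, (_, refs) in packages.items():
        deps[pkg] = {defined_in[s] for s in refs
                     if s in defined_in and defined_in[s] != pkg}
    return deps

for pkg, d in sorted(infer_deps(packages).items()):
    print(pkg, "->", sorted(d))
```

The point of the sketch is what it can't do: nothing in a symbol table says that in.routed depends on a particular *behavior* of yp_match, so a tool like this catches only the trivial class of dependency.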
- How do we avoid getting caught up in the Debian packaging
dependency problem from hell? I used to run Debian at home, and I
ran screaming from it when I ran into numerous "impossible" cases
-- essentially points where I wanted both A and B, but each
required an incompatible version of some common package (C), and I
had to "pick" one to run and forget the other. I ended up with a
confusing mass of locked version numbers and a really good reason
to install Solaris instead.
  Perhaps this was a problem of my own making (I shouldn't have
  used the relatively fresh stuff in "Unstable" when the "Stable"
  branch had comfortable mold on it), but it points back to having fairly tight
controls on the build environment used for each component (in
order to get common dependencies to work out), and thus a clear
set of new policies to go along with a new packaging project.
--
James Carlson, Solaris Networking <james.d.carlson at sun.com>
Sun Microsystems / 1 Network Drive 71.232W Vox +1 781 442 2084
MS UBUR02-212 / Burlington MA 01803-2757 42.496N Fax +1 781 442 1677