On Sun, Sep 16, 2007 at 07:34:30PM +0100, Peter Tribble wrote:

> It sounds as if you're planning on making radical alterations to the
> whole way in which software is managed.
> 
> It sounds rather like Conary, actually.

Indeed it does.  :)

> Are you managing dependencies at the level of individual files?
> This is hard enough for software you know about; it gets even harder
> for software you don't know about yet.
> 
> If we need partial packages, it makes me wonder what problem
> we're trying to solve.

Only packages can depend on things, and only packages can be depended on.
At least, that's our working hypothesis.  As for partial packages, the
point is to consume bandwidth only for the bits you actually need.  There's
little point in slurping down 1.6GB of Solaris every two weeks if only
300MB has actually changed.
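To make the idea concrete, here's a rough sketch of the delta computation (my own illustration, not our actual code; the manifests and hashes are hypothetical):

```python
def files_to_fetch(installed, target):
    """Given two manifests mapping path -> content hash, return the
    paths whose bits actually need to come over the wire -- files
    that are new or whose contents changed between versions."""
    return [path for path, digest in target.items()
            if installed.get(path) != digest]

# Hypothetical manifests for the installed and target package versions.
installed = {"usr/bin/ls": "a1", "usr/lib/libc.so": "b2"}
target = {"usr/bin/ls": "a1", "usr/lib/libc.so": "b3", "usr/bin/pkg": "c4"}

print(files_to_fetch(installed, target))
# -> ['usr/lib/libc.so', 'usr/bin/pkg']
```

Everything else -- the 1.3GB that didn't change -- stays on the wire's far side.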

> > Our repository stores files individually, which eliminates that issue
> > -- you only ever pull exactly what you need to transition your system
> > to the package versions you've requested.
> 
> How does this work without a repository?

Without a repository?  For the moment, we assume one, even if it's local.
We'll probably have to move beyond that, but I don't expect it to be a
common occurrence.

> > (That said, downloading multiple files has more transaction overhead;
> > we want to download multiple files from a package as a bundle, though
> > we were investigating MIME, I believe, for that operation.  Krister
> > will have to fill you in on that.)
> 
> Nothing I suggested precludes what you're describing here. In fact, this
> sounds suspiciously like the way patches deliver partial packages in
> signed jar files.

True.  We can cons up an archive for an arbitrary set of files, not just
entire packages; that was the bit I was missing, for some reason.
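Something along these lines, using zip as the container (just a sketch of the idea, not a commitment to any particular format):

```python
import io
import zipfile

def bundle(file_map):
    """Pack an arbitrary set of files (path -> bytes) into a single
    zip archive, so one request can carry exactly the changed bits."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, data in file_map.items():
            zf.writestr(path, data)
    return buf.getvalue()

# Bundle only the two files the client actually needs.
archive = bundle({"usr/lib/libc.so": b"...new library bits...",
                  "usr/bin/pkg": b"...new tool bits..."})
```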

> > The downside with zip in particular is that the table of contents is at
> > the end of the file, which means that you can't do anything with it
> > until you've finished downloading it.
> 
> Yes, but you wouldn't want to do a software transaction until you actually
> had all the data to hand. (If you're pulling this from a repository, metadata
> operations could be done independently.)

Right.  One thing I hadn't realized about zip archives is that each file
is compressed individually, making it possible to unpack a member knowing
only the boundaries between the files.  If the server is capable of
serving up that boundary information while the client is downloading the
rest of the data, then we can achieve a kind of streaming.
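To illustrate (again, my own sketch, not client code): since each member is an independent deflate stream behind a fixed-size local header, a client that knows a member's offset can decompress it without ever seeing the central directory at the end of the archive.

```python
import io
import struct
import zipfile
import zlib

# Build a sample archive with two independently-deflated members.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("a.txt", b"alpha " * 100)
    zf.writestr("b.txt", b"beta " * 100)
raw = buf.getvalue()

def extract_member(raw, offset):
    """Decompress one member knowing only where its local file header
    starts -- no need for the table of contents at the end."""
    # Fixed 30-byte local file header (PK\x03\x04).
    (sig, _ver, _flags, method, _time, _date, _crc, csize, _usize,
     nlen, elen) = struct.unpack("<IHHHHHIIIHH", raw[offset:offset + 30])
    assert sig == 0x04034b50 and method == zipfile.ZIP_DEFLATED
    start = offset + 30 + nlen + elen
    return zlib.decompress(raw[start:start + csize], -15)  # raw deflate

# The boundary information the server would supply up front.
with zipfile.ZipFile(io.BytesIO(raw)) as zf:
    offsets = {i.filename: i.header_offset for i in zf.infolist()}

print(extract_member(raw, offsets["b.txt"])[:9])  # b'beta beta'
```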

The actual metadata -- describing the bits as they're laid down on disk,
rather than how they're bundled in the archive -- is retrieved before the
bits, so the client has that information already.

Danek
