With the diagram below, I've tried to capture the various possible on-disk formats and where they fit into the process of getting bits onto a target system when an installer is involved.



I had thought that the original discussion, including the zap proposal, was about possible formats for the box labeled "On-disk Packages".  However, the more recent messages seem to be talking about the format for either of the download files (1) or (2), or for the repository package storage format.

Currently, we have the SVR4 format that can be used for the "On-disk Packages" box, in that the pkgsend command accepts an SVR4 package as an input format (at least for now).  However, the path down the left-hand side of this diagram through the "Installer Download File (2)" box cannot be implemented at this time, because the "pkg" command for installing a package cannot deal with anything on disk; it has to use a repository.

Note that in this diagram, the two Installer boxes represent very different installers. The one on the top does an unzip of the image and executes configuration actions (it is unclear at this point how that would be done), registration, etc. The one on the left has to do an image-create and the equivalent of a "pkg install" on the uninstalled packages, along with the configuration actions, registration, etc.

A variation of the lower-left path that could be considered is to give the "Installer Download File (2)" and its installer the capability of running a local pkg.depotd. With this, the "Repository Package Storage" would actually show up inside the "Installer Download File (2)" and then the "Run Installer" box would include execution of pkg.depotd and "pkg install" commands.

Does this diagram accurately reflect the role of on-disk formats for packages?  And if so, what is the intent for making IPS able to handle the technical challenges described above?

Thanks.
Tom



Shawn Walker wrote:
On Feb 4, 2008 2:27 PM, Peter Tribble <[EMAIL PROTECTED]> wrote:

>> That's a nifty name, and
>> your reasoning behind it seems quite sound. However, I think I still
>> prefer repositories to host individual files as they do now instead of
>> big archives.

> Well, one snag with individual files is that you end up with very
> high latencies per request.

That all depends on how you handle the requests for each file.

I agree that if you're requesting each file individually you would have
high latency, especially on a connection across an ocean (e.g. AU to US).

However, I think that can be mitigated to an acceptable degree by
efficient communication between the server and client.
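To put a rough number on the latency point: the cost of per-file requests is dominated by round trips, so batching requests pays the round-trip time once instead of once per file. A back-of-the-envelope sketch (the 300 ms trans-Pacific RTT and the file count are assumed figures, not measurements):

```python
# Assumed numbers for illustration only.
rtt_ms = 300          # assumed AU <-> US round-trip time, in milliseconds
files = 200           # assumed number of files in a typical package

# One request per file pays the round trip 'files' times.
sequential_ms = files * rtt_ms

# One batched request for everything pays it once.
batched_ms = 1 * rtt_ms

print(sequential_ms, batched_ms)  # 60000 vs 300
```

This ignores transfer time and server-side cost, but it shows why "efficient communication between the server and client" (pipelining or batching) can mitigate per-request latency even when files are stored individually.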

  
> If people will indulge me for a while:
>
> One of the reasons for choosing zip is that it's universally available,
> and has bindings to many languages. Yes, there are other
> compression schemes, but are they likely to be installed out of the box
> on most machines you'll ever use?

This is the best reason I've seen so far to choose zip over lzma, etc.
The ubiquity of the zip format and its tools almost ensures that
packagers and users will be able to bend it to their own nefarious
purposes with ease.

However, if an alternative compression algorithm delivers better
performance and better compression (noticeably so), I would be
inclined to favour it, or at least make it the default.
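The ubiquity argument is easy to demonstrate: Python's standard library alone can build and inspect a zip archive, with no third-party tools. A minimal sketch (the member names are made up to suggest a package-like layout, not the real zap layout):

```python
import io
import zipfile

# Build a small archive entirely in memory with only the stdlib.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("manifest", "pkg: example/[email protected]\n")        # hypothetical
    zf.writestr("files/usr/bin/demo", "#!/bin/sh\necho hello\n")  # hypothetical

# Any zip-capable tool (or any language with zip bindings) can now
# list and extract it; here we just read the member names back.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()

print(names)
```

The same archive could be opened with `unzip`, jar, Explorer, or zip bindings in Java, Perl, etc., which is the portability point being made above.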

  
> Another reason for using something that exists already is that I
> absolutely don't want to be forced to use specific tools to manipulate
> a package. I see it as a great advantage that I could open up a
> zap file on pretty much any box in existence and look inside.
> Another part of this is that simply unzipping the archive gives
> me the files that are runnable without installation (it may be possible
> to avoid having to supply software in both packaged and unpackaged
> formats [eg. java where you can get either SVR4 packages or self
> extracting executables] by structuring the files inside the archive
> appropriately).
>
> (The wish to have fully functional files upon simple unpacking
> argues against a tar file containing compressed files; it also
> means you can't use the mechanism on the installation
> media, where the files are wrapped up again inside a cpio
> archive.)

I must admit, being able to provide a file that allows a user to
either simply unzip it somewhere and run the application (possibly) or
use it as a package sounds quite beneficial to me.

As a developer, I certainly would appreciate the lessening of software
distribution burdens.
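The "unzip it and run it" property comes down to storing files inside the archive in their final, runnable layout. A sketch of what that looks like, assuming a hypothetical layout (the real zap structure is not specified here):

```python
import io
import os
import tempfile
import zipfile

# Store files in their final runnable layout (paths are illustrative).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("demo/bin/run", "#!/bin/sh\nexec echo running\n")
    zf.writestr("demo/lib/README", "support files live beside the script\n")

# A plain extract yields a working tree -- no install step required.
dest = tempfile.mkdtemp()
with zipfile.ZipFile(buf) as zf:
    zf.extractall(dest)

script = os.path.join(dest, "demo", "bin", "run")
```

After `extractall`, `demo/bin/run` is sitting on disk ready to use, which is exactly the property that would let one file serve as both a package and a self-contained distribution.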

  
> If you are after optimized operations (such as only getting that
> subset of the files in a package that you want) then I suspect
> you would actually want to talk to an IPS server and engage in
> a conversation with it. But I would have the server having fully
> populated zap files on its disk and spitting back an optimized
> zap file to the client :-)

I'm not so sure about this personally.
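For concreteness, the "optimized zap" idea quoted above could be sketched with Python's standard zipfile module: the server keeps a fully populated archive and builds a smaller one containing only the members the client asked for. The function name and member names are hypothetical:

```python
import io
import zipfile

def subset_zap(full_zip_bytes, wanted):
    """Build a new zip holding only the requested member names."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(full_zip_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for name in wanted:
            dst.writestr(name, src.read(name))
    return out.getvalue()

# A fully populated archive on the server (contents are made up).
full = io.BytesIO()
with zipfile.ZipFile(full, "w") as zf:
    zf.writestr("manifest", "pkg: example/[email protected]\n")
    zf.writestr("files/a", "contents of a")
    zf.writestr("files/b", "contents of b")

# The client asks for a subset; the server spits back a smaller zap.
small = subset_zap(full.getvalue(), ["manifest", "files/a"])
with zipfile.ZipFile(io.BytesIO(small)) as zf:
    got = zf.namelist()
```

Whether this recompression cost is acceptable per request is exactly the open question being debated below.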

I think I'd rather serve my repositories from a compressed ZFS
filesystem as individual files and have the option of serving
zap-compressed files back to the client on the fly.

Having the files that comprise a package separated out individually
for repository purposes opens up some interesting possibilities for
load-balancing content serving.

While I know it's possible to do that with byte ranges of files, I
can't help but think it is far easier to do it using individual files.
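The byte-range point can be illustrated with the stdlib as well: a zip's central directory records each member's offset, so pulling one member out of a big archive is roughly what a range request against it would do, whereas individual files need only plain whole-file requests. A sketch (member names are made up):

```python
import io
import zipfile

# One small member alongside a large one, in the same archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("files/wanted", "just this member")
    zf.writestr("files/bulk", "x" * 100000)

# getinfo() exposes the member's offset in the archive -- the place a
# byte-range request would start -- and read() touches only that member.
with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("files/wanted")
    data = zf.read("files/wanted")

offset = info.header_offset
```

So byte ranges can work, but they require the server (or client) to consult the archive's directory first, which is the extra complexity compared with just fetching individual files.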

  


_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss
