Paul Robinson wrote:
... if I want to install package A which requires B and C, B
requires D and E, and D requires F, your installer would go Start A -> I
need B -> Start B -> I need D -> Start D -> I need F -> Install F -> Install
D -> I need E -> Install E -> Install B -> Install C
....
In the chain above, if F isn't available for some reason, you have A, B and D all half-installed on your machine, waiting for a 32Kb library on an un-mirrored FTP server in Bulgaria... hmmm...

Yes, meanwhile, the server providing B times out your connection, the whole install gets rolled back, and you have to start again from scratch. Not pretty.

One way to address this would be to separate "install-time"
requirements from "run-time" requirements.

If you need it at run time, surely it makes sense to grab it at build time? I'm not sure I can see the benefit of separating it out.

The benefit being that only "install time" requirements actually need to be installed recursively. Run-time requirements can be queued up and installed afterwards. This reduces the likelihood that network installs will get stalled for long periods of time. Of course, this doesn't really solve the underlying problem.
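
Roughly, in shell-ish pseudocode (install_one and the *_deps helpers
here are made up for illustration; they aren't anything pkg_add has
today, and I'm ignoring cycle detection):

    # Phase 1: recurse only on install-time dependencies;
    # run-time dependencies just get queued for later.
    install_recursive() {
        for dep in $(install_time_deps "$1"); do    # hypothetical helper
            install_recursive "$dep"
        done
        run_time_deps "$1" >> /tmp/runtime.queue    # hypothetical helper
        install_one "$1"                            # hypothetical helper
    }

    install_recursive A

    # Phase 2: a stall here leaves A usable-but-incomplete
    # instead of half-installed.
    sort -u /tmp/runtime.queue | while read dep; do
        install_recursive "$dep"
    done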

As I said, if you can get requirements information from somewhere else
(the INDEX files available on CDs and FTP sites), then you can build a full,
properly sorted list of packages and install them in order without any
stalls.  That's the approach I'm planning to pursue next.
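
In fact, the INDEX file plus stock tools gets you most of the way
there already. Something like the following (assuming the usual
pipe-delimited INDEX format, with the run-depends list in the ninth
field) spits out a dependency-ordered install list:

    # Emit "dependency package" pairs; tsort(1) then orders them
    # so every package comes after its dependencies.  A node
    # paired with itself just declares the package's existence.
    awk -F'|' '{ print $1, $1;
                 n = split($9, deps, " ");
                 for (i = 1; i <= n; i++)
                     print deps[i], $1 }' INDEX | tsort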

I'm thinking about backward compatibility on the command line for -r that grabs the "master redirect" file in the format above...

Hmmm.. There are two problems here: The first is maintenance. Suppose a couple of friends of mine set up a site with packages that they're building. I want to be able to add their site to my personal list of package sources without having to go bug the "Official FreeBSD FTP Package Uber-Person" to get their packages added to the master file. This means that my pkg_add needs to be able to search multiple sites no matter what.

Don't rely on a single definitive source of package information.
Having some sort of redirect support in the INDEX file is fine
and easy to add, but you still need the client to be able to search
multiple sources.  This is one thing the Debian folks got right.
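
In other words, the client-side loop ought to look something like
this (the ~/.pkg_sources file and the .tbz suffix are placeholders
for whatever form the source list finally takes):

    # Try each configured source in turn; no single site
    # is authoritative.
    pkg=$1
    while read src; do
        if fetch -o "/tmp/${pkg}.tbz" "${src}/${pkg}.tbz"; then
            pkg_add "/tmp/${pkg}.tbz" && break
        fi
    done < "$HOME/.pkg_sources"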

The other problem is that the current -r is fundamentally limited
(to a single network source) and draws a rather pointless distinction
(you get to search either disk sources with PKG_PATH _OR_ you get
to search a network source, but not both).  I'd like to erase that
distinction so that pkg_add just searches all available sources.
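
So, hypothetically (this is the behavior I'd like to see, not what
pkg_add does now; today PKG_PATH takes directories only):

    # One search path mixing local directories and network
    # sources, searched in order:
    PKG_PATH="/cdrom/packages/All:/usr/local/pkg-cache:ftp://ftp.freebsd.org/pub/FreeBSD/ports/i386/packages/All"
    export PKG_PATH
    pkg_add gmake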

I can see where a flag to inhibit network downloads might be
useful. (I'm sitting on an airplane with my laptop and
I _think_ I downloaded everything I needed before I left the office.)
However, flags like -r that select individual sources just strike
me as rather pointless.


Perhaps you could formulate a couple of scenarios in which it would
be beneficial to be able to mix and match these two?

Where the port exists, but a pre-built binary isn't available,

Okay, I can see some merit in having pkg_add mine the ports system as a source of packages. Basically, if the ports version is the newest, give the user the option of installing from there. Easy to do, but I'd be cautious with it. Building OpenOffice or KDE from ports is an adventure most people would rather skip, and pkg_add shouldn't automatically start a port compile just because it notices that there's a 1.0.3 port and a 1.0.2 package.

Of course, there's also some merit to working on this issue from
the other side.  In many cases, port requirements could easily be
satisfied from packages.  (How many people really need to compile
the JPEG library?)
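
The check itself is almost free, since pkg_info can already answer
"is this installed?" directly; the speculative part is teaching the
ports dependency logic to ask (the version string here is just an
example):

    # Returns 0 if an installed package already satisfies
    # the dependency, so the port build could be skipped.
    if pkg_info -e jpeg-6b; then
        echo "jpeg already installed; no need to build the port"
    fi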

... or where somebody wants easy install with his own configure options.

If people really want their own config options, then they should work with the port directly. There's no feasible way to wrap every possible port option into a single tool. The whole point of the ports framework is the extreme flexibility it provides.

... you get the advantage of being able to wrap ports
into the /var/db/pkg DB ... you could write a
switch to make a package ... based on the port.

All of this already exists.  Ports already register with the /var/db/pkg
DB and the ports framework already has make targets to build packages
from any port.
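
For example, to get a registered install and a binary package out of
any port in one shot:

    cd /usr/ports/graphics/jpeg
    make package    # builds, installs, registers in /var/db/pkg,
                    # and creates a binary package as well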

...  whereas I'm thinking of a DB on disk, or
retrieved over the network for EACH PACKAGE, ...

This already exists; it's called /usr/ports. See the pkg-* files, the Makefile, etc. All of that can already be mined by other tools. (Look at crunchgen for some tips on how to effectively mine data from conforming Makefiles.)
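
For instance, make itself will hand you the interesting variables
without building anything (these are the standard ports variable
names):

    cd /usr/ports/graphics/jpeg
    make -V PKGNAME -V BUILD_DEPENDS -V RUN_DEPENDS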

Tim Kientzle
