I meant to sound off earlier about many things in this thread, such as:

- not all dependencies are libraries ("need gnome >= 2.0 because its
settings manager is how you change the color scheme in program Z" or
"must have an MTA so that the disk-health-checker can notify you of
impending failures") - so even if you statically linked everything,
well, you'd still want to be able to check some kinds of dependencies;
they're not always ignorable or work-around-able.
- it's a really fragile thing to use different versions of libc and
ld-linux.so from the ones your system runs natively.  it's possible -
i've done it a handful of times ("must run new debian program on
ancient redhat!") - but it's not something you can support reliably.
in particular, i wouldn't do it for more than one or two
programs that were rarely used.  it's a nicer idea in theory than in
practice.
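
(for the curious: the trick, when i've done it, is to invoke the
foreign binary through its own dynamic loader instead of the system
one.  a rough sketch - the /opt/newer-glibc paths below are made up
for illustration, substitute wherever you unpacked the other distro's
libc:)

```shell
# first, see which loader the binary was linked to expect:
readelf -l ./debian-binary | grep interpreter

# then run it explicitly under the foreign loader, pointing it at the
# foreign libraries, bypassing the system's /lib64/ld-linux-x86-64.so.2:
/opt/newer-glibc/lib/ld-linux-x86-64.so.2 \
    --library-path /opt/newer-glibc/lib \
    ./debian-binary
```

(it works until the program touches something the loader can't paper
over - nss modules, dlopen'd plugins, helper programs - which is
exactly why it's fragile.)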

however, i write mostly to reply to one idea here:

On Fri, Mar 21, 2014 at 11:52 PM, Keith Lofstrom <[email protected]> wrote:

> So far, the most practical solution suggested by this thread is to
> run other distros as virtuals - I can live with the virtualization
> performance penalty for small and quick programs.  For the big,
> slow stuff,  I'll have to keep tweaking C code, sigh.

If you run code for days on end, maybe this makes sense.  if your runs
are shorter, make sure you're optimizing for the right thing - your
time!  waiting on an ever-so-slightly slower virtual environment
(depending on your CPU, the overhead is really *quite* low these
days!) to finish something that installed with zero hassle is a lot
more fun than spending the same amount of time hacking on the kind of
code you describe, which was never designed for (pleasurable)
maintainability - while the VM runs, you can be doing, well, anything
else.  make sure your optimizations make sense :)
_______________________________________________
PLUG mailing list
[email protected]
http://lists.pdxlinux.org/mailman/listinfo/plug