On 04/02/2012 4:13 AM, Thomas Leonard wrote:

> But it doesn't remove the need to handle diamond dependencies. Very
> often (most of the time, I think), if two libraries depend on a common
> one then it's because they need to communicate using a common API or
> common set of data-types.

This is true to an extent. Within Rust (when the versioning mechanism is finished, tested, and integrated into cargo) we'll handle disagreeing-diamonds ok.

For libraries outside Rust, there are mechanisms people have worked out for each OS to try to manage such disagreeing-diamonds too (sonames, which don't work because they don't sort and don't get bumped often enough; symbol versioning, which might work except nobody uses it; SxS, which works more like rust, but gets enormous).
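To make "disagreeing diamonds" concrete, here's a minimal sketch of what a resolver sees when two libraries pin different exact versions of a shared dependency. The names and the `Dep` structure are hypothetical, purely illustrative; this is not cargo's actual mechanism:

```rust
// Hypothetical sketch: detect a "disagreeing diamond" -- two entries in
// a flattened dependency set that name the same library but demand
// different exact versions.

#[derive(Debug, Clone, PartialEq)]
struct Dep {
    name: &'static str,
    version: &'static str, // exact version required
}

// Return every pair of deps that share a name but disagree on version.
fn find_diamond_conflicts(deps: &[Dep]) -> Vec<(Dep, Dep)> {
    let mut conflicts = Vec::new();
    for (i, a) in deps.iter().enumerate() {
        for b in &deps[i + 1..] {
            if a.name == b.name && a.version != b.version {
                conflicts.push((a.clone(), b.clone()));
            }
        }
    }
    conflicts
}

fn main() {
    // The app pins gtk 2.0; the widget library it imports pins gtk 2.4.
    let deps = [
        Dep { name: "gtk", version: "2.0" },
        Dep { name: "gtk", version: "2.4" },
        Dep { name: "ssl", version: "1.0" },
    ];
    let conflicts = find_diamond_conflicts(&deps);
    println!("{} conflict(s)", conflicts.len());
}
```

With exact pinning there is no way to satisfy both edges of the diamond; the resolver can only report the conflict, duplicate the library, or force one side to lose.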

What I'm saying, repeating it here since it didn't seem to stick with the last message, is that there is a threshold at which any such version-everything solution *must give up*.

You cannot version the entire universe. Not even the software-and-hardware universe. Both because it's too big and because (as I get to below) each level is itself non-computable.

> For example, if I write an application using GTK to handle windows,
> and I import a widget library, then it's no good having the widget
> library using its own version of GTK, because I won't be able to embed
> its widgets in my windows.

I know, it's awful, and you can't always solve this. Even if you put the GTKs *right next to each other* in the target environment you can't even solve this, because they have global PLT-resolved symbols they fail to version and the loader may pick some from one library and some from another as it lazily resolves stuff.

> Sure, but now the user needs to manage two separate logging
> configurations for the same application, or risk them overwriting each
> other's log files (although this is probably a bad example, as I see
> that Rust very sensibly has built-in logging support).
>
> By the way, does this mean that a Rust program must link against
> exactly the same version of the library that was used at build time?
> e.g. if I upgrade libssl then I must rebuild all applications using
> it?

No, we'll probably support some level of version-range or prefix-matching. Despite the fact that this *weakens* the likelihood of proper functioning.
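To illustrate what prefix-matching might mean here, a tiny sketch: a requirement like "1.2" accepts any installed version whose dotted components begin with 1.2 (so 1.2.3 or 1.2.9, but not 1.20.0 or 1.3.0). This is an assumption about the eventual syntax, not a description of what cargo will actually ship:

```rust
// Hypothetical prefix-matching of dotted version strings.
// "1.2" matches "1.2.3" but not "1.20.0" -- we compare whole
// components, not raw string prefixes.
fn prefix_matches(required: &str, installed: &str) -> bool {
    let req: Vec<&str> = required.split('.').collect();
    let inst: Vec<&str> = installed.split('.').collect();
    // Every required component must be present and equal, component-wise.
    req.len() <= inst.len() && req.iter().zip(&inst).all(|(r, i)| r == i)
}

fn main() {
    assert!(prefix_matches("1.2", "1.2.3"));
    assert!(!prefix_matches("1.2", "1.20.0"));
    assert!(!prefix_matches("1.2", "1.3.0"));
    println!("ok");
}
```

The looseness is the point: the libssl you link at run time need only share a prefix with the one you built against, which is exactly the weakening of guarantees I mention above.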

I accept, however, that precise-versioning and loose-coupled-upgrades are in an inherent engineering tradeoff with one another. So I'm ok picking a point on the tradeoff spectrum based on my sense of taste and annoyance (in consultation with others). I don't believe it's solvable.

...

Really, as I said above, I feel like this whole conversation is stuck on your assumption that the problem is actually solvable (or even precisely denotable). And in a very thorough and concrete way, I don't think it is. For perhaps 4 broad reasons (along with a zillion technical details):

 1. There's tension between precise-coupling (for correctness) and
    loose-coupling (for ease of use). This is why versioning systems
    often support imprecise wildcard matching, symbolic names, etc.
    And as soon as you have symbolic names you have a problem of
    naming authority, which -- if you take seriously -- you wind up
    having to invent something like PKI in order to solve. Punt to
    DNS or GPG whenever possible, maybe, but Naming Is Hard and it's
    usually a source of endless assumption-mismatch, precisely because
    names occupy a weird, ill-defined neither-zone between structured
    data and opaque nonces.

 2. A -> B dependency when A and B are turing-complete artifacts is
    generally non-computable. I.e. you can't even *define* when you
    have "captured" a dependency accurately. You can capture a declared
    dependency but it doesn't actually guarantee there's no undeclared
    dependency. This is why many people are taking to using bit-identity
    of the delivered artifacts (git content addressing, "ship the DLLs
    side-by-side with the .exe", or just shipping whole system images
    / VM images / etc. rather than symbolically-described "packages").

 3. Enumerating the "dependencies of A" depends on what you want to
    *do* with A. 0install talks about executing, but we need to compile,
    bootstrap-compile, recompile-LLVM-and-self-host-with-it, build docs,
    build debugging output, run tests ... you wind up reinventing build
    systems in general in order to handle all the things a developer
    wants to capture the "dependencies for".

 4. The *entire* environment is a dependency. It's often not just
    non-computable but physically off limits to versioning. I.e. you
    can version a kernel but not the firmware of the machine it's
    running on, because the firmware isn't under software control and
    can't be analyzed/sandboxed/controlled. This goes double for
    the network environment in which something runs or, heaven forbid,
    the physical environment in which the system winds up instantiated.
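The escape hatch in point 2, bit-identity of delivered artifacts, can be sketched in a few lines. Real content-addressed systems (git, for instance) use a cryptographic hash; std's `DefaultHasher` stands in here only to keep the example self-contained:

```rust
// Sketch of content addressing: an artifact's identity is a hash of its
// exact bytes, not a symbolic name-plus-version. Any change at all,
// declared or not, yields a different identity -- which is precisely
// why this sidesteps the non-computability of "did we capture every
// dependency?". DefaultHasher is a stand-in for a real crypto hash.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn artifact_id(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn main() {
    let shipped = b"fn greet() { /* version as shipped */ }";
    let patched = b"fn greet() { /* one byte differs  */ }";
    assert_ne!(artifact_id(shipped), artifact_id(patched));
    assert_eq!(artifact_id(shipped), artifact_id(shipped));
    println!("distinct artifacts, distinct identities");
}
```

You give up symbolic reasoning ("any 1.2.x will do") and gain certainty about exactly which bits you're running, which is the same tradeoff as shipping whole images.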

The problem is just ... bottomless. I'm sorry. I know you've spent a lot of energy on versioning and built an ambitious system. I'm sympathetic. I did so too, a long time back. But I'm relatively convinced at this point that there's no Correct Solution, just a bunch of wrong ones that have different forms of associated pain and cognitive load, and at some point you have to draw a line and get on with whatever else you were Actually Doing, rather than working further on versioning. We're doing a programming language. I'm ok developing cargo and rustc based on perceived and encountered needs, rather than striving for perfection.

-Graydon
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
