Thank you, lots of food for thought. For the Go package writer, now, I
guess the best practice, so as not to be part of the problem, is to

1- in minor revisions, of course, keep backward compatibility.
2- in major revisions, either:
    - make a new project: for instance, if my first project was
github.com/foo/bar, make a new one called github.com/foo/bar2 for version 2,
    - or, if applicable, keep the old directory structure as-is for backward
compatibility and add a new one for the new version: for instance, if
github.com/foo/bar used to define the packages github.com/foo/bar/baz and
github.com/foo/bar/foobar, keep them but now recommend
github.com/foo/bar/version2/baz and github.com/foo/bar/version2/foobar,

both having advantages and drawbacks.
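
To make the second option concrete, here is what the layout and a consumer
could look like (foo/bar, version2/ and DoSomething are made-up names, just
a sketch of the idea, not an established convention):

    github.com/foo/bar/
        baz/            v1 API, kept frozen
        foobar/         v1 API, kept frozen
        version2/
            baz/        v2 API
            foobar/     v2 API

    // Hypothetical consumer importing both major versions side by side.
    import (
        baz1 "github.com/foo/bar/baz"          // old call sites keep compiling
        baz2 "github.com/foo/bar/version2/baz" // new code opts into version 2
    )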

On Tuesday, December 26, 2017 at 4:22:35 PM UTC+1, David Collier-Brown wrote:
>
> Lots of in-line reply-bits (;-))
>
> On Tuesday, December 26, 2017 at 8:58:47 AM UTC-5, Fabien wrote:
>>
>> Thank you, that's a very interesting read.
>>
>> TLDR: If I understood everything, NP-complete != DLL hell, but anyway, Go 
>> is vulnerable to DLL hell, so we're screwed.
>>
>> I'm not sure I understand everything, though. Both your article and Russ 
>> Cox's, which you mention, seem to conflate NP-completeness and the concept 
>> of DLL hell. I thought DLL hell was the inability, plain and simple, to 
>> install two versions of the same DLL (and, AFAIK, of two different versions 
>> of a Go package). The problem of whether you need two incompatible versions 
>> or not happens to be NP-complete, but that's not the problem per se, is it?
>>
>
> I view the inability to have two versions of the same thing as the cause 
> and DLL hell as the effect, with finding a solution to DLL hell being 
> NP-hard. 
>
> There is a weaker sense of DLL hell too: that of having to find all the 
> different dependencies. We started out with things like "rpm search", but 
> folks quickly automated it. I still remember trying to find all the 
> dependencies for a copy of MythTV. Erk! You also see people addressing that 
> weaker case via Flatpaks, Docker images and the like, and that somewhat 
> mitigates the harder case.
>
>
>> If I understood your article and Russ Cox's correctly, the proposed 
>> solution is either to offer, in each subsequent version of a package / 
>> shared library / DLL, both the old and the new API, or to allow several 
>> versions of the same package to be installed at once. However, this leads 
>> to problems, as you now tend to have bloated binaries that contain old 
>> compatibility code you don't necessarily need. If versions 1 and 2 of 
>> package C are incompatible, I expect the latest binary version of package C 
>> to contain both versions 1 and 2.
>>
>
> Yes: we're trading space for time. In Multics there were a number of 
> things you could do with the binder to keep the RSS small, at the cost of a 
> larger on-disk library image.
>  
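
For instance, one way to carry both APIs in one package is to keep the old
entry points as thin deprecated wrappers over the new ones, so old callers
compile unchanged. A minimal sketch, with invented names (Frob is not any
real library):

    // Package frob sketches the "ship both APIs" idea.
    package frob

    import "context"

    // Frob is the version-1 entry point, kept for compatibility.
    //
    // Deprecated: use FrobWithContext.
    func Frob(n int) int {
        return FrobWithContext(context.Background(), n)
    }

    // FrobWithContext is the version-2 entry point.
    func FrobWithContext(ctx context.Context, n int) int {
        _ = ctx // a real implementation would honor cancellation
        return n * 2
    }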
>
>>
>> That's good for DLLs and other shared objects, but it seems overkill in 
>> the case of Go, since each executable includes all the library code it 
>> uses. 
>>
>
> In a static-linking environment like Go or Linux, you can drop all the 
> dead code when you create the final executable. There is a recent LWN 
> article about doing so for Linux: 
> https://lwn.net/SubscriberLink/741494/2efc53db170f03c5/
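
As a toy illustration of that dead-code dropping (my own example; things
like reflection can defeat the analysis, so take it as a sketch):

    // Since main never references Unused, the Go linker should omit it
    // from the final binary (checkable with `go tool nm` on the result).
    package main

    import "fmt"

    func Used() { fmt.Println("kept: reachable from main") }

    func Unused() { fmt.Println("dropped: never referenced") }

    func main() { Used() }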
>
>  
>
>> If I need package A and package B, and both of those packages use only 
>> version 2 of C, I don't want to also include version 1 of C in my 
>> executable: it is already big enough. If you want to avoid that, you need 
>> to be able to specify specific versions of packages, and you're almost back 
>> to square one, with an NP-hard problem. 
>> But this is not DLL hell anymore: first, you know your program can be 
>> installed, no matter what. You might need to duplicate lots of code and 
>> have several different versions of your packages, but at least there is a 
>> possible solution. Secondly, this is no longer a decision problem (is there 
>> a solution at all?) but an optimization problem (what is the best possible 
>> solution?): the initial, suboptimal solution (just install the latest 
>> version of each package, recursively) is found in linear time, and the 
>> optimization process can be interrupted at any moment if it takes too long, 
>> leaving you with a suboptimal but workable solution.
>>
>> So, for instance:
>>
>> Let's say I want to install package p, which requires both packages a and 
>> b.
>>
>> Here are package a's versions and their dependencies:
>>
>> a:
>>     v1: c >= 1
>>     v2: c = 2
>>
>> And package b:
>>
>> b:
>>    v1: c >= 2, d >= 1
>>    v2: c >= 3, d >= 1
>>
>> There are two more dependencies, c and d; here are their own versions and 
>> dependencies:
>>
>> c:
>>     v1: nothing
>>     v2: nothing
>>     v3: nothing
>>
>> d:
>>     v1: c = 1
>>
>> If I can't have two versions of the same package installed, I have to go 
>> through an NP-complete problem (and use a SAT solver or equivalent) just to 
>> find out that I'm actually out of luck: there is no way to install p while 
>> keeping at most one version of each package, since every version of b wants 
>> c >= 2 while b's dependency d pins c = 1. That is DLL hell.
>>
>> But if I can have several versions installed, a first reasonable option 
>> is to install the latest versions of a and b, and then the latest 
>> versions of each of their dependencies, recursively. If incompatible 
>> versions are needed, duplicate them. So, here, that initial solution would 
>> be a2, b2, c1, c2, c3, d1. Good, and found in linear time, but not perfect, 
>> since there are duplicate packages. I can now try to find an optimal 
>> solution, optimal being defined as "the newer the versions the better, the 
>> fewer installed packages the better". Here, the optimal solution is a1, b2, 
>> c1, c3, d1, which is only 5 installed versions, the trick being to 
>> downgrade to version 1 of package a. Still NP-hard as an optimization 
>> problem, but no DLL hell, and I already had a workable solution in linear 
>> time.
>>
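
Here is a small, self-contained sketch of that linear-time greedy pass on
the example above (my own toy model with invented types, not a real tool):
always pick the newest version a constraint allows, recurse into its
dependencies, and let incompatible versions coexist instead of failing.

    package main

    import "fmt"

    // constraint: package pkg must satisfy ok(version).
    type constraint struct {
        pkg string
        ok  func(v int) bool
    }

    // index: package -> published version -> that version's dependencies.
    // It encodes the a/b/c/d example above.
    var index = map[string]map[int][]constraint{
        "a": {
            1: {{"c", func(v int) bool { return v >= 1 }}},
            2: {{"c", func(v int) bool { return v == 2 }}},
        },
        "b": {
            1: {
                {"c", func(v int) bool { return v >= 2 }},
                {"d", func(v int) bool { return v >= 1 }},
            },
            2: {
                {"c", func(v int) bool { return v >= 3 }},
                {"d", func(v int) bool { return v >= 1 }},
            },
        },
        "c": {1: nil, 2: nil, 3: nil},
        "d": {1: {{"c", func(v int) bool { return v == 1 }}}},
    }

    // latest returns the newest version of pkg accepted by ok, or -1.
    func latest(pkg string, ok func(int) bool) int {
        best := -1
        for v := range index[pkg] {
            if v > best && ok(v) {
                best = v
            }
        }
        return best
    }

    // resolve installs the newest admissible version of pkg and recurses;
    // duplicate versions are tolerated rather than treated as a conflict.
    func resolve(pkg string, ok func(int) bool, installed map[string]map[int]bool) {
        v := latest(pkg, ok)
        if v < 0 {
            return // unsatisfiable; cannot happen in this example
        }
        if installed[pkg] == nil {
            installed[pkg] = map[int]bool{}
        }
        if installed[pkg][v] {
            return
        }
        installed[pkg][v] = true
        for _, dep := range index[pkg][v] {
            resolve(dep.pkg, dep.ok, installed)
        }
    }

    func main() {
        any := func(int) bool { return true }
        installed := map[string]map[int]bool{}
        resolve("a", any, installed)
        resolve("b", any, installed)
        fmt.Println(installed) // a:2 b:2 c:{1,2,3} d:1 -- the 6-version answer
    }

Shrinking that down to the optimal a1, b2, c1, c3, d1 is where the NP-hard
search comes in, but the greedy result is already installable.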
>
> There is a longish discussion of David J. (my evil twin) Brown's approach 
> to latest-version-preference in 
> https://www.usenix.org/legacy/publications/library/proceedings/als00/2000papers/papers/full_papers/browndavid/browndavid_html/
>
> We were working in a dynamic environment, so actually dropping the 
> unneeded bits wasn't part of the plan. We probably could have page-aligned 
> a lot of stuff in the libraries, too, to reduce the RSS.
>
> No one has discussed the garbage-elimination problem, though. In 
> development, we only needed to ship and support a few versions, as 
> intermediate versions were unused by anyone save my colleagues and me (Hey, 
> Edsel!). Arguably we need a way to phase out old versions in GLIBC and the 
> kernel.
>
>
>> Now, am I wrong to think that Go, in its current state, is still 
>> vulnerable to DLL hell, even with the vendoring mechanism?
>>
>
> Yes, but the likelihood is reduced, and the pain is localized to the point 
> at which you try to move forward to a new release of a dependency. You 
> either succeed or you defer the update until the dependency's dependencies 
> are up to date.
>
> I'm biased: I think the GLIBC approach should be used universally 
> throughout Linux so that I can avoid reimplementing it in Go, node.js, 
> Rust, Python, Perl, ... and so on ad infinitum (;-)) 
>
> --dave
>
