Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-09-23 Thread Alberto G. Corona
Just thinking aloud:

What if we add -current?

package -current

This would select the versions of the package that were current at the
time the cabal file was uploaded and successfully compiled on Hackage,
if the package is installed from Hackage.

If the cabal file is local, then current == any.

This option would eliminate the need to guess bounds for package
dependencies. It would also give a stronger guarantee that the package
will compile successfully when downloaded from Hackage.

Certainly, it would not guarantee this if your version of GHC differs
from the one on Hackage, but it would make things simpler and would
reduce the spectrum of possible failures.
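A sketch of how the proposal might read in a build-depends field. This syntax is purely hypothetical, invented here to illustrate the idea; no released version of Cabal supports a -current specifier:

```cabal
-- Hypothetical syntax illustrating the -current proposal only:
build-depends:
  base       -current,  -- resolve to whatever versions this package last
  containers -current,  -- compiled against on Hackage at upload time
  text       >= 0.11 && < 0.12  -- explicit ranges would still be allowed
```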


2012/8/24 wren ng thornton w...@freegeek.org

 On 8/22/12 12:35 PM, David Menendez wrote:

 As I see it, there are four possibilities for a given version of
 dependency:

 1. The version DOES work. The author (or some delegate) has compiled
 the package against this version and the resulting code is considered
 good.
 2. The version SHOULD work. No one has tested against this version,
 but the versioning policy promises not to break anything.
 3. The version MIGHT NOT work. No one has tested against this version,
 and the versioning policy allows breaking changes.
 4. The version DOES NOT work. This has been tested and the resulting
 code (if any) is considered not good.

 Obviously, cases 1 and 4 can only apply to previously released
 versions. The PVP requires setting upper bounds in order to
 distinguish cases 2 and 3 for the sake of future compatibility.
 Leaving off upper bounds except when incompatibility is known
 essentially combines cases 2 and 3.


 Right-o.



  So there are two failure modes:

 I. A version which DOES work is outside the bounds (that is, in case
 3). I think eliminating case 3 is too extreme. I like the idea of
 temporarily overriding upper bounds with a command-line option. The
 danger here is that we might actually be in case 4, in which case we
 don't want to override the bounds, but requiring an explicit override
 gives users a chance to determine if a particular version is
 disallowed because it is untested or because it is known to be
 incompatible.


 There are two failure modes with overriding stated bounds, however. On the
 one hand, the code could fail to compile. Okay, we know we're in case 4;
 all is well. On the other hand the code could successfully compile in ways
 the package designer knows to be buggy/wrong; we're actually in case 4, but
 the user does not know this. This is why it's problematic to simply allow
 overriding constraints. The package developer has some special knowledge
 that the compiler lacks, but if all constraints are considered equal then
 the developer has no way to convey that knowledge to the user (i.e., in an
 automated machine-checkable way). Consequently, the user can end up in a
 bad place because they thought this second failure mode was actually the
 success mode.

 This is why I advocate distinguishing hard constraints from soft
 constraints. By making this distinction, the developer has a means of
 conveying their knowledge to users. A soft bound defines an explicit
 boundary between case 1 and cases 2--4, which can be automatically (per
 PVP) extended to an implicit boundary between cases 1--2 and cases 3--4; a
 boundary which, as you say, can only be truly discovered after the code has
 been published. Extending soft boundaries in this way should be safe; at
 least it's as safe as possible with the foresight available to us. On the
 other hand, a hard bound defines an explicit boundary between case 4 and
 cases 1--3. If these are overridable, things may break silently as
 discussed above--- but the important thing is, in virtue of distinguishing
 hard and soft bounds, the user is made aware of this fact. By
 distinguishing hard and soft bounds, the developer can convey their special
 knowledge to the user. The user can ignore this information, but at least
 they'll do so in an informed way.


 --
 Live well,
 ~wren

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Alberto.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-23 Thread wren ng thornton

On 8/22/12 9:18 AM, Leon Smith wrote:

I think we actually agree more than we disagree;  I do think distinguishing
hard and soft upper bounds (no matter what they are called)  would help,
  and I'm just trying to justify them to some of the more dismissive
attitudes towards the idea.


Hopefully. Though you suggested conflating the hard/soft distinction and 
the reactive/proactive distinction, and I can't see how that would even 
make sense. The former is a matter of ontology (i.e., categorization of 
what things can/do/should exist), whereas the latter is a matter of 
policy (i.e., how people can/do/should behave). Clearly there's some 
relation between the two, but the distinctions are talking about 
completely different topics.




The only thing I think we (might) disagree on is the relative importance of
distinguishing hard and soft bounds versus being able to change bounds
easily after the fact (and *without* changing the version number associated
with the package.)

And on that count,  given the choice,  I pick being able to change bounds
after the fact, hands down.


Well sure, just updating Cabal to say it has soft upper bounds doesn't 
mean much unless they're actually overridable somehow ;)



I'm still dubious of being able to override hard bounds with a 
commandline flag. If the hard bound is valid then when you pass the flag 
to ignore the bound either (a) the code won't compile ---so the flag 
doesn't help any---, or (b) the code will compile in a way known to be 
silently wrong/buggy ---so the flag is evil.  Circumventing a (valid) 
hard bound is going to require altering the code, so what benefit is 
there in avoiding altering the .cabal file at the same time?


The only case I can conceive of it being helpful to circumvent a hard 
bound is if, in fact, the statement of the hard bound is incorrect. But, 
if that's the case, it's a bug; and surely correcting that bug should 
warrant nudging the fourth version number, ne? Also, this situation 
doesn't strike me as being common enough to warrant the effort of 
implementation. If it came for free from whatever work it takes to 
implement soft bounds (which must necessarily be overridable), I 
wouldn't really care. But if eliminating this burden would help in 
getting soft bounds implemented, then I see no downside to axing it.


--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-23 Thread wren ng thornton

On 8/22/12 12:35 PM, David Menendez wrote:

As I see it, there are four possibilities for a given version of dependency:

1. The version DOES work. The author (or some delegate) has compiled
the package against this version and the resulting code is considered
good.
2. The version SHOULD work. No one has tested against this version,
but the versioning policy promises not to break anything.
3. The version MIGHT NOT work. No one has tested against this version,
and the versioning policy allows breaking changes.
4. The version DOES NOT work. This has been tested and the resulting
code (if any) is considered not good.

Obviously, cases 1 and 4 can only apply to previously released
versions. The PVP requires setting upper bounds in order to
distinguish cases 2 and 3 for the sake of future compatibility.
Leaving off upper bounds except when incompatibility is known
essentially combines cases 2 and 3.


Right-o.



So there are two failure modes:

I. A version which DOES work is outside the bounds (that is, in case
3). I think eliminating case 3 is too extreme. I like the idea of
temporarily overriding upper bounds with a command-line option. The
danger here is that we might actually be in case 4, in which case we
don't want to override the bounds, but requiring an explicit override
gives users a chance to determine if a particular version is
disallowed because it is untested or because it is known to be
incompatible.


There are two failure modes with overriding stated bounds, however. On 
the one hand, the code could fail to compile. Okay, we know we're in 
case 4; all is well. On the other hand the code could successfully 
compile in ways the package designer knows to be buggy/wrong; we're 
actually in case 4, but the user does not know this. This is why it's 
problematic to simply allow overriding constraints. The package 
developer has some special knowledge that the compiler lacks, but if all 
constraints are considered equal then the developer has no way to convey 
that knowledge to the user (i.e., in an automated machine-checkable 
way). Consequently, the user can end up in a bad place because they 
thought this second failure mode was actually the success mode.


This is why I advocate distinguishing hard constraints from soft 
constraints. By making this distinction, the developer has a means of 
conveying their knowledge to users. A soft bound defines an explicit 
boundary between case 1 and cases 2--4, which can be automatically (per 
PVP) extended to an implicit boundary between cases 1--2 and cases 3--4; 
a boundary which, as you say, can only be truly discovered after the 
code has been published. Extending soft boundaries in this way should be 
safe; at least it's as safe as possible with the foresight available to 
us. On the other hand, a hard bound defines an explicit boundary between 
case 4 and cases 1--3. If these are overridable, things may break 
silently as discussed above--- but the important thing is, in virtue of 
distinguishing hard and soft bounds, the user is made aware of this 
fact. By distinguishing hard and soft bounds, the developer can convey 
their special knowledge to the user. The user can ignore this 
information, but at least they'll do so in an informed way.
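One way to picture the hard/soft distinction is as hypothetical .cabal syntax. The "soft"/"hard" markers below are invented for exposition; Cabal has no such distinction today:

```cabal
-- Hypothetical markers; not valid Cabal syntax in any released version.
build-depends:
  bytestring >= 0.9 && < 0.11 soft,  -- soft: 0.10 is the last known-good
                                     -- version; extending per PVP is safe
  parsec     >= 3.1 && < 3.2  hard   -- hard: 3.2 is known to compile but
                                     -- misbehave; overriding breaks silently
```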


--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-22 Thread Leon Smith
I think we actually agree more than we disagree;  I do think distinguishing
hard and soft upper bounds (no matter what they are called)  would help,
 and I'm just trying to justify them to some of the more dismissive
attitudes towards the idea.

The only thing I think we (might) disagree on is the relative importance of
distinguishing hard and soft bounds versus being able to change bounds
easily after the fact (and *without* changing the version number associated
with the package.)

And on that count,  given the choice,  I pick being able to change bounds
after the fact, hands down.   I believe this is more likely to
significantly improve the current situation than distinguishing the two
types of bound alone.   However,  being able to specify both (and change
both) after the fact may prove to be even better.

Best,
Leon

On Sat, Aug 18, 2012 at 11:52 PM, wren ng thornton w...@freegeek.orgwrote:

 On 8/17/12 11:28 AM, Leon Smith wrote:

 And the
 difference between reactionary and proactive approaches I think is a
 potential justification for the hard and soft upper bounds;  perhaps
 we
 should instead call them reactionary and proactive upper bounds
 instead.


 I disagree. A hard constraint says this package *will* break if you
 violate me. A soft constraint says this package *may* break if you
 violate me. These are vastly different notions of boundary conditions, and
 they have nothing to do with a proactive vs reactionary stance towards
 specifying constraints (of either type).

 The current problems of always giving (hard) upper bounds, and the
 previous problems of never giving (soft) upper bounds--- both stem from a
 failure to distinguish hard from soft! The current/proactive approach fails
 because the given constraints are interpreted by Cabal as hard constraints,
 when in truth they are almost always soft constraints. The
 previous/reactionary approach fails because when the future breaks, no one
 has bothered to write down the last time things were known to work.

 To evade both problems, one must distinguish these vastly different
 notions of boundary conditions. Hard constraints are necessary for
 blacklisting known-bad versions; soft constraints are necessary for
 whitelisting known-good versions. Having a constraint at all shows where
 the grey areas are, but it fails to indicate whether that grey is most
 likely to be black or white.

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-22 Thread David Menendez
As I see it, there are four possibilities for a given version of dependency:

1. The version DOES work. The author (or some delegate) has compiled
the package against this version and the resulting code is considered
good.
2. The version SHOULD work. No one has tested against this version,
but the versioning policy promises not to break anything.
3. The version MIGHT NOT work. No one has tested against this version,
and the versioning policy allows breaking changes.
4. The version DOES NOT work. This has been tested and the resulting
code (if any) is considered not good.

Obviously, cases 1 and 4 can only apply to previously released
versions. The PVP requires setting upper bounds in order to
distinguish cases 2 and 3 for the sake of future compatibility.
Leaving off upper bounds except when incompatibility is known
essentially combines cases 2 and 3.

So there are two failure modes:

I. A version which DOES work is outside the bounds (that is, in case
3). I think eliminating case 3 is too extreme. I like the idea of
temporarily overriding upper bounds with a command-line option. The
danger here is that we might actually be in case 4, in which case we
don't want to override the bounds, but requiring an explicit override
gives users a chance to determine if a particular version is
disallowed because it is untested or because it is known to be
incompatible.

II. A version which DOES NOT work is inside the bounds (that is, in
case 2). This happens when a package does not follow its own version
policy. For example, during the base-4 transition, a version of
base-3.0 was released which introduced a few breaking changes (e.g.,
it split the Arrow class). Alternately, a particular version might be
buggy. This can already be handled by adding constraints on the
command line, but it's better to release a new version of the package
with more restrictive constraints.

(This might not be enough, though. If I release foo-1.0.0 which
depends on bar-1.0.*, and then bar-1.0.1 is released with a bug
or breaking change, I can release foo-1.0.0.1 which disallows
bar-1.0.1. But we need some way of preventing cabal from using
foo-1.0.0. Can Hackage deprecate specific versions?)
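The command-line workaround mentioned above uses cabal-install's --constraint flag. In the foo/bar scenario (package names taken from the example), a user could steer the solver away from the broken release with:

```cabal
$ cabal install foo --constraint='bar < 1.0.1'
```

This only helps a user who already knows which version is bad; it does nothing for the foo-1.0.0 problem the parenthetical raises.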



On Wed, Aug 22, 2012 at 9:18 AM, Leon Smith leon.p.sm...@gmail.com wrote:
 I think we actually agree more than we disagree;  I do think distinguishing
 hard and soft upper bounds (no matter what they are called)  would help,
 and I'm just trying to justify them to some of the more dismissive attitudes
 towards the idea.

 The only thing I think we (might) disagree on is the relative importance of
 distinguishing hard and soft bounds versus being able to change bounds
 easily after the fact (and *without* changing the version number associated
 with the package.)

 And on that count,  given the choice,  I pick being able to change bounds
 after the fact, hands down.   I believe this is more likely to significantly
 improve the current situation than distinguishing the two types of bound
 alone.   However,  being able to specify both (and change both) after the
 fact may prove to be even better.

 Best,
 Leon


 On Sat, Aug 18, 2012 at 11:52 PM, wren ng thornton w...@freegeek.org
 wrote:

 On 8/17/12 11:28 AM, Leon Smith wrote:

 And the
 difference between reactionary and proactive approaches I think is a
 potential justification for the hard and soft upper bounds;  perhaps
 we
 should instead call them reactionary and proactive upper bounds
 instead.


 I disagree. A hard constraint says this package *will* break if you
 violate me. A soft constraint says this package *may* break if you violate
 me. These are vastly different notions of boundary conditions, and they
 have nothing to do with a proactive vs reactionary stance towards specifying
 constraints (of either type).

 The current problems of always giving (hard) upper bounds, and the
 previous problems of never giving (soft) upper bounds--- both stem from a
 failure to distinguish hard from soft! The current/proactive approach fails
 because the given constraints are interpreted by Cabal as hard constraints,
 when in truth they are almost always soft constraints. The
 previous/reactionary approach fails because when the future breaks, no one
 has bothered to write down the last time things were known to work.

 To evade both problems, one must distinguish these vastly different
 notions of boundary conditions. Hard constraints are necessary for
 blacklisting known-bad versions; soft constraints are necessary for
 whitelisting known-good versions. Having a constraint at all shows where the
 grey areas are, but it fails to indicate whether that grey is most likely to
 be black or white.

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Erik Hesselink
I am strongly against this, especially for packages in the platform.

If you fail to specify an upper bound, and I depend on your package,
your dependencies can break my package! For example, say I develop
executable A and I depend on library B == 1.0. Library B depends on
library C >= 0.5 (no upper bound). Now C 0.6 is released, which is
incompatible with B. This suddenly breaks my build, even though I have
not changed anything about my code or dependencies. This goes against
the 'robust' aspect mentioned as one of the properties of the Haskell
platform, and against the Haskell philosophy of correctness in
general.
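Concretely, the declarations in the scenario look something like this (A, B, and C are the placeholder names from the example):

```cabal
-- Executable A's .cabal file: exact bound on B.
build-depends: B == 1.0

-- Library B's .cabal file: lower bound only, so a future C-0.6
-- is accepted by the solver and can break A's build.
build-depends: C >= 0.5
```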

This is not an imaginary problem. At my company, we've run into these
problems numerous times already. Since we also have people who are not
experts at Cabal and the Haskell ecosystem building our software, this
can be very annoying. The fix is also not trivial: we can add a
dependency on a package we don't use to all our executables or we can
fork the library (B, in the example above) and add an upper bound/fix
the code. Both add a lot of complexity that we don't want. Add to that
the build failures and associated emails from CI systems like Jenkins.

I can see the maintenance burden you have, since we have to do the
same for our code. But until some Cabal feature is added to ignore
upper bounds or specify soft upper bounds, please follow the PVP, also
in this regard. It helps us maintain a situation where only our own
actions can break our software.

Erik

On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell" at this
 point, as the number of projects on Hackage that are intended to hack around
 the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I try to
 rebuild some of my packages to see if they work with the GHC 7.6 release
 candidate.

 A substantial number of the difficulties I am encountering are related to
 packages specifying upper bounds on their dependencies. This is a recurrent
 problem, and its source lies in the recommendations of the PVP itself
 (problematic phrase highlighted in bold):

 When publishing a Cabal package, you should ensure that your dependencies
 in the build-depends field are accurate. This means specifying not only
 lower bounds, but also upper bounds on every dependency.


 I understand that the intention behind requiring tight upper bounds was
 good, but in practice this has worked out terribly, leading to depsolver
 failures that prevent a package from being installed, when everything goes
 smoothly with the upper bounds relaxed. The default response has been for a
 flurry of small updates to packages in which the upper bounds are loosened,
 thus guaranteeing that the problem will recur in a year or less. This is
 neither sensible, fun, nor sustainable.

 In practice, when an author bumps a version of a depended-upon package, the
 changes are almost always either benign, or will lead to compilation failure
 in the depending-upon package. A benign change will obviously have no
 visible effect, while a compilation failure is actually better than a
 depsolver failure, because it's more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime failures
 caused by semantic changes. In these instances, a downstream package should
 reactively add an upper bound once a problem is discovered.

 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified only when there is a known
 problem with a new version of a depended-upon package.
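In build-depends terms, the proposal swaps the first style below for the second (package and version numbers are illustrative):

```cabal
-- PVP style: proactive upper bound, declared before any breakage is seen.
build-depends: text >= 0.11 && < 0.12

-- Proposed style: no upper bound until a version is known to break,
-- at which point a bound excluding it is added reactively.
build-depends: text >= 0.11
```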

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Chris Dornan
I think we should encourage stable build environments to know precisely
which package versions they have been using and to keep using them until
told otherwise. Even when the types and constraints all work out there is a
risk that upgraded packages will break. Everybody here wants cabal to just
install the packages without problem, but if you want to insulate yourself
from package upgrades surely sticking with proven combinations is the way to
go.

Chris

-Original Message-
From: haskell-cafe-boun...@haskell.org
[mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Erik Hesselink
Sent: 20 August 2012 08:33
To: Bryan O'Sullivan
Cc: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends

I am strongly against this, especially for packages in the platform.

If you fail to specify an upper bound, and I depend on your package, your
dependencies can break my package! For example, say I develop executable A
and I depend on library B == 1.0. Library B depends on library C >= 0.5 (no
upper bound). Now C 0.6 is released, which is incompatible with B. This
suddenly breaks my build, even though I have not changed anything about my
code or dependencies. This goes against the 'robust' aspect mentioned as one
of the properties of the Haskell platform, and against the Haskell
philosophy of correctness in general.

This is not an imaginary problem. At my company, we've run into these
problems numerous times already. Since we also have people who are not
experts at Cabal and the Haskell ecosystem building our software, this can
be very annoying. The fix is also not trivial: we can add a dependency on a
package we don't use to all our executables or we can fork the library (B,
in the example above) and add an upper bound/fix the code. Both add a lot of
complexity that we don't want. Add to that the build failures and associated
emails from CI systems like Jenkins.

I can see the maintenance burden you have, since we have to do the same for
our code. But until some Cabal feature is added to ignore upper bounds or
specify soft upper bounds, please follow the PVP, also in this regard. It
helps us maintain a situation where only our own actions can break our
software.

Erik

On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com
wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell"
 at this point, as the number of projects on Hackage that are intended 
 to hack around the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I 
 try to rebuild some of my packages to see if they work with the GHC 
 7.6 release candidate.

 A substantial number of the difficulties I am encountering are related 
 to packages specifying upper bounds on their dependencies. This is a 
 recurrent problem, and its source lies in the recommendations of the 
 PVP itself (problematic phrase highlighted in bold):

 When publishing a Cabal package, you should ensure that your 
 dependencies in the build-depends field are accurate. This means 
 specifying not only lower bounds, but also upper bounds on every
dependency.


 I understand that the intention behind requiring tight upper bounds 
 was good, but in practice this has worked out terribly, leading to 
 depsolver failures that prevent a package from being installed, when 
 everything goes smoothly with the upper bounds relaxed. The default 
 response has been for a flurry of small updates to packages in which 
 the upper bounds are loosened, thus guaranteeing that the problem will 
 recur in a year or less. This is neither sensible, fun, nor sustainable.

 In practice, when an author bumps a version of a depended-upon 
 package, the changes are almost always either benign, or will lead to 
 compilation failure in the depending-upon package. A benign change 
 will obviously have no visible effect, while a compilation failure is 
 actually better than a depsolver failure, because it's more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime 
 failures caused by semantic changes. In these instances, a downstream 
 package should reactively add an upper bound once a problem is discovered.

 I propose that the sense of the recommendation around upper bounds in 
 the PVP be reversed: upper bounds should be specified only when there 
 is a known problem with a new version of a depended-upon package.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Simon Marlow

On 15/08/2012 21:44, Johan Tibell wrote:

On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com wrote:

So we are certain that the rounds of failures that led to their being
*added* will never happen again?


It would be useful to have some examples of these. I'm not sure we had
any when we wrote the policy (but Duncan would know more), but rather
reasoned our way to the current policy by saying that things can
theoretically break if we don't have upper bounds, therefore we need
them.


I haven't read the whole thread (yet), but the main motivating example 
for upper bounds was when we split the base package (GHC 6.8) - 
virtually every package on Hackage broke.  Now at the time having upper 
bounds wouldn't have helped, because you would have got a depsolver 
failure instead of a type error.  But following the uproar about this we 
did two things: the next release of GHC (6.10) came with two versions of 
base, *and* we recommended that people add upper bounds.  As a result, 
packages with upper bounds survived the changes.


Now, you could argue that we're unlikely to do this again.  But the main 
reason we aren't likely to do this again is because it was so painful, 
even with upper bounds and compatibility libraries.  With better 
infrastructure and tools, *and* good dependency information, it should 
be possible to do significant reorganisations of the core packages.


As I said in my comments on Reddit[1], I'm not sure that removing upper 
bounds will help overall.  It removes one kind of failure, but 
introduces a new kind - and the new kind is scary, because existing 
working packages can suddenly become broken as a result of a change to a 
different package.  Will it be worse or better overall?  I have no idea. 
 What I'd rather see instead though is some work put into 
infrastructure on Hackage to make it easy to change the dependencies on 
existing packages.


Cheers,
Simon

[1] 
http://www.reddit.com/r/haskell/comments/ydkcq/pvp_upper_bounds_are_not_our_friends/c5uqohi


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Erik Hesselink
Hub looks interesting, I'll have to try it out (though I'm not on an
RPM based distro). But isn't this the goal of things like semantic
versioning [0] and the PVP? To know that you can safely upgrade to a
bugfix release, and relatively safely to a minor release, but on a major
release, you have to take care?

Haskell makes it much easier to see if you can use a new major (or
minor) version of a library, since the type checker catches many (but
not all!) problems for you. However, this leads to libraries breaking
their API's much more easily, and that in turn causes the problems
voiced in this thread. However, fixing all versions seems like a bit
of a blunt instrument, as it means I'll have to do a lot of work to
bring even bug fixes in.

Erik

[0] http://semver.org/
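The safety gradient described above maps directly onto PVP-style ranges: a bound on the major version admits only releases that promise API compatibility, while a tighter bound admits only bugfix releases (package name and versions illustrative):

```cabal
-- Accept only patch releases of foo-1.2.3:
build-depends: foo >= 1.2.3 && < 1.2.4
-- Accept anything API-compatible with foo-1.2 (PVP major version 1.2):
build-depends: foo >= 1.2 && < 1.3
```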



On Mon, Aug 20, 2012 at 3:13 PM, Chris Dornan ch...@chrisdornan.com wrote:
 I think we should encourage stable build environments to know precisely
 which package versions they have been using and to keep using them until
 told otherwise. Even when the types and constraints all work out there is a
 risk that upgraded packages will break. Everybody here wants cabal to just
 install the packages without problem, but if you want to insulate yourself
 from package upgrades surely sticking with proven combinations is the way to
 go.

 Chris

 -Original Message-
 From: haskell-cafe-boun...@haskell.org
 [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Erik Hesselink
 Sent: 20 August 2012 08:33
 To: Bryan O'Sullivan
 Cc: haskell-cafe@haskell.org
 Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
 our friends

 I am strongly against this, especially for packages in the platform.

 If you fail to specify an upper bound, and I depend on your package, your
 dependencies can break my package! For example, say I develop executable A
 and I depend on library B == 1.0. Library B depends on library C >= 0.5 (no
 upper bound). Now C 0.6 is released, which is incompatible with B. This
 suddenly breaks my build, even though I have not changed anything about my
 code or dependencies. This goes against the 'robust' aspect mentioned as one
 of the properties of the Haskell platform, and against the Haskell
 philosophy of correctness in general.

 This is not an imaginary problem. At my company, we've run into these
 problems numerous times already. Since we also have people who are not
 experts at Cabal and the Haskell ecosystem building our software, this can
 be very annoying. The fix is also not trivial: we can add a dependency on a
 package we don't use to all our executables or we can fork the library (B,
 in the example above) and add an upper bound/fix the code. Both add a lot of
 complexity that we don't want. Add to that the build failures and associated
 emails from CI systems like Jenkins.

 I can see the maintenance burden you have, since we have to do the same for
 our code. But until some Cabal feature is added to ignore upper bounds or
 specify soft upper bounds, please follow the PVP, also in this regard. It
 helps us maintain a situation where only our own actions can break our
 software.

 Erik

 On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell"
 at this point, as the number of projects on Hackage that are intended
 to hack around the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I
 try to rebuild some of my packages to see if they work with the GHC
 7.6 release candidate.

 A substantial number of the difficulties I am encountering are related
 to packages specifying upper bounds on their dependencies. This is a
 recurrent problem, and its source lies in the recommendations of the
 PVP itself (problematic phrase highlighted in bold):

 When publishing a Cabal package, you should ensure that your
 dependencies in the build-depends field are accurate. This means
 specifying not only lower bounds, but also upper bounds on every
 dependency.
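Following that recommendation, a PVP-conformant dependency list pairs each lower bound with an upper bound excluding the next major version (the package names and version numbers here are illustrative):

```cabal
build-depends:
  base       >= 4.5 && < 4.6,
  bytestring >= 0.9 && < 0.10,
  containers >= 0.4 && < 0.5
```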


 I understand that the intention behind requiring tight upper bounds
 was good, but in practice this has worked out terribly, leading to
 depsolver failures that prevent a package from being installed, when
 everything goes smoothly with the upper bounds relaxed. The default
 response has been for a flurry of small updates to packages in which
 the upper bounds are loosened, thus guaranteeing that the problem will
 recur in a year or less. This is neither sensible, fun, nor sustainable.

 In practice, when an author bumps a version of a depended-upon
 package, the changes are almost always either benign, or will lead to
 compilation failure in the depending-upon package. A benign change
 will obviously have no visible effect, while a compilation failure is
 actually better than a depsolver failure, because it's more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime
 failures caused

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Gregory Collins
My two (or three) cents:

   - Given a choice between a world where there is tedious work for package
   maintainers vs. a world where packages randomly break for end users (giving
   them a bad impression of the entire Haskell ecosystem), I choose the former.

   - More automation can ease the burden here. Michael Snoyman's packdeps
   tool is a great start in this direction, and it would be even better if it
   automagically fixed libraries for you and bumped your version number
   according to the PVP.

   - This is a great problem to have. There's so much work happening that
   people find it hard to stay on the treadmill? Things could be a lot worse.
   I guarantee you that our friends in the Standard ML community are not
   having this discussion. :-)

G
-- 
Gregory Collins g...@gregorycollins.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Chris Dornan
Of course if you wish or need to upgrade a package then you can just upgrade
it -- I am not suggesting anyone should forgo upgrades! It is just that
there is no need to make the stability of a build process dependent on new
package releases.

To upgrade a package I would fork my sandbox, hack away at the package
database (removing, upgrading, installing packages) until I have a candidate
combination of packages and swap in the new sandbox into my work tree and
test it, reverting to the tried and tested environment if things don't work
out.  If the new configuration works then then I would dump the new package
configuration and check it in. Subsequent updates and builds on other work
trees should pick up the new environment.

The key thing I was looking for was control of when your build environment
gets disrupted and stability in between -- even when building from the repo.

Chris

-Original Message-
From: Erik Hesselink [mailto:hessel...@gmail.com] 
Sent: 20 August 2012 14:35
To: Chris Dornan
Cc: Bryan O'Sullivan; haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends

Hub looks interesting, I'll have to try it out (though I'm not on an RPM
based distro). But isn't this the goal of things like semantic versioning
[0] and the PVP? To know that you can safely upgrade to a bugfix release,
and relatively safely to a minor release, but on a major release, you have to
take care?

Haskell makes it much easier to see if you can use a new major (or
minor) version of a library, since the type checker catches many (but not
all!) problems for you. However, this leads to libraries breaking their
APIs much more easily, and that in turn causes the problems voiced in this
thread. However, fixing all versions seems like a bit of a blunt instrument,
as it means I'll have to do a lot of work to bring even bug fixes in.

Erik

[0] http://semver.org/



On Mon, Aug 20, 2012 at 3:13 PM, Chris Dornan ch...@chrisdornan.com wrote:
 I think we should encourage stable build environments to know 
 precisely which package versions they have been using and to keep 
 using them until told otherwise. Even when the types and constraints 
 all work out there is a risk that upgraded packages will break. 
 Everybody here wants cabal to just install the packages without 
 problem, but if you want to insulate yourself from package upgrades 
 surely sticking with proven combinations is the way to go.

 Chris

 -Original Message-
 From: haskell-cafe-boun...@haskell.org 
 [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Erik Hesselink
 Sent: 20 August 2012 08:33
 To: Bryan O'Sullivan
 Cc: haskell-cafe@haskell.org
 Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds 
 are not our friends

 I am strongly against this, especially for packages in the platform.

 If you fail to specify an upper bound, and I depend on your package, 
 your dependencies can break my package! For example, say I develop 
 executable A and I depend on library B == 1.0. Library B depends on 
 library C >= 0.5 (no upper bound). Now C 0.6 is released, which is 
 incompatible with B. This suddenly breaks my build, even though I have 
 not changed anything about my code or dependencies. This goes against 
 the 'robust' aspect mentioned as one of the properties of the Haskell 
 platform, and against the Haskell philosophy of correctness in general.

 This is not an imaginary problem. At my company, we've run into these 
 problems numerous times already. Since we also have people who are not 
 experts at Cabal and the Haskell ecosystem building our software, this 
 can be very annoying. The fix is also not trivial: we can add a 
 dependency on a package we don't use to all our executables or we can 
 fork the library (B, in the example above) and add an upper bound/fix 
 the code. Both add a lot of complexity that we don't want. Add to that 
 the build failures and associated emails from CI systems like Jenkins.

 I can see the maintenance burden you have, since we have to do the 
 same for our code. But until some Cabal feature is added to ignore 
 upper bounds or specify soft upper bounds, please follow the PVP, also 
 in this regard. It helps us maintain a situation where only our own 
 actions can break our software.

 Erik

 On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell"
 at this point, as the number of projects on Hackage that are intended 
 to hack around the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I 
 try to rebuild some of my packages to see if they work with the GHC
 7.6 release candidate.

 A substantial number of the difficulties I am encountering are 
 related to packages specifying upper bounds on their dependencies. 
 This is a recurrent problem, and its source lies in the 
 recommendations of the PVP itself

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Brent Yorgey
On Thu, Aug 16, 2012 at 06:07:06PM -0400, Joey Adams wrote:
 On Wed, Aug 15, 2012 at 3:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:
  I propose that the sense of the recommendation around upper bounds in the
  PVP be reversed: upper bounds should be specified only when there is a known
  problem with a new version of a depended-upon package.
 
 I, too, agree.  Here is my assortment of thoughts on the matter.
 
 Here's some bad news: with cabal 1.14 (released with Haskell Platform
 2012.2), cabal init defaults to bounds like these:
 
   build-depends:   base ==4.5.*, bytestring ==0.9.*,
   http-types ==0.6.*

I'm not sure why you think this is bad news.  I designed this to
conform exactly to the current PVP.  If the PVP is changed then I will
update cabal init to match.

-Brent



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Iavor Diatchki
Hello,

I also completely agree with Bryan's point which is why I usually don't add
upper bounds on the dependencies of the packages that I maintain---I find
that the large majority of updates to libraries tend to be backward
compatible, so being optimistic seems like a good idea.

By the way, something I encounter quite often is a situation where two
packages both build on Hackage just fine, but are not compatible with each
other.  Usually it goes like this:

  1. Package A requires library X >= V (typically, because it needs a bug
fix or a new feature).
  2. Package B requires library X < V (typically, because someone added a
conservative upper bound that needs to be updated).
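In .cabal terms, the clash looks something like this (version numbers are illustrative):

```cabal
-- Package A: needs a fix that first appeared in X 1.2.
build-depends: X >= 1.2

-- Package B: written against X 1.1, with a conservative PVP upper bound.
build-depends: X >= 1.0 && < 1.2

-- No version of X satisfies both constraints, so A and B cannot be
-- installed together, even though each builds fine on its own.
```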

Trying to use A and B together leads to failure, which is usually resolved
by installing B manually and removing its upper bound by hand.  This is
rather unfortunate: not only is it inconvenient, but now there is no
released version of package B that you can explicitly depend on.

-Iavor



On Mon, Aug 20, 2012 at 7:11 AM, Brent Yorgey byor...@seas.upenn.eduwrote:

 On Thu, Aug 16, 2012 at 06:07:06PM -0400, Joey Adams wrote:
  On Wed, Aug 15, 2012 at 3:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
   I propose that the sense of the recommendation around upper bounds in
 the
   PVP be reversed: upper bounds should be specified only when there is a
 known
   problem with a new version of a depended-upon package.
 
  I, too, agree.  Here is my assortment of thoughts on the matter.
 
  Here's some bad news: with cabal 1.14 (released with Haskell Platform
  2012.2), cabal init defaults to bounds like these:
 
build-depends:   base ==4.5.*, bytestring ==0.9.*,
http-types ==0.6.*

 I'm not sure why you think this is bad news.  I designed this to
 conform exactly to the current PVP.  If the PVP is changed then I will
 update cabal init to match.

 -Brent




Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-19 Thread Felipe Almeida Lessa
On Sat, Aug 18, 2012 at 3:57 AM, David Feuer david.fe...@gmail.com wrote:
 If the language is changed (without possibility of breakage, I
 believe) so that names declared in a module shadow imported names,
 incompatibility can only arise if two different imports offer the same
 name, and it is actually used.

This already happens in practice (e.g. "take": how many modules
declare that?) and is one of the problems that qualified imports
solve.
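A minimal Haskell sketch of the point about qualified imports (the module choice is illustrative):

```haskell
module Main where

-- Importing Data.Map qualified means that even if a future version of
-- containers adds a function whose name clashes with the Prelude (as
-- eventually happened with Data.Map.take), this module keeps compiling:
-- every use of the library is spelled Map.something.
import qualified Data.Map as Map

sample :: Map.Map String Int
sample = Map.fromList [("a", 1), ("b", 2)]

main :: IO ()
main = do
  -- 'take' unambiguously refers to Prelude.take here.
  print (take 3 "haskell")
  print (Map.lookup "a" sample)
```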

Cheers,

-- 
Felipe.



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-19 Thread Chris Dornan
I agree with Bryan's proposal unreservedly. 

However, I think there might be a way to resolve the tension between:

  * maintaining and publishing a definite dependent-package configuration
that is known to work and

  * having a package unencumbered with arbitrary restrictions on which
future versions of its dependent packages it will work with.

Can't we have both? The cabal file would only eliminate package versions that
are known not to work, as Bryan suggests, but the versions of the dependent
packages that the package has been tested with -- the 'reference
configuration' -- could be recorded separately. A separate build tool could
take the reference configuration and direct cabal to rebuild the reference
instance, specifying the fully qualified packages to use.

I have a set of tools for doing this (see http://justhub.org -- if you have
root access to an RPM-based Linux then the chances are you can try it out;
otherwise the sources are available).

For each project or package, separate from the cabal file recording the hard
dependencies,  I record the current reference configuration in a file which
lists the base installation (generally a Haskell platform but it can be a
bare compiler) and the list of fully-qualified dependent modules. Like the
cabal file the reference configuration would get checked into the VCS and/or
included in the Hackage tarball.
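As a rough illustration, such a reference configuration might look like the listing below. This format is invented for illustration and is not the actual justhub file format; the point is only that it pins a base platform plus fully-qualified package versions:

```
platform: 2012.2.0.0
packages:
  text-0.11.2.3
  attoparsec-0.10.2.0
  blaze-builder-0.3.1.0
```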

A normal build process first builds the environment ensuring that the
correct platform is selected and the exact package dependencies are
installed -- this is usually just a checking step unless the package
environment has been disturbed or the reference configuration has been
revised. Once the environment has been checked the normal program build
process proceeds as normal, where the real work generally happens.

Once the project is checked out on another system (or a package is installed
anew) the build step would actually build all of the dependent packages.

For this to really work a sandbox mechanism is needed -- merely trying out a
package/project shouldn't trash your only development environment! 

If a library package is to be integrated into a live project the reference
environment probably won't be the one you need but I find it useful to be
able to build it anyway in a clean environment and incrementally
up/downgrade the packages, searching out a compatible configuration. (Being
able to easily push and recover sandboxes is helpful here too.)

Would this way of working resolve the tension we are seeing here? I am so
used to working this way it is difficult for me to say.

(As others have said here and elsewhere, functional packaging is really
cool.)

Chris


-Original Message-
From: haskell-cafe-boun...@haskell.org
[mailto:haskell-cafe-boun...@haskell.org] On Behalf Of MightyByte
Sent: 16 August 2012 04:02
To: Ivan Lazar Miljenovic
Cc: Haskell Cafe
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends

On Wed, Aug 15, 2012 at 9:19 PM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 16 August 2012 08:55, Brandon Allbery allber...@gmail.com wrote:
 Indeed.  But the ghc release that split up base broke cabalised 
 packages with no warning to users until they failed to compile.  
 Upper bounds were put in place to avoid that kind of breakage in the
future.

 I like having upper bounds on version numbers... right up until people 
 abuse them.

I also tend to favor having upper bounds.  Obviously they impose a cost, but
it's not clear to me at all that getting rid of them is a better tradeoff.
I've had projects that I put aside for awhile only to come back and discover
that they would no longer build because I hadn't put upper bounds on all my
package dependencies.  With no upper bounds, a package might not be very
likely to break for incremental version bumps, but eventually it *will*
break.  And when it does it's a huge pain to get it building again.  If I
have put effort into making a specific version of my package work properly
today, I want it to always work properly in the future (assuming that
everyone obeys the PVP).  I don't think it's unreasonable that some
activation energy be required to allow one's project to work with a new
version of some upstream dependency.

Is that activation energy too high right now?  Almost definitely.  But
that's a tool problem, not a problem with the existence of upper bounds
themselves.  One tool-based way to help with this problem would be to add a
flag to Cabal/cabal-install that would cause it to ignore upper bounds.
(Frankly, I think it would also be great if Cabal/cabal-install enforced
upper version bounds automatically if none were specified.)  Another
approach that has been discussed is detecting dependencies that are only
used internally[1], and I'm sure there are many other possibilities.  In
short, I think we should be moving more towards purely functional builds
that reduce the chance that external factors will break things

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-18 Thread David Feuer
On Thu, Aug 16, 2012 at 9:53 AM, Felipe Almeida Lessa
felipe.le...@gmail.com wrote:

 If you import qualified then adding functions will never break anything.

If the language is changed (without possibility of breakage, I
believe) so that names declared in a module shadow imported names,
incompatibility can only arise if two different imports offer the same
name, and it is actually used.

David



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-18 Thread wren ng thornton

On 8/17/12 11:28 AM, Leon Smith wrote:

And the
difference between reactionary and proactive approaches I think is a
potential justification for the hard and soft upper bounds;  perhaps we
should instead call them reactionary and proactive upper bounds instead.


I disagree. A hard constraint says "this package *will* break if you 
violate me". A soft constraint says "this package *may* break if you 
violate me". These are vastly different notions of boundary conditions, 
and they have nothing to do with a proactive vs reactionary stance 
towards specifying constraints (of either type).


The current problems of always giving (hard) upper bounds, and the 
previous problems of never giving (soft) upper bounds--- both stem from 
a failure to distinguish hard from soft! The current/proactive approach 
fails because the given constraints are interpreted by Cabal as hard 
constraints, when in truth they are almost always soft constraints. The 
previous/reactionary approach fails because, when the future breaks, no one 
bothered to write down the last time things were known to work.


To evade both problems, one must distinguish these vastly different 
notions of boundary conditions. Hard constraints are necessary for 
blacklisting known-bad versions; soft constraints are necessary for 
whitelisting known-good versions. Having a constraint at all shows where 
the grey areas are, but it fails to indicate whether that grey is most 
likely to be black or white.
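No such distinction exists in Cabal's syntax; as a purely hypothetical illustration, the `&&?` marker below is invented to show how a soft (tested-up-to) bound might sit alongside a hard (known-to-break) bound:

```cabal
-- HYPOTHETICAL syntax, not accepted by any real Cabal version.
build-depends:
  base       >= 4 && < 5,       -- hard: known to break outside this range
  containers >= 0.4 &&? < 0.6   -- soft: tested up to 0.5.x, untested beyond
```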


--
Live well,
~wren



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Daniel Trstenjak

On Thu, Aug 16, 2012 at 11:33:17PM -0400, wren ng thornton wrote:
 However, there are certainly cases where we have hard upper
 bounds[1][2][3], and ignoring those is not fine. Circumventing hard
 upper bounds should require altering the .cabal file, given as
 getting things to compile will require altering the source code as
 well.

It's ok to have soft and hard upper bounds, but it should be always
possible to ignore both - having a separate ignore option for each -
without modifying the cabal file.

I'm confident that most cabal users can handle this. Nothing is more
annoying than knowing what you're doing but not having the power to do
it.


Greetings,
Daniel



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Heinrich Apfelmus

Brent Yorgey wrote:

Yitzchak Gale wrote:

For actively maintained packages, I think the
problem is that package maintainers don't find
out promptly that an upper bound needs to be
bumped. One way to solve that would be a
simple bot that notifies the package maintainer
as soon as an upper bound becomes out-of-date.


This already exists:

  http://packdeps.haskellers.com/


Indeed. It even has RSS feeds, like this

 http://packdeps.haskellers.com/feed/reactive-banana

Extremely useful!


Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com




Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Thomas Schilling
My thoughts on the matter got a little long, so I posted them here:

http://nominolo.blogspot.co.uk/2012/08/beyond-package-version-policies.html

On 17 August 2012 12:48, Heinrich Apfelmus apfel...@quantentunnel.dewrote:

 Brent Yorgey wrote:

 Yitzchak Gale wrote:

 For actively maintained packages, I think the
 problem is that package maintainers don't find
 out promptly that an upper bound needs to be
 bumped. One way to solve that would be a
 simple bot that notifies the package maintainer
 as soon as an upper bound becomes out-of-date.


 This already exists:

   http://packdeps.haskellers.com/


 Indeed. It even has RSS feeds, like this

  
  http://packdeps.haskellers.com/feed/reactive-banana

 Extremely useful!


 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com







-- 
Push the envelope. Watch it bend.


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Leon Smith
I see good arguments on both sides of the upper bounds debate,  though at
the current time I think the best solution is to omit upper bounds (and I
have done so for most/all of my packages on hackage). But I cannot agree
with this enough:

On Thu, Aug 16, 2012 at 4:45 AM, Joachim Breitner
m...@joachim-breitner.dewrote:

 I think what we’d need is a more relaxed policy with modifying a
 package’s meta data on hackage. What if hackage would allow uploading a
 new package with the same version number, as long as it is identical up
 to an extended version range? Then the first person who stumbles over an
 upper bound that turned out to be too tight can just fix it and upload
 the fixed package directly, without waiting for the author to react.


I think that constraint ranges of a given package should be able to both be
extended and restricted after the fact.   Those in favor of the reactionary
approach (as I am at the moment, or Bryan O'Sullivan) would find the
ability to restrict the version range useful,  while those in favor of
the proactive approach (like Joachim Breitner or Doug Beardsley) would find
the ability to extend the version range useful.

I suspect that attitudes towards upper bounds may well change if we can set
version ranges after the fact. I know mine very well might. And the
difference between reactionary and proactive approaches I think is a
potential justification for the hard and soft upper bounds;  perhaps we
should instead call them reactionary and proactive upper bounds instead.


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread MigMit
What if instead of upper (and lower) bounds we just specify our interface 
requirements? Like: package bull-shit should provide a value Foo.Bar.baz :: 
forall a. [a] -> [a] -> [a], or something more general. Sure, it won't help with 
strictness/laziness, but it would capture most interface differences. And, in 
case the requirements aren't specified, we could also specify a default, like: 
bull-shit 2.0 is known to fulfil this requirement; if yours doesn't, consider 
installing this one.
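As a purely hypothetical sketch of that idea (none of these fields exist in Cabal; the field names and syntax are invented for illustration):

```cabal
-- HYPOTHETICAL fields, invented for illustration only.
build-depends: bull-shit
requires:
  Foo.Bar.baz :: forall a. [a] -> [a] -> [a]
default-provider:
  bull-shit == 2.0
```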


On Aug 17, 2012, at 7:28 PM, Leon Smith leon.p.sm...@gmail.com wrote:

 I see good arguments on both sides of the upper bounds debate,  though at the 
 current time I think the best solution is to omit upper bounds (and I have 
 done so for most/all of my packages on hackage). But I cannot agree with 
 this enough:
 
 On Thu, Aug 16, 2012 at 4:45 AM, Joachim Breitner m...@joachim-breitner.de 
 wrote:
 I think what we’d need is a more relaxed policy with modifying a
 package’s meta data on hackage. What if hackage would allow uploading a
 new package with the same version number, as long as it is identical up
 to an extended version range? Then the first person who stumbles over an
 upper bound that turned out to be too tight can just fix it and upload
 the fixed package directly, without waiting for the author to react.
 
 I think that constraint ranges of a given package should be able to both be 
 extended and restricted after the fact.   Those in favor of the reactionary 
 approach (as I am at the moment, or Bryan O'Sullivan) would find the ability 
 to restrict the version range useful,  while those in favor of the 
 proactive approach (like Joachim Breitner or Doug Beardsley) would find the 
 ability to extend the version range useful.
 
 I suspect that attitudes towards upper bounds may well change if we can set 
 version ranges after the fact. I know mine very well might. And the 
 difference between reactionary and proactive approaches I think is a 
 potential justification for the hard and soft upper bounds;  perhaps we 
 should instead call them reactionary and proactive upper bounds instead. 




Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Bryan O'Sullivan
On Fri, Aug 17, 2012 at 12:34 PM, MigMit miguelim...@yandex.ru wrote:

 What if instead of upper (and lower) bounds we just specify our interface
 requirements?


We already have a simple versioning scheme for which, despite it being easy
to grasp, we have amply demonstrated that we cannot make it work well,
because it has emergent properties that cause it to not scale well across a
large community.

Any vastly more complicated and detailed versioning scheme has a huge
burden to prove that it won't collapse dramatically more quickly. (Frankly,
I think that anything involving "specify every detail of your known
dependencies" is dead on arrival from a practical standpoint: it's way too
much work.)

For that matter, I think that this burden applies to my own proposal to
omit upper bounds unless they're needed.

Fortunately, we now have several years of historical dependency data that
we can go back and mine, and thereby try to model the effects of particular
suggested changes.


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread MigMit

On Aug 18, 2012, at 12:35 AM, Bryan O'Sullivan b...@serpentine.com wrote:

 We already have a simple versioning scheme for which, despite it being easy 
 to grasp, we have amply demonstrated that we cannot make it work well, 
 because it has emergent properties that cause it to not scale well across a 
 large community.

Well, I think that the main reason for this failure is that despite being easy 
to grasp, this scheme doesn't really reflect the reality. It seems to be chosen 
arbitrarily.

 Any vastly more complicated and detailed versioning scheme has a huge burden 
 to prove that it won't collapse dramatically more quickly. (Frankly, I think 
 that anything involving specify every detail of your known dependencies is 
 dead on arrival from a practical standpoint: it's way too much work.)

That's not true. All this work can be automated — if you're the developer, 
you've certainly compiled the code yourself, and the compiler knows what was 
imported from other packages. The only problem is to make it save this 
information into a specific file.


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Brandon Allbery
On Fri, Aug 17, 2012 at 4:35 PM, Bryan O'Sullivan b...@serpentine.comwrote:

 Any vastly more complicated and detailed versioning scheme has a huge
 burden to prove that it won't collapse dramatically more quickly. (Frankly,
  I think that anything involving "specify every detail of your known
  dependencies" is dead on arrival from a practical standpoint: it's way too
 much work.)


If you do it in terms of hashed signatures, you can make the compiler and
toolchain do the work for you.  The problem with version numbers is that it
*is* manual.  It can't catch behavioral changes, though; you probably need
some kind of epoch override for that in the case where it doesn't come with
corresponding API changes.

The reason it's never done is that you can't really do it with C or C++.
 We can mostly avoid that (the FFI is an issue, but it is anyway:  cabal
can't really check C stuff, unless you use pkg-config, which has two
operating modes:  exact versions or no versioning).

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Michael Sloan
I agree that Haskell's design gives us a good leg up on the problem of
acquiring and comparing APIs. However, I don't think that this
manifest solution really buys us enough to justify the complexity.

There're also some specific, perhaps resolvable, but unsightly problems, which
I outline here:
http://www.reddit.com/r/haskell/comments/ydtl9/beyond_package_version_policies/c5uro04

(also another pitch for my proposed solution to this variety of problems)

-mgsloan

On Fri, Aug 17, 2012 at 2:34 PM, Brandon Allbery allber...@gmail.com wrote:
 On Fri, Aug 17, 2012 at 4:35 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:

 Any vastly more complicated and detailed versioning scheme has a huge
 burden to prove that it won't collapse dramatically more quickly. (Frankly,
  I think that anything involving "specify every detail of your known
  dependencies" is dead on arrival from a practical standpoint: it's way too
 much work.)


 If you do it in terms of hashed signatures, you can make the compiler and
 toolchain do the work for you.  The problem with version numbers is that it
 *is* manual.  It can't catch behavioral changes, though; you probably need
 some kind of epoch override for that in the case where it doesn't come with
 corresponding API changes.

 The reason it's never done is that you can't really do it with C or C++.  We
 can mostly avoid that (the FFI is an issue, but it is anyway:  cabal can't
 really check C stuff, unless you use pkg-config, which has two operating
 modes: exact versions or no versioning).

 --
 brandon s allbery  allber...@gmail.com
 wandering unix systems administrator (available) (412) 475-9364 vm/sms


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-17 Thread Gershom Bazerman

On 8/16/12 11:33 PM, wren ng thornton wrote:

[1] Parsec 2 vs 3, for a very long time
[2] mtl 1 vs 2, for a brief interim
[3] John Lato's iteratee <=0.3 vs >=0.4, for legacy code
...


I think this is a great point! As others have said, maintainers 
typically, but not always, know when their code is likely to break 
client libraries and software. But super-major numbers (i.e. the x of 
x.y) get bumped for other reasons, and sometimes sub-major (majorette?) 
numbers (i.e. the y of x.y) get bumped for massively breaking changes as 
well (which is in perfect accord with the PVP).


One other solution would be to introduce a new, optional (at least at 
first) field in cabal files -- perhaps named something like 
"api-version" or "api-epoch". This is just an integer, and it just gets 
bumped on massively breaking changes likely to force all client code to 
adapt. That way I can specify a traditional package lower bound with a 
real version number, and also specify (optionally, at least at first) an 
upper bound of an api-epoch. Most of my packages have never 
experienced an api-epoch event, and many likely won't ever. Most of 
their dependencies -- likewise.
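To make that concrete, such a field might look something like this (purely hypothetical syntax -- neither an "api-epoch" field nor epoch-qualified bounds exist in cabal today; all names here are invented):

```cabal
-- Hypothetical sketch only: no such field or constraint syntax exists.
name:          my-library
version:       1.4.2
api-epoch:     2   -- bumped only on massively breaking API changes

library
  build-depends:
    -- traditional version lower bound, with the upper bound expressed
    -- as an epoch rather than as a guessed future version number
    base >= 4.5, bytestring >= 0.9 (api-epoch < 3)
```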


At the cost of some extra (optional) annotations, this gives us a sort 
of compromise between the current, very painful, situation and the 
no-upper-bound situation where occasionally an epoch event breaks an 
enormous chunk of the ecosystem.


Cheers,
Gershom

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Daniel Trstenjak

On Wed, Aug 15, 2012 at 03:54:04PM -0700, Michael Sloan wrote:
 Upper bounds are a bit of a catch-22 when it comes to library authors evolving
 their APIs:
 
 1) If library clients aren't encouraged to specify which version of the
exported API they target, then changing APIs can lead to opaque compile
errors (without any information about which API is intended).  This could
lead the client to need to search for the appropriate version of the
library.

Given a version number A.B.*, most packages seem to mostly increase B or
lower parts of the version number.

If an upper bound is missing, then cabal could use any package in the
range A.*.*.

If an author wants to make breaking changes to his API, then he could
indicate this by increasing A.
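Concretely, under that scheme (hypothetical behaviour -- this is not what cabal actually does today), a bare lower bound would be read as implicitly capped at the next A:

```cabal
-- Hypothetical: cabal does not currently infer upper bounds this way.
build-depends: text >= 0.11
-- would be treated as equivalent to:
build-depends: text >= 0.11 && < 1
```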

I've nothing against your proposal, I just don't think that it will be
done that soon.


Greetings,
Daniel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Ketil Malde
Bryan O'Sullivan b...@serpentine.com writes:

 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified *only when there is a
 known problem with a new version* of a depended-upon package.

Another advantage to this is that it's not always clear what constitutes
an API change.  I had to put an upper bound on binary, since 0.5
introduced laziness changes that broke my program.  (I later got some
help to implement a workaround, but binary-0.4.4 is still substantially
faster).  Understandably, the authors didn't see this as a breaking API
change.

So, +1.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
I am tentatively in agreement that upper bounds are causing more
problems than they are solving.  However, I want to suggest that
perhaps the more fundamental issue is that Cabal asks the wrong person
to answer questions about API stability.  As a package author, when I
release a new version, I know perfectly well what incompatible changes
I have made to it... and those might include, for example:

1. New modules, exports or instances... low risk
2. Changes to less frequently used, advanced, or internal APIs...
moderate risk
3. Completely revamped commonly used interfaces... high risk

Currently *all* of these categories have the potential to break
builds, so require the big hammer of changing the first-dot version
number.  I feel like I should be able to convey this level of risk,
though... and it should be able to be used by Cabal.  So, here's a
proposal just to toss out there; no idea if it would be worth the
complexity or not:

A. Cabal files should get a new Compatibility field, indicating the
level of compatibility from the previous release: low, medium, high,
or something like that, with definitions for what each one means.

B. Version constraints should get a new syntax:

bytestring ~ 0.10.* (allow later versions that indicate low or
moderate risk)
bytestring ~~ 0.10.* (allow later versions with low risk; we use
the dark corners of this one)
bytestring == 0.10.* (depend 100% on 0.10, and allow nothing else)

Of course, this adds a good bit of complexity to the constraint
solver... but not really.  It's more like a pre-processing pass to
replace fuzzy constraints with precise ones.
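Putting A and B together, a package using these proposed (entirely hypothetical -- nothing like them exists in Cabal) fields and operators might look like:

```cabal
-- Hypothetical sketch: neither the "compatibility" field nor the
-- ~ / ~~ constraint operators exist in Cabal.
name:          my-package
version:       0.3.0
compatibility: medium   -- risk level relative to the previous release

library
  build-depends:
    bytestring ~  0.10.*,  -- also allow later low/moderate-risk releases
    binary     ~~ 0.5.*,   -- also allow later low-risk releases only
    containers == 0.4.*    -- this major version and nothing else
```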

-- 
Chris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Twan van Laarhoven

On 16/08/12 14:07, Chris Smith wrote:

As a package author, when I
release a new version, I know perfectly well what incompatible changes
I have made to it... and those might include, for example:

1. New modules, exports or instances... low risk
2. Changes to less frequently used, advanced, or internal APIs...
moderate risk
3. Completely revamped commonly used interfaces... high risk


Would adding a single convenience function be low or high risk? You say it is 
low risk, but it still risks breaking a build if a user has defined a function 
with the same name. I think the only meaningful distinctions you can make are:
  1. No change to the public API at all: user code is guaranteed to compile
and work if it did so before. (Perhaps new modules could also fall under
this category, I'm not sure.)
  2. Changes to exports, instances, modules, types, etc., but with the
guarantee that if it compiles, it will be correct.
  3. Changes to functionality, which require the user to reconsider all
code: even if it compiles, it might be wrong.


For the very common case 2, the best solution is to just go ahead and try to 
compile it.



A. Cabal files should get a new Compatibility field, indicating the
level of compatibility from the previous release: low, medium, high,
or something like that, with definitions for what each one means.


You would need to indicate how large the change is compared to a certain 
previous version. Moderate change compared to 0.10, large change compared to 0.9.



B. Version constraints should get a new syntax:

 bytestring ~ 0.10.* (allow later versions that indicate low or
moderate risk)
 bytestring ~~ 0.10.* (allow later versions with low risk; we use
the dark corners of this one)
 bytestring == 0.10.* (depend 100% on 0.10, and allow nothing else)

Of course, this adds a good bit of complexity to the constraint
solver... but not really.  It's more like a pre-processing pass to
replace fuzzy constraints with precise ones.



Perhaps it would be cleaner if you specified what parts of the API you depend 
on, instead of an arbitrary distinction between 'internal' and 'external' parts. 
From cabal's point of view the best solution would be to have a separate 
package for the internals. Then the only remaining distinction is between 
'breaking' and 'non-breaking' changes. The current policy is to rely on major 
version numbers. But this could instead be made explicit: A cabal package should 
declare what API version of itself it is mostly-compatible with.


To avoid forcing the creation of packages just for versioning, perhaps 
dependencies could be specified on parts of a package?


build-depends: bytestring.internal ~ 0.11

and the bytestring package would specify what parts have changed:

compatibility: bytestring.internal = 0.11, bytestring.external = 0.10

But these names introduce another problem: they will not be fine-grained enough 
until it is too late. You only know how the API is partitioned when, in the 
future, a part of it changes while another part does not.



Twan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
Twan van Laarhoven twa...@gmail.com wrote:
 Would adding a single convenience function be low or high risk? You say it
 is low risk, but it still risks breaking a build if a user has defined a
 function with the same name.

Yes, it's generally low-risk, but there is *some* risk.  Of course, it
could be high risk if you duplicate a Prelude function or a name that
you know is in use elsewhere in a related or core library... these
decisions would involve knowing something about the library space,
which package maintainers often do.

 I think the only meaningful distinction you can make are:

Except that the whole point is that this is *not* the only distinction
you can make.  It might be the only distinction with an exact
definition that can be checked by automated tools, but that doesn't
change the fact that when I make an incompatible change to a library
I'm maintaining, I generally have a pretty good idea of which kinds of
users are going to be fixing their code as a result.  The very essence
of my suggestion was that we accept the fact that we are working in
probabilities here, and empower package maintainers to share their
informed evaluation.  Right now, there's no way to provide that
information: the PVP is caught up in exactly this kind of legalism
that only cares whether a break is possible or impossible, without
regard to how probable it is.  The complaint that this new mechanism
doesn't have exactly such a black and white set of criteria associated
with it is missing the point.

-- 
Chris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Ivan Lazar Miljenovic
On 16 August 2012 20:50, Ketil Malde ke...@malde.org wrote:
 Bryan O'Sullivan b...@serpentine.com writes:

 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified *only when there is a
 known problem with a new version* of a depended-upon package.

 Another advantage to this is that it's not always clear what constitutes
 an API change.  I had to put an upper bound on binary, since 0.5
 introduced laziness changes that broke my program.  (I later got some
 help to implement a workaround, but binary-0.4.4 is still substantially
 faster).  Understandably, the authors didn't see this as a breaking API
 change.

Except 0.4 -> 0.5 _is_ a major version bump according to the PVP.


 So, +1.

 -k
 --
 If I haven't seen further, it is by standing in the footprints of giants

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread timothyhobbs

So that we are using concrete examples, here is an example of a change that
really shouldn't break any package:

https://github.com/timthelion/threadmanager/commit/c23e19cbe78cc6964f23fdb90b7029c5ae54dd35

The exposed functions are the same.  The behavior is changed.  But as the
committer of the change, I cannot imagine that it would break any currently
working code.

There is another issue though.  With this kind of change, there is no reason
for a package which was written for the old version of the library to be
built with the new version.  If I am correct that this change changes
nothing for currently working code, then why should an old package be built
with the newer version?

The advantage in this case is merely that we want to prevent version
duplication.  We don't want to waste disk space by installing every possible
iteration of a library.

I personally think that disk space is so cheap that this last consideration
is not so important.  If there are packages that only build with old
versions of GHC, and old libraries, why can we not just seamlessly install
them?  One problem is if we want to use those old libraries with new code.
Take the example of Python 2 vs Python 3.  Yes, we can seamlessly install
Python 2 libraries, even though we use Python 3 normally, but we cannot MIX
Python 2 libraries with Python 3 libraries.

Maybe we could make Haskell linkable objects smart enough that we COULD mix
old with new?  That sounds complicated.

I think, though, that Michael Sloan is onto something with his idea of
compatibility layers.  I think that if we could write simple dictionary
packages that would translate old API calls to new ones, we could use old
code without modification.  This would allow us to build old libraries which
normally wouldn't be compatible with something in base using a base-old-to-
new dictionary package.  Then we could use these old libraries without
modification with new code.

It's important that this be possible from the side of the person USING the
library, and not the library author.  It's impossible to write software if
you spend all of your time waiting for someone else to update their
libraries.

Timothy





-- Original message --
From: Ivan Lazar Miljenovic ivan.miljeno...@gmail.com
Date: 16. 8. 2012
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends
On 16 August 2012 20:50, Ketil Malde ke...@malde.org wrote:
 Bryan O'Sullivan b...@serpentine.com writes:

 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified *only when there is a
 known problem with a new version* of a depended-upon package.

 Another advantage to this is that it's not always clear what constitutes
 an API change. I had to put an upper bound on binary, since 0.5
 introduced laziness changes that broke my program. (I later got some
 help to implement a workaround, but binary-0.4.4 is still substantially
 faster). Understandably, the authors didn't see this as a breaking API
 change.

Except 0.4 -> 0.5 _is_ a major version bump according to the PVP.


 So, +1.

 -k
 --
 If I haven't seen further, it is by standing in the footprints of giants

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



--
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Felipe Almeida Lessa
On Thu, Aug 16, 2012 at 10:01 AM, Chris Smith cdsm...@gmail.com wrote:
 Twan van Laarhoven twa...@gmail.com wrote:
 Would adding a single convenience function be low or high risk? You say it
 is low risk, but it still risks breaking a build if a user has defined a
 function with the same name.

 Yes, it's generally low-risk, but there is *some* risk.  Of course, it
 could be high risk if you duplicate a Prelude function or a name that
 you know is in use elsewhere in a related or core library... these
 decisions would involve knowing something about the library space,
 which package maintainers often do.

If you import qualified then adding functions will never break anything.
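For example (a small illustrative snippet):

```haskell
import qualified Data.Map as M

-- A local 'insert' that happens to share its name with Data.Map's.
-- Because the import is qualified, there is no ambiguity now, and no
-- future addition to Data.Map's export list can ever introduce one.
insert :: Int -> String -> M.Map Int String -> M.Map Int String
insert = M.insert
```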

Cheers,

-- 
Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Yitzchak Gale
Bryan O'Sullivan wrote:
 A substantial number of the difficulties I am encountering are related to
 packages specifying upper bounds on their dependencies. This is a recurrent
 problem, and its source lies in the recommendations of the PVP itself

I think the PVP recommendation is good, though admittedly
one that in practice can be taken with a grain of salt.

Publishing supposedly stable and supported packages
with no upper bounds leads to persistent build problems
that are tricky to solve.

A good recent example is the encoding package.
This package depends on HaXML = 1.19, with
no upper bound. However, the current version of HaXML
is 1.23, and the encoding package cannot build
against it due to API changes. Furthermore, uploading
a corrected version of encoding wouldn't even
solve the problem completely. Anyone who already
has the current version of encoding will have
build problems as soon as they upgrade HaXML.
The cabal dependencies are lying, so there is no
way for cabal to know that encoding is the culprit.
Build problems caused by missing upper bounds
last forever; their importance fades only gradually.

Whereas it is trivially easy to correct an upper
bound that has become obsolete, and once you
fix it, it's fixed.

For actively maintained packages, I think the
problem is that package maintainers don't find
out promptly that an upper bound needs to be
bumped. One way to solve that would be a
simple bot that notifies the package maintainer
as soon as an upper bound becomes out-of-date.

For unresponsive package maintainers or
unmaintained packages, it would be helpful to
have some easy temporary fix mechanism as
suggested by Joachim.

Joachim also pointed out the utility of upper bounds
for platform packaging.

Why throw away much of the robustness of
the package versioning system just because
of a problem we are having with these trivially
easy upper-bound bumps?  Let's just find a
solution for the problem at hand.

Thanks,
Yitz

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread dag.odenh...@gmail.com
On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.comwrote:

 A benign change will obviously have no visible effect, while a compilation
 failure is actually *better* than a depsolver failure, because it's more
 informative.


But with upper bounds you give Cabal a chance to try and install a
supported version, thus avoiding failure altogether.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Brent Yorgey
On Thu, Aug 16, 2012 at 05:30:07PM +0300, Yitzchak Gale wrote:
 
 For actively maintained packages, I think the
 problem is that package maintainers don't find
 out promptly that an upper bound needs to be
 bumped. One way to solve that would be a
 simple bot that notifies the package maintainer
 as soon as an upper bound becomes out-of-date.

This already exists:

  http://packdeps.haskellers.com/
 
-Brent

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Joey Adams
On Wed, Aug 15, 2012 at 3:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:
 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified only when there is a known
 problem with a new version of a depended-upon package.

I, too, agree.  Here is my assortment of thoughts on the matter.

Here's some bad news: with cabal 1.14 (released with Haskell Platform
2012.2), cabal init defaults to bounds like these:

  build-depends:   base ==4.5.*, bytestring ==0.9.*, http-types ==0.6.*

Also, one problem with upper bounds is that they often backfire.  If
version 0.2 of your package does not have upper bounds, but 0.2.1 does
(because you found out about a breaking upstream change), users who
try to install your package may get 0.2 instead of the latest, and
still get the problem you were trying to shield against.

A neat feature would be a cabal option to ignore upper bounds.  With
--ignore-upper-bounds, cabal would select the latest version of
everything, and print a list of packages with violated upper bounds.

-Joey

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread wren ng thornton

On 8/15/12 11:02 PM, MightyByte wrote:

One tool-based way to help with this problem would
be to add a flag to Cabal/cabal-install that would cause it to ignore
upper bounds.


I'd much rather have a distinction between hard upper bounds (known to 
fail with) vs soft upper bounds (tested with).


Soft upper bounds are good for future proofing, both short- and 
long-range. So ignoring soft upper bounds is all well and good if things 
still work.


However, there are certainly cases where we have hard upper 
bounds[1][2][3], and ignoring those is not fine. Circumventing hard 
upper bounds should require altering the .cabal file, given as getting 
things to compile will require altering the source code as well. Also, 
hard upper bounds are good for identifying when there are 
semantics-altering changes not expressed in the type signatures of an 
API. Even if relaxing the hard upper bound could allow the code to 
compile, it is not guaranteed to be correct.


The problem with the current policy is that it mandates hard upper 
bounds as a solution to the problem of libraries not specifying soft 
upper bounds. This is indeed a tooling problem, but let's identify the 
problem for what it is: not all upper bounds are created equally, and 
pretending they are only leads to confusion and pain.
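One could imagine surfacing the distinction directly in the dependency syntax (hypothetical notation only -- cabal has no such operators and treats all bounds identically):

```cabal
-- Hypothetical notation for distinguishing the two kinds of bound.
build-depends:
  parsec   >= 3.1 && <? 3.2,  -- soft bound: merely the latest tested version
  iteratee >= 0.4 && <! 0.5   -- hard bound: known to fail with 0.5
```

A solver flag for relaxing bounds could then loosen `<?` while always respecting `<!`.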



[1] Parsec 2 vs 3, for a very long time
[2] mtl 1 vs 2, for a brief interim
[3] John Lato's iteratee <=0.3 vs >=0.4, for legacy code
...

--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Bryan O'Sullivan
Hi, folks -

I'm sure we are all familiar with the phrase "cabal dependency hell" at
this point, as the number of projects on Hackage that are intended to hack
around the problem slowly grows.

I am currently undergoing a fresh visit to that unhappy realm, as I try to
rebuild some of my packages to see if they work with the GHC 7.6 release
candidate.

A substantial number of the difficulties I am encountering are related to
packages specifying upper bounds on their dependencies. This is a recurrent
problem, and its source lies in the recommendations of the PVP itself
(problematic phrase highlighted in bold):

When publishing a Cabal package, you should ensure that your dependencies
 in the build-depends field are accurate. This means specifying not only
 lower bounds, *but also upper bounds* on every dependency.
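Concretely, a dependency list following that recommendation looks something like:

```cabal
build-depends:
  base       >= 4.5 && < 4.6,
  bytestring >= 0.9 && < 0.10
```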


I understand that the intention behind requiring tight upper bounds was
good, but in practice this has worked out terribly, leading to depsolver
failures that prevent a package from being installed, when everything goes
smoothly with the upper bounds relaxed. The default response has been for a
flurry of small updates to packages in which the upper bounds are loosened,
thus guaranteeing that the problem will recur in a year or less. This is
neither sensible, fun, nor sustainable.

In practice, when an author bumps a version of a depended-upon package, the
changes are almost always either benign, or will lead to compilation
failure in the depending-upon package. A benign change will obviously have
no visible effect, while a compilation failure is actually *better* than a
depsolver failure, because it's more informative.

This leaves the nasty-but-in-my-experience-rare case of runtime failures
caused by semantic changes. In these instances, a downstream package should
*reactively* add an upper bound once a problem is discovered.

I propose that the sense of the recommendation around upper bounds in the
PVP be reversed: upper bounds should be specified *only when there is a
known problem with a new version* of a depended-upon package.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Johan Tibell
On Wed, Aug 15, 2012 at 12:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:
 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified only when there is a known
 problem with a new version of a depended-upon package.

This argument precisely captures my feelings on this subject. I will
be removing upper bounds next time I make releases of my packages.

-- Johan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Brandon Allbery
On Wed, Aug 15, 2012 at 3:57 PM, Johan Tibell johan.tib...@gmail.comwrote:

 On Wed, Aug 15, 2012 at 12:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
  I propose that the sense of the recommendation around upper bounds in the
  PVP be reversed: upper bounds should be specified only when there is a
 known
  problem with a new version of a depended-upon package.

 This argument precisely captures my feelings on this subject. I will
 be removing upper bounds next time I make releases of my packages.


So we are certain that the rounds of failures that led to their being
*added* will never happen again?

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Bryan O'Sullivan
On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.comwrote:


 So we are certain that the rounds of failures that led to their being
 *added* will never happen again?


Of course I am sure that problems will arise as a result of recommending
that upper bounds be added reactively; didn't I say as much? I expect that
to be a much lesser problem than the current situation.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Johan Tibell
On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com wrote:
 So we are certain that the rounds of failures that led to their being
 *added* will never happen again?

It would be useful to have some examples of these. I'm not sure we had
any when we wrote the policy (but Duncan would know more), but rather
reasoned our way to the current policy by saying that things can
theoretically break if we don't have upper bounds, therefore we need
them.

-- Johan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread David Thomas
Would it make sense to have a known-to-be-stable-though soft upper bound
added proactively, and a known-to-break-above hard bound added reactively,
so people can loosen gracefully as appropriate?
On Aug 15, 2012 1:45 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com
 wrote:
  So we are certain that the rounds of failures that led to their being
  *added* will never happen again?

 It would be useful to have some examples of these. I'm not sure we had
 any when we wrote the policy (but Duncan would know more), but rather
 reasoned our way to the current policy by saying that things can
 theoretically break if we don't have upper bounds, therefore we need
 them.

 -- Johan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Bryan O'Sullivan
On Wed, Aug 15, 2012 at 1:50 PM, David Thomas davidleotho...@gmail.comwrote:

 Would it make sense to have a known-to-be-stable-though soft upper bound
 added proactively, and a known-to-break-above hard bound added reactively,
 so people can loosen gracefully as appropriate?

I don't think so. It adds complexity, but more importantly it's usual for
the existing upper bounds to refer to versions that don't exist at the time
of writing (and hence can't be known to be stable).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Michael Blume
 it's usual for the existing upper bounds to refer to versions that don't 
 exist at the time of writing (and hence can't be known to be stable).

Well, known to be stable given semantic versioning, then.

http://semver.org/

On Wed, Aug 15, 2012 at 1:55 PM, Bryan O'Sullivan b...@serpentine.com wrote:
 On Wed, Aug 15, 2012 at 1:50 PM, David Thomas davidleotho...@gmail.com
 wrote:

 Would it make sense to have a known-to-be-stable-though soft upper bound
 added proactively, and a known-to-break-above hard bound added reactively,
 so people can loosen gracefully as appropriate?

 I don't think so. It adds complexity, but more importantly it's usual for
 the existing upper bounds to refer to versions that don't exist at the time
 of writing (and hence can't be known to be stable).


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Carter Schonwald
As someone who recurrently is nudging a large number of maintainers every
major ghc release to bump their bounds, I favor the no upper bounds
approach! :)

plus the whole improving ecosystem of build-bot tools which play nice with
cabal et al. means that in principle we could debug missing upper bounds
via a sort of temporal bisection over the event stream of maximum
available versions at a given time. (but that piece isn't that important)

more pragmatically, cabal when used with hackage doesn't let you override
version constraints, it only lets you add additional constraints. This
makes sense if we assume that the library author is saying things will
definitely break if you violate them, but in practice upper bounds are
made-up guesstimation.

Yes, in principle semantic versioning means the bounds shouldn't create a
problem, but in the hackage ecosystem, when dealing with intelligently
engineered libs that are regularly maintained, version upper bounds create
more problems than they solve.

just my two cents. (yes yes yes, please drop upper bounds!)

cheers
-Carter

On Wed, Aug 15, 2012 at 5:04 PM, Michael Blume blume.m...@gmail.com wrote:

  it's usual for the existing upper bounds to refer to versions that don't
 exist at the time of writing (and hence can't be known to be stable).

 Well, known to be stable given semantic versioning, then.

 http://semver.org/

 On Wed, Aug 15, 2012 at 1:55 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
  On Wed, Aug 15, 2012 at 1:50 PM, David Thomas davidleotho...@gmail.com
  wrote:
 
  Would it make sense to have a known-to-be-stable-though soft upper bound
  added proactively, and a known-to-break-above hard bound added
 reactively,
  so people can loosen gracefully as appropriate?
 
  I don't think so. It adds complexity, but more importantly it's usual for
  the existing upper bounds to refer to versions that don't exist at the
 time
  of writing (and hence can't be known to be stable).


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Lorenzo Bolla
I definitely agree!
http://www.reddit.com/r/haskell/comments/x4knd/what_is_the_reason_for_haskells_cabal_package/

L.


On Wed, Aug 15, 2012 at 12:38:33PM -0700, Bryan O'Sullivan wrote:
 Hi, folks -
 
 I'm sure we are all familiar with the phrase "cabal dependency hell" at this
 point, as the number of projects on Hackage that are intended to hack around
 the problem slowly grows.
 
 I am currently undergoing a fresh visit to that unhappy realm, as I try to
 rebuild some of my packages to see if they work with the GHC 7.6 release
 candidate.
 
 A substantial number of the difficulties I am encountering are related to
 packages specifying upper bounds on their dependencies. This is a recurrent
 problem, and its source lies in the recommendations of the PVP itself
 (problematic phrase highlighted in bold):
 
 
 When publishing a Cabal package, you should ensure that your dependencies
 in the build-depends field are accurate. This means specifying not only
 lower bounds, but also upper bounds on every dependency.
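In .cabal terms, that recommendation asks for entries of this shape (a sketch with illustrative versions, not any real package's constraints):

```cabal
build-depends:
    base       >= 4    && < 4.6,   -- lower and upper bound, per the PVP
    containers >= 0.4  && < 0.5,
    text       >= 0.11 && < 0.12
```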
 
 
 I understand that the intention behind requiring tight upper bounds was good,
 but in practice this has worked out terribly, leading to depsolver failures
 that prevent a package from being installed, when everything goes smoothly 
 with
 the upper bounds relaxed. The default response has been for a flurry of small
 updates to packages in which the upper bounds are loosened, thus guaranteeing
 that the problem will recur in a year or less. This is neither sensible, fun,
 nor sustainable.
 
 In practice, when an author bumps a version of a depended-upon package, the
 changes are almost always either benign, or will lead to compilation failure 
 in
 the depending-upon package. A benign change will obviously have no visible
 effect, while a compilation failure is actually better than a depsolver
 failure, because it's more informative.
 
 This leaves the nasty-but-in-my-experience-rare case of runtime failures 
 caused
 by semantic changes. In these instances, a downstream package should 
 reactively
  add an upper bound once a problem is discovered.
 
 I propose that the sense of the recommendation around upper bounds in the PVP
 be reversed: upper bounds should be specified only when there is a known
 problem with a new version of a depended-upon package.
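Under that reversed recommendation, a dependency list would look something like this (package names and versions are illustrative; broken-dep is a stand-in for a dependency with a known-bad release):

```cabal
build-depends:
    base       >= 4,               -- no upper bound by default
    bytestring >= 0.9,
    broken-dep >= 1.0 && < 1.2     -- reactive bound: 1.2 is known to break us
```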



-- 
Lorenzo Bolla
http://lbolla.info



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Brandon Allbery
On Wed, Aug 15, 2012 at 4:44 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com
 wrote:
  So we are certain that the rounds of failures that led to their being
  *added* will never happen again?

 It would be useful to have some examples of these. I'm not sure we had


Upper package versions did not originally exist, and nobody wanted them.
 You can see the result in at least half the packages on Hackage:  upper
versions came in when base got broken up, and when bytestring was merged
into base --- both of which caused massive breakage that apparently even
the people around at the time and involved with it no longer remember.

I'm not going to argue the point though; ignore history and remove them if
you desire.

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Carter Schonwald
no one is disputing that there are conditional changes in dependencies
depending on library versions.

an interesting intermediate point would be to have a notion of tested-with
constraints in cabal, and to engineer cabal to support a
--withTestedConstraints flag as a simple, composable way of
constructing build plans.

at the end of the day, its an engineering problem coupled with a social
factors problem. Those are hard :)


On Wed, Aug 15, 2012 at 5:44 PM, Brandon Allbery allber...@gmail.com wrote:

 On Wed, Aug 15, 2012 at 4:44 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com
 wrote:
  So we are certain that the rounds of failures that led to their being
  *added* will never happen again?

 It would be useful to have some examples of these. I'm not sure we had


 Upper package versions did not originally exist, and nobody wanted them.
  You can see the result in at least half the packages on Hackage:  upper
 versions came in when base got broken up, and when bytestring was merged
 into base --- both of which caused massive breakage that apparently even
 the people around at the time and involved with it no longer remember.

 I'm not going to argue the point though; ignore history and remove them if
 you desire.

 --
 brandon s allbery  allber...@gmail.com
 wandering unix systems administrator (available) (412) 475-9364 vm/sms




Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Michael Sloan
Upper bounds are a bit of a catch-22 when it comes to library authors evolving
their APIs:

1) If library clients aren't encouraged to specify which version of the
   exported API they target, then changing APIs can lead to opaque compile
   errors (without any information about which API is intended).  This could
   lead the client to need to search for the appropriate version of the
   library.

2) If library clients are encouraged to specify which versions of the
   exported API they target, then changing the API breaks all of the
   clients.

There are a few hacky ways to do #1 without having the errors be so opaque:

1) Start a tradition of commenting the cabal dependencies with the version of
   the package that worked for the author.

2) Build in support for these known-good versions in cabal, perhaps
   generated on sdist/release, or with a particular build flag.  (they don't
   need to be stored in the .cabal file)

3) Attempt to post-process GHC error messages to guess when the issue might be
   caused by the package version being different according to the PVP.  This
   could work alright for scoping but wouldn't work well for type errors
   (which matter more)
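Of these, #1 needs no tooling at all; a sketch of the convention, with hypothetical versions:

```cabal
build-depends:
    base       >= 4,    -- tested with base-4.5.0.0
    bytestring >= 0.9   -- tested with bytestring-0.9.2.1
```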

I like the idea of doing automated upper-bound-determination!  That would be
very convenient. It's a bit tricky though - should tests be included?

I think the most ideal solution is to attack the core problem: things break
when dependencies change their interface.  This is pretty expected in the
development world at large, but I think that Haskell can do better.

The main idea is that package developers should be free to update their API as
they realize that new names or interfaces are better.  Currently there's the
problem that the APIs that actually get used subsequently stagnate due to fear
of breakage.

The solution that I think makes the most sense is to start exporting modules
which express the old interface in terms of the new.  What's interesting about
this is that most non-semantic changes to things other than ADTs and
typeclasses can already be expressed in plain Haskell code.

This idea, among other things, inspired the instance templates proposal,
which is somewhat related to superclass default instances.  With this language
extension, it would be possible to express compatibility layers for instance
definitions, something that is impossible with Superclass Default Instances.

https://github.com/mgsloan/instance-templates

I've also got a start on a utility for extracting API signatures from
packages.  Currently it just pretty prints the API in a fashion that attempts
to be amenable to textual diffing:

https://github.com/mgsloan/api-compat/
https://github.com/mgsloan/api-compat/blob/master/examples/diagrams-core.api.diff

The intent is to make the tool interactive, giving the user a chance to let
the tool know which exports / modules have been renamed.  After the user
provides this information, it should be possible to generate almost all of the
compatibility code.

In order to make it convenient to use these compatibility modules, we'd want
to have some cabal-invoked code generation that would generate proxy modules
that re-export the appropriate version. This could all happen in a separate
hs-source-dir.

The next step in this toolchain is something that's very hard to do nicely,
because it can change code layout: automated rewriting of user code to target
the new version (this is equivalent to inlining the compatibility module
definitions).  However, even a tool that would take you to all of the places
that need changing would be invaluable.

Wouldn't it be excellent if the Haskell eco-system managed something that no
other language affords?

* Automatic refactoring to target new API versions

* Expression of these refactorings in the language itself

* Rigorous, structured documentation of inter-version changes.  This'd also
  provide a nice place to put haddocks with further change information.

-Michael

On Wed, Aug 15, 2012 at 2:34 PM, Carter Schonwald
carter.schonw...@gmail.com wrote:
 As someone who recurrently is nudging a large number of maintainers every
 major ghc release to bump their bounds, I favor the no upper bounds
 approach! :)

 plus the whole improving ecosystem of build bot tools which play nice with
 cabal et al that are cropping up mean that in principal we could debug
 missing upper bounds via sort of temporal bisecting over the event stream
 of maximum available versions at a given time to sort that. (but that piece
 isn't that important)

 more pragmatically, cabal when used with hackage doesn't let you override
 version constraints, it just lets you add additional constraints. This makes
 sense if we assume that the library author is saying things will definitely
 break if you violate them, but in practice upper bounds are made up
 guesstimation.

 YES, its presumably semantic versioning doesn't create a problem, but with
 the hackage eco system, when dealing with intelligently engineering 

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Brandon Allbery
On Wed, Aug 15, 2012 at 6:46 PM, Carter Schonwald 
carter.schonw...@gmail.com wrote:

 no one is disputing that there are conditional changes in dependencies
 depending on library versions.


Indeed.  But the ghc release that split up base broke cabalised packages
with no warning to users until they failed to compile.  Upper bounds were
put in place to avoid that kind of breakage in the future.

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Ivan Lazar Miljenovic
On 16 August 2012 08:55, Brandon Allbery allber...@gmail.com wrote:
 On Wed, Aug 15, 2012 at 6:46 PM, Carter Schonwald
 carter.schonw...@gmail.com wrote:

 no one is disputing that there are conditional changes in dependencies
 depending on library versions.


 Indeed.  But the ghc release that split up base broke cabalised packages
 with no warning to users until they failed to compile.  Upper bounds were
 put in place to avoid that kind of breakage in the future.

There's also the case where people blindly put something like base < 10
in the .cabal file, and then it broke on the next GHC release.
This happened with ghc-core-0.5: it completely failed to build with
base-4 (and because cabal-install kept defaulting packages to use
base-3, I think a lot of people missed cases like this and blindly
thought it worked).
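To make the contrast concrete (versions illustrative):

```cabal
-- A blind bound: looks like an upper bound, but promises nothing.
build-depends: base < 10

-- A PVP-meaningful bound: excludes exactly the next major version.
build-depends: base >= 3 && < 4
```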

I like having upper bounds on version numbers... right up until people
abuse them.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Conrad Parker
On 16 August 2012 03:38, Bryan O'Sullivan b...@serpentine.com wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell" at this
 point, as the number of projects on Hackage that are intended to hack around
 the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I try to
 rebuild some of my packages to see if they work with the GHC 7.6 release
 candidate.

Likewise ...

 A substantial number of the difficulties I am encountering are related to
 packages specifying upper bounds on their dependencies. This is a recurrent
 problem, and its source lies in the recommendations of the PVP itself
 (problematic phrase highlighted in bold):

I think part of the problem might be that some packages (like
bytestring, transformers?) have had their major version number
incremented despite being backwards-compatible. Perhaps there are
incompatible changes, but most of the cabal churn I've seen recently
has involved incrementing the bytestring upper bound to 0.11 without
requiring any code changes to modules using Data.ByteString.

IMO it'd be better to include a separate versioning entry like
libtool's version-info, consisting of Current:Revision:Age
(http://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html),
and leave the published version for human, marketing purposes.

I remember discussing this with Duncan at ICFP last year, and he
suggested that the existing PVP is equivalent to the libtool scheme in
that the major release should only be incremented if
backwards-compatibility breaks.

However I think people also expect to use the published version as a
kind of marketing, to indicate that the project has reached some
milestone or stability, or is part of some larger, separately
versioned group of packages (eg. new compiler or platform release).
The PVP pretty much ensures that incrementing a major version for such
reasons is going to break your package for all its users.

Conrad.



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread MightyByte
On Wed, Aug 15, 2012 at 9:19 PM, Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 16 August 2012 08:55, Brandon Allbery allber...@gmail.com wrote:
 Indeed.  But the ghc release that split up base broke cabalised packages
 with no warning to users until they failed to compile.  Upper bounds were
 put in place to avoid that kind of breakage in the future.

 I like having upper bounds on version numbers... right up until people
 abuse them.

I also tend to favor having upper bounds.  Obviously they impose a
cost, but it's not clear to me at all that getting rid of them is a
better tradeoff.  I've had projects that I put aside for a while only
to come back and discover that they would no longer build because I
hadn't put upper bounds on all my package dependencies.  With no upper
bounds, a package might not be very likely to break for incremental
version bumps, but eventually it *will* break.  And when it does it's
a huge pain to get it building again.  If I have put effort into
making a specific version of my package work properly today, I want it
to always work properly in the future (assuming that everyone obeys
the PVP).  I don't think it's unreasonable that some activation energy
be required to allow one's project to work with a new version of some
upstream dependency.

Is that activation energy too high right now?  Almost definitely.  But
that's a tool problem, not a problem with the existence of upper
bounds themselves.  One tool-based way to help with this problem would
be to add a flag to Cabal/cabal-install that would cause it to ignore
upper bounds.  (Frankly, I think it would also be great if
Cabal/cabal-install enforced upper version bounds automatically if
none were specified.)  Another approach that has been discussed is
detecting dependencies that are only used internally[1], and I'm sure
there are many other possibilities.  In short, I think we should be
moving more towards purely functional builds that reduce the chance
that external factors will break things, and it seems like removing
upper version bounds is a step in the other direction.

[1] 
http://cdsmith.wordpress.com/2011/01/21/a-recap-about-cabal-and-haskell-libraries/



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Brandon Allbery
On Wed, Aug 15, 2012 at 11:02 PM, MightyByte mightyb...@gmail.com wrote:

 be to add a flag to Cabal/cabal-install that would cause it to ignore
 upper bounds.  (Frankly, I think it would also be great if


Ignore, or at least treat them as being like flags... if the versions don't
converge with them, start relaxing them and retrying, then print a warning
about the versions it slipped and attempt the build.

That said, I'd be in favor of moving toward something based on ABI
versioning instead; package versions as used by the PVP are basically a
manual switch emulating that.  It's not generally done because other parts
of the toolchain (notably ld's shared object versioning) don't support it,
but given that Cabal has greater control over the build process for Haskell
programs it would be worth exploring having Cabal deal with it.  (This is
not something you can do with existing C or C++ toolchains; the smarts
would need to be in make.)

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-15 Thread Michael Snoyman
On Thu, Aug 16, 2012 at 5:38 AM, Conrad Parker con...@metadecks.org wrote:

 On 16 August 2012 03:38, Bryan O'Sullivan b...@serpentine.com wrote:
  Hi, folks -
 
  I'm sure we are all familiar with the phrase cabal dependency hell at
 this
  point, as the number of projects on Hackage that are intended to hack
 around
  the problem slowly grows.
 
  I am currently undergoing a fresh visit to that unhappy realm, as I try
 to
  rebuild some of my packages to see if they work with the GHC 7.6 release
  candidate.

 Likewise ...

  A substantial number of the difficulties I am encountering are related to
  packages specifying upper bounds on their dependencies. This is a
 recurrent
  problem, and its source lies in the recommendations of the PVP itself
  (problematic phrase highlighted in bold):

 I think part of the problem might be that some packages (like
 bytestring, transformers?) have had their major version number
 incremented even despite being backwards-compatible. Perhaps there are
 incompatible changes, but most of the cabal churn I've seen recently
 has involved incrementing the bytestring upper bound to 0.11 without
 requiring any code changes to modules using Data.ByteString.


In general, I've been taking the approach recently that we have two classes
of packages: some (like transformers and bytestring) have mostly-stable
APIs, and most code I write only relies on those APIs. If I'm just using
Data.ByteString for the ByteString type and a few functions like readFile
and map, it's highly unlikely that the next version will introduce some
breaking change. In those cases, I've been leaving off the upper bound
entirely.
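That two-class policy might be sketched like so (some-young-pkg is a hypothetical package with an unstable API; versions illustrative):

```cabal
build-depends:
    bytestring     >= 0.9,            -- stable API: no upper bound
    transformers   >= 0.2,            -- likewise
    some-young-pkg >= 0.3 && < 0.4    -- still evolving: keep the bound
```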

For other packages that haven't yet stabilized, I've still been keeping the
upper bound. In many cases, even that isn't necessary. I've tried removing
the upper bounds on those as well, but I almost always end up getting
someone filing a bug report that I left off some upper bound and therefore
a compile failed.

I agree with Bryan's argument, but I'd like to keep consistency for most
packages on Hackage. If the community goes in this direction, I'll go along
too.

Michael