Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Erik Hesselink
I am strongly against this, especially for packages in the platform.

If you fail to specify an upper bound, and I depend on your package,
your dependencies can break my package! For example, say I develop
executable A and I depend on library B == 1.0. Library B depends on
library C >= 0.5 (no upper bound). Now C 0.6 is released, which is
incompatible with B. This suddenly breaks my build, even though I have
not changed anything about my code or dependencies. This goes against
the 'robust' aspect mentioned as one of the properties of the Haskell
platform, and against the Haskell philosophy of correctness in
general.
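A hedged sketch of the two styles of bound at issue (using the B and C from the example above; the syntax is an ordinary Cabal build-depends field):

```
-- B.cabal, as released (no upper bound on C): any future C, including
-- an incompatible C 0.6, satisfies this and can enter A's build plan.
build-depends: C >= 0.5

-- B.cabal, following the PVP: only C 0.5.x is accepted, so the
-- dependency solver can never pick a breaking C 0.6.
build-depends: C >= 0.5 && < 0.6
```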

This is not an imaginary problem. At my company, we've run into these
problems numerous times already. Since we also have people who are not
experts at Cabal and the Haskell ecosystem building our software, this
can be very annoying. The fix is also not trivial: we can add a
dependency on a package we don't use to all our executables or we can
fork the library (B, in the example above) and add an upper bound/fix
the code. Both add a lot of complexity that we don't want. Add to that
the build failures and associated emails from CI systems like Jenkins.

I can see the maintenance burden you have, since we have to do the
same for our code. But until some Cabal feature is added to ignore
upper bounds or specify soft upper bounds, please follow the PVP, also
in this regard. It helps us maintain a situation where only our own
actions can break our software.

Erik

On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell" at this
 point, as the number of projects on Hackage that are intended to hack around
 the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I try to
 rebuild some of my packages to see if they work with the GHC 7.6 release
 candidate.

 A substantial number of the difficulties I am encountering are related to
 packages specifying upper bounds on their dependencies. This is a recurrent
 problem, and its source lies in the recommendations of the PVP itself
 (problematic phrase highlighted in bold):

 When publishing a Cabal package, you should ensure that your dependencies
 in the build-depends field are accurate. This means specifying not only
 lower bounds, but also upper bounds on every dependency.


 I understand that the intention behind requiring tight upper bounds was
 good, but in practice this has worked out terribly, leading to depsolver
 failures that prevent a package from being installed, when everything goes
 smoothly with the upper bounds relaxed. The default response has been for a
 flurry of small updates to packages in which the upper bounds are loosened,
 thus guaranteeing that the problem will recur in a year or less. This is
 neither sensible, fun, nor sustainable.

 In practice, when an author bumps a version of a depended-upon package, the
 changes are almost always either benign, or will lead to compilation failure
 in the depending-upon package. A benign change will obviously have no
 visible effect, while a compilation failure is actually better than a
 depsolver failure, because it's more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime failures
 caused by semantic changes. In these instances, a downstream package should
 reactively add an upper bound once a problem is discovered.

 I propose that the sense of the recommendation around upper bounds in the
 PVP be reversed: upper bounds should be specified only when there is a known
 problem with a new version of a depended-upon package.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




[Haskell-cafe] Compositional Compiler Construction, Oberon0 examples available

2012-08-20 Thread Doaitse Swierstra

On Aug 19, 2012, at 10:40 , Heinrich Apfelmus apfel...@quantentunnel.de wrote:

 Doaitse Swierstra wrote:
 Over the years we have been constructing a collection of Embedded
 Domain Specific Languages for describing compilers which are
 assembled from fragments which can be compiled individually. In this
 way one can gradually ``grow a language'' in a large number of small
 steps. The technique replaces things like macro extensions or
 Template Haskell; it has become feasible to just extend the language
 at hand by providing  extra modules. The nice thing is that existing
 code does not have to be adapted, nor has to be available nor has to
 be recompiled.
 Recently we have been using (and adapting) the frameworks such that
 we could create an entry in the ldta11 (http://ldta.info/tool.html)
 tool challenge, where one has to show how one's tools can be used to
 create a compiler for the Oberon0 language, which is used as a running
 example in Wirth's compiler construction book.
 We have uploaded our implementation to hackage at:
 http://hackage.haskell.org/package/oberon0.
 More information can be found at the wiki:
 http://www.cs.uu.nl/wiki/bin/view/Center/CoCoCo
 You may take a look at the various Gram modules to see how syntax is
 being defined, and at the various Sem modules to see how we use our
 first class attribute grammars to implement the static semantics
 associated with the various tasks of the challenge.
 We hope you like it, and comments are welcome,
 
 Awesome!
 
 I have a small question: Last I remember, you've mainly been using your UUAGC 
 preprocessor to write attribute grammars in Haskell, especially for UHC. Now 
 that you have first-class attribute grammars in Haskell (achievement 
 unlocked), what do you intend to do with the preprocessor? How do these two 
 approaches compare at the moment and where would you like to take them?
 
 
 Best regards,
 Heinrich Apfelmus

On the page http://www.cs.uu.nl/wiki/bin/view/Center/CoCoCo there is a link 
(http://www.fing.edu.uy/~mviera/papers/VSM12.pdf) to a paper we presented at 
LDTA (one of the ETAPS events) this spring. It explains how UUAGC can be used 
to generate first class compiler modules. 

We also have a facility for grouping attributes, so one can trade flexibility
for speed. The first-class approach stores lists of attributes as nested
cartesian products, access to which a clever compiler might be able to
optimize. This however would correspond to a form of specialisation, so you
can hardly say that we have really independent modules (as always, global
optimisation is never compositional). From the point of view of the
first-class approach such grouped non-terminals are seen as a single
composite non-terminal.

  Doaitse


 
 --
 http://apfelmus.nfshost.com
 
 




[Haskell-cafe] fclabels 0.5

2012-08-20 Thread Sergey Mironov
Hi. I'm porting old code, which uses fclabels 0.5. Old fclabels
define Iso typeclass as follows:

class Iso f where
  iso :: a :-: b -> f a -> f b
  iso (Lens a b) = osi (b <-> a)
  osi :: a :-: b -> f b -> f a
  osi (Lens a b) = iso (b <-> a)

Newer one defines iso:

class Iso (~>) f where
  iso :: Bijection (~>) a b -> f a ~> f b

instance Arrow (~>) => Iso (~>) (Lens (~>) f) where
  iso bi = arr ((\a -> lens (fw bi . _get a) (_set a . first (bw bi))) . unLens)

instance Arrow (~>) => Iso (~>) (Bijection (~>) a) where
  iso = arr . (.)

but no osi. I'm not a guru in categories, can you help me define osi?

Thanks
Sergey.



Re: [Haskell-cafe] fclabels 0.5

2012-08-20 Thread Erik Hesselink
Untested, but this should be about right:

osi (Bij f b) = iso (Bij b f)
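Spelled out with simplified, hypothetical stand-ins for the fclabels types (the real Bijection is arrow-polymorphic; here everything is specialised to plain functions), the answer is just iso applied to the flipped bijection:

```haskell
-- Simplified, hypothetical stand-in for fclabels' Bijection,
-- specialised from an arbitrary arrow to plain functions.
data Bij a b = Bij { fw :: a -> b, bw :: b -> a }

class Iso f where
  iso :: Bij a b -> f a -> f b
  -- osi falls out of iso by swapping the two directions.
  osi :: Bij a b -> f b -> f a
  osi (Bij f b) = iso (Bij b f)

-- Any Functor gives an instance by mapping the forward direction.
instance Iso Maybe where
  iso bi = fmap (fw bi)

main :: IO ()
main = print (osi (Bij show read) (Just "42") :: Maybe Int)
```

Running main prints Just 42: osi maps the backward direction (read) over the Maybe.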

Erik

On Mon, Aug 20, 2012 at 2:35 PM, Sergey Mironov ier...@gmail.com wrote:
 Hi. I'm porting old code, which uses fclabels 0.5. Old fclabels
 define Iso typeclass as follows:

 class Iso f where
   iso :: a :-: b -> f a -> f b
   iso (Lens a b) = osi (b <-> a)
   osi :: a :-: b -> f b -> f a
   osi (Lens a b) = iso (b <-> a)

 Newer one defines iso:

 class Iso (~>) f where
   iso :: Bijection (~>) a b -> f a ~> f b

 instance Arrow (~>) => Iso (~>) (Lens (~>) f) where
   iso bi = arr ((\a -> lens (fw bi . _get a) (_set a . first (bw bi))) . unLens)

 instance Arrow (~>) => Iso (~>) (Bijection (~>) a) where
   iso = arr . (.)

 but no osi. I'm not a guru in categories, can you help me define osi?

 Thanks
 Sergey.



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Chris Dornan
I think we should encourage stable build environments to know precisely
which package versions they have been using and to keep using them until
told otherwise. Even when the types and constraints all work out there is a
risk that upgraded packages will break. Everybody here wants cabal to just
install the packages without problem, but if you want to insulate yourself
from package upgrades surely sticking with proven combinations is the way to
go.
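One way to pin such a proven combination (a hypothetical fragment: the versions are invented, and project-local cabal.config support is assumed of the cabal-install in use) is to check a frozen constraint set into the repository:

```
-- cabal.config, checked in next to the .cabal file: every build
-- resolves to exactly these versions until the file is updated.
constraints: base == 4.5.0.0,
             bytestring == 0.9.2.1,
             text == 0.11.2.3
```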

Chris

-Original Message-
From: haskell-cafe-boun...@haskell.org
[mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Erik Hesselink
Sent: 20 August 2012 08:33
To: Bryan O'Sullivan
Cc: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends

I am strongly against this, especially for packages in the platform.

If you fail to specify an upper bound, and I depend on your package, your
dependencies can break my package! For example, say I develop executable A
and I depend on library B == 1.0. Library B depends on library C >= 0.5 (no
upper bound). Now C 0.6 is released, which is incompatible with B. This
suddenly breaks my build, even though I have not changed anything about my
code or dependencies. This goes against the 'robust' aspect mentioned as one
of the properties of the Haskell platform, and against the Haskell
philosophy of correctness in general.

This is not an imaginary problem. At my company, we've run into these
problems numerous times already. Since we also have people who are not
experts at Cabal and the Haskell ecosystem building our software, this can
be very annoying. The fix is also not trivial: we can add a dependency on a
package we don't use to all our executables or we can fork the library (B,
in the example above) and add an upper bound/fix the code. Both add a lot of
complexity that we don't want. Add to that the build failures and associated
emails from CI systems like Jenkins.

I can see the maintenance burden you have, since we have to do the same for
our code. But until some Cabal feature is added to ignore upper bounds or
specify soft upper bounds, please follow the PVP, also in this regard. It
helps us maintain a situation where only our own actions can break our
software.

Erik

On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com
wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase "cabal dependency hell" 
 at this point, as the number of projects on Hackage that are intended 
 to hack around the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I 
 try to rebuild some of my packages to see if they work with the GHC 
 7.6 release candidate.

 A substantial number of the difficulties I am encountering are related 
 to packages specifying upper bounds on their dependencies. This is a 
 recurrent problem, and its source lies in the recommendations of the 
 PVP itself (problematic phrase highlighted in bold):

 When publishing a Cabal package, you should ensure that your 
 dependencies in the build-depends field are accurate. This means 
 specifying not only lower bounds, but also upper bounds on every
dependency.


 I understand that the intention behind requiring tight upper bounds 
 was good, but in practice this has worked out terribly, leading to 
 depsolver failures that prevent a package from being installed, when 
 everything goes smoothly with the upper bounds relaxed. The default 
 response has been for a flurry of small updates to packages in which 
 the upper bounds are loosened, thus guaranteeing that the problem will 
 recur in a year or less. This is neither sensible, fun, nor sustainable.

 In practice, when an author bumps a version of a depended-upon 
 package, the changes are almost always either benign, or will lead to 
 compilation failure in the depending-upon package. A benign change 
 will obviously have no visible effect, while a compilation failure is 
 actually better than a depsolver failure, because it's more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime 
 failures caused by semantic changes. In these instances, a downstream 
 package should reactively add an upper bound once a problem is discovered.

 I propose that the sense of the recommendation around upper bounds in 
 the PVP be reversed: upper bounds should be specified only when there 
 is a known problem with a new version of a depended-upon package.



[Haskell-cafe] Wanted: Haskell binding for libbdd (buddy)

2012-08-20 Thread Johannes Waldmann
Are there any Haskell bindings for BDD libraries
(reduced ordered binary decision diagrams)?

E.g., it seems buddy is commonly used
http://packages.debian.org/squeeze/libbdd-dev
and it has an Ocaml binding.

Yes, there is http://hackage.haskell.org/package/obdd
but I need better performance (with the same API, ideally).

Thanks - J.W.

PS: I wonder where performance goes out the window  ...
I suspect  Map (Int,Int) whatever should really be
a hashtable but I don't like it in IO, it should be in ST?






Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Simon Marlow

On 15/08/2012 21:44, Johan Tibell wrote:

On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com wrote:

So we are certain that the rounds of failures that led to their being
*added* will never happen again?


It would be useful to have some examples of these. I'm not sure we had
any when we wrote the policy (but Duncan would know more), but rather
reasoned our way to the current policy by saying that things can
theoretically break if we don't have upper bounds, therefore we need
them.


I haven't read the whole thread (yet), but the main motivating example 
for upper bounds was when we split the base package (GHC 6.8) - 
virtually every package on Hackage broke.  Now at the time having upper 
bounds wouldn't have helped, because you would have got a depsolver 
failure instead of a type error.  But following the uproar about this we 
did two things: the next release of GHC (6.10) came with two versions of 
base, *and* we recommended that people add upper bounds.  As a result, 
packages with upper bounds survived the changes.


Now, you could argue that we're unlikely to do this again.  But the main 
reason we aren't likely to do this again is because it was so painful, 
even with upper bounds and compatibility libraries.  With better 
infrastructure and tools, *and* good dependency information, it should 
be possible to do significant reorganisations of the core packages.


As I said in my comments on Reddit[1], I'm not sure that removing upper 
bounds will help overall.  It removes one kind of failure, but 
introduces a new kind - and the new kind is scary, because existing 
working packages can suddenly become broken as a result of a change to a 
different package.  Will it be worse or better overall?  I have no idea. 
 What I'd rather see instead though is some work put into 
infrastructure on Hackage to make it easy to change the dependencies on 
existing packages.


Cheers,
Simon

[1] 
http://www.reddit.com/r/haskell/comments/ydkcq/pvp_upper_bounds_are_not_our_friends/c5uqohi




Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Erik Hesselink
Hub looks interesting, I'll have to try it out (though I'm not on an
RPM based distro). But isn't this the goal of things like semantic
versioning [0] and the PVP? To know that you can safely upgrade to a
bugfix release, and relatively safely to a minor release, but on a major
release, you have to take care?

Haskell makes it much easier to see if you can use a new major (or
minor) version of a library, since the type checker catches many (but
not all!) problems for you. However, this leads to libraries breaking
their API's much more easily, and that in turn causes the problems
voiced in this thread. However, fixing all versions seems like a bit
of a blunt instrument, as it means I'll have to do a lot of work to
bring even bug fixes in.

Erik

[0] http://semver.org/



On Mon, Aug 20, 2012 at 3:13 PM, Chris Dornan ch...@chrisdornan.com wrote:
 I think we should encourage stable build environments to know precisely
 which package versions they have been using and to keep using them until
 told otherwise. Even when the types and constraints all work out there is a
 risk that upgraded packages will break. Everybody here wants cabal to just
 install the packages without problem, but if you want to insulate yourself
 from package upgrades surely sticking with proven combinations is the way to
 go.

 Chris

 -Original Message-
 From: haskell-cafe-boun...@haskell.org
 [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Erik Hesselink
 Sent: 20 August 2012 08:33
 To: Bryan O'Sullivan
 Cc: haskell-cafe@haskell.org
 Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
 our friends

 I am strongly against this, especially for packages in the platform.

 If you fail to specify an upper bound, and I depend on your package, your
 dependencies can break my package! For example, say I develop executable A
 and I depend on library B == 1.0. Library B depends on library C >= 0.5 (no
 upper bound). Now C 0.6 is released, which is incompatible with B. This
 suddenly breaks my build, even though I have not changed anything about my
 code or dependencies. This goes against the 'robust' aspect mentioned as one
 of the properties of the Haskell platform, and against the Haskell
 philosophy of correctness in general.

 This is not an imaginary problem. At my company, we've run into these
 problems numerous times already. Since we also have people who are not
 experts at Cabal and the Haskell ecosystem building our software, this can
 be very annoying. The fix is also not trivial: we can add a dependency on a
 package we don't use to all our executables or we can fork the library (B,
 in the example above) and add an upper bound/fix the code. Both add a lot of
 complexity that we don't want. Add to that the build failures and associated
 emails from CI systems like Jenkins.

 I can see the maintenance burden you have, since we have to do the same for
 our code. But until some Cabal feature is added to ignore upper bounds or
 specify soft upper bounds, please follow the PVP, also in this regard. It
 helps us maintain a situation where only our own actions can break our
 software.

 Erik

 On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase cabal dependency hell
 at this point, as the number of projects on Hackage that are intended
 to hack around the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I
 try to rebuild some of my packages to see if they work with the GHC
 7.6 release candidate.

 A substantial number of the difficulties I am encountering are related
 to packages specifying upper bounds on their dependencies. This is a
 recurrent problem, and its source lies in the recommendations of the
 PVP itself (problematic phrase highlighted in bold):

 When publishing a Cabal package, you should ensure that your
 dependencies in the build-depends field are accurate. This means
 specifying not only lower bounds, but also upper bounds on every
 dependency.


 I understand that the intention behind requiring tight upper bounds
 was good, but in practice this has worked out terribly, leading to
 depsolver failures that prevent a package from being installed, when
 everything goes smoothly with the upper bounds relaxed. The default
 response has been for a flurry of small updates to packages in which
 the upper bounds are loosened, thus guaranteeing that the problem will
 recur in a year or less. This is neither sensible, fun, nor sustainable.

 In practice, when an author bumps a version of a depended-upon
 package, the changes are almost always either benign, or will lead to
 compilation failure in the depending-upon package. A benign change
 will obviously have no visible effect, while a compilation failure is
 actually better than a depsolver failure, because it's more informative.

 This leaves the nasty-but-in-my-experience-rare case of runtime
 failures caused by 

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Gregory Collins
My two (or three) cents:

   - Given a choice between a world where there is tedious work for package
   maintainers vs. a world where packages randomly break for end users (giving
   them a bad impression of the entire Haskell ecosystem), I choose the former.

   - More automation can ease the burden here. Michael Snoyman's packdeps
   tool is a great start in this direction, and it would be even better if it
   automagically fixed libraries for you and bumped your version number
   according to the PVP.

   - This is a great problem to have. There's so much work happening that
   people find it hard to stay on the treadmill? Things could be a lot worse.
   I guarantee you that our friends in the Standard ML community are not
   having this discussion. :-)

G
-- 
Gregory Collins g...@gregorycollins.net


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Chris Dornan
Of course if you wish or need to upgrade a package then you can just upgrade
it -- I am not suggesting anyone should forgo upgrades! It is just that
there is no need to make the stability of a build process dependent on new
package releases.

To upgrade a package I would fork my sandbox, hack away at the package
database (removing, upgrading, installing packages) until I have a candidate
combination of packages and swap in the new sandbox into my work tree and
test it, reverting to the tried and tested environment if things don't work
out.  If the new configuration works then I would dump the new package
configuration and check it in. Subsequent updates and builds on other work
trees should pick up the new environment.

The key thing I was looking for was control of when your build environment
gets disrupted and stability in between -- even when building from the repo.

Chris

-Original Message-
From: Erik Hesselink [mailto:hessel...@gmail.com] 
Sent: 20 August 2012 14:35
To: Chris Dornan
Cc: Bryan O'Sullivan; haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends

Hub looks interesting, I'll have to try it out (though I'm not on an RPM
based distro). But isn't this the goal of things like semantic versioning
[0] and the PVP? To know that you can safely upgrade to a bugfix release,
and relatively safely to a minor release, but on a major release, you have to
take care?

Haskell makes it much easier to see if you can use a new major (or
minor) version of a library, since the type checker catches many (but not
all!) problems for you. However, this leads to libraries breaking their
API's much more easily, and that in turn causes the problems voiced in this
thread. However, fixing all versions seems like a bit of a blunt instrument,
as it means I'll have to do a lot of work to bring even bug fixes in.

Erik

[0] http://semver.org/



On Mon, Aug 20, 2012 at 3:13 PM, Chris Dornan ch...@chrisdornan.com wrote:
 I think we should encourage stable build environments to know 
 precisely which package versions they have been using and to keep 
 using them until told otherwise. Even when the types and constraints 
 all work out there is a risk that upgraded packages will break. 
 Everybody here wants cabal to just install the packages without 
 problem, but if you want to insulate yourself from package upgrades 
 surely sticking with proven combinations is the way to go.

 Chris

 -Original Message-
 From: haskell-cafe-boun...@haskell.org 
 [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Erik Hesselink
 Sent: 20 August 2012 08:33
 To: Bryan O'Sullivan
 Cc: haskell-cafe@haskell.org
 Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds 
 are not our friends

 I am strongly against this, especially for packages in the platform.

 If you fail to specify an upper bound, and I depend on your package, 
 your dependencies can break my package! For example, say I develop 
 executable A and I depend on library B == 1.0. Library B depends on 
 library C >= 0.5 (no upper bound). Now C 0.6 is released, which is 
 incompatible with B. This suddenly breaks my build, even though I have 
 not changed anything about my code or dependencies. This goes against 
 the 'robust' aspect mentioned as one of the properties of the Haskell 
 platform, and against the Haskell philosophy of correctness in general.

 This is not an imaginary problem. At my company, we've run into these 
 problems numerous times already. Since we also have people who are not 
 experts at Cabal and the Haskell ecosystem building our software, this 
 can be very annoying. The fix is also not trivial: we can add a 
 dependency on a package we don't use to all our executables or we can 
 fork the library (B, in the example above) and add an upper bound/fix 
 the code. Both add a lot of complexity that we don't want. Add to that 
 the build failures and associated emails from CI systems like Jenkins.

 I can see the maintenance burden you have, since we have to do the 
 same for our code. But until some Cabal feature is added to ignore 
 upper bounds or specify soft upper bounds, please follow the PVP, also 
 in this regard. It helps us maintain a situation where only our own 
 actions can break our software.

 Erik

 On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
 Hi, folks -

 I'm sure we are all familiar with the phrase cabal dependency hell
 at this point, as the number of projects on Hackage that are intended 
 to hack around the problem slowly grows.

 I am currently undergoing a fresh visit to that unhappy realm, as I 
 try to rebuild some of my packages to see if they work with the GHC
 7.6 release candidate.

 A substantial number of the difficulties I am encountering are 
 related to packages specifying upper bounds on their dependencies. 
 This is a recurrent problem, and its source lies in the 
 recommendations of the PVP itself 

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Brent Yorgey
On Thu, Aug 16, 2012 at 06:07:06PM -0400, Joey Adams wrote:
 On Wed, Aug 15, 2012 at 3:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:
  I propose that the sense of the recommendation around upper bounds in the
  PVP be reversed: upper bounds should be specified only when there is a known
  problem with a new version of a depended-upon package.
 
 I, too, agree.  Here is my assortment of thoughts on the matter.
 
 Here's some bad news: with cabal 1.14 (released with Haskell Platform
 2012.2), cabal init defaults to bounds like these:
 
   build-depends:   base ==4.5.*, bytestring ==0.9.*,
   http-types ==0.6.*

I'm not sure why you think this is bad news.  I designed this to
conform exactly to the current PVP.  If the PVP is changed then I will
update cabal init to match.

-Brent



[Haskell-cafe] Haskell master thesis project

2012-08-20 Thread Francesco Mazzoli
Hi list(s),

I've been hooked on Haskell for a while now (some of you might know me as
bitonic on #haskell), and I now find myself having to decide on a project for my
master's thesis.

Inspired by David Terei's master's thesis (he wrote the LLVM backend), I was
wondering if there were any projects requiring similar effort that will benefit
the Haskell community.

--
Francesco * Often in error, never in doubt



Re: [Haskell-cafe] Wanted: Haskell binding for libbdd (buddy)

2012-08-20 Thread Serguey Zefirov
2012/8/20 Johannes Waldmann waldm...@imn.htwk-leipzig.de:
 Are there any Haskell bindings for BDD libraries
 (reduced ordered binary decision diagrams)?

 E.g., it seems buddy is commonly used
 http://packages.debian.org/squeeze/libbdd-dev
 and it has an Ocaml binding.

 Yes, there is http://hackage.haskell.org/package/obdd
 but I need better performance (with the same API, ideally).

 Thanks - J.W.

 PS: I wonder where performance goes out the window  ...
 I suspect  Map (Int,Int) whatever should really be
 a hashtable but I don't like it in IO, it should be in ST?

Actually, all Maps there should be IntMap's, strict ones. And yes,
cache field should be two-level IntMap too.

The type Index is good for external typed access, but internally one
should use IntMap.
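The two-level strict IntMap suggestion can be sketched as follows (the names are invented for illustration; only containers' Data.IntMap.Strict is assumed):

```haskell
import qualified Data.IntMap.Strict as IM

-- Replace Map (Int, Int) v by a strict two-level IntMap: the outer
-- map is keyed on the first Int, the inner map on the second.
type Table v = IM.IntMap (IM.IntMap v)

insertPair :: (Int, Int) -> v -> Table v -> Table v
insertPair (i, j) v = IM.insertWith IM.union i (IM.singleton j v)

lookupPair :: (Int, Int) -> Table v -> Maybe v
lookupPair (i, j) t = IM.lookup i t >>= IM.lookup j

main :: IO ()
main = do
  let t = insertPair (1, 2) "node" IM.empty
  print (lookupPair (1, 2) t)  -- Just "node"
  print (lookupPair (2, 1) t)  -- Nothing
```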






[Haskell-cafe] Haskell position in Israel

2012-08-20 Thread Michael Snoyman
Hi all,

Just passing on this job opportunity for another company. SQream is
looking for Haskellers located in Israel. They are working on high
performance solutions for large databases using Haskell. If you're
interested, please contact me off-list, and I'll pass your information
along.

Thanks,
Michael



Re: [Haskell-cafe] Haskell master thesis project

2012-08-20 Thread Jay Sulzberger



On Mon, 20 Aug 2012, Francesco Mazzoli f...@mazzo.li wrote:


Hi list(s),

I've been hooked on Haskell for a while now (some of you might know me as
bitonic on #haskell), and I now find myself having to decide on a project for my
master's thesis.

Inspired by David Terei's master's thesis (he wrote the LLVM backend), I was
wondering if there were any projects requiring similar effort that will benefit
the Haskell community.

--
Francesco * Often in error, never in doubt


The map from Source Code to Executable is one of the Great
Functors of Programming.  Lisp has the advantage that this
functor is visible and the objects and maps of the domain
category Source Code are easy to pick up and modify.

I think GHC may be instructed to output a textual representation
of the Core code produced on the way from Haskell source code to
the executable.  But this representation is, in part,
inadequate:

1. The representation is not faithful.  Thus, for example, we
cannot feed the textual representation of Core into the next part
of the Haskell compiler pipeline and get the same executable
we would get by running ghc on the Haskell source code.

2. The textual syntax is not sufficiently regular, so maps in the
Source Code category (the Core variant) are not as easy to code as
they might be.

Of course, I write Lisp, so if you improved the Core side-pipe I
could easily continue to write sexps, and then have them
transformed to a Haskell executable.

oo--JS.



Re: [Haskell-cafe] Flipping type constructors

2012-08-20 Thread Ryan Ingram
It seems really hard to solve this, since the type checker works before
instance selection has had a chance to do anything.

Instead of looking at the instance declaration, look at the use site:

   lift x

expects the argument to have type

   x :: t m a

for some t :: (* -> *) -> * -> *, m :: * -> *, and a :: *; it's not until t
is known that we can do instance selection, and in your case, EitherT M A B
doesn't have the required form and so is a type error already.

I think the best answer, sadly, is to just have a (kind-polymorphic!)
newtype Flip and deal with it.
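For what it's worth, a concrete sketch of such a Flip. It assumes the usual EitherT representation (the real definition is elided in the thread), and FlipT is an illustrative name:

```haskell
{-# LANGUAGE FlexibleInstances #-}

import Control.Monad (liftM)
import Control.Monad.Trans.Class (MonadTrans (..))

-- Assumed representation; the original post only gives "... = ..."
newtype EitherT f a b = EitherT { runEitherT :: f (Either a b) }

-- Move the monad argument into the position MonadTrans expects:
-- FlipT EitherT a :: (* -> *) -> * -> *
newtype FlipT t a m b = FlipT { runFlipT :: t m a b }

instance MonadTrans (FlipT EitherT a) where
  lift = FlipT . EitherT . liftM Right
```

The cost, of course, is wrapping and unwrapping FlipT at every use site where the MonadTrans instance is needed.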

I can imagine there being some way to (ab)use kind polymorphism to redefine
MonadTrans in a way that allows the 'm' argument to appear in more places
in the target type, but I'm not clever enough to come up with a proposal
for how to do so.

  -- ryan


On Mon, Aug 13, 2012 at 4:38 PM, Tony Morris tonymor...@gmail.com wrote:

 I have a data-type that is similar to EitherT, however, I have ordered
 the type variables like so:

 data EitherT (f :: * -> *) (a :: *) (b :: *) = ...

 This allows me to declare some desirable instances:

 instance Functor f = Bifunctor (EitherT f)
 instance Foldable f = Bifoldable (EitherT f)
 instance Traversable f = Bitraversable (EitherT f)

 However, I am unable to declare a MonadTrans instance:

 instance MonadTrans (EitherT a) -- kind error

 I looked at Control.Compose.Flip to resolve this, but it does not appear
 to be kind-polymorphic.

 http://hackage.haskell.org/packages/archive/TypeCompose/0.9.1/doc/html/src/Control-Compose.html#Flip

 I was wondering if there are any well-developed techniques to deal with
 this? Of course, I could just write my own Flip with the appropriate
 kinds and be done with it. Maybe there is a more suitable way?


 --
 Tony Morris
 http://tmorris.net/





Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-20 Thread Iavor Diatchki
Hello,

I also completely agree with Bryan's point which is why I usually don't add
upper bounds on the dependencies of the packages that I maintain---I find
that the large majority of updates to libraries tend to be backward
compatible, so being optimistic seems like a good idea.

By the way, something I encounter quite often is a situation where two
packages both build on Hackage just fine, but are not compatible with each
other.  Usually it goes like this:

  1. Package A requires library X >= V (typically, because it needs a bug
fix or a new feature).
  2. Package B requires library X < V (typically, because someone added a
conservative upper bound that needs to be updated).

Trying to use A and B together leads to failure, which is usually resolved
by installing B manually and removing its upper bound by hand.  This
is rather unfortunate: not only is it inconvenient, but there is also
no released version of package B that you can explicitly depend on.
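To make the clash concrete, here are two hypothetical .cabal fragments (package and version names are illustrative):

```cabal
-- A.cabal: needs a fix that first shipped in X 0.7
build-depends: base >= 4, X >= 0.7

-- B.cabal: conservative PVP-style upper bound, not yet relaxed
build-depends: base >= 4, X >= 0.5 && < 0.7
```

No install plan can satisfy both `X >= 0.7` and `X < 0.7` at once, so any project depending on both A and B fails at the solver stage.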

-Iavor



On Mon, Aug 20, 2012 at 7:11 AM, Brent Yorgey byor...@seas.upenn.eduwrote:

 On Thu, Aug 16, 2012 at 06:07:06PM -0400, Joey Adams wrote:
  On Wed, Aug 15, 2012 at 3:38 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
   I propose that the sense of the recommendation around upper bounds in
 the
   PVP be reversed: upper bounds should be specified only when there is a
 known
   problem with a new version of a depended-upon package.
 
  I, too, agree.  Here is my assortment of thoughts on the matter.
 
  Here's some bad news: with cabal 1.14 (released with Haskell Platform
  2012.2), cabal init defaults to bounds like these:
 
build-depends:   base ==4.5.*, bytestring ==0.9.*,
http-types ==0.6.*

 I'm not sure why you think this is bad news.  I designed this to
 conform exactly to the current PVP.  If the PVP is changed then I will
 update cabal init to match.

 -Brent



Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-20 Thread Ryan Ingram
Also, I have to admit I was a bit handwavy here; I meant P in a
metatheoretic sense, that is P(a) is some type which contains 'a' as a
free variable, and thus the 'theorem' is really a collection of theorems
parametrized on the P you choose.

For example, P(a) could be (Show a, a -> Int); in that case we get the
theorem

exists a. (Show a, a -> Int)
 =
forall r. (forall a. Show a => (a -> Int) -> r) -> r

as witnessed by the following code (using the ExistentialQuantification and
RankNTypes extensions)

data P = forall a. Show a => MkP (a -> Int)
type CPS_P r = (forall a. Show a => (a -> Int) -> r) -> r

isoR :: P -> forall r. CPS_P r
isoR (MkP f) k = k f
   -- pattern match on MkP brings a fresh type T into scope,
   -- along with f :: T -> Int, and the constraint Show T.
   -- k :: forall a. Show a => (a -> Int) -> r
   -- so, k {T} f :: r


isoL :: (forall r. CPS_P r) -> P
isoL k = k (\x -> MkP x)
-- k :: forall r. (forall a. Show a => (a -> Int) -> r) -> r
-- k {P} = (forall a. Show a => (a -> Int) -> P) -> P
-- MkP :: forall a. Show a => (a -> Int) -> P
-- therefore, k {P} MkP :: P

Aside: the type 'exists a. (Show a, a -> Int)' is a bit odd, and is another
reason we don't have first-class existentials in Haskell.  The 'forall'
side is using currying (a -> b -> r) = ((a, b) -> r), which works because
the constraint => can be modeled by dictionary passing.  But we don't have
a simple way to represent the dictionary (Show a) as a member of a tuple.

One answer is to pack it up in another existential; I find this a bit of
a misnomer since there's nothing existential about this data type aside
from the dictionary:

data ShowDict a = Show a => MkShowDict

Then the theorem translation is a bit more straightforward:

data P = forall a. MkP (ShowDict a, a -> Int)
type CPS_P r = (forall a. (ShowDict a, a -> Int) -> r) -> r

-- theorem: P = forall r. CPS_P r

isoL :: P -> forall r. CPS_P r
isoL (MkP x) k = k x

isoR :: (forall r. CPS_P r) -> P
isoR k = k (\x -> MkP x)

  -- ryan

On Sat, Aug 18, 2012 at 8:24 PM, wren ng thornton w...@freegeek.org wrote:

 On 8/17/12 12:54 AM, Alexander Solla wrote:

 On Thu, Aug 16, 2012 at 8:07 PM, wren ng thornton w...@freegeek.org
 wrote:

 Though bear in mind we're discussing second-order quantification here,
 not
 first-order.


 Can you expand on what you mean here?  I don't see two kinds of
 quantification in the type language (at least, reflexively, in the context
 of what we're discussing).  In particular, I don't see how to quantify
 over
 predicates for (or sets of, via the extensions of the predicates) types.

 Is Haskell's 'forall' doing double duty?


 Nope, it's the forall of mathematics doing double duty :)

 Whenever doing quantification, there's always some domain being quantified
 over, though all too often that domain is left implicit; whence lots of
 confusion over the years. And, of course, there's the scope of the
 quantification, and the entire expression. For example, consider the
 expression:

 forall a. P(a)

 The three important collections to bear in mind are:
 (1) the collection of things a ranges over
 (2) the collection of things P(a) belongs to
 (3) the collection of things forall a. P(a) belongs to

 So if we think of P as a function from (1) to (2), and name the space of
 such functions (1 -> 2), then we can think of the quantifier as a function
 from (1 -> 2) to (3).
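In Haskell terms, this can be sketched as follows (a hedged illustration; the names are mine, not from the thread):

```haskell
{-# LANGUAGE RankNTypes #-}

-- P plays the role of the (1)-to-(2) function on types, and the
-- quantified type "forall a. P a" is again a type, i.e. lands in (3).
type P a = a -> a                 -- P : types -> types
type Quantified = forall a. P a   -- forall : (types -> types) -> types

idPoly :: Quantified
idPoly = \x -> x
```

Here (1), (2), and (3) are all the collection of Haskell types, which is exactly the second-order situation described below.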


 When you talk to logicians about quantifiers, the first thing they think
 of is so-called first-order quantification. That is, given the above
 expression, they think that the thing being quantified over is a collection
 of individuals who live outside of logic itself[1]. For example, we could
 be quantifying over the natural numbers, or over the kinds of animals in
 the world, or any other extra-logical group of entities.

 In Haskell, when we see the above expression we think that the thing being
 quantified over is some collection of types[2]. But, that means when we
 think of P as a function it's taking types and returning types! So the
 thing you're quantifying over and the thing you're constructing are from
 the same domain[3]! This gets logicians flustered and so they call it
 second-order (or more generally, higher-order) quantification. If you
 assert the primacy of first-order logic, it makes sense right? In the
 first-order case we're quantifying over individuals; in the second-order
 case we're quantifying over collections of individuals; so third-order
 would be quantifying over collections of collections of individuals; and on
 up to higher orders.


 Personally, I find the names first-order and second-order rather
 dubious--- though the distinction is a critical one to make. Part of the
 reason for its dubiousness can be seen when you look at PTSes which make
 explicit that (1), (2), and (3) above can each be the same or different in
 all combinations. First-order quantification is the sort of thing you get
 from Pi/Sigma types in dependently typed languages like LF; second-order
 quantification is the sort 

Re: [Haskell-cafe] Wanted: Haskell binding for libbdd (buddy)

2012-08-20 Thread Peter Gammie
On 20/08/2012, at 11:19 PM, Johannes Waldmann wrote:

 Are there any Haskell bindings for BDD libraries
 (reduced ordered binary decision diagrams)?
 
 E.g., it seems buddy is commonly used
 http://packages.debian.org/squeeze/libbdd-dev
 and it has an OCaml binding.

My hBDD bindings are on Hackage. I once had a binding to buddy but found CUDD 
to have superior performance for my application.

cheers
peter

-- 
http://peteg.org/