[Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Lato
 From: Jeff Heard jefferson.r.he...@gmail.com

 Is there a way to do something like autoconf and configure
 dependencies at install time?  Building buster, I keep adding
 dependencies and I'd like to keep that down to a minimum without the
 annoyance of littering Hackage with dozens of packages.  For instance,
 today I developed an HTTP behaviour and that of course requires
 network and http, which were previously not required.  I'm about to
 put together a haxr XML-RPC behaviour as well, and that of course
 would add that much more to the dependency list.  HaXml, haxr, and
 haxr-th most likely.

 so... any way to do that short of making a bunch of separate packages
 with one or two modules apiece?  Otherwise I'm thinking of breaking
 things up into buster, buster-ui, buster-network, buster-console, and
 buster-graphics to start and adding more as I continue along.


I'd be interested in hearing answers to this as well.  I'm not a fan
of configure-style compile-time conditional compilation, at least for
libraries, because it makes it much harder to specify dependencies.  With
that approach, if package Foo depends on buster and the HTTP behavior, it's
no longer enough to specify "build-depends: buster", because that will only
work if buster happened to be configured appropriately on any given system.

I think that the proper solution is to break up libraries into
separate packages as Jeff suggests (buster, buster-ui, etc.), but then
the total packages on hackage would explode.  I don't feel great about
doing that with my own packages either; is it a problem?  If so, maybe
there could be just one extra package, e.g. buster and buster-extras.
Is there a better solution I'm missing?

John Lato


Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread Martijn van Steenbergen

John Lato wrote:

[...]

Is there a better solution I'm missing?


Cabal's flag system sounds like a nice solution for this, except I don't 
know if it's possible to add specific flags to your build dependencies, i.e.


build-depends: buster -fhttp
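
The flag side of this is easy enough to write down; it is the client side
that I can't find.  A rough sketch of what buster.cabal itself could look
like (the flag and module names here are invented, so treat it purely as an
illustration):

  name:          buster
  version:       0.1
  cabal-version: >= 1.2
  build-type:    Simple

  flag http
    -- off by default, so plain installs don't pull in network and HTTP
    description: build the HTTP behaviour
    default:     False

  library
    build-depends:   base
    exposed-modules: App.Behaviours
    if flag(http)
      build-depends:   network, HTTP
      exposed-modules: App.Behaviours.HTTP

As far as I can tell there is no way for a package that *uses* buster to
demand a particular flag setting, which is exactly the problem.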

Martijn.


Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Dorsey
John Lato wrote:
 I think that the proper solution is to break up libraries into
 separate packages as Jeff suggests (buster, buster-ui, etc.), but then
 the total packages on hackage would explode.  I don't feel great about

I thought about this a while back and came to the conclusion that the
package count should only grow by a small constant factor due to this,
and that's a lot better than dealing with hairy and problematic
dependencies.

It should usually be:

  libfoo
  libfoo-blarg
  libfoo-xyzzy
  etc.

and more rarely:

  libbar-with-xyzzy
  libbar-no-xyzzy
  etc.

each providing libbar (though I don't remember whether Cabal has a
'provides' mechanism).  The latter case could explode exponentially for
weird packages that have several soft dependencies that can't be managed
in the plugin manner, but I can't see that being a real issue.

This looks manageable to me, but I'm no packaging guru.  I guess it's a
little harder for authors and maintainers of packages that sit at the
leaves of the dependency tree, which could be a problem.  Is there a
downside I'm missing?

Regards,
John



Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread Edward Kmett
This has been on my mind a lot lately, as my current library provides
additional functionality for data types from a wide array of other packages.
I face a version of Wadler's expression problem.

I provide a set of classes for injecting values into monoids, seminearrings
and so on, to allow for quick reductions over different data structures. The
problem is that the interfaces are fairly general, so whole swathes of types
(including every applicative functor!) qualify for certain operations.

Perhaps the ultimate answer would be to push more of the instances down
into the source packages. I can do this with some of the monoid instances,
but convincing folks that it is worth recording that their particular
applicative forms a right-seminearring when it contains a monoid is another
matter entirely.
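
To make that concrete, the monoid half of what I mean looks roughly like
this; the names are invented and this is only a sketch, not the actual
classes in my package:

  module AppMonoidSketch where

  import Control.Applicative (liftA2)

  -- An invented newtype, only here so the instances below don't clash
  -- with anything else: any Applicative whose element type is a monoid
  -- is itself a monoid, by lifting the operations pointwise.
  newtype AppMonoid f a = AppMonoid { runAppMonoid :: f a }

  instance (Applicative f, Semigroup a) => Semigroup (AppMonoid f a) where
    AppMonoid x <> AppMonoid y = AppMonoid (liftA2 (<>) x y)

  instance (Applicative f, Monoid a) => Monoid (AppMonoid f a) where
    mempty = AppMonoid (pure mempty)

Every choice of f here yields instances for types I don't own, which is
where the packaging trouble starts.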

The problem is that there is no path to get there from here. For another
library to depend on mine, its authors have to pick up the brittle
dependency set I have now. Splitting my package into smaller packages fails
because I need to keep the instances for 3rd party data types packed with
the class definitions, to avoid orphan instances and poor API design. So the
option to split things into the equivalent of 'buster-ui', 'buster-network'
and so forth largely fails on that design criterion. I can do that for new
monoids, rings and so forth that I define, which purely layer on top of the
base functionality I provide, but not for the parts that provide additional
instances for 3rd party data types.

I can keep adding libraries as dependencies, as I am doing now, but that
means my library continues to accrete content at an alarming rate and, more
importantly, every new dependency increases the chance of build problems,
because my library can only be used in an environment where every one of
its dependencies installs.

This further exacerbates the problem that no one would want to add all of my
pedantic instances because to do so they would have to inject a huge brittle
dependency into their package.

The only other alternative that I seem to have at this point in the cabal
packaging system is to create a series of flags for optional functionality.
This solves _my_ problem, in that it lets me install on a much broader range
of environments, but now the order in which my package was installed
relative to its dependencies matters. In particular, clients of the library
won't know whether they have access to half of the instances, and so are
stuck either targeting one particular machine or limiting themselves to the
intersection of the functionality I can provide everywhere.

Perhaps what I would ideally like is some kind of 'augments' or
'codependencies' clause in the cabal file, inside flags and build targets,
listing packages that should force my package to be reinstalled once a
package matching the given version range is installed, or at least trigger
a prompt indicating that new functionality is available and which packages
should be reinstalled.
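
As a straw man, I picture something like the following; the flag and the
conditional are ordinary Cabal, while the 'codependencies' field and the
module name are pure invention:

  flag parsec
    default: False

  library
    if flag(parsec)
      build-depends:   parsec >= 3 && < 4
      exposed-modules: Data.Monoid.Parser
    -- hypothetical field: if a package in this range is installed later,
    -- rebuild this package with the flag on, or at least tell the user to
    codependencies: parsec >= 3 && < 4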

This would let me have my cake and eat it too. I could provide a wide array
of instances for different stock data types, and I could know that if
someone depends on both, say, 'monoids' and 'parsec 3', then the parsec
instances will be present and usable in my package.

Most importantly, it would allow me to fix my 'expression problem'. Others
could introduce dependencies on the now easier-to-install library, I could
shrink it, and I would be able to install in more environments.

-Edward Kmett



Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Dorsey
Edward,

Thanks for straightening me out; I see the problems better now.  In
particular I was missing:

1)  Orphan instances (and related design issues) get in the way of
splitting the package.

2)  Viral dependencies work in two directions, since upstream packages
must pick up your deps to include instances of your classes.

I'm thinking out loud, so bear with me.

 The problem is that there is no path to get there from here. For another
 library to depend on mine, its authors have to pick up the brittle
 dependency set I have now. Splitting my package into smaller packages
 fails because I need to keep the instances for 3rd party data types packed
 with the class definitions, to avoid orphan instances and poor API design.
 So the option to

Some class instances can go in three places:

a)  The source package for the type, which then picks up your deps.  Bad.

b)  Your package, which then has a gazillion deps.  Bad.

c)  Your sub-packages, in which case they're orphaned.  Bad.

I have to wonder whether (c) isn't the least of evils.  Playing the
advocate:

-  Orphaned instances are bad because of the risk of multiple instances.
That risk should be low in this case; if anyone else wanted an instance
of, say, a Prelude ADT for your library's class, their obvious option is
to use your sub-package.

-  If you accept the above, then orphaning the instance in a sub-package
that's associated with either the type's or the class's home is morally
better than providing an instance in an unaffiliated third package.

-  Orphaning in sub-packages as a stopgap could make it much easier to
get your class (and the instance) added to those upstream packages where
it makes sense to do so.
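
Concretely, I picture (c) as something like the following, where all the
names are invented for illustration:

  {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
  -- In the real layout the class would live in the core package and the
  -- instance in a sub-package (say core-containers), where it is an
  -- orphan: the sub-package defines neither the class nor Seq.
  module InjectSketch where

  import qualified Data.Sequence as Seq

  -- core package, no heavy dependencies:
  class Monoid m => Inject c m where
    inject :: c -> m

  -- sub-package, depending on the core package and on containers:
  instance Inject a (Seq.Seq a) where
    inject = Seq.singleton

Anyone who wants the Seq instance then has exactly one obvious place to
get it, which is what keeps the usual orphan worries in check.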

This clearly doesn't solve all parts of the problem.  You may have other
design concerns that make sub-packages undesirable.  Even with instance
definitions removed you may still have enough dependencies to deter
integration.  The problem probably extends beyond just class instances.

 The only other alternative that I seem to have at this point in the cabal
 packaging system is to create a series of flags for optional functionality.

This sounds like a rat hole of a different nature.  You lose the ability
to tell whether an API is supported based on whether the package that
implements it is installed.  An installed and working package can cease to
function after a (possibly automatic) reinstallation triggered by other
packages becoming available.  And complicated new functionality would be
required in Cabal.

Regards,
John



[Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread Simon Michael
I'm facing this problem too, with hledger. It has optional happstack and
vty interfaces, which add to the difficulty and platform-specificity of
installation. Currently I publish it all as one package with a cabal flag
for each interface, with happstack off and vty on by default. vty isn't
available on Windows, but I understand that cabal is smart enough to
flip the flags until it finds a combination that is installable, so I
hoped it would just turn off vty for Windows users. It didn't, though.
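
For what it's worth, Cabal conditionals can test the operating system as
well as flags, so in principle the vty dependency could be switched off on
Windows explicitly rather than relying on the solver.  Roughly (the fields
are simplified and the cpp symbol is made up):

  flag vty
    default: True

  executable hledger
    main-is:       hledger.hs
    build-depends: base
    if flag(vty) && !os(windows)
      build-depends: vty
      cpp-options:   -DWITH_VTY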


An alternative is to publish separate packages, Debian style:
libhledger, hledger, hledger-vty, hledger-happs, etc. These are more
discoverable and easier to document for users. It does seem Hackage
would be less fun to browse if it fills up with all these variants, but
maybe it's simpler overall.




Re: [Haskell-cafe] Re: Configuring cabal dependencies at install-time

2009-04-07 Thread John Lato
The problem of type class instances for third-party types is exactly
how I ran into this.  Currently I don't know of a good solution, where
good means that it meets these criteria:

1.  Does not introduce orphan instances
2.  Allows for instances to be provided based upon the user's installed
    libraries
3.  Allows for a separation of core package dependencies and dependencies
    that are only included to provide instances
4.  Has sane dependency requirements (within the current Cabal framework)

This seems harder than the problem Jeff has with buster, because the
separate packages of buster-http, buster-ui, etc. make sense both
organizationally and as an implementation matter; there it's more a
question of the politeness of putting that collection on hackage.  For
type class instances, this isn't an option unless one provides orphan
instances.

I like Edward's suggestion of an 'augments' clause.  As I envision it,
package Foo would provide something like a phantom instance for a
type from package Bar, where the instance is not actually available
until the matching library Bar is installed, at which point the
compiler would compile the instance (or at least flag Foo for
recompilation) and make it available.  I have no idea how much work
this would take, or where one would go about starting to implement it,
though.

John
