Re: GHC 7.8 release?

2013-02-13 Thread Simon Marlow

On 13/02/13 07:06, wren ng thornton wrote:

On 2/12/13 3:37 AM, Simon Marlow wrote:

One reason for the major version bumps is that base is a big
conglomeration of modules, ranging from those that hardly ever change
(Prelude) to those that change frequently (GHC.*). For example, the new
IO manager that is about to get merged in will force a major bump of
base, because it changes GHC.Event.  The unicode support in the IO
library was similar: although it only added to the external APIs that
most people use, it also changed stuff inside GHC.* that we expose for a
few clients.

The solution to this would be to split up base further, but of course
doing that is itself a major upheaval.  However, having done that, it
might be more feasible to have non-API-breaking releases.


While it will lead to much wailing and gnashing of teeth in the short
term, if it's feasible to break GHC.* off into its own package, then I
think we should. The vast majority of base seems quite stable or else is
rolling along at a reasonable pace. And yet, every time a new GHC comes
out, there's a new wave of fiddling the knobs on cabal files because
nothing really changed. On the other hand, GHC.* moves rather quickly.
Nevertheless, GHC.* is nice to have around, so we don't want to just
hide that churning. The impedance mismatch here suggests that they
really should be separate packages. I wonder whether GHC.* should be
moved in with ghc-prim, or whether they should remain separate...

But again, this depends on how feasible it would be to actually split
the packages apart. Is it feasible?


So I think we'd need to add another package, call it ghc-base perhaps. 
The reason is that ghc-prim sits below the integer package 
(integer-simple or integer-gmp).


It's feasible to split base, but to a first approximation what you end 
up with is base renamed to ghc-base, and then the new base contains just 
stub modules that re-export stuff from ghc-base.  In fact, maybe you 
want to do it exactly like this for simplicity - all the code goes in 
ghc-base.  There would be some impact on compilation time, as we'd have 
twice as many interfaces to read.
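A stub module in the new base would be little more than a re-export. A minimal sketch, assuming a hypothetical ghc-base package that provides the real Data.Maybe:

```haskell
{-# LANGUAGE PackageImports #-}

-- Hypothetical stub in the new 'base': all the code lives in 'ghc-base',
-- and 'base' merely re-exports it under the unchanged module name.
module Data.Maybe (module X) where

import "ghc-base" Data.Maybe as X
```

Since every module keeps its old name and exports, client code would keep compiling unchanged; the cost is only the doubled set of interface files mentioned above.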


I believe Ian has done some experiments with splitting base further, so 
he might have more to add here.


Cheers,
Simon


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


base package (was: GHC 7.8 release?)

2013-02-13 Thread Roman Cheplyaka
* Simon Marlow marlo...@gmail.com [2013-02-13 09:00:15+]
 It's feasible to split base, but to a first approximation what you
 end up with is base renamed to ghc-base, and then the new base
 contains just stub modules that re-export stuff from ghc-base.

It would be great to have a portable base, without any GHC-specific
stuff in it. After all, modules like Control.Monad or Data.Foldable are
pure Haskell2010.

The only obstacle I see is that ghc-base, as you call it, uses some
amount of base definitions, and so we have a loop.

How hard would it be to break this loop?

That is, either make the GHC.* modules self-contained, or make base not
depend on GHC.*?

Roman



Re: GHC 7.8 release?

2013-02-13 Thread Krzysztof Skrzętnicki
Perhaps looking at what base-compat has to offer ("A compatibility layer
for base") is relevant to this discussion. See:

https://github.com/sol/base-compat#readme
http://hackage.haskell.org/package/base-compat

I haven't used it, but it looks like a promising approach to base API stability.
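For reference, base-compat works at the source level: it ships .Compat shadow modules that re-export the originals plus backported definitions, and code imports those instead. A hedged sketch (module names as advertised by base-compat; exact exports vary by version):

```haskell
-- Sketch: depend on base-compat and import the *.Compat modules
-- instead of the plain base ones; each re-exports the original module
-- together with functions backported from newer base versions.
import Prelude.Compat      -- instead of Prelude
import Data.Monoid.Compat  -- instead of Data.Monoid
```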

All best,
Krzysztof Skrzętnicki



On Wed, Feb 13, 2013 at 10:00 AM, Simon Marlow marlo...@gmail.com wrote:

 On 13/02/13 07:06, wren ng thornton wrote:

 On 2/12/13 3:37 AM, Simon Marlow wrote:

 One reason for the major version bumps is that base is a big
 conglomeration of modules, ranging from those that hardly ever change
 (Prelude) to those that change frequently (GHC.*). For example, the new
 IO manager that is about to get merged in will force a major bump of
 base, because it changes GHC.Event.  The unicode support in the IO
 library was similar: although it only added to the external APIs that
 most people use, it also changed stuff inside GHC.* that we expose for a
 few clients.

 The solution to this would be to split up base further, but of course
 doing that is itself a major upheaval.  However, having done that, it
 might be more feasible to have non-API-breaking releases.


 While it will lead to much wailing and gnashing of teeth in the short
 term, if it's feasible to break GHC.* off into its own package, then I
 think we should. The vast majority of base seems quite stable or else is
 rolling along at a reasonable pace. And yet, every time a new GHC comes
 out, there's a new wave of fiddling the knobs on cabal files because
 nothing really changed. On the other hand, GHC.* moves rather quickly.
 Nevertheless, GHC.* is nice to have around, so we don't want to just
 hide that churning. The impedance mismatch here suggests that they
 really should be separate packages. I wonder whether GHC.* should be
 moved in with ghc-prim, or whether they should remain separate...

 But again, this depends on how feasible it would be to actually split
 the packages apart. Is it feasible?


 So I think we'd need to add another package, call it ghc-base perhaps. The
 reason is that ghc-prim sits below the integer package (integer-simple or
 integer-gmp).

 It's feasible to split base, but to a first approximation what you end up
 with is base renamed to ghc-base, and then the new base contains just stub
 modules that re-export stuff from ghc-base.  In fact, maybe you want to do
 it exactly like this for simplicity - all the code goes in ghc-base.  There
 would be some impact on compilation time, as we'd have twice as many
 interfaces to read.

 I believe Ian has done some experiments with splitting base further, so he
 might have more to add here.

 Cheers,
 Simon





Re: base package (was: GHC 7.8 release?)

2013-02-13 Thread Joachim Breitner
Hi,

Am Mittwoch, den 13.02.2013, 11:34 +0200 schrieb Roman Cheplyaka:
 It would be great to have a portable base, without any GHC-specific
 stuff in it. After all, modules like Control.Monad or Data.Foldable
 are pure Haskell2010.

while you are considering splitting base, please also consider separating
IO out. We can expect Haskell to be compiled to, say, JavaScript or other
targets that are not processes in the usual sense. For these, IO might
not make sense.

Having something below base that provides the pure stuff (common data
structures etc.) would enable libraries to easily say: „My algorithm can
be used in normal programs as well as in programs that are compiled to
JS“ by not depending on base, but on, say, pure-base.
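Concretely, such a library could advertise its portability in its cabal file. A sketch, where pure-base is the hypothetical IO-free package suggested above and the module name is invented:

```
-- my-algorithm.cabal (sketch; 'pure-base' is hypothetical)
library
  exposed-modules: Data.MyAlgorithm
  build-depends:   pure-base >= 1.0 && < 2
  -- no dependency on base, hence no way to reach IO
```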

Greetings,
Joachim

-- 
Joachim nomeata Breitner
Debian Developer
  nome...@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nome...@joachim-breitner.de | http://people.debian.org/~nomeata





Re: GHC 7.8 release?

2013-02-13 Thread Ian Lynagh
On Wed, Feb 13, 2013 at 09:00:15AM +, Simon Marlow wrote:
 
 I believe Ian has done some experiments with splitting base further,
 so he might have more to add here.

There are some sensible chunks that can be pulled out, e.g. Foreign.*
can be pulled out into a separate package fairly easily IIRC.

Part of the problem is that it's hard to see what's possible without
actually doing it, because base is so large, and has so many imports and
import loops. IMO by far the easiest way to improve base would be to
start off by breaking it up into lots of small packages (some of which
will probably be single-module packages, others may contain an entire
hierarchy like Foreign.*, and others may contain an odd mixture of
modules due to dependency problems).

Then we can look at which packages ought to be split up, which ought to
be coalesced, and which ought to be moved higher up or lower down the
dependency tree, and then look at which module imports are preventing
what we want to do and see if there's a way to fix them (either by
moving a definition and reversing an import, or by changing an import to
import something lower down the dependency tree instead).
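To illustrate "moving a definition and reversing an import": if a low-level module imports a high-level one only for a single helper, moving that helper breaks the cycle. A toy sketch with invented module names:

```haskell
-- Before: GHC.Low imported Data.High just for 'helper', forcing
-- Data.High below GHC.Low in the dependency tree:
--
--   module Data.High where
--     helper :: Int -> Int
--     helper = (+ 1)
--
-- After: 'helper' moves into GHC.Low and the import is reversed,
-- so Data.High can sit higher up:
module GHC.Low (helper) where

helper :: Int -> Int
helper = (+ 1)

-- module Data.High where
--   import GHC.Low (helper)  -- the import now points downwards
```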

If we go this route, then we would probably want to end up without a
package called 'base', and then to make a new package called 'base' that
just re-exports modules from all the new packages. I imagine the first
release would let people use the new base without warnings, a year later
new base would give deprecated warnings, and the following year we'd
remove it. We could do this process slower, but the sooner packages move
off of base, the sooner they benefit from fewer major version bumps.

The advantages would be:
* the new packages would be easier to maintain than base is
* we could more easily make other improvements we'd like to make, e.g.
  we could move the unix and Win32 packages further down the tree
  without having to do it in one big leap, and without having to put
  them below the whole of base
* if one module causes a major version bump, then only libraries using
  that functionality would need to relax their dependencies, rather than
  every single package
* some targets (JS, JVM, .NET, etc) or other implementations might want
  to do things like IO, concurrency, etc, completely differently. This
  way they'd just use a different io/concurrency package, rather than
  having to have a different implementation of parts of base
* it would be nice if pure libraries could simply not depend on the io
  package etc, and thus clearly do no IO

The disadvantage is that, at some point between the first release and
the release that removes base, each package will have to have its
dependencies updated.
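GHC's module-level DEPRECATED pragma would fit the middle year of that schedule: the stub keeps working, but every import of it warns. A sketch, reusing the hypothetical io package:

```haskell
{-# LANGUAGE PackageImports #-}

-- Year-two stub in 'base': still re-exports, but importing it warns.
module System.IO
  {-# DEPRECATED "Depend on the io package and import System.IO from there" #-}
  (module X) where

import "io" System.IO as X
```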


Thanks
Ian




RE: GHC 7.8 release?

2013-02-13 Thread Simon Peyton-Jones
I would like to see base split up, but I am reluctant to invest the effort to 
do it.   I'm not going to do it myself, and Ian's time is so precious on many 
fronts.  

My guess is that it'll need someone else to take the lead, with advice from HQ.

Simon

|  -Original Message-
|  From: ghc-devs-boun...@haskell.org [mailto:ghc-devs-boun...@haskell.org] On
|  Behalf Of Ian Lynagh
|  Sent: 13 February 2013 13:58
|  To: Simon Marlow
|  Cc: wren ng thornton; glasgow-haskell-users; ghc-d...@haskell.org
|  Subject: Re: GHC 7.8 release?
|  
|  On Wed, Feb 13, 2013 at 09:00:15AM +, Simon Marlow wrote:
|  
|   I believe Ian has done some experiments with splitting base further,
|   so he might have more to add here.
|  
|  There are some sensible chunks that can be pulled out, e.g. Foreign.*
|  can be pulled out into a separate package fairly easily IIRC.
|  
|  Part of the problem is that it's hard to see what's possible without
|  actually doing it, because base is so large, and has so many imports and
|  import loops. IMO by far the easiest way to improve base would be to
|  start off by breaking it up into lots of small packages (some of which
|  will probably be single-module packages, others may contain an entire
|  hierarchy like Foreign.*, and others may contain an odd mixture of
|  modules due to dependency problems).
|  
|  Then we can look at which packages ought to be split up, which ought to
|  be coalesced, and which ought to be moved higher up or lower down the
|  dependency tree, and then look at which module imports are preventing
|  what we want to do and see if there's a way to fix them (either by
|  moving a definition and reversing an import, or by changing an import to
|  import something lower down the dependency tree instead).
|  
|  If we go this route, then we would probably want to end up without a
|  package called 'base', and then to make a new package called 'base' that
|  just re-exports modules from all the new packages. I imagine the first
|  release would let people use the new base without warnings, a year later
|  new base would give deprecated warnings, and the following year we'd
|  remove it. We could do this process slower, but the sooner packages move
|  off of base, the sooner they benefit from fewer major version bumps.
|  
|  The advantages would be:
|  * the new packages would be easier to maintain than base is
|  * we could more easily make other improvements we'd like to make, e.g.
|we could move the unix and Win32 packages further down the tree
|without having to do it in one big leap, and without having to put
|them below the whole of base
|  * if one module causes a major version bump, then only libraries using
|that functionality would need to relax their dependencies, rather than
|every single package
|  * some targets (JS, JVM, .NET, etc) or other implementations might want
|to do things like IO, concurrency, etc, completely differently. This
|way they'd just use a different io/concurrency package, rather than
|having to have a different implementation of parts of base
|  * it would be nice if pure libraries could simply not depend on the io
|package etc, and thus clearly do no IO
|  
|  The disadvantage is that, at some point between the first release and
|  the release that removes base, each package will have to have its
|  dependencies updated.
|  
|  
|  Thanks
|  Ian
|  
|  
|  ___
|  ghc-devs mailing list
|  ghc-d...@haskell.org
|  http://www.haskell.org/mailman/listinfo/ghc-devs



Re: base package (was: GHC 7.8 release?)

2013-02-13 Thread Stephen Paul Weber

Somebody claiming to be Roman Cheplyaka wrote:

* Simon Marlow marlo...@gmail.com [2013-02-13 09:00:15+]

It's feasible to split base, but to a first approximation what you
end up with is base renamed to ghc-base, and then the new base
contains just stub modules that re-export stuff from ghc-base.


It would be great to have a portable base, without any GHC-specific
stuff in it. After all, modules like Control.Monad or Data.Foldable are
pure Haskell2010.


+1

--
Stephen Paul Weber, @singpolyma
See http://singpolyma.net for how I prefer to be contacted
edition right joseph




base package (Was: GHC 7.8 release?)

2013-02-13 Thread Joachim Breitner
Hi,

Am Mittwoch, den 13.02.2013, 13:58 + schrieb Ian Lynagh:
 If we go this route, then we would probably want to end up without a
 package called 'base', and then to make a new package called 'base'
 that just re-exports modules from all the new packages.

can you transparently re-export a module from another package? I.e. if
base depends on io, and io provides System.IO, is there a way for base to
tell GHC to pretend that System.IO is in base, but that there is no
conflict if io happens to be un-hidden as well?

It seems that something like this would be required to move modules from
base to something below it without breaking existing code.

Also, if it works that smoothly, this would not have to be one big
reorganization, but could be done piece by piece.

 The disadvantage is that, at some point between the first release and
 the release that removes base, each package will have to have its
 dependencies updated.

Why remove base? If it is just a list of dependencies and a list of
modules to be re-exported, then keeping it (while advocating that it should
not be used) should not be too much of a burden.


(This is assuming that the reorganizing should not change existing
module names. If your plan was to give the modules new names, this
problem does not exist, but I’d rather prefer the less intrusive
approach.)

Greetings,
Joachim



-- 
Joachim nomeata Breitner
Debian Developer
  nome...@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nome...@joachim-breitner.de | http://people.debian.org/~nomeata





Re: base package (Was: GHC 7.8 release?)

2013-02-13 Thread Ian Lynagh
On Wed, Feb 13, 2013 at 06:28:22PM +0100, Joachim Breitner wrote:
 
 Am Mittwoch, den 13.02.2013, 13:58 + schrieb Ian Lynagh:
  If we go this route, then we would probably want to end up without a
  package called 'base', and then to make a new package called 'base'
  that just re-exports modules from all the new packages.
 
 can you transparently re-export a module from another package? I.e. if
 base depends on io, IO provides System.IO, is there a way for base to
 tell ghc to pretend that System.IO is in base, but that there is no
 conflict if io happens to be un-hidden as well.

No. But there are currently no packages that depend on both base and io,
and anyone adding a dependency on io would remove the base dependency at
the same time.

 It seems that something like this would be required to move modules from
 base to something below it without breaking existing code.

I don't see why that's necessary. base would end up containing a load of
modules that look something like

{-# LANGUAGE PackageImports #-}
module System.IO (module X) where

import "io" System.IO as X

 Also, if it works that smooth, this would not have to be one big
 reorganization, but could be done piece by piece.

It's tricky to do it piece by piece. It's hard to remove individual
sensible pieces in the first place, and it means that you can't
subsequently move modules between packages later without breaking code
depending on the new packages.

  The disadvantage is that, at some point between the first release and
  the release that removes base, each package will have to have its
  dependencies updated.
 
 Why remove base? If it is just a list of dependencies and list of
 modules to be re-exported, then keeping it (but advocate that it should
 not be used) should not be too much a burden.

* Any package using it doesn't benefit from the reduced version bumps,
  so we do actually want packages to move away from it

* Even though base (probably) wouldn't require a lot of work at any one
  time, it would require a little work every now and again, and that
  adds up to a lot of work

* Any time a module is added to one of the new packages, either we'd
  have to spend time adding it to base too, or packages continuing to
  use base wouldn't (easily) be able to use that new module.

 (This is assuming that the reorganizing should not change existing
 module names. If your plan was to give the modules new names, this
 problem does not exist, but I’d rather prefer the less intrusive
 approach.)

The odd module might be renamed, and there will probably be a handful of
definitions that move from one module to another, but for the most part
I was envisaging that we'd end up with the same modules exporting the
same things.


Thanks
Ian




Re: base package (Was: GHC 7.8 release?)

2013-02-13 Thread Joachim Breitner
Hi,

I have started a wikipage with the list of all modules from base, for a
first round of shuffling, grouping and brainstorming:

http://hackage.haskell.org/trac/ghc/wiki/SplitBase


Am Mittwoch, den 13.02.2013, 18:09 + schrieb Ian Lynagh:
 On Wed, Feb 13, 2013 at 06:28:22PM +0100, Joachim Breitner wrote:
  Am Mittwoch, den 13.02.2013, 13:58 + schrieb Ian Lynagh:
   If we go this route, then we would probably want to end up without a
   package called 'base', and then to make a new package called 'base'
   that just re-exports modules from all the new packages.
  
  can you transparently re-export a module from another package? I.e. if
  base depends on io, IO provides System.IO, is there a way for base to
  tell ghc to pretend that System.IO is in base, but that there is no
  conflict if io happens to be un-hidden as well.
 
 No. But there are currently no packages that depend on both base and io,
 and anyone adding a dependency on io would remove the base dependency at
 the same time.

hmm, that reminds me of how haskell98 was handled, and it was slightly
annoying when haskell98 and base eventually were made to conflict, and
we had to patch some unmaintained packages.

Ok, in this case io would be introduced with the intention of being used
instead of base, never alongside it. So as long as we make sure that the
set of modules exported by base is always the union of all modules provided
by packages that have any module in common with base, this would be fine.

(Why this condition? Imagine io adding IO.GreatModule without base also
providing the module. Then a program that still uses base cannot use
IO.GreatModule without fixing the dependencies _now_ (or using package
imports everywhere). It would be nice if library authors were able to make
the change whenever convenient.)

  Also, if it works that smooth, this would not have to be one big
  reorganization, but could be done piece by piece.
 
 It's tricky to do it piece by piece. It's hard to remove individual
 sensible pieces in the first place, and it means that you can't
 subsequently move modules between packages later without breaking code
 depending on the new packages.

Agreed.

   The disadvantage is that, at some point between the first release and
   the release that removes base, each package will have to have its
   dependencies updated.
  
  Why remove base? If it is just a list of dependencies and list of
  modules to be re-exported, then keeping it (but advocate that it should
  not be used) should not be too much a burden.
 
 * Any package using it doesn't benefit from the reduced version bumps,
   so we do actually want packages to move away from it

We want them to do so, but we should not force them (most surely will move eventually...)

 * Even though base (probably) wouldn't require a lot of work at any one
   time, it would require a little work every now and again, and that
   adds up to a lot of work

Hopefully it is just updating the set of modules to be exported; that sounds
like it could be automated, given a list of packages.

 * Any time a module is added to one of the new packages, either we'd
   have to spend time adding it to base too, or packages continuing to
   use base wouldn't (easily) be able to use that new module.

Hence we should add them; shouldn’t be too much work.


After every major change to base I am forced to touch old, hardly
maintained code that I do not know, to get the packages working in
Debian again. Hence my plea for staying compatible as long as feasible.

Greetings,
Joachim

-- 
Joachim nomeata Breitner
Debian Developer
  nome...@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nome...@joachim-breitner.de | http://people.debian.org/~nomeata





Re: base package (Was: GHC 7.8 release?)

2013-02-13 Thread Stephen Paul Weber

Somebody signing messages as Joachim Breitner wrote:

I have started a wikipage with the list of all modules from base, for a
first round of shuffling, grouping and brainstorming:

http://hackage.haskell.org/trac/ghc/wiki/SplitBase


Looks like a good start!

Here's an idea: why not use the `haskell2010` package as one of the 
groupings?  It seems like this sort of reorganisation could help solve the 
problem we currently have, where one cannot use any of the features of 
`base` along with the `haskell2010` modules.


--
Stephen Paul Weber, @singpolyma
See http://singpolyma.net for how I prefer to be contacted
edition right joseph




Re: base package (Was: GHC 7.8 release?)

2013-02-13 Thread Felipe Almeida Lessa
On Wed, Feb 13, 2013 at 4:32 PM, Joachim Breitner
m...@joachim-breitner.de wrote:
 No. But there are currently no packages that depend on both base and io,
 and anyone adding a dependency on io would remove the base dependency at
 the same time.

 hmm, that reminds me of how haskell98 was handled, and it was slightly
 annoying when haskell98 and base eventually were made to conflict, and
 we had to patch some unmaintained packages.

 Ok, in this case io would be introduced with the intention of being used
 exclusive from base. So as long as we make sure that the set of modules
 exported by base is always the union of all modules provided by package
 that have any module in common with base this would be fine.

 (Why this condition? Imagine io adding IO.GreatModule without base also
 providing the module. Then a program that still uses base cannot use
 IO.GreatModule without fixing the dependencies _now_ (or using package
 imports everywhere). It would be nice if library authors allowed to do
 the change whenever convenient.)

There should also be a condition stating that base should only
re-export modules, and that those re-exported modules keep the same
names as in the underlying packages.  This condition guarantees that the
only thing you need to change is the import list, and even this change
could be (at least partially) automated via a tool that takes all your
imports and decides which new packages export them.
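Such a tool would mostly be a lookup from module name to new package. A minimal sketch in Haskell, where the module-to-package table is invented for illustration:

```haskell
import Data.List  (isPrefixOf, nub, stripPrefix)
import Data.Maybe (listToMaybe, mapMaybe)

-- Invented table: which new package would export which module prefix.
table :: [(String, String)]
table = [ ("System.IO",  "io")
        , ("Foreign.",   "foreign")
        , ("Data.List",  "list")
        ]

-- Map one imported module to the package that now exports it.
packageFor :: String -> Maybe String
packageFor m =
  listToMaybe [ pkg | (prefix, pkg) <- table, prefix `isPrefixOf` m ]

-- Collect the new build-depends list from a module's import lines.
newDeps :: [String] -> [String]
newDeps srcLines = nub (mapMaybe dep srcLines)
  where
    dep l = stripPrefix "import " l >>= packageFor . takeWhile (/= ' ')
```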

-- 
Felipe.



Re: base package (Was: GHC 7.8 release?)

2013-02-13 Thread Ian Lynagh
On Wed, Feb 13, 2013 at 07:32:06PM +0100, Joachim Breitner wrote:
 
 I have started a wikipage with the list of all modules from base, for a
 first round of shuffling, grouping and brainstorming:
 
 http://hackage.haskell.org/trac/ghc/wiki/SplitBase

Great, thanks for taking the lead on this!

The disadvantage is that, at some point between the first release and
the release that removes base, each package will have to have its
dependencies updated.
   
   Why remove base? If it is just a list of dependencies and list of
   modules to be re-exported, then keeping it (but advocate that it should
   not be used) should not be too much a burden.
  
  * Any package using it doesn't benefit from the reduced version bumps,
so we do actually want packages to move away from it
 
 We want them to do so. We should not force them (most surely will...)

A lot of packages won't react until something actually breaks.

(and I suspect many are unmaintained and unused, and won't react even
once it does break).

  * Even though base (probably) wouldn't require a lot of work at any one
time, it would require a little work every now and again, and that
adds up to a lot of work
 
 Hopefully it is just updating the set of modules to be exported, sounds
 like it could be automated, given a list of packages.
 
  * Any time a module is added to one of the new packages, either we'd
have to spend time adding it to base too, or packages continuing to
use base wouldn't (easily) be able to use that new module.
 
 Hence we should add them; shouldn’t be too much work.

I realised that there's actually no reason that the new 'base' package
has to come with GHC (even immediately after the break-up); it can just
be a package on Hackage (and, if desired, in the Haskell Platform).

So it could easily be maintained by someone else, and thus be not much
work for you, and 0 work for me  :-)


Thanks
Ian




[Haskell] Call for Papers (ICFEM 2013)

2013-02-13 Thread Huibiao Zhu
ICFEM 2013 CALL FOR PAPERS

 

 

15th International Conference on Formal Engineering Methods (ICFEM 2013)

 

Queenstown, New Zealand, 29 October - 1 November 2013

 

http://www.cs.auckland.ac.nz/icfem2013/

 

 

The 15th International Conference on Formal Engineering Methods (ICFEM 2013)
will be held at the Crowne Plaza Hotel in Queenstown, New Zealand from 29
October to 1 November 2013. Since 1997, ICFEM has been serving as an
international forum for researchers and practitioners who have been
seriously applying formal methods to practical applications. Researchers and
practitioners, from industry, academia, and government, are encouraged to
attend, and to help advance the state of the art. We are interested in work
that has been incorporated into real production systems, and in theoretical
work that promises to bring practical and tangible benefit.

 

ICFEM 2013 is organized and sponsored by The University of Auckland and will
be held in the world renowned travel destination - Queenstown. Around 1.9
million visitors are drawn to Queenstown each year to enjoy their own
unforgettable travel experience. We are looking forward to your submissions
and participation.

 

SCOPE AND TOPICS

 

Submissions related to the following principal themes are encouraged, but
any topics relevant to the field of formal methods and their practical
applications will also be considered.

 

+ Abstraction and refinement

+ Formal specification and modeling

+ Program analysis

+ Software verification

+ Software model checking

+ Formal approaches to software testing

+ Formal methods for self-adaptive systems

+ Formal methods for object and component systems

+ Formal methods for concurrent and real-time systems

+ Formal methods for cloud computing and cyber-physical systems

+ Formal methods for software safety, security, reliability and
dependability

+ Tool development, integration and experiments involving verified systems

+ Formal methods used in certifying products under international standards

+ Formal model-based development and code generation

 

SUBMISSION AND PUBLICATION

 

Submissions to the conference must not have been published or be
concurrently considered for publication elsewhere. All submissions will be
judged on the basis of originality, contribution to the field, technical and
presentation quality, and relevance to the conference. The proceedings will
be published in the Springer Lecture Notes in Computer Science series.

 

Papers should be written in English and not exceed 16 pages in LNCS format
(see http://www.springer.de/comp/lncs/authors.html for details). Submission
should be made through the ICFEM 2013 submission page
(https://www.easychair.org/conferences/?conf=icfem2013), handled by the
EasyChair conference management system.

 

IMPORTANT DATES

 

Abstract Submissions Due: 15 April 2013

Full Paper Submissions Due: 22 April 2013

Acceptance Notification: 18 June 2013

Camera-ready Papers Due: 15 July 2013

 

ORGANIZING COMMITTEE

 

General Co-Chairs

Jin Song Dong, National University of Singapore, Singapore.

Ian Hayes, The University of Queensland, Australia.

Steve Reeves, The University of Waikato, New Zealand.

 

Program Committee Co-Chairs

Lindsay Groves, Victoria University of Wellington, New Zealand.

Jing Sun, The University of Auckland, New Zealand.

 

Workshop and Tutorial Co-Chairs

Yang Liu, Nanyang Technological University, Singapore.

Jun Sun, Singapore University of Technology and Design, Singapore.

 

Local Organization Chair

Gillian Dobbie, The University of Auckland, New Zealand.

 

Publicity Co-Chairs

Jonathan Bowen, London South Bank University & Chairman, Museophile Limited,
United Kingdom.

Huibiao Zhu, East China Normal University, China.

 

PROGRAM COMMITTEE

 

Bernhard K. Aichernig, Graz University of Technology, Austria.

Yamine Ait Ameur, LISI/ENSMA, France.

Keijiro Araki, Kyushu University, Japan.

Farhad Arbab, CWI and Leiden University, The Netherlands.

Richard Banach, University of Manchester, United Kingdom.

Nikolaj Bjorner, Microsoft Research Redmond, USA.

Jonathan Bowen, London South Bank University & Chairman, Museophile Limited,
United Kingdom.

Michael Butler, University of Southampton, United Kingdom.

Andrew Butterfield, Trinity College Dublin, Ireland.

Wei-Ngan Chin, National University of Singapore, Singapore.

Jim Davies, University of Oxford, United Kingdom.

Jin Song Dong, National University of Singapore, Singapore

Zhenhua Duan, Xidian University, China.

Colin Fidge, Queensland University of Technology, Australia.

John Fitzgerald, Newcastle University, United Kingdom.

Joaquim Gabarro, Universitat Politecnica de Catalunya, Spain.

Stefania Gnesi, ISTI-CNR, Italy.

Radu Grosu, State University of New York at Stony Brook, USA.

Lindsay Groves, Victoria University of Wellington, New Zealand.

Ian Hayes, University of Queensland, Australia.

Mike Hinchey, Lero, Ireland.

Peter Gorm Larsen, Engineering College of Aarhus, 

[Haskell] Haskell Weekly News: Issue 258

2013-02-13 Thread Daniel Santa Cruz
Welcome to issue 258 of the HWN, an issue covering crowd-sourced bits
of information about Haskell from around the web. This issue covers the
week of February 03 to 09, 2013.

Quotes of the Week

   * shachaf: The trouble with the lens rabbit hole is that there are a
   few of us here at the bottom, digging.

   * monochrom: a monad is like drinking water from a bottle without
   human mouth touching bottle mouth

Top Reddit Stories

   * GHCi 7.4.2 is finally working on ARM
 Domain: luzhuomi.blogspot.com, Score: 49, Comments: 15
 On Reddit: [1] http://goo.gl/yBh9o
 Original: [2] http://goo.gl/Kgx9R

   * Implementation of a Java Just In Time Compiler in Haskell
 Domain: blog.wien.tomnetworks.com, Score: 41, Comments: 15
 On Reddit: [3] http://goo.gl/BjpX3
 Original: [4] http://goo.gl/f1Qrh

   * I/O is pure
 Domain: chris-taylor.github.com, Score: 32, Comments: 41
 On Reddit: [5] http://goo.gl/pYMXz
 Original: [6] http://goo.gl/9xeKd

   * Introduction to Haskell, Lecture 4 is Live (Pattern matching and
Guards)
 Domain: shuklan.com, Score: 28, Comments: 7
 On Reddit: [7] http://goo.gl/wvFO6
 Original: [8] http://goo.gl/sKF5C

   * What makes comonads important?
 Domain: self.haskell, Score: 20, Comments: 41
 On Reddit: [9] http://goo.gl/CqS7Z
 Original: [10] http://goo.gl/CqS7Z

   * OdHac, the Haskell Hackathon in Odessa
 Domain: haskell.org, Score: 18, Comments: 2
 On Reddit: [11] http://goo.gl/ZumYI
 Original: [12] http://goo.gl/Pfuaq

   * NYC Haskell Meetup Video: Building a Link Shortener with Haskell and
Snap
 Domain: vimeo.com, Score: 18, Comments: 3
 On Reddit: [13] http://goo.gl/j42PG
 Original: [14] http://goo.gl/RRgsl

   * Evaluation-State Assertions in Haskell
 Domain: joachim-breitner.de, Score: 18, Comments: 6
 On Reddit: [15] http://goo.gl/lMMT1
 Original: [16] http://goo.gl/7eLI5

   * A post about computational classical type theory
 Domain: queuea9.wordpress.com, Score: 17, Comments: 14
 On Reddit: [17] http://goo.gl/r8DOO
 Original: [18] http://goo.gl/UIxIY

   * parellella has a forum board for haskell. anyone working in this
direction?
 Domain: forums.parallella.org, Score: 13, Comments: 5
 On Reddit: [19] http://goo.gl/BSfRd
 Original: [20] http://goo.gl/qLnG0

   * Haskell Lectures - University of Virginia (Nishant Shukla)
 Domain: shuklan.com, Score: 13, Comments: 3
 On Reddit: [21] http://goo.gl/CxTPI
 Original: [22] http://goo.gl/JkhXz

   * NYC Haskell Meetup Video: Coding and Reasoning with Purity, Strong
Types and Monads
 Domain: vimeo.com, Score: 13, Comments: 2
 On Reddit: [23] http://goo.gl/MmGZb
 Original: [24] http://goo.gl/Vj63w

   * Introducing fixed-points via Haskell
 Domain: apfelmus.nfshost.com, Score: 9, Comments: 34
 On Reddit: [25] http://goo.gl/UuCc1
 Original: [26] http://goo.gl/SCVJc

Top StackOverflow Questions

   * Is spoon unsafe in Haskell?
 votes: 19, answers: 2
 Read on SO: [27] http://goo.gl/6RDku

   * What general structure does this type have?
 votes: 17, answers: 4
 Read on SO: [28] http://goo.gl/2GV0V

   * Why aren't Nums Ords in Haskell?
 votes: 14, answers: 2
 Read on SO: [29] http://goo.gl/8Hrkn

   * Why does cabal install reinstall packages already in .cabal/lib
 votes: 12, answers: 1
 Read on SO: [30] http://goo.gl/yMXCh

   * Free Monad of a Monad
 votes: 12, answers: 2
 Read on SO: [31] http://goo.gl/kpV2q

   * Haskell pattern match “diverge” and ⊥
 votes: 11, answers: 2
 Read on SO: [32] http://goo.gl/VeRUJ

   * How can one make a private copy of Hackage
 votes: 11, answers: 2
 Read on SO: [33] http://goo.gl/ZNAwL

   * Recursion Schemes in Agda
 votes: 9, answers: 1
 Read on SO: [34] http://goo.gl/xtmqr

Until next time,
+Daniel Santa Cruz

References

   1.
http://luzhuomi.blogspot.com/2013/02/ghci-742-is-finally-working-on-arm.html
   2.
http://www.reddit.com/r/haskell/comments/17zxgj/ghci_742_is_finally_working_on_arm/
   3. http://blog.wien.tomnetworks.com/2013/02/06/thesis/
   4.
http://www.reddit.com/r/haskell/comments/17zqgr/implementation_of_a_java_just_in_time_compiler_in/
   5. http://chris-taylor.github.com/
   6. http://www.reddit.com/r/haskell/comments/1874at/io_is_pure/
   7. http://shuklan.com/haskell/lec04.html
   8.
http://www.reddit.com/r/haskell/comments/17y18q/introduction_to_haskell_lecture_4_is_live_pattern/
   9.
http://www.reddit.com/r/haskell/comments/185k0s/what_makes_comonads_important/
  10.
http://www.reddit.com/r/haskell/comments/185k0s/what_makes_comonads_important/
  11.
http://www.haskell.org/pipermail/haskell-cafe/2013-February/106216.html
  12.
http://www.reddit.com/r/haskell/comments/17xxyf/odhac_the_haskell_hackathon_in_odessa/
  13. http://vimeo.com/59109358
  14.
http://www.reddit.com/r/haskell/comments/185la5/nyc_haskell_meetup_video_building_a_link/
  15.

[Haskell] ANNOUNCE: graphviz-2999.16.0.0

2013-02-13 Thread Ivan Lazar Miljenovic
I'm pleased to announce that I've finally tracked down all the pesky
little bugs that prevented me from releasing version 2999.16.0.0 of my
graphviz library, that helps you use the Graphviz suite of tools to
visualise graphs.

The important changes are:

* Add support for Graphviz-2.30.0:

- New attributes:

+ `Area`
+ `Head_LP`
+ `LayerListSep`
+ `LayerSelect`
+ `Tail_LP`
+ `XLP`

- `BgColor`, `Color` and `FillColor` now take a list of colors
  with optional weightings.

- Layer handling now supports layer sub-ranges.

- Added the voronoi-based option to `Overlap`.

- Added the `Striped` and `Wedged` styles.

* Updates to attributes and their values:

- The following attributes have had their values changed to better
  reflect what they represent:

+ `FontPath` takes a `Path` value.

+ `Layout` takes a `GraphvizCommand` (and thus
  `GraphvizCommand` has been moved to
  `Data.GraphViz.Attributes.Complete`).

- Added `MDS` to `Model` (which had somehow been overlooked).

- Various attributes now have defaults added/updated/removed if
  wrong.

- Removed the following deprecated attributes:

+ `ShapeFile`

+ `Z`

* Now any value that has a defined default can be parsed when the Dot
  code just has `attribute=` (which `dot -Tcanon` is fond of doing
  to reset attributes).

* `GraphvizCommand` now defines `Sfdp` (which had somehow been
  overlooked up till now).

* The `canonicalise` and related functions have been re-written;
  whilst there might be some cases where their behaviour does not
  fully match how `dot -Tcanon` and `tred` behave (due to the
  interaction of various attributes), the new implementation is much
  more sane.

* Use temporary files rather than pipes when running dot, etc.

Makes it more portable, and also avoids issues where dot, etc. use
100% CPU when a graph is passed in via stdin.

Original patch by Henning Thielemann.


-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread wren ng thornton

On 2/12/13 5:47 PM, Nehal Patel wrote:

And so my question is, that in 2013, why isn't this process a full fledged part 
of the language? I imagine I just don't know what I'm talking about, so correct 
me if I'm wrong, but this is how I understand the workflow used in practice 
with program derivation:  1) state the problem pedantically in code, 2) work 
out a bunch of proofs with pen and paper, 3) and then translate that back into 
code, perhaps leaving you with function_v1, function_v2, function_v3, etc   -- 
that seems fine for 1998, but why is it still being done this way?


I think there's a lot more complexity in (machine-verifiably) proving 
things than you realize. I've done a fair amount of programming in Coq 
and a bit in Twelf and Agda, and one of the things I've learned is that 
the kinds of proof we do on paper are rarely rigorous and are almost 
never spelled out in full detail. That is, after all, not the point of a 
pen & paper proof. Pen & paper proofs are about convincing humans that 
some idea makes sense, and humans are both reasonable and gullible. 
Machine-verified proofs, on the other hand, are all about walking a 
naive computer through every possible contingency and explaining how and 
why things must be the case even in the worst possible world. There's no 
way to tell the compiler "you know what I mean", or "the other cases are 
similar", or "left as an exercise for the reader". And it's not until 
trying to machine-formalize things that we realize how often we say 
things like that on paper. Computers are very bad at generalizing 
patterns and filling in the blanks; but they're very good at 
exhaustively enumerating contingencies. So convincing a computer is 
quite the opposite from convincing a human.
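
A tiny illustration (Lean 4 syntax here, but any proof assistant shows the
same thing): a human happily says "0 + n = n, obviously", while the machine
insists the induction be spelled out in full.

```lean
-- "Obviously" 0 + n = n -- but with addition defined by recursion on
-- the second argument, the machine demands the full induction.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0
  | succ n ih => rw [Nat.add_succ, ih]  -- step: push succ out, use IH
```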



That said, I'm all for getting more theorem proving goodness into 
Haskell. I often lament the fact that there's no way to require proof 
that instances obey the laws of a type class, and that GHC's rewrite 
rules don't require any proof that the rule is semantics preserving. The 
big question is, with things as they are today, does it make more sense 
to take GHC in the direction of fully-fledged dependent types? or does 
it make more sense to work on integration with dedicated tools for 
proving things?


There's already some work towards integration. Things like Yices and SBV 
can be used to prove many things, though usually not so much about 
programs per se. Whereas Liquid Haskell[1] is working specifically 
toward automated proving of preconditions and postconditions.



[1] http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/

--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Ticking time bomb

2013-02-13 Thread Alfredo Di Napoli
+1 for keeping this alive.
Apart from the initial hype, this issue is now slowly losing attention, but
I think we should always keep in mind the risk we are exposed to.
I know I will sound pessimistic, but we should learn from our competitors'
mistakes :)

Cheers,
A.

On 12 February 2013 08:49, Bob Ippolito b...@redivi.com wrote:

 The Python and Ruby communities are actively working on improving the
 security of their packaging infrastructure. I haven't paid close attention
 to any of the efforts so far, but anyone working on cabal/hackage security
 should probably take a peek. I lurk on Python's catalog-sig list and here's
 the interesting bits I've noticed from the past few weeks:

 [Catalog-sig] [Draft] Package signing and verification process
 http://mail.python.org/pipermail/catalog-sig/2013-February/004832.html

 [Catalog-sig] [DRAFT] Proposal for fixing PyPI/pip security
 http://mail.python.org/pipermail/catalog-sig/2013-February/004994.html

 Python PyPi Security Working Document:

 https://docs.google.com/document/d/1e3g1v8INHjHsUJ-Q0odQOO8s91KMAbqLQyqj20CSZYA/edit

 Rubygems Threat Model:
 http://mail.python.org/pipermail/catalog-sig/2013-February/005099.html

 https://docs.google.com/document/d/1fobWhPRqB4_JftFWh6iTWClUo_SPBnxqbBTdAvbb_SA/edit

 TUF: The Update Framework
 https://www.updateframework.com/



 On Fri, Feb 1, 2013 at 4:07 AM, Christopher Done chrisd...@gmail.comwrote:

 Hey dude, it looks like we made the same project yesterday:


 http://www.reddit.com/r/haskell/comments/17njda/proposal_a_trivial_cabal_package_signing_utility/

 Yours is nice as it doesn't depend on GPG. Although that could be a
 nice thing because GPG manages keys. Dunno.

 Another diff is that mine puts the .sig inside the .tar.gz, yours puts
 it separate.

 =)

 On 31 January 2013 09:11, Vincent Hanquez t...@snarc.org wrote:
  On 01/30/2013 07:27 PM, Edward Z. Yang wrote:
 
  https://status.heroku.com/incidents/489
 
  Unsigned Hackage packages are a ticking time bomb.
 
  I agree this is terrible, I've started working on this, but this is
 quite a
  bit of work and other priorities always pop up.
 
  https://github.com/vincenthz/cabal
  https://github.com/vincenthz/cabal-signature
 
  My current implementation generate a manifest during sdist'ing in
 cabal, and
  have cabal-signature called by cabal on the manifest to create a
  manifest.sign.
 
  The main issue i'm facing is how to create a Web of Trust for doing all
 the
  public verification bits.
 
  --
  Vincent
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Austin Seipp
On Tue, Feb 12, 2013 at 4:47 PM, Nehal Patel nehal.a...@gmail.com wrote:

 A few months ago I took the Haskell plunge, and all goes well...
 ... snip ...
 And so my question is, that in 2013, why isn't this process a full fledged 
 part of the language? I imagine I just don't know what I'm talking about, so 
 correct me if I'm wrong, but this is how I understand the workflow used in 
 practice with program derivation:  1) state the problem pedantically in code, 
 2) work out a bunch of proofs with pen and paper, 3) and then translate that 
 back into code, perhaps leaving you with function_v1, function_v2, 
 function_v3, etc   -- that seems fine for 1998, but why is it still being 
 done this way?

There is no one-stop shop for this sort of stuff. There are plenty of
ways to show equivalence properties for certain classes of programs.

I am unsure why you believe people tend to use pen and paper -
machines are actually pretty good at this stuff! For example, I have
been using Cryptol lately, which is a functional language from Galois
for cryptography. One feature it has is equivalence checking: you can
ask if two functions are provably equivalent. This is used in the
examples to prove that a naive Rijndael implementation is equivalent
to an optimized AES implementation.

You might be asking why you can't do this for Haskell. Well, you can -
kind of. There is a library called SBV which is very similar in spirit
to Cryptol but expressed as a Haskell DSL (and not really for Crypto
specifically.) All SBV programs are executable as Haskell programs -
but we can also compile them to C, and prove properties about them by
using an SMT solver. This is 'free' and requires no programmer
intervention beyond stating the property. So we can specify faster
implementations, and properties that show they're equivalent. SMT
solvers have a very low cost and are potentially very useful for
certain problems.
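
To make the idea concrete without pulling in an SMT solver: here is a
plain-Haskell sketch (the names are mine, not SBV's API) of checking that a
naive specification and an "optimized" variant agree. SBV or Cryptol would
prove this symbolically for *all* inputs; this sketch only spot-checks a
finite domain.

```haskell
import Data.List (foldl')

-- Naive specification: sum the numbers one by one.
sumToSpec :: Int -> Int
sumToSpec n = foldl' (+) 0 [1 .. n]

-- "Optimized" variant: Gauss's closed form.
sumToFast :: Int -> Int
sumToFast n = n * (n + 1) `div` 2

-- An SMT-based tool proves equivalence for every input; here we merely
-- enumerate a small domain exhaustively.
equivalentOnDomain :: Bool
equivalentOnDomain = all (\n -> sumToSpec n == sumToFast n) [0 .. 1000]

main :: IO ()
main = print equivalentOnDomain  -- prints True
```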

So there's a lot of good approaches to tackling these problems in
certain domains, some of them more automated than others. You can't
really provide constructive 'proofs' like you do in Agda. The language
just isn't meant for it, and isn't really expressive enough for it,
even when GHC features are heavily abused.

 What I'm asking about might sound related to theorem provers, -- but if so ,I 
 feel like what I'm thinking about is not so much the very difficult problem 
 of automated proofs or even proof assistants, but the somewhat simpler task 
 of proof verification. Concretely, here's a possibility of how I imagine   
 the workflow could be:

I believe you are actually precisely talking about theorem provers, or
at least this is the impression I get. The reason for that is the next
part:

 ++ in code, pedantically setup the problem.
 ++ in code, state a theorem, and prove it -- this would require a revision to 
 the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a 
 -structured- proof syntax that is easy for humans to construct and understand 
 -- in particular it would possible to easily reuse previous theorems and 
 leave out various steps.  At compile time, the compiler would check that the 
 proof was correct, or perhaps complain that it can't see how I went from step 
 3 to step 4, in which case I might have to add in another step (3b) to  help 
 the compiler verify the proof.
 ++ afterwards, I would use my new theorems to create semantically identical 
 variants of my original functions (again this process would be integrated 
 into the language)

Because people who write things in theorem provers reuse things! Yes,
many large programs in Agda and Coq for example use external libraries
of reusable proofs. An example of this is Agda's standard library for
a small case, or CoRN for Coq in the large. And the part about
'leaving things out' and filling them in if needed - well, both of
these provers have had implicit arguments for quite a while. But you
can't really make a proof out of axioms and letting the compiler
figure out everything. It's just not how those tools in particular
work.

I find it very difficult to see the difference between what you are
proposing and what people are doing now - it's often possible to give
proof of something regardless of the underlying object. You may prove
some law holds over a given structure for example, and then show some
other definitive 'thing' is classified by that structure. An example is
a monoid, which has an associative binary operation. The /fact/ that
operation is associative is a law about a monoids' behavior.

And if you think about it, we have monoids in Haskell. And we expect
that to hold. And then actually, if you think about it, that's pretty
much the purpose of a type class in Haskell: to say some type abides
by a set of laws and properties regardless of its definitive
structure. And it's also like an interface in Java, sort of: things
which implement that basic interface must abide by *some* rules of
nature that make it so.

My point is, the 

[Haskell-cafe] optimization of recursive functions

2013-02-13 Thread Andrew Polonsky
Hello,

Is there any difference in efficiency between these two functions, when
compiled with all optimizations?

map f [] = []
map f (a:as) = f a : map f as

and

map f x = map' x where
   map' [] = []
   map' (a:as) = f a : map' as
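
For clarity, both definitions compute the same results -- my question is
purely about the efficiency of the compiled code. A quick sanity check
(renaming the functions so both can coexist; note the second form is the
"static argument" style, where f is no longer passed on each recursive call):

```haskell
import Prelude hiding (map)

-- First definition: f is an argument of every recursive call.
map1 :: (a -> b) -> [a] -> [b]
map1 _ []     = []
map1 f (a:as) = f a : map1 f as

-- Second definition: f is captured once; the local worker recurses.
map2 :: (a -> b) -> [a] -> [b]
map2 f x = go x
  where
    go []     = []
    go (a:as) = f a : go as

main :: IO ()
main = print (map1 (+1) [1,2,3 :: Int] == map2 (+1) [1,2,3])  -- True
```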

Thanks,
Andrew
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem installing cabal-dev

2013-02-13 Thread David Turner

On 12/02/2013 10:59, JP Moresmau wrote:

Hello David, what I did is get cabal-dev from source (git clone
git://github.com/creswick/cabal-dev.git
http://github.com/creswick/cabal-dev.git). This build fine, the upper
bounds have been edited. Hopefully the new version will be released soon.


Thanks, good to hear.

With a bit of digging I've also managed to find and fix the upper bound 
that was causing my other dependency problem - priority-queue can depend 
on containers < 0.5 rather than < 0.4.


I've been in touch with the maintainer, James Cook, who says (a) he'll 
try and get round to releasing a new version with this upper bound 
relaxed, and (b) there are more specialised implementations of priority 
queues these days that will be more performant, so I should prefer those.


Cheers,


--
David Turner
Senior Consultant
Tracsis PLC

Tracsis PLC is a company registered in
England and Wales with number 5019106.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-13 Thread Francesco Mazzoli
At Wed, 13 Feb 2013 15:32:57 +0900 (JST),
Kazu Yamamoto (山本和彦) wrote:
 
 Francesco,
 
  I can confirm that 1.11.1 works.
 
 I think I fixed this problem.
 Would you try the master branch?
 
   https://github.com/kazu-yamamoto/ghc-mod

Hi Kazu,

Now I get another error:

   Error:<command line>: cannot satisfy -package wl-pprint

even if ‘wl-pprint’ is installed, and ‘cabal configure; cabal build’ runs fine.

Francesco

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Rustom Mody
On Wed, Feb 13, 2013 at 4:17 AM, Nehal Patel nehal.a...@gmail.com wrote:

 A few months ago I took the Haskell plunge, and all goes well... -- but I
 really want to understand the paradigms as fully as possible, and as it
 stands, I find myself with three or four questions for which I've yet to
 find suitable answers.  I've picked one to ask the cafe -- like my other
 questions, it's somewhat amorphous and ill posed -- so much thanks in
 advance for any thoughts and comments!

 
 Why isn't Program Derivation a first class citizen?
 ---

 (Hopefully the term program derivation is commonly understood?  I mean
 it in the sense usually used to describe the main motif of Bird's The
 Algebra of Programming.  Others seem to use it as well...)

 For me, it has come to mean solving problems in roughly the following way
 1) Defining the important functions and data types in a pedantic way so
 that the semantics are clear as possible to a human, but possibly
 inefficient (I use quotes because one of my other questions is about
 whether it is really possible to reason effectively about program
 performance in ghc/Haskell…)
 2) Work out some proofs of various properties of your functions and data
 types
 3) Use the results from step 2 to provide an alternate implementation with
 provably same semantics but possibly much better performance.

 To me it seems that so much of Haskell's design is dedicated to making
 steps 1-3 possible, and those steps represent for me (and I imagine many
 others) the thing that makes Haskell (and it cousins) so beautiful.

 And so my question is, that in 2013, why isn't this process a full fledged
 part of the language? I imagine I just don't know what I'm talking about,
 so correct me if I'm wrong, but this is how I understand the workflow used
 in practice with program derivation:  1) state the problem pedantically in
 code, 2) work out a bunch of proofs with pen and paper, 3) and then
 translate that back into code, perhaps leaving you with function_v1,
 function_v2, function_v3, etc   -- that seems fine for 1998, but why is it
 still being done this way?

 What I'm asking about might sound related to theorem provers, -- but if so
 ,I feel like what I'm thinking about is not so much the very difficult
 problem of automated proofs or even proof assistants, but the somewhat
 simpler task of proof verification. Concretely, here's a possibility of how
 I imagine   the workflow could be:

 ++ in code, pedantically setup the problem.
 ++ in code, state a theorem, and prove it -- this would require a revision
 to the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a
 -structured- proof syntax that is easy for humans to construct and
 understand -- in particular it would possible to easily reuse previous
 theorems and leave out various steps.  At compile time, the compiler would
 check that the proof was correct, or perhaps complain that it can't see how
 I went from step 3 to step 4, in which case I might have to add in another
 step (3b) to  help the compiler verify the proof.
 ++ afterwards, I would use my new theorems to create semantically
 identical variants of my original functions (again this process would be
 integrated into the language)

 While I feel like theorem provers offer some aspects of this workflow (in
 particular with the ability to export scala or haskell code when you are
 done), I feel that in practice it is only useful for modeling fairly
 technically things like protocols and crypto stuff -- whereas if this
 existed within haskell proper it would find much broader use and have
 greater impact.

 I haven't fully fleshed out all the various steps of what exactly it would
 mean to have program derivation be a first class citizen, but I'll pause
 here and followup if a conversation ensues.

 To me, it seems that something like this should be possible -- am i being
 naive? does it already exist?  have people tried and given up? is it just
 around the corner?  can you help me make sense of all of this?

 thanks! nehal



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



I am stating these things from somewhat foggy memory (don't have any
lambda-calculus texts handy) and so will welcome corrections from those who
know better…

In lambda-calculus, if reduction means beta-reduction, then equality is
decidable
Adding eta into the mix makes it undecidable
Haskell is basically a sugared beta-reducer
A prover must -- to be able to go anywhere -- be a beta-eta reducer.
Which is why a theorem-prover must be fundamentally more powerful and
correspondingly less decidable than an implemented programming language.

If a theorem-prover were as 'hands-free' as a programming language
Or if an implemented programming language could do the proving that a
theorem-prover could do, it would contradict the halting-problem/Rice
theorem.

-- 
http://www.the-magus.in

Re: [Haskell-cafe] Structured Graphs

2013-02-13 Thread Andres Löh
Hi John.

 What are the prospects for Haskell supporting Structured Graphs as defined
 here?
 http://www.cs.utexas.edu/~wcook/Drafts/2012/graphs.pdf

 Is there an interest by developers of GHC in doing this?

Could you be more specific about the kind of support you'd expect
from GHC? Basically all the code in the paper just works with current
GHC, no specific support is needed.

Cheers,
  Andres

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] performance question

2013-02-13 Thread Branimir Maksimovic

ByteString gains most of the improvement: a String must first be converted to
a CString internally in regex (this is a wrapper for libpcre), while a
ByteString need not be. libpcre is much faster than the POSIX engine (I guess
the POSIX one is also a wrapper). The interface for libpcre is the same as for
the POSIX one, so there is no real effort in replacing it.
 Date: Tue, 12 Feb 2013 20:32:01 -0800
 From: bri...@aracnet.com
 To: nicolasb...@gmail.com
 CC: bm...@hotmail.com; b...@redivi.com; haskell-cafe@haskell.org
 Subject: Re: [Haskell-cafe] performance question
 
 On Tue, 12 Feb 2013 15:57:37 -0700
 Nicolas Bock nicolasb...@gmail.com wrote:
 
   Here is a Haskell version that is faster than Python, almost as fast as 
   C++.
   You need to install the bytestring-lexing package for readDouble.
 
 
 I was hoping Branimir could comment on how the improvements were allocated.
 
 how much is due to text.regex.pcre (which looks to be a wrapper to libpcre) ?
 
 how much can be attributed to using Data.ByteString ?
 
 you have to admit, it's amazing how well a byte-compiled, _dynamically typed_ 
 interpreter can do against an actual native-code compiler.  Can't regex be 
 done effectively in Haskell?  Is it something that can't be done, or is it 
 just such minimal effort to link to pcre that it's not worth the trouble?
 
 
 Brian
 

  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-13 Thread 山本和彦
 Now I get another error:
 
   Error:<command line>: cannot satisfy -package wl-pprint
 
 even if ‘wl-pprint’ is installed, and ‘cabal configure; cabal build’ runs 
 fine.

It seems to me that you installed multiple GHCs and wl-pprint is not
installed for one of them. Is my guess correct?

--Kazu
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-13 Thread Francesco Mazzoli
At Wed, 13 Feb 2013 19:51:15 +0900 (JST),
Kazu Yamamoto (山本和彦) wrote:
 
  Now I get another error:
  
  Error:<command line>: cannot satisfy -package wl-pprint
  
  even if ‘wl-pprint’ is installed, and ‘cabal configure; cabal build’ runs 
  fine.
 
 It seems to me that you installed multiple GHCs and wl-pprint is not
 installed for one of them. Is my guess correct?

Nope :).  I have one ‘ghc’, and this is my ‘ghc-pkg list’:
http://hpaste.org/82293.  ‘ghci -package wl-pprint’ runs just fine.

Francesco

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Jon Fairbairn
Rustom Mody rustompm...@gmail.com writes:

 On Wed, Feb 13, 2013 at 4:17 AM, Nehal Patel nehal.a...@gmail.com wrote:
 
 Why isn't Program Derivation a first class citizen?
 ---
 I am stating these things from somewhat foggy memory (don't have any
 lambda-calculus texts handy) and so will welcome corrections from those who
 know better…

 In lambda-calculus, if reduction means beta-reduction, then equality is

semi

 decidable

 If a theorem-prover were as 'hands-free' as a programming language
 Or if an implemented programming language could do the proving that a
 theorem-prover could do, it would contradict the halting-problem/Rice
 theorem.

Just so, but I’ve long (I suggested something of the sort to a
PhD student in about 1991 but he wasn’t interested) thought
that, since automatic programme transformation isn’t going to
work (for the reason you give), having manually written
programme transformations as part of the code would be a useful
way of coding. RULES pragmas go a little way towards this, but
the idea would be that the language supply various known valid
transformation operators (akin to theorem proving such as in
HOL), and programmers would explicitly write transformation for
their functions that the compiler would apply.
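
For instance, today's RULES pragmas already let one state such a
transformation, just without any proof obligation. A minimal example in the
style of the GHC user's guide (GHC applies the rule under -O; nothing checks
that it is meaning-preserving -- that burden is entirely on the programmer):

```haskell
-- A rewrite rule fusing two maps into one.  GHC will rewrite occurrences
-- of the left-hand side to the right-hand side during optimisation, but
-- it never verifies that the two sides are semantically equal.
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

main :: IO ()
main = print (map (+1) (map (*2) [1,2,3 :: Int]))  -- prints [3,5,7]
```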

-- 
Jón Fairbairn jon.fairba...@cl.cam.ac.uk


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-13 Thread 山本和彦
 Nope :).  I have one ‘ghc’, and this is my ‘ghc-pkg list’:
 http://hpaste.org/82293.  ‘ghci -package wl-pprint’ runs just fine.

Uhhhm. Are you using sandbox?

--Kazu

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-13 Thread Francesco Mazzoli
At Wed, 13 Feb 2013 22:01:35 +0900 (JST),
Kazu Yamamoto (山本和彦) wrote:
 
  Nope :).  I have one ‘ghc’, and this is my ‘ghc-pkg list’:
  http://hpaste.org/82293.  ‘ghci -package wl-pprint’ runs just fine.
 
 Uhhhm. Are you using sandbox?

Actually it turns out that I have a ‘cabal-dev’ directory both in / and /src in
the project folder, which is weird - could it be that ghc-mod invoked it?

In any case, if I delete those, I get:

bitonic@clay ~/src/kant (git)-[master] % ghc-mod check -g-v 
src/Kant/Syntax.hs
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package array-0.4.0.0 ... linking ... done.
Loading package deepseq-1.3.0.0 ... linking ... done.
Loading package containers-0.4.2.1 ... linking ... done.
Loading package semigroups-0.8.4.1 ... linking ... done.
Loading package transformers-0.3.0.0 ... linking ... done.
Loading package comonad-3.0.0.2 ... linking ... done.
Loading package contravariant-0.2.0.2 ... linking ... done.
Loading package semigroupoids-3.0.0.1 ... linking ... done.
Loading package bifunctors-3.0 ... linking ... done.
Loading package prelude-extras-0.2 ... linking ... done.
Loading package bound-0.5.0.2 ... linking ... done.
Loading package prelude-extras-0.3 ... linking ... done.
Loading package bound-0.6 ... linking ... done.
Loading package bytestring-0.9.2.1 ... linking ... done.
Loading package filepath-1.3.0.0 ... linking ... done.
Loading package old-locale-1.0.0.4 ... linking ... done.
Loading package old-time-1.1.0.0 ... linking ... done.
Loading package unix-2.5.1.0 ... linking ... done.
Loading package directory-1.1.0.2 ... linking ... done.
Loading package terminfo-0.3.2.5 ... linking ... done.
Loading package haskeline-0.7.0.3 ... linking ... done.
Loading package mtl-2.1.1 ... linking ... done.
Loading package text-0.11.2.0 ... linking ... done.
Loading package parsec-3.1.2 ... linking ... done.
Loading package transformers-0.2.2.0 ... linking ... done.
Loading package mtl-2.0.1.0 ... linking ... done.
Loading package parsec-3.1.3 ... linking ... done.
Loading package pretty-1.1.1.0 ... linking ... done.
Loading package wl-pprint-1.1 ... linking ... done.
ghc-mod:0:0:Probably mutual module import occurred
[the previous line is repeated several dozen more times]

Re: [Haskell-cafe] optimization of recursive functions

2013-02-13 Thread Alp Mestanogullari
If a difference appears, I believe
http://blog.johantibell.com/2010/09/static-argument-transformation.html
would be involved. Also, the second map function could be inlined by GHC,
avoiding calling f through a pointer, because at the call site we know
what 'f' is (this is also mentioned in the blog post by Johan).
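For concreteness, the two shapes being compared can be sketched as
follows (a standalone sketch; the helper name `go` is mine):

```haskell
-- Version A passes f down on every recursive call; version B captures
-- f once in a local helper, the shape produced by the static argument
-- transformation.  With optimizations GHC can often specialize B (or
-- transform A into B) at call sites where f is known.
mapA :: (a -> b) -> [a] -> [b]
mapA _ []     = []
mapA f (a:as) = f a : mapA f as

mapB :: (a -> b) -> [a] -> [b]
mapB f = go
  where
    go []     = []
    go (a:as) = f a : go as

main :: IO ()
main = print (mapA (+1) [1..5 :: Int] == mapB (+1) [1..5])  -- prints True
```

Both definitions compute the same results; any difference is purely in
what the optimizer can do with them.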


On Wed, Feb 13, 2013 at 9:55 AM, Andrew Polonsky
andrew.polon...@gmail.com wrote:

 Hello,

 Is there any difference in efficiency between these two functions, when
 compiled with all optimizations?

 map f [] = []
 map f (a:as) = f a : map f as

 and

 map f x = map' x where
map' [] = []
map' (a:as) = f a : map' as

 Thanks,
 Andrew





-- 
Alp Mestanogullari


[Haskell-cafe] searching the attic for lambada

2013-02-13 Thread John Lask


I'm interested in resurrecting the idl generator from lambada:

http://www.dcs.gla.ac.uk/mail-www/haskell/msg02391.html

is the code out there in anyone's attic?




Re: [Haskell-cafe] performance question

2013-02-13 Thread Brandon Allbery
On Tue, Feb 12, 2013 at 11:32 PM, bri...@aracnet.com wrote:

 actually native code compiler.  Can't regex be done effectively in
 haskell?  Is it something that can't be done, or is it just such minimal
 effort to link to pcre that it's not worth the trouble?


PCRE is pretty heavily optimized.  POSIX regex engines generally rely on
vendor regex libraries which may not be well optimized; there is a native
Haskell implementation as well, but that one runs into a different issue,
namely a lack of interest (regexes are often seen as foreign to
Haskell-think, so there's little interest in making them work well; people
who *do* need them for some reason usually punt to pcre).

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: [Haskell-cafe] ghc-mod and cabal targets

2013-02-13 Thread Francesco Mazzoli
At Wed, 13 Feb 2013 13:24:48 +,
Francesco Mazzoli wrote:
 
 At Wed, 13 Feb 2013 22:01:35 +0900 (JST),
 Kazu Yamamoto (山本和彦) wrote:
  
   Nope :).  I have one ‘ghc’, and this is my ‘ghc-pkg list’:
   http://hpaste.org/82293.  ‘ghci -package wl-pprint’ runs just fine.
  
  Uhhhm. Are you using sandbox?
 
 Actually it turns out that I have a ‘cabal-dev’ directory both in / and /src 
 in
 the project folder, which is weird - could it be that ghc-mod invoked it?
 
 In any case, if I delete those, I get:
 
 [...]
 
 Francesco

...but if I try it in Emacs it seems to work.  So I guess that’s fine by me :).

Thanks,
Francesco



Re: [Haskell-cafe] performance question

2013-02-13 Thread Nicolas Bock
Since I have very little experience with Haskell and am not used to
Haskell-think yet, I don't quite understand your statement that regexes are
seen as foreign to Haskell-think. Could you elaborate? What would a more
native solution look like? From what I have learned so far, it seems to
me that Haskell is a lot about clear, concise, and well structured code. I
find regexes extremely compact and powerful, allowing for very concise
code, which should fit the bill perfectly, or shouldn't it?

Thanks,

nick



On Wed, Feb 13, 2013 at 8:12 AM, Brandon Allbery allber...@gmail.com wrote:

 On Tue, Feb 12, 2013 at 11:32 PM, bri...@aracnet.com wrote:

 actually native code compiler.  Can't regex be done effectively in
 haskell?  Is it something that can't be done, or is it just such minimal
 effort to link to pcre that it's not worth the trouble?


 PCRE is pretty heavily optimized.  POSIX regex engines generally rely on
 vendor regex libraries which may not be well optimized; there is a native
 Haskell implementation as well, but that one runs into a different issue,
 namely a lack of interest (regexes are often seen as foreign to
 Haskell-think, so there's little interest in making them work well; people
 who *do* need them for some reason usually punt to pcre).

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net



Re: [Haskell-cafe] performance question

2013-02-13 Thread David Thomas
One way in which regexps are foreign to Haskell-think is that, if they
break, they generally break at run-time.  This could be ameliorated with
Template Haskell, but a substantial portion of Haskell coders find that
a smell in itself.


On Wed, Feb 13, 2013 at 8:32 AM, Nicolas Bock nicolasb...@gmail.com wrote:

 Since I have very little experience with Haskell and am not used to
 Haskell-think yet, I don't quite understand your statement that regexes are
 seen as foreign to Haskell-think. Could you elaborate? What would a more
 native solution look like? From what I have learned so far, it seems to
 me that Haskell is a lot about clear, concise, and well structured code. I
 find regexes extremely compact and powerful, allowing for very concise
 code, which should fit the bill perfectly, or shouldn't it?

 Thanks,

 nick



 On Wed, Feb 13, 2013 at 8:12 AM, Brandon Allbery allber...@gmail.com wrote:

 On Tue, Feb 12, 2013 at 11:32 PM, bri...@aracnet.com wrote:

 actually native code compiler.  Can't regex be done effectively in
 haskell?  Is it something that can't be done, or is it just such minimal
 effort to link to pcre that it's not worth the trouble?


 PCRE is pretty heavily optimized.  POSIX regex engines generally rely on
 vendor regex libraries which may not be well optimized; there is a native
 Haskell implementation as well, but that one runs into a different issue,
 namely a lack of interest (regexes are often seen as foreign to
 Haskell-think, so there's little interest in making them work well; people
 who *do* need them for some reason usually punt to pcre).

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net







Re: [Haskell-cafe] performance question

2013-02-13 Thread Nicolas Trangez
On Wed, 2013-02-13 at 08:39 -0800, David Thomas wrote:
 One way in which regexps are foreign to Haskell-think is that, if
 they
 break, they generally break at run-time.  This could be ameliorated
 with
 template haskell

Care to elaborate on the "ameliorated with Template Haskell" part? I
figure regexes would be mostly used to parse some runtime-provided
string, so how could compile-time TH provide any help?

Nicolas





[Haskell-cafe] Free lunch with GADTs

2013-02-13 Thread Tristan Ravitch
Hi cafe,

I'm playing around with GADTs and was hoping they would let me have my
cake and eat it too.  I am trying to use GADTs to tag a type with some
extra information.  This information is very useful in a few cases,
but the more common case is that I just have a list of items without
the tags.  Ideally I'll be able to recover the tags from the common
case via pattern matching.  What I have right now is:



{-# LANGUAGE GADTs, ExistentialQuantification, EmptyDataDecls, RankNTypes, 
LiberalTypeSynonyms #-}
data Tag1
data Tag2

data T tag where
  T1 :: Int -> T Tag1
  T2 :: Double -> T Tag2

f1 :: [T t] -> Int
f1 = foldr worker 0
  where
worker (T1 i) a = i+a
worker _ a = a

f2 :: [T Tag1] -> Int
f2 = foldr worker 0
  where
worker :: T Tag1 -> Int -> Int
worker (T1 i) a = a + i



In f2 I can work with just values with one tag, but f1 is also
important to me (working with the untagged values).  f1 type checks
but I do not know how to make a list [T t].  If I try the naive thing
in ghci, it understandably tells me:

 let l = [T1 5, T2 6.0]

interactive:6:16:
Couldn't match type `Tag2' with `Tag1'
Expected type: T Tag1
  Actual type: T Tag2
In the return type of a call of `T2'
In the expression: T2 6.0
In the expression: [T1 5, T2 6.0]


Fair enough.  Is there any magical way to make this possible?  I am
hoping to avoid making a separate existential wrapper around my type.
I know that approach works, but that would make the more common case
much less convenient.  I also remember trying to use a type synonym
with -XLiberalTypeSynonyms:

type SimpleT = forall tag . T tag

I managed to basically get that working with another variant of my
list processing function, but it required -XImpredicativeTypes, which
apparently did very unfortunate things to type inference.
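(For reference, the separate existential wrapper mentioned above would
look something like the sketch below; `SomeT` and `sumInts` are names I
made up for illustration:)

```haskell
{-# LANGUAGE GADTs, ExistentialQuantification, EmptyDataDecls #-}
data Tag1
data Tag2

data T tag where
  T1 :: Int    -> T Tag1
  T2 :: Double -> T Tag2

-- The existential wrapper the question hopes to avoid: it erases the
-- tag, so differently tagged values can share one list, at the cost of
-- an extra constructor at every use site.
data SomeT = forall tag. SomeT (T tag)

sumInts :: [SomeT] -> Int
sumInts = foldr worker 0
  where
    worker (SomeT (T1 i)) a = i + a
    worker _              a = a

main :: IO ()
main = print (sumInts [SomeT (T1 5), SomeT (T2 6.0)])  -- prints 5
```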


Thanks




Re: [Haskell-cafe] performance question

2013-02-13 Thread MigMit
Well, these runtime errors are actually type errors. Regexps are actually a DSL,
which is not embedded in Haskell. But it could be. Strings won't work for that,
but something like this would:

filter (match $ "a" <> many anyChar <> ".txt") filenames

and this certainly can be produced by TH like this:

filter (match $(regexp "a.*\\.txt")) filenames

On Feb 13, 2013, at 8:43 PM, Nicolas Trangez nico...@incubaid.com wrote:

 On Wed, 2013-02-13 at 08:39 -0800, David Thomas wrote:
 One way in which regexps are foreign to Haskell-think is that, if
 they
 break, they generally break at run-time.  This could be ameliorated
 with
 template haskell
 
 Care to elaborate on the ameliorate using TH part? I figure regexes
 would be mostly used to parse some runtime-provided string, so how could
 compile-TH provide any help?
 
 Nicolas
 
 
 




Re: [Haskell-cafe] performance question

2013-02-13 Thread Brandon Allbery
On Wed, Feb 13, 2013 at 11:32 AM, Nicolas Bock nicolasb...@gmail.com wrote:

 Since I have very little experience with Haskell and am not used to
 Haskell-think yet, I don't quite understand your statement that regexes are
 seen as foreign to Haskell-think. Could you elaborate? What would a more
 native solution look like? From what I have learned so far, it seems to
 me that Haskell is a lot about clear,


The native solution is a parser like parsec/attoparsec.  The problem with
regexes is that you can't at compile time verify that, for example, you
have as many matching groups in the regex as the code using it expects, nor
does an optional matching group behave as a Maybe like it should; nor are
there nice ways to recover.  A parser gives you full control and better
compile time checking, and is generally recommended.
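
To illustrate the combinator style, here is a minimal sketch using
ReadP from base as a stand-in for parsec/attoparsec (the
`matrix(i,j) = v` line format comes from earlier in this thread; the
function names are my own illustration):

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

-- Combinator-style parser for lines like "matrix(3,4) = 1.5e-2".
-- Each capture comes out as a typed value, not a string group,
-- so a mismatch between pattern and consuming code is a type error.
entry :: ReadP (Int, Int, Double)
entry = do
  _ <- string "matrix("
  i <- int
  _ <- char ','
  j <- int
  _ <- string ") = "
  v <- readS_to_P reads        -- reuse the Read instance for Double
  return (i, j, v)
  where
    int = fmap read (munch1 isDigit)

parseEntry :: String -> Maybe (Int, Int, Double)
parseEntry s = case readP_to_S (entry <* eof) s of
  [(r, "")] -> Just r
  _         -> Nothing

main :: IO ()
main = print (parseEntry "matrix(3,4) = 1.5e-2")
```

An optional or failing match surfaces as a Maybe, which is the behavior
the paragraph above asks for.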

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: [Haskell-cafe] performance question

2013-02-13 Thread David Thomas
I don't think you can do much about "fails to match the input string" -
indeed, that's often desired behavior...  and "matches the wrong thing" you
can only catch with testing.

The simplest place template haskell could help with is when the expression
isn't a valid expression in the first place, and will fail to compile.  If
you're just validating, I don't think you can do better; in order to
improve your confidence of correctness, your only option is testing against
a set of positives and negatives.

If you're capturing, you might be able to do a little better, if you are
able to get some of that info into the types (number of capture groups
expected, for instance) - then, if your code expects to deal with a
different number of captured pieces than your pattern represents, it can be
caught at compile time.

If you're capturing strings that you intend to convert to other types, and
can decorate regexp components with the type they're going to capture
(which can then be quickchecked - certainly a pattern should never match
and then fail to read, &c), and if you are able to propagate this info
during composition, you might actually be able to catch a good chunk of
errors.

Note that much of this works quite a bit different than most existing
regexp library APIs, where you pass a bare string and captures wind up in
some kind of list, which I expect is much of the reason no one's done it
yet (so far as I'm aware).


On Wed, Feb 13, 2013 at 8:43 AM, Nicolas Trangez nico...@incubaid.com wrote:

 On Wed, 2013-02-13 at 08:39 -0800, David Thomas wrote:
  One way in which regexps are foreign to Haskell-think is that, if
  they
  break, they generally break at run-time.  This could be ameliorated
  with
  template haskell

 Care to elaborate on the ameliorate using TH part? I figure regexes
 would be mostly used to parse some runtime-provided string, so how could
 compile-TH provide any help?

 Nicolas






Re: [Haskell-cafe] performance question

2013-02-13 Thread David Thomas
The fact that parsec and attoparsec exist and can be pressed into service
with reasonable performance (I think?) on tasks for which regexps are
suitable is probably another big part of the reason no one's done it yet.
 I expect much of the plumbing would wind up looking a lot like those,
actually.


On Wed, Feb 13, 2013 at 9:43 AM, David Thomas davidleotho...@gmail.com wrote:

 I don't think you can do much about "fails to match the input string" -
 indeed, that's often desired behavior...  and "matches the wrong thing" you
 can only catch with testing.

 The simplest place template haskell could help with is when the expression
 isn't a valid expression in the first place, and will fail to compile.  If
 you're just validating, I don't think you can do better; in order to
 improve your confidence of correctness, your only option is testing against
 a set of positives and negatives.

 If you're capturing, you might be able to do a little better, if you are
 able to get some of that info into the types (number of capture groups
 expected, for instance) - then, if your code expects to deal with a
 different number of captured pieces than your pattern represents, it can be
 caught at compile time.

 If you're capturing strings that you intend to convert to other types, and
 can decorate regexp components with the type they're going to capture
 (which can then be quickchecked - certainly a pattern should never match
 and then fail to read, &c), and if you are able to propagate this info
 during composition, you might actually be able to catch a good chunk of
 errors.

 Note that much of this works quite a bit different than most existing
 regexp library APIs, where you pass a bare string and captures wind up in
 some kind of list, which I expect is much of the reason no one's done it
 yet (so far as I'm aware).


 On Wed, Feb 13, 2013 at 8:43 AM, Nicolas Trangez nico...@incubaid.com wrote:

 On Wed, 2013-02-13 at 08:39 -0800, David Thomas wrote:
  One way in which regexps are foreign to Haskell-think is that, if
  they
  break, they generally break at run-time.  This could be ameliorated
  with
  template haskell

 Care to elaborate on the ameliorate using TH part? I figure regexes
 would be mostly used to parse some runtime-provided string, so how could
 compile-TH provide any help?

 Nicolas








Re: [Haskell-cafe] performance question

2013-02-13 Thread Brandon Allbery
On Wed, Feb 13, 2013 at 12:46 PM, David Thomas davidleotho...@gmail.com wrote:

 The fact that parsec and attoparsec exist and can be pressed into service
 with reasonable performance (I think?) on tasks for which regexps are
 suitable is probably another big part of the reason no one's done it yet.
  I expect much of the plumbing would wind up looking a lot like those,
 actually.


When I started out with Haskell, one of my early thoughts was about
designing a DSL for Icon-style pattern matching; I dropped it when I
realized I was reinventing (almost identically, at least for its lower
level combinators) Parsec.  Nothing really to be gained except from a
tutelary standpoint.  And the mapping from Icon patterns to regex patterns
is pretty much mechanical if you phrase it so you aren't executing code in
the middle.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: [Haskell-cafe] searching the attic for lambada

2013-02-13 Thread Stephen Tetley
Hi John

I have a copy of version 0.1 and a related PDF - I'll email them to you
as attachments offlist.

On 13 February 2013 14:16, John Lask jvl...@hotmail.com wrote:

 I'm interested in resurrecting the idl generator from lambada:

 http://www.dcs.gla.ac.uk/mail-www/haskell/msg02391.html

 is the code out there in anyone's attic?





Re: [Haskell-cafe] performance question

2013-02-13 Thread Aleksey Khudyakov

On 10.02.2013 02:30, Nicolas Bock wrote:

Hi Aleksey,

could you show me how I would use ByteString? I can't get the script to
compile. It's complaining that:

No instance for (RegexContext
Regex Data.ByteString.ByteString
(AllTextSubmatches [] a0))

which is too cryptic for me. Is it not able to form a regular expression
with a ByteString argument? From the documentation of Text.Regex.Posix
it seems that it should be. Maybe it's because I am trying to read
(r!!1) :: Double which I am having issues with also. Is (r!!1) a
ByteString? And if so, how would I convert that to a Double?

It's an error message from the regex library you use. I can't say what
exactly it means, I never used it. But most likely it cannot work with
bytestrings.

Most other languages rely on regexps as the go-to tool for parsing. In
Haskell the main parsing tools are parser combinators such as parsec[1]
or attoparsec[2]. Parsec is more generic and attoparsec is much faster.

The attached program uses attoparsec for parsing; it's about 2 times
slower than the C++ example posted in the thread.


[1] http://hackage.haskell.org/package/parsec
[2] http://hackage.haskell.org/package/attoparsec
{-# LANGUAGE OverloadedStrings #-}
import Control.Applicative
import Data.Histogram.Fill
import Data.Histogram  (Histogram)

import Data.Attoparsec.Text.Lazy (parse, Result(..))
import Data.Attoparsec.Text hiding (parse, Done, Fail)

import qualified Data.Text.Lazy    as T
import qualified Data.Text.Lazy.IO as T
import Prelude hiding (takeWhile)


hb :: HBuilder Double (Histogram LogBinD Int)
hb = forceInt -<< mkSimple (logBinDN 1e-8 10 10)

streamBS :: T.Text -> [Double]
streamBS bs
  | T.null bs = []
  | otherwise = case parse go bs of
      Done rest x  -> x : streamBS rest
      Fail _ cxt e -> error $ e ++ " " ++ show cxt
  where
    num = decimal :: Parser Int
    go =  string "matrix("
       *> num *> char ',' *> num *> char ')'
       *> takeWhile (== ' ') *> char '=' *> takeWhile (== ' ')
       *> double <* endOfLine

main :: IO ()
main = print . fillBuilder hb . streamBS =<< T.getContents







Re: [Haskell-cafe] performance question

2013-02-13 Thread Aleksey Khudyakov

On 13.02.2013 21:41, Brandon Allbery wrote:

On Wed, Feb 13, 2013 at 11:32 AM, Nicolas Bock nicolasb...@gmail.com wrote:

Since I have very little experience with Haskell and am not used to
Haskell-think yet, I don't quite understand your statement that
regexes are seen as foreign to Haskell-think. Could you elaborate?
What would a more native solution look like? From what I have
learned so far, it seems to me that Haskell is a lot about clear,


The native solution is a parser like parsec/attoparsec.  The problem
with regexes is that you can't at compile time verify that, for example,
you have as many matching groups in the regex as the code using it
expects, nor does an optional matching group behave as a Maybe like it
should; nor are there nice ways to recover.  A parser gives you full
control and better compile time checking, and is generally recommended.

Regexps only have this problem if they are compiled from strings. Nothing
prevents building them from combinators. regex-applicative[1] uses
this approach and is quite nice to use.


[1] http://hackage.haskell.org/package/regex-applicative



[Haskell-cafe] How to quantify the overhead?

2013-02-13 Thread Daryoush Mehrtash
Before I write the code I'd like to be able to quantify my expected
result. But I am having a hard time quantifying my expected result
for alternative approaches in Haskell. I would appreciate any comments
from experienced Haskellers on this problem:

Suppose I have a big list of integers and I like to find the first two that
add up to a number, say 10.

One approach is to put the numbers in a map as I read them, and at each
step look for the 10-x of the number before going on to the next value
in the list.

Alternatively, I am trying to see if it makes sense to convert the list
to a series of computations, each looking for the 10-x in the rest of
the list (running them in breadth-first search).  I'd like to find out
the sanity of using the Applicative/Alternative class's (<|>) operator
to set up the computation.  So each element of the list (say x) is
recursively converted to a computation to SearchFor (10 - x) that is
(<|>)-ed with all previous computations, until one of the computations
returns the pair that adds up to 10 or all computations reach the end
of the list.

Does the approach make any sense?

What kind of overhead should I expect if I convert the numbers to a
computation on the list?  In one case I would have a number in a map and in
the other I would have a number in a computation over a list.  How do I
quantify the overheads?
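
(For what it's worth, the first approach can be sketched directly;
`firstPairSumming` is a name I made up, and I use a Set rather than a
Map since only membership of previously seen numbers is needed:)

```haskell
import qualified Data.Set as Set

-- The map-based approach from the question: walk the list once,
-- remembering the numbers seen so far, and for each new x check
-- whether (target - x) has already appeared.
firstPairSumming :: Int -> [Int] -> Maybe (Int, Int)
firstPairSumming target = go Set.empty
  where
    go _    []     = Nothing
    go seen (x:xs)
      | Set.member (target - x) seen = Just (target - x, x)
      | otherwise                    = go (Set.insert x seen) xs

main :: IO ()
main = print (firstPairSumming 10 [3, 5, 8, 7, 2])  -- prints Just (3,7)
```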


Thanks,

Daryoush


Re: [Haskell-cafe] performance question

2013-02-13 Thread ok
 On 13.02.2013 21:41, Brandon Allbery wrote:
 The native solution is a parser like parsec/attoparsec.

Aleksey Khudyakov alexey.sklad...@gmail.com replied

 Regexps only have this problem if they are compiled from strings. Nothing
 prevents building them from combinators. regex-applicative[1] uses
 this approach and is quite nice to use.

 [1] http://hackage.haskell.org/package/regex-applicative

That _is_ a nice package, but
  it _is_ 'a parser like parsec/attoparsec'.






Re: [Haskell-cafe] performance question

2013-02-13 Thread Brandon Allbery
On Wed, Feb 13, 2013 at 5:45 PM, o...@cs.otago.ac.nz wrote:

  On 13.02.2013 21:41, Brandon Allbery wrote:
  The native solution is a parser like parsec/attoparsec.

 Aleksey Khudyakov alexey.sklad...@gmail.com replied

  Regexps only have this problem if they are compiled from strings. Nothing
  prevents building them from combinators. regex-applicative[1] uses
  this approach and is quite nice to use.
 
  [1] http://hackage.haskell.org/package/regex-applicative

 That _is_ a nice package, but
   it _is_ 'a parser like parsec/attoparsec'.


Well, yes; it's a case in point.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


[Haskell-cafe] Haskell Weekly News: Issue 258

2013-02-13 Thread Daniel Santa Cruz
Welcome to issue 258 of the HWN, an issue covering crowd-sourced bits
of information about Haskell from around the web. This issue covers the
week of February 03 to 09, 2013.

Quotes of the Week

   * shachaf: The trouble with the lens rabbit hole is that there are a
   few of us here at the bottom, digging.

   * monochrom: a monad is like drinking water from a bottle without
  human mouth touching bottle mouth

Top Reddit Stories

   * GHCi 7.4.2 is finally working on ARM
 Domain: luzhuomi.blogspot.com, Score: 49, Comments: 15
 On Reddit: [1] http://goo.gl/yBh9o
 Original: [2] http://goo.gl/Kgx9R

   * Implementation of a Java Just In Time Compiler in Haskell
 Domain: blog.wien.tomnetworks.com, Score: 41, Comments: 15
 On Reddit: [3] http://goo.gl/BjpX3
 Original: [4] http://goo.gl/f1Qrh

   * I/O is pure
 Domain: chris-taylor.github.com, Score: 32, Comments: 41
 On Reddit: [5] http://goo.gl/pYMXz
 Original: [6] http://goo.gl/9xeKd

   * Introduction to Haskell, Lecture 4 is Live (Pattern matching and
Guards)
 Domain: shuklan.com, Score: 28, Comments: 7
 On Reddit: [7] http://goo.gl/wvFO6
 Original: [8] http://goo.gl/sKF5C

   * What makes comonads important?
 Domain: self.haskell, Score: 20, Comments: 41
 On Reddit: [9] http://goo.gl/CqS7Z
 Original: [10] http://goo.gl/CqS7Z

   * OdHac, the Haskell Hackathon in Odessa
 Domain: haskell.org, Score: 18, Comments: 2
 On Reddit: [11] http://goo.gl/ZumYI
 Original: [12] http://goo.gl/Pfuaq

   * NYC Haskell Meetup Video: Building a Link Shortener with Haskell and
Snap
 Domain: vimeo.com, Score: 18, Comments: 3
 On Reddit: [13] http://goo.gl/j42PG
 Original: [14] http://goo.gl/RRgsl

   * Evaluation-State Assertions in Haskell
 Domain: joachim-breitner.de, Score: 18, Comments: 6
 On Reddit: [15] http://goo.gl/lMMT1
 Original: [16] http://goo.gl/7eLI5

   * A post about computational classical type theory
 Domain: queuea9.wordpress.com, Score: 17, Comments: 14
 On Reddit: [17] http://goo.gl/r8DOO
 Original: [18] http://goo.gl/UIxIY

   * parellella has a forum board for haskell. anyone working in this
direction?
 Domain: forums.parallella.org, Score: 13, Comments: 5
 On Reddit: [19] http://goo.gl/BSfRd
 Original: [20] http://goo.gl/qLnG0

   * Haskell Lectures - University of Virginia (Nishant Shukla)
 Domain: shuklan.com, Score: 13, Comments: 3
 On Reddit: [21] http://goo.gl/CxTPI
 Original: [22] http://goo.gl/JkhXz

   * NYC Haskell Meetup Video: Coding and Reasoning with Purity, Strong Types and Monads
 Domain: vimeo.com, Score: 13, Comments: 2
 On Reddit: [23] http://goo.gl/MmGZb
 Original: [24] http://goo.gl/Vj63w

   * Introducing fixed-points via Haskell
 Domain: apfelmus.nfshost.com, Score: 9, Comments: 34
 On Reddit: [25] http://goo.gl/UuCc1
 Original: [26] http://goo.gl/SCVJc

Top StackOverflow Questions

   * Is spoon unsafe in Haskell?
 votes: 19, answers: 2
 Read on SO: [27] http://goo.gl/6RDku

   * What general structure does this type have?
 votes: 17, answers: 4
 Read on SO: [28] http://goo.gl/2GV0V

   * Why aren't Nums Ords in Haskell?
 votes: 14, answers: 2
 Read on SO: [29] http://goo.gl/8Hrkn

   * Why does cabal install reinstall packages already in .cabal/lib
 votes: 12, answers: 1
 Read on SO: [30] http://goo.gl/yMXCh

   * Free Monad of a Monad
 votes: 12, answers: 2
 Read on SO: [31] http://goo.gl/kpV2q

   * Haskell pattern match “diverge” and ⊥
 votes: 11, answers: 2
 Read on SO: [32] http://goo.gl/VeRUJ

   * How can one make a private copy of Hackage
 votes: 11, answers: 2
 Read on SO: [33] http://goo.gl/ZNAwL

   * Recursion Schemes in Agda
 votes: 9, answers: 1
 Read on SO: [34] http://goo.gl/xtmqr

Until next time,
+Daniel Santa Cruz

References

   1. http://luzhuomi.blogspot.com/2013/02/ghci-742-is-finally-working-on-arm.html
   2. http://www.reddit.com/r/haskell/comments/17zxgj/ghci_742_is_finally_working_on_arm/
   3. http://blog.wien.tomnetworks.com/2013/02/06/thesis/
   4. http://www.reddit.com/r/haskell/comments/17zqgr/implementation_of_a_java_just_in_time_compiler_in/
   5. http://chris-taylor.github.com/
   6. http://www.reddit.com/r/haskell/comments/1874at/io_is_pure/
   7. http://shuklan.com/haskell/lec04.html
   8. http://www.reddit.com/r/haskell/comments/17y18q/introduction_to_haskell_lecture_4_is_live_pattern/
   9. http://www.reddit.com/r/haskell/comments/185k0s/what_makes_comonads_important/
  10. http://www.reddit.com/r/haskell/comments/185k0s/what_makes_comonads_important/
  11. http://www.haskell.org/pipermail/haskell-cafe/2013-February/106216.html
  12. http://www.reddit.com/r/haskell/comments/17xxyf/odhac_the_haskell_hackathon_in_odessa/
  13. http://vimeo.com/59109358
  14. http://www.reddit.com/r/haskell/comments/185la5/nyc_haskell_meetup_video_building_a_link/
  15.

Re: [Haskell-cafe] optimization of recursive functions

2013-02-13 Thread wren ng thornton

On 2/13/13 3:55 AM, Andrew Polonsky wrote:

Hello,

Is there any difference in efficiency between these two functions, when
compiled with all optimizations?

map f [] = []
map f (a:as) = f a : map f as

and

map f x = map' x
  where
    map' [] = []
    map' (a:as) = f a : map' as



As Alp says, there is some difference, since the second definition closes 
over f. Whether that closure is an optimization or a pessimization is a 
bit fragile. When multiple variables are closed over, the closure is 
faster than passing them along on every recursive call. But, at least in 
the past, there was a transition point: passing arguments was faster when 
there was only one of them. I haven't run benchmarks on newer versions 
of GHC to see whether that's still true.


The second question is whether these definitions get inlined at their use 
sites. If they do, then the second has a definite advantage, since we 
will know f statically and can use a direct call rather than an indirect 
call (unless the static f is a locally bound variable and therefore still 
unknown). Turning indirect calls into direct ones is one of the major 
benefits of inlining. However, newer versions of GHC base the decision 
about whether to inline on how many arguments the definition takes. Thus, 
you'd get more inlining if you used the definition:


map f = map'
  where
    map' [] = []
    map' (a:as) = f a : map' as

since this allows inlining when map is only applied to f, rather than 
applied to both f and x. Sometimes this extra inlining is good (for the 
reasons mentioned above), but sometimes it's bad because it can lead to 
code bloat. Remember, the major bottleneck of computing these days is 
access to disk and memory. So increasing the size of the compiled file 
can lead to slowdown because you end up blowing the instruction cache 
and needing to load from memory more often. (The exact details of when 
this result occurs are, of course, extremely dependent on the particular 
program and what's causing the code bloat.)


In short, the worker/wrapper transform often improves performance, but 
for this specific example it's hard to say which is faster. Just 
benchmark it! Criterion is an excellent tool for answering questions 
like these, though it doesn't quite get at the inlining details. I 
wouldn't worry too much about it unless (a) profiling indicated that 
this function was a problem, or (b) I had a large client program that I 
could test performance with. Of course, if you're concerned about 
performance, another thing to look at here is list-fusion which will 
allow the compiler to remove intermediate lists.
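To make the comparison concrete, here is a sketch of the two shapes side by side (mapTopLevel and mapWorker are illustrative names, not from the original post). In practice you would time them with Criterion's bench/nf rather than eyeball them; the main below only checks that they agree.

```haskell
-- Plain recursion: f is passed along on every recursive call.
mapTopLevel :: (a -> b) -> [a] -> [b]
mapTopLevel _ []     = []
mapTopLevel f (a:as) = f a : mapTopLevel f as

-- Worker/wrapper: the local worker closes over f instead of
-- threading it through the recursion.
mapWorker :: (a -> b) -> [a] -> [b]
mapWorker f = go
  where
    go []     = []
    go (a:as) = f a : go as

-- The two agree; only the compiled calling conventions differ.
-- With the criterion package you would time them along the lines of
--   defaultMain [bench "worker" (nf (mapWorker (+1)) [1..10000 :: Int])]
main :: IO ()
main = print (mapTopLevel (+1) [1..10] == mapWorker (+1) [1..10 :: Int])
```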


--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] performance question

2013-02-13 Thread wren ng thornton

On 2/13/13 11:32 AM, Nicolas Bock wrote:

Since I have very little experience with Haskell and am not used to
Haskell-think yet, I don't quite understand your statement that regexes are
seen as foreign to Haskell-think. Could you elaborate? What would a more
native solution look like? From what I have learned so far, it seems to
me that Haskell is a lot about clear, concise, and well structured code. I
find regexes extremely compact and powerful, allowing for very concise
code, which should fit the bill perfectly, or shouldn't it?


Regexes are powerful and concise for recognizing regular languages. They 
are not, however, very good for *parsing* regular languages; nor can 
they handle non-regular languages (unless you're relying on the badness 
of pcre). In other languages people press regexes into service for 
parsing because the alternative is using an external DSL like lex/yacc, 
javaCC, etc. Whereas, in Haskell, we have powerful and concise tools for 
parsing context-free languages and beyond (e.g., parsec, attoparsec).
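As a small illustration of parsing beyond regular languages, here is a recogniser for balanced parentheses — a context-free language no regex can express — using only Text.ParserCombinators.ReadP from base. This is a sketch standing in for parsec/attoparsec, which offer the same combinator style with better error reporting and performance.

```haskell
import Text.ParserCombinators.ReadP

-- Balanced parentheses: zero or more groups, each of which may
-- recursively contain more balanced groups.
balanced :: ReadP ()
balanced = skipMany (char '(' *> balanced <* char ')')

-- A string is accepted iff some parse consumes the entire input.
parses :: String -> Bool
parses s = not (null [ () | ((), "") <- readP_to_S (balanced <* eof) s ])

main :: IO ()
main = print (map parses ["(()())", "(()", ""])
```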


--
Live well,
~wren



Re: [Haskell-cafe] performance question

2013-02-13 Thread Erik de Castro Lopo
wren ng thornton wrote:

 Regexes are powerful and concise for recognizing regular languages. They 
 are not, however, very good for *parsing* regular languages; nor can 
 they handle non-regular languages (unless you're relying on the badness 
 of pcre). In other languages people press regexes into service for 
 parsing because the alternative is using an external DSL like lex/yacc, 
 javaCC, etc. Whereas, in Haskell, we have powerful and concise tools for 
 parsing context-free languages and beyond (e.g., parsec, attoparsec).

This cannot be emphasized heavily enough.

Once you have learnt how to use one or more of these parsec-style
libraries, they will become your main tool for parsing everything from
complex input languages like Haskell itself all the way down to
relatively simple config files.

Parsec-style parsers are built up out of small, composable (and, more
importantly, reusable) combinators that are easier to read and easier
to maintain than anything but the most trivial regex.
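To make the config-file case concrete, here is a sketch using Text.ParserCombinators.ReadP from base (parsec or attoparsec would look almost identical); the key=value format and all names are invented for illustration.

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isAlphaNum)

-- Each combinator handles one concern and is reusable elsewhere.
ident :: ReadP String
ident = munch1 (\c -> isAlphaNum c || c == '_')

-- One "key = value" line: identifier, '=', then the rest of the line.
keyValue :: ReadP (String, String)
keyValue = (,) <$> (ident <* skipSpaces <* char '=' <* skipSpaces)
               <*> munch (/= '\n')

-- A whole config: newline-separated key/value pairs, to end of input.
config :: ReadP [(String, String)]
config = (keyValue `sepBy` char '\n') <* eof

parseConfig :: String -> Maybe [(String, String)]
parseConfig s = case [ kvs | (kvs, "") <- readP_to_S config s ] of
                  (kvs:_) -> Just kvs
                  []      -> Nothing

main :: IO ()
main = print (parseConfig "host = localhost\nport = 8080")
```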

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/



[Haskell-cafe] ANNOUNCE: graphviz-2999.16.0.0

2013-02-13 Thread Ivan Lazar Miljenovic
I'm pleased to announce that I've finally tracked down all the pesky
little bugs that prevented me from releasing version 2999.16.0.0 of my
graphviz library, which helps you use the Graphviz suite of tools to
visualise graphs.

The important changes are:

* Add support for Graphviz-2.30.0:

- New attributes:

+ `Area`
+ `Head_LP`
+ `LayerListSep`
+ `LayerSelect`
+ `Tail_LP`
+ `XLP`

- `BgColor`, `Color` and `FillColor` now take a list of colors
  with optional weightings.

- Layer handling now supports layer sub-ranges.

- Added the voronoi-based option to `Overlap`.

- Added the `Striped` and `Wedged` styles.

* Updates to attributes and their values:

- The following attributes have had their values changed to better
  reflect what they represent:

+ `FontPath` takes a `Path` value.

+ `Layout` takes a `GraphvizCommand` (and thus
  `GraphvizCommand` has been moved to
  `Data.GraphViz.Attributes.Complete`).

- Added `MDS` to `Model` (which had somehow been overlooked).

- Various attributes now have defaults added/updated/removed if
  wrong.

- Removed the following deprecated attributes:

+ `ShapeFile`

+ `Z`

* Now any value that has a defined default can be parsed when the Dot
  code just has `attribute=` (which `dot -Tcanon` is fond of doing
  to reset attributes).

* `GraphvizCommand` now defines `Sfdp` (which had somehow been
  overlooked up till now).

* The `canonicalise` and related functions have been re-written;
  whilst there might be some cases where their behaviour does not
  fully match how `dot -Tcanon` and `tred` behave (due to the
  interaction of various attributes), the new implementation is much
  more sane.

* Use temporary files rather than pipes when running dot, etc.

  Makes it more portable, and also avoids issues where dot, etc. use
  100% CPU when a graph is passed in via stdin.

  Original patch by Henning Thielemann.


-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com
