Re: [Haskell-cafe] A question about laziness and performance in document serialization.

2013-08-22 Thread Roman Cheplyaka
* Kyle Hanson hanoo...@gmail.com [2013-08-20 18:23:48-0700]
 So I am not entirely clear on how to optimize for performance for lazy
 bytestrings.
 
 Currently I have a (Lazy) Map that contains large BSON values (more than
 1mb when serialized each). I can serialize BSON documents to Lazy
 ByteStrings using Data.Binary.runPut. I then write this bytestring to a
 socket using Network.Socket.ByteString.Lazy.
 
 My question is this: if the Map object doesn't change (no updates) when it
 serializes the same document to the socket 2x in a row, does it re-evaluate
 the whole BSON value and convert it to a bytestring each time?

Yes.

 Let's say I wanted to have a cache of bytestrings, so I have another Map
 object that holds the serialized bytestrings, which I populate every
 time the original BSON Map changes. Should the map be strict or lazy?

This is the wrong question. The right question is: do you want the
values to be strict (evaluated) or lazy (kept unevaluated until required)?

If you want values to be lazy, then you have to use the lazy Map.

If you want values to be strict, then you may either use the strict Map,
or still use the lazy Map but make sure that the values are evaluated
when you place them in the map. Using the strict Map is probably a
better idea, but the lazy Map lets you have finer control over what is
lazy and what is forced (should you need it).

Note that the lazy bytestring is just a lazy list of strict bytestrings.
Even placing it in the strict map wouldn't force its evaluation.

 Should the bytestrings it stores be strict or lazy?

For a cache, it makes sense to store strict bytestrings (unless they are
so large that it may be hard to allocate that much of contiguous space).

Lazy bytestrings are useful for streaming, when you use a chunk and then
discard it.

Using strict bytestrings doesn't imply that you want to store them
evaluated. Depending on your circumstances, it may be a good idea to
store strict bytestrings lazily, so that they do not take space and time
until they are requested for the first time.
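For concreteness, here is a minimal sketch of such a cache. The `serialize` function is a hypothetical stand-in for running `Data.Binary.runPut` on a BSON value; it is not from the original thread:

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BC
import qualified Data.ByteString.Lazy as BL
import qualified Data.Map.Strict as Map

-- Hypothetical stand-in for serializing a BSON value with Data.Binary.runPut.
serialize :: String -> BL.ByteString
serialize = BL.fromStrict . BC.pack

-- Build the cache: BL.toStrict copies the lazy chunks into one contiguous
-- strict ByteString, and Data.Map.Strict forces each value as the new map
-- is built, so the serialized form is not recomputed on later lookups.
buildCache :: Map.Map String String -> Map.Map String BS.ByteString
buildCache = Map.map (BL.toStrict . serialize)

main :: IO ()
main = do
  let docs  = Map.fromList [("doc1", "hello"), ("doc2", "world")]
      cache = buildCache docs
  print (Map.lookup "doc1" cache)   -- Just "hello"
```

Note that `BL.toStrict` is the step that does the real work here: it is what turns the lazy list of chunks into a single evaluated strict bytestring, exactly the distinction Roman draws above.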

Simply operating with the words "lazy" and "strict" may be very confusing,
since they refer to different things in different contexts. Every time
you read that something is lazy or strict, try to decipher it in terms
of the basic evaluation properties.

HTH,
Roman


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] A question about laziness and performance in document serialization.

2013-08-20 Thread Kyle Hanson
So I am not entirely clear on how to optimize for performance for lazy
bytestrings.

Currently I have a (Lazy) Map that contains large BSON values (more than
1mb when serialized each). I can serialize BSON documents to Lazy
ByteStrings using Data.Binary.runPut. I then write this bytestring to a
socket using Network.Socket.ByteString.Lazy.

My question is this: if the Map object doesn't change (no updates) when it
serializes the same document to the socket 2x in a row, does it re-evaluate
the whole BSON value and convert it to a bytestring each time?

Let's say I wanted to have a cache of bytestrings, so I have another Map
object that holds the serialized bytestrings, which I populate every
time the original BSON Map changes. Should the map be strict or lazy?
Should the bytestrings it stores be strict or lazy?

Any help in understanding laziness would be appreciated.

--
Kyle Hanson


Re: [Haskell-cafe] Either Monad and Laziness

2012-09-18 Thread Malcolm Wallace

On 12 Sep 2012, at 16:04, Eric Velten de Melo wrote:

 The behaviour I want to achieve is like this: I want the program when
 compiled to read from a file, parsing the PGM and at the same time
 apply transformations to the entries as they are read and write them
 back to another PGM file.
 
 Such problems are the main motivation for iteratees, conduits, pipes,
 etc. Every such library contains procedures for doing exactly what you
 want.
 
 
 It would be really awesome, though, if it were possible to use a
 parser written in Parsec with this, in the spirit of avoiding code
 rewriting and enhancing expressivity and abstraction.

The polyparse library on Hackage is another parser combinator framework that 
allows lazy incremental parsing.
http://hackage.haskell.org/package/polyparse

A PDF paper/tutorial is here:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.118.1754&rep=rep1&type=pdf

Regards,
Malcolm



Re: [Haskell-cafe] Either Monad and Laziness

2012-09-17 Thread John Lato
 Subject: Re: [Haskell-cafe] Either Monad and Laziness

 On 9/14/12 5:16 PM, Eric Velten de Melo wrote:
 But now I'm kinda lost. Is there an easy way to explain the difference 
 between:
 -iteratee
 -conduit
 -enumerator

I tend to group them into three families.  'iteratee' and 'enumerator'
are fairly directly drawn from Oleg's code, with mostly implementation
differences (at least when compared to the other families).  They've
tended to keep Oleg's original names (iteratee, enumerator,
enumeratee).

The biggest user-visible difference between iteratee and enumerator is
the level of datastream abstraction.  iteratee abstracts over the
stream, and enumerator abstracts over elements of the stream.  The
stream is explicitly a list of elements.  This exposes some of the
details of data chunking to the user, which has both advantages and
disadvantages (iteratee exposes this also, but it's not necessary as
is the case for enumerator).

The second family (chronologically) includes conduit and (maybe)
iterIO.  I've written a little about this group at
http://johnlato.blogspot.sg/2012/06/understandings-of-iteratees.html
Although they serve the same goals in spirit, the implementation may
or may not necessarily be an iteratee/enumerator arrangement (e.g.
conduit).  This is a technical observation, not a criticism, depending
on exactly what you consider to define the style in the first place.
This group has usually renamed functions.  I discuss some of the other
differences on my blog.

The third family is all the pipes-* stuff.  This group tends towards
emphasizing the relationship between iteratee/enumerator pairs and
coroutines, and also emphasizing (to use Oleg terminology) composition
of enumeratees.  I've been meaning to write more about this group, but
thus far have been unable to do so.

I'd rather not hijack by evangelizing, but obviously I think iteratee
provides several important advantages over the other options.

John L.



Re: [Haskell-cafe] Either Monad and Laziness

2012-09-16 Thread wren ng thornton

On 9/14/12 5:16 PM, Eric Velten de Melo wrote:

But now I'm kinda lost. Is there an easy way to explain the difference between:
-iteratee
-conduit
-enumerator


John Lato's iteratee library is the original one based on Oleg 
Kiselyov's work. I've used it a fair deal and am quite fond of it.


Some folks didn't like it so much though; whence enumerator, conduit, 
pipes, pipes-core,... I've followed the discussions back and forth over 
those libraries, but I've not used them nor sat down to compare them 
head-to-head.


--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Either Monad and Laziness

2012-09-14 Thread Eric Velten de Melo
On 13 September 2012 20:29, wren ng thornton w...@freegeek.org wrote:
 On 9/12/12 5:37 PM, Francesco Mazzoli wrote:

 At Wed, 12 Sep 2012 12:04:31 -0300,
 Eric Velten de Melo wrote:

 It would be really awesome, though, if it were possible to use a
 parser written in Parsec with this, in the spirit of avoiding code
 rewriting and enhancing expressivity and abstraction.


 There is http://hackage.haskell.org/package/attoparsec-conduit and
 http://hackage.haskell.org/package/attoparsec-enumerator, which turn
 attoparsec parsers into enumerators/conduits, and
 http://hackage.haskell.org/package/attoparsec-parsec, which is a
 compatibility
  layer between attoparsec and parsec.  Good luck :).


 Not to mention attoparsec-iteratee, for the iteratee minded folks:

 http://hackage.haskell.org/package/attoparsec-iteratee


Hm... I guess I'm spoiled for choice then. :)

But now I'm kinda lost. Is there an easy way to explain the difference between:
-iteratee
-conduit
-enumerator

I'm very curious about everything concerning Haskell and new
interesting abstractions and ways of doing things, but I might not
have the time to delve deeper into that.


 --
 Live well,
 ~wren





Re: [Haskell-cafe] Either Monad and Laziness

2012-09-13 Thread wren ng thornton

On 9/12/12 5:37 PM, Francesco Mazzoli wrote:

At Wed, 12 Sep 2012 12:04:31 -0300,
Eric Velten de Melo wrote:

It would be really awesome, though, if it were possible to use a
parser written in Parsec with this, in the spirit of avoiding code
rewriting and enhancing expressivity and abstraction.


There is http://hackage.haskell.org/package/attoparsec-conduit and
http://hackage.haskell.org/package/attoparsec-enumerator, which turn
attoparsec parsers into enumerators/conduits, and
http://hackage.haskell.org/package/attoparsec-parsec, which is a compatibility
layer between attoparsec and parsec.  Good luck :).


Not to mention attoparsec-iteratee, for the iteratee minded folks:

http://hackage.haskell.org/package/attoparsec-iteratee


--
Live well,
~wren



[Haskell-cafe] Either Monad and Laziness

2012-09-12 Thread oleg

 I am currently trying to rewrite the Graphics.Pgm library from hackage
 to parse the PGM to a lazy array. 

Laziness and IO really do not mix. 

 The problem is that even using a lazy array structure, because the
 parser returns an Either structure it is only possible to know if the
 parser was successful or not after the whole file is read, 

That is one of the problems. Unexpected memory blowups could be
another problem. The drawbacks of lazy IO are well documented by now.

 The behaviour I want to achieve is like this: I want the program when
 compiled to read from a file, parsing the PGM and at the same time
 apply transformations to the entries as they are read and write them
 back to another PGM file.

Such problems are the main motivation for iteratees, conduits, pipes,
etc. Every such library contains procedures for doing exactly what you
want. Please check Hackage. John Lato's iteratee library, for example,
has procedures for handling sound (AIFF) files -- which may be very
big. IterateeM has the TIFF decoder -- which is incremental and
strict. TIFF is much harder to parse than PGM.





Re: [Haskell-cafe] Either Monad and Laziness

2012-09-12 Thread Eric Velten de Melo
Thanks for all the tips! The iteratees seem worth checking out. I'll
see what I can do and will report back if I come up with something.

Eric

On 12 September 2012 03:03,  o...@okmij.org wrote:

 I am currently trying to rewrite the Graphics.Pgm library from hackage
 to parse the PGM to a lazy array.

 Laziness and IO really do not mix.

 The problem is that even using a lazy array structure, because the
 parser returns an Either structure it is only possible to know if the
 parser was successful or not after the whole file is read,

 That is one of the problems. Unexpected memory blowups could be
 another problem. The drawbacks of lazy IO are well documented by now.

 The behaviour I want to achieve is like this: I want the program when
 compiled to read from a file, parsing the PGM and at the same time
 apply transformations to the entries as they are read and write them
 back to another PGM file.

 Such problems are the main motivation for iteratees, conduits, pipes,
 etc. Every such library contains procedures for doing exactly what you
 want. Please check Hackage. John Lato's iteratee library, for example,
 has procedures for handling sound (AIFF) files -- which may be very
 big. IterateeM has the TIFF decoder -- which is incremental and
 strict. TIFF is much harder to parse than PGM.





Re: [Haskell-cafe] Either Monad and Laziness

2012-09-12 Thread Eric Velten de Melo
On 12 September 2012 11:46, Eric Velten de Melo ericvm...@gmail.com wrote:
 Thanks for all the tips! The iteratees seem worth checking out. I'll
 see what I can do and will report back if I come up with something.

 Eric

 On 12 September 2012 03:03,  o...@okmij.org wrote:

 I am currently trying to rewrite the Graphics.Pgm library from hackage
 to parse the PGM to a lazy array.

 Laziness and IO really do not mix.

 The problem is that even using a lazy array structure, because the
 parser returns an Either structure it is only possible to know if the
 parser was successful or not after the whole file is read,

 That is one of the problems. Unexpected memory blowups could be
 another problem. The drawbacks of lazy IO are well documented by now.

 The behaviour I want to achieve is like this: I want the program when
 compiled to read from a file, parsing the PGM and at the same time
 apply transformations to the entries as they are read and write them
 back to another PGM file.

 Such problems are the main motivation for iteratees, conduits, pipes,
 etc. Every such library contains procedures for doing exactly what you
 want. Please check Hackage. John Lato's iteratee library, for example,
 has procedures for handling sound (AIFF) files -- which may be very
 big. IterateeM has the TIFF decoder -- which is incremental and
 strict. TIFF is much harder to parse than PGM.


It would be really awesome, though, if it were possible to use a
parser written in Parsec with this, in the spirit of avoiding code
rewriting and enhancing expressivity and abstraction.



Re: [Haskell-cafe] Either Monad and Laziness

2012-09-12 Thread Francesco Mazzoli
At Wed, 12 Sep 2012 12:04:31 -0300,
Eric Velten de Melo wrote:
 It would be really awesome, though, if it were possible to use a
 parser written in Parsec with this, in the spirit of avoiding code
 rewriting and enhancing expressivity and abstraction.

There is http://hackage.haskell.org/package/attoparsec-conduit and
http://hackage.haskell.org/package/attoparsec-enumerator, which turn
attoparsec parsers into enumerators/conduits, and
http://hackage.haskell.org/package/attoparsec-parsec, which is a compatibility
layer between attoparsec and parsec.  Good luck :).

--
Francesco * Often in error, never in doubt



[Haskell-cafe] Either Monad and Laziness

2012-09-11 Thread Eric Velten de Melo
Hello,

I am currently trying to rewrite the Graphics.Pgm library from hackage
to parse the PGM to a lazy array. The current implementation parses it
straight to UArray, which is strict.

The behaviour I want to achieve is like this: I want the program when
compiled to read from a file, parsing the PGM and at the same time
apply transformations to the entries as they are read and write them
back to another PGM file.

The problem is that even using a lazy array structure, because the
parser returns an Either structure it is only possible to know if the
parser was successful or not after the whole file is read, therefore
requiring it to read the entire file before applying the
transformations, ruining the property of laziness.

Is there some way to keep this from happening? Should I even want to
make it like this? Not really a real life situation, but imagine I
want to read a really large PGM file which does not fit into RAM
memory and I don't want to be forced to have the whole array in the
memory at once.

One alternative I thought was parsing only the PGM header and then
read the rest of the input without using Parsec and the Either Monad.
In the event the data is corrupted, though, I would not know how to
recover from it.

Any thoughts? Hopefully I'm not saying anything really stupid.

Eric



Re: [Haskell-cafe] Either Monad and Laziness

2012-09-11 Thread Evan Laforge
On Tue, Sep 11, 2012 at 9:36 AM, Eric Velten de Melo
ericvm...@gmail.com wrote:
 Any thoughts? Hopefully I'm not saying anything really stupid.

You can intersperse decoding errors in the output, e.g. output is
[Either Error DecodedChunk].  Then all the processors have to deal
with it, but if you just want to pass the error through then just 'map
. fmap' instead of 'map'.  This means processors can also inject their
own errors or logs into the output, which may be very useful.

Or you could use runtime exceptions, i.e. the decoder is lazy but can
call error.  This is bad for reliability, but if you know you always
want to crash on a bad parse, it keeps the return value simple.
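A small sketch of the first approach, with hypothetical `decodeChunks` and `process` names, and Int chunks standing in for real decoded data:

```haskell
-- Sketch of interspersing decoding errors in the output stream.
-- Processors transform chunks with `map . fmap`, so errors produced by
-- the decoder (or any earlier stage) pass through untouched.
type ErrMsg = String

decodeChunks :: [Int] -> [Either ErrMsg Int]
decodeChunks = map check
  where
    check n
      | n < 0     = Left ("bad chunk: " ++ show n)
      | otherwise = Right n

-- A processor that doubles every successfully decoded chunk.
process :: [Either ErrMsg Int] -> [Either ErrMsg Int]
process = (map . fmap) (* 2)

main :: IO ()
main = print (process (decodeChunks [1, -1, 2]))
-- prints [Right 2,Left "bad chunk: -1",Right 4]
```

Because `process` only maps inside the `Right`s, each `Left` flows through unchanged, and any stage could also inject its own `Left` values, as suggested above.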



Re: [Haskell-cafe] Either Monad and Laziness

2012-09-11 Thread timothyhobbs
Use a tuple: (Result,Maybe Error) rather than an Either.  Do everything
lazily, and in the case of an error, undo the result.
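A sketch of the shape of that idea, with a hypothetical `parseEntries` over Ints. It only models producing the result lazily alongside a possible error, not undoing the result; note that forcing the error component forces the whole traversal:

```haskell
-- Return the lazily built result together with a Maybe error that only
-- becomes Just if a bad entry is eventually encountered.
parseEntries :: [Int] -> ([Int], Maybe String)
parseEntries []     = ([], Nothing)
parseEntries (x:xs)
  | x < 0     = ([], Just ("bad entry: " ++ show x))
  | otherwise = let (rest, err) = parseEntries xs
                in (x * 2 : rest, err)

main :: IO ()
main = do
  print (parseEntries [1, 2, 3])    -- ([2,4,6],Nothing)
  print (parseEntries [1, -1, 3])   -- ([2],Just "bad entry: -1")
```

A consumer can stream the first component without waiting for the whole input, which is exactly what the Either-wrapped result prevents.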


-- Original message --
From: Eric Velten de Melo ericvm...@gmail.com
Date: 11. 9. 2012
Subject: [Haskell-cafe] Either Monad and Laziness
Hello,

I am currently trying to rewrite the Graphics.Pgm library from hackage
to parse the PGM to a lazy array. The current implementation parses it
straight to UArray, which is strict.

The behaviour I want to achieve is like this: I want the program when
compiled to read from a file, parsing the PGM and at the same time
apply transformations to the entries as they are read and write them
back to another PGM file.

The problem is that even using a lazy array structure, because the
parser returns an Either structure it is only possible to know if the
parser was successful or not after the whole file is read, therefore
requiring it to read the entire file before applying the
transformations, ruining the property of laziness.

Is there some way to keep this from happening? Should I even want to
make it like this? Not really a real life situation, but imagine I
want to read a really large PGM file which does not fit into RAM
memory and I don't want to be forced to have the whole array in the
memory at once.

One alternative I thought was parsing only the PGM header and then
read the rest of the input without using Parsec and the Either Monad.
In the event the data is corrupted, though, I would not know how to
recover from it.

Any thoughts? Hopefully I'm not saying anything really stupid.

Eric



Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-26 Thread Yves Parès
One day, I _really_ should learn all GHCI commands...

Thanks, Felipe ^^

2012/1/25 Felipe Almeida Lessa felipe.le...@gmail.com

 On Wed, Jan 25, 2012 at 7:38 PM, Yves Parès yves.pa...@gmail.com wrote:
  But I haven't found a way to tell GHCI to fully evaluate 'x' but _not_
  print its value.

 Use the :force, Yves!

  let {a = htrace "a" 12; b = htrace "b" 29; c = htrace "c" 10; d = htrace
 "d" 90; x = htrace "," (htrace "+" (a+b), htrace "*" (c*d)) }
  :force x
 ,
     +
         a
         b
     *
         c
         d
 x = (41,900)

 Cheers! =)

 --
 Felipe.



Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Eugene Kirpichov
Thanks!

I released it:

http://hackage.haskell.org/package/htrace
http://github.com/jkff/htrace

On Wed, Jan 25, 2012 at 4:18 AM, Felipe Almeida Lessa 
felipe.le...@gmail.com wrote:

 Really nice!  Looks like it could be a useful mini-package on Hackage.

 --
 Felipe.




-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/


Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Claus Reinke

Look how one can watch the evaluation tree of a computation, to debug
laziness-related problems.


You might like the old Hood/GHood:

http://hackage.haskell.org/package/hood
http://hackage.haskell.org/package/GHood

Background info/papers:

http://www.ittc.ku.edu/csdl/fpg/Tools/Hood
http://www.ittc.ku.edu/csdl/fpg/node/26
http://community.haskell.org/~claus/GHood/

Claus




Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Yves Parès
Hi, nice little package!

I just made a fork and added a new function makeHTrace to be able to have
separate 'level' variables.
I also added the htrace type signature (or else haddock won't generate
documentation for this module):
https://github.com/YwenP/htrace

I was also investigating a way to fix an annoyance. You see, in GHCI:

 let {a = htrace "a" 12; b = htrace "b" 29; c = htrace "c" 10; d = htrace
"d" 90; x = htrace "," (htrace "+" (a+b), htrace "*" (c*d)) }
 x

prints:

,
(+
  a
  b
41,*
  c
  d
900)

Instead, we'd like to have (if I'm right):

,
  +
    a
    b
  *
    c
    d
(41,900)

But I haven't found a way to tell GHCI to fully evaluate 'x' but _not_
print its value.

2012/1/25 Eugene Kirpichov ekirpic...@gmail.com

 Thanks!

 I released it:

 http://hackage.haskell.org/package/htrace
 http://github.com/jkff/htrace


 On Wed, Jan 25, 2012 at 4:18 AM, Felipe Almeida Lessa 
 felipe.le...@gmail.com wrote:

 Really nice!  Looks like it could be a useful mini-package on Hackage.

 --
 Felipe.




 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/





Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-25 Thread Felipe Almeida Lessa
On Wed, Jan 25, 2012 at 7:38 PM, Yves Parès yves.pa...@gmail.com wrote:
 But I haven't found a way to tell GHCI to fully evaluate 'x' but _not_ print
 its value.

Use the :force, Yves!

 let {a = htrace "a" 12; b = htrace "b" 29; c = htrace "c" 10; d = htrace "d" 
 90; x = htrace "," (htrace "+" (a+b), htrace "*" (c*d)) }
 :force x
,
    +
        a
        b
    *
        c
        d
x = (41,900)

Cheers! =)

-- 
Felipe.



[Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-24 Thread Eugene Kirpichov
Hi cafe,

Look how one can watch the evaluation tree of a computation, to debug
laziness-related problems.

{-# LANGUAGE BangPatterns #-}
module HTrace where

import Data.List (foldl')
import Data.IORef
import System.IO.Unsafe

level = unsafePerformIO $ newIORef 0

htrace str x = unsafePerformIO $ do
  lvl <- readIORef level
  putStrLn (replicate (4*lvl) ' ' ++ str)
  writeIORef level (lvl+1)
  let !vx = x
  writeIORef level lvl
  return vx

xs = map (\x -> htrace (show x) x) [1..10]

s = foldl (\a b -> htrace "+" (a+b)) 0 xs
s2 = foldl' (\a b -> htrace "+" (a+b)) 0 xs

b = htrace "b" 2
c = htrace "c" 3
a = htrace "a" $ b + c
x = htrace "x" $ b + c

*HTrace> a
a
    b
    c
5
*HTrace> x
x
5

*HTrace> s
+
    +
        +
            +
                +
                    +
                        +
                            +
                                +
                                    +
                                        1
                                    2
                                3
                            4
                        5
                    6
                7
            8
        9
    10
55

(reload)
*HTrace> s2
+
    1
+
    2
+
    3
+
    4
+
    5
+
    6
+
    7
+
    8
+
    9
+
    10
55

-- 
Eugene Kirpichov
Principal Engineer, Mirantis Inc. http://www.mirantis.com/
Editor, http://fprog.ru/


Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-24 Thread HASHIMOTO, Yusaku
Great, it illustrates why difference lists are awesome.

import HTrace

app :: [a] -> [a] -> [a]
app [] ys = htrace "app" ys
app (x:xs) ys = htrace "app" (x:app xs ys)

rev1 [] = htrace "[]" []
rev1 (x:xs) = htrace "rev1" (app (rev1 xs) [x])

rev2 []     ys = htrace "ys" ys
rev2 (x:xs) ys = htrace ":" (rev2 xs (x:ys))

*Main> rev1 [1..10]
rev1
rev1
rev1
rev1
rev1
rev1
rev1
rev1
rev1
rev1
[]
app
app
app
app
app
app
app
app
app
app
[10app
app
app
app
app
app
app
app
app
,9app
app
app
app
app
app
app
app
,8app
app
app
app
app
app
app
,7app
app
app
app
app
app
,6app
app
app
app
app
,5app
app
app
app
,4app
app
app
,3app
app
,2app
,1]
*Main> rev2 [1..10]

<interactive>:4:1:
No instance for (Show ([a0] -> [a0]))
  arising from a use of `print'
Possible fix: add an instance declaration for (Show ([a0] -> [a0]))
In a stmt of an interactive GHCi command: print it
*Main> rev2 [1..10] []
:
:
:
:
:
:
:
:
:
:
ys
[10,9,8,7,6,5,4,3,2,1]

Thanks for sharing!

On 25 January 2012 01:47, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi cafe,

 Look how one can watch the evaluation tree of a computation, to debug
 laziness-related problems.

 {-# LANGUAGE BangPatterns #-}
 module HTrace where

 import Data.List (foldl')
 import Data.IORef
 import System.IO.Unsafe

 level = unsafePerformIO $ newIORef 0

 htrace str x = unsafePerformIO $ do
   lvl <- readIORef level
   putStrLn (replicate (4*lvl) ' ' ++ str)
   writeIORef level (lvl+1)
   let !vx = x
   writeIORef level lvl
   return vx

 xs = map (\x -> htrace (show x) x) [1..10]

 s = foldl (\a b -> htrace "+" (a+b)) 0 xs
 s2 = foldl' (\a b -> htrace "+" (a+b)) 0 xs

 b = htrace "b" 2
 c = htrace "c" 3
 a = htrace "a" $ b + c
 x = htrace "x" $ b + c

 *HTrace> a
 a
     b
     c
 5
 *HTrace> x
 x
 5

 *HTrace> s
 +
     +
         +
             +
                 +
                     +
                         +
                             +
                                 +
                                     +
                                         1
                                     2
                                 3
                             4
                         5
                     6
                 7
             8
         9
     10
 55

 (reload)
 *HTrace> s2
 +
     1
 +
     2
 +
     3
 +
     4
 +
     5
 +
     6
 +
     7
 +
     8
 +
     9
 +
     10
 55

 --
 Eugene Kirpichov
 Principal Engineer, Mirantis Inc. http://www.mirantis.com/
 Editor, http://fprog.ru/





Re: [Haskell-cafe] Hierarchical tracing for debugging laziness

2012-01-24 Thread Felipe Almeida Lessa
Really nice!  Looks like it could be a useful mini-package on Hackage.

-- 
Felipe.



Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-03 Thread Dominique Devriese
2011/5/3 Manuel M T Chakravarty c...@cse.unsw.edu.au:
 Interestingly, today (at least the academic fraction of) the Haskell
 community appears to hold the purity of the language in higher
 regard than its laziness.

I find Greg Morrisett's comment on Lennart Augustsson's article pro
lazy evaluation very interesting:

  
http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html#c7969361694724090315

What I find interesting is that he considers (non-)termination an
effect, which Haskell does not manage to control like it does other
types of effects. Dependently typed purely functional languages like
Coq (or Agda if you prefer ;)) do manage to control this (at the cost
of replacing general recursion with structural recursion) and require
you to model non-termination in a monad (or Applicative functor) like
in YNot or Agda's partiality monad (written _⊥) which models just
non-termination.

I have the impression that this separation of the partiality effect
provides a certain independence of evaluation order which neither ML
(because of side-effects) nor Haskell (because of non-strict
semantics) manage to provide. Such an independence seems very useful
for optimization and parallel purposes.

Dominique



Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-03 Thread Jan-Willem Maessen
On Tue, May 3, 2011 at 1:32 AM, Manuel M T Chakravarty
c...@cse.unsw.edu.au wrote:
...  Interestingly, today (at least the academic fraction of) the Haskell 
community appears to hold the purity of the language in higher regard than its 
laziness.

As someone who implemented Haskell with quite a bit less laziness, I'm
inclined to agree.

That said, I think it's easy to underestimate just how much of the
structure of the language really encourages a lazy evaluation
strategy.  One example: where blocks scope over groups of conditional
RHSs.  This is very handy, in that we can bind variables that are then
used in some, but not all, of the disjuncts.  Grabbing the first
example that comes to hand from my own code:

tryOne (gg, uu) e
  | not (consistent uu)  = (gg', uu)
  | uu==uu' = (gg, uu)
  | otherwise = (gg', uu')
  where gg' = gg `addEquation` e
        uu' = uu `addEquation` e

This kind of thing happens all over the place in Haskell code -- it's
a very natural coding style -- but if you want to preserve purity it's
tough to compile without laziness.

-Jan-Willem Maessen



Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-03 Thread Dan Doel
On Tue, May 3, 2011 at 2:26 AM, Dominique Devriese
dominique.devri...@cs.kuleuven.be wrote:
 What I find interesting is that he considers (non-)termination an
 effect, which Haskell does not manage to control like it does other
 types of effects. Dependently typed purely functional languages like
 Coq (or Agda if you prefer ;)) do manage to control this (at the cost
 of replacing general recursion with structural recursion) and require
 you to model non-termination in a monad (or Applicative functor) like
 in YNot or Agda's partiality monad (written _⊥) which models just
 non-termination.

Dependent typing isn't really necessary. Only totality. Of course,
there's some agreement that dependent types help you get back some of
the power you'd lose by going total (by helping you write precise
enough types for your programs to be accomplished in the more limited
recursive manner).

 I have the impression that this separation of the partiality effect
 provides a certain independence of evaluation order which neither ML
 (because of side-effects) nor Haskell (because of non-strict
 semantics) manage to provide. Such an independence seems very useful
 for optimization and parallel purposes.

Total lambda calculi tend to yield the same results irrespective of
evaluation strategy. I guess that's useful for optimization, because
you can apply transformations wherever you want without worrying about
changing the definedness of something (because everything is
guaranteed to be well defined regardless of your evaluation strategy).

I don't see how it obviates strictness analysis, though. For instance:

sumAcc a (x:xs) = sumAcc (a + x) xs
sumAcc a [] = a

... case sumAcc 0 l of { n -> ... }

Even in a total language, accumulating lazy thunks is likely to be
inefficient when we come to use the accumulator, whereas one can
also construct examples (even in a total and inductive fragment) where
lazy evaluation will be superior. So you need to do analysis to
determine which things should be strict and which should be lazy for
efficiency, even if you aren't obligated to do it to determine whether
your optimizations are semantics-preserving.

-- Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-03 Thread Manuel M T Chakravarty
I completely agree that laziness enables a number of nice coding idioms and, as 
Lennart described so eloquently, it does facilitate a combinator-based coding 
style among other things:

  http://augustss.blogspot.com/2011/05/more-points-for-lazy-evaluation-in.html

(Note that even Bob admits that this is an advantage.)

What I meant is that if asked what is more important about Haskell, its 
laziness or purity, I think most people would pick purity.  (But then it's a 
strange decision to make as laziness implies a need for purity as discussed.)

Manuel

Jan-Willem Maessen:
 On Tue, May 3, 2011 at 1:32 AM, Manuel M T Chakravarty
 c...@cse.unsw.edu.au wrote:
 ...  Interestingly, today (at least the academic fraction of) the Haskell 
 community appears to hold the purity of the language in higher regard than 
 its laziness.
 
 As someone who implemented Haskell with quite a bit less laziness, I'm
 inclined to agree.
 
 That said, I think it's easy to underestimate just how much of the
 structure of the language really encourages a lazy evaluation
 strategy.  One example: where blocks scope over groups of conditional
 RHSs.  This is very handy, in that we can bind variables that are then
 used in some, but not all, of the disjuncts.  Grabbing the first
 example that comes to hand from my own code:
 
 tryOne (gg, uu) e
   | not (consistent uu) = (gg', uu)
   | uu == uu'           = (gg, uu)
   | otherwise           = (gg', uu')
   where gg' = gg `addEquation` e
         uu' = uu `addEquation` e
 
 This kind of thing happens all over the place in Haskell code -- it's
 a very natural coding style -- but if you want to preserve purity it's
 tough to compile without laziness.
 
 -Jan-Willem Maessen


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Ketil Malde

I'm following Harper's blog, Existential Type¹, which I find to be an
enjoyable and entertainingly written tirade about the advantages of
teaching functional programming - specifically ML - to students.  Of
course, he tends to be critical of Haskell, but it's nice to get some
thought provoking opinion from somebody who knows a bit about the
business.

Recently, he had a piece on monads, and how to do them in ML, and one
statement puzzled me:

  There is a particular reason why monads had to arise in Haskell,
   though, which is to defeat the scourge of laziness.

My own view is/was that monads were so successful in Haskell since it
allowed writing flexible programs with imperative features, without
sacrificing referential transparency.  Although people are quick (and
rightly so) to point out that this flexibility goes way beyond IO, I
think IO was in many ways the killer application for monads.  Before IO,
we had very limited functionality (like 'interact' taking a 'String ->
String' function and converting it into an executable program) to build
real programs from.

Laziness does require referential transparency (or at least, it is
easier to get away with the lack of RT in a strict language), so I can
see that he is indirectly correct, but RT is a goal in itself.  Thus, I
wonder if there are any other rationale for a statement like that?

-k

¹ http://existentialtype.wordpress.com/
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread MigMit
Yes, I'm following it too, and it seems to me that Harper just allows his 
dislike for Haskell to get the better of his judgement. Monads as a way to deal 
with laziness are a very common misconception.

Sent from iPhone

May 2, 2011, at 11:54, Ketil Malde ke...@malde.org wrote:

 
 I'm following Harper's blog, Existential Type¹, which I find to be an
 enjoyable and entertainingly written tirade about the advantages of
 teaching functional programming - specifically ML - to students.  Of
 course, he tends to be critical of Haskell, but it's nice to get some
 thought provoking opinion from somebody who knows a bit about the
 business.
 
 Recently, he had a piece on monads, and how to do them in ML, and one
 statement puzzled me:
 
  There is a particular reason why monads had to arise in Haskell,
   though, which is to defeat the scourge of laziness.
 
 My own view is/was that monads were so successful in Haskell since it
 allowed writing flexible programs with imperative features, without
 sacrificing referential transparency.  Although people are quick (and
 rightly so) to point out that this flexibility goes way beyond IO, I
 think IO was in many ways the killer application for monads.  Before IO,
 we had very limited functionality (like 'interact' taking a 'String ->
 String' function and converting it into an executable program) to build
 real programs from.
 
 Laziness does require referential transparency (or at least, it is
 easier to get away with the lack of RT in a strict language), so I can
 see that he is indirectly correct, but RT is a goal in itself.  Thus, I
 wonder if there are any other rationale for a statement like that?
 
 -k
 
 ¹ http://existentialtype.wordpress.com/
 -- 
 If I haven't seen further, it is by standing in the footprints of giants
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Scott Turner
On 2011-05-02 03:54, Ketil Malde wrote:
   There is a particular reason why monads had to arise in Haskell,
though, which is to defeat the scourge of laziness.
 
 I wonder if there are any other rationale for a statement like that?

He spends one paragraph dismissing the usefulness of referential
transparency

   You cannot easily convert between functional and monadic
style without a radical restructuring of code.  ... you
are deprived of the useful concept of a benign effect

I imagine that he considered how he prefers ML the way it is, without
base libraries thoroughly rewritten to support purity. If purity and RT
aren't the reason why Haskell uses monads, what's left?  Laziness does
have disadvantages.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Dominique Devriese
2011/5/2 Ketil Malde ke...@malde.org:
  There is a particular reason why monads had to arise in Haskell,
   though, which is to defeat the scourge of laziness.

 My own view is/was that monads were so successful in Haskell since it
 allowed writing flexible programs with imperative features, without
 sacrificing referential transparency.  [...]

 Laziness does require referential transparency (or at least, it is
 easier to get away with the lack of RT in a strict language), so I can
 see that he is indirectly correct, but RT is a goal in itself.  Thus, I
 wonder if there are any other rationale for a statement like that?

I agree with your analysis. Throughout his different articles, I think
Harper partly has a point when he says that laziness brings certain
disadvantages (like e.g. complex memory and CPU behaviour) to Haskell
(although I disagree with some of his other  arguments here). However,
like you say, he misses the ball by amalgamating laziness with
referential transparency, where the first probably requires the second
but not vice versa. This allows him to simply dismiss both, which is
convenient for his apparent conclusion that ML is strictly better
than Haskell, since referential transparency and purity are (in my
view) one of the things ML lacks most when compared to Haskell. His
only other argument against referential transparency and purity seems
to be his mentioning of benign effects, which is weak for two
reasons: first, benign effects are clearly not what typical ML
programs use non-purity for and second, benign effects can be
supported much more conservatively using Haskell's unsafePerformIO.
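As an aside, a classic instance of such a benign effect is `Debug.Trace.trace`, which is implemented with `unsafePerformIO`: printing happens on the side, yet callers see the same pure value either way (my illustration, not from the thread).

```haskell
import Debug.Trace (trace)

-- The trace call logs to stderr as a side channel; the value computed
-- by fib is unaffected, which is what makes the effect "benign".
fib :: Int -> Int
fib n | n < 2     = n
      | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))
```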

Dominique

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Felipe Almeida Lessa
On Mon, May 2, 2011 at 9:29 AM, Dominique Devriese
dominique.devri...@cs.kuleuven.be wrote:
 I agree with your analysis. Throughout his different articles, I think
 Harper partly has a point when he says that laziness brings certain
 disadvantages (like e.g. complex memory and CPU behaviour) to Haskell
 (although I disagree with some of his other  arguments here). However,
 like you say, he misses the ball by amalgamating laziness with
 referential transparency, where the first probably requires the second
 but not vice versa. This allows him to simply dismiss both, which is
 convenient for his apparent conclusion that ML is strictly better
 than Haskell, since referential transparency and purity are (in my
 view) one of the things ML lacks most when compared to Haskell. His
 only other argument against referential transparency and purity seems
 to be his mentioning of benign effects, which is weak for two
 reasons: first, benign effects are clearly not what typical ML
 programs use non-purity for and second, benign effects can be
 supported much more conservatively using Haskell's unsafePerformIO.

Or, instead of unsafePerformIO, runST.

Saying that we should allow effects everywhere because there are
benign effects is like saying that we should allow GOTOs everywhere
because there are some benign GOTOs.  By allowing these benign
things we also open a can of worms of malign things.
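A hedged sketch of the `runST` alternative mentioned above (my illustration): mutation is used internally, but the rank-2 type of `runST` guarantees the effect cannot escape, so the function is pure to its callers.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Sum a list with a mutable accumulator; the STRef never leaks out,
-- so sumST is a pure function despite the internal mutation.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref
```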

Cheers,

-- 
Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Tony Morris
On 02/05/11 17:54, Ketil Malde wrote:
 I'm following Harper's blog, Existential Type¹, which I find to be an
 enjoyable and entertainingly written tirade about the advantages of
 teaching functional programming - specifically ML - to students.  Of
 course, he tends to be critical of Haskell, but it's nice to get some
 thought provoking opinion from somebody who knows a bit about the
 business.

 Recently, he had a piece on monads, and how to do them in ML, and one
 statement puzzled me:

   There is a particular reason why monads had to arise in Haskell,
though, which is to defeat the scourge of laziness.

 My own view is/was that monads were so successful in Haskell since it
 allowed writing flexible programs with imperative features, without
 sacrificing referential transparency.  Although people are quick (and
 rightly so) to point out that this flexibility goes way beyond IO, I
 think IO was in many ways the killer application for monads.  Before IO,
 we had very limited functionality (like 'interact' taking a 'String ->
 String' function and converting it into an executable program) to build
 real programs from.

 Laziness does require referential transparency (or at least, it is
 easier to get away with the lack of RT in a strict language), so I can
 see that he is indirectly correct, but RT is a goal in itself.  Thus, I
 wonder if there are any other rationale for a statement like that?

 -k

 ¹ http://existentialtype.wordpress.com/
Interesting how I have been authoring and subsequently using monads in
Scala for several years and it is strictness that gets in the way more
than anything.

http://github.com/scalaz/scalaz/

I have been teaching functional programming for quite a while, both in
universities and outside of academia, and I am of the opinion that
Haskell's suitability takes first place with no close second. I wonder
whether I am wrong, but this post (and the previous ones) is hardly persuasive.

-- 
Tony Morris
http://tmorris.net/



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Manuel M T Chakravarty
Tony Morris:
 Interesting how I have been authoring and subsequently using monads in
 scala for several years and it is strictness that gets in the way more
 than anything.

Just to make sure that I understand you correctly.  You are saying that when 
you use monads in Scala, then strictness makes that more awkward than 
necessary.  (I assume that you are *not* saying that strictness is the most 
awkward language feature of Scala.)

Why is that?

Manuel


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Robert Harper on monads and laziness

2011-05-02 Thread Manuel M T Chakravarty
For a historical perspective, I highly recommend The Definitive Account of the 
History of Haskell:

  
http://research.microsoft.com/en-us/um/people/simonpj/papers/history-of-haskell/index.htm

Section 7 clearly and directly cites the desire to have pure I/O as the 
motivation for adopting monads.  As you are saying that doesn't directly 
contradict the statement of Bob that you cite.  In fact, Section 3.2 of the 
above paper supports the opinion that purity is a consequence of choosing to be 
lazy, but it also claims that the combination of power and beauty of a pure 
language motivated the language designers.  Interestingly, today (at least the 
academic fraction of) the Haskell community appears to hold the purity of the 
language in higher regard than its laziness.

Manuel


Ketil Malde:
 I'm following Harper's blog, Existential Type¹, which I find to be an
 enjoyable and entertainingly written tirade about the advantages of
 teaching functional programming - specifically ML - to students.  Of
 course, he tends to be critical of Haskell, but it's nice to get some
 thought provoking opinion from somebody who knows a bit about the
 business.
 
 Recently, he had a piece on monads, and how to do them in ML, and one
 statement puzzled me:
 
  There is a particular reason why monads had to arise in Haskell,
   though, which is to defeat the scourge of laziness.
 
 My own view is/was that monads were so successful in Haskell since it
 allowed writing flexible programs with imperative features, without
 sacrificing referential transparency.  Although people are quick (and
 rightly so) to point out that this flexibility goes way beyond IO, I
 think IO was in many ways the killer application for monads.  Before IO,
 we had very limited functionality (like 'interact' taking a 'String ->
 String' function and converting it into an executable program) to build
 real programs from.
 
 Laziness does require referential transparency (or at least, it is
 easier to get away with the lack of RT in a strict language), so I can
 see that he is indirectly correct, but RT is a goal in itself.  Thus, I
 wonder if there are any other rationale for a statement like that?
 
 -k
 
 ¹ http://existentialtype.wordpress.com/
 -- 
 If I haven't seen further, it is by standing in the footprints of giants
 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: Dictionaries and full laziness transformation

2011-02-09 Thread Simon Peyton-Jones
In general it's quite hard to solve this problem without risking losing sharing.

However in this case I added a simple arity analyser after the 7.0.1 release 
which solves the problem.  It'll be in 7.0.2.

Try with HEAD and check it does what you expect.

Simon

| -Original Message-
| From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-
| users-boun...@haskell.org] On Behalf Of Akio Takano
| Sent: 07 February 2011 04:10
| To: glasgow-haskell-users@haskell.org
| Subject: Dictionaries and full laziness transformation
| 
| Hi,
| 
| I'm using GHC 7.0.1. I found that recursive overloaded functions tend
| to leak memory when compiled with full-laziness optimization on. Here
| is a simple case.
| 
| -- TestSub.hs
| {-# LANGUAGE BangPatterns #-}
| module TestSub where
| 
| {-# NOINLINE factorial #-}
| factorial :: (Num a) => a -> a -> a
| factorial !n !acc = if n == 0 then acc else factorial (n - 1) (acc * n)
| 
| -- main.hs
| import TestSub
| 
| factorial1 :: Int -> Int -> Int
| factorial1 = factorial
| 
| main = do
|     n <- readLn
|     print $ factorial1 n 1
| 
|     main
| 
| This program should run in constant space, and compiled with -O0 or
| -O2 -fno-full-laziness, it does. However with -O2, it takes a linear
| amount of memory. The core for factorial looks like this:
| 
| TestSub.factorial =
|   \ (@ a_ajm) ($dNum_slz :: GHC.Num.Num a_ajm) ->
| let {
|   a_slA :: GHC.Classes.Eq a_ajm
|   [LclId]
|   a_slA = GHC.Num.$p1Num @ a_ajm $dNum_slz } in
| let {
|   lvl2_slC :: a_ajm -> a_ajm -> a_ajm
|   [LclId]
|   lvl2_slC = TestSub.factorial @ a_ajm $dNum_slz } in
| ...
| 
| The problem is that lvl2_slC closure is created whenever factorial is
| applied to a Num dictionary, and kept alive until that application is
| GCed. In this program it never happens, because an application to the
| Num Int dictionary is referred to by the factorial1 CAF, and it
| recursively keeps the whole chain of closures alive.
| 
| I know that full laziness transformation *sometimes* causes a space
| leak, but this looks like a bad result to me, because:
| 
| - It looks like there is no point building a lot of equivalent
| closures, instead of one.
| - A lot of code can suffer from this behavior, because overloaded
| recursive functions are fairly common.
|   For example, unfoldConvStream function from the latest iteratee
| package suffers from this problem, if I understand correctly.
| 
| Does anyone have an idea on whether this can be fixed in GHC, or how
| to work around this problem?
| 
| Regards,
| 
| Takano Akio
| 


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Dictionaries and full laziness transformation

2011-02-09 Thread Maciej Wos
I've been using ghc-7.1.20110125 and it does indeed help a great deal.
I've tried compiling several problematic functions and in most cases
the problem is gone. However, in one of my test cases the closures are
still being constructed:

let {
  lvl8_s1S8
:: [Data.Iteratee.Base.Iteratee s_aXZ m_aXY a_aY1]
   -> Data.Iteratee.Base.Iteratee s_aXZ m_aXY ()
  [LclId, Str=DmdType]
  lvl8_s1S8 =
EnumSequenceTest.enumSequence_
  @ m_aXY
  @ s_aXZ
  @ el_aY0
  @ a_aY1
  $dMonad_s1Q4
  $dListLike_s1Q0
  $dNullable_s1PV } in

The code for enumSequence_ is included in the attachment. There are
two versions:
* enumSequence-bad.hs is the original code
* enumSequence-good.hs includes a fix suggested by Akio

When compiled with ghc-7.1.20110125 and -O2 [1] it uses a lot of memory:

./enumSequence-main +RTS -s
1
1
0
  22,787,568,816 bytes allocated in the heap
   1,455,499,400 bytes copied during GC
 260,651,512 bytes maximum residency (10 sample(s))
  14,457,544 bytes maximum slop
 530 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0: 43936 collections, 0 parallel,  1.58s,  1.57s elapsed
  Generation 1:    10 collections, 0 parallel,  1.05s,  1.05s elapsed

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    4.64s  (  4.66s elapsed)
  GC    time    2.63s  (  2.62s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time    7.27s  (  7.28s elapsed)

  %GC time      36.1%  (36.0% elapsed)

  Alloc rate    4,905,159,063 bytes per MUT second

  Productivity  63.8% of total user, 63.8% of total elapsed

while compiling with -O2 and -fno-full-laziness or -O0 reverts memory
usage back to constant:

./enumSequence-main +RTS -s
1
1
0
  22,493,819,416 bytes allocated in the heap
 578,891,112 bytes copied during GC
  46,696 bytes maximum residency (1 sample(s))
  19,928 bytes maximum slop
   1 MB total memory in use (0 MB lost due to fragmentation)

  Generation 0: 43335 collections, 0 parallel,  1.07s,  1.07s elapsed
  Generation 1: 1 collections, 0 parallel,  0.00s,  0.00s elapsed

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    4.53s  (  4.55s elapsed)
  GC    time    1.07s  (  1.07s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time    5.60s  (  5.62s elapsed)

  %GC time      19.2%  (19.0% elapsed)

  Alloc rate    4,966,355,161 bytes per MUT second

  Productivity  80.8% of total user, 80.5% of total elapsed

-- Maciej

[1] ghc --make -rtsopts -fforce-recomp -O2 enumSequence-bad.hs
enumSequence-main.hs
[2] ghc --make -rtsopts -fforce-recomp -O2 -fno-full-laziness
enumSequence-bad.hs enumSequence-main.hs

On Thu, Feb 10, 2011 at 2:00 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 In general it's quite hard to solve this problem without risking losing 
 sharing.

 However in this case I added a simple arity analyser after the 7.0.1 release 
 which solves the problem.  It'll be in 7.0.2.

 Try with HEAD and check it does what you expect.

 Simon

 | -Original Message-
 | From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-
 | users-boun...@haskell.org] On Behalf Of Akio Takano
 | Sent: 07 February 2011 04:10
 | To: glasgow-haskell-users@haskell.org
 | Subject: Dictionaries and full laziness transformation
 |
 | Hi,
 |
 | I'm using GHC 7.0.1. I found that recursive overloaded functions tend
 | to leak memory when compiled with full-laziness optimization on. Here
 | is a simple case.
 |
 | -- TestSub.hs
 | {-# LANGUAGE BangPatterns #-}
 | module TestSub where
 |
 | {-# NOINLINE factorial #-}
 | factorial :: (Num a) => a -> a -> a
 | factorial !n !acc = if n == 0 then acc else factorial (n - 1) (acc * n)
 |
 | -- main.hs
 | import TestSub
 |
 | factorial1 :: Int -> Int -> Int
 | factorial1 = factorial
 |
 | main = do
 |     n <- readLn
 |     print $ factorial1 n 1
 |
 |     main
 |
 | This program should run in constant space, and compiled with -O0 or
 | -O2 -fno-full-laziness, it does. However with -O2, it takes a linear
 | amount of memory. The core for factorial looks like this:
 |
 | TestSub.factorial =
 |   \ (@ a_ajm) ($dNum_slz :: GHC.Num.Num a_ajm) ->
 |     let {
 |       a_slA :: GHC.Classes.Eq a_ajm
 |       [LclId]
 |       a_slA = GHC.Num.$p1Num @ a_ajm $dNum_slz } in
 |     let {
 |       lvl2_slC :: a_ajm -> a_ajm -> a_ajm
 |       [LclId]
 |       lvl2_slC = TestSub.factorial @ a_ajm $dNum_slz } in
 | ...
 |
 | The problem is that lvl2_slC closure is created whenever factorial is
 | applied to a Num dictionary, and kept alive until that application is
 | GCed. In this program it never happens, because an application to the
 | Num Int dictionary is referred to by the factorial1 CAF, and it
 | recursively keeps the whole chain of closures alive.
 |
 | I know that full laziness transformation *sometimes* causes a space
 | leak, but this looks like a bad result

Dictionaries and full laziness transformation

2011-02-06 Thread Akio Takano
Hi,

I'm using GHC 7.0.1. I found that recursive overloaded functions tend
to leak memory when compiled with full-laziness optimization on. Here
is a simple case.

-- TestSub.hs
{-# LANGUAGE BangPatterns #-}
module TestSub where

{-# NOINLINE factorial #-}
factorial :: (Num a) => a -> a -> a
factorial !n !acc = if n == 0 then acc else factorial (n - 1) (acc * n)

-- main.hs
import TestSub

factorial1 :: Int -> Int -> Int
factorial1 = factorial

main = do
    n <- readLn
    print $ factorial1 n 1

    main

This program should run in constant space, and compiled with -O0 or
-O2 -fno-full-laziness, it does. However with -O2, it takes a linear
amount of memory. The core for factorial looks like this:

TestSub.factorial =
  \ (@ a_ajm) ($dNum_slz :: GHC.Num.Num a_ajm) ->
let {
  a_slA :: GHC.Classes.Eq a_ajm
  [LclId]
  a_slA = GHC.Num.$p1Num @ a_ajm $dNum_slz } in
let {
  lvl2_slC :: a_ajm -> a_ajm -> a_ajm
  [LclId]
  lvl2_slC = TestSub.factorial @ a_ajm $dNum_slz } in
...

The problem is that lvl2_slC closure is created whenever factorial is
applied to a Num dictionary, and kept alive until that application is
GCed. In this program it never happens, because an application to the
Num Int dictionary is referred to by the factorial1 CAF, and it
recursively keeps the whole chain of closures alive.

I know that full laziness transformation *sometimes* causes a space
leak, but this looks like a bad result to me, because:

- It looks like there is no point building a lot of equivalent
closures, instead of one.
- A lot of code can suffer from this behavior, because overloaded
recursive functions are fairly common.
  For example, unfoldConvStream function from the latest iteratee
package suffers from this problem, if I understand correctly.

Does anyone have an idea on whether this can be fixed in GHC, or how
to work around this problem?
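One workaround that is sometimes suggested (a sketch under my own assumptions, not an official fix from GHC): route the recursion through a local loop, so the dictionary is captured exactly once at the outer binding and full laziness has no per-call dictionary application left to float. The `Eq` constraint is written explicitly here so the sketch compiles with modern `base`, where `Num` no longer implies `Eq`.

```haskell
{-# LANGUAGE BangPatterns #-}

-- The polymorphic entry point applies the dictionary once; the
-- monomorphic-in-the-dictionary inner loop 'go' does the recursion.
factorial :: (Eq a, Num a) => a -> a -> a
factorial = go
  where
    go !n !acc = if n == 0 then acc else go (n - 1) (acc * n)
```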

Regards,

Takano Akio

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)

2010-10-24 Thread Henning Thielemann
Max Bolingbroke schrieb:

 Let's start with a simple example of an existential data type:
 
 data Stream a = forall s. Stream s (s -> Maybe (a, s))

I use quite the same data type for my signal processing applications:
  http://code.haskell.org/synthesizer/core/src/Synthesizer/State/Signal.hs

You may be interested in my overview:
http://hackage.haskell.org/packages/archive/synthesizer/0.2.0.1/doc/html/Synthesizer-Storage.html

 Now we may wish to define infinite streams using value recursion:
 
 ones :: Stream Int
 ones = cons 1 ones

For me a Stream is a List without storage, thus in your example you
cannot share the content of 'ones'. The internal state type becomes
larger and larger for every new element (every element adds a new layer
of Maybe, if I'm not mistaken).

In cases, where I want this kind of recursion I store the content
generated by a Stream in a list or another more efficient stream type or
I write special functions for this purpose (e.g., 'repeat' in the case
of 'ones').
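A sketch of that "special function" route for this example (my illustration): a constant-state `repeat` keeps the state type fixed at `()`, avoiding the ever-growing state that recursive `cons` produces.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Stream a = forall s. Stream s (s -> Maybe (a, s))

-- The state stays () forever, so the state type does not grow
-- with each element the way value-recursive cons makes it grow.
repeatS :: a -> Stream a
repeatS x = Stream () (\s -> Just (x, s))

-- Observe a finite prefix of a stream.
takeS :: Int -> Stream a -> [a]
takeS n (Stream s step)
  | n <= 0    = []
  | otherwise = case step s of
      Nothing      -> []
      Just (a, s') -> a : takeS (n - 1) (Stream s' step)

ones :: Stream Int
ones = repeatS 1
```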


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)

2010-10-22 Thread Max Bolingbroke
On 19 October 2010 19:01, Dan Doel dan.d...@gmail.com wrote:
 However, this argument is a bit circular, since that eliminator could be
 defined to behave similarly to an irrefutable match.

Right.

 Or, put another
 way, eta expansion of a dictionary-holding existential would result in a value
 holding a bottom dictionary, whereas that's otherwise impossible, I think.

I think evaluating dictionaries strictly is more of a want to have
rather than actually implemented. In particular, GHC supports
building value-recursive dictionaries - and making dictionary
arguments strict indiscriminately would make this feature rather
useless.

I think this means that even with constrained existential things would be OK.

What is definitely not OK is lazy pattern matching on GADTs which have
*type equality* constraints on their existentials - because the type
equalities are erased at compile time, lazy pattern matching on a GADT
would leave no opportunity for us to observe that the proof of the
type equality was bogus (a bottom) - so allowing this would let our
programs segfault. For example, this program would erroneously
typecheck:


data Eq a b where
  Eq :: Eq a a

f :: Eq Int Bool -> Bool
f (~Eq) = ((1 :: Int) + 1) :: Bool

main = print $ f undefined


I'm sticking with my unsafeCoerce based solution for now. It's ugly,
but it's nicely *localised* ugliness :-)

Cheers,
Max
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)

2010-10-22 Thread Dan Doel
On Friday 22 October 2010 5:48:28 am Max Bolingbroke wrote:
 I think evaluating dictionaries strictly is more of a want to have
 rather than actually implemented. In particular, GHC supports
 building value-recursive dictionaries - and making dictionary
 arguments strict indiscriminately would make this feature rather
 useless.

It now occurs to me that I might be thinking of your paper on strict core.

When does this come up? I only have one example that seems likely:

  data Mu f = In { out :: f (Mu f) }

  instance Show (f (Mu f)) => Show (Mu f) where
    show = show . out

Is that an example of a value recursive dictionary? I also considered:

  data AnyShow = forall a. Show a => AS a

  instance Show AnyShow where
    show (AS x) = "AS (" ++ show x ++ ")"

  inf = AS inf

but I don't think the dictionary is actually recursive there. Translating to 
explicit dictionary passing seems to confirm the latter impression. The former 
is more iffy.

The most obvious way to build a value recursive dictionary would be an 
existential like:

  inf' = case inf' of AS x -> AS (x, x)

but this is prevented from happening by the lack of irrefutable match.

 What is definitely not OK is lazy pattern matching on GADTs which have
 *type equality* constraints on their existentials.

This is certainly the more compelling deal breaker.

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)

2010-10-22 Thread Max Bolingbroke
On 22 October 2010 12:03, Dan Doel dan.d...@gmail.com wrote:
  data Mu f = In { out :: f (Mu f) }

  instance Show (f (Mu f)) => Show (Mu f) where
    show = show . out

 Is that an example of a value recursive dictionary?

Assuming the Show (f (Mu f)) instance uses the (Mu f) one, AFAIK this
should indeed build a loopy dictionary.

I think this extension was motivated by Scrap your Boilerplate with
Class - see section 5 of
http://research.microsoft.com/en-us/um/people/simonpj/papers/hmap/gmap3.pdf.

Cheers,
Max
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)

2010-10-19 Thread oleg

Max Bolingbroke wrote:
 Let's start with a simple example of an existential data type:
  data Stream a = forall s. Stream s (s -> Maybe (a, s))
  ones :: Stream Int
  ones = cons 1 ones

 Unfortunately, 'ones' is just _|_! The reason is that cons is strict
 in its second argument. The problem I have is that there is no way to
 define cons which is
 simultaneously:

   1. Lazy in the tail of the list
   2. Type safe
   3. Non-recursive

Really? Here are two 'cons' that seem to satisfy all the criteria

 {-# LANGUAGE ExistentialQuantification #-}

 data Stream a = forall s. Stream s (s - Maybe (a, s))

 nil :: Stream a
 nil = Stream () (const Nothing)

 -- One version
 -- cons :: a -> Stream a -> Stream a
 -- cons a str = Stream Nothing (maybe (Just (a, Just str)) run)
 --  where run (Stream s step) =
 --          step s >>= (\(a,s) -> return (a, Just (Stream s step)))

 -- the second version
 cons :: a -> Stream a -> Stream a
 cons a str = Stream (Just (a,str)) step
  where step Nothing = Nothing
        step (Just (a, Stream s step')) = Just (a,
          case step' s of
            Nothing      -> Nothing
            Just (a',s') -> Just (a', Stream s' step'))


 instance Show a => Show (Stream a) where
   showsPrec _ (Stream s step) k = '[' : go s
     where go s = maybe (']' : k)
             (\(a, s) -> shows a . showString ", " $ go s) (step s)

 taken :: Int -> Stream a -> Stream a
 taken n (Stream s step) =
   Stream (n, s) (\(n, s) ->
     if n <= 0 then Nothing else maybe Nothing
       (\(a, s) -> Just (a, (n - 1, s))) (step s))

 ones :: Stream Int
 ones = cons 1 ones

 test2 = taken 5 $ ones
 -- [1, 1, 1, 1, 1, ]

Finally, if one doesn't like existentials, one can try
universals:
http://okmij.org/ftp/Algorithms.html#zip-folds
http://okmij.org/ftp/Haskell/zip-folds.lhs

The code implements the whole list library, including zip and
zipWith. None of the list operations use value recursion. We still can
use value recursion to define infinite streams, which are processed
lazily. In fact, the sample stream2 of the example is the infinite
stream.
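The flavour of the universal (fold-based) encoding can be seen in a small sketch (the names here are illustrative, not the ones from zip-folds.lhs):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A stream represented by its right fold: no existential state, just a
-- universally quantified eliminator.
newtype FStream a = FStream { fold :: forall r. (a -> r -> r) -> r -> r }

fnil :: FStream a
fnil = FStream (\_ n -> n)

-- Non-recursive cons that is lazy in the tail: the tail's fold is only
-- demanded when the consumer asks for it.
fcons :: a -> FStream a -> FStream a
fcons x xs = FStream (\c n -> c x (fold xs c n))

toList :: FStream a -> [a]
toList xs = fold xs (:) []

-- Value recursion works: this is a productive infinite stream.
ones :: FStream Int
ones = fcons 1 ones

main :: IO ()
main = print (take 3 (toList ones))  -- prints [1,1,1]
```

Note that fcons is non-recursive, yet the value-recursive ones is still productive, which is the point being made.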




[Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)

2010-10-19 Thread Max Bolingbroke
Hi Oleg,

Thanks for your reply!

 Really? Here are two 'cons' that seem to satisfy all the criteria

Thanks - your definitions are similar to Roman's suggestion.
Unfortunately my criterion 3) is not quite what I actually wanted - I
really wanted something GHC-optimisable (so non-recursive
definitions are a necessary but not sufficient condition).

The problem is that I'd like to do the static argument transformation
on the Stream argument to cons so that GHC can optimise it
properly. This is why I've made my cons pattern match on str
directly, so the local run/step loop can refer to the lexically
bound step/state fields of the stream being consed on to.

As Roman suggests, the best way to get GHC to optimise these
compositions would be to do something like extend GHC so it can do the
SAT by itself :-). Alternatively, a type-safe eta for data types
involving existentials would let me do what I want without GHC changes
- but I don't think there is a way to define this.

        Finally, if one doesn't like existentials, one can try
 universals:
        http://okmij.org/ftp/Algorithms.html#zip-folds
        http://okmij.org/ftp/Haskell/zip-folds.lhs

I hadn't seen this - thanks! It is certainly a neat trick. I can't
quite see how to use it to eliminate the existential I'm actually
interested in eta-expanding without causing work duplication, but I'll
keep thinking about it.

Cheers,
Max


Re: [Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)

2010-10-19 Thread Dan Doel
On Tuesday 19 October 2010 6:16:16 am Max Bolingbroke wrote:

 Thanks - your definitions are similar to Roman's suggestion.
 Unfortunately my criteria 3) is not quite what I actually wanted - I
 really wanted something GHC-optimisable - (so non-recursive
 definitions are a necessary but not sufficient condition).

 ...

 I hadn't seen this - thanks! It is certainly a neat trick. I can't
 quite see how to use it to eliminate the existential I'm actually
 interested in eta-expanding without causing work duplication, but I'll
 keep thinking about it.

I doubt it's possible, aside from your unsafeCoerce version.

The nub of the problem seems to me to be the lack of irrefutable match on the 
existential---irrefutable match should be equivalent to eta expanding values, 
so it's been intentionally disallowed. One could argue that this corresponds 
to their weak elimination:

  elim :: (forall a. P a -> r) -> (exists a. P a) -> r

However, this argument is a bit circular, since that eliminator could be 
defined to behave similarly to an irrefutable match. One might expect it to be 
strict, but it isn't necessary (although it might be difficult to specify how 
such a lazy reduction would work formally, without resorting to other, 
stronger elimination principles).

Presumably, GHC requires strictness because of constraints that can be bundled 
in an existential. For one, with local equality constraints, it'd be unsound 
in the same way that irrefutable matches on a type equality GADT are. However, 
I also seem to recall that GHC expects all dictionaries to be strictly 
evaluated (for optimization purposes), while irrefutable matching on a 
constrained existential would introduce lazy dictionaries. Or, put another 
way, eta expansion of a dictionary-holding existential would result in a value 
holding a bottom dictionary, whereas that's otherwise impossible, I think.

However, your stream type has no constraints, so there's nothing that would 
make an irrefutable match unreasonable (unless I'm missing something). I don't 
expect GHC to start to support this, because, you can only use irrefutable 
matches on existentials without constraints, is a complicated rule. But I 
believe that is the core of your troubles, and it doesn't have any necessary 
connection to type safety in this case.

Cheers,
-- Dan


Re: [Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)

2010-10-17 Thread Roman Leshchinskiy

On 16/10/2010, at 12:36, Max Bolingbroke wrote:

 On 16 October 2010 12:16, Roman Leshchinskiy r...@cse.unsw.edu.au wrote:
 eta :: Stream a -> Stream a
 eta s = Stream s next
   where
     next (Stream s next') = case next' s of
       Just (x,s') -> Just (x, Stream s' next')
       Nothing     -> Nothing
 
 Making GHC optimise stream code involving eta properly is hard :-)
 
 Good point, I don't exactly mean non-recursive for requirement 3) then
 - I mean an adjective with a fuzzier definition like GHC-optimisable
 :-)

I suspect the easiest way to achieve this is to expand the set of 
GHC-optimisable things until it includes eta :-)

Roman




[Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)

2010-10-16 Thread Max Bolingbroke
Hi Cafe,

I've run across a problem with my use of existential data types,
whereby programs using them are forced to become too strict, and I'm
looking for possible solutions to the problem.

I'm going to explain what I mean by using a literate Haskell program.
First, the preliminaries:

 {-# LANGUAGE ExistentialQuantification #-}
 import Control.Arrow (second)
 import Unsafe.Coerce

Let's start with a simple example of an existential data type:

 data Stream a = forall s. Stream s (s -> Maybe (a, s))

This is a simplified version of the type of streams from the stream
fusion paper by Coutts et al. The idea is that if you have a stream,
you have some initial state s,
which you can feed to a step function. The step function either says
Stop! The stream has ended!, or yields an element of the stream and
an updated state.

The use of an existential quantifier is essential to this code,
because it means that we can write most functions on Streams in a
non-recursive fashion. This in turn makes them amenable to inlining
and simplification by GHC, which gives us the loop fusion we need.

An example stream is that of natural numbers:

 nats :: Stream Int
 nats = Stream 0 (\n -> Just (n, n + 1))

Here, the state is just the next natural number to output.

We can also build the list constructors as functions on streams:

 nil :: Stream a
 nil = Stream () (const Nothing)

 cons :: a -> Stream a -> Stream a
 cons a (Stream s step) = Stream Nothing (maybe (Just (a, Just s))
   (fmap (second Just) . step))

List functions can also easily be expressed:

 taken :: Int -> Stream a -> Stream a
 taken n (Stream s step) = Stream (n, s) (\(n, s) -> if n <= 0 then Nothing
   else maybe Nothing (\(a, s) -> Just (a, (n - 1, s))) (step s))

To see this all in action, we will need a Show instance for streams.
Note how this code implements a loop where it repeatedly steps the
stream (starting from
the initial state):

 instance Show a => Show (Stream a) where
   showsPrec _ (Stream s step) k = '[' : go s
     where go s = maybe (']' : k)
             (\(a, s) -> shows a . showString ", " $ go s) (step s)

We can now run code like this:

 test1 = do
   print $ (nil :: Stream Int)            -- []
   print $ cons 1 nil                     -- [1, ]
   print $ taken 1 $ cons 1 $ cons 2 nil  -- [1, ]

Now we may wish to define infinite streams using value recursion:

 ones :: Stream Int
 ones = cons 1 ones

Unfortunately, 'ones' is just _|_! The reason is that cons is strict
in its second argument. The problem I have is that there is no way to
define cons which is
simultaneously:

  1. Lazy in the tail of the list
  2. Type safe
  3. Non-recursive

If you relax any of these constraints it becomes possible. For
example, if we don't care about using only non-recursive functions we
can get the cons we want
by taking a roundtrip through the skolemized (existential-eliminated -
see http://okmij.org/ftp/Computation/Existentials.html) version of
streams:

 data StreamSK a = StreamSK (Maybe (a, StreamSK a))

 skolemize :: Stream a -> StreamSK a
 skolemize (Stream s step) = StreamSK (fmap (\(a, s') -> (a, skolemize
   (Stream s' step))) $ step s)

 unskolemize :: StreamSK a -> Stream a
 unskolemize streamsk = Stream streamsk (\(StreamSK next) -> next)

 instance Show a => Show (StreamSK a) where
   showsPrec _ (StreamSK mb_next) k = maybe (']' : k)
     (\(a, ssk) -> shows a . showString ", " $ shows ssk k) mb_next

Now we can define:

 cons_internally_recursive x stream = cons x (unskolemize (skolemize stream))

This works because unskolemize (skolemize stream) != _|_ even if
stream is bottom. However, this is a non-starter because GHC isn't
able to fuse together the (recursive) skolemize function with any
consumer of it (e.g. unskolemize).

In fact, to define a correct cons it would be sufficient to have some
function (eta :: Stream a -> Stream a) such that (eta s) has the same
semantics as s, except
that eta s /= _|_ for any s. I call this function eta because it
corresponds to classical eta expansion. We can define a type class for
such operations with a number
of interesting instances:

 class Eta a where
   -- eta a /= _|_
   eta :: a -> a

 instance Eta (a, b) where
   eta ~(a, b) = (a, b)

 instance Eta (a -> b) where
   eta f = \x -> f x

 instance Eta (StreamSK a) where
   eta ~(StreamSK a) = StreamSK a

If we had an instance for Eta (Stream a) we could define a lazy cons function:

 cons_lazy :: a -> Stream a -> Stream a
 cons_lazy x stream = cons x (eta stream)

As we have already seen, one candidate instance is that where eta =
unskolemize . skolemize, but I've already ruled that possibility out.
Given that constraint, the
only option that I see is this:

 instance Eta (Stream a) where
   -- Doesn't type check, even though it can't go wrong:
   -- eta stream = Stream (case stream of Stream s _ -> s)
   --               (case stream of Stream _ step -> step)
   eta stream = Stream (case stream of Stream s _ -> unsafeCoerce s :: ())
     (case stream of Stream _ step ->

Re: [Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)

2010-10-16 Thread Roman Leshchinskiy

On 16/10/2010, at 12:00, Max Bolingbroke wrote:

 Hi Cafe,
 
 I've run across a problem with my use of existential data types,
 whereby programs using them are forced to become too strict, and I'm
 looking for possible solutions to the problem.
 
 I'm going to explain what I mean by using a literate Haskell program.
 First, the preliminaries:
 
 {-# LANGUAGE ExistentialQuantification #-}
 import Control.Arrow (second)
 import Unsafe.Coerce
 
 Let's start with a simple example of an existential data type:
 
  data Stream a = forall s. Stream s (s -> Maybe (a, s))
 
 [...]
 In fact, to define a correct cons it would be sufficient to have some
  function (eta :: Stream a -> Stream a) such that (eta s) has the same
 semantics as s, except that eta s /= _|_ for any s.

That's easy.

eta :: Stream a -> Stream a
eta s = Stream s next
  where
    next (Stream s next') = case next' s of
      Just (x,s') -> Just (x, Stream s' next')
      Nothing     -> Nothing

Making GHC optimise stream code involving eta properly is hard :-)

Roman


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)

2010-10-16 Thread Max Bolingbroke
On 16 October 2010 12:16, Roman Leshchinskiy r...@cse.unsw.edu.au wrote:
  eta :: Stream a -> Stream a
  eta s = Stream s next
    where
      next (Stream s next') = case next' s of
                                Just (x,s') -> Just (x, Stream s' next')
                                Nothing     -> Nothing

 Making GHC optimise stream code involving eta properly is hard :-)

Good point, I don't exactly mean non-recursive for requirement 3) then
- I mean an adjective with a fuzzier definition like GHC-optimisable
:-)

Max


[Haskell-cafe] Re: Laziness bug in Data.List.intersperse (was: ANNOUNCE: text 0.8.0.0, fast Unicode text support)

2010-09-04 Thread Bryan O'Sullivan
On Wed, Sep 1, 2010 at 1:00 PM, Daniel Fischer daniel.is.fisc...@web.de wrote:

 I'm not keen on subscribing to libraries@ to follow the official proposal
 process, any takers?


I'll take it up.


[Haskell-cafe] Laziness bug in Data.List.intersperse (was: ANNOUNCE: text 0.8.0.0, fast Unicode text support)

2010-09-01 Thread Daniel Fischer
On Wednesday 01 September 2010 21:29:47, Daniel Fischer wrote:
 that's where you definitely get a space leak, because

  intersperse             :: a -> [a] -> [a]
 intersperse _   []      = []
 intersperse _   [x]     = [x]
 intersperse sep (x:xs)  = x : sep : intersperse sep xs

 isn't lazy enough.

Ticket created, http://hackage.haskell.org/trac/ghc/ticket/4282
I'm not keen on subscribing to libraries@ to follow the official proposal 
process, any takers?
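For reference, one lazier definition matches only a single constructor at a time, so the head can be produced before the tail is inspected (the fix that eventually landed in base has this shape, but treat the code below as a sketch; the primed name avoids clashing with Data.List):

```haskell
-- A lazier intersperse: no [x] clause, so pattern matching never looks
-- more than one cons cell ahead.
intersperse' :: a -> [a] -> [a]
intersperse' _   []     = []
intersperse' sep (x:xs) = x : prependToAll sep xs
  where
    prependToAll _ []     = []
    prependToAll s (y:ys) = s : y : prependToAll s ys

main :: IO ()
main = print (head (intersperse' ',' ('a' : undefined)))  -- prints 'a'
```

With the original definition the same expression diverges, because the [x] clause forces the tail before the head can be returned.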



Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Tue, 03 Aug 2010 16:36:33 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  - If there is no class instance for function types, then those problems
  go away, of course. But it is doubtful whether that would be a viable
  solution. Quite a few programs would be rejected as a consequence. (Say,
  you want to use the strict version of foldl. That will lead to a type
  class constraint on one of the type variables. Now suppose you actually
  want to fold in a higher-order fashion, like when expressing efficient
  reverse via foldr. That would not anymore be possible for the strict
  version of foldl, as it would require the type-class-constrained
  variable to be instantiated with a function type.)
  
  I think it would be a step forward. The old seq would still exist as
  unsafeSeq and such applications could continue to use it. In the mean
  time parametricity results would better apply to programs without unsafe
  functions. And this without adding extra complexity into the type system.
 
 Yes, I agree. Of course, you (and Lennart, and others advocating putting
 seq into a type class) could work toward that solution right away, could
 have done so for quite some time: write a package with an Eval type
 class and method safeSeq (and *no* class instance for function types),
 upload it on Hackage, encourage people to use it. Modulo the naming
 difference seq/safeSeq vs. unsafeSeq/seq, that's exactly the solution
 you want. I wonder why it is not happening. :-)

Yes it would be a starting point.
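A minimal sketch of what such a package might export (the names and the particular instance set are illustrative, not an existing library):

```haskell
-- An Eval class whose method can force values at approved types only.
-- Crucially, there is *no* instance for function types, so safeSeq can
-- never be used to distinguish f from \x -> f x.
class Eval a where
  safeSeq :: a -> b -> b

instance Eval Int where
  safeSeq = seq  -- delegate to the primitive seq at a known-safe type

instance Eval [a] where
  safeSeq = seq  -- forcing a list only exposes its outermost constructor

-- safeSeq (\x -> x) () would be a type error: no Eval (a -> b) instance.

main :: IO ()
main = print (safeSeq [undefined :: Int] 'a')  -- prints 'a'
```

Forcing [undefined] is fine because safeSeq only evaluates to the outermost constructor, never the elements.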

  Actually I think we can keep the old generic seq, but cutting its full
  polymorphism:
  
  seq :: Typeable a => a -> b -> b
 
 I guess I don't know enough about Typeable to appreciate that.

Basically the Typeable constraint tells us that we dynamically know the
identity of the type being passed in. It may be a bit challenging to
explain cleanly how this safely disables parametricity, but the net
result is that the type is dynamically known at run time.

The same trick is known to work for references as well when effects are
everywhere:

newRef :: Typeable a => a -> Ref a
readRef :: Ref a -> a
writeRef :: Ref a -> a -> ()

In the same vein it would make unsafePerformIO less dangerous to add such
a constraint.

However I would like to hear more comments about this seq variant, anyone?

  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorem apply,
 
 Oh, YES. That's the point of a free theorem, isn't it: that I only need
 to look at the type of the function to derive some property about it.
 
  I want them to always apply when
  no call to unsafe function is made.
 
 Well, the question is what you mean by "no call to unsafe function is
 made". Where? In the function under consideration, from whose type the
 free theorem is derived? Are you sure that this is enough? Maybe that
 function f does not contain a call to unsafeSeq, but it has an argument
 which is itself a function. Maybe in some function application,
 unsafeSeq is passed to f in that argument position, directly or
 indirectly. Maybe f does internally apply that function argument to
 something. Can you be sure that this will not lead to a failure of the
 free theorem you derived from f's type (counting on the fact that f does
 not call an unsafe function)?
 
 Of course, preventing the *whole program* from calling unsafeSeq is
 enough to guarantee validity of the free theorems thus derived. But
 that's equivalent to excluding seq from Haskell altogether.

It depends on the unsafe function that is used. Using unsafeCoerce
or unsafePerformIO (from which we can derive unsafeCoerce) badly
anywhere suffices to break anything. So while seq is less invasive,
I still find it too invasive in its raw form.

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

Actually I think we can keep the old generic seq, but cutting its full
polymorphism:

seq :: Typeable a => a -> b -> b

I guess I don't know enough about Typeable to appreciate that.


Basically the Typeable constraint tells us that we dynamically know the
identity of the type being passed in. It may be a bit challenging to
explain cleanly how this safely disables parametricity, but the net
result is that the type is dynamically known at run time.

...

However I would like to hear more comments about this seq variant, anyone?


On reflection, isn't Typeable actually much too strong a constraint?
Given that it provides runtime type inspection, probably one cannot
derive any parametricity results at all for a type variable constrained
by Typeable. In contrast, for a type variable constrained via a
hypothetical (and tailored to seq) Eval-constraint, one still gets
something which looks like a standard free theorem, just with some side
conditions relating to _|_ (strictness, totality, ...).

In other words, by saying seq :: Typeable a => a -> b -> b, you assume
pessimistically that seq can do everything that is possible on members
of the Typeable class. But that might be overly pessimistic, since in
reality the only thing that seq can do is evaluate an arbitrary
expression to weak head normal form.
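That weak-head-normal-form power is exactly what invalidates eta-style reasoning; a tiny illustration (the names f and g are hypothetical):

```haskell
f, g :: Int -> Int
f = undefined          -- bottom at function type
g = \x -> undefined x  -- its eta-expansion: equal on every argument

-- seq can tell them apart: f `seq` () diverges, while g `seq` ()
-- evaluates to (), because the lambda is already in WHNF.
main :: IO ()
main = print (g `seq` ())  -- prints ()
```

So any free theorem whose proof relies on f and its eta-expansion being interchangeable can fail in the presence of unrestricted seq.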


OK, I better understand now where we disagree. You want to see in the type
whether or not the free theorem apply,

Oh, YES. That's the point of a free theorem, isn't it: that I only need
to look at the type of the function to derive some property about it.


I want them to always apply when
no call to unsafe function is made.

Well, the question is what you mean by "no call to unsafe function is
made". Where? In the function under consideration, from whose type the
free theorem is derived? Are you sure that this is enough? Maybe that
function f does not contain a call to unsafeSeq, but it has an argument
which is itself a function. Maybe in some function application,
unsafeSeq is passed to f in that argument position, directly or
indirectly. Maybe f does internally apply that function argument to
something. Can you be sure that this will not lead to a failure of the
free theorem you derived from f's type (counting on the fact that f does
not call an unsafe function)?

Of course, preventing the *whole program* from calling unsafeSeq is
enough to guarantee validity of the free theorems thus derived. But
that's equivalent to excluding seq from Haskell altogether.


It depends on the unsafe function that is used. Using unsafeCoerce
or unsafePerformIO (from which we can derive unsafeCoerce) badly
anywhere suffices to break anything. So while seq is less invasive,
I still find it too invasive in its raw form.


Hmm, from this answer I still do not see what you meant when you said
you want free theorems to always apply when no call to seq is made. You
say that seq is less invasive, so do you indeed assume that as soon as
you are sure a function f does not itself (syntactically) contain a call
to seq you are safe to use the standard free theorem derived from f's
type unconstrained? Do you have any justification for that? Otherwise,
we are back to banning seq completely from the whole program/language,
in which case it is trivial that no seq-related side conditions will be
relevant.

Best,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 15:41:54 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  Actually I think we can keep the old generic seq, but cutting its full
  polymorphism:
 
  seq :: Typeable a => a -> b -> b
  I guess I don't know enough about Typeable to appreciate that.
  
  Basically the Typeable constraint tells us that we dynamically know the
  identity of the type being passed in. It may be a bit challenging to
  explain cleanly how this safely disables parametricity, but the net
  result is that the type is dynamically known at run time.
  
  ...
  
  However I would like to hear more comments about this seq variant, anyone?
 
 On reflection, isn't Typeable actually much too strong a constraint?

It is indeed too strong, or not precise enough we could say.

However at least this simple change makes it correct (i.e. restores
the parametricity results).

I would call this function genericSeq :: Typeable a => a -> b -> b

 Given that it provides runtime type inspection, probably one cannot
 derive any parametricity results at all for a type variable constrained
 by Typeable.

Exactly. We could say that we can no longer derive wrong parametricity
results about it.

 In contrast, for a type variable constrained via a
 hypothetical (and tailored to seq) Eval-constraint, one still gets
 something which looks like a standard free theorem, just with some side
 conditions relating to _|_ (strictness, totality, ...).

Indeed, that's why I want both!

In particular for the instance on functions which could be defined using
genericSeq.

  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorem apply,
  Oh, YES. That's the point of a free theorem, isn't it: that I only need
  to look at the type of the function to derive some property about it.
 
  I want them to always apply when
  no call to unsafe function is made.
  Well, the question is what you mean by no call to unsafe function is
  made. Where? In the function under consideration, from whose type the
  free theorem is derived? Are you sure that this is enough? Maybe that
  function f does not contain a call to unsafeSeq, but it has an argument
  which is itself a function. Maybe in some function application,
  unsafeSeq is passed to f in that argument position, directly or
  indirectly. Maybe f does internally apply that function argument to
  something. Can you be sure that this will not lead to a failure of the
  free theorem you derived from f's type (counting on the fact that f does
  not call an unsafe function)?
 
  Of course, preventing the *whole program* from calling unsafeSeq is
  enough to guarantee validity of the free theorems thus derived. But
  that's equivalent to excluding seq from Haskell altogether.
  
  It depends on the unsafe function that is used. Using unsafeCoerce
  or unsafePerformIO (from which we can derive unsafeCoerce) badely
  anywhere suffice to break anything. So while seq is less invasive
  I find it too much invasive in its raw form.
 
 Hmm, from this answer I still do not see what you meant when you said
 you want free theorems to always apply when no call to seq is made. You
 say that seq is less invasive, so do you indeed assume that as soon as
 you are sure a function f does not itself (syntactically) contain a call
 to seq you are safe to use the standard free theorem derived from f's
 type unconstrained? Do you have any justification for that? Otherwise,
 we are back to banning seq completely from the whole program/language,
 in which case it is trivial that no seq-related side conditions will be
 relevant.

Actually, given genericSeq, I no longer advocate the need for a polymorphic
seq function. So genericSeq and the seq from the type class would
both be safe.

However the rule is still the same: when using an unsafe function you are on
your own.

Clearer?

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

However the rule is still the same when using an unsafe function you are on
your own.

Clearer?


Almost. What I am missing is whether or not you would then consider your
genericSeq (which is applicable to functions) one of those unsafe
functions or not.

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  However the rule is still the same when using an unsafe function you are on
  your own.
  
  Clearer?
 
 Almost. What I am missing is whether or not you would then consider your
 genericSeq (which is applicable to functions) one of those unsafe
 functions or not.

I would consider it as a safe function.

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:

Nicolas Pouillard schrieb:

However the rule is still the same when using an unsafe function you are on
your own.

Clearer?

Almost. What I am missing is whether or not you would then consider your
genericSeq (which is applicable to functions) one of those unsafe
functions or not.


I would consider it as a safe function.


Well, then I fear you have come full-circle back to a non-solution. It
is not safe:

Consider the example foldl''' from our paper, and replace seq therein by
your genericSeq. Then the function will have the same type as the
original foldl, but the standard free theorem for foldl does not hold
for foldl''' (as also shown in the paper).

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 17:47:12 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
  j...@informatik.uni-bonn.de wrote:
  Nicolas Pouillard schrieb:
  However the rule is still the same when using an unsafe function you are 
  on
  your own.
 
  Clearer?
  Almost. What I am missing is whether or not you would then consider your
  genericSeq (which is applicable to functions) one of those unsafe
  functions or not.
  
  I would consider it as a safe function.
 
 Well, then I fear you have come full-circle back to a non-solution. It
 is not safe:

I feared a bit... but no


 Consider the example foldl''' from our paper, and replace seq therein by
 your genericSeq. Then the function will have the same type as the
 original foldl, but the standard free theorem for foldl does not hold
 for foldl''' (as also shown in the paper).

So foldl''' now has some Typeable constraints.

I agree that the free theorem for foldl does not hold for foldl'''.

However, can we derive the free theorem by looking at the type? No, because
of the Typeable constraint.

So it is safe to derive free theorems without looking at the usage of seq,
just the type of the function, taking care not to treat as parametric a
type variable constrained by Typeable.

Finally the difference between your solution and this one is that fewer
(valid) free theorems can be derived (because of the Typeable constraints
introduced by seq on functions). Still it is a solution since we no longer
have to fear the usage of seq when deriving a free theorem.

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

On Wed, 04 Aug 2010 17:47:12 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:

Nicolas Pouillard schrieb:

On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:

Nicolas Pouillard schrieb:

However the rule is still the same when using an unsafe function you are on
your own.

Clearer?

Almost. What I am missing is whether or not you would then consider your
genericSeq (which is applicable to functions) one of those unsafe
functions or not.

I would consider it as a safe function.

Well, then I fear you have come full-circle back to a non-solution. It
is not safe:


I feared a bit... but no


Consider the example foldl''' from our paper, and replace seq therein by
your genericSeq. Then the function will have the same type as the
original foldl, but the standard free theorem for foldl does not hold
for foldl''' (as also shown in the paper).


So foldl''' now has some Typeable constraints.


No, I don't see how it has that. Or maybe you should make explicit under
what conditions a type (a -> b) is in Typeable. What exactly will the
type of foldl''' be, and why?

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de




Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 18:04:13 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  On Wed, 04 Aug 2010 17:47:12 +0200, Janis Voigtländer 
  j...@informatik.uni-bonn.de wrote:
  Nicolas Pouillard schrieb:
  On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
  j...@informatik.uni-bonn.de wrote:
  Nicolas Pouillard schrieb:
  However the rule is still the same: when using an unsafe function you
  are on your own.
 
  Clearer?
  Almost. What I am missing is whether or not you would then consider your
  genericSeq (which is applicable to functions) one of those unsafe
  functions or not.
  I would consider it as a safe function.
  Well, then I fear you have come full-circle back to a non-solution. It
  is not safe:
  
  I feared a bit... but no
  
  Consider the example foldl''' from our paper, and replace seq therein by
  your genericSeq. Then the function will have the same type as the
  original foldl, but the standard free theorem for foldl does not hold
  for foldl''' (as also shown in the paper).
  
  So foldl''' now has some Typeable constraints.
 
 No, I don't see how it has that. Or maybe you should make explicit under
 what conditions a type (a -> b) is in Typeable. What exactly will the
 type of foldl''' be, and why?

Right, let's make it more explicit; I actually just wrote a Control.Seq
module and a test file:

module Control.Seq where
  genericSeq :: Typeable a => a -> b -> b
  genericSeq = Prelude.seq

  class Seq a where
    seq :: a -> b -> b

  instance (Typeable a, Typeable b) => Seq (a -> b) where
    seq = genericSeq

  ... Other seq instances ...

$ cat test.hs
import Prelude hiding (seq)
import Data.Function (fix)
import Control.Seq (Seq(seq))
import Data.Typeable

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl c = fix (\h n ys -> case ys of
    [] -> n
    x : xs -> let n' = c n x in h n' xs)

foldl' :: Seq a => (a -> b -> a) -> a -> [b] -> a
foldl' c = fix (\h n ys -> case ys of
    [] -> n
    x : xs -> let n' = c n x in seq n' (h n' xs))

foldl'' :: (Typeable a, Typeable b, Seq b) => (a -> b -> a) -> a -> [b] -> a
foldl'' c = fix (\h n ys -> seq (c n) (case ys of
     [] -> n
     x : xs -> seq xs (seq x
                 (let n' = c n x in h n' xs))))

foldl''' :: (Typeable a, Typeable b) => (a -> b -> a) -> a -> [b] -> a
-- GHC infers this one:
-- foldl''' :: Seq (a -> b -> a) => (a -> b -> a) -> a -> [b] -> a
-- however that one requires FlexibleContexts, and the first one is accepted.
foldl''' c = seq c (fix (\h n ys -> case ys of
      [] -> n
      x : xs -> let n' = c n x in h n' xs))

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

Right let's make it more explicit, I actually just wrote a Control.Seq
module and a test file:

module Control.Seq where
  genericSeq :: Typeable a => a -> b -> b
  genericSeq = Prelude.seq

  class Seq a where
    seq :: a -> b -> b

  instance (Typeable a, Typeable b) => Seq (a -> b) where
    seq = genericSeq

  ... Other seq instances ...

$ cat test.hs
import Prelude hiding (seq)
import Data.Function (fix)
import Control.Seq (Seq(seq))
import Data.Typeable

...

foldl''' :: (Typeable a, Typeable b) => (a -> b -> a) -> a -> [b] -> a
-- GHC infers this one:
-- foldl''' :: Seq (a -> b -> a) => (a -> b -> a) -> a -> [b] -> a
-- however that one requires FlexibleContexts, and the first one is accepted.
foldl''' c = seq c (fix (\h n ys -> case ys of
      [] -> n
      x : xs -> let n' = c n x in h n' xs))


Well, in this example you were lucky that the function type on which you
use seq involves some type variables. But consider this example:

f :: (Int -> Int) -> a -> a
f h x = seq h x

I think with your definitions that function will really have that type,
without any type class constraints on anything.

So let us derive the free theorem for that type. It is:

forall t1,t2 in TYPES, g :: t1 -> t2, g strict.
 forall p :: Int -> Int.
  forall q :: Int -> Int.
   (forall x :: Int. p x = q x)
   ==> (forall y :: t1. g (f p y) = f q (g y))

Now, set

p :: Int -> Int
p = undefined

q :: Int -> Int
q _ = undefined

Clearly, forall x :: Int. p x = q x holds.

So it should be the case that for every strict function g and
type-appropriate input y it holds:

  g (f p y) = f q (g y)

But clearly the left-hand side is undefined (due to strictness of g and
f p y = f undefined y = seq undefined y), while the right-hand side is
not necessarily so (due to f q (g y) = f (\_ -> undefined) (g y) = seq
(\_ -> undefined) (g y) = g y).
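The derivation above can be checked directly in GHC; here is a minimal, self-contained sketch (the names `f`, `p`, `q` are as in the derivation; the role of the observing strict function is played by `evaluate`, which is my choice for illustration, not part of the original email):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- f uses (unrestricted) seq on its function-typed argument.
f :: (Int -> Int) -> a -> a
f h x = h `seq` x

-- p and q agree on every Int argument (both always undefined), so the
-- free theorem for f's type would require f p y = f q y.
p, q :: Int -> Int
p = undefined   -- undefined *at function type*
q _ = undefined -- a defined closure that diverges on every input

-- Yet f p y diverges (seq forces p), while f q y = y.
```

Evaluating `f q 42` yields `42`, while evaluating `f p 42` throws, exhibiting the failure of the free theorem.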

So you have claimed that by using seq via genericSeq in the above
definition of f you are guaranteed that any free theorem you derive from
its type is correct. But as you see above it is not!

I think you have to face it: if you want a solution that both gives
meaningful free theorems and still allows writing all programs
involving seq that you can currently write in Haskell, then using type
classes is not the answer.

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Nicolas Pouillard
On Mon, 02 Aug 2010 17:41:02 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Hi,
 
 I am late to reply in this thread, but as I see Stefan has already made
 what (also from my view) are the main points:
 
 - Putting seq in a type class makes type signatures more verbose, which
 one may consider okay or not. In the past (and, as it seems, again in
 every iteration of the language development process since then) the
 majority of the language design decision makers have considered this
 verbosity non-okay enough, so that they decided to have a fully
 polymorphic seq.
 
 - Even if putting seq in a type class, problems with parametricity do
 not simply vanish. The question is what instances there will be for that
 class. (For example, if there is no instance at all, then no
 seq-related problems of *any* nature can exist...)
 
 - The Haskell 1.3 solution was to, among others, have a class instance
 for functions. As we show in the paper Stefan mentioned, that is not a
 solution. Some statements claimed by parametricity will then still be
 wrong due to seq.

I agree. Adding an instance implemented with a polymorphic primitive negates
the whole benefit of the type class approach.

 - If there is no class instance for function types, then those problems
 go away, of course. But it is doubtful whether that would be a viable
 solution. Quite a few programs would be rejected as a consequence. (Say,
 you want to use the strict version of foldl. That will lead to a type
 class constraint on one of the type variables. Now suppose you actually
 want to fold in a higher-order fashion, like when expressing efficient
 reverse via foldr. That would not anymore be possible for the strict
 version of foldl, as it would require the type-class-constrained
 variable to be instantiated with a function type.)
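The higher-order case described in the quoted paragraph can be made concrete with a small sketch (`reverseViaFoldr` is a hypothetical name for illustration; the point is that the left-fold-style accumulator is itself a function, exactly the case a Seq class without a function instance would reject):

```haskell
-- Efficient reverse via foldr: the accumulator threaded through the fold
-- is itself a function [a] -> [a], built up by composition.
reverseViaFoldr :: [a] -> [a]
reverseViaFoldr xs = foldr (\x k -> k . (x :)) id xs []
-- A strict left fold with a constraint `Seq acc` could not be used here,
-- since acc would have to be instantiated to the function type [a] -> [a].
```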

I think it would be a step forward. The old seq would still exist as
unsafeSeq, and such applications could continue to use it. In the meantime,
parametricity results would better apply to programs without unsafe
functions. And all this without adding extra complexity to the type system.

Actually I think we can keep the old generic seq, but cut its full
polymorphism:

seq :: Typeable a => a -> b -> b

Even if this is acceptable I would still introduce a type class for seq
for the following reasons:

  - It can be useful to have a different implementation on some
specific types.
  - It may apply to types on which we do not want Typeable.
  - One could safely use the Typeable version for functions.

 Two more specific answers to Nicolas' comments:
 
  Actually my point is that if we do not use any polymorphic primitive to
  implement a family of seq functions then it cannot be flawed. Indeed
  if you read type classes as a way to implicitly pass and pick  functions
  then it cannot add troubles.
 
 Completely correct. But the point is that without using any polymorphic
 primitive you won't be able to implement a member of that family for the
 case of function types (which you do not consider a big restriction, but
 others do).
 
 However I absolutely do not buy their argument using as example a function
 f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int. They consider as
 an issue the fact that the type does not tell us on which argument seq is
 used. I think it is fine we may want a more precise type for it to get more
 properties out of it but it is not flawed.
 As much as we don't know where (==) is used given a function of type
 ∀ a. Eq a => [a] -> [a].
 
 I fear you do not buy our argument since you did not fully understand
 what our argument is, which in all probability is our fault in not
 explaining it enough. The point is not that we dislike per se that one
 doesn't know from the type signature how/where exactly methods from a
 type class are used. In your example ∀ a. Eq a => [a] -> [a] it's
 alright that we don't know more about where (==) is used. But for a
 function of type f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) ->
 Int, in connection with trying to find out whether uses of seq are
 harmful or not, it is absolutely *essential* to know on which of the two
 functions (a -> Int) seq is used. The type class approach cannot tell
 that. Hence, a type class approach is unsuitable for trying to prevent
 seq from doing parametricity-damage while still allowing to write all
 the Haskell programs one could before (including ones that use seq on
 functions). That is the flaw of the type class approach to controlling
 seq. It is of course no flaw of using type classes in Haskell for other
 things, and we certainly did not mean to imply such a thing.

OK, I better understand now where we disagree. You want to see in the type
whether or not the free theorems apply; I want them to always apply when
no call to an unsafe function is made.

Kind regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr

Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Stefan Holdermans
Nicolas,

  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorems apply; I want them to always apply when
  no call to an unsafe function is made.

Implementing your suggestion would make me feel uncomfortable. Turning seq into
an unsafe operation effectively places it outside the language, just as
unsafePerformIO isn't really part of the language (in my view, at least). But
experience has made it clear that there are plenty of occasions in which we
cannot really do without seq (even though its presence in the language is a
prominent subject of debate).

Cheers,

  Stefan


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

- If there is no class instance for function types, then those problems
go away, of course. But it is doubtful whether that would be a viable
solution. Quite a few programs would be rejected as a consequence. (Say,
you want to use the strict version of foldl. That will lead to a type
class constraint on one of the type variables. Now suppose you actually
want to fold in a higher-order fashion, like when expressing efficient
reverse via foldr. That would not anymore be possible for the strict
version of foldl, as it would require the type-class-constrained
variable to be instantiated with a function type.)


I think it would be a step forward. The old seq would still exist as
unsafeSeq, and such applications could continue to use it. In the meantime,
parametricity results would better apply to programs without unsafe
functions. And all this without adding extra complexity to the type system.


Yes, I agree. Of course, you (and Lennart, and others advocating putting
seq into a type class) could work toward that solution right away, could
have done so for quite some time: write a package with an Eval type
class and method safeSeq (and *no* class instance for function types),
upload it on Hackage, encourage people to use it. Modulo the naming
difference seq/safeSeq vs. unsafeSeq/seq, that's exactly the solution
you want. I wonder why it is not happening. :-)


Actually I think we can keep the old generic seq, but cutting its full
polymorphism:

seq :: Typeable a => a -> b -> b


I guess I don't know enough about Typeable to appreciate that.


OK, I better understand now where we disagree. You want to see in the type
whether or not the free theorems apply,


Oh, YES. That's the point of a free theorem, isn't it: that I only need
to look at the type of the function to derive some property about it.


I want them to always apply when
no call to an unsafe function is made.


Well, the question is what you mean by no call to an unsafe function is
made. Where? In the function under consideration, from whose type the
free theorem is derived? Are you sure that this is enough? Maybe that
function f does not contain a call to unsafeSeq, but it has an argument
which is itself a function. Maybe in some function application,
unsafeSeq is passed to f in that argument position, directly or
indirectly. Maybe f does internally apply that function argument to
something. Can you be sure that this will not lead to a failure of the
free theorem you derived from f's type (counting on the fact that f does
not call an unsafe function)?

Of course, preventing the *whole program* from calling unsafeSeq is
enough to guarantee validity of the free theorems thus derived. But
that's equivalent to excluding seq from Haskell altogether.

Best,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Janis Voigtländer

Hi again,

Maybe I should add that, maybe disappointingly, I do not even have a
strong opinion about whether seq should be in Haskell or not, and in
what form. Let me quote the last paragraph of an extended version of our
paper referred to earlier:


Finally, a natural question is whether or not selective strictness
should be put under control via the type system in a future version of
Haskell (or even removed completely). We have deliberately not taken a
stand on this here. What was important to us is that both the costs and
benefits of either way should be well understood when making such a
decision. Maybe the realization that, contrary to popular opinion, a
relatively simple approach like the one that was present in Haskell
version 1.3 does not suffice to keep selective strictness in check, and
that instead something slightly less wieldy, like our type system
presented here or a similar one, would be needed, will quell the
recurring calls for putting seq in a type class once and for all. Even
then, while it would mean that our type system does not get adopted in
practice, we would consider our effort well invested. At least, the
community would then have made an informed decision, and part of the
justification would be on record.


That's under the assumption that the requirements we have on a solution are:

1. All Haskell programs that could be written before should still be
implementable, though potentially with a different type.

2. Parametricity results derived from the (new) types should hold, even
if seq is involved.

The Haskell 1.3 approach achieves 1. but not 2.

The approach of an Eval class without a function type instance achieves
2. but not 1.

Lennart suggested that the programs one loses that (latter) way might be
few in practice. I have no idea whether that is true, but it might well be.

But it is actually new to me that proponents of putting seq in a type
class admit that they can only do so and achieve 2. by accepting to give
up 1. In the past, also in the Being lazy with class paper, the
impression was given that the controversial issue about the Haskell 1.3
solution was just its practicality in terms of how cumbersome or not
the additional typing artifacts become, while it was simply supposed
that at least that solution achieves both 1. and 2. It does not.

Best,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Nicolas Pouillard
On Tue, 3 Aug 2010 16:24:54 +0200, Stefan Holdermans ste...@vectorfabrics.com 
wrote:
 Nicolas,
 
  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorem apply, I want them to always apply when
  no call to unsafe function is made.
 
 Implementing your suggestion would make me feel uncomfortable. Turning seq
 into an unsafe operation effectively places it outside the language, just as
 unsafePerformIO isn't really part of the language (in my view, at least). But
 experience has made it clear that there are plenty of occasions in which we
 cannot really do without seq (even though its presence in the language is a
 prominent subject of debate).

If we ignore the solution using Typeable for now: the actual seq would be
considered unsafe, but seq would be re-introduced as a method of a type
class with instances for many types but not for functions. So in this view
only the forcing of functions would be considered out of the language.

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Brandon S Allbery KF8NH

On 8/3/10 10:24 , Stefan Holdermans wrote:
 Implementing your suggestion would make me feel uncomfortable. Turning seq 
 into an unsafe operation effectively places it outside the language, like 
 unsafePerformIO isn't really part of the language (in my view, at least). But 
 experience has made it clear that there are plenty of occasions in which we 
 cannot really do without seq (even though its presence in the language is 
 prominent subject of debate).

...which sounds a lot like unsafePerformIO (which *is* inside the language;
it's part of the FFI addendum).  The parallels are exactly why I suggested
it become unsafeSeq.

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Laziness question

2010-08-02 Thread Stefan Holdermans
Nicolas, Luke,

SH More importantly, the type-class approach is flawed [...].  It assumes
SH that all seq-related constraints can be read off from type variables,
SH which is in fact not the case.

LP Could you provide an example (or a specific example or explanation in
LP the paper) of what you mean by this?

I don't remember the exact details, but if I remember correctly the main
concern of the authors is that by requiring that all function types are in
Eval as well, as an older version of the Report did, applications of seq to
values of function type are not reflected in types and hence, types do not
convey enough information to state some necessary seq-induced extra
conditions for parametricity results. Anyway, I guess Janis and Daniel will
do a far better job commenting on this.

As an aside, or a shameless plug if you will, in a paper presented at this
year's PEPM [*], Jurriaan and I mention that having information about uses
of seq explicitly available, particularly in types, could be favourable for
strictness analysis as well.

NP Actually my point is that if we do not use any polymorphic primitive to
NP implement a family of seq functions then it cannot be flawed. Indeed
NP if you read type classes as a way to implicitly pass and pick functions
NP then it cannot add troubles.

NP Finally maybe we can simply forbid the forcing of functions (as we do
NP with Eq). The few cases where it does matter will resort to
NP unsafeSeqFunction.

I guess not having function types in Eval would indeed make parametricity
less of an issue, but I do not quite agree that the cases in which one may
need seq on values of function type are insignificant.

Cheers,

  Stefan

[*] Stefan Holdermans and Jurriaan Hage. Making “stricterness” more relevant.
In John P. Gallagher and Janis Voigtländer, editors, Proceedings of the
2010 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation,
PEPM 2010, Madrid, Spain, January 18–19, 2010, pages 121–130. ACM Press,
2010.

http://people.cs.uu.nl/stefan/pubs/holdermans10making.html


[Haskell-cafe] Re: Laziness question

2010-08-02 Thread Janis Voigtländer

Hi,

I am late to reply in this thread, but as I see Stefan has already made
what (also from my view) are the main points:

- Putting seq in a type class makes type signatures more verbose, which
one may consider okay or not. In the past (and, as it seems, again in
every iteration of the language development process since then) the
majority of the language design decision makers have considered this
verbosity non-okay enough, so that they decided to have a fully
polymorphic seq.

- Even if putting seq in a type class, problems with parametricity do
not simply vanish. The question is what instances there will be for that
class. (For example, if there is no instance at all, then no
seq-related problems of *any* nature can exist...)

- The Haskell 1.3 solution was to, among others, have a class instance
for functions. As we show in the paper Stefan mentioned, that is not a
solution. Some statements claimed by parametricity will then still be
wrong due to seq.

- If there is no class instance for function types, then those problems
go away, of course. But it is doubtful whether that would be a viable
solution. Quite a few programs would be rejected as a consequence. (Say,
you want to use the strict version of foldl. That will lead to a type
class constraint on one of the type variables. Now suppose you actually
want to fold in a higher-order fashion, like when expressing efficient
reverse via foldr. That would not anymore be possible for the strict
version of foldl, as it would require the type-class-constrained
variable to be instantiated with a function type.)
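The restriction described in the last point can be sketched concretely (the class `Seq`, its method `seq'`, and `foldlStrict` are hypothetical names for illustration, not code from the thread):

```haskell
import Data.Function (fix)

-- A seq-in-a-class variant with no instance for function types.
class Seq a where
  seq' :: a -> b -> b

instance Seq Int where
  seq' = seq  -- reuse the primitive on a first-order type

-- The strict left fold now carries a Seq constraint on the accumulator.
foldlStrict :: Seq a => (a -> b -> a) -> a -> [b] -> a
foldlStrict c = fix (\h n ys -> case ys of
  [] -> n
  x : xs -> let n' = c n x in seq' n' (h n' xs))

-- Rejected: using foldlStrict with a function-typed accumulator, e.g.
--   foldlStrict (\k x -> k . (x :)) id xs []
-- would need an instance Seq ([a] -> [a]), which deliberately does not exist.
```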

Two more specific answers to Nicolas' comments:


Actually my point is that if we do not use any polymorphic primitive to
implement a family of seq functions then it cannot be flawed. Indeed
if you read type classes as a way to implicitly pass and pick  functions
then it cannot add troubles.


Completely correct. But the point is that without using any polymorphic
primitive you won't be able to implement a member of that family for the
case of function types (which you do not consider a big restriction, but
others do).


However I absolutely do not buy their argument using as example a function
f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int. They consider as
an issue the fact that the type does not tell us on which argument seq is
used. I think it is fine we may want a more precise type for it to get more
properties out of it but it is not flawed.
As much as we don't know where (==) is used given a function of type
∀ a. Eq a => [a] -> [a].


I fear you do not buy our argument since you did not fully understand
what our argument is, which in all probability is our fault in not
explaining it enough. The point is not that we dislike per se that one
doesn't know from the type signature how/where exactly methods from a
type class are used. In your example ∀ a. Eq a => [a] -> [a] it's
alright that we don't know more about where (==) is used. But for a
function of type f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) ->
Int, in connection with trying to find out whether uses of seq are
harmful or not, it is absolutely *essential* to know on which of the two
functions (a -> Int) seq is used. The type class approach cannot tell
that. Hence, a type class approach is unsuitable for trying to prevent
seq from doing parametricity-damage while still allowing to write all
the Haskell programs one could before (including ones that use seq on
functions). That is the flaw of the type class approach to controlling
seq. It is of course no flaw of using type classes in Haskell for other
things, and we certainly did not mean to imply such a thing.

Best regards,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de



Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Brandon S Allbery KF8NH

On 8/2/10 11:41 , Janis Voigtländer wrote:
 alright that we don't know more about where (==) is used. But for a
 function of type f :: Eval (a - Int) = (a - Int) - (a - Int) -
 Int, in connection with trying to find out whether uses of seq are
 harmful or not, it is absolutely *essential* to know on which of the two
 functions (a - Int) seq is used. The type class approach cannot tell

Hm.  Seems to me that (with TypeFamilies and FlexibleContexts)

 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int

should do that, but ghci is telling me it isn't (substituting Eq for Eval
for the nonce):

  Prelude> let h :: (x ~ y, Eq (y -> Int)) => (x -> Int) -> (y -> Int) ->
Int; h = undefined
  Prelude> :t h
  h :: (Eq (x -> Int)) => (x -> Int) -> (x -> Int) -> Int

Bleah.  (as if it weren't obvious) I still don't quite grok this stuff

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Stefan Holdermans
Brandon,

 Hm.  Seems to me that (with TypeFamilies and FlexibleContexts)
 
 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int
 
 should do that, but ghci is telling me it isn't (substituting Eq for Eval
 for the nonce):
 
  Prelude> let h :: (x ~ y, Eq (y -> Int)) => (x -> Int) -> (y -> Int) ->
 Int; h = undefined
  Prelude> :t h
  h :: (Eq (x -> Int)) => (x -> Int) -> (x -> Int) -> Int
 
 Bleah.  (as if it weren't obvious) I still don't quite grok this stuff

Well... x ~ y kind of implies that x could replace y within the scope of the 
constraint: it's one of the first things I would expect to follow from a 
notion of equality. ;-)  But actually if you push the constraint inward, into 
the type so to say, you actually get quite close to Janis' and David's solution.

Cheers,

  Stefan


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Stefan Holdermans
Brandon,

 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int

  But actually if you push the constraint inward, into the type so to say, you 
 actually get quite close to Janis' and David's solution.

Sorry, I was thinking out loud there. I meant the Eval constraint, not the 
equality constraint.  But, right now, I guess my comment only makes sense to 
me, so let's pretend I kept quiet. ;-)

Cheers,

  S.


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Brandon S Allbery KF8NH

On 8/2/10 17:18 , Stefan Holdermans wrote:
 Brandon,
 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int
 
  But actually if you push the constraint inward, into the type so to say, 
 you actually get quite close to Janis' and David's solution.
 
 Sorry, I was thinking out loud there. I meant the Eval constraint, not the 
 equality constraint.  But, right now, I guess my comment only makes sense to 
 me, so let's pretend I kept quiet. ;-)

The point of this discussion is that the Eval constraint needs to be on one
of the functions.  So I tried to specify that (x -> Int) and (y -> Int) are
different types despite x and y being the same type, because one of them has
an Eval constraint.  This may be a shortcoming of Haskell (or System FC?)
types, although it may be doable with a newtype.

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Stefan Holdermans
Brandon,

 Sorry, I was thinking out loud there. I meant the Eval constraint, not the 
 equality constraint.  But, right now, I guess my comment only makes sense to 
 me, so let's pretend I kept quiet. ;-)

  The point of this discussion is that the Eval constraint needs to be on one
  of the functions.  So I tried to specify that (x -> Int) and (y -> Int) are
  different types despite x and y being the same type, because one of them has
  an Eval constraint.  This may be a shortcoming of Haskell (or System FC?)
  types, although it may be doable with a newtype.

That was kind of what my thinking out loud was getting at. If you want x -> Int 
and y -> Int to be different types even if x and y actually are the same type, 
then apparently you want x -> Int and y -> Int to be built from different 
function-space constructors, say -> and ->*, yielding x -> Int and y ->* Int. 
Replacing equals by equals again, you get x -> Int and x ->* Int. So, 
basically, we are annotating function types, which is IIRC exactly what Janis 
and David are doing. (I hope Janis corrects me if I'm wrong here.)
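The two function-space constructors can be approximated today with a newtype; everything below (`Strict`, `Eval`, `seqEval`, `h`) is a hypothetical sketch of the idea, not the actual system from the paper:

```haskell
-- `Strict a b` plays the role of the annotated arrow a ->* b: a function
-- type that admits forcing.
newtype Strict a b = Strict { apply :: a -> b }

class Eval a where
  seqEval :: a -> b -> b

-- Only the annotated function space gets an instance; the plain arrow
-- a -> b deliberately has none.
instance Eval (Strict a b) where
  seqEval (Strict f) y = f `seq` y

-- The type now records which of the two function arguments may be forced.
h :: Strict x Int -> (x -> Int) -> Int -> Int
h f _g n = seqEval f n
```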

Cheers,

  Stefan


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Ivan Lazar Miljenovic
Brandon S Allbery KF8NH allb...@ece.cmu.edu writes:

 Exactly.  (I was being cagey because the first response was cagey, possibly
 suspecting a homework question although it seems like an odd time for
 it.)

Why is it an odd time for it?

Here in Australia (and presumably other countries in the southern
hemisphere) this week will be the second or third week of semester...

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Nicolas Pouillard
On Sat, 31 Jul 2010 17:30:54 -0400, Brandon S Allbery KF8NH 
allb...@ece.cmu.edu wrote:
 
 On 7/31/10 16:58 , wren ng thornton wrote:
  Brandon S Allbery KF8NH wrote:
  michael rice wrote:
  Are you saying:
 
  [ head x ]  ->  [ *thunk* ]   and   length [ *thunk* ] ->  1, independent 
  of
  what *thunk* is, even head [], i.e., *thunk* never needs be evaluated?
 
  Exactly.  (I was being cagey because the first response was cagey, possibly
  suspecting a homework question although it seems like an odd time for it.)
 
  length not only does not look inside of the thunk, it *can't* look inside
  it; all it knows is that it has a list, it specifically does *not* know 
  what
  that list can hold.  So the only thing it can do is count the number of
  unknown somethings in the list.
  
  Not entirely true:
  
  stupidlyStrictLength :: [a] -> Integer
  stupidlyStrictLength [] = 0
  stupidlyStrictLength (x:xs) = x `seq` 1 + stupidlyStrictLength xs
 
 Given all the messes seq makes (hey, go behind the compiler's back and
 touch this arbitrary value of arbitrary type), I generally consider it to
 be unsafeSeq :)

I would be deeply in favor of renaming seq to unsafeSeq and introducing a
type class to reintroduce seq in a disciplined way.

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Stefan Holdermans
Nicolas,

 I would be deeply in favor of renaming seq to unsafeSeq and introducing a
 type class to reintroduce seq in a disciplined way.

There is a well-documented [1] trade-off here:  Often, calls to seq are 
introduced late in the development cycle; typically after you have discovered a 
space leak and figured out how to resolve it.  Then you introduce a call to seq 
somewhere deep in a call chain.  If seq were a class method, you would now have 
to work your way upward in the call chain to update all type signatures you had 
written there.  Clearly, this is quite tedious.  And then, almost just as 
often, you find out that you did not actually quite figure out how to resolve 
the space leak, because it's still there.  So, you remove your freshly 
introduced call to seq and, indeed, work your way through the call chain again 
to remove all now-superfluous class constraints from the type signatures. (By 
the way, this is typically the sort of task that IDEs with refactoring support 
excel at, but these are unfortunately not yet ubiquitous for Haskell.)

More importantly, the type-class approach is flawed [2].  It assumes that all 
seq-related constraints can be read off from type variables, which is in fact 
not the case.

Cheers,

  Stefan

[1] Paul Hudak, John Hughes, Simon Peyton Jones, and Philip Wadler. A
history of Haskell: Being lazy with class. In Barbara G. Ryder and
Brent Hailpern, editors, Proceedings of the Third ACM SIGPLAN
History of Programming Languages Conference (HOPL-III), San Diego,
California, USA, 9–10 June 2007, pages 1–55. ACM Press, 2007.
http://doi.acm.org/10.1145/1238844.1238856

[2] Daniel Seidel and Janis Voigtländer. Taming selective strictness.
In Stefan Fischer, Erik Maehle, and Rüdiger Reischuk, editors,
INFORMATIK 2009 – Im Focus das Leben, Beiträge der 39. Jahrestagung
der Gesellschaft für Informatik e.V. (GI), 28. September – 2.
Oktober, in Lübeck, volume 154 of Lecture Notes in Informatics,
pages 2916–2930. GI, 2009.


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Luke Palmer
On Sun, Aug 1, 2010 at 2:53 AM, Stefan Holdermans
ste...@vectorfabrics.com wrote:
 Nicolas,

 I would deeply in favor of renaming seq to unsafeSeq, and introduce a
 type class to reintroduce seq in a disciplined way.

 There is a well-documented [1] trade-off here:  Often, calls to seq are 
 introduced late in a developing cycle; typically after you have discovered a 
 space leak and figured out how to resolve it.  Then you introduce a call to 
 seq somewhere deep in a call chain.  If seq were a class method, you would 
 now have to work your way upward in the call chain to update all type 
 signatures you had written there.  Clearly, this is quite tedious.  And then, 
 almost just as often, you find out that actually you did not quite figure out 
 how to resolve the space leak, because it's still there.  So, you remove your 
 freshly introduced call to seq and, indeed, work your way to the call chain 
 again to remove all now superfluous class constraints from the type 
 signatures. (By the way, this is typically the sort of task, IDEs with 
 refactoring support excel at, but these are unfortunately not ubiquitous for 
 Haskell yet.)

That's a reasonable concern.  I suppose that would be the reason for
keeping unsafeSeq around if we do typeclassify seq.

 More importantly, the type-class approach is flawed [2].  It assumes that all 
 seq-related constraints can be read off from type variables, which is in 
 fact not the case.

Could you provide an example (or point to a specific example or explanation
in the paper) of what you mean by this?

Luke

 Cheers,

  Stefan

 [1] Paul Hudak, John Hughes, Simon Peyton Jones, and Philip Wadler. A
    history of Haskell: Being lazy with class. In Barbara G. Ryder and
    Brent Hailpern, editors, Proceedings of the Third ACM SIGPLAN
    History of Programming Languages Conference (HOPL-III), San Diego,
    California, USA, 9–10 June 2007, pages 1–55. ACM Press, 2007.
    http://doi.acm.org/10.1145/1238844.1238856

 [2] Daniel Seidel and Janis Voigtländer. Taming selective strictness.
    In Stefan Fischer, Erik Maehle, and Rüdiger Reischuk, editors,
    INFORMATIK 2009 – Im Focus das Leben, Beiträge der 39. Jahrestagung
    der Gesellschaft für Informatik e.V. (GI), 28. September – 2.
    Oktober, in Lübeck, volume 154 of Lecture Notes in Informatics,
    pages 2916–2930. GI, 2009.


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Nicolas Pouillard
On Sun, 1 Aug 2010 10:53:24 +0200, Stefan Holdermans ste...@vectorfabrics.com 
wrote:
 Nicolas,
 
  I would deeply in favor of renaming seq to unsafeSeq, and introduce a
  type class to reintroduce seq in a disciplined way.
 
 There is a well-documented [1] trade-off here:  Often, calls to seq are 
 introduced late in a developing cycle; typically after you have discovered a 
 space leak and figured out how to resolve it.  Then you introduce a call to 
 seq somewhere deep in a call chain.  If seq were a class method, you would 
 now have to work your way upward in the call chain to update all type 
 signatures you had written there.  Clearly, this is quite tedious.  And then, 
 almost just as often, you find out that actually you did not quite figure out 
 how to resolve the space leak, because it's still there.  So, you remove your 
 freshly introduced call to seq and, indeed, work your way to the call chain 
 again to remove all now superfluous class constraints from the type 
 signatures. (By the way, this is typically the sort of task, IDEs with 
 refactoring support excel at, but these are unfortunately not ubiquitous for 
 Haskell yet.)

Only in the case where the type of the value being forced is not known;
otherwise the class constraint gets silently discarded.

 More importantly, the type-class approach is flawed [2].  It assumes that all 
 seq-related constraints can be read off from type variables, which is in 
 fact not the case.

Indeed, they give some account of the type class approach, but not enough IMHO.
Sure, forcing functions is a tricky business, and having a generic instance for
functions is clearly not the solution (as they explain with foldl vs foldl').

However, I absolutely do not buy their argument using as example a function
f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int. They consider it
an issue that the type does not tell us on which argument seq is used. I think
that is fine: we may want a more precise type for it, to get more properties
out of it, but it is not flawed.
We equally don't know where (==) is used given a function of type
∀ a. Eq a => [a] -> [a].

Actually my point is that if we do not use any polymorphic primitive to
implement a family of seq functions, then it cannot be flawed. Indeed,
if you read type classes as a way to implicitly pass and pick functions,
then they cannot add trouble.

Finally, maybe we can simply forbid the forcing of functions (as we do with
Eq). The few cases where it does matter will resort to unsafeSeqFunction.

Cheers,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Felipe Lessa
On Sun, Aug 1, 2010 at 11:29 AM, Nicolas Pouillard
nicolas.pouill...@gmail.com wrote:
 Finally maybe we can simply forbid the forcing of functions (as we do with
 Eq). The few cases where it does matter will resort to unsafeSeqFunction.

What's the problem with

  class Eval a where
    seq :: a -> t -> t

  instance Eval b => Eval (a -> b) where
    seq f = seq (f undefined)

It would reduce at least to WHNF as unsafeSeq would.  Does it compute
more than WHNF?

Hmmm, I see, if we had

  f :: Int -> Int
  f _ = undefined

Then my seq above would diverge while unsafeSeq wouldn't.  Perhaps
that instance would be a good compromise if we insist in not using
'unsafeSeq' to define it.

Cheers, =)

-- 
Felipe.


Re: [Haskell-cafe] Laziness question

2010-08-01 Thread Dan Doel
On Sunday 01 August 2010 10:52:48 am Felipe Lessa wrote:
 On Sun, Aug 1, 2010 at 11:29 AM, Nicolas Pouillard
 
 nicolas.pouill...@gmail.com wrote:
  Finally maybe we can simply forbid the forcing of functions (as we do
  with Eq). The few cases where it does matter will resort to
  unsafeSeqFunction.
 
 What's the problem with
 
   class Eval a where
     seq :: a -> t -> t
 
   instance Eval b => Eval (a -> b) where
     seq f = seq (f undefined)
 
 It would reduce at least to WHNF as unsafeSeq would.  Does it compute
 more than WHNF?
 
 Hmmm, I see, if we had
 
   f :: Int -> Int
   f _ = undefined
 
 Then my seq above would diverge while unsafeSeq wouldn't.  Perhaps
 that instance would be a good compromise if we insist in not using
 'unsafeSeq' to define it.

It also diverges for every strict function f.

  seq id x = seq (id undefined) x = seq undefined x = undefined

-- Dan
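[Editor's note: a minimal sketch, not from the thread, of Dan's point. The class method is renamed seq' (a hypothetical name) to avoid clashing with Prelude.seq; the function instance forces `f undefined`, so any function strict in its argument makes seq' diverge.]

```haskell
-- Editor's sketch; seq' and the Int instance are illustrative assumptions.
class Eval a where
  seq' :: a -> t -> t

instance Eval Int where
  seq' = seq  -- fall back to the primitive for a base type

-- The instance proposed in Felipe's message:
instance Eval b => Eval (a -> b) where
  seq' f = seq' (f undefined)

main :: IO ()
main =
  -- Fine for a function that ignores its argument:
  putStrLn (seq' (const (0 :: Int)) "const: ok")
  -- But for any function strict in its argument, e.g. id:
  --   seq' (id :: Int -> Int) x
  --     = seq' (id undefined) x = seq' undefined x = undefined
```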


[Haskell-cafe] Laziness question

2010-07-31 Thread michael rice
From: http://en.wikibooks.org/wiki/Haskell/Laziness


Given two functions of one parameter, f and g, we say f is stricter than g if f 
x evaluates x to a deeper level than g x

Exercises

   1. Which is the stricter function?

f x = length [head x]
g x = length (tail x)



Prelude> let f x = length [head x]
Prelude> let g x = length (tail x)
Prelude> f undefined
1
Prelude> g undefined
*** Exception: Prelude.undefined
Prelude> 



So, g is stricter than f?

Wouldn't both functions need to evaluate x to the same level, *thunk* : *thunk* 
to ensure listhood?

f x = length [head *thunk* : *thunk*]
g x = length (tail *thunk* : *thunk*)

Michael





Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Henning Thielemann
michael rice schrieb:

 So, g is stricter than f?
 
 Wouldn't both functions need to evaluate x to the same level, *thunk* :
 *thunk* to insure listhood?

No. :-)



Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Ben Millwood
On Sat, Jul 31, 2010 at 4:56 PM, michael rice nowg...@yahoo.com wrote:

 From: http://en.wikibooks.org/wiki/Haskell/Laziness


 Given two functions of one parameter, f and g, we say f is stricter than g if 
 f x evaluates x to a deeper level than g x

 Exercises

    1. Which is the stricter function?

 f x = length [head x]
 g x = length (tail x)



 Prelude> let f x = length [head x]
 Prelude> let g x = length (tail x)
 Prelude> f undefined
 1
 Prelude> g undefined
 *** Exception: Prelude.undefined
 Prelude>



 So, g is stricter than f?

 Wouldn't both functions need to evaluate x to the same level, *thunk* : 
 *thunk* to insure listhood?

 f x = length [head *thunk* : *thunk*]
 g x = length (tail *thunk* : *thunk*)

 Michael




Notice the two different kinds of brackets being used in f versus g :)


Re: [Haskell-cafe] Laziness question

2010-07-31 Thread michael rice
OK, in f, *length* already knows its argument is a list.

In g, *length* doesn't know what's inside the parens, so there's extra 
evaluation there. So g is already ahead before we get to what's inside the 
[] and ().

But since both still have to eval x to *thunk* : *thunk*, g evaluates to a 
deeper level?

Michael


 Wouldn't both functions need to evaluate x to the same level, *thunk* : 
 *thunk* to insure listhood?

 f x = length [head *thunk* : *thunk*]
 g x = length (tail *thunk* : *thunk*)

 Michael




Notice the two different kinds of brackets being used in f versus g :)






Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Brandon S Allbery KF8NH
On 7/31/10 12:59 , michael rice wrote:
 OK, in f, *length* already knows it's argument is a list.
 
 In g, *length* doesn't know what's inside the parens, extra evaluation
 there. So g is already ahead before we get to what's inside the [] and ().
 
 But since both still have eval x to *thunk* : *thunk*,  g evaluates to a
 deeper level?

The whole point of laziness is that f *doesn't* have to eval x.

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Ben Millwood
On Sat, Jul 31, 2010 at 5:59 PM, michael rice nowg...@yahoo.com wrote:

 OK, in f, *length* already knows it's argument is a list.

 In g, *length* doesn't know what's inside the parens, extra evaluation there. 
 So g is already ahead before we get to what's inside the [] and ().

According to the types, we already know both are lists. The question
is, of course, what kind of list.

 But since both still have eval x to *thunk* : *thunk*,  g evaluates to a 
 deeper level?

 Michael


I think this question is being quite sneaky. The use of head and tail
is pretty much irrelevant. Try the pointfree versions:

f = length . (:[]) . head
g = length . tail

and see if that helps you see why f is lazier than g.


Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Albert Y. C. Lai

On 10-07-31 01:30 PM, Brandon S Allbery KF8NH wrote:

On 7/31/10 12:59 , michael rice wrote:

But since both still have eval x to *thunk* : *thunk*,  g evaluates to a
deeper level?


The whole point of laziness is that f *doesn't* have to eval x.


To elaborate, in computer-friendly syntax:

f x = length (red_herring : [])

length cares about cons cells (:) and nil [] only. You have already 
hardcoded exactly those. Enough said... err, enough evaluated.
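[Editor's note: as a concrete check of this, a short program, not from the thread, showing that length never forces the list's elements.]

```haskell
-- Editor's demonstration: length counts cons cells without forcing the
-- elements, so bottoms inside the list are harmless.
main :: IO ()
main = do
  print (length [undefined, undefined])  -- prints 2; elements never evaluated
  print (length [head ([] :: [Int])])    -- prints 1; head [] is never forced
```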



Re: [Haskell-cafe] Laziness question

2010-07-31 Thread michael rice
From Hoogle:

  
Query: (:[])

  Error: 
unexpected :
expecting #, ,, forall, (, [, ! or )
Bad symbol

Prelude> let h = length . (:[]) . head
Prelude> h undefined
1
Prelude> :t (:[])
(:[]) :: a -> [a]
Prelude> h []
1    -- this comes as a surprise
Prelude> 

Are you saying:

[ head x ]  ->  [ *thunk* ]   and   length [ *thunk* ] ->  1, independent of 
what *thunk* is, even head [], i.e., *thunk* never needs be evaluated?

Michael






Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Brandon S Allbery KF8NH
On 7/31/10 14:24 , michael rice wrote:
 Are you saying:
 
 [ head x ]  ->  [ *thunk* ]   and   length [ *thunk* ] ->  1, independent of
 what *thunk* is, even head [], i.e., *thunk* never needs be evaluated?

Exactly.  (I was being cagey because the first response was cagey, possibly
suspecting a homework question although it seems like an odd time for it.)

length not only does not look inside of the thunk, it *can't* look inside
it; all it knows is that it has a list, it specifically does *not* know what
that list can hold.  So the only thing it can do is count the number of
unknown somethings in the list.

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Tillmann Rendel

michael rice wrote:

f x = length [head x]
g x = length (tail x)

Wouldn't both functions need to evaluate x to the same level, *thunk* : 
*thunk* to insure listhood?


There is no need to insure listhood at run time, since Haskell is 
statically typed.


  Tillmann


Re: [Haskell-cafe] Laziness question

2010-07-31 Thread michael rice
Subtle stuff.

Thanks, everyone, for your patience. You've been VERY helpful. Great list!

Michael



Re: [Haskell-cafe] Laziness question

2010-07-31 Thread wren ng thornton

Brandon S Allbery KF8NH wrote:

michael rice wrote:

Are you saying:

[ head x ]  ->  [ *thunk* ]   and   length [ *thunk* ] ->  1, independent of
what *thunk* is, even head [], i.e., *thunk* never needs be evaluated?


Exactly.  (I was being cagey because the first response was cagey, possibly
suspecting a homework question although it seems like an odd time for it.)

length not only does not look inside of the thunk, it *can't* look inside
it; all it knows is that it has a list, it specifically does *not* know what
that list can hold.  So the only thing it can do is count the number of
unknown somethings in the list.


Not entirely true:

stupidlyStrictLength :: [a] -> Integer
stupidlyStrictLength [] = 0
stupidlyStrictLength (x:xs) = x `seq` 1 + stupidlyStrictLength xs

Though, of course, if we actually wanted this function we should use an 
accumulator in order to avoid stack overflow when evaluating the 
(1+(1+...0)) thunk at the end.
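[Editor's note: a sketch of the accumulator variant wren alludes to; the name stupidlyStrictLength' is invented here, not from the thread.]

```haskell
{-# LANGUAGE BangPatterns #-}

-- Editor's sketch: the bang on acc forces the running count at each step,
-- so no (1+(1+...0)) thunk builds up and the stack stays flat.
stupidlyStrictLength' :: [a] -> Integer
stupidlyStrictLength' = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = x `seq` go (acc + 1) xs

main :: IO ()
main = print (stupidlyStrictLength' (replicate 1000000 ()))  -- prints 1000000
```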


--
Live well,
~wren


Re: [Haskell-cafe] Laziness question

2010-07-31 Thread Brandon S Allbery KF8NH
On 7/31/10 16:58 , wren ng thornton wrote:
 Brandon S Allbery KF8NH wrote:
 michael rice wrote:
 Are you saying:

  [ head x ]  ->  [ *thunk* ]   and   length [ *thunk* ] ->  1, independent of
 what *thunk* is, even head [], i.e., *thunk* never needs be evaluated?

 Exactly.  (I was being cagey because the first response was cagey, possibly
 suspecting a homework question although it seems like an odd time for it.)

 length not only does not look inside of the thunk, it *can't* look inside
 it; all it knows is that it has a list, it specifically does *not* know what
 that list can hold.  So the only thing it can do is count the number of
 unknown somethings in the list.
 
 Not entirely true:
 
  stupidlyStrictLength :: [a] -> Integer
 stupidlyStrictLength [] = 0
 stupidlyStrictLength (x:xs) = x `seq` 1 + stupidlyStrictLength xs

Given all the messes seq makes (hey, go behind the compiler's back and
touch this arbitrary value of arbitrary type), I generally consider it to
be unsafeSeq :)

- -- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [GHC] #3944: Asynchronous exceptions and laziness bugs (with fixes) in Control.Concurrent.QSem/QSemN

2010-07-09 Thread GHC
#3944: Asynchronous exceptions and laziness bugs (with fixes) in
Control.Concurrent.QSem/QSemN
-+--
  Reporter:  basvandijk  |  Owner:  simonmar
  Type:  bug | Status:  closed  
  Priority:  normal  |  Milestone:  6.14.1  
 Component:  libraries/base  |Version:  6.12.1  
Resolution:  fixed   |   Keywords:  
Difficulty:  | Os:  Unknown/Multiple
  Testcase:  |   Architecture:  Unknown/Multiple
   Failure:  None/Unknown|  
-+--
Changes (by simonmar):

  * status:  patch = closed
  * resolution:  = fixed


Comment:

 Fixed:

 {{{
 Thu Jul  8 15:58:19 BST 2010  Simon Marlow marlo...@gmail.com
   * Async-exception safety, and avoid space leaks
   Patch submitted by: Bas van Dijk v.dijk@gmail.com
   Modified slightly by me to remove non-functional changes.

 M ./Control/Concurrent/QSem.hs -9 +12

 Thu Jul  8 11:31:54 BST 2010  Simon Marlow marlo...@gmail.com
   * Async-exception safety, and avoid space leaks
   Patch submitted by: Bas van Dijk v.dijk@gmail.com
   Modified slightly by me to remove non-functional changes.

 M ./Control/Concurrent/QSemN.hs -11 +13
 }}}

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/3944#comment:3
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: laziness in `length'

2010-06-15 Thread Denys Rtveliashvili
Hi Daniel,

Thank you very much for the explanation of this issue.

While I understand the parts about rewrite rules and the big thunk, it
is still not clear why it is the way it is.

Please could you explain which Nums are not strict? The ones I am aware
about are all strict.

Also, why doesn't it require building the full thunk for non-strict
Nums? Even if they are not strict, an addition requires both parts to be
evaluated. This means the thunk will have to be pre-built, doesn't it?

With kind regards,
Denys


 On Monday 14 June 2010 16:25:06, Serge D. Mechveliani wrote:
  Dear people and GHC team,
 
  I have a naive question about the compiler and library of  ghc-6.12.3.
  Consider the program
 
  import List (genericLength)
  main = putStr $ shows (genericLength [1 .. n]) "\n"
     where
     n = -- 10^6, 10^7, 10^8 ...
 
  (1) When it is compiled under  -O,  it runs in a small constant space
  in  n  and in a time approximately proportional to  n.
  (2) When it is compiled without -O,  it takes at the run-time the
  stack proportional to  n,  and it takes enormousely large time
  for  n = 10^7.
  (3) In the interpreter mode  ghci,   `genericLength [1 .. n]'
  takes as much resource as (2).
 
  Are the points (2) and (3) natural for an Haskell implementation?
 
  Independently on whether  lng  is inlined or not, its lazy evaluation
  is, probably, like this:
   lng [1 .. n] =
   lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) =
   1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) =
   2 + (lng (3: (list 4 n)))   -- because this + is of Integer
   = 2 + 1 + (lng $ list 4 n) =
   3 + (lng $ list 4 n)
   ...
  And this takes a small constant space.
 
 Unfortunately, it would be
 
 lng [1 .. n]
 ~ 1 + (lng [2 .. n])
 ~ 1 + (1 + (lng [3 .. n]))
 ~ 1 + (1 + (1 + (lng [4 .. n])))
 ~
 
 and that builds a thunk of size O(n).
 
 The thing is, genericLength is written so that for lazy number types, the 
 construction of the result can begin before the entire list has been 
 traversed. This means however, that for strict number types, like Int or 
 Integer, it is woefully inefficient.
 
 In the code above, the result type of generic length (and the type of list 
 elements) is defaulted to Integer.
 When you compile with optimisations, a rewrite-rule fires:
 
 -- | The 'genericLength' function is an overloaded version of 'length'.  In
 -- particular, instead of returning an 'Int', it returns any type which is
 -- an instance of 'Num'.  It is, however, less efficient than 'length'.
 genericLength :: (Num i) => [b] -> i
 genericLength []    = 0
 genericLength (_:l) = 1 + genericLength l
 
 {-# RULES
   "genericLengthInt"     genericLength = (strictGenericLength :: [a] -> Int);
   "genericLengthInteger" genericLength = (strictGenericLength :: [a] -> Integer);
  #-}
 
 strictGenericLength :: (Num i) => [b] -> i
 strictGenericLength l = gl l 0
   where
     gl []     a = a
     gl (_:xs) a = let a' = a + 1 in a' `seq` gl xs a'
 
 which gives a reasonably efficient constant-space calculation.
 
 Without optimisations and in ghci, you get the generic code, which is slow 
 and takes O(n) space.
 
  Thank you in advance for your explanation,
 
  -
  Serge Mechveliani
  mech...@botik.ru
 
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: laziness in `length'

2010-06-15 Thread Daniel Fischer
On Tuesday 15 June 2010 16:52:04, Denys Rtveliashvili wrote:
 Hi Daniel,

 Thank you very much for the explanation of this issue.

 While I understand the parts about rewrite rules and the big thunk, it
 is still not clear why it is the way it is.

 Please could you explain which Nums are not strict? The ones I am aware
 about are all strict.

There are several implementations of lazy (to different degrees) Peano 
numbers on hackage.
The point is that it's possible to have lazy Num types, and the decision 
was apparently to write genericLength so that lazy Num types may profit 
from it.
Arguably, one should have lazyGenericLength for lazy number types and 
strictGenericLength for strict number types (Integer, Int64, Word, Word64, 
...).
On the other hand, fromIntegral . length works fine in practice (calling 
length on a list exceeding the Int range would be doubtful on 32-bit 
systems and plain madness on 64-bit systems).
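Daniel's `fromIntegral . length` suggestion can be made concrete; a minimal sketch (the helper name `len` is mine, not from the thread):

```haskell
-- 'fromIntegral . length' runs the strict, Int-based 'length' first and
-- only then converts, so it works in constant space with or without -O;
-- the name 'len' is illustrative only
len :: Num a => [b] -> a
len = fromIntegral . length

main :: IO ()
main = print (len [1 .. 1000000 :: Int] :: Integer)
-- prints 1000000
```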


 Also, why doesn't it require building the full thunk for non-strict
 Nums? Even if they are not strict, an addition requires both parts to be
 evaluated.

Not necessarily for lazy numbers.

 This means the thunk will have to be pre-built, doesn't it?

For illustration, the very simple-minded lazy Peano numbers:

data Peano
= Zero
| Succ Peano
  deriving (Show, Eq)

instance Ord Peano where
    compare Zero     Zero     = EQ
    compare Zero     _        = LT
    compare _        Zero     = GT
    compare (Succ m) (Succ n) = compare m n
    min Zero _ = Zero
    min _ Zero = Zero
    min (Succ m) (Succ n) = Succ (min m n)
    max Zero n = n
    max m Zero = m
    max (Succ m) (Succ n) = Succ (max m n)

instance Num Peano where
    Zero     + n = n
    (Succ m) + n = Succ (m + n)
    -- omitted other methods due to laziness (mine, not Haskell's)
    fromInteger n
        | n < 0     = error "Peano.fromInteger: negative argument"
        | n == 0    = Zero
        | otherwise = Succ (fromInteger (n-1))

one, two, three, four :: Peano
one = Succ Zero
two = Succ one
three = Succ two
four = Succ three

min two (genericLength [1 .. ])
~ min (Succ one) (genericLength [1 .. ])
~ min (Succ one) (1 + (genericLength [2 .. ]))
~ min (Succ one) ((Succ Zero) + (genericLength [2 .. ]))
~ min (Succ one) (Succ (Zero + (genericLength [2 .. ])))
~ Succ (min one (Zero + (genericLength [2 .. ])))
~ Succ (min (Succ Zero) (Zero + (genericLength [2 .. ])))
~ Succ (min (Succ Zero) (genericLength [2 .. ]))
~ Succ (min (Succ Zero) (1 + (genericLength [3 .. ])))
~ Succ (min (Succ Zero) ((Succ Zero) + (genericLength [3 ..])))
~ Succ (min (Succ Zero) (Succ (Zero + (genericLength [3 .. ]))))
~ Succ (Succ (min Zero (Zero + (genericLength [3 .. ]))))
~ Succ (Succ Zero)
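The trace above can be run directly; a self-contained sketch using the definitions from this message (only `(+)` and `fromInteger` of `Num` are filled in, the other methods are stubbed, which is fine for this demo):

```haskell
import Data.List (genericLength)

data Peano = Zero | Succ Peano deriving (Show, Eq)

instance Ord Peano where
    compare Zero     Zero     = EQ
    compare Zero     _        = LT
    compare _        Zero     = GT
    compare (Succ m) (Succ n) = compare m n
    -- the lazy 'min' is what lets the computation below terminate
    min Zero _ = Zero
    min _ Zero = Zero
    min (Succ m) (Succ n) = Succ (min m n)
    max Zero n = n
    max m Zero = m
    max (Succ m) (Succ n) = Succ (max m n)

instance Num Peano where
    Zero     + n = n
    (Succ m) + n = Succ (m + n)
    fromInteger n
        | n < 0     = error "Peano.fromInteger: negative argument"
        | n == 0    = Zero
        | otherwise = Succ (fromInteger (n - 1))
    -- stubs; unused in this demo
    (*)    = error "unused"
    abs    = error "unused"
    signum = error "unused"
    negate = error "unused"

two :: Peano
two = Succ (Succ Zero)

main :: IO ()
main = print (min two (genericLength [1 :: Integer ..]))
-- prints: Succ (Succ Zero), even though the list is infinite
```

Because `min` can emit a `Succ` as soon as both arguments have one, only two cons cells of the infinite list are ever demanded.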


 With kind regards,
 Denys
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: laziness in `length'

2010-06-15 Thread Roman Beslik

On 14.06.10 17:25, Serge D. Mechveliani wrote:

  lng [1 .. n] =
  lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) =
  1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) = {- !!! -}
  2 + (lng (3: (list 4 n)))   -- because this + is of Integer
  = 2 + 1 + (lng $ list 4 n) = {- !!! -}
  3 + (lng $ list 4 n)
   
Actually matters are more complicated. In the highlighted steps you 
implicitly used associativity of (+). Of course, Haskell can not do 
this. Also 'lng' and 'genericLength' *are not tail recursive*. This 
explains the stack overflow. If you compute the length with 'foldl' 
(tail-recursively) and without the -O flag, then you will see excessive 
heap usage. Also, GHC's 'length' and 'foldl'' are tail recursive and 
compute the length/accumulator eagerly, so they are efficient without the 
-O flag. See this page for an explanation:

http://www.haskell.org/haskellwiki/Stack_overflow
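Roman's point can be illustrated with a strict, tail-recursive generic length built on foldl' (the name `strictLen` is mine, not the library's):

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at every step, so no O(n) chain of
-- pending (+1) thunks is built, with or without -O
strictLen :: Num i => [b] -> i
strictLen = foldl' (\a _ -> a + 1) 0

main :: IO ()
main = print (strictLen [1 .. 1000000 :: Int] :: Integer)
-- prints 1000000
```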

--
Best regards,
  Roman Beslik.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


laziness in `length'

2010-06-14 Thread Serge D. Mechveliani
Dear people and GHC team,

I have a naive question about the compiler and library of  ghc-6.12.3.
Consider the program

  import List (genericLength)
  main = putStr $ shows (genericLength [1 .. n]) "\n"
 where
 n = -- 10^6, 10^7, 10^8 ... 
  
(1) When it is compiled under  -O,  it runs in a small constant space
in  n  and in a time approximately proportional to  n.
(2) When it is compiled without -O,  it takes at run time a stack 
proportional to  n,  and it takes an enormously long time 
for  n = 10^7.
(3) In the interpreter mode  ghci,   `genericLength [1 .. n]'
takes as much resource as (2). 

Are the points (2) and (3) natural for a Haskell implementation?

Independently on whether  lng  is inlined or not, its lazy evaluation
is, probably, like this:
 lng [1 .. n] = 
 lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) = 
 1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) =
 2 + (lng (3: (list 4 n)))   -- because this + is of Integer
 = 2 + 1 + (lng $ list 4 n) =
 3 + (lng $ list 4 n)
 ...
And this takes a small constant space.
Thank you in advance for your explanation,

-
Serge Mechveliani
mech...@botik.ru
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: laziness in `length'

2010-06-14 Thread Daniel Fischer
On Monday 14 June 2010 16:25:06, Serge D. Mechveliani wrote:
 Dear people and GHC team,

 I have a naive question about the compiler and library of  ghc-6.12.3.
 Consider the program

   import List (genericLength)
   main = putStr $ shows (genericLength [1 .. n]) "\n"
  where
  n = -- 10^6, 10^7, 10^8 ...

 (1) When it is compiled under  -O,  it runs in a small constant space
 in  n  and in a time approximately proportional to  n.
 (2) When it is compiled without -O,  it takes at run time a stack
 proportional to  n,  and it takes an enormously long time
 for  n = 10^7.
 (3) In the interpreter mode  ghci,   `genericLength [1 .. n]'
 takes as much resource as (2).

 Are the points (2) and (3) natural for a Haskell implementation?

 Independently on whether  lng  is inlined or not, its lazy evaluation
 is, probably, like this:
  lng [1 .. n] =
  lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) =
  1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) =
  2 + (lng (3: (list 4 n)))   -- because this + is of Integer
  = 2 + 1 + (lng $ list 4 n) =
  3 + (lng $ list 4 n)
  ...
 And this takes a small constant space.

Unfortunately, it would be

lng [1 .. n]
~ 1 + (lng [2 .. n])
~ 1 + (1 + (lng [3 .. n]))
~ 1 + (1 + (1 + (lng [4 .. n])))
~

and that builds a thunk of size O(n).

The thing is, genericLength is written so that for lazy number types, the 
construction of the result can begin before the entire list has been 
traversed. This means, however, that for strict number types like Int or 
Integer, it is woefully inefficient.

In the code above, the result type of genericLength (and the type of the 
list elements) is defaulted to Integer.
When you compile with optimisations, a rewrite rule fires:

-- | The 'genericLength' function is an overloaded version of 'length'.  In
-- particular, instead of returning an 'Int', it returns any type which is
-- an instance of 'Num'.  It is, however, less efficient than 'length'.
genericLength       :: (Num i) => [b] -> i
genericLength []    =  0
genericLength (_:l) =  1 + genericLength l

{-# RULES
  "genericLengthInt"     genericLength = (strictGenericLength :: [a] -> Int);
  "genericLengthInteger" genericLength = (strictGenericLength :: [a] -> Integer);
 #-}

strictGenericLength :: (Num i) => [b] -> i
strictGenericLength l = gl l 0
  where
    gl []     a = a
    gl (_:xs) a = let a' = a + 1 in a' `seq` gl xs a'

which gives a reasonably efficient constant-space calculation.
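The strict worker above can be exercised on its own; a minimal sketch wrapping it in a runnable program:

```haskell
-- the strict worker from the RULES above: the accumulator is forced with
-- `seq` at each step, so the loop runs in constant space even in ghci
strictGenericLength :: Num i => [b] -> i
strictGenericLength l = gl l 0
  where
    gl []     a = a
    gl (_:xs) a = let a' = a + 1 in a' `seq` gl xs a'

main :: IO ()
main = print (strictGenericLength [1 .. 1000000 :: Int] :: Integer)
-- prints 1000000
```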

Without optimisations and in ghci, you get the generic code, which is slow 
and takes O(n) space.

 Thank you in advance for your explanation,

 -
 Serge Mechveliani
 mech...@botik.ru

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [GHC] #3944: Asynchronous exceptions and laziness bugs (with fixes) in Control.Concurrent.QSem/QSemN

2010-05-05 Thread GHC
#3944: Asynchronous exceptions and laziness bugs (with fixes) in
Control.Concurrent.QSem/QSemN
-+--
Reporter:  basvandijk|Owner:  simonmar
Type:  bug   |   Status:  patch   
Priority:  normal|Milestone:  6.14.1  
   Component:  libraries/base|  Version:  6.12.1  
Keywords:|   Difficulty:  
  Os:  Unknown/Multiple  | Testcase:  
Architecture:  Unknown/Multiple  |  Failure:  None/Unknown
-+--
Changes (by simonmar):

  * owner:  => simonmar


-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/3944#comment:2
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs

