[Haskell-cafe] Re: Laziness bug in Data.List.intersperse (was: ANNOUNCE: text 0.8.0.0, fast Unicode text support)

2010-09-04 Thread Bryan O'Sullivan
On Wed, Sep 1, 2010 at 1:00 PM, Daniel Fischer daniel.is.fisc...@web.de wrote:

 I'm not keen on subscribing to libraries@ to follow the official proposal
 process, any takers?


I'll take it up.


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Tue, 03 Aug 2010 16:36:33 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  - If there is no class instance for function types, then those problems
  go away, of course. But it is doubtful whether that would be a viable
  solution. Quite a few programs would be rejected as a consequence. (Say,
  you want to use the strict version of foldl. That will lead to a type
  class constraint on one of the type variables. Now suppose you actually
  want to fold in a higher-order fashion, like when expressing efficient
  reverse via foldr. That would not anymore be possible for the strict
  version of foldl, as it would require the type-class-constrained
  variable to be instantiated with a function type.)
  
  I think it would be a step forward. The old seq would still exist as
  unsafeSeq and such applications could continue to use it. In the meantime
  parametricity results would better apply to programs without unsafe
  functions. And this without adding extra complexity to the type system.
 
 Yes, I agree. Of course, you (and Lennart, and others advocating putting
 seq into a type class) could work toward that solution right away, could
 have done so for quite some time: write a package with an Eval type
 class and method safeSeq (and *no* class instance for function types),
 upload it on Hackage, encourage people to use it. Modulo the naming
 difference seq/safeSeq vs. unsafeSeq/seq, that's exactly the solution
 you want. I wonder why it is not happening. :-)

Yes, it would be a starting point.

  Actually I think we can keep the old generic seq, but cutting its full
  polymorphism:
  
  seq :: Typeable a => a -> b -> b
 
 I guess I don't know enough about Typeable to appreciate that.

Basically the Typeable constraint tells us that we dynamically know the identity
of the type being passed in. It may be a bit challenging to explain cleanly how
this safely disables parametricity, but the net result is that the type is
dynamically known at run time.
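
To see why dynamic type knowledge defeats parametricity, consider this
illustration (a sketch of mine, not from the thread), using cast from
Data.Typeable: a Typeable-constrained function can behave differently at
different types, which no parametric function could.

    import Data.Typeable (Typeable, cast)

    -- behaves type-specifically despite the polymorphic-looking signature;
    -- plain parametric polymorphism rules exactly this out
    describe :: Typeable a => a -> String
    describe x = case cast x :: Maybe Int of
      Just n  -> "an Int: " ++ show n
      Nothing -> "something else"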

The same trick is known to work for references as well when effects are
everywhere:

newRef :: Typeable a => a -> Ref a
readRef :: Ref a -> a
writeRef :: Ref a -> a -> ()

In the same vein it would make unsafePerformIO less dangerous to add such
a constraint.

However I would like to hear more comments about this seq variant, anyone?

  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorems apply,
 
 Oh, YES. That's the point of a free theorem, isn't it: that I only need
 to look at the type of the function to derive some property about it.
 
  I want them to always apply when
  no call to an unsafe function is made.
 
 Well, the question is what you mean by "no call to unsafe function is
 made". Where? In the function under consideration, from whose type the
 free theorem is derived? Are you sure that this is enough? Maybe that
 function f does not contain a call to unsafeSeq, but it has an argument
 which is itself a function. Maybe in some function application,
 unsafeSeq is passed to f in that argument position, directly or
 indirectly. Maybe f does internally apply that function argument to
 something. Can you be sure that this will not lead to a failure of the
 free theorem you derived from f's type (counting on the fact that f does
 not call an unsafe function)?
 
 Of course, preventing the *whole program* from calling unsafeSeq is
 enough to guarantee validity of the free theorems thus derived. But
 that's equivalent to excluding seq from Haskell altogether.

It depends on the unsafe function that is used. Using unsafeCoerce
or unsafePerformIO (from which we can derive unsafeCoerce) badly
anywhere suffices to break anything. So while seq is less invasive,
I find it too invasive in its raw form.
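
For reference, the folklore construction behind "from which we can derive
unsafeCoerce" looks roughly like this (a well-known sketch, emphatically not
for use; the names are mine):

    import Data.IORef (IORef, newIORef, readIORef, writeIORef)
    import System.IO.Unsafe (unsafePerformIO)

    -- a single shared, fully polymorphic reference cell
    {-# NOINLINE anyRef #-}
    anyRef :: IORef a
    anyRef = unsafePerformIO (newIORef undefined)

    -- writing at one type and reading at another coerces arbitrarily
    evilCoerce :: a -> b
    evilCoerce x = unsafePerformIO (writeIORef anyRef x >> readIORef anyRef)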

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

Actually I think we can keep the old generic seq, but cutting its full
polymorphism:

seq :: Typeable a => a -> b -> b

I guess I don't know enough about Typeable to appreciate that.


Basically the Typeable constraint tells us that we dynamically know the identity
of the type being passed in. It may be a bit challenging to explain cleanly how
this safely disables parametricity, but the net result is that the type is
dynamically known at run time.

...

However I would like to hear more comments about this seq variant, anyone?


On reflection, isn't Typeable actually much too strong a constraint?
Given that it provides runtime type inspection, probably one cannot
derive any parametricity results at all for a type variable constrained
by Typeable. In contrast, for a type variable constrained via a
hypothetical (and tailored to seq) Eval-constraint, one still gets
something which looks like a standard free theorem, just with some side
conditions relating to _|_ (strictness, totality, ...).

In other words, by saying seq :: Typeable a => a -> b -> b, you assume
pessimistically that seq can do everything that is possible on members
of the Typeable class. But that might be overly pessimistic, since in
reality the only thing that seq can do is evaluate an arbitrary
expression to weak head normal form.
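
A minimal illustration of that last point, using plain Prelude seq (the
examples are mine):

    whnfExamples :: ((), ())
    whnfExamples =
      ( (undefined, undefined) `seq` ()  -- fine: a pair constructor is WHNF
      , (\_ -> undefined)      `seq` ()  -- fine: a lambda is already WHNF
      )
    -- but: undefined `seq` () diverges, because seq forces the thunk itself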


OK, I better understand now where we disagree. You want to see in the type
whether or not the free theorems apply,

Oh, YES. That's the point of a free theorem, isn't it: that I only need
to look at the type of the function to derive some property about it.


I want them to always apply when
no call to an unsafe function is made.

Well, the question is what you mean by "no call to unsafe function is
made". Where? In the function under consideration, from whose type the
free theorem is derived? Are you sure that this is enough? Maybe that
function f does not contain a call to unsafeSeq, but it has an argument
which is itself a function. Maybe in some function application,
unsafeSeq is passed to f in that argument position, directly or
indirectly. Maybe f does internally apply that function argument to
something. Can you be sure that this will not lead to a failure of the
free theorem you derived from f's type (counting on the fact that f does
not call an unsafe function)?

Of course, preventing the *whole program* from calling unsafeSeq is
enough to guarantee validity of the free theorems thus derived. But
that's equivalent to excluding seq from Haskell altogether.


It depends on the unsafe function that is used. Using unsafeCoerce
or unsafePerformIO (from which we can derive unsafeCoerce) badly
anywhere suffices to break anything. So while seq is less invasive,
I find it too invasive in its raw form.


Hmm, from this answer I still do not see what you meant when you said
you want free theorems to always apply when no call to seq is made. You
say that seq is less invasive, so do you indeed assume that as soon as
you are sure a function f does not itself (syntactically) contain a call
to seq you are safe to use the standard free theorem derived from f's
type unconstrained? Do you have any justification for that? Otherwise,
we are back to banning seq completely from the whole program/language,
in which case it is trivial that no seq-related side conditions will be
relevant.

Best,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 15:41:54 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  Actually I think we can keep the old generic seq, but cutting its full
  polymorphism:
 
  seq :: Typeable a => a -> b -> b
  I guess I don't know enough about Typeable to appreciate that.
  
  Basically the Typeable constraint tells us that we dynamically know the
  identity of the type being passed in. It may be a bit challenging to explain
  cleanly how this safely disables parametricity, but the net result is that
  the type is dynamically known at run time.
  
  ...
  
  However I would like to hear more comments about this seq variant, anyone?
 
 On reflection, isn't Typeable actually much too strong a constraint?

It is indeed too strong, or not precise enough, we could say.

However at least this simple change makes it correct (i.e. restores
the parametricity results).

I would call this function genericSeq :: Typeable a => a -> b -> b

 Given that it provides runtime type inspection, probably one cannot
 derive any parametricity results at all for a type variable constrained
 by Typeable.

Exactly. We could say that we can no longer derive wrong parametricity
results about it.

 In contrast, for a type variable constrained via a
 hypothetical (and tailored to seq) Eval-constraint, one still gets
 something which looks like a standard free theorem, just with some side
 conditions relating to _|_ (strictness, totality, ...).

Indeed, that's why I want both!

In particular for the instance on functions, which could be defined using
genericSeq (a sketch follows).
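
A sketch of what that instance might look like, assuming the hypothetical Seq
class under discussion and the genericSeq above (Nicolas spells out a concrete
module later in the thread):

    class Seq a where
      seq :: a -> b -> b

    -- the function instance delegates to the Typeable-constrained genericSeq,
    -- so functions can still be forced, but only at the cost of a visible
    -- Typeable constraint in the resulting type
    instance (Typeable a, Typeable b) => Seq (a -> b) where
      seq = genericSeq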

  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorems apply,
  Oh, YES. That's the point of a free theorem, isn't it: that I only need
  to look at the type of the function to derive some property about it.
 
  I want them to always apply when
  no call to an unsafe function is made.
  Well, the question is what you mean by "no call to unsafe function is
  made". Where? In the function under consideration, from whose type the
  free theorem is derived? Are you sure that this is enough? Maybe that
  function f does not contain a call to unsafeSeq, but it has an argument
  which is itself a function. Maybe in some function application,
  unsafeSeq is passed to f in that argument position, directly or
  indirectly. Maybe f does internally apply that function argument to
  something. Can you be sure that this will not lead to a failure of the
  free theorem you derived from f's type (counting on the fact that f does
  not call an unsafe function)?
 
  Of course, preventing the *whole program* from calling unsafeSeq is
  enough to guarantee validity of the free theorems thus derived. But
  that's equivalent to excluding seq from Haskell altogether.
  
  It depends on the unsafe function that is used. Using unsafeCoerce
  or unsafePerformIO (from which we can derive unsafeCoerce) badly
  anywhere suffices to break anything. So while seq is less invasive,
  I find it too invasive in its raw form.
 
 Hmm, from this answer I still do not see what you meant when you said
 you want free theorems to always apply when no call to seq is made. You
 say that seq is less invasive, so do you indeed assume that as soon as
 you are sure a function f does not itself (syntactically) contain a call
 to seq you are safe to use the standard free theorem derived from f's
 type unconstrained? Do you have any justification for that? Otherwise,
 we are back to banning seq completely from the whole program/language,
 in which case it is trivial that no seq-related side conditions will be
 relevant.

Actually given genericSeq, I no longer advocate the need for a polymorphic
seq function. So genericSeq and the seq from the type class would
both be safe.

However the rule is still the same: when using an unsafe function, you are on
your own.

Clearer?

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

However the rule is still the same: when using an unsafe function, you are on
your own.

Clearer?


Almost. What I am missing is whether or not you would then consider your
genericSeq (which is applicable to functions) one of those unsafe
functions or not.

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  However the rule is still the same: when using an unsafe function, you are on
  your own.
  
  Clearer?
 
 Almost. What I am missing is whether or not you would then consider your
 genericSeq (which is applicable to functions) one of those unsafe
 functions or not.

I would consider it as a safe function.

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:

Nicolas Pouillard schrieb:

However the rule is still the same: when using an unsafe function, you are on
your own.

Clearer?

Almost. What I am missing is whether or not you would then consider your
genericSeq (which is applicable to functions) one of those unsafe
functions or not.


I would consider it as a safe function.


Well, then I fear you have come full-circle back to a non-solution. It
is not safe:

Consider the example foldl''' from our paper, and replace seq therein by
your genericSeq. Then the function will have the same type as the
original foldl, but the standard free theorem for foldl does not hold
for foldl''' (as also shown in the paper).

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 17:47:12 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
  j...@informatik.uni-bonn.de wrote:
  Nicolas Pouillard schrieb:
  However the rule is still the same: when using an unsafe function, you are
  on your own.
 
  Clearer?
  Almost. What I am missing is whether or not you would then consider your
  genericSeq (which is applicable to functions) one of those unsafe
  functions or not.
  
  I would consider it as a safe function.
 
 Well, then I fear you have come full-circle back to a non-solution. It
 is not safe:

I feared a bit... but no


 Consider the example foldl''' from our paper, and replace seq therein by
 your genericSeq. Then the function will have the same type as the
 original foldl, but the standard free theorem for foldl does not hold
 for foldl''' (as also shown in the paper).

So foldl''' now has some Typeable constraints.

I agree that the free theorem for foldl does not hold for foldl'''.

However, can we derive the free theorem by looking at the type? No, because
of the Typeable constraint.

So it is safe to derive free theorems without looking at the usage of seq,
just at the type of the function, taking care not to treat as parametric
a type variable constrained by Typeable.

Finally the difference between your solution and this one is that fewer
(valid) free theorems can be derived (because of the Typeable constraints
introduced by seq on functions). Still it is a solution, since we no longer
have to fear the usage of seq when deriving a free theorem.

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

On Wed, 04 Aug 2010 17:47:12 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:

Nicolas Pouillard schrieb:

On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:

Nicolas Pouillard schrieb:

However the rule is still the same: when using an unsafe function, you are on
your own.

Clearer?

Almost. What I am missing is whether or not you would then consider your
genericSeq (which is applicable to functions) one of those unsafe
functions or not.

I would consider it as a safe function.

Well, then I fear you have come full-circle back to a non-solution. It
is not safe:


I feared a bit... but no


Consider the example foldl''' from our paper, and replace seq therein by
your genericSeq. Then the function will have the same type as the
original foldl, but the standard free theorem for foldl does not hold
for foldl''' (as also shown in the paper).


So foldl''' now has some Typeable constraints.


No, I don't see how it has that. Or maybe you should make explicit under
what conditions a type (a -> b) is in Typeable. What exactly will the
type of foldl''' be, and why?

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de




Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Nicolas Pouillard
On Wed, 04 Aug 2010 18:04:13 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Nicolas Pouillard schrieb:
  On Wed, 04 Aug 2010 17:47:12 +0200, Janis Voigtländer 
  j...@informatik.uni-bonn.de wrote:
  Nicolas Pouillard schrieb:
  On Wed, 04 Aug 2010 17:27:01 +0200, Janis Voigtländer 
  j...@informatik.uni-bonn.de wrote:
  Nicolas Pouillard schrieb:
  However the rule is still the same: when using an unsafe function, you
  are on your own.
 
  Clearer?
  Almost. What I am missing is whether or not you would then consider your
  genericSeq (which is applicable to functions) one of those unsafe
  functions or not.
  I would consider it as a safe function.
  Well, then I fear you have come full-circle back to a non-solution. It
  is not safe:
  
  I feared a bit... but no
  
  Consider the example foldl''' from our paper, and replace seq therein by
  your genericSeq. Then the function will have the same type as the
  original foldl, but the standard free theorem for foldl does not hold
  for foldl''' (as also shown in the paper).
  
  So foldl''' now has some Typeable constraints.
 
 No, I don't see how it has that. Or maybe you should make explicit under
 what conditions a type (a -> b) is in Typeable. What exactly will the
 type of foldl''' be, and why?

Right, let's make it more explicit. I actually just wrote a Control.Seq
module and a test file:

module Control.Seq where
  genericSeq :: Typeable a => a -> b -> b
  genericSeq = Prelude.seq

  class Seq a where
    seq :: a -> b -> b

  instance (Typeable a, Typeable b) => Seq (a -> b) where
    seq = genericSeq

  ... Other seq instances ...

$ cat test.hs
import Prelude hiding (seq)
import Data.Function (fix)
import Control.Seq (Seq(seq))
import Data.Typeable

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl c = fix (\h n ys -> case ys of
                 [] -> n
                 x : xs -> let n' = c n x in h n' xs)

foldl' :: Seq a => (a -> b -> a) -> a -> [b] -> a
foldl' c = fix (\h n ys -> case ys of
                  [] -> n
                  x : xs -> let n' = c n x in seq n' (h n' xs))

foldl'' :: (Typeable a, Typeable b, Seq b) => (a -> b -> a) -> a -> [b] -> a
foldl'' c = fix (\h n ys -> seq (c n) (case ys of
                   [] -> n
                   x : xs -> seq xs (seq x (let n' = c n x in h n' xs))))

foldl''' :: (Typeable a, Typeable b) => (a -> b -> a) -> a -> [b] -> a
-- GHC infers this one:
-- foldl''' :: Seq (a -> b -> a) => (a -> b -> a) -> a -> [b] -> a
-- however, that one requires FlexibleContexts, and the first one is accepted.
foldl''' c = seq c (fix (\h n ys -> case ys of
                    [] -> n
                    x : xs -> let n' = c n x in h n' xs))

Best regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-04 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

Right, let's make it more explicit. I actually just wrote a Control.Seq
module and a test file:

module Control.Seq where
  genericSeq :: Typeable a => a -> b -> b
  genericSeq = Prelude.seq

  class Seq a where
    seq :: a -> b -> b

  instance (Typeable a, Typeable b) => Seq (a -> b) where
    seq = genericSeq

  ... Other seq instances ...

$ cat test.hs
import Prelude hiding (seq)
import Data.Function (fix)
import Control.Seq (Seq(seq))
import Data.Typeable

...

foldl''' :: (Typeable a, Typeable b) => (a -> b -> a) -> a -> [b] -> a
-- GHC infers this one:
-- foldl''' :: Seq (a -> b -> a) => (a -> b -> a) -> a -> [b] -> a
-- however, that one requires FlexibleContexts, and the first one is accepted.
foldl''' c = seq c (fix (\h n ys -> case ys of
                    [] -> n
                    x : xs -> let n' = c n x in h n' xs))


Well, in this example you were lucky that the function type on which you
use seq involves some type variables. But consider this example:

f :: (Int -> Int) -> a -> a
f h x = seq h x

I think with your definitions that function will really have that type,
without any type class constraints on anything.

So let us derive the free theorem for that type. It is:

forall t1,t2 in TYPES, g :: t1 -> t2, g strict.
 forall p :: Int -> Int.
  forall q :: Int -> Int.
   (forall x :: Int. p x = q x)
   ==> (forall y :: t1. g (f p y) = f q (g y))

Now, set

p :: Int -> Int
p = undefined

q :: Int -> Int
q _ = undefined

Clearly, forall x :: Int. p x = q x holds.

So it should be the case that for every strict function g and
type-appropriate input y it holds:

  g (f p y) = f q (g y)

But clearly the left-hand side is undefined (due to strictness of g and
f p y = f undefined y = seq undefined y), while the right-hand side is
not necessarily so (due to f q (g y) = f (\_ -> undefined) (g y) = seq
(\_ -> undefined) (g y) = g y).

So you have claimed that by using seq via genericSeq in the above
definition of f you are guaranteed that any free theorem you derive from
its type is correct. But as you see above it is not!
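
A runnable sketch of this counterexample (the file layout and names are mine;
plain Prelude seq plays the role of genericSeq, whose Typeable constraints
would be satisfied here anyway since only Int is involved):

    f :: (Int -> Int) -> a -> a
    f h x = h `seq` x

    g :: Int -> Int
    g n = n + 1        -- strict

    p, q :: Int -> Int
    p = undefined      -- p is itself bottom
    q _ = undefined    -- q is a defined function returning bottom

    main :: IO ()
    main = do
      print (f q (g 0))  -- prints 1: q is a lambda, already in WHNF
      print (g (f p 0))  -- crashes: seq forces p, which is bottom

Although p x = q x for every x, the contexts g (f p y) and f q (g y) can be
distinguished, contradicting the free theorem.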

I think you have to face it: if you want a solution that both gives
meaningful free theorems and still allows to write all programs
involving seq that you can currently write in Haskell, then using type
classes is not the answer.

Ciao,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Nicolas Pouillard
On Mon, 02 Aug 2010 17:41:02 +0200, Janis Voigtländer 
j...@informatik.uni-bonn.de wrote:
 Hi,
 
 I am late to reply in this thread, but as I see Stefan has already made
 what (also from my view) are the main points:
 
 - Putting seq in a type class makes type signatures more verbose, which
 one may consider okay or not. In the past (and, as it seems, again in
 every iteration of the language development process since then) the
 majority of the language design decision makers have considered this
 verbosity non-okay enough, so that they decided to have a fully
  polymorphic seq.
 
 - Even if putting seq in a type class, problems with parametricity do
 not simply vanish. The question is what instances there will be for that
  class. (For example, if there is no instance at all, then no
 seq-related problems of *any* nature can exist...)
 
 - The Haskell 1.3 solution was to, among others, have a class instance
 for functions. As we show in the paper Stefan mentioned, that is not a
 solution. Some statements claimed by parametricity will then still be
 wrong due to seq.

I agree. Adding an instance with a polymorphic primitive throws away the whole
benefit of the type class approach.

 - If there is no class instance for function types, then those problems
 go away, of course. But it is doubtful whether that would be a viable
 solution. Quite a few programs would be rejected as a consequence. (Say,
 you want to use the strict version of foldl. That will lead to a type
 class constraint on one of the type variables. Now suppose you actually
 want to fold in a higher-order fashion, like when expressing efficient
 reverse via foldr. That would not anymore be possible for the strict
 version of foldl, as it would require the type-class-constrained
 variable to be instantiated with a function type.)

I think it would be a step forward. The old seq would still exist as
unsafeSeq and such applications could continue to use it. In the meantime
parametricity results would better apply to programs without unsafe
functions. And this without adding extra complexity to the type system.

Actually I think we can keep the old generic seq, but cutting its full
polymorphism:

seq :: Typeable a => a -> b -> b

Even if this is acceptable I would still introduce a type class for seq
for the following reasons:

  - It can be useful to have a different implementation on some
specific types.
  - It may apply to types on which we do not want Typeable.
  - One could safely use the Typeable version for functions.

 Two more specific answers to Nicolas' comments:
 
  Actually my point is that if we do not use any polymorphic primitive to
  implement a family of seq functions then it cannot be flawed. Indeed
  if you read type classes as a way to implicitly pass and pick functions
  then it cannot add troubles.
 
 Completely correct. But the point is that without using any polymorphic
 primitive you won't be able to implement a member of that family for the
 case of function types (which you do not consider a big restriction, but
 others do).
 
  However I absolutely do not buy their argument using as example a function
  f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int. They consider as
  an issue the fact that the type does not tell us on which argument seq is
  used. I think it is fine: we may want a more precise type for it to get more
  properties out of it, but it is not flawed.
  As much as we don't know where (==) is used given a function of type
  ∀ a. Eq a => [a] -> [a].
 
 I fear you do not buy our argument since you did not fully understand
 what our argument is, which in all probability is our fault in not
 explaining it enough. The point is not that we dislike per se that one
 doesn't know from the type signature how/where exactly methods from a
  type class are used. In your example ∀ a. Eq a => [a] -> [a] it's
 alright that we don't know more about where (==) is used. But for a
  function of type f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) ->
  Int, in connection with trying to find out whether uses of seq are
 harmful or not, it is absolutely *essential* to know on which of the two
 functions (a - Int) seq is used. The type class approach cannot tell
 that. Hence, a type class approach is unsuitable for trying to prevent
 seq from doing parametricity-damage while still allowing to write all
 the Haskell programs one could before (including ones that use seq on
 functions). That is the flaw of the type class approach to controlling
 seq. It is of course no flaw of using type classes in Haskell for other
  things, and we certainly did not mean to imply such a thing.

OK, I better understand now where we disagree. You want to see in the type
whether or not the free theorems apply; I want them to always apply when
no call to an unsafe function is made.

Kind regards,

-- 
Nicolas Pouillard
http://nicolaspouillard.fr

Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Stefan Holdermans
Nicolas,

 OK, I better understand now where we disagree. You want to see in the type
 whether or not the free theorems apply; I want them to always apply when
 no call to an unsafe function is made.

Implementing your suggestion would make me feel uncomfortable. Turning seq into
an unsafe operation effectively places it outside the language, like
unsafePerformIO isn't really part of the language (in my view, at least). But
experience has made it clear that there are plenty of occasions in which we
cannot really do without seq (even though its presence in the language is a
prominent subject of debate).

Cheers,

  Stefan


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Janis Voigtländer

Nicolas Pouillard schrieb:

- If there is no class instance for function types, then those problems
go away, of course. But it is doubtful whether that would be a viable
solution. Quite a few programs would be rejected as a consequence. (Say,
you want to use the strict version of foldl. That will lead to a type
class constraint on one of the type variables. Now suppose you actually
want to fold in a higher-order fashion, like when expressing efficient
reverse via foldr. That would not anymore be possible for the strict
version of foldl, as it would require the type-class-constrained
variable to be instantiated with a function type.)


I think it would be a step forward. The old seq would still exist as
unsafeSeq and such applications could continue to use it. In the meantime
parametricity results would better apply to programs without unsafe
functions. And this without adding extra complexity to the type system.


Yes, I agree. Of course, you (and Lennart, and others advocating putting
seq into a type class) could work toward that solution right away, could
have done so for quite some time: write a package with an Eval type
class and method safeSeq (and *no* class instance for function types),
upload it on Hackage, encourage people to use it. Modulo the naming
difference seq/safeSeq vs. unsafeSeq/seq, that's exactly the solution
you want. I wonder why it is not happening. :-)


Actually I think we can keep the old generic seq, but cutting its full
polymorphism:

seq :: Typeable a => a -> b -> b


I guess I don't know enough about Typeable to appreciate that.


OK, I better understand now where we disagree. You want to see in the type
whether or not the free theorems apply,


Oh, YES. That's the point of a free theorem, isn't it: that I only need
to look at the type of the function to derive some property about it.


I want them to always apply when
no call to an unsafe function is made.


Well, the question is what you mean by "no call to unsafe function is
made". Where? In the function under consideration, from whose type the
free theorem is derived? Are you sure that this is enough? Maybe that
function f does not contain a call to unsafeSeq, but it has an argument
which is itself a function. Maybe in some function application,
unsafeSeq is passed to f in that argument position, directly or
indirectly. Maybe f does internally apply that function argument to
something. Can you be sure that this will not lead to a failure of the
free theorem you derived from f's type (counting on the fact that f does
not call an unsafe function)?

Of course, preventing the *whole program* from calling unsafeSeq is
enough to guarantee validity of the free theorems thus derived. But
that's equivalent to excluding seq from Haskell altogether.

Best,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Janis Voigtländer

Hi again,

Maybe I should add that, maybe disappointingly, I do not even have a
strong opinion about whether seq should be in Haskell or not, and in
what form. Let me quote the last paragraph of an extended version of our
paper referred to earlier:


Finally, a natural question is whether or not selective strictness
should be put under control via the type system in a future version of
Haskell (or even removed completely). We have deliberately not taken a
stand on this here. What was important to us is that both the costs and
benefits of either way should be well understood when making such a
decision. Maybe the realization that, contrary to popular opinion, a
relatively simple approach like the one that was present in Haskell
version 1.3 does not suffice to keep selective strictness in check, and
that instead something slightly less wieldy, like our type system
presented here or a similar one, would be needed, will quell the
recurring calls for putting seq in a type class once and for all. Even
then, while it would mean that our type system does not get adopted in
practice, we would consider our effort well invested. At least, the
community would then have made an informed decision, and part of the
justification would be on record.


That's under the assumption that the requirements we have on a solution are:

1. All Haskell programs that could be written before should still be
implementable, though potentially with a different type.

2. Parametricity results derived from the (new) types should hold, even
if seq is involved.

The Haskell 1.3 approach achieves 1. but not 2.

The approach of an Eval class without a function type instance achieves
2. but not 1.

Lennart suggested that the programs one loses that (latter) way might be
few in practice. I have no idea whether that is true, but it might well be.

But it is actually new to me that proponents of putting seq in a type
class admit that they can only do so and achieve 2. by accepting to give
up 1. In the past, also in the "Being lazy with class" paper, the
impression was given that the controversial issue about the Haskell 1.3
solution was just its practicality in terms of how cumbersome or not
the additional typing artifacts become, while it was simply supposed
that at least that solution achieves both 1. and 2. It does not.

Best,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Nicolas Pouillard
On Tue, 3 Aug 2010 16:24:54 +0200, Stefan Holdermans ste...@vectorfabrics.com 
wrote:
 Nicolas,
 
  OK, I better understand now where we disagree. You want to see in the type
  whether or not the free theorems apply; I want them to always apply when
  no call to an unsafe function is made.
 
 Implementing your suggestion would make me feel uncomfortable. Turning seq
 into an unsafe operation effectively places it outside the language, like
 unsafePerformIO isn't really part of the language (in my view, at least). But
 experience has made it clear that there are plenty of occasions in which we
 cannot really do without seq (even though its presence in the language is a
 prominent subject of debate).

If we ignore the solution using Typeable for now: the actual seq would be
considered unsafe, but seq would be re-introduced as a method of a type class
with instances for many types but not for functions. So in this view only
forcing functions would be considered out of the language.
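
Concretely, the design described here might look like the following sketch
(the class name Eval and the instance list are my choices):

    import Prelude hiding (seq)
    import qualified Prelude

    class Eval a where
      seq :: a -> b -> b

    instance Eval Int       where seq = Prelude.seq
    instance Eval Integer   where seq = Prelude.seq
    instance Eval [a]       where seq = Prelude.seq
    instance Eval (Maybe a) where seq = Prelude.seq
    -- deliberately no instance Eval (a -> b):
    -- forcing a function simply no longer type-checks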

-- 
Nicolas Pouillard
http://nicolaspouillard.fr


Re: [Haskell-cafe] Re: Laziness question

2010-08-03 Thread Brandon S Allbery KF8NH

On 8/3/10 10:24 , Stefan Holdermans wrote:
 Implementing your suggestion would make me feel uncomfortable. Turning seq
 into an unsafe operation effectively places it outside the language, like
 unsafePerformIO isn't really part of the language (in my view, at least). But
 experience has made it clear that there are plenty of occasions in which we
 cannot really do without seq (even though its presence in the language is a
 prominent subject of debate).

...which sounds a lot like unsafePerformIO (which *is* inside the language;
it's part of the FFI addendum).  The parallels are exactly why I suggested
it become unsafeSeq.

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


[Haskell-cafe] Re: Laziness question

2010-08-02 Thread Janis Voigtländer

Hi,

I am late to reply in this thread, but as I see Stefan has already made
what (also from my view) are the main points:

- Putting seq in a type class makes type signatures more verbose, which
one may consider okay or not. In the past (and, as it seems, again in
every iteration of the language development process since then) the
majority of the language design decision makers have considered this
verbosity non-okay enough, so that they decided to have a fully
polymorphic seq.

- Even if putting seq in a type class, problems with parametricity do
not simply vanish. The question is what instances there will be for that
class. (For example, if there is no instance at all, then no
seq-related problems of *any* nature can exist...)

- The Haskell 1.3 solution was to, among others, have a class instance
for functions. As we show in the paper Stefan mentioned, that is not a
solution. Some statements claimed by parametricity will then still be
wrong due to seq.

- If there is no class instance for function types, then those problems
go away, of course. But it is doubtful whether that would be a viable
solution. Quite a few programs would be rejected as a consequence. (Say,
you want to use the strict version of foldl. That will lead to a type
class constraint on one of the type variables. Now suppose you actually
want to fold in a higher-order fashion, like when expressing efficient
reverse via foldr. That would not anymore be possible for the strict
version of foldl, as it would require the type-class-constrained
variable to be instantiated with a function type.)

Two more specific answers to Nicolas' comments:


Actually my point is that if we do not use any polymorphic primitive to
implement a family of seq functions then it cannot be flawed. Indeed
if you read type classes as a way to implicitly pass and pick functions
then it cannot add troubles.


Completely correct. But the point is that without using any polymorphic
primitive you won't be able to implement a member of that family for the
case of function types (which you do not consider a big restriction, but
others do).


However I absolutely do not buy their argument using as example a function
f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int. They consider as
an issue the fact that the type does not tell us on which argument seq is
used. I think it is fine: we may want a more precise type for it to get more
properties out of it, but it is not flawed.
As much as we don't know where (==) is used given a function of type
∀ a. Eq a => [a] -> [a].


I fear you do not buy our argument since you did not fully understand
what our argument is, which in all probability is our fault in not
explaining it enough. The point is not that we dislike per se that one
doesn't know from the type signature how/where exactly methods from a
type class are used. In your example ∀ a. Eq a => [a] -> [a] it's
alright that we don't know more about where (==) is used. But for a
function of type f :: Eval (a -> Int) => (a -> Int) -> (a -> Int) ->
Int, in connection with trying to find out whether uses of seq are
harmful or not, it is absolutely *essential* to know on which of the two
functions (a - Int) seq is used. The type class approach cannot tell
that. Hence, a type class approach is unsuitable for trying to prevent
seq from doing parametricity-damage while still allowing to write all
the Haskell programs one could before (including ones that use seq on
functions). That is the flaw of the type class approach to controlling
seq. It is of course no flaw of using type classes in Haskell for other
things, and we certainly did not mean to imply such a thing.
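
To make that concrete, a sketch (assuming such an Eval class with method seq;
the definitions are mine): the following two functions use seq on different
arguments, yet receive exactly the same type, so the type cannot record the
difference.

    f1, f2 :: Eval (a -> Int) => (a -> Int) -> (a -> Int) -> Int
    f1 g h = g `seq` 0  -- forces its first argument
    f2 g h = h `seq` 0  -- forces its second argument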

Best regards,
Janis.

--
Jun.-Prof. Dr. Janis Voigtländer
http://www.iai.uni-bonn.de/~jv/
mailto:j...@iai.uni-bonn.de



Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Brandon S Allbery KF8NH

On 8/2/10 11:41 , Janis Voigtländer wrote:
 alright that we don't know more about where (==) is used. But for a
 function of type f :: Eval (a - Int) = (a - Int) - (a - Int) -
 Int, in connection with trying to find out whether uses of seq are
 harmful or not, it is absolutely *essential* to know on which of the two
 functions (a - Int) seq is used. The type class approach cannot tell

Hm.  Seems to me that (with TypeFamilies and FlexibleContexts)

 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int

should do that, but ghci is telling me it isn't (substituting Eq for Eval
for the nonce):

  Prelude> let h :: (x ~ y, Eq (y -> Int)) => (x -> Int) -> (y -> Int) ->
Int; h = undefined
  Prelude> :t h
  h :: (Eq (x -> Int)) => (x -> Int) -> (x -> Int) -> Int

Bleah.  (as if it weren't obvious) I still don't quite grok this stuff

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Stefan Holdermans
Brandon,

 Hm.  Seems to me that (with TypeFamilies and FlexibleContexts)
 
 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int
 
 should do that, but ghci is telling me it isn't (substituting Eq for Eval
 for the nonce):
 
  Prelude> let h :: (x ~ y, Eq (y -> Int)) => (x -> Int) -> (y -> Int) ->
 Int; h = undefined
  Prelude> :t h
  h :: (Eq (x -> Int)) => (x -> Int) -> (x -> Int) -> Int
 
 Bleah.  (as if it weren't obvious) I still don't quite grok this stuff

Well... x ~ y kind of implies that x could replace y within the scope of the
constraint: it's one of the first things I would expect to follow from a
notion of equality. ;-)  But actually if you push the constraint inward, into
the type so to say, you actually get quite close to Janis' and David's solution.

Cheers,

  Stefan


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Stefan Holdermans
Brandon,

 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int

  But actually if you push the constraint inward, into the type so to say, you 
 actually get quite close to Janis' and David's solution.

Sorry, I was thinking out loud there. I meant the Eval constraint, not the 
equality constraint.  But, right now, I guess my comment only makes sense to 
me, so let's pretend I kept quiet. ;-)

Cheers,

  S.


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Brandon S Allbery KF8NH

On 8/2/10 17:18 , Stefan Holdermans wrote:
 Brandon,
 h :: (x ~ y, Eval (y -> Int)) => (x -> Int) -> (y -> Int) -> Int
 
  But actually if you push the constraint inward, into the type so to say, 
 you actually get quite close to Janis' and David's solution.
 
 Sorry, I was thinking out loud there. I meant the Eval constraint, not the 
 equality constraint.  But, right now, I guess my comment only makes sense to 
 me, so let's pretend I kept quiet. ;-)

The point of this discussion is that the Eval constraint needs to be on one
of the functions.  So I tried to specify that (x -> Int) and (y -> Int) are
different types despite x and y being the same type, because one of them has
an Eval constraint.  This may be a shortcoming of Haskell (or System FC?)
types, although it may be doable with a newtype.
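
For what it's worth, a minimal sketch of that newtype idea (the names are
mine): the wrapper records in the type which argument may be forced.

    newtype Forceable a b = Forceable (a -> b)

    -- the type now says: only the wrapped, second function gets seq'ed
    h :: (x -> Int) -> Forceable x Int -> Int
    h _ (Forceable g) = g `seq` 0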

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Re: Laziness question

2010-08-02 Thread Stefan Holdermans
Brandon,

 Sorry, I was thinking out loud there. I meant the Eval constraint, not the 
 equality constraint.  But, right now, I guess my comment only makes sense to 
 me, so let's pretend I kept quiet. ;-)

 The point of this discussion is that the Eval constraint needs to be on one
 of the functions.  So I tried to specify that (x -> Int) and (y -> Int) are
 different types despite x and y being the same type, because one of them has
 an Eval constraint.  This may be a shortcoming of Haskell (or System FC?)
 types, although it may be doable with a newtype.

That was kind of what my thinking out loud was getting at. If you want x -> Int
and y -> Int to be different types even if x and y actually are the same type,
then apparently you want them to be built from different function-space
constructors, say -> and ->*, yielding x -> Int and y ->* Int.
Replacing equals for equals again, you get x -> Int and x ->* Int. So,
basically, we are annotating function types, which is IIRC exactly what Janis
and David are doing. (I hope Janis corrects me if I'm wrong here.)

Cheers,

  Stefan


Re: laziness in `length'

2010-06-15 Thread Denys Rtveliashvili
Hi Daniel,

Thank you very much for the explanation of this issue.

While I understand the parts about rewrite rules and the big thunk, it
is still not clear why it is the way it is.

Please could you explain which Nums are not strict? The ones I am aware
about are all strict.

Also, why doesn't it require building the full thunk for non-strict
Nums? Even if they are not strict, an addition requires both parts to be
evaluated. This means the thunk will have to be pre-built, doesn't it?

With kind regards,
Denys


 On Monday 14 June 2010 16:25:06, Serge D. Mechveliani wrote:
  Dear people and GHC team,
 
  I have a naive question about the compiler and library of  ghc-6.12.3.
  Consider the program
 
    import List (genericLength)
    main = putStr $ shows (genericLength [1 .. n]) "\n"
   where
   n = -- 10^6, 10^7, 10^8 ...
 
  (1) When it is compiled under  -O,  it runs in a small constant space
  in  n  and in a time approximately proportional to  n.
  (2) When it is compiled without -O,  it takes at the run-time the
  stack proportional to  n,  and it takes enormousely large time
  for  n = 10^7.
  (3) In the interpreter mode  ghci,   `genericLength [1 .. n]'
  takes as much resource as (2).
 
  Are the points (2) and (3) natural for an Haskell implementation?
 
  Independently on whether  lng  is inlined or not, its lazy evaluation
  is, probably, like this:
   lng [1 .. n] =
   lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) =
   1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) =
   2 + (lng (3: (list 4 n)))   -- because this + is of Integer
   = 2 + 1 + (lng $ list 4 n) =
   3 + (lng $ list 4 n)
   ...
  And this takes a small constant space.
 
 Unfortunately, it would be
 
 lng [1 .. n]
 ~ 1 + (lng [2 .. n])
 ~ 1 + (1 + (lng [3 .. n]))
 ~ 1 + (1 + (1 + (lng [4 .. n])))
 ~
 
 and that builds a thunk of size O(n).
 
 The thing is, genericLength is written so that for lazy number types, the 
 construction of the result can begin before the entire list has been 
 traversed. This means however, that for strict number types, like Int or 
 Integer, it is woefully inefficient.
 
 In the code above, the result type of genericLength (and the type of list
 elements) is defaulted to Integer.
 When you compile with optimisations, a rewrite-rule fires:
 
 -- | The 'genericLength' function is an overloaded version of 'length'.  In
 -- particular, instead of returning an 'Int', it returns any type which is
 -- an instance of 'Num'.  It is, however, less efficient than 'length'.
 genericLength       :: (Num i) => [b] -> i
 genericLength []    =  0
 genericLength (_:l) =  1 + genericLength l
 
 {-# RULES
   "genericLengthInt"     genericLength = (strictGenericLength :: [a] -> Int);
   "genericLengthInteger" genericLength = (strictGenericLength :: [a] -> Integer);
  #-}
 
 strictGenericLength :: (Num i) => [b] -> i
 strictGenericLength l = gl l 0
   where
     gl [] a     = a
     gl (_:xs) a = let a' = a + 1 in a' `seq` gl xs a'
 
 which gives a reasonably efficient constant-space calculation.
 
 Without optimisations and in ghci, you get the generic code, which is slow 
 and takes O(n) space.
 
  Thank you in advance for your explanation,
 
  -
  Serge Mechveliani
  mech...@botik.ru
 


Re: laziness in `length'

2010-06-15 Thread Daniel Fischer
On Tuesday 15 June 2010 16:52:04, Denys Rtveliashvili wrote:
 Hi Daniel,

 Thank you very much for the explanation of this issue.

 While I understand the parts about rewrite rules and the big thunk, it
 is still not clear why it is the way it is.

 Please could you explain which Nums are not strict? The ones I am aware
 about are all strict.

There are several implementations of lazy (to different degrees) Peano 
numbers on hackage.
The point is that it's possible to have lazy Num types, and the decision 
was apparently to write genericLength so that lazy Num types may profit 
from it.
Arguably, one should have lazyGenericLength for lazy number types and 
strictGenericLength for strict number types (Integer, Int64, Word, Word64, 
...).
On the other hand, fromIntegral . length works fine in practice (calling 
length on a list exceeding the Int range would be doubtful on 32-bit 
systems and plain madness on 64-bit systems).
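
Spelled out, that pragmatic alternative is just (the name is mine):

    genLen :: Num i => [b] -> i
    genLen = fromIntegral . length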


 Also, why doesn't it require building the full thunk for non-strict
 Nums? Even if they are not strict, an addition requires both parts to be
 evaluated.

Not necessarily for lazy numbers.

 This means the thunk will have to be pre-built, doesn't it?

For illustration, the very simple-minded lazy Peano numbers:

data Peano
    = Zero
    | Succ Peano
      deriving (Show, Eq)

instance Ord Peano where
    compare Zero     Zero     = EQ
    compare Zero     _        = LT
    compare _        Zero     = GT
    compare (Succ m) (Succ n) = compare m n
    min Zero _ = Zero
    min _ Zero = Zero
    min (Succ m) (Succ n) = Succ (min m n)
    max Zero n = n
    max m Zero = m
    max (Succ m) (Succ n) = Succ (max m n)

instance Num Peano where
    Zero + n     = n
    (Succ m) + n = Succ (m + n)
    -- omitted other methods due to laziness (mine, not Haskell's)
    fromInteger n
        | n < 0     = error "Peano.fromInteger: negative argument"
        | n == 0    = Zero
        | otherwise = Succ (fromInteger (n-1))

one, two, three, four :: Peano
one = Succ Zero
two = Succ one
three = Succ two
four = Succ three

min two (genericLength [1 .. ])
~ min (Succ one) (genericLength [1 .. ])
~ min (Succ one) (1 + (genericLength [2 .. ]))
~ min (Succ one) ((Succ Zero) + (genericLength [2 .. ]))
~ min (Succ one) (Succ (Zero + (genericLength [2 .. ])))
~ Succ (min one (Zero + (genericLength [2 .. ])))
~ Succ (min (Succ Zero) (Zero + (genericLength [2 .. ])))
~ Succ (min (Succ Zero) (genericLength [2 .. ]))
~ Succ (min (Succ Zero) (1 + (genericLength [3 .. ])))
~ Succ (min (Succ Zero) ((Succ Zero) + (genericLength [3 .. ])))
~ Succ (min (Succ Zero) (Succ (Zero + (genericLength [3 .. ]))))
~ Succ (Succ (min Zero (Zero + (genericLength [3 .. ]))))
~ Succ (Succ Zero)
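
With the definitions above in scope, a usage sketch (mine) showing the payoff:
the computation terminates even though the list is infinite, because only two
Succ constructors are ever demanded.

    import Data.List (genericLength)

    main :: IO ()
    main = print (min two (genericLength [1 :: Integer ..]) == two)
    -- prints True; with a strict number type this would not terminate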


 With kind regards,
 Denys


Re: laziness in `length'

2010-06-15 Thread Roman Beslik

On 14.06.10 17:25, Serge D. Mechveliani wrote:

  lng [1 .. n] =
  lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) =
  1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) = {- !!! -}
  2 + (lng (3: (list 4 n)))   -- because this + is of Integer
  = 2 + 1 + (lng $ list 4 n) = {- !!! -}
  3 + (lng $ list 4 n)
   
Actually matters are more complicated. In the highlighted steps you
implicitly used associativity of (+). Of course, Haskell cannot do
this. Also 'lng' and 'genericLength' *are not tail recursive*. This
explains the stack overflow. If you compute length with 'foldl'
(tail-recursively) and without the -O flag, then you will see excessive
heap usage. Also, GHC's 'length' and 'foldl'' are tail recursive and
eagerly compute the length/accumulator, so they are efficient without the -O
flag. See for explanation
http://www.haskell.org/haskellwiki/Stack_overflow
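
For comparison, a tail-recursive length with an eagerly evaluated accumulator,
in the style that page recommends (a sketch, the name is mine):

    import Data.List (foldl')

    -- strict in the accumulator: constant stack space even without -O
    lengthStrict :: [a] -> Int
    lengthStrict = foldl' (\acc _ -> acc + 1) 0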

--
Best regards,
  Roman Beslik.



Re: laziness in `length'

2010-06-14 Thread Daniel Fischer
On Monday 14 June 2010 16:25:06, Serge D. Mechveliani wrote:
 Dear people and GHC team,

 I have a naive question about the compiler and library of  ghc-6.12.3.
 Consider the program

   import List (genericLength)
   main = putStr $ shows (genericLength [1 .. n]) "\n"
  where
  n = -- 10^6, 10^7, 10^8 ...

 (1) When it is compiled under  -O,  it runs in a small constant space
 in  n  and in a time approximately proportional to  n.
 (2) When it is compiled without -O,  it takes at the run-time the
 stack proportional to  n,  and it takes enormously large time
 for  n = 10^7.
 (3) In the interpreter mode  ghci,   `genericLength [1 .. n]'
 takes as much resource as (2).

 Are the points (2) and (3) natural for an Haskell implementation?

 Independently on whether  lng  is inlined or not, its lazy evaluation
 is, probably, like this:
  lng [1 .. n] =
  lng (1 : (list 2 n)) =  1 + (lng $ list 2 n) =
  1 + (lng (2: (list 3 n))) = 1 + 1 + (lng $ list 3 n) =
  2 + (lng (3: (list 4 n)))   -- because this + is of Integer
  = 2 + 1 + (lng $ list 4 n) =
  3 + (lng $ list 4 n)
  ...
 And this takes a small constant space.

Unfortunately, it would be

lng [1 .. n]
~ 1 + (lng [2 .. n])
~ 1 + (1 + (lng [3 .. n]))
~ 1 + (1 + (1 + (lng [4 .. n])))
~ ...

and that builds a thunk of size O(n).

The thing is, genericLength is written so that for lazy number types, the 
construction of the result can begin before the entire list has been 
traversed. This means however, that for strict number types, like Int or 
Integer, it is woefully inefficient.

In the code above, the result type of genericLength (and the type of list
elements) is defaulted to Integer.
When you compile with optimisations, a rewrite-rule fires:

-- | The 'genericLength' function is an overloaded version of 'length'.  In
-- particular, instead of returning an 'Int', it returns any type which is
-- an instance of 'Num'.  It is, however, less efficient than 'length'.
genericLength       :: (Num i) => [b] -> i
genericLength []    =  0
genericLength (_:l) =  1 + genericLength l

{-# RULES
  "genericLengthInt"     genericLength = (strictGenericLength :: [a] -> Int);
  "genericLengthInteger" genericLength = (strictGenericLength :: [a] -> Integer);
 #-}

strictGenericLength :: (Num i) = [b] - i
strictGenericLength l   =  gl l 0
  where
gl [] a = a
gl (_:xs) a = let a' = a + 1 in a' `seq` gl xs a'

which gives a reasonably efficient constant-space calculation.

Without optimisations and in ghci, you get the generic code, which is slow 
and takes O(n) space.

 Thank you in advance for your explanation,

 -
 Serge Mechveliani
 mech...@botik.ru

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell-cafe] Re: Laziness enhances composability: an example

2009-07-10 Thread Gleb Alexeyev

Marcin Kosiba wrote:

Hi,
	To illustrate what I meant I'm attaching two examples. In example_1.py I've 
written code the way I think would be elegant (but it doesn't work). In 
example_2.py I've written code so that it works, but it isn't elegant.
	I know I'm abusing Python iterators here. Also, I'm not sure the way to 
compose iterators shown in example_2.py is the only option. Actually I'd love 
to see a better solution, because it would remove a lot of bloat from my 
code ;)




You may want to look at Lua coroutines, which are more powerful than 
Python iterators. Your example_1.py is very similar to the example in 
the Coroutines Tutorial [1].


[1] http://lua-users.org/wiki/CoroutinesTutorial
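
In Haskell itself, laziness gives this kind of composition directly; a
small sketch of the generator style (names invented for illustration):

-- each "iterator" is just a lazy list
steps :: [Int]
steps = [1 ..]

-- composing producers and filters needs no special control operators
events :: [String]
events = map describe (filter interesting steps)
  where
    interesting n = n `mod` 3 == 0
    describe n    = "step " ++ show n

-- the consumer drives the computation; only five elements are produced
main :: IO ()
main = mapM_ putStrLn (take 5 events)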

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness enhances composability: an example

2009-07-10 Thread Marcin Kosiba
On Friday 10 July 2009, Gleb Alexeyev wrote:
 Marcin Kosiba wrote:
  Hi,
  To illustrate what I meant I'm attaching two examples. In example_1.py
  I've written code the way I think would be elegant (but it doesn't work).
  In example_2.py I've written code so that it works, but it isn't elegant.
  I know I'm abusing Python iterators here. Also, I'm not sure the way to
  compose iterators shown in example_2.py is the only option. Actually I'd
  love to see a better solution, because it would remove a lot of bloat
  from my code ;)

 You may want to look at Lua coroutines, which are more powerful than
 Python iterators. Your example_1.py is very similar to the example in
 the Coroutines Tutorial [1].

 [1] http://lua-users.org/wiki/CoroutinesTutorial

That in turn looks similar to Stackless Python[1], which I've been looking 
over recently ;)

[1] http://www.stackless.com/wiki/Channels
Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread Jules Bean

Achim Schneider wrote:

 You don't come across space-leaks in strict programs often because
data is usually allocated statically even if execution is non-strict.

Piping /dev/zero into a program that just sleeps does leak space,
though.



It only leaks 8K or whatever size your system buffers pipes until it 
suspends the writer though... or do I misunderstand your analogy?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread Ketil Malde
Albert Y. C. Lai [EMAIL PROTECTED] writes:

 I haven't heard the terms laziness leak and strictness leak
 before

 Leak refers to a surprise.

I think the meaning of leak is in a bit of flux.  Originally, I believe it
refers to a memory leak, where the programmer forgot to call free()
before losing the pointer, thus making the program consume memory it
can't recover, and can't use.

With automatic memory management, this doesn't happen, so memory
leak then started to mean retaining objects longer than necessary.

(Aside: am I the only one who is shocked by the memory consumption of
modern programs?  I use a simple time tracker (gnotime), a calendar
with a handful of entries (evolution), and they both typically consume
half to one gigabyte of memory.  In all fairness, it seems to be much
better under Ubuntu 8.04 than 7.10, but again, they haven't been
running for very long yet.)

I'm not sure I'll use terms like strictness and laziness leak, I think
it's hard to see what's being lost here, and if you have a laziness
leak, it is unclear if it's too much laziness or too little?

 (Leak, bug, issue... We surely are very creative in how to avoid
 calling a shovel a shovel, or an error an error.)

At least one text insisted on defect.

Too much laziness or strictness may be harmful to performance, but
it's only a defect if it means the program fails to meet its
requirements. 

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread Loup Vaillant
2008/6/4 apfelmus [EMAIL PROTECTED]:
 [...]
 But it can waste space (-> space leak), for instance by
 accumulating a big expression like

  (..) -> ((..)+1) -> (((..) + 1) + 1) -> etc.

 instead of evaluating  x+1  immediately

    5 -> 6 -> 7 -> etc.

So, it is called a space leak because the asymptotic space required
is greater in a lazy setting compared to a strict setting. Then, what
about:

sum . map (\x -> 2 * x + 42) $ [1..999]

Provided sum works in constant space (needs proper tail calls and the
right level of strictness), this runs in constant space in Haskell,
deforestation or not. However, the equivalent code in, say, OCaml,
will work in linear space in the absence of deforestation. So, in this
case, I find it legitimate to talk about a strictness space leak. Well,
is it? Of course it uses linear space, there is no leak! hummm.
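
For illustration, a sketch of the constant-space loop this amounts to in
Haskell (hand-written; whether GHC fuses or merely streams the list
lazily, the space behaviour is the same):

sumMap :: (Int -> Int) -> [Int] -> Int
sumMap f = go 0
  where
    -- one accumulating pass, no intermediate list; the seq keeps the
    -- accumulator evaluated, so the space stays constant
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + f x
                    in acc' `seq` go acc' xs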

It feels like only lazy evaluation can be accused of space leak, while
strict evaluation cannot. Are we biased in favour of strict
evaluation? I mean, if we take strict evaluation as the default
(because it's mainstream), it is easy to think that lazy evaluation is
cool when better, but unacceptable when worse. Hence the word leak.

Just my two cents,
Loup
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread Achim Schneider
Jules Bean [EMAIL PROTECTED] wrote:

 Achim Schneider wrote:
   You don't come across space-leaks in strict programs often because
  data is usually allocated statically even if execution is
  non-strict.
  
  Piping /dev/zero into a program that just sleeps does leak space,
  though.
  
 
 It only leaks 8K or whatever size your system buffers pipes until it 
 suspends the writer though... or do I misunderstand your analogy?

I don't think so. I would say that it leaks 8k memory and infinite
time. I did not intend the example to be implementation-dependent, but
then that kind of proves my point.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread apfelmus

Ketil Malde wrote:
I think the meaning of leak is in a bit of flux.  Originally, I believe 
it refers to a memory leak, where the programmer forgot to call 
free() before losing the pointer, thus making the program consume 
memory it can't recover, and can't use.


With automatic memory management, this doesn't happen, so memory 
leak then started to mean retaining objects longer than necessary.


I agree. This definition fits the space leak

  foldl (+1) 0 [1..100000]   ->   (((...)+1)+1)

in the sense that the unevaluated expressions are retained in memory
longer than necessary; the difference being of course that it's not
garbage collection but beta-reduction that frees the memory in question.

On the other hand, I think that the situation of

  foldl (+1) 0 [1..100000]

in a strict language does not fit this definition of leak because
evaluating the list [1..100000] eagerly does not retain memory longer
than necessary, it consumes memory earlier than necessary.

So, this notion of leak is spot-on.

I'm not sure I'll use terms like strictness and laziness leak, I 
think it's hard to see what's being lost here, and if you have a 
laziness leak, it is unclear if it's too much laziness or too little?


Me too, I don't see a reason to muddy the word with other meanings.
Space leak is a good word in the sense that space describes the
leak; it's the space that leaks and goes down the drain. Neither
laziness nor strictness can leak and be washed away with the rain.



(Aside: am I the only one who is shocked by the memory consumption of
 modern programs?  I use a simple time tracker (gnotime), a calendar
 with a handful of entries (evolution), and they both typically 
consume half to one gigabyte of memory.  In all fairness, it seems to
 be much better under Ubuntu 8.04 than 7.10, but again, they haven't 
been running for very long yet.)


Yeah :( When a piece of software wastes time and memory, they should
have written it in Haskell, so that at least the other bugs wouldn't
plague me as well.


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread Paul Johnson

Achim Schneider wrote:

There won't ever be a space leak without a time leak nor a time leak
without a space leak. I'd just call it a leak.
  
Actually I think you can have a space leak without a time leak.  For 
instance if every time around the main loop I cons data onto a linked 
list that never gets freed then I have a space leak.  If the list never 
gets used (or more realistically, if the program only ever uses the 
first N entries) then there is no time leak.


I agree that a time leak implies a space leak though.

Paul.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Laziness leaks

2008-06-05 Thread Achim Schneider
Paul Johnson [EMAIL PROTECTED] wrote:

 Achim Schneider wrote:
  There won't ever be a space leak without a time leak nor a time leak
  without a space leak. I'd just call it a leak.

 Actually I think you can have a space leak without a time leak.  For 
 instance if every time around the main loop I cons data onto a linked 
 list that never gets freed then I have a space leak.  If the list
 never gets used (or more realistically, if the program only ever uses
 the first N entries) then there is no time leak.
 
Sure there is: you leaked time while constructing the list.

The whole topic seems to degenerate into nit-picking for border cases.

One could define a leak as a property of an error-free program
resulting in non-optimal performance, or, much more concisely,
a trap no sufficiently smart programmer runs into.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread apfelmus

Ronald Guida wrote:

So I just thought of something.  If laziness leads to laziness leaks,
then is there such a thing as a strictness leak?  I realized that the
answer is yes.

A lazy leak is a situation where I'm wasting resources to delay a
sequence of calculations instead of just doing them now.  But in a
strict language, I might waste resources to compute things that I'll
never need.  I would call that a strictness leak.

Now I could ask the dual question, "How do I detect strictness leaks?",
and I would probably get the same answers: profiling, looking at
object code, and being explicit about the evaluation strategy.

Both types of leaks share a lot in common.  In both cases, I'm wasting
resources.  If I have a real-time system, then either type of leak can
cause me to a miss a deadline.


I haven't heard the terms laziness leak and strictness leak before, 
imho they sound a bit spooky because it's not clear to me what the 
situation without leak would be. (Time vs Space? Is an O(n) algorithm a 
strictness leak compared to an O(log n) algorithm?)



Note that lazy evaluation never wastes time; evaluating a term with lazy 
evaluation will always take less reduction steps than doing so eagerly 
or partly eagerly. But it can waste space (-> space leak), for 
instance by accumulating a big expression like


  (..) -> ((..)+1) -> (((..) + 1) + 1) -> etc.

instead of evaluating  x+1  immediately

   5 -> 6 -> 7 -> etc.

However, this would be wasted time in case the whole expression will not 
be evaluated but just thrown away. So, it's a trade-off.


The effect you have in mind only appears in real-time systems, where 
lazy evaluation procrastinates everything by default. So, trying to 
implement a real-time system in a lazy language is more or less a 
paradox :) as Okasaki already points out in his book.



Eager evaluation may waste both time and space compared to alternative 
courses of reduction.



Regards,
apfelmus

PS: The reduction strategies we compare to don't evaluate under lambdas.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread Albert Y. C. Lai

apfelmus wrote:
I haven't heard the terms laziness leak and strictness leak before, 
imho they sound a bit spooky because it's not clear to me what the 
situation without leak would be. (Time vs Space? Is an O(n) algorithm a 
strictness leak compared to an O(log n) algorithm?)


Leak refers to a surprise. You didn't expect Firefox to use Omega(n) 
memory where n is the number of times you refresh a rather plain static 
HTML page, but it does, and you call it a memory leak. You didn't 
expect foldl to lump a big thunk, but it does, and you call it a lazy 
leak. Therefore leak refers to a program failing your expectation - 
even if you yourself wrote the program.


(Leak, bug, issue... We surely are very creative in how to avoid 
calling a shovel a shovel, or an error an error.)


The solution is better education, better reasoning, and Intelligent 
Design. As you write every line of code, you should already know what to 
expect. No magic, no surprise, just science.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread Ryan Ingram
On 6/4/08, apfelmus [EMAIL PROTECTED] wrote:
 Note that lazy evaluation never wastes time; evaluating a term with lazy
 evaluation will always take less reduction steps than doing so eagerly or
 partly eagerly.

True, but you can still have a time leak; this is particularly
relevant in soft-real-time apps (like almost every app you use on a
regular basis, from your editor to games); a time leak is when a
computation that would take X time if evaluated every time step is
left for many timesteps without being evaluated, leading to a hitch in
responsiveness when it eventually is demanded N frames later taking
N*X time.

Eager applications almost never have this sort of time leak, but
it's easy for it to happen with lazy evaluation.

A simple example: consider a variable that holds the number of
timesteps since the app launched, for example.  Every time step it
gets incremented by 1.  If the result is evaluated every time step, it
takes a constant amount of time per timestep.  But if you go a long
time without evaluating it, you end up with both a space leak (as the
+1 thunks build up) but a time leak as well--when you eventually
evaluate it, it takes O(n) time, where n is the number of frames since
the variable was last evaluated.
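
A sketch of the two behaviours (a hypothetical frame loop, not from any
real framework):

-- the counter is only demanded every 1000th step, so up to a thousand
-- (+1) thunks build up and are then paid for in one hitch
loopLazy :: Int -> Int -> IO ()
loopLazy tick frames = do
    if tick `mod` 1000 == 0 then print frames else return ()
    loopLazy (tick + 1) (frames + 1)

-- forcing the counter on every step keeps the per-step cost constant
loopStrict :: Int -> Int -> IO ()
loopStrict tick frames = do
    if tick `mod` 1000 == 0 then print frames else return ()
    let frames' = frames + 1
    frames' `seq` loopStrict (tick + 1) frames'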

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread Achim Schneider
Ryan Ingram [EMAIL PROTECTED] wrote:

 On 6/4/08, apfelmus [EMAIL PROTECTED] wrote:
  Note that lazy evaluation never wastes time; evaluating a term with
  lazy evaluation will always take less reduction steps than doing so
  eagerly or partly eagerly.
 
 True, but you can still have a time leak; this is particularly
 relevant in soft-real-time apps (like almost every app you use on a
 regular basis, from your editor to games); a time leak is when a
 computation that would take X time if evaluated every time step is
 left for many timesteps without being evaluated, leading to a hitch in
 responsiveness when it eventually is demanded N frames later taking
 N*X time.
 
 Eager applications almost never have this sort of time leak, but
 it's easy for it to happen with lazy evaluation.
 
 A simple example: consider a variable that holds the number of
 timesteps since the app launched, for example.  Every time step it
 gets incremented by 1.  If the result is evaluated every time step, it
 takes a constant amount of time per timestep.  But if you go a long
 time without evaluating it, you end up with both a space leak (as the
 +1 thunks build up) but a time leak as well--when you eventually
 evaluate it, it takes O(n) time, where n is the number of frames since
 the variable was last evaluated.
 
There won't ever be a space leak without a time leak nor a time leak
without a space leak. I'd just call it a leak.

You don't come across space-leaks in strict programs often because
data is usually allocated statically even if execution is non-strict.

Piping /dev/zero into a program that just sleeps does leak space,
though.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: laziness, memoization and inlining

2008-05-15 Thread Simon Marlow

Scott Dillard wrote:

Simon, Don,

You're right. -fno-state-hack fixed it. I've opened a trac ticket.
Program and test data are there.

http://hackage.haskell.org/trac/ghc/ticket/2284


Ok, but do we really need two tickets for this?  Why open a new ticket 
rather than adding the information to the existing ticket?


Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: laziness, memoization and inlining

2008-05-15 Thread Simon Peyton-Jones
I'd forgotten there was an existing one when I asked Scott to document his 
problem.  I've cross-linked them, but equally good would be to transfer the info.

| -Original Message-
| From: Simon Marlow [mailto:[EMAIL PROTECTED] On Behalf Of Simon Marlow
| Sent: 15 May 2008 10:35
| To: Scott Dillard
| Cc: Simon Peyton-Jones; Don Stewart; glasgow-haskell-users@haskell.org
| Subject: Re: laziness, memoization and inlining
|
| Scott Dillard wrote:
|  Simon, Don,
| 
|  You're right. -fno-state-hack fixed it. I've opened a trac ticket.
|  Program and test data are there.
| 
|  http://hackage.haskell.org/trac/ghc/ticket/2284
|
| Ok, but do we really need two tickets for this?  Why open a new ticket
| rather than adding the information to the existing ticket?
|
| Cheers,
| Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: laziness, memoization and inlining

2008-05-14 Thread Simon Peyton-Jones
Scott

| I'm experiencing some undesirable performance behavior, I suspect from
| inlining things that shouldn't be, defeating my memoization attempts.

This is bad, very bad.  I think Don is right. I believe the following is 
happening.  In your main program you have

   do   let mesh = memoMesh rawMesh
            display :: IO ()
            display = draw mesh >> stuff
        setDisplayCallback display
        glutMainLoop

So the effect is that 'display' is performed many times, by glutMainLoop.

Now 'display' is seen by GHC thus:
display = \s -> draw mesh s >> stuff

The \s says "given the state of the world, s, I'll draw the mesh on it".  The 
state hack makes GHC think that a \s will only ever be called once (which 
is utterly false in this case), so it can inline mesh = memoMesh rawMesh.  
Result: disaster.


I bet you'll be fine if you compile your main module with -fno-state-hack.

But I should fix this, somehow.  It's coming up too often to justify the hack.  
Can you make a Trac bug report, and include your message and this one?

Thanks for reporting it.

Simon



| -Original Message-
| From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
| Behalf Of Scott Dillard
| Sent: 14 May 2008 00:13
| To: glasgow-haskell-users@haskell.org
| Subject: laziness, memoization and inlining
|
| Hi Everybody,
|
| I'm experiencing some undesirable performance behavior, I suspect from
| inlining things that shouldn't be, defeating my memoization attempts.
| I've been experimenting with purely functional 3D modeling code, so a
| mesh is (initially) something like
|
|  type Mesh = Map (Int,Int) (Int,Int)
|
| that is, a function from an edge to the next edge around the
| face, where an edge is a pair of Ints (the vertices.)
|
| This is nice and pure and everything, but it's slow to read from. So I
| have another immutable pointer-based representation
|
|  data Edge = Edge { edgeOrg :: Int , edgeSym :: Edge , edgeNext :: Edge }
|
| which is something like a half-edge, for those familiar with such
| things. It's basically a big net made of circular lists that are tied
| together. I do the knot tying stuff to create this thing,
|
|  memoMesh :: Map (Int,Int) (Int,Int) -> Edge
|  memoMesh nexts = head $ Map.elems ties
|    where
|      ties = Map.mapWithKey (\ij _ -> make ij) nexts
|      lookup ij = trace "hello" $ fromJust $ Map.lookup ij ties
|      make ij@(i,j) = Edge i (lookup (j,i)) (lookup . fromJust $ Map.lookup ij nexts)
|
| The program first loads the model file and creates the Edge-based mesh
| using the memoMesh function. The result is then captured in the
| closure for the rendering callback in GLUT. When I compile with -O0 I
| see the hello traces only during the first drawing. Subsequent
| redraws are fast and output no traces. When I compile with -O1 or -O2,
| the traces get output on every redraw, and it's very slow. I suspect
| all of the calls which create the mesh are inlined into the rendering
| callback, effectively rebuilding the mesh on every draw.
|
| I've tried littering NOINLINE pragmas all around, to no avail.
|
| The main function is something like
|
|  main = do
|    initGlut ...
|    rawMesh <- loadMeshFile ...
|    let
|      mesh = memoMesh rawMesh
|      otherstuff = ...
|      display =
|        draw mesh >> amongOtherThings
|    displayCallback $= display
|    glutMainLoop
|
| Can someone help me understand what's going on here? Is there a nice
| solution to this, hopefully a single strategic pragma or something?
|
| Scott
| ___
| Glasgow-haskell-users mailing list
| Glasgow-haskell-users@haskell.org
| http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: laziness, memoization and inlining

2008-05-14 Thread Scott Dillard
Simon, Don,

You're right. -fno-state-hack fixed it. I've opened a trac ticket.
Program and test data are there.

http://hackage.haskell.org/trac/ghc/ticket/2284

Scott



On Wed, May 14, 2008 at 1:48 AM, Simon Peyton-Jones
[EMAIL PROTECTED] wrote:
 Scott

 | I'm experiencing some undesirable performance behavior, I suspect from
 | inlining things that shouldn't be, defeating my memoization attempts.

 This is bad, very bad.  I think Don is right. I believe the following is 
 happening.  In your main program you have

   do   let mesh = memoMesh rawMesh
            display :: IO ()
            display = draw mesh >> stuff
        setDisplayCallback display
        glutMainLoop

 So the effect is that 'display' is performed many times, by glutMainLoop.

 Now 'display' is seen by GHC thus:
display = \s -> draw mesh s >> stuff

 The \s says "given the state of the world, s, I'll draw the mesh on it".  
 The state hack makes GHC think that a \s will only ever be called once 
 (which is utterly false in this case), so it can inline mesh = memoMesh 
 rawMesh.  Result: disaster.


 I bet you'll be fine if you compile your main module with -fno-state-hack.

 But I should fix this, somehow.  It's coming up too often to justify the 
 hack.  Can you make a Trac bug report, and include your message and this one?

 Thanks for reporting it.

 Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: laziness, memoization and inlining

2008-05-13 Thread Don Stewart
sedillard:
 Hi Everybody,
 
 I'm experiencing some undesirable performance behavior, I suspect from
 inlining things that shouldn't be, defeating my memoization attempts.
 I've been experimenting with purely functional 3D modeling code, so a
 mesh is (initially) something like
 
  type Mesh = Map (Int,Int) (Int,Int)
 
 that is, a function from an edge to the next edge around the
 face, where an edge is a pair of Ints (the vertices.)
 
 This is nice and pure and everything, but it's slow to read from. So I
 have another immutable pointer-based representation
 
  data Edge = Edge { edgeOrg :: Int , edgeSym :: Edge , edgeNext :: Edge }
 
 which is something like a half-edge, for those familiar with such
 things. It's basically a big net made of circular lists that are tied
 together. I do the knot tying stuff to create this thing,
 
  memoMesh :: Map (Int,Int) (Int,Int) -> Edge
  memoMesh nexts = head $ Map.elems ties
    where
      ties = Map.mapWithKey (\ij _ -> make ij) nexts
      lookup ij = trace "hello" $ fromJust $ Map.lookup ij ties
      make ij@(i,j) = Edge i (lookup (j,i)) (lookup . fromJust $ Map.lookup ij nexts)
 
 The program first loads the model file and creates the Edge-based mesh
 using the memoMesh function. The result is then captured in the
 closure for the rendering callback in GLUT. When I compile with -O0 I
 see the hello traces only during the first drawing. Subsequent
 redraws are fast and output no traces. When I compile with -O1 or -O2,
 the traces get output on every redraw, and it's very slow. I suspect
 all of the calls which create the mesh are inlined into the rendering
 callback, effectively rebuilding the mesh on every draw.

Hmm. I wonder if *this* is the no-state-hack at play.

Does -fno-state-hack help?

 I've tried littering NOINLINE pragmas all around, to no avail.
 
 The main function is something like
 
  main = do
    initGlut ...
    rawMesh <- loadMeshFile ...
    let
      mesh = memoMesh rawMesh
      otherstuff = ...
      display =
        draw mesh >> amongOtherThings
    displayCallback $= display
    glutMainLoop
 
 Can someone help me understand what's going on here? Is there a nice
 solution to this, hopefully a single strategic pragma or something?

Is it possible to boil this down to a program that doesn't use GL?

-- Don
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell-cafe] Re: Laziness and Either

2008-04-23 Thread apfelmus

John Goerzen wrote:

On Mon April 21 2008 3:26:04 pm Magnus Therning wrote:

In order to allow lazy decoding I ended up exporting decode' as well:

  decode' :: String -> [Maybe Word8]


I take it that in a situation like this, you'd have either:

[]   -- success with empty result

a list full of Just x
 -- success with valid results

a list with 0 or more Just x, followed by one Nothing
 -- an error

Makes sense to me.  What impact does this have on performance?


I think that using  [Maybe a]  for this purpose is too fine-grained, I 
would use a custom list type


  data River a = a :< (River a) | Done | Error

(I didn't want to call it  Stream  because that name is too overloaded 
already and  PartialList  is too long :) The three constructors 
correspond to the three cases you mention. In particular, Error takes 
the role of the last  Nothing .


In other words, we just replace the usual end of list [] with another 
data type. Thus, the general version is


  data River b a = a :< (River b a) | End b

Of course, this type is isomorphic to

  River b a ~ (b, [a])

The latter just puts the end result up front which is the original idea 
for lazy parsing: report the error  b  but also return a (partial) 
result  [a] .
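
For instance, sticking with the  River  type from above (with its
sketched  :<  constructor), the two components of that pair can be
recovered separately:

  results :: River b a -> [a]      -- the partial result, lazily
  results (x :< r) = x : results r
  results (End _)  = []

  verdict :: River b a -> b        -- the end result; demands everything
  verdict (_ :< r) = verdict r
  verdict (End b)  = b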


Also, I wonder if there is some call for tools in Data.Either to support this 
type of usage?  For example:


type EitherList a b = [Either a b]

then some functions such as, say, mapEither or foldEither that act like 
try/catch: a special function to use for an exception, and otherwise 
they unwrap the Right side passing it along to others.


The  River  thing has the drawback that you have to rewrite all the 
standard list functions, but since you're ready to accept that in 
the case of  [Either a b]  anyway, you can just as well use the  River  thing.



Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness and Either

2008-04-23 Thread David Roundy
On Wed, Apr 23, 2008 at 01:12:16PM +0200, apfelmus wrote:
 I think that using  [Maybe a]  for this purpose is too fine-grained, I 
 would use a custom list type
 
   data River a = a :< (River a) | Done | Error
 
 (I didn't want to call it  Stream  because that name is too overloaded 
 already and  PartialList  is too long :) The three constructors 
 correspond to the three cases you mention. In particular, Error takes 
 the role of the last  Nothing .

That sounds like a good idea.  But I'd call it Creek, because a river is
present year-round, while a creek sometimes dries up in the summer.  :) And
I'd also vote for adding a String (or more generic parameter) to the Error
type, so you can give some sort of reporting when something goes wrong.
String is appealing, because then you could make Creek a monad, and fail
could generate a nice Error message (assuming fail = Error).

Of course, you could take the silly metaphor even further

data Creek a = a :< (Creek a) | Ocean | Drought String

:)
-- 
David Roundy
Department of Physics
Oregon State University
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Laziness (was: [Haskell-cafe] Performance problem with random numbers)

2007-10-15 Thread David Roundy
On Sun, Oct 14, 2007 at 11:54:54PM +0200, ntupel wrote:
 On Sat, 2007-10-13 at 09:56 -0400, Brandon S. Allbery KF8NH wrote:
  Now you need to start forcing things; given laziness, things tend to  
  only get forced when in IO, which leads to time being accounted to  
  the routine where the forcing happened.  If random / randomR are  
  invoked with large unevaluated thunks, their forcing will generally  
  be attributed to them, not to functions within the thunks.
  
  (Yes, this means profiling lazy programs is a bit of a black art.)
 
 After more testing I finally realized how right you are. It appears that
 my problem is not related to random/randomR but only to laziness. I came
 up with a test that doesn't use random numbers at all and still needs
 about 2.5 seconds to complete (it is really just meaningless
 computations):

Here's a modified version of your code that prints out a real result, by
using sum rather than seq to force the computation:

module Main where

main :: IO ()
main = do let n = 1000000 :: Int
          print $ sum (take n $ test 1 [1,2..])

test :: Int -> [Int] -> [Int]
test t g =
    let (n, g') = next t g
    in
    n : test t g'

next :: Int -> [Int] -> (Int, [Int])
next x (y:ys) =
    let n = func y
    in
    if n >= 0.5 then (x, ys) else (0, ys)
    where
    func x = fromIntegral x / (10 ^ len x)
        where
        len 0 = 0
        len n = 1 + len (n `div` 10)

On my computer this takes 4 seconds to run.  I can speed it up by an order
of magnitude by writing code that is friendlier to the compiler:

module Main where

main :: IO ()
main = do let n = 1000000 :: Int
          print $ sum (take n $ test 1 [1,2..])

test :: Int -> [Int] -> [Int]
test t g = map f g
  where f :: Int -> Int
        f y = if func y >= 0.5 then t else 0
        func :: Int -> Double
        func x = fromIntegral x / mypow x
        mypow 0 = 1
        mypow n = 10 * mypow (n `div` 10)

Switching to map and simplifying the structure gained me 30% or so, but the
big improvement came from the elimination of the use of (^) by writing
mypow (ill-named).

I have no idea if this example will help your actual code, but it
illustrates that at least in this example, it's pretty easy to gain an
order of magnitude in speed.  (That func is a weird function, by the
way.)

Incidentally, implementing the same program in C, I get:

#include <stdio.h>

int test(int, int);
double func(int);
int mypow(int);

int mypow(int n) {
  double result = 1;
  while (n > 0) {
    result *= 10;
    n /= 10;
  }
  return result;
}

double func(int x) {
  return x / (double) mypow(x);
}

int test(int t, int y) {
  if (func(y) >= 0.5) {
    return t;
  } else {
    return 0;
  }
}

int main() {
  int i;
  int sum = 0;
  for (i = 0; i < 1000000; i++) {
    sum += test(1, i);
  }
  printf("sum is %d\n", sum);
  return 0;
}

Which runs more than 10 times faster than my Haskell version, so there's
obviously still a lot of room for optimization.  :( Incidentally, a version
written in C that uses pow for the 10^(len n) runs in only half the time of
my Haskell version (five times the time of the C version I give)--confirming
that pow is indeed a very expensive operation (as I already knew) and that
if you call the pow function it *ought* to dominate your timing.  But we've
also still clearly got some seriously painful loop overhead.  :(
-- 
David Roundy
Department of Physics
Oregon State University
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Laziness (was: [Haskell-cafe] Performance problem with random numbers)

2007-10-15 Thread ntupel
On Mon, 2007-10-15 at 10:48 -0400, David Roundy wrote:
 I have no idea if this example will help your actual code, but it
 illustrates that at least in this example, it's pretty easy to gain an
 order of magnitude in speed.  (That func is a weird function, by the
 way.)
 

Thanks for your reply David,

Unfortunately my original problem was that System.Random.{random,
randomR} is used instead of all these weird test functions that I came
up with during experimentation. And I can't force anything inside StdGen,
so I see no way of speeding up the original program short of replacing the
random number generator itself. When I did that, it became about 4 times
faster than with System.Random but still an order of magnitude slower
than, for instance, the Java implementation (and I can confirm
that (^) is *very* expensive in this context).

Many thanks again,
Thoralf





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Laziness (was: [Haskell-cafe] Performance problem with random numbers)

2007-10-14 Thread Brandon S. Allbery KF8NH


On Oct 14, 2007, at 17:54 , ntupel wrote:


Now my problem still is, that I don't know how to speed things up. I
tried putting seq and $! at various places with no apparent  
improvement.
Maybe I need to find a different data structure for my random  
module and

lazy lists are simply not working well enough here?


Unfortunately I'm not so good at that myself.  Even more
unfortunately, my understanding is that randomly using seq and/or $!
not only usually doesn't help, but can actually make things slower;
and to do it right, you need to refer to the simplified "Core"
Haskell code generated by GHC.  And understanding *that* requires
rather more familiarity with Core than I have.  :/


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Laziness (was: [Haskell-cafe] Performance problem with random numbers)

2007-10-14 Thread Derek Elkins
On Sun, 2007-10-14 at 18:14 -0400, Brandon S. Allbery KF8NH wrote:
 On Oct 14, 2007, at 17:54 , ntupel wrote:
 
  Now my problem still is, that I don't know how to speed things up. I
  tried putting seq and $! at various places with no apparent  
  improvement.
  Maybe I need to find a different data structure for my random  
  module and
  lazy lists are simply not working well enough here?
 
 Unfortunately I'm not so good at that myself.  Even more  
 unfortunately, my understanding is that randomly using seq and/or $!  
 not only usually doesn't help, but can actually make things slower;  
 and to do it right, you need to refer to the simplified Core  
 Haskell code generated by GHC.  And understanding *that* requires  
 rather more familiarity with Core than I have.  :/
 

A lot of times just unfolding a few evaluations by hand (perhaps
mentally) will point out issues readily and readily suggest their
solution.  After a while you will know what kinds of things are
problematic and not write such code to begin with.  Unfortunately, this
is not something widely and well understood, and it is not part of almost
any of the available educational material for Haskell.  Programming in a
lazy language is more different from programming in an eager one than
almost any resource states.
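
For instance, unfolding  foldl  by hand makes the thunk build-up
immediately visible:

  foldl (+) 0 [1,2,3]
= foldl (+) (0+1) [2,3]
= foldl (+) ((0+1)+2) [3]
= foldl (+) (((0+1)+2)+3) []
= ((0+1)+2)+3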

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Laziness

2003-08-02 Thread Jon Fairbairn
On 2003-08-02 at 14:36PDT Dominic Steinitz wrote:
 Could someone explain to me why this doesn't work
 
 test l =
hs
   where
  hs = map (\x -> [x]) [0..abs(l `div` hLen)]
  hLen   = length $ head hs
 
 whereas this does
 
 test l =
hs
   where
  hs = map (\x -> [x]) (0:[1..abs(l `div` hLen)])
  hLen   = length $ head hs
 
 I would have thought laziness would allow the compiler to
 know that hs would contain at least one element and
 therefore calculate hLen.

Laziness isn't enough to tell it that. It would also have to
know that abs never returns an answer less than zero 
([0 .. -1] == []).

All the compiler knows is that abs returns an integer.
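
(The second version works because the head of  hs  can be produced
without computing  hLen : head hs = [0] whatever the rest of the list
turns out to be, so  hLen = 1  can be computed, and that in turn fixes
the tail.)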

  Jón

PS I don't know the general policy, but I for one dislike
getting emails in HTML unless it's /absolutely/ necessary for
the content. Multipart/alternative doesn't help much either.


-- 
Jón Fairbairn [EMAIL PROTECTED]


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: laziness in IO

2003-01-09 Thread Christopher Milton
Hal's solution reminds me of the paper by
Levent Erkök and John Launchbury:
Recursive Monadic Bindings: Technical Development and Details.
http://www.cse.ogi.edu/PacSoft/projects/rmb/mfixTR.pdf

One of their examples uses MonadRec and IOExts.
webpage:
Value recursion in Monadic Computations
(a.k.a. Recursive Monadic Bindings)
http://www.cse.ogi.edu/PacSoft/projects/rmb/index.html

How to use the mdo-notation in Hugs and GHC
http://www.cse.ogi.edu/PacSoft/projects/rmb/usage.html

Or maybe module Control.Monad.List would be useful.

--- Hal Daume III [EMAIL PROTECTED] wrote:
 The following works for me:
 
 import IOExts
 
 main = do xs <- unsafeInterleaveIO getStrings
           putStrLn (head xs)
 
 getStrings = do
   x <- getLine
   if x == "stop"
     then return []
     else do xs <- unsafeInterleaveIO getStrings; return (x:xs)
 
 
 in this particular case, the unsafeInterleaveIO on the recursive call to
 getStrings isn't necessary, but if you change 'putStrLn (head xs)' to
 'mapM_ putStrLn (take 3 xs)' then it's necessary (I believe).
 
 HTH
 
  - Hal
 
 --
 Hal Daume III
 
  Computer science is no more about computers| [EMAIL PROTECTED]
   than astronomy is about telescopes. -Dijkstra | www.isi.edu/~hdaume
 
 On Wed, 8 Jan 2003, Amanda Clare wrote:
 
  How can I recursively collect a list of things while in the IO monad, 
  and return the list lazily as it's constructed rather than waiting until 
  they've all been collected?
  
  Perhaps an example will make things clearer:
  
  
  main =
     do xs <- getStrings
        putStrLn (head xs)
  
  getStrings =
     do x <- getLine
        if x == "stop"
          then return []
          else do xs <- getStrings
                  return (x:xs)
  
  
  How can I make getStrings lazy? main should be able to terminate 
  immediately after I've entered just one line.
  
  Amanda
  
  ps: This is a fake example, I'm really trying to lazily retrieve answers 
  fetched from an Oracle SQL query.
  
  ___
  Haskell mailing list
  [EMAIL PROTECTED]
  http://www.haskell.org/mailman/listinfo/haskell
  
 
 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell


=
Christopher Milton
[EMAIL PROTECTED]

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: laziness in IO

2003-01-08 Thread Hal Daume III
The following works for me:

import IOExts

main = do xs <- unsafeInterleaveIO getStrings
          putStrLn (head xs)

getStrings = do
  x <- getLine
  if x == "stop"
    then return []
    else do xs <- unsafeInterleaveIO getStrings; return (x:xs)


in this particular case, the unsafeInterleaveIO on the recursive call to
getStrings isn't necessary, but if you change 'putStrLn (head xs)' to
'mapM_ putStrLn (take 3 xs)' then it's necessary (I believe).

HTH

 - Hal

--
Hal Daume III

 Computer science is no more about computers| [EMAIL PROTECTED]
  than astronomy is about telescopes. -Dijkstra | www.isi.edu/~hdaume

On Wed, 8 Jan 2003, Amanda Clare wrote:

 How can I recursively collect a list of things while in the IO monad, 
 and return the list lazily as it's constructed rather than waiting until 
 they've all been collected?
 
 Perhaps an example will make things clearer:
 
 
 main =
    do xs <- getStrings
       putStrLn (head xs)
 
 getStrings =
    do x <- getLine
       if x == "stop"
         then return []
         else do xs <- getStrings
                 return (x:xs)
 
 
 How can I make getStrings lazy? main should be able to terminate 
 immediately after I've entered just one line.
 
 Amanda
 
 ps: This is a fake example, I'm really trying to lazily retrieve answers 
 fetched from an Oracle SQL query.
 
 ___
 Haskell mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell
 

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: laziness in IO

2003-01-08 Thread S.M.Kahrs
I don't think you can do what you want using standard lists,
not without some dirty trickery...

But you can define a datatype for such a purpose, which would essentially
have to put the tail into the monad.

Disadvantage: you would have to redo lots of the list stuff yourself.
I had once started writing such a module, it's attached...

With this you can write your program as follows:

main =
    do xs <- getStrings
       putStrLn (headML xs)

getStrings =
    do { x <- getLine; if x == "stop" then return NIL
                       else return (x :<: getStrings)
       }

So, this uses headML instead of head, NIL instead of [], etc.
But the thing that makes everything work is the different cons operator,
the :<:, which allows the list tail to still sit in some monad.

Hope this helps
Stefan Kahrs

module ListForMonad where

import Monad

data Mlist m a = NIL | a :<: m (Mlist m a)

nullML :: Mlist m a -> Bool
nullML NIL = True
nullML _ = False

(<:) :: Monad m => a -> m (Mlist m a) -> m (Mlist m a)
x <: ms = return (x :<: ms)

(<++>) :: Monad m => Mlist m a -> m (Mlist m a) -> m (Mlist m a)
xs <++> ms = foldrML (<:) ms xs

(<!!>) :: Monad m => Mlist m a -> Int -> m a
NIL        <!!> _ = error "index out of bounds"
(x :<: ms) <!!> 0 = return x
(_ :<: ms) <!!> n = ms >>= (<!!> (n-1))

lengthML :: Monad m => Mlist m a -> m Int
lengthML NIL = return 0
lengthML (_ :<: ms) = liftM (+1) (ms >>= lengthML)

headML :: Mlist m a -> a
headML (x :<: _) = x
headML NIL = error "head of empty list"

lastML :: Monad m => Mlist m a -> m a
lastML (x :<: ms) =
    do  xs <- ms
        case xs of NIL -> return x
                   p   -> lastML p
lastML NIL = error "last of empty list"

tailML :: Mlist m a -> m (Mlist m a)
tailML (_ :<: ms) = ms
tailML NIL = error "tail of empty list"

initML :: Monad m => Mlist m a -> m (Mlist m a)
initML NIL = error "init of empty list"
initML (x :<: ms) =
    do  xs <- ms
        case xs of NIL -> return NIL
                   p   -> return (x :<: initML p)

replicateML :: Monad m => Int -> m a -> m (Mlist m a)
replicateML n a = liftM (takeML n) (repeatML a)

repeatML :: Monad m => m a -> m (Mlist m a)
repeatML action = xs
  where
  xs = do { r <- action; return (r :<: xs) }

takeML :: Monad m => Int -> Mlist m a -> Mlist m a
takeML _ NIL = NIL
takeML 0 _ = NIL
takeML n (x :<: ms) = x :<: liftM (takeML (n-1)) ms

dropML :: Monad m => Int -> Mlist m a -> m (Mlist m a)
dropML 0 xs = return xs
dropML _ NIL = return NIL
dropML n (x :<: ms) = ms >>= dropML (n-1)


splitAtML :: Monad m => Int -> Mlist m a -> m (Mlist m a, m (Mlist m a))
splitAtML 0 xs = return (NIL, return xs)
splitAtML n NIL = return (NIL, return NIL)
splitAtML n (x :<: ms) =
    do  m <- ms
        (as,ns) <- splitAtML (n-1) m
        return (x :<: return as, ns)

reverseML :: Monad m => Mlist m a -> m (Mlist m a)
reverseML ms =
    do  xs <- mlToList ms
        foldr (<:) (return NIL) (reverse xs)

zipML :: Monad m => Mlist m a -> Mlist m b -> Mlist m (a,b)
zipML (x :<: ms) (y :<: ns) =
    (x,y) :<: do { xs <- ms; ys <- ns; return (zipML xs ys) }
zipML _ _ = NIL

unzipML :: Monad m => Mlist m (a,b) -> (Mlist m a, Mlist m b)
unzipML xs = (fmap fst xs, fmap snd xs) {- note: re-evaluation -}

instance Monad m => Functor (Mlist m) where
    fmap f NIL = NIL
    fmap f (x :<: ms) = f x :<: liftM (fmap f) ms

mlToList :: Monad m => Mlist m a -> m [a]
mlToList NIL = return []
mlToList (x :<: ms) = liftM (x:) (ms >>= mlToList)



foldrML :: Monad m => (a -> m b -> m b) -> m b -> Mlist m a -> m b
foldrML f n NIL = n
foldrML f n (x :<: ms) = f x (ms >>= foldrML f n)

blift :: Monad m => (a -> b -> b) -> (a -> m b -> m b)
blift f x act = liftM (f x) act

(<&&>) :: Monad m => Bool -> m Bool -> m Bool
True  <&&> xs = xs
False <&&> _  = return False

(<||>) :: Monad m => Bool -> m Bool -> m Bool
True  <||> xs = return True
False <||> xs = xs

andML :: Monad m => Mlist m Bool -> m Bool
andML xs = foldrML (<&&>) (return True) xs
orML :: Monad m => Mlist m Bool -> m Bool
orML xs = foldrML (<||>) (return False) xs

anyML :: Monad m => (a -> Bool) -> Mlist m a -> m Bool
anyML p xs = orML $ fmap p xs
allML :: Monad m => (a -> Bool) -> Mlist m a -> m Bool
allML p xs = andML $ fmap p xs

sumML :: (Monad m, Num a) => Mlist m a -> m a
sumML NIL = return 0
sumML (x :<: ms) = liftM (+x) (ms >>= sumML)

productML :: (Monad m, Num a) => Mlist m a -> m a
productML NIL = return 1
productML (x :<: ms) = liftM (*x) (ms >>= productML)

sequenceML :: Monad m => [m a] -> m (Mlist m a)
sequenceML [] = return NIL
sequenceML (x:xs) = liftM (:<: sequenceML xs) x

listEmbed :: Monad m => [a] -> Mlist m a
listEmbed [] = NIL
listEmbed (x:xs) = x :<: return (listEmbed xs)

filterML :: Monad m => (a -> Bool) -> Mlist m a -> m (Mlist m a)
filterML _ NIL = return NIL
filterML p (x :<: ms)
    | p x       = return (x :<: rs)
    | otherwise = rs
  where rs = ms >>= filterML p

takeWhileML :: Monad m => (a -> Bool) -> Mlist m a -> Mlist m a
takeWhileML _ NIL = NIL
takeWhileML p (x :<: ms)
    | p x       = x :<: liftM (takeWhileML p) ms
    | otherwise = NIL

dropWhileML :: Monad m =>

Re: laziness again...

2002-02-18 Thread Jay Cox

On 18 Feb 2002, Ketil Z. Malde wrote:


 Hi,

 I'm a bit puzzled by this observatio that I made.  I have a function
 that, pseudocoded, lookes somewhat like

 f i as bs cs = ins i (f (i+1) as) ++ ins i (f (i+1) bs) ++ ins i (f (i+1) cs)
 where ins i = manipulates the first element of the list

 Now, without the ins'es, the function appears to be lazy, i.e I can
 take a part of it without evaluating the rest.  With the ins'es,
 the whole thing seems to be calculated as soon as I touch it.

 Is this obvious?  Or is my observation wrong?

 -kzm


um, looks like ins i returns a function of one argument which results
in a list.  (or ins is a function of two arguments which results in a list
and ins i is just using currying;  however you want to look at it)

My question is if that function (ins i) is strict.

Jay

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: laziness again...

2002-02-18 Thread Jay Cox

On Mon, 18 Feb 2002, Jay Cox wrote:

 On 18 Feb 2002, Ketil Z. Malde wrote:

 
  Hi,
 
  I'm a bit puzzled by this observatio that I made.  I have a function
  that, pseudocoded, lookes somewhat like
 
  f i as bs cs = ins i (f (i+1) as) ++ ins i (f (i+1) bs) ++ ins i (f (i+1) cs)
  where ins i = manipulates the first element of the list
 
  Now, without the ins'es, the function appears to be lazy, i.e I can
  take a part of it without evaluating the rest.  With the ins'es,
  the whole thing seems to be calculated as soon as I touch it.
 
  Is this obvious?  Or is my observation wrong?
 
  -kzm


 um, looks like ins i returns a function of one argument which results
 in a list.  (or ins is a function of two arguments which results in a list
 and ins i is just using currying;  however you want to look at it)

 My question is if that function (ins i) is strict.

 Jay

Cough.

I should have read your letter more closely.

  where ins i = manipulates the first element of the list

if you mean that (ins i) :: [a] -> [a] manipulates the first element of
the list it takes, then of course it is strict, because in

ins i = \x -> case x of
                (a:as) -> foo a : as
                []     -> []
  where foo a = ...

((ins i) list) must pattern match on list.  Therefore (ins i) _|_ = _|_.
QED.


I guess the kind of strictness I was thinking about is something along
the lines of what you might get if you applied deepseq to the list (if
such a type of strictness has been defined by anybody!)
I got a bit confused.

(example definition for sake of argument)

(a:xs) ++ list = a:(xs ++ list)
[] ++ list = list


If (ins i) had such deep strictness, then when the first element of (f i as
bs cs) was forced, Haskell would generate all of the list created by
(ins i (f (i+1) as)), since (++) is strict in the first argument.
However, (:) is strict in neither argument, so (++) is applied lazily.


Anyway...


You might follow Bernard James POPE [EMAIL PROTECTED] recent posting
(Subject: re: order of evaluation) about using undefined in the list to
see how things are  being evaluated, like in

list = 3:4:5:undefined

or


list = 3:4:5:error foo


(I got that idea from one of Simon Marlow's papers)


Jay Cox


PS: deepseq was mentioned earlier in either this list or the other main
Haskell list.  I believe it was actually a DeepSeq module of some sort.
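
Something along these lines, sketched here for the sake of the
discussion (the real module surely covers more types):

class DeepSeq a where
    deepSeq :: a -> b -> b    -- fully evaluate the first argument

instance DeepSeq Int where
    deepSeq = seq

instance DeepSeq a => DeepSeq [a] where
    deepSeq []     b = b
    deepSeq (x:xs) b = deepSeq x (deepSeq xs b)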








___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: laziness again...

2002-02-18 Thread Ketil Z. Malde

Jay Cox [EMAIL PROTECTED] writes:

 where ins i = manipulates the first element of the list

 if you mean that (ins i) :: [a] -> [a] manipulates the first element of
 the list it takes, then of course it is strict, because in

It is strict in the head of the list, yes.  I.e. it is defined as

...where ins i ((_,x):xs) = (i,x):xs

Apologies for being imprecise.

 PS: deepseq was mentioned earlier in either this list or the other main
 Haskell list.  I believe it was actually a DeepSeq module of some sort.

After some heap profiling (which produces marvellous plots, but is
very expensive in running time: my tests that normally run in a couple
of minutes, without profiling or with just -p, took over an hour.
Also, the graphs indicate quite a bit less memory than top does, but I
ascribe that to the RTS and garbage-collectables lying around), I'm
starting to suspect I'm generating a lot of unevaluated thunks.

Is there any good tutorial or other material dealing with, and
improving (memory) performance by, strictness in Haskell?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: laziness and functional middleware

1998-06-19 Thread Simon L Peyton Jones


 The paper says:
 "We are working on a distributed implementation of Concurrent Haskell.
 One nice property of MVars is that they seem relatively easy to implement
 in a distributed setting..."
 I assume that they are not referring to GPH here. 
 (I was surprised at this statement given what I presume are the
 substantial difficulties of defining a wire format for lazy
 datastructures).
 
 I would assume that the distributed implementation gives the programmer
 some access to whether a machine is "up" or I suppose that you could use
 distributed mvars to design your own ping (cool!)

OK, here's the current story.

* Concurrent Haskell works on uniprocessors only.  It's intended for
  programs that need explicitly-forked, concurrent, I/O-performing
  threads.  Processes are explicitly forked with forkIO, which is
  in the IO monad.

  Results are necessarily non-deterministic -- that's part
  of the point!  There is no attempt to use parallelism to gain
  performance.

* GpH (Glasgow Parallel Haskell) works on a variety of multiprocessors.
  It's specifically intended to harness parallelism to gain performance.

  Parallelism is explicitly sparked with `par`, but the RTS is free to
  ignore such sparks.  `par` is not in the IO monad: it has type
par :: a -> b -> b

  Results remain completely deterministic even on a multiprocessor.
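
  The canonical toy example, sketched (assuming `par` is in scope, as
  GpH provides it):

    nfib :: Int -> Int
    nfib n
      | n <= 1    = 1
      | otherwise = nf1 `par` (nf2 `seq` nf1 + nf2 + 1)
      where
        nf1 = nfib (n-1)
        nf2 = nfib (n-2)

  The `par` merely suggests evaluating nf1 in parallel; the result is
  the same whether or not the RTS takes the hint.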


I'd like to implement a sort of integrated system.  Certainly it
would be nice to be able to make Concurrent Haskell into Distributed
Haskell. To do this MVars would need to work across processors
(not hard), and forkIO would need a variant that said "fork this
process on this other host".  The goal would still not be
performance-through-parallelism; rather, it is part of the
*specification* of a distributed program that its different parts
should execute in different places.
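
As a toy illustration of the primitives involved, here is a
single-machine "ping" with forkIO and MVars; a distributed variant
would differ only in where the server thread runs:

ping :: IO ()
ping = do
    req  <- newEmptyMVar
    resp <- newEmptyMVar
    -- the "remote" side: answer every request
    _ <- forkIO (let serve = do { takeMVar req; putMVar resp (); serve }
                 in serve)
    -- the "local" side: a completed round trip means the peer is alive
    putMVar req ()
    takeMVar resp
    putStrLn "peer is alive"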

Perhaps one could also have some combination of DH and GpH; after
all they use similar underlying mechanisms.

I believe that Kevin Hammond, Phil Trinder and Rita Loogen are planning
to do something like this, but I'm afraid it's not directly in my 
sights!

Einar Karlsson ([EMAIL PROTECTED]) has been doing a 
huge application in Concurrent Haskell so he may have other comments.


I hope that helps to clarify where things are at from the implementation
point of view.  One way to get us hackers to do things is to 
say loud and clear what would actually be useful to you.

Best wishes

Simon





Re: laziness and functional middleware

1998-06-18 Thread Alastair Reid


 1. What implementations are supporting concurrent Haskell?

AFAIK only GHC and Hugs.

 What modules do I have to import to use it?

Read the Hugs-GHC documentation in hugs/doc

 Hugs claims to support it, but I can't figure out how to engage it.

It's on all the time.  The implementation cost is so low that it's
not worth turning off.

 2. Are Haskell processes/threads preemptive or cooperative?

GHC has preemptive threads.
Hugs has cooperative threads.
The merged Hugs-GHC runtime has preemptive threads.

  Also, if you are using system
 threads then you get the additional benefit that some operating systems
 will automatically distribute them accross CPUs.

Hugs/GHC runs everything in a single OS thread.
There's no gain from having multiple threads because we'd have to 
 put a mutex round access to the heap - and Haskell programs spend
 all their time touching the heap.

 3. Do concurrent Haskell child-processes/threads survive the completion of
 main? 

I've no idea. 

 3. The paper discusses a distributed implementation.

There is a parallel GHC implementation - but I know very little about
it.  It tries to be totally transparent to the programmer.
AFAIK it can't handle machine/network failure in any useful way.


 5. How is the Glasgow Parallel Haskell's different from concurrent
 Haskell?

I think the Concurrent Haskell paper makes this distinction:

o they use "concurrent" when talking about things which are visible
  to the programmer - like interleaved execution.

o they use the word "parallel" when they want to talk about using multiple
  machines to make things go faster (but with no semantic consequences)


Alastair






Re: laziness and functional middleware

1998-06-18 Thread S. Alexander Jacobson

On Wed, 17 Jun 1998, Hans Aberg wrote:
   But I found it rather difficult to implement this style with POSIX (Java)
 threads: It is hard to guarantee that the computations do not hang. What
 I needed was to be able to guarantee that certain sequences in the
 implementation cannot be halted in the middle, but that is not possible
 with pre-emptive threads, I think.

I think SPJ's excellent paper on concurrent Haskell answers this issue
(and all my prior questions about IPC).

But it does lead to other questions:
1. What implementations are supporting concurrent Haskell?  What modules
do I have to import to use it?  Hugs claims to support it, but I can't
figure out how to engage it.

2. Are Haskell processes/threads preemptive or cooperative?  Java's lack
of specification of this difference has resulted in code that hits 
race conditions only on some platforms.  Also, if you are using system
threads then you get the additional benefit that some operating systems
will automatically distribute them across CPUs.

3. Do concurrent Haskell child-processes/threads survive the completion of
main?  In Java, only threads that are explicitly marked as Daemons,
survive after main completes.

3. The paper discusses a distributed implementation.  I am
currently writing code that will be distributed across ~100 machines.
Can distributed Haskell support this?  Does it allow machines access to
local environment (e.g. a local file store for each server)?  How does it
handle failure of individual machines?  How do you pass lazy
data structures around?  Where would I find out more?  

4. Has anyone used concurrent Haskell to write an HTTPd (or DBMS)?  If so,
how does it perform and where can I find it?   An HTTPd that is itself
distributed across a network would be much more elegant and manageable
than our current hacks like TCP/IP redirectors and DNS round-robin.

5. How is Glasgow Parallel Haskell different from Concurrent
Haskell?  My ignorant guess is that par is synchronous while forkIO is
asynchronous and that referential transparency means that the compiler can
parallelize a lot of functions without any hints from the programmer.  Is
that correct?

6. It seems like you can use concurrent Haskell to write very elegant
and clean Linda-style parallel systems in Haskell.  How much conversation
is there between the Yale Haskell people and the Yale Linda people? 

-Alex-

___
S. Alexander Jacobson   i2x Media  
1-212-697-0184 voice1-212-697-1427 fax






Re: laziness and functional middleware

1998-06-18 Thread S. Alexander Jacobson

On Thu, 18 Jun 1998, Alastair Reid wrote:
  What modules do I have to import to use it?
 
 Read the Hugs-GHC documentation in hugs/doc

Can you give me a more direct pointer? I am not finding it in my docs.
 
 
   Also, if you are using system
  threads then you get the additional benefit that some operating systems
  will automatically distribute them across CPUs.
 
 Hugs/GHC runs everything in a single OS thread.
 There's no gain from having multiple threads because we'd have to 
  put a mutex round access to the heap - and Haskell programs spend
  all their time touching the heap.

You would gain from having multiple threads if they distributed themselves
across processors and memory was cheap (you do a memcpy of all
data structures accessible from the fork point).  I guess this is what
Parallel Haskell is for.
 
  3. Do concurrent Haskell child-processes/threads survive the completion of
  main? 
 
 I've no idea. 

I guess I can test once I know how to invoke forkIO (see above).
 
  3. The paper discusses a distributed implementation.
 
 There is a parallel GHC implementation - but I know very little about
 it.  It tries to be totally transparent to the programmer.
 AFAIK it can't handle machine/network failure in any useful way.

The paper says:
"We are working on a distributed implementation of Concurrent Haskell.
One nice property of MVars is that they seem relatively easy to implement
in a distributed setting..."
I assume that they are not referring to GPH here. 
(I was surprised at this statement, given what I presume are the
substantial difficulties of defining a wire format for lazy
data structures.)

I would assume that the distributed implementation gives the programmer
some access to whether a machine is "up", or I suppose you could use
distributed MVars to design your own ping (cool!).
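
For what it's worth, here is a sketch of that ping idea using only
MVars and threads (modern Control.Concurrent names; pingWithin and
remoteEcho are invented for illustration, and in a distributed setting
the first put would come from the remote machine rather than a local
thread):

    import Control.Concurrent
      (forkIO, newEmptyMVar, takeMVar, threadDelay, tryPutMVar)

    -- Race an echo action against a timeout; whichever thread fills
    -- the MVar first decides the answer.
    pingWithin :: Int -> IO () -> IO Bool
    pingWithin micros remoteEcho = do
      done <- newEmptyMVar
      _ <- forkIO (remoteEcho >> tryPutMVar done True >> return ())
      _ <- forkIO (threadDelay micros >> tryPutMVar done False >> return ())
      takeMVar done   -- True if the echo beat the timeout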

-Alex-
___
S. Alexander Jacobson   i2x Media  
1-212-697-0184 voice1-212-697-1427 fax






Re: laziness and functional middleware

1998-06-17 Thread Simon L Peyton Jones


Alex,

  main = do  input <- getContents
             putStr $ addTwo $ makeLines input
 
  addTwo lines = ask1 ++ (ask2 (Strict x)) ++ (result (Strict y))
    where x:y:xs   = map read lines
          ask1     = "Enter an Integer: "
          ask2 _   = "Enter another Integer: "
          result _ = "The sum is " ++ show (x+y) ++ "\n"
 
  data (Eval a) => Strict a = Strict !a
  makeLines text =
   (takeWhile ('\n'/=) text) : (makeLines $ tail (dropWhile ('\n'/=) text))
 
 But this code doesn't work.  It prints all the text and then waits for
 input.  Shouldn't laziness guarantee that addTwo doesn't print "Enter
 another integer" until the user enters the first integer?  (that is what
 the file copy example in the tutorial implies) 

What is "Strict"?  If you said (x `seq` ask2) instead
of (ask2 (Strict x)), then you should get the behaviour you expect.
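
Concretely, a seq-based version of the earlier program might look like
this (a sketch; interact and the buffering call are just to keep it
self-contained, since the prompts carry no trailing newline):

    import System.IO (BufferMode (NoBuffering), hSetBuffering, stdout)

    main :: IO ()
    main = do
      hSetBuffering stdout NoBuffering
      interact (addTwo . lines)

    addTwo :: [String] -> String
    addTwo ls = ask1 ++ (x `seq` ask2) ++ (y `seq` result)
      where
        x : y : _ = map (read :: String -> Integer) ls
        ask1   = "Enter an Integer: "
        ask2   = "Enter another Integer: "
        result = "The sum is " ++ show (x + y) ++ "\n"

Forcing x before the second prompt, and y before the result, is what
ties each piece of output to the input it depends on.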
There's a critique of various I/O models in 

%A P Hudak
%A RS Sundaresh
%T On the expressiveness of purely-functional I/O systems
%R YALEU/DCS/RR-665, |DCS|, Yale University
%D March 1989

I don't know whether it got published anywhere.


 For example I am writing an application that handles HTTP transactions 
 and uses a database backend.
 Ideally, I would like to write cgifunctions of type:
 
 myCGIFunction :: [HTTPRequest] -> [DatabaseVersion] ->
   ([HTTPResponse],[DatabaseChanges])
 
 HTTPRequests come from _middleware_ that receives http requests from the
 httpd and appends them to a list. Attempts to get the next item in the
 HTTPRequest list block until requests come in to fill the list.
 
 DatabaseVersions come from a driver that appends a new txnBegin object
 to a list.
 
 The program would look something like:
 main = do 
   (dbSource,dbSink) <- dbConnect "ODBC: someodbcurl"
   (httpSource,httpSink) <- apacheConnect "urlToListenOn"
   (httpResponses,dbUpdates) <- return $ myCGIFunction httpSource dbSource
   dbSink dbUpdates
   httpSink httpResponses

I gather that HTTPRequests and DataBaseVersions arrive 
asynchronously and unpredictably.  So how does myCGIFunction know
which list to probe first?  Whichever it chooses it may block on
one while there is data on the other. Unless I'm misunderstanding.

You need to be able to say "wait until there's input available
on one of these two ports, and tell me which".  Something like
the Unix select call.  GHC's Posix library has such a thing,
but you can't use it in the way you describe because myCGIFunction
is a pure function on lists.
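
What you can do in Concurrent Haskell is give each source its own
thread and a shared channel, which recovers a select-like merge (a
sketch; Event, mergeSources and the two source actions are invented
names):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (newChan, readChan, writeChan)
    import Control.Monad (forever)

    data Event req db = HttpEvent req | DbEvent db

    -- Merge two blocking sources into one stream: the consumer sees
    -- whichever source produced next, non-deterministically.
    mergeSources :: IO req -> IO db -> IO (IO (Event req db))
    mergeSources getReq getDb = do
      chan <- newChan
      _ <- forkIO (forever (getReq >>= writeChan chan . HttpEvent))
      _ <- forkIO (forever (getDb >>= writeChan chan . DbEvent))
      return (readChan chan)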


So far as your dbSink is concerned, presumably what you have in mind is that
(dbSink dbUpdates) spins off a concurrent process that pulls on dbUpdates and
sends them to the database.  Concurrently, httpSink is doing the same.


In short, lots of concurrency and non-determinism.  Fine!  That's
what Concurrent Haskell is for.  You can find the paper describing what
Concurrent Haskell is intended for at
http://www.dcs.gla.ac.uk/~simonpj/papers.html
(under "monads, state, and concurrency").

I hope this helps somewhat.

Simon





Re: laziness and functional middleware

1998-06-17 Thread Hans Aberg

At 07:54 +0100 98/06/17, Simon L Peyton Jones wrote:
 Ideally, I would like to write cgifunctions of type:

 myCGIFunction :: [HTTPRequest] -> [DatabaseVersion] ->
  ([HTTPResponse],[DatabaseChanges])

 HTTPRequests come from _middleware_ that receives http requests from the
 httpd and appends them to a list. Attempts to get the next item in the
 HTTPRequest list block until requests come in to fill the list.
...
I gather that HTTPRequests and DataBaseVersions arrive
asynchronously and unpredictably.  So how does myCGIFunction know
which list to probe first?  Whichever it chooses it may block on
one while there is data on the other. Unless I'm misunderstanding.

You need to be able to say "wait until there's input available
on one of these two ports, and tell me which".  Something like
the Unix select call.  GHC's Posix library has such a thing,
but you can't use it in the way you describe because myCGIFunction
is a pure function on lists.

  I think one can use different techniques when doing concurrent
programming: One is to set off different processes and then wait for them
imperatively in a loop to finish, which I guess is what SLPJ has in mind
here.

  But one can do something more functional: In this model, the arguments
are references which handshake with the function. The function sets off
threads for HTTPRequests and DataBaseVersions, and goes idle; these threads
report back when they have something to report, and the data is processed
then.

  But I found it rather difficult to implement this style with POSIX (Java)
threads: It is hard to guarantee that the computations do not hang. What
I needed was to be able to guarantee that certain sequences in the
implementation cannot be halted in the middle, but that is not possible
with pre-emptive threads, I think.

  Hans Aberg
  * Email: Hans Aberg mailto:[EMAIL PROTECTED]
  * Home Page: http://www.matematik.su.se/~haberg/
  * AMS member listing: http://www.ams.org/cml/