Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-06 Thread Lennart Augustsson
I tried your example in GHC 6.10 and isum appears to work fine.
The type of 1000 gets defaulted to Integer, a specialized version
of isum for Integer is then created, the strictness analyzer
determines that isum is strict in s, and the code generator produces a
loop.  (If you want to look at the assembly code, it's easier if you
make the type Int.)
GHC 6.8 had a bug (if I remember right), so when defaulting was
involved it didn't always optimise things properly.

  -- Lennart

On Mon, Nov 3, 2008 at 12:31 PM, Patai Gergely
[EMAIL PROTECTED] wrote:
 Hi everyone,

 I was experimenting with simple accumulator functions, and found that an
 apparently tail recursive function can easily fill the stack. Playing
 around with ghc and jhc gave consistently unpleasant results. Look at
 this program:

 %%%

 -- ghc: no, ghc -O3: yes, jhc: no
 isum 0 s = s
 isum n s = isum (n-1) (s+n)

 -- ghc: no, ghc -O3: no, jhc: yes (because gcc -O3 is clever)
 rsum 0 = 0
 rsum n = n + rsum (n-1)

 main = case isum 1000 0 {- rsum 1000 -} of
   0 -> print 0
   x -> print x

 %%%

 I would expect the analysis to find out that the result of the function
 is needed in every case (since we are matching a pattern to it), yet I
 need to add a $! to the recursive call of isum to get a truly iterative
 function. It's interesting how ghc and jhc seem to diverge, one
 favouring the iterative version, the other the recursive one
 (although it only works because gcc figures out that the function can be
 expressed in an iterative way).

 Of course this is a real problem when I'm trying to write an actual
 program, since it means I can be bitten by the laziness bug any time...
 Is there any solution besides adding strictness annotations? Can I
 expect the compiler to handle this situation better in the foreseeable
 future?

 Gergely

 --
 http://www.fastmail.fm - IMAP accessible web-mail

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-06 Thread Patai Gergely
 I tried your example in GHC 6.10 and isum appears to work fine.
 The type of 1000 gets defaulted to Integer, a specialized version
 of isum for Integer is then created, the strictness analyzer
 determines that isum is strict in s, and the code generator produces a
 loop.  (If you want to look at the assembly code, it's easier if you
 make the type Int.)
Thanks for checking it. I'll be curious to see any improvements the new
version brings, but this sounds promising already.

Gergely

-- 
http://www.fastmail.fm - The way an email service should be



Re: Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-05 Thread Kim-Ee Yeoh


Luke Palmer wrote:
 
 I would like to know or to develop a way to allow abstract 
 analysis of time and space complexity.
 

In the same way that type inference and strictness analysis can be
seen as instances of abstract interpretation, so can complexity
inference. I agree that the interplay between these various instances
of AI is an unexplored lode for us cogheads.

Below are 2 references to complexity inference. I have yet to look
closely to ascertain the degree of compositionality of their
methodologies. Can anyone recommend a survey paper of the 
entire field?

Linear, Polynomial or Exponential? Complexity Inference in Polynomial Time
Amir M. Ben-Amram, Neil D. Jones and Lars Kristiansen  
http://dx.doi.org/10.1017/S095679683865

Automated complexity analysis of Nuprl extracted programs
Ralph Benzinger
http://dx.doi.org/10.1007/978-3-540-69407-6_7

-- 
View this message in context: 
http://www.nabble.com/Problems-with-strictness-analysis--tp20301967p2034.html
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-04 Thread Kim-Ee Yeoh


David Menendez wrote:
 
 On Mon, 3 Nov 2008, Luke Palmer wrote:

 I was actually being an annoying purist.  "f is strict" means f _|_ =
 _|_, so strictness is a semantic idea, not an operational one.
 
 I think Luke was commenting on the terminology, not the optimization.
 We have a tendency to say "lazy" when we mean "non-strict" and
 "strict" when we mean "eager".
 

Good point. If it sheds light on such elusive, important distinctions 
then this incorrigible pedantry can only be encouraged.




Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-04 Thread Ketil Malde
Patai Gergely [EMAIL PROTECTED] writes:

 My only problem is that if I try to write a program without
 thinking about performance first and not bothering with annotations as
 long as it works, I end up with something that's practically
 impossible to profile as the costs spread everywhere.

I didn't think that was the case; does that happen with the example
you posted?  (I find GHC's heap profiling to be useful in tracking down
these strictness leaks.)

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-04 Thread Patai Gergely
Thanks everyone for the answers. I understood the underlying mechanism,
and I did see that turning on optimisation helps, as I said in the
comments. I just wasn't aware that this analysis is not turned on by
default, my bad for not reading the FM.

So the final verdict is that type and strictness annotations are
unavoidable when it comes to improving performance. This is no big
surprise. My only problem is that if I try to write a program without
thinking about performance first and not bothering with annotations as
long as it works, I end up with something that's practically
impossible to profile as the costs spread everywhere. I'd be curious to
hear about best practices to avoid this, while also getting the most
out of type inference and various analyses to cut down on developer
effort. How should I approach writing a large application in Haskell?

Gergely

-- 
http://www.fastmail.fm - IMAP accessible web-mail



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-04 Thread Derek Elkins
On Tue, 2008-11-04 at 12:18 +0100, Patai Gergely wrote:
 Thanks everyone for the answers. I understood the underlying mechanism,
 and I did see that turning on optimisation helps, as I said in the
 comments. I just wasn't aware that this analysis is not turned on by
 default, my bad for not reading the FM.
 
 So the final verdict is that type and strictness annotations are
 unavoidable when it comes to improving performance. This is no big
 surprise. My only problem is that if I try to write a program without
 thinking about performance first and not bothering with annotations as
 long as it works, I end up with something that's practically
 impossible to profile as the costs spread everywhere. I'd be curious to
 hear about best practices to avoid this, while also getting the most
 out of type inference and various analyses to cut down on developer
 effort. How should I approach writing a large application in Haskell?

Strictness annotations aren't optimizations.  As Luke Palmer pointed
out, making something strict is a change in semantics.  Similarly for
type annotations; either they do nothing or they restrict the type which
certainly has semantic ramifications.  Neither strictness nor type
annotations should be viewed as something outside of your program.
They are a part of your program.

At any rate, for the particular issues you've mentioned in this thread
I'd recommend not writing code you know is bad in the first place, in
much the same way that Scheme programmers usually write code tail
recursively from the start.  They don't write code (non tail)
recursively and then profile to find the hotspots.  There are a few
well-known problematic patterns in Haskell that can be relatively easily
recognized and remedied.  Those should be dealt with as you write the
code.  Note that here I'm talking about issues like stack overflow; you
shouldn't rely on strictness analysis to avoid stack overflow, but it is
fine to rely on strictness analysis for things like unboxing and such.  

Basically, if you don't do stupid stuff, such as well-known
anti-patterns or obviously inefficient things, then you should typically
get reasonable performance with maybe a few things that the profiler can
help you with.  If you are specifically going for performance, then
there are techniques and approaches geared toward that.



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-04 Thread Lennart Augustsson
Nonsense, isum is strict in s.  If s is bottom, isum will always return bottom.
This is the definition of being strict.

  -- Lennart

On Mon, Nov 3, 2008 at 10:35 PM, Don Stewart [EMAIL PROTECTED] wrote:
 frantisek.kocun:
yet I need to add a $! to the recursive call of isum to get a truly
iterative ???

Wait a minute, Patai. How would you do that? I'm only a beginner; I thought
I could only add a strictness ! in data declarations. But making the isum
function strict would be helpful.


 Consider this program,

isum  0  s = s
isum  n  s = isum (n-1) (s+n)

main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x

 Now, isum is *not* strict in 's', so without some additional hints or
 analysis, this won't be evaluated until the result of isum is required.
 It will build up a long chain of (s + n) thunks on the stack.

-O0
$ time ./A
Stack space overflow: current size 8388608 bytes.

 Of course, we make this strict in a number of ways:


 * Turning on optimisations:
-O2
$ time ./A
500500
./A  0.31s user 0.00s system 99% cpu 0.312 total

 * Use an explicit bang pattern on the 's' variable:

{-# LANGUAGE BangPatterns #-}

isum  0  s = s
isum  n !s = isum (n-1) (s+n)

   -O0
$ time ./A
500500
./A  0.69s user 0.00s system 95% cpu 0.721 total

 Note that by being explicit about the strictness in 's' this program
 produces the desired result even with all optimisations disabled.

 We can then turn on other optimisations:

-O2 -fvia-C -optc-O2
$ time ./A
500500
./A  0.31s user 0.00s system 101% cpu 0.313 total

 And it just gets faster.

 Now, we can also add an explicit type signature to constrain to a machine Int:

-O2 -fvia-C -optc-O2
$ time ./A
500500
./A  0.03s user 0.00s system 100% cpu 0.033 total

 Meaning the final version is:

 isum :: Int -> Int -> Int
isum  0  s = s
isum  n !s = isum (n-1) (s+n)

 So: if you rely on tail recursion on a particular variable, make sure it is
 enforced as strict. That's the simplest, most robust way to ensure the
 reduction strategy you want is used.

 -- Don




Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Derek Elkins
On Mon, 2008-11-03 at 13:31 +0100, Patai Gergely wrote:
 Hi everyone,
 
 I was experimenting with simple accumulator functions, and found that an
 apparently tail recursive function can easily fill the stack. Playing
 around with ghc and jhc gave consistently unpleasant results. Look at
 this program:
 
 %%%
 
 -- ghc: no, ghc -O3: yes, jhc: no
 isum 0 s = s
 isum n s = isum (n-1) (s+n)
 
 -- ghc: no, ghc -O3: no, jhc: yes (because gcc -O3 is clever)
 rsum 0 = 0
 rsum n = n + rsum (n-1)
 
 main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x
 
 %%%
 
 I would expect the analysis to find out that the result of the function
 is needed in every case (since we are matching a pattern to it), yet I
 need to add a $! to the recursive call of isum to get a truly iterative
 function. It's interesting how ghc and jhc seem to diverge, one
 favouring the iterative version, the other the recursive one
 (although it only works because gcc figures out that the function can be
 expressed in an iterative way).
 
 Of course this is a real problem when I'm trying to write an actual
 program, since it means I can be bitten by the laziness bug any time...
 Is there any solution besides adding strictness annotations? Can I
 expect the compiler to handle this situation better in the foreseeable
 future?

Don't write code that relies on strictness analysis for correctness.



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Don Stewart
patai_gergely:
 Hi everyone,
 
 I was experimenting with simple accumulator functions, and found that an
 apparently tail recursive function can easily fill the stack. Playing
 around with ghc and jhc gave consistently unpleasant results. Look at
 this program:
 
 %%%
 
 -- ghc: no, ghc -O3: yes, jhc: no
 isum 0 s = s
 isum n s = isum (n-1) (s+n)
 
 -- ghc: no, ghc -O3: no, jhc: yes (because gcc -O3 is clever)
 rsum 0 = 0
 rsum n = n + rsum (n-1)
 
 main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x
 
 %%%
 
 I would expect the analysis to find out that the result of the function
 is needed in every case (since we are matching a pattern to it), yet I
 need to add a $! to the recursive call of isum to get a truly iterative
 function. It's interesting how ghc and jhc seem to diverge, one
 favouring the iterative version, the other the recursive one
 (although it only works because gcc figures out that the function can be
 expressed in an iterative way).
 
 Of course this is a real problem when I'm trying to write an actual
 program, since it means I can be bitten by the laziness bug any time...
 Is there any solution besides adding strictness annotations? Can I
 expect the compiler to handle this situation better in the foreseeable
 future?

I think it's sensible not to rely on an analysis to infer the correct
reduction strategy for your code. Make the strictness explicit, and your
code will be more portable and more robust.

Also, write in some type annotations. Particularly for atomic types like
Int, these give the strictness analyser yet more information to work
with.

-- Don


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread frantisek kocun
yet I need to add a $! to the recursive call of isum to get a truly
iterative ???

Wait a minute, Patai. How would you do that? I'm only a beginner; I thought
I could only add a strictness ! in data declarations. But making the isum
function strict would be helpful.

Thanks

On Mon, Nov 3, 2008 at 7:36 PM, Don Stewart [EMAIL PROTECTED] wrote:

 patai_gergely:
  Hi everyone,
 
  I was experimenting with simple accumulator functions, and found that an
  apparently tail recursive function can easily fill the stack. Playing
  around with ghc and jhc gave consistently unpleasant results. Look at
  this program:
 
  %%%
 
  -- ghc: no, ghc -O3: yes, jhc: no
  isum 0 s = s
  isum n s = isum (n-1) (s+n)
 
  -- ghc: no, ghc -O3: no, jhc: yes (because gcc -O3 is clever)
  rsum 0 = 0
  rsum n = n + rsum (n-1)
 
  main = case isum 1000 0 {- rsum 1000 -} of
   0 -> print 0
   x -> print x
 
  %%%
 
  I would expect the analysis to find out that the result of the function
  is needed in every case (since we are matching a pattern to it), yet I
  need to add a $! to the recursive call of isum to get a truly iterative
  function. It's interesting how ghc and jhc seem to diverge, one
  favouring the iterative version, the other the recursive one
  (although it only works because gcc figures out that the function can be
  expressed in an iterative way).
 
  Of course this is a real problem when I'm trying to write an actual
  program, since it means I can be bitten by the laziness bug any time...
  Is there any solution besides adding strictness annotations? Can I
  expect the compiler to handle this situation better in the foreseeable
  future?

 I think it's sensible not to rely on an analysis to infer the correct
 reduction strategy for your code. Make the strictness explicit, and your
 code will be more portable and more robust.

 Also, write in some type annotations. Particularly for atomic types like
 Int, these give the strictness analyser yet more information to work
 with.

 -- Don



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Don Stewart
frantisek.kocun:
yet I need to add a $! to the recursive call of isum to get a truly
iterative ???
 
Wait a minute, Patai. How would you do that? I'm only a beginner; I thought
I could only add a strictness ! in data declarations. But making the isum
function strict would be helpful.
 

Consider this program,

isum  0  s = s
isum  n  s = isum (n-1) (s+n)

main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x

Now, isum is *not* strict in 's', so without some additional hints or
analysis, this won't be evaluated until the result of isum is required.
It will build up a long chain of (s + n) thunks on the stack.

-O0
$ time ./A
Stack space overflow: current size 8388608 bytes.

Of course, we make this strict in a number of ways:


* Turning on optimisations:
-O2
$ time ./A
500500
./A  0.31s user 0.00s system 99% cpu 0.312 total

* Use an explicit bang pattern on the 's' variable:

{-# LANGUAGE BangPatterns #-}

isum  0  s = s
isum  n !s = isum (n-1) (s+n)

   -O0
$ time ./A
500500
./A  0.69s user 0.00s system 95% cpu 0.721 total

Note that by being explicit about the strictness in 's' this program
produces the desired result even with all optimisations disabled.

We can then turn on other optimisations:
  
-O2 -fvia-C -optc-O2
$ time ./A  
500500
./A  0.31s user 0.00s system 101% cpu 0.313 total

And it just gets faster.

Now, we can also add an explicit type signature to constrain to a machine Int:

-O2 -fvia-C -optc-O2
$ time ./A
500500
./A  0.03s user 0.00s system 100% cpu 0.033 total
  
Meaning the final version is:

isum :: Int -> Int -> Int
isum  0  s = s
isum  n !s = isum (n-1) (s+n)

So: if you rely on tail recursion on a particular variable, make sure it is
enforced as strict. That's the simplest, most robust way to ensure the
reduction strategy you want is used.

-- Don



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Luke Palmer
On Mon, Nov 3, 2008 at 2:35 PM, Don Stewart [EMAIL PROTECTED] wrote:
 Consider this program,

isum  0  s = s
isum  n  s = isum (n-1) (s+n)

main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x

 Now, isum is *not* strict in 's', [...]

 Of course, we make this strict in a number of ways:

 * Turning on optimisations:

I am confused about your usage of strict.  Optimizations are not
supposed to change semantics, so I don't know how it is possible to
make a function strict by turning on optimizations.  This function was
always strict in s, given a strict numeral type.  By induction on n:

  isum 0 _|_ = _|_
  isum (n+1) _|_ = isum n (s+_|_) = isum n _|_ = _|_

For negative n, it either wraps around to positive in which case the
above analysis applies, or it does not halt, which is strict (in the
same way const _|_ is strict).

Luke


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Luke Palmer
On Mon, Nov 3, 2008 at 2:49 PM, Luke Palmer [EMAIL PROTECTED] wrote:
 I am confused about your usage of strict.  Optimizations are not
 supposed to change semantics, so I don't know how it is possible to
 make a function strict by turning on optimizations.  This function was
 always strict in s, given a strict numeral type.  By induction on n:

  isum 0 _|_ = _|_
  isum (n+1) _|_ = isum n (s+_|_) = isum n _|_ = _|_

Modulo math bugs :-)

 isum (n+1) _|_ = isum n (_|_+n) = isum n _|_ = _|_

Luke


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Don Stewart
lrpalmer:
 On Mon, Nov 3, 2008 at 2:35 PM, Don Stewart [EMAIL PROTECTED] wrote:
  Consider this program,
 
 isum  0  s = s
 isum  n  s = isum (n-1) (s+n)
 
 main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x
 
  Now, isum is *not* strict in 's', [...]
 
  Of course, we make this strict in a number of ways:
 
  * Turning on optimisations:
 
 I am confused about your usage of strict.  Optimizations are not
 supposed to change semantics, so I don't know how it is possible to
 make a function strict by turning on optimizations.  This function was
 always strict in s, given a strict numeral type.  By induction on n:
 
   isum 0 _|_ = _|_
   isum (n+1) _|_ = isum n (s+_|_) = isum n _|_ = _|_
 
 For negative n, it either wraps around to positive in which case the
 above analysis applies, or it does not halt, which is strict (in the
 same way const _|_ is strict).

Optimisations enable strictness analysis.

-- Don


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Luke Palmer
On Mon, Nov 3, 2008 at 2:54 PM, Don Stewart [EMAIL PROTECTED] wrote:
 lrpalmer:
 On Mon, Nov 3, 2008 at 2:35 PM, Don Stewart [EMAIL PROTECTED] wrote:
  Consider this program,
 
 isum  0  s = s
 isum  n  s = isum (n-1) (s+n)
 
 main = case isum 1000 0 {- rsum 1000 -} of
  0 -> print 0
  x -> print x
 
  Now, isum is *not* strict in 's', [...]
 
  Of course, we make this strict in a number of ways:
 
  * Turning on optimisations:

 I am confused about your usage of strict.  Optimizations are not
 supposed to change semantics, so I don't know how it is possible to
 make a function strict by turning on optimizations.  This function was
 always strict in s, given a strict numeral type.  By induction on n:

   isum 0 _|_ = _|_
   isum (n+1) _|_ = isum n (s+_|_) = isum n _|_ = _|_

 For negative n, it either wraps around to positive in which case the
 above analysis applies, or it does not halt, which is strict (in the
 same way const _|_ is strict).

 Optimisations enable strictness analysis.

I was actually being an annoying purist.  "f is strict" means f _|_ =
_|_, so strictness is a semantic idea, not an operational one.
Optimizations can change operation, but must preserve semantics.

But I'm not just picking a fight.  I'm trying to promote equational
reasoning over operational reasoning in the community, since I believe
that is Haskell's primary strength.

Luke


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Henning Thielemann


On Mon, 3 Nov 2008, Luke Palmer wrote:


On Mon, Nov 3, 2008 at 2:54 PM, Don Stewart [EMAIL PROTECTED] wrote:


Optimisations enable strictness analysis.


I was actually being an annoying purist.  "f is strict" means f _|_ =
_|_, so strictness is a semantic idea, not an operational one.
Optimizations can change operation, but must preserve semantics.

But I'm not just picking a fight.  I'm trying to promote equational
reasoning over operational reasoning in the community, since I believe
that is Haskell's primary strength.


Maybe I missed the point, but the optimization seems correct to me. 
Without optimization and its (strictness) analysis the program would still 
output the correct answer - given that the stack is sufficiently large. 
Optimization simply makes this program run using much less space. Right?



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread David Menendez
On Mon, Nov 3, 2008 at 5:39 PM, Henning Thielemann
[EMAIL PROTECTED] wrote:

 On Mon, 3 Nov 2008, Luke Palmer wrote:

 On Mon, Nov 3, 2008 at 2:54 PM, Don Stewart [EMAIL PROTECTED] wrote:

 Optimisations enable strictness analysis.

  I was actually being an annoying purist.  "f is strict" means f _|_ =
  _|_, so strictness is a semantic idea, not an operational one.
 Optimizations can change operation, but must preserve semantics.

 But I'm not just picking a fight.  I'm trying to promote equational
 reasoning over operational reasoning in the community, since I believe
 that is Haskell's primary strength.

 Maybe I missed the point, but the optimization seems correct to me. Without
 optimization and its (strictness) analysis the program would still output
 the correct answer - given that the stack is sufficiently large.
 Optimization simply makes this program run using much less space. Right?

I think Luke was commenting on the terminology, not the optimization.
We have a tendency to say "lazy" when we mean "non-strict" and
"strict" when we mean "eager".

-- 
Dave Menendez [EMAIL PROTECTED]
http://www.eyrie.org/~zednenem/


Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread Matthew Brecknell
Don Stewart:
 Optimisations enable strictness analysis.

Luke Palmer:
 I was actually being an annoying purist.  "f is strict" means f _|_ =
 _|_, so strictness is a semantic idea, not an operational one.
 Optimizations can change operation, but must preserve semantics.

Henning Thielemann:
 Maybe I missed the point, but the optimization seems correct to me. 
 [...]


I guess the obvious clarification to make here is that strictness
analysis doesn't make non-strict functions strict. It is about
determining which functions are already strict. The *optimisation* is to
transform call-by-need into call-by-value (or something like that). But
that's an *operational* matter, as Luke points out. To preserve
*semantics*, that transformation can only be allowed for functions which
are already strict. Hence the need for strictness analysis as part of
optimisation.

So everybody is right. :-)



Re: [Haskell-cafe] Problems with strictness analysis?

2008-11-03 Thread wren ng thornton

Patai Gergely wrote:

Hi everyone,

I was experimenting with simple accumulator functions, and found that an
apparently tail recursive function can easily fill the stack. Playing
around with ghc and jhc gave consistently unpleasant results. Look at
this program:

%%%

-- ghc: no, ghc -O3: yes, jhc: no
isum 0 s = s
isum n s = isum (n-1) (s+n)


This is tail recursive, and will be optimized to an iterative loop; 
however, the result of this function is a thunk ((...(s+n)+n')...+1) 
which can be potentially quite large. Pulling on this thunk is what 
causes the overflow, since (+) is strict in both arguments and can't 
return partial answers[1]. This function ---or variants like summing a 
list--- seems to be the canonical pitfall for people trying to use tail 
recursion to reason about laziness.


In terms of having a compiler 'smart enough', it's not clear that 
functions of this sort ought to be inferred strict simply because the 
accumulator is ultimately returned to the caller. Consider for example:


  f 0 xs = xs
  f n xs = f (n-1) (replicate n n ++ xs)

Since (++) can indeed return partial answers, it's fine for the 
accumulator to be lazy. Indeed, making it strict harms performance 
significantly. Another example is when the accumulator is a function, as 
is common in using tail recursion and CPS together.



The only way, really, to infer that the accumulator should be strict is 
if we know 'enough' about the function used to construct the accumulator 
in order to infer that amortizing the reduction at every iteration is 
equivalent to or better than deferring it. I'm not sure that this can be 
done in general without (an expanded type system and) user annotations. 
Personally I think strictness annotations are a lot clearer than having 
the type system express all strictness relations or distinguishing data 
and codata. Laziness is not the enemy, it merely illuminates the 
question of amortizing costs which eager languages obscure.



[1] Assuming you're not using Peano integers or some other lazy encoding 
in lieu of the built-in Num types.


--
Live well,
~wren