Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-08 Thread Neil Mitchell
Hi

 Yes, though testing stackGobbler with a large enough data set could
 be problematic for the very reason we've been discussing.

Yes as in you are sure, or yes as in you have tested it, and the results
show that neilGobbler is x% slower and consumes y% more memory on
specific test n?

 But let's say your hypothesis was correct.

My hypothesis isn't that the stack is slow.

 AFAICT neilGobbler isn't even entirely safe as an implementation of
 an eager take. There's nothing in the Haskell standard to stop it being
 transformed into..

 neilGobbler :: Int -> [x] -> [x]
 neilGobbler n xs = length (take n xs) `seq` take n xs

Yes, but so much in the Haskell standard is also missing. I think in
this case you could reasonably argue that any compiler performing this
transformation _is_ violating the Haskell standard as it was intended
(albeit not as it was written). You'll also find that the space
behaviour of CAFs isn't documented in Haskell, but if people changed it
you'd break quite a bit of the nofib suite.

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-07 Thread Neil Mitchell
Hi

 But the point is that *both* heapGobbler and neilGobbler are likely to
 be slower and chew through at least twice as much heap as stackGobbler,
 which would be the implementation of choice for both simplicity and
 performance if it wasn't for this stack management problem.

Sure? That sounds like the sort of thing people can conjecture, but
benchmarks can prove. And I'd claim that neilGobbler is the simplest
function by a large margin.

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-07 Thread Adrian Hey

Adrian Hey wrote:

AFAICT neilGobbler isn't even entirely safe as an implementation of
an eager take. There's nothing in the Haskell standard to stop it being
transformed into..

neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length (take n xs) `seq` take n xs


Whoops, I see stackGobbler has the same problem..
-- Strict version of take
stackGobbler :: Int -> [x] -> [x]
stackGobbler 0 _  = []
stackGobbler _ [] = []
stackGobbler n (x:xs) = let xs' = stackGobbler (n-1) xs
                        in  xs' `seq` (x:xs')

I guess this is an example of the Haskell standard needing to be
tightened up a bit, but that is another story..

Regards
--
Adrian Hey


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-07 Thread Adrian Hey

Neil Mitchell wrote:

Hi


But the point is that *both* heapGobbler and neilGobbler are likely to
be slower and chew through at least twice as much heap as stackGobbler,
which would be the implementation of choice for both simplicity and
performance if it wasn't for this stack management problem.


Sure?


Yes, though testing stackGobbler with a large enough data set could
be problematic for the very reason we've been discussing.

But let's say your hypothesis was correct. If so then presumably *all*
Haskell programs could give better performance than they currently do
if we nuked the stack completely and had ghc generate CPS style code.

This too would be fine with me. The problem with the current situation
is that we have perfectly sound and correct programs that crash quite
unnecessarily (and even if they don't get quite that far, can still
cause considerable per-thread memory wastage if what SPJ says is true).
Why their authors chose to use a stack greedy implementation and
whether that was by design or a mistake really *isn't* the point.

As I said before (this is the third time I think), the fact that these
programs use a lot of stack at all is just a peculiarity of the *ghc*
implementation, so it really is a ghc responsibility to do a decent
job of stack management IMO. It's not a programmer's responsibility to
code in such a way that minimal stack is used (with ghc).


That sounds like the sort of thing people can conjecture, but
benchmarks can prove. And I'd claim that neilGobbler is the simplest
function by a large margin.


AFAICT neilGobbler isn't even entirely safe as an implementation of
an eager take. There's nothing in the Haskell standard to stop it being
transformed into..

neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length (take n xs) `seq` take n xs

Regards
--
Adrian Hey




Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-07 Thread Adrian Hey

Neil Mitchell wrote:

Hi


I have already partially scanned the list looking for the first
element that satisfies some condition, using a tail recursive search.

If such an element is found I want to split the list at that point.


span/break? I really can't believe the memory overhead of span is that
terrible, you are only duplicating the (:)'s and it's only one
traversal.


As an aside, my version of this function would be:

neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length res `seq` res
 where res = take n xs


My guess is it will use O(1) stack and burn O(n) heap (in addition to
that actually used by the result), so asymptotic-complexity-wise the
same as heapGobbler, but probably higher constant factors with ghc due
to lazy building of take thunks and subsequent reduction and
indirection costs.


Sure? Guessing constant factors related to strictness and laziness is
incredibly hard! My guess based on gut feeling is that the program
will take less time, and use half the memory. But given my initial
comment, that guess is not very reliable.


But the point is that *both* heapGobbler and neilGobbler are likely to
be slower and chew through at least twice as much heap as stackGobbler,
which would be the implementation of choice for both simplicity and
performance if it wasn't for this stack management problem.

Regards
--
Adrian Hey



Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-06 Thread Adrian Hey

Neil Mitchell wrote:

Hi


If you mean an example of coding style and choice of stack vs. heap,
I already have..

  http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html


I'm at a loss as to why you want a strict version of take. It's clearly
not for performance, as it performs slower. I'd say both the gobbler
programs have a bug, namely that they are not sufficiently lazy.


I have already partially scanned the list looking for the first
element that satisfies some condition, using a tail recursive search.

If such an element is found I want to split the list at that point.

If such an element is not found the entire list has been scanned without
using any extra stack or heap (other than that used by the list itself
and the condition test).

I could build the reversed list accumulator on the heap as I did the
search, but I don't because this will be completely wasted effort in the
case where such an element is not found. So instead I just use an
unboxed Int to count how far I get and have the search return this
and the unsearched suffix (in the case where a matching element is
found).

But the lifetimes of the list prefix and suffix from this point on are
completely unrelated, so I don't want the prefix thunk to be hanging on
to the unknown-sized suffix. As I already know that the list has been
evaluated at least up to the point that it gets chopped off, I choose
to use a strict (eager) take.
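
In sketch form, the shape I mean is something like this (illustrative
names and details, not the actual library code):

searchSplit :: (x -> Bool) -> [x] -> Maybe (Int, [x])
searchSplit p = go 0
 where
  go n (x:xs) | p x       = Just (n, x:xs)  -- found: count plus unsearched suffix
              | otherwise = let n' = n+1 in n' `seq` go n' xs
  go _ []                 = Nothing         -- no match: nothing extra was built

-- On success the prefix is then recovered with a strict take of n
-- elements from the original list, i.e. stackGobbler n originalList.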


As an aside, my version of this function would be:

neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length res `seq` res
 where res = take n xs

I have no idea if it takes the heap or the stack, or if it performs
faster or slower. If you still have whatever test you used on the
gobbler, perhaps you could tell us.


My guess is it will use O(1) stack and burn O(n) heap (in addition to
that actually used by the result), so asymptotic-complexity-wise the
same as heapGobbler, but probably higher constant factors with ghc due
to lazy building of take thunks and subsequent reduction and
indirection costs.


If you mean an example of it biting with lazy code, this is discussed
so often you'd be spoiled for choice if you search the mailing list
archives. Here's a good one..

  http://www.haskell.org/pipermail/haskell-cafe/2005-December/013521.html

This example actually shows the problem twice. In one case it's solvable
by using foldl' instead of foldl.


Which reduces the memory from O(n) to O(1).


Are you sure about that? Using foldl' here eliminates one of the two
possible sources of stack overflow, but it doesn't eliminate a space
leak. It's O(n) either way. Using strict Map insertion will eliminate
a space leak (in this case) and also a possible source of stack overflow.
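
To make the strict insertion point concrete, a sketch of the usual
shape (reconstructed from memory rather than quoted from the linked
thread; insertWith' is the strict insertion in the Data.Map of this
era):

import Data.List (foldl')
import qualified Data.Map as Map

-- foldl' keeps the Map accumulator in WHNF, and insertWith' forces each
-- combined value, so no chain of (+) thunks builds up inside the map.
countWords :: [String] -> Map.Map String Int
countWords = foldl' (\m w -> Map.insertWith' (+) w 1 m) Map.empty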


Surely that's a good thing,


Would be if it was true :-)


and the code before had a space leak. A space leak is bad, therefore
telling people about it is good.


There are plenty of space leaks that won't cause stack overflows, and
plenty of stack overflows that aren't caused by space leaks (see above
for one example).

Again I have to say that even if true, I think this is a pretty lame
justification for the current implementation. The *default* behaviour of
any useful program should surely be to make a best effort to carry on
working (and in due course deliver an answer or whatever), even if
there is unexpectedly high stack use for some reason (that may or may
not be a bug).


I think it's sensible to let people set their own stack bound (which is
possible),


I have no objection to people bounding their stack if that's their
choice. I can't imagine why anybody who stopped to think about this
would actually want this feature, but it's a free world.

What I object to is it being bounded by default to something other
than the overall program memory limit. I know that I could probably
achieve this effect myself with +RTS options, but I also want to be
able to write libraries that other people are going to use safely
without having to add an appropriate warning in the documentation
to the effect that some parts use O(n) stack space deliberately, by
design.

But of course this all assumes that the underlying implementation is
sufficiently robust to make unbounded stacks safe (at least as safe as
any other unbounded data structure). Unfortunately it seems this isn't
the case at present if what various folk have told me is true.


but clearly, just from taking an informal poll of
respondents to this thread, the current default should indeed remain
the default. You seem to be the only person clamouring for an unlimited
stack,


Yes, this is strange. Same thing happened in the global variables
debate despite it being obvious to any thinking person that I was
correct. Denial of the reality of some very simple examples of the
problem was typical of that debate too.

:-)

Regards
--
Adrian Hey




Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-06 Thread Neil Mitchell
Hi

 I have already partially scanned the list looking for the first
 element that satisfies some condition, using a tail recursive search.

 If such an element is found I want to split the list at that point.

span/break? I really can't believe the memory overhead of span is that
terrible, you are only duplicating the (:)'s and it's only one
traversal.
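
For reference, the Report's specification of break, which does this
split in one traversal while sharing the suffix (the example is mine):

-- break p xs == (takeWhile (not . p) xs, dropWhile (not . p) xs)
-- e.g. break (> 3) [1,2,3,4,5] == ([1,2,3],[4,5])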

  As an aside, my version of this function would be:
 
  neilGobbler :: Int -> [x] -> [x]
  neilGobbler n xs = length res `seq` res
   where res = take n xs
 
 My guess is it will use O(1) stack and burn O(n) heap (in addition to
 that actually used by the result), so asymptotic-complexity-wise the
 same as heapGobbler, but probably higher constant factors with ghc due
 to lazy building of take thunks and subsequent reduction and
 indirection costs.

Sure? Guessing constant factors related to strictness and laziness is
incredibly hard! My guess based on gut feeling is that the program
will take less time, and use half the memory. But given my initial
comment, that guess is not very reliable.

 Yes, this is strange. Same thing happened in the global variables
 debate despite it being obvious to any thinking person that I was
 correct. Denial of the reality of some very simple examples of the
 problem was typical of that debate too.

The particular world I live in is special, but I like it :-)

Thanks

Neil


RE: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Simon Peyton-Jones

| First bad thing:
| Stack size (memory consumed) doubles each time it overflows.
|
| Second bad thing:
| Arbitrary limit on stack size unrelated to overall (heap) memory
| available.
|
| Third bad thing (the really bad thing):
| If a stack has temporarily grown (to 64M say), it will never shrink
| back down again to something more typical (< 4K say). If I understand
| correctly, it will continue to take 64M from the heap regardless.
|
| What I would like is to be able to set an upper limit on total memory
| usage and allow the program to freely use this memory as either stack
| or heap. At least that should be the default behaviour, but maybe
| also allow +RTS restrictions for debugging (though I don't think this
| is a very good way of investigating a program's stack use).
|
| I would also like stack memory allocation to increase (and decrease :-)
| in some sane sized linear increment, not double each time. With the
| current scheme, as I understand it, if 65M is needed then 128M will be
| allocated.

Would you like to create a ticket for this?  I don't know how many people it 
bites, and how often, but it'd be good to have it recorded.  Changing to a 
linear increment would be relatively straightforward, but would have quadratic 
copying cost as the stack grew big.  Shrinking stacks would not be hard, I 
think, at the cost of perhaps copying them again if they grew big.

| Stefan O'Rear suggested an alternative. I don't know how hard it would
| be to implement though (don't really know anything about ghc rts).
|
|   http://haskell.org/pipermail/glasgow-haskell-users/2007-May/012472.html

Yes, this is the standard solution, and it's a good one because it has a robust 
cost model (no quadratic costs).  However, it's tricky to get right; copying is 
simpler.  If a significant fraction of runtime (for some interesting 
program(s)) turned out to be consumed by copying stacks then we could consider 
this.

Simon


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Adrian Hey

Stefan O'Rear wrote:

On Mon, Feb 04, 2008 at 10:13:12PM +0000, Adrian Hey wrote:

Also
remember that this behaviour never wastes more than 50% of the stack,
which is a relatively small amount.

Only if the stack is relatively small. Would you say the same about
heap, or about a stack that only needed 50% of heap space but ended
up using all of it? Or money? Using twice as much as you need of
anything is bad IMO.


Apparently you don't realize that GHC normally uses twice as much heap
as is needed, due to the decision to use a two-space copying collector
by default for the oldest generation. :)


Yikes again! It gets worse :-)

Perhaps I should have said *live* heap.

Regards
--
Adrian Hey




Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Adrian Hey

Simon Peyton-Jones wrote:

| First bad thing:
| Stack size (memory consumed) doubles each time it overflows.
|
| Second bad thing:
| Arbitrary limit on stack size unrelated to overall (heap) memory
| available.
|
| Third bad thing (the really bad thing):
| If a stack has temporarily grown (to 64M say), it will never shrink
| back down again to something more typical (< 4K say). If I understand
| correctly, it will continue to take 64M from the heap regardless.
|
| What I would like is to be able to set an upper limit on total memory
| usage and allow the program to freely use this memory as either stack
| or heap. At least that should be the default behaviour, but maybe
| also allow +RTS restrictions for debugging (though I don't think this
| is a very good way of investigating a program's stack use).
|
| I would also like stack memory allocation to increase (and decrease :-)
| in some sane sized linear increment, not double each time. With the
| current scheme, as I understand it, if 65M is needed then 128M will be
| allocated.

Would you like to create a ticket for this?


OK


I don't know how many people it bites, and how often,


The problem is that the fact that it *might* bite often affects your
whole coding style (well mine actually :-) for some problems. It also
seems to have given rise to the POV that ghc's current behaviour is good
because stack use is bad. MHO is that it's only ghc's current behaviour
that *makes* stack use bad.

I think it bites a lot less often than it otherwise would because most
people will deliberately choose to use heap in preference to stack (at
least when writing eager code) just to avoid the problem. But it still
bites pretty often anyway with lazy code for unexpected reasons.
Arguably such situations are indeed a bug more often than not, but
I still think terminating the program unnecessarily (at 8M stack) is
bad default policy.


Yes, this is the standard solution, and it's a good one because it has a robust 
cost model (no quadratic costs).  However, it's tricky to get right; copying is 
simpler.  If a significant fraction of runtime (for some interesting 
program(s)) turned out to be consumed by copying stacks then we could consider 
this.


Do you really need such evidence? If we agree that allowing stack to
grow to arbitrary (limited only by memory availability) size is
reasonable then surely we already know that there will be some stack
size for which quadratic copying cost is going to get stupid :-)

Of course there are other, more radical solutions that come to
mind, like not using a (C style) stack at all. But I guess we'd
be talking about a complete re-write of pretty much all the
rts and much of the compiler to do this :-(

Regards
--
Adrian Hey



RE: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Simon Peyton-Jones

| Yes, this is the standard solution, and it's a good one because it has a
| robust cost model (no quadratic costs).  However, it's tricky to get right;
| copying is simpler.  If a significant fraction of runtime (for some
| interesting program(s)) turned out to be consumed by copying stacks then we
| could consider this.
|
| Do you really need such evidence? If we agree that allowing stack to
| grow to arbitrary (limited only by memory availability) size is
| reasonable then surely we already know that there will be some stack
| size for which quadratic copying cost is going to get stupid :-)

Indeed, in principle.  But there are only so many GHC-HQ cycles.  Fixing stacks 
means not fixing something else, so it matters which issues bite most users.

This isn't a fixed-sum game.  The more people help fix and improve GHC, the 
more we can focus on the tricky bits that only we can do.

Simon


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 2:50 AM, Adrian Hey [EMAIL PROTECTED] wrote:
 I think it bites a lot less often than it otherwise would because most
 people will deliberately choose to use heap in preference to stack (at
 least when writing eager code) just to avoid the problem. But it still
 bites pretty often anyway with lazy code for unexpected reasons.
 Arguably such situations are indeed a bug more often than not, but
 I still think terminating the program unnecessarily (at 8M stack) is
 bad default policy.

Please, after all this, do give an example.

Luke


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 6:50 PM, Adrian Hey [EMAIL PROTECTED] wrote:

 Luke Palmer wrote:
  On Feb 5, 2008 2:50 AM, Adrian Hey [EMAIL PROTECTED] wrote:
  I think it bites a lot less often than it otherwise would because most
  people will deliberately choose to use heap in preference to stack (at
  least when writing eager code) just to avoid the problem. But it still
  bites pretty often anyway with lazy code for unexpected reasons.
  Arguably such situations are indeed a bug more often than not, but
  I still think terminating the program unnecessarily (at 8M stack) is
  bad default policy.
 
  Please, after all this, do give an example.

 If you mean an example of coding style and choice of stack vs. heap,
 I already have..

   http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html

Ah, sorry, I must have missed that message.

Luke


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Neil Mitchell
Hi

 If you mean an example of coding style and choice of stack vs. heap,
 I already have..

   http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html

I'm at a loss as to why you want a strict version of take. It's clearly
not for performance, as it performs slower. I'd say both the gobbler
programs have a bug, namely that they are not sufficiently lazy.

As an aside, my version of this function would be:

neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length res `seq` res
 where res = take n xs

I have no idea if it takes the heap or the stack, or if it performs
faster or slower. If you still have whatever test you used on the
gobbler, perhaps you could tell us.
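
(For anyone wanting to try it, a minimal test along these lines would
do; the list contents and size here are my choice, not anything from
the original test:)

main :: IO ()
main = print (last (neilGobbler 1000000 ([1 ..] :: [Int])))
-- run with +RTS -sstderr to see allocation and GC figures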

 If you mean an example of it biting with lazy code, this is discussed
 so often you'd be spoiled for choice if you search the mailing list
 archives. Here's a good one..

   http://www.haskell.org/pipermail/haskell-cafe/2005-December/013521.html

 This example actually shows the problem twice. In one case it's solvable
 by using foldl' instead of foldl.

Which reduces the memory from O(n) to O(1). Surely that's a good thing,
and the code before had a space leak. A space leak is bad, therefore
telling people about it is good.

I think it's sensible to let people set their own stack bound (which is
possible), but clearly, just from taking an informal poll of
respondents to this thread, the current default should indeed remain
the default. You seem to be the only person clamouring for an unlimited
stack, and thanks to +RTS, you already have it.

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Adrian Hey

Bulat Ziganshin wrote:

Hello Matthew,

Monday, February 4, 2008, 11:45:51 PM, you wrote:


That would be nice. But it's only beneficial if there are programs
which take large amounts of stack at some point, but then shrink down
to very little stack and continue for a reasonable amount of time.



From the 'when I was a lad' department...



Thinking back to when Java transitioned to a garbage collector that could give
memory back to the OS, we got some unexpected benefits. Consider a machine


i would also be happy if ghc would return unused *heap* memory back to
the OS - it's immediately needed for my GUI program, where users may open
huge files and then close them. but i personally don't have the same
need for *stack*


How do you know you don't or won't have the same need for stack?

Given that most real programs are going to pull in library code
written by all sorts of people, don't you want your program to be robust
and memory efficient even if it makes use of libraries whose authors
chose stack gobbling in preference to heap gobbling, or who used a
(currently non-existent AFAIK) CPS based implementation for development?

I just don't get this idea that the current implementation (8M limit
IIRC in the absence of +RTS options) is good. 8M is still a pretty
big stack and (8M - 4K) per thread seems like an awful lot of memory
to waste to me. If we're all so sure that big stacks are a bug then
why bother allowing them to grow at all? Why not just limit them to 4K?

Actually I think the latter option above might be a good way to discover
how many bug-free Haskell progs there really are out there. Precious
few I suspect :-(

Regards
--
Adrian Hey



Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Adrian Hey

Simon Peyton-Jones wrote:

| Yes, using lots of stack is clearly bad with ghc, but this is a ghc
| bug. In fact the only reason these programs do use lots of stack
| (vs. heap) is just a peculiarity of ghc rts implementation, so it
| really should be ghc that fixes the problem, or at least admits
| responsibility :-)

I don't think there's anything fundamental here. GHC allocates the stack in the heap, and 
it can grow as big as you like.  The size limit is simply to catch infinite recursion 
with a more helpful message than heap overflow.  I think.  There is one 
peculiarity though: I don't think we ever shrink the stack, so once it gets big it stays 
big.  This could be fixed, though.


Yikes!

Sorry, but if what you say is true then things are even worse than I
thought :-( This behaviour seems really bad to me, especially for
concurrent programs.

Also, I can't help thinking that the common justification for the
current limit (that it helps find alleged bugs) is a little lame.
It only helps find bugs if one expects one's program to use less than
8M of stack (hence if it's using more, it's a bug by one's *own*
definition). But if a program or library is deliberately designed to
make use of stack (in preference to heap) for efficiency reasons
(or even just to avoid the awkwardness of using explicit CPS style),
then this is a source of bugs in otherwise perfectly correct and
reasonable programs.

If we want some way of investigating a program's stack use there must be
a better way of doing it than deliberately inducing a crash in any
program that exceeds 8M of stack.

Thanks for the answer though. I think I'll write a ticket about this :-)

Regards
--
Adrian Hey




RE: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Simon Peyton-Jones

| Sorry, but if what you say is true then things are even worse than I
| thought :-( This behaviour seems really bad to me, especially for
| concurrent programs.

Which behaviour precisely?  Can you say what is wrong and what behaviour you 
expect?

S


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Neil Mitchell
Hi

 But if a program or library is deliberately designed to
 make use of stack (in preference to heap) for efficiency reasons
 then this is a source of bugs in otherwise perfectly correct and
 reasonable programs.

Can you give an example of a particular library or program, so
everyone can be a bit more concrete about what you think should be
allowed, but that isn't?

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Adrian Hey

Hello Simon,

Simon Peyton-Jones wrote:

| Sorry, but if what you say is true then things are even worse than I
| thought :-( This behaviour seems really bad to me, especially for
| concurrent programs.

Which behaviour precisely?  Can you say what is wrong and what behaviour you 
expect?


Roughly..

First bad thing:
Stack size (memory consumed) doubles each time it overflows.

Second bad thing:
Arbitrary limit on stack size unrelated to overall (heap) memory
available.

Third bad thing (the really bad thing):
If a stack has temporarily grown (to 64M say), it will never shrink
back down again to something more typical (< 4K say). If I understand
correctly, it will continue to take 64M from the heap regardless.

What I would like is to be able to set an upper limit on total memory
usage and allow the program to freely use this memory as either stack
or heap. At least that should be the default behaviour, but maybe
also allow +RTS restrictions for debugging (though I don't think this
is a very good way of investigating a program's stack use).

I would also like stack memory allocation to increase (and decrease :-)
in some sane sized linear increment, not double each time. With the
current scheme, as I understand it, if 65M is needed then 128M will be
allocated.

Stefan O'Rear suggested an alternative. I don't know how hard it would
be to implement though (don't really know anything about ghc rts).

 http://haskell.org/pipermail/glasgow-haskell-users/2007-May/012472.html

Regards
--
Adrian Hey



Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Neil Mitchell
Hi

 First bad thing:
 Stack size (memory consumed) doubles each time it overflows.

Bad thing? Assume that allocating memory takes some constant amount of
time, such as invoking overflow behaviour etc. To get the size of the
stack to n bytes with doubling takes O(log n), to get it there with a
constant increase takes O(n). If you store the stack in a linear
block, then allocation costs O(n) and you can even risk O(n^2)
behaviour unless you double each time. I think it's fairly well
understood that things like hash tables should double in size when
they overflow, rather than increasing by some small increment. Also
remember that this behaviour never wastes more than 50% of the stack,
which is a relatively small amount.
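
A back-of-envelope sketch of the copying trade-off (my illustration; it
assumes the stack lives in one linear block that is copied in full at
each overflow, with c the initial size and k a fixed increment):

-- Total words copied while growing a stack to n words:
copyCostDoubling :: Int -> Int -> Int
copyCostDoubling c n = sum (takeWhile (< n) (iterate (*2) c))  -- O(n)

copyCostLinear :: Int -> Int -> Int
copyCostLinear k n = sum (takeWhile (< n) (iterate (+ k) k))   -- O(n^2/k)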

 Third bad thing (the really bad thing):
 If a stack has temporarily grown (to 64M say), it will never shrink
 back down again to something more typical (< 4K say). If I understand
 correctly, it will continue to take 64M from the heap regardless.

That would be nice. But it's only beneficial if there are programs
which take large amounts of stack at some point, but then shrink down
to very little stack and continue for a reasonable amount of time.
Console programs probably don't fit this pattern (since they tend to
be batch style and exit quickly). GUI programs probably do, so perhaps
stack reduction will be more important as the GUI toolkits mature and
Haskell starts getting used for UI type things. That said, unless
there is a real user with a real problem (rather than a theoretical
concern), priority may go to other bugs.

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Matthew Pocock
On Monday 04 February 2008, Adrian Hey wrote:
 Yikes!

 Also, I can't help thinking that the common justification for the
 current limit (that it helps find alleged bugs) is a little lame.
 It only helps find bugs if one expects one's program to use less than
 8M of stack (hence if it's using more, it's a bug by one's *own*
 definition). 

My experience so far is that I've only triggered stack overflows when I've had
an infinite recursion. Getting foldl and foldr wrong on long lists has
usually led to disastrous memory churn, never to an overflow.

 Regards
 --
 Adrian Hey


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Matthew Pocock
On Monday 04 February 2008, Neil Mitchell wrote:
 Hi

 That would be nice. But it's only beneficial if there are programs
 which take large amounts of stack at some point, but then shrink down
 to very little stack and continue for a reasonable amount of time.

From the 'when I was a lad' department...

Thinking back to when Java transitioned to a garbage collector that could give 
memory back to the OS, we got some unexpected benefits. Consider a machine 
that's running a load of programs, launched from some queue system e.g.
LSF/Condor. If they keep memory, the box, the queue scheduler or the
admins get unhappy.

If I had £1 for each time our admins said "your 200 java apps are using
500m each" when I could see for sure that, except for an initial memory
burn during loading files in, only a few megs were resident. Magically,
once Java could release heap, these gripes went away.

Matthew

 Thanks

 Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Adrian Hey

Neil Mitchell wrote:

Hi


First bad thing:
Stack size (memory consumed) doubles each time it overflows.


Bad thing? Assume that allocating memory takes some constant amount of
time, such as invoking overflow behaviour etc. To get the size of the
stack to n bytes with doubling takes O(log n), to get it there with a
constant increase takes O(n).


But whatever the program did to get to a given stack size must have
been at least O(n) anyway, so overall it's still going to be O(n)
even if the stack allocation part is O(log n). We're just talking
about a very tiny increase in constant factors, at least if Stefan
O'Rear's hypothesis is correct :-). I'm inclined to agree with him.


If you store the stack in a linear
block, then allocation costs O(n) and you can even risk O(n^2)
behaviour unless you double each time. I think it's fairly well
understood that things like hash tables should double in size when
they overflow, rather than increasing by some small increment.


It is? Well obviously if the entire thing is copied each time this
will be bad, but that's not what we're talking about. See Stefan's
proposal.


Also
remember that this behaviour never wastes more than 50% of the stack,
which is a relatively small amount.


Only if the stack is relatively small. Would you say the same about
heap, or about a stack that only needed 50% of heap space but ended
up using all of it? Or money? Using twice as much as you need of
anything is bad IMO.


Third bad thing (the really bad thing):
If a stack has temporarily grown (to 64M say), it will never shrink
back down again to something more typical (< 4K say). If I understand
correctly, it will continue to take 64M from the heap regardless.


That would be nice. But it's only beneficial if there are programs
which take large amounts of stack at some point, but then shrink down
to very little stack and continue for a reasonable amount of time.
Console programs probably don't fit this pattern (since they tend to
be batch style and exit quickly). GUI programs probably do, so perhaps
stack reduction will be more important as the GUI toolkits mature and
Haskell starts getting used for UI type things.


The nature of the app has nothing to do with it AFAICS, this problem
can affect any program that evaluates expressions.


That said, unless
there is a real user with a real problem (rather than a theoretical
concern), priority may go to other bugs.


The point is that writing a stack greedy function definition (rather
than a heap greedy alternative) is almost always the simpler option,
and would probably be more efficient too. It would also be perfectly
OK in *most* situations.

But being OK in most situations isn't good enough. You also (as far
as is possible given finite amount of total memory) want it to be
OK in pathological situations, or at least no worse than the heap
greedy version. Why should the decision to use a stack greedy definition
cause a crash at 8M whereas a heap greedy definition can happily use
much more without crashing?

I (like everyone else) tend to avoid knowingly writing stack greedy
definitions because of this. But I do this as a workaround for ghc's
currently (IMO) poor stack management, not because I consider code
that uses the stack to be inherently buggy.

Furthermore as I said earlier, using a lot of stack is purely a
ghc rts implementation detail. Other possible Haskell implementations
may not use a lot of stack for the same function (may not use a stack
at all). So you can't say a program has bugs just because it happens
to cause a stack overflow with ghc. You might reasonably argue that
it has a bug if it uses a lot of memory with any plausible Haskell
implementation (one way or another) *and* you can show that there is
an alternative implementation which uses asymptotically less memory.

Regards
--
Adrian Hey



Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-04 Thread Stefan O'Rear
On Mon, Feb 04, 2008 at 10:13:12PM +0000, Adrian Hey wrote:
 Also
 remember that this behaviour never wastes more than 50% of the stack,
 which is a relatively small amount.

 Only if the stack is relatively small. Would you say the same about
 heap, or about a stack that only needed 50% of heap space but ended
 up using all of it? Or money? Using twice as much as you need of
 anything is bad IMO.

Apparently you don't realize that GHC normally uses twice as much heap
as is needed, due to the decision to use a two-space copying collector
by default for the oldest generation. :)

Stefan




RE: [Haskell-cafe] Haskell maximum stack depth

2008-02-01 Thread Simon Peyton-Jones
| Yes, using lots of stack is clearly bad with ghc, but this is a ghc
| bug. In fact the only reason these programs do use lots of stack
| (vs. heap) is just a peculiarity of ghc rts implementation, so it
| really should be ghc that fixes the problem, or at least admits
| responsibility :-)

I don't think there's anything fundamental here. GHC allocates the stack in the 
heap, and it can grow as big as you like.  The size limit is simply to catch 
infinite recursion with a more helpful message than heap overflow.  I think.  
There is one peculiarity though: I don't think we ever shrink the stack, so 
once it gets big it stays big.  This could be fixed, though.

In short, feel free to set a very big max-stack size.  If you are going to grow 
a stack of size N or a heap-allocated list of size N, the stack version will 
probably be more efficient -- with the caveat about deallocation that I 
mentioned.

Simon


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Adrian Hey

Derek Elkins wrote:

While perhaps for a simple throw-away program it may be beneficial to
write code that allocates unnecessary stack, I personally consider
unnecessary stack use a bug.  A stack overflow, to me, is always
indicative of a bug.


The bug is in ghc stack management. Why is it so important that the
stack size is arbitrarily limited? It's just an intermediate data
structure, no different from any other intermediate data structure
you may build on the heap (well apart from its efficiency). But I guess
we would be in danger of having our programs run too fast if folk were
silly enough to make use of the stack.

So perhaps the current ghc defaults are too generous. What limit do you
think should be placed on the stack size that a non buggy program can
use?

Regards
--
Adrian Hey




Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Neil Mitchell
Hi Adrian,

 The bug is in ghc stack management. Why is it so important that the
 stack size is arbitrarily limited?

It's not, but it makes some things easier and faster. A better
question is why is it important for the stack to grow dynamically. The
answer is that it's not.

 It's just an intermediate data
 structure, no different from any other intermediate data structure
 you may build on the heap (well apart from its efficiency). But I guess
 we would be in danger of having our programs run too fast if folk were
 silly enough to make use of the stack.

In C putting something on the stack is massively more efficient than
putting it on the heap. In Haskell, there is nearly no difference, and
I can imagine some situations where the heap is actually faster. I
guess your comment about speed relates to that assumption?

 So perhaps the current ghc defaults are too generous. What limit do you
 think should be placed on the stack size that a non buggy program can
 use?

The current limits are fine for virtually all cases. They abort on
buggy programs, but it's rare that a non-buggy program will need to
change them, i.e. years of experience have ended up with good defaults.

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Jonathan Cast


On 29 Jan 2008, at 1:28 AM, Neil Mitchell wrote:


Hi Adrian,


The bug is in ghc stack management. Why is it so important that the
stack size is arbitrarily limited?


It's not, but it makes some things easier and faster. A better
question is why is it important for the stack to grow dynamically. The
answer is that it's not.


It's just an intermediate data
structure, no different from any other intermediate data structure
you may build on the heap (well apart from its efficiency). But I guess
we would be in danger of having our programs run too fast if folk were
silly enough to make use of the stack.


In C putting something on the stack is massively more efficient than
putting it on the heap. In Haskell, there is nearly no difference, and
I can imagine some situations where the heap is actually faster.


http://www.cs.princeton.edu/~appel/papers/45.ps is the traditional  
cite here, no?


jcc



Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Stefan O'Rear
On Tue, Jan 29, 2008 at 07:38:24PM +0000, Neil Mitchell wrote:
  A lot also depends on compiler (and associated rts), such as whether
  or not it translates to CPS, thereby in effect building a stack (in
  all but name) on the heap.
 
 If you burn a lot of heap, for not much gain, that's still a bug,
 albeit one which large limits might be able to paper over for a short
 amount of time. Is the GHC stack not stored on the heap? I thought it
 was. I know the Hugs stack is stored on the stack, and the Yhc one
 isn't.

GHC's stacks are stored in the heap, but as large contiguous objects.
GHC does not use the linked lists typical of CPS implementations, and
(as of the Great RTS Cleanout of 4.00) does not even have a chunked
stack.  (In case it proves relevant, the stack is not pinned - GHC's GC
has special code to fix up relocated stacks.)

Stefan




Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Adrian Hey

Jonathan Cast wrote:
http://www.cs.princeton.edu/~appel/papers/45.ps is the traditional cite 
here, no?


"Can be" is not the same as "is". A lot depends on exactly what you
call a stack and the relative efficiencies of stack vs. heap
implementations. Certainly my experience of library tuning tells
me that (with ghc at least), designing your code and data structures
to keep heap allocation down to an absolute minimum is very important.
So I'm very sceptical about claims that burning heap is just as
efficient. Heap allocation may be just as cheap, but reclaiming costs
more.

A lot also depends on compiler (and associated rts), such as whether
or not it translates to CPS, thereby in effect building a stack (in
all but name) on the heap.

So you could have exactly the same correct and *bug free* program
giving a stack overflow on ghc, but not on a CPS based compiler
(the CPS implementation just uses a shed load of heap instead).

Other implementations (Chicken Scheme?) effectively build their
heap on the stack, which never shrinks until it overflows. Is
that inherently buggy?

Surely the alleged bugginess of programs should not be dependent
on choice of compiler/rts?

As nobody has provided any sensible justification for the assertion
that using lots of stack (vs. heap) inherently is bad (and that
programs which do so have bugs, by definition), it seems to me this is
one of those quasi-religious beliefs, like (so called) global
variables or the cyclic module dependencies being a bug (by
definition).

Yes, using lots of stack is clearly bad with ghc, but this is a ghc
bug. In fact the only reason these programs do use lots of stack
(vs. heap) is just a peculiarity of ghc rts implementation, so it
really should be ghc that fixes the problem, or at least admits
responsibility :-)

Regards
--
Adrian Hey




Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Don Stewart
ndmitchell:
 Hi
 
  implementations. Certainly my experience of library tuning tells
  me that (with ghc at least), designing your code and data structures
  to keep heap allocation down to an absolute minimum is very important.
 
 Yes. Keeping allocation low is very important, be it heap or stack.
 Heap allocation is not free, it's just not any more expensive than the
 stack.
 
  A lot also depends on compiler (and associated rts), such as whether
  or not it translates to CPS, thereby in effect building a stack (in
  all but name) on the heap.
 
 If you burn a lot of heap, for not much gain, that's still a bug,
 albeit one which large limits might be able to paper over for a short
 amount of time. Is the GHC stack not stored on the heap? I thought it
 was. I know the Hugs stack is stored on the stack, and the Yhc one
 isn't.
 
  Surely the alleged bugginess of programs should not be dependent
  on choice of compiler/rts?
 
 Hmm, debatable. Things like the garbage collection inside tuples can
 change the space behaviour and implementability of some things, and
 are supported by GHC but not Hugs.
 
  As nobody has provided any sensible justification for the assertion
  that using lots of stack (vs. heap) inherently is bad
 
 My claim is that any program which needs to adjust the stack size has
 a laziness leak - since I've made a universally quantified claim, a
 couple of real examples should blow it out of the water.

I think that's a reasonable rule of thumb

-- Don


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Neil Mitchell
Hi

 implementations. Certainly my experience of library tuning tells
 me that (with ghc at least), designing your code and data structures
 to keep heap allocation down to an absolute minimum is very important.

Yes. Keeping allocation low is very important, be it heap or stack.
Heap allocation is not free, it's just not any more expensive than the
stack.

 A lot also depends on compiler (and associated rts), such as whether
 or not it translates to CPS, thereby in effect building a stack (in
 all but name) on the heap.

If you burn a lot of heap, for not much gain, that's still a bug,
albeit one which large limits might be able to paper over for a short
amount of time. Is the GHC stack not stored on the heap? I thought it
was. I know the Hugs stack is stored on the stack, and the Yhc one
isn't.

 Surely the alleged bugginess of programs should not be dependent
 on choice of compiler/rts?

Hmm, debatable. Things like the garbage collection inside tuples can
change the space behaviour and implementability of some things, and
are supported by GHC but not Hugs.

 As nobody has provided any sensible justification for the assertion
 that using lots of stack (vs. heap) inherently is bad

My claim is that any program which needs to adjust the stack size has
a laziness leak - since I've made a universally quantified claim, a
couple of real examples should blow it out of the water.

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Stefan O'Rear
On Tue, Jan 29, 2008 at 09:28:56AM +0000, Neil Mitchell wrote:
 Hi Adrian,
 
  The bug is in ghc stack management. Why is it so important that the
  stack size is arbitrarily limited?
 
 It's not, but it makes some things easier and faster. A better
 question is why is it important for the stack to grow dynamically. The
 answer is that it's not.

No, it is.  A single thread running a recursion-intensive program can
use many 10's of K's of stack; and 10's of K's per thread is not cheap
threads.  GHC has had a dynamically growing stack for many years,
starting at 4k and redoubling when exhausted; the stack size limit is a
bug checker and nothing else.

Stefan




Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Adrian Hey

Neil Mitchell wrote:


My claim is that any program which needs to adjust the stack size has
a laziness leak - since I've made a universally quantified claim, a
couple of real examples should blow it out of the water.


But people often deliberately introduce laziness leaks for improved
efficiency and in order to avoid space leaks.

http://haskell.org/pipermail/glasgow-haskell-users/2007-May/012467.html

Here there is essentially no difference between stackGobbler and
heapGobbler (they both use a stack), but in order to avoid a
stack overflow heapGobbler is tail recursive and explicitly
implements the stack as a reversed list accumulator, which then
has to be reversed at the end, so will burn twice as much heap
to get a result as stackGobbler (at least if we already know the
list has been evaluated at least up to the point where its tail gets
chopped off).
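
For reference, the shape being described is roughly this (a
reconstruction from the description above, not the actual heapGobbler
from the linked message):

heapGobbler :: Int -> [x] -> [x]
heapGobbler = go []
 where go rxs 0 _      = reverse rxs
       go rxs _ []     = reverse rxs
       go rxs n (x:xs) = go (x:rxs) (n-1) xs

-- Tail recursive, so O(1) stack, but the reversed accumulator plus the
-- final reverse build two heap copies of the prefix.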

Regards
--
Adrian Hey




Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-29 Thread Derek Elkins
On Tue, 2008-01-29 at 08:18 +, Adrian Hey wrote:
 Derek Elkins wrote:
  While perhaps for a simple throw-away program it may be beneficial to
  write code that allocates unnecessary stack, I personally consider
  unnecessary stack use a bug.  A stack overflow, to me, is always
  indicative of a bug.
 
 The bug is in ghc stack management. Why is it so important that the
 stack size is arbitrarily limited? It's just an intermediate data
 structure, no different from any other intermediate data structure
 you may build on the heap (well apart from it's efficiency). But I guess
 we would be in danger of having our programs run too fast if folk were
 silly enough to make use of the stack.

I said -unnecessary- stack use.  It makes no difference if the stack
size is only limited by how much core+swap space you have.  If you
have no restrictions on stack use and you get a stack overflow, I still
consider it a bug.  It's the unnecessary use of the stack, not the
overflow that is the bug; as I said, the stack overflow is -indicative-
of a bug.

However, a question comes up: why is unnecessary stack use bad but
unnecessary heap use acceptable?  The answer is, for me, it isn't, but
heap and stack are different: length shouldn't take O(n) space period;
whether it's stack space or heap space.  I consider an Out of Memory
error to be indicative of a bug too unless you are operating over large
data sets or knowingly using a particularly memory-hungry algorithm.  A
program that has live sets in the gigabyte range on small inputs are
buggy to me. Space leaks are bugs whether it's stack space or heap
space. But.

But, there is a difference (theoretically) in what goes on the heap and
what goes on the stack.  The stack is the -control- stack.  It, in
theory, holds (only) control information (which may include parameters
which are waiting for other parameters to be evaluated, and other such
things).  Usually stack overflows are caused by linear stack use.
Usually you need pathological control flow to need linear stack use,
so either you were unlucky and your data was pathological or you are
setting things up so that the actual case is the pathological one. [As
you might guess, I consider the pathological (left associative) use of
(++) to be a bug as well, albeit more of a time leak than a space
leak.]
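
The (++) pathology from the aside above, in sketch form (a standard
illustration, not code from this thread):

concatBad, concatGood :: [[a]] -> [a]
concatBad  = foldl (++) []  -- ((a ++ b) ++ c) ++ ... retraverses the
                            -- growing prefix each step: O(n^2) overall
concatGood = foldr (++) []  -- a ++ (b ++ (c ++ ...)): linear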

Practically speaking, stack overflows almost always indicate a (stack)
space leak or an unintended infinite loop.


 So perhaps the current ghc defaults are too generous. What limit do you
 think should be placed on the stack size that a non buggy program can
 use?

Hopefully, you can see why what, if any, limit is set is irrelevant to
my point.  All other things being equal, memory being the limit is
ideal.  Usually, all other things aren't equal.

[From a different email]
 As nobody has provided any sensible justification for the assertion
 that using lots of stack (vs. heap) inherently is bad (and that
 programs which do so have bugs, by definition), it seems to me this is
 one of those quasi-religious beliefs, like (so called) global
 variables or the cyclic module dependencies being a bug (by
 definition).

Unnecessarily using lots of stack -or- heap is inherently bad, though,
it may perhaps be the lesser of two evils.



Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Neil Mitchell
Hi Istarex,

 Does Haskell have a maximum stack depth restriction like Java &
 Python, or is it only limited by the maximum stack available on the
 system (i.e. what would be available to a C program)?

You are probably thinking that recursive functions use up program
stack, and hence the stack depth bounds the amount of recursion. In
Haskell, that isn't the case. Haskell is lazily evaluated, and has
tail recursion, which means that you rarely run into a problem with
exceeding the stack depth. In GHC the stack is stored in the heap
area of memory, so is not limited to the C stack, but can be set at
runtime with a flag (+RTS ... something ...) - but you won't need to.
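
(For reference, the flag being reached for is the RTS -K option; a
usage sketch, with a hypothetical program name:)

./myprog +RTS -K64m -RTS   -- raise the maximum stack size to 64 megabytes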

Thanks

Neil


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Neil Mitchell
Hi

 ghc uses a pretty conventional stack AFAIK, and it is arbitrarily
 limited, but you can change the limit with +RTS options.

GHC uses a conventional stack (in that you put stuff at the top, and
take it off from the top), but it is not a conventional stack in the
way imperative programs work. In an imperative program if you make a
function call, a frame gets pushed on the stack. When the function
call returns, the frame gets popped off the stack. In Haskell, lazy
evaluation makes it massively more confusing.

 Also, stack overflows are a pretty common cause of program failure
 IME, not at all rare. At least, far more common than whatever error
 message you get from heap exhaustion (can't even remember the last
 time I saw one of those).

Yes, I agree. However, in imperative programming stack overflow nearly
always means you've exceeded some recursion depth in the program, and
should think about refactoring it to use explicit loops. In Haskell,
stack overflow is usually a laziness bug. Same error message,
completely different causes. In Haskell, the solution to stack
overflow is almost never "increase the stack depth", but "fix your
laziness leak".
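
The classic case is lazy foldl (a sketch - the exact behaviour depends
on compiler and optimisation settings):

import Data.List (foldl')

leaky, fixed :: Integer
leaky = foldl  (+) 0 [1 .. 10000000]  -- lazy accumulator: builds a huge
                                      -- chain of (+) thunks on the heap,
                                      -- unwound on the stack at the end -
                                      -- may overflow when unoptimised
fixed = foldl' (+) 0 [1 .. 10000000]  -- strict accumulator: constant stack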

To answer the question "if Haskell has a stack depth restriction ...
like Java" the answer is no. It has a stack depth restriction, but it's
absolutely nothing like Java in the way it uses the stack, so you
can't compare them.

My guess is that Istarex's inner thought might have been along the
lines of "in Java if I do too much recursion I get a stack overflow,
but Haskell only has recursion, does that mean I get into stack
overflows all the time?". I could of course be entirely wrong ;-)

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Adrian Hey

Neil Mitchell wrote:

Hi Istarex,


Does Haskell have a maximum stack depth restriction like Java &
Python, or is it only limited by the maximum stack available on the
system (i.e. what would be available to a C program)?


You are probably thinking that recursive functions use up program
stack, and hence the stack depth bounds the amount of recursion. In
Haskell, that isn't the case. Haskell is lazily evaluated, and has
tail recursion, which means that you rarely run into a problem with
exceeding the stack depth. In GHC the stack is stored in the heap
area of memory, so is not limited to the C stack, but can be set at
runtime with a flag (+RTS ... something ...) - but you won't need to.


Sorry, but I think that's a very misleading answer to give to someone
(who's presumably a noob).

The answer is that no such limit is defined in the standard, for the
obvious reason that the standard does not presume anything about
runtime implementation, not even the presence of a stack.

ghc uses a pretty conventional stack AFAIK, and it is arbitrarily
limited, but you can change the limit with +RTS options.

Also, stack overflows are a pretty common cause of program failure
IME, not at all rare. At least, far more common than whatever error
message you get from heap exhaustion (can't even remember the last
time I saw one of those).

Regards
--
Adrian Hey
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread istarex
On Jan 28, 2008 1:07 PM, Neil Mitchell [EMAIL PROTECTED] wrote:

 To answer the question "if Haskell has a stack depth restriction ...
 like Java" the answer is no. It has a stack depth restriction, but it's
 absolutely nothing like Java in the way it uses the stack, so you
 can't compare them.
Fair enough.

 My guess is that Istarex's inner thought might have been along the
 lines of "in Java if I do too much recursion I get a stack overflow,
 but Haskell only has recursion, does that mean I get into stack
 overflows all the time?". I could of course be entirely wrong ;-)
Well, it wasn't quite that simplistic :-).  I was considering a
specifically non-tail recursive solution to a problem, and I was
wondering if Haskell has an artificial recursion depth limit.  I
didn't stop to consider laziness, and I now realize there's a whole
dimension of this question that I didn't consider.  Thanks for the
input guys.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Derek Elkins
On Mon, 2008-01-28 at 14:39 -0500, istarex wrote:
 On Jan 28, 2008 1:07 PM, Neil Mitchell [EMAIL PROTECTED] wrote:
 
  To answer the question "if Haskell has a stack depth restriction ...
  like Java" the answer is no. It has a stack depth restriction, but it's
  absolutely nothing like Java in the way it uses the stack, so you
  can't compare them.
 Fair enough.
 
  My guess is that Istarex's inner thought might have been along the
  lines of "in Java if I do too much recursion I get a stack overflow,
  but Haskell only has recursion, does that mean I get into stack
  overflows all the time?". I could of course be entirely wrong ;-)
 Well, it wasn't quite that simplistic :-).  I was considering a
 specifically non-tail recursive solution to a problem, and I was
 wondering if Haskell has an artificial recursion depth limit.  I
 didn't stop to consider laziness, and I now realize there's a whole
 dimension of this question that I didn't consider.  Thanks for the
 input guys.

You may want to look at
http://www.haskell.org/haskellwiki/Stack_overflow

While it may be beneficial, for a simple throw-away program, to
write code that allocates unnecessary stack, I personally consider
unnecessary stack use a bug.  A stack overflow, to me, is always
indicative of a bug.
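
In that spirit, a sketch of the difference (illustrative names, not
from this thread):

sumNaive :: [Integer] -> Integer
sumNaive []     = 0
sumNaive (x:xs) = x + sumNaive xs  -- O(n) stack: every (+) must wait
                                   -- for the recursive result

sumAcc :: [Integer] -> Integer
sumAcc = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in  acc' `seq` go acc' xs  -- strict tail call,
                                               -- O(1) stack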

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Jonathan Cast


On 28 Jan 2008, at 10:07 AM, Neil Mitchell wrote:


Hi


ghc uses a pretty conventional stack AFAIK, and it is arbitrarily
limited, but you can change the limit with +RTS options.


GHC uses a conventional stack (in that you put stuff at the top, and
take it off from the top), but it is not a conventional stack in the
way imperative programs work. In an imperative program if you make a
function call, a frame gets pushed on the stack. When the function
call returns, the frame gets popped off the stack. In Haskell, lazy
evaluation makes it massively more confusing.


Also, stack overflows are a pretty common cause of program failure
IME, not at all rare. At least, far more common than whatever error
message you get from heap exhaustion (can't even remember the last
time I saw one of those).


Yes, I agree. However, in imperative programming stack overflow nearly
always means you've exceeded some recursion depth in the program, and
should think about refactoring it to use explicit loops. In Haskell,
stack overflow is usually a laziness bug. Same error message,
completely different causes. In Haskell, the solution to stack
overflow is almost never "increase the stack depth", but "fix your
laziness leak".

To answer the question "if Haskell has a stack depth restriction ...
like Java" the answer is no. It has a stack depth restriction, but it's
absolutely nothing like Java in the way it uses the stack, so you
can't compare them.

My guess is that Istarex's inner thought might have been along the
lines of "in Java if I do too much recursion I get a stack overflow,
but Haskell only has recursion, does that mean I get into stack
overflows all the time?". I could of course be entirely wrong ;-)


Or, to put it another way, the bugs Java's stack overflow is designed  
to catch are considered good style in Haskell.


jcc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Henning Thielemann

On Mon, 28 Jan 2008, Jonathan Cast wrote:

 Or, to put it another way, the bugs Java's stack overflow is designed
 to catch are considered good style in Haskell.

I consider explicit recursion in Haskell to be bad style. One should
use higher order functions like 'map', 'fold', 'filter' and so on
wherever possible. Even if one needs explicit recursion, one should
separate the traversal through a data structure from the particular
operation applied to the elements.
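
For instance (a sketch, with made-up names): write the traversal once,
as a fold over the structure, and express each particular operation in
terms of it.

data Tree a = Leaf | Node (Tree a) a (Tree a)

foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
foldTree _ z Leaf         = z
foldTree f z (Node l x r) = f (foldTree f z l) x (foldTree f z r)

sumTree :: Tree Integer -> Integer
sumTree = foldTree (\l x r -> l + x + r) 0

depth :: Tree a -> Int
depth = foldTree (\l _ r -> 1 + max l r) 0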
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-01-28 Thread Jonathan Cast

On 28 Jan 2008, at 11:00 PM, Henning Thielemann wrote:



On Mon, 28 Jan 2008, Jonathan Cast wrote:


Or, to put it another way, the bugs Java's stack overflow is designed
to catch are considered good style in Haskell.


I consider explicit recursion in Haskell to be bad style. One should
use higher order functions like 'map', 'fold', 'filter' and so on
wherever possible. Even if one needs explicit recursion, one should
separate the traversal through a data structure from the particular
operation applied to the elements.


Maybe so, but this approach is impossible in Java (and not just  
because of the limitation on the size of the stack).  Imperative  
programs tend to rely on a small number of loop constructs repeated  
over and over again; good style in Haskell is in part recognizing  
when new constructs are preferable and implementing them  
(recursively).  It's a completely different mindset.
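
For instance (a sketch, not anyone's real code from this thread): a
new loop construct is just another recursive function, written once
and reused.

-- a while-loop analogue; the recursive call is a tail call, so it
-- runs in constant stack space
iterateUntil :: (a -> Bool) -> (a -> a) -> a -> a
iterateUntil done step x
  | done x    = x
  | otherwise = iterateUntil done step (step x)

-- e.g. iterateUntil (> 1000) (*2) 1 == 1024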


jcc
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe