Re: ./T and ./T > log

2009-02-24 Thread Malcolm Wallace
Simon Marlow <marlo...@gmail.com> wrote:

> stdout should be flushed when the program exits, regardless of whether
> it exits as a result of a clean exit or an exception.  I've just
> checked the code, and that's certainly what is supposed to happen.
> If anyone has evidence to the contrary, please submit a bug report!

I believe flushing of file handles on program exit is handled by
finalizers attached to the handle.  Until recently, ghc did not
guarantee that any finalizer would ever run.
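
If one does not want to rely on finalizers (or on the RTS) at all,
flushing explicitly before exit is cheap insurance.  A minimal sketch,
not code from this thread:

import System.IO (hFlush, stdout)
import Control.Exception (finally)

main :: IO ()
main = realMain `finally` hFlush stdout
  where
    -- whatever the program actually does; `finally` runs the flush
    -- whether realMain returns normally or throws
    realMain = putStr "output that must not be lost"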

Regards,
Malcolm


Re: how dynamic stack approximation works

2009-02-24 Thread Malcolm Wallace
Peter Hercek <pher...@gmail.com> wrote:

> > http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack
>
> I was writing about a way to maintain the stack as described in
> point 6 of the page (provided that point is about the dynamic stack).

The whole page (including point 6) is about explicitly maintaining a
(simulated) lexical call stack, not the dynamic one.

> As I already said in other emails, I would rather choose the dynamic stack
> over the lexical one if I were forced to choose only one of them. Actually,
> I almost do not care about the lexical stack and still do not understand
> why people want it.

In a lazy language, the dynamic stack rarely tells you anything of
interest for debugging.  For the value at the top of the stack, you get
one of many possible _demand_ chains, rather than the creation chain.
The demanding location is pretty much guaranteed not to be the site of
the bug.

But you can think of the lexical call stack as what _would_ have been
the dynamic call stack, if only the language were completely strict
rather than lazy.  Most people find that notion more intuitive for
the purposes of finding program errors.
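
A tiny example (mine, not taken from the wiki page) of why the demand
chain is unhelpful: the broken thunk below is built in mkPair, but it is
only forced later, inside render.  A dynamic stack captured when the
divide-by-zero exception fires points at render and main; a lexical call
stack would point at mkPair, where the faulty expression was written.

mkPair :: Int -> (Int, Int)
mkPair n = (n + 1, 100 `div` (n - n))    -- the bug: builds a divide-by-zero thunk

render :: (Int, Int) -> String
render (x, y) = show x ++ " " ++ show y  -- the thunk is only demanded here

main :: IO ()
main = putStrLn (render (mkPair 42))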

> Sure, but the plan to maintain an approximate debugging dynamic stack
> depends on one thing:

There is no need to approximate the dynamic stack.  It is directly
available to the RTS, in full detail.

Regards,
Malcolm


Re: how dynamic stack approximation works

2009-02-24 Thread Simon Marlow

Peter Hercek wrote:

> Simon Marlow wrote:
>> You seem to have a plan for maintaining a dynamic stack for debugging,
>> perhaps you could flesh out the details in a wiki page, mainly to
>> ensure that we're discussing the same thing?
>
> Sure, but the plan to maintain an approximate debugging dynamic stack
> depends on one thing:
> The number of items (continuations) on the return stack from the
> beginning of /case tick<n> of {_ -> e}/ to the moment when we can check
> the count of items in the return stack inside /tick<n>/ is constant and
> known for a given runtime version of ghc, or variable but known for each
> call individually.  This is important for finding out the number of return
> addresses on the return stack just before the execution of
> /case tick<n> of {_ -> e}/.


I don't fully understand what it is you mean.  e.g. I don't know what "from
the beginning of /case tick<n> of {_ -> e}/" means.


Let me try to explain a couple of things that might (or might not!) help 
clarify.  We don't normally see


 case tick<n> of { _ -> e }

because the byte-code generator turns this into

 let
   z = case tick<n> of { _ -> e }
 in
   z

The debugger paper explains why we do this.  Anyway, the byte code for the
closure for z does this:


  - if the breakpoint at n is enabled then stop,
  - otherwise, evaluate e

i.e. it doesn't push any stack frames.

Does that help frame your question?

Cheers,
Simon


Re: ./T and ./T > log

2009-02-24 Thread Simon Marlow

Malcolm Wallace wrote:

> Simon Marlow <marlo...@gmail.com> wrote:


>> stdout should be flushed when the program exits, regardless of whether
>> it exits as a result of a clean exit or an exception.  I've just
>> checked the code, and that's certainly what is supposed to happen.
>> If anyone has evidence to the contrary, please submit a bug report!


> I believe flushing of file handles on program exit is handled by
> finalizers attached to the handle.  Until recently, ghc did not
> guarantee that any finalizer would ever run.


Not exactly.

There's a top-level exception handler that flushes stdout and stderr, so 
they always get flushed on exit (or at least, they are supposed to).  Look 
in base:GHC.TopHandler for the code.


Other Handles will only get flushed if their finalizers run, and nothing
has changed here: we now guarantee execution of C finalizers, but that
doesn't apply to the finalizers for Handles, which are Haskell code.
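
A sketch (not code from this thread) of the usual way to avoid relying
on Handle finalizers altogether: close the Handle explicitly, e.g. via
withFile or bracket, so the flush happens deterministically even if an
exception escapes:

import System.IO

main :: IO ()
main = withFile "out.log" WriteMode $ \h -> do
  hPutStrLn h "this line is flushed when withFile closes the handle"
  -- even if later actions throw, withFile closes (and therefore
  -- flushes) h on the way out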


Cheers,
Simon



Performance Issue

2009-02-24 Thread James Swaine
i'm implementing a benchmark which includes a detailed specification for a
random number generator.  for any of the kernels outlined in the benchmark,
i might have to generate a set of random numbers R, which has a length n,
using the following formulas:

R[k] = (2^-46) * X[k], where

X[k] = (a^k * s) mod 2^46

where the values of a and s are constant and defined below.
many of the kernels in the benchmark require a large number of randoms to be
generated (in the tens of millions).  when i invoke the following getRandAt
function that many times to build up a list, evaluation of the list takes
forever (somewhere between 5 and 10 minutes).  i've tried optimizing this
several different ways, with no luck.  i thought i might post my code here
and see if anyone notices anything i'm doing wrong that might be causing
such a large bottleneck:

--constants
a :: Int64
a = 5^13

divisor :: Int64
divisor = 2^46

multiplier :: Float
multiplier = 2**(-46)


--gets r[k], which is the value at the kth
--position in the overall sequence of
--pseudorandom numbers
getRandAt :: Int64 -> Int64 -> Float
getRandAt 0 seed = multiplier * (fromIntegral seed)
getRandAt k seed = multiplier * (fromIntegral x_next)
  where
    x_prev = (a^k * seed) `mod` divisor
    x_next = (a * x_prev) `mod` divisor

thanks all in advance for your help!
-james
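
One common restructuring (a sketch only, with invented names, not the
benchmark's reference code): since X[k+1] = (a * X[k]) mod 2^46, the
whole sequence can be generated incrementally, carrying X[k] forward
instead of recomputing a^k for every element.  Word64 keeps the overflow
wrap-around harmless (2^46 divides 2^64, so taking `mod` 2^46 afterwards
still yields the right residue), and Double is used because a 46-bit
fraction does not fit in Float's 24-bit mantissa:

import Data.Word (Word64)

aW, divisorW :: Word64
aW = 5 ^ (13 :: Int)
divisorW = 2 ^ (46 :: Int)

multiplierD :: Double
multiplierD = 2 ** (-46)

-- the first n elements of R, starting from X[0] = seed
randomsFrom :: Int -> Word64 -> [Double]
randomsFrom n seed =
    take n [ multiplierD * fromIntegral x | x <- iterate step seed ]
  where
    step x = (aW * x) `mod` divisorW  -- wraps mod 2^64 on overflow; harmless mod 2^46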


Bug in STM with data invariants

2009-02-24 Thread Ben Franksen
My ghc(i) crashes when using STM data invariants. This little piece of code
demonstrates the problem:

module Bug where

import Control.Concurrent.STM

test = do
  x <- atomically $ do
    v <- newTVar 0
    always $ return True -- remove this line and all is fine
    return v
  atomically (readTVar x) >>= print

This is what ghci makes of it:

b...@sarun ghci Bug.hs
GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
[1 of 1] Compiling Bug  ( Bug.hs, interpreted )
Ok, modules loaded: Bug.
*Bug> test
Loading package syb ... linking ... done.
Loading package array-0.2.0.0 ... linking ... done.
Loading package stm-2.1.1.2 ... linking ... done.
zsh: segmentation fault  ghci Bug.hs

I am using ghc-6.10.1, freshly installed from source, with just a 'cabal
install stm' done afterwards.

BTW, the documentation for Control.Concurrent.STM.TVar lists... nothing.
The same goes for Control.Monad.STM.  Well, at least the source link works,
so one isn't completely lost... :-)

Cheers
Ben



optimization and rewrite rules questions

2009-02-24 Thread Claus Reinke

In the recently buried haskell-cafe thread "speed: ghc vs gcc",
Bulat pointed out some of the optimizations that GHC doesn't
do, such as loop unrolling.  I suggested a way of experimenting
with loop unrolling, using Template Haskell to bypass GHC's
blind spot (it usually doesn't unfold recursive definitions, see

http://www.haskell.org/pipermail/glasgow-haskell-users/2007-July/012936.html ,
but if we unfold a loop combinator at compile time, GHC's
normal optimizations can take over from there):

http://www.haskell.org/pipermail/haskell-cafe/2009-February/056241.html
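
To make the idea concrete without Template Haskell (a hand-written
sketch with invented names): a counting loop and the same loop unrolled
by a factor of 4, which is roughly the kind of code such an unroller
would generate at compile time, so that GHC's ordinary optimizations can
then work on the unrolled body:

-- fold f over the indices i..n with an accumulator
loop :: Int -> Int -> (Int -> Int -> Int) -> Int -> Int
loop i n f acc
  | i > n     = acc
  | otherwise = loop (i + 1) n f (f acc i)

-- the same loop, unrolled by hand by a factor of 4
loop4 :: Int -> Int -> (Int -> Int -> Int) -> Int -> Int
loop4 i n f acc
  | i + 3 <= n = loop4 (i + 4) n f
                   (f (f (f (f acc i) (i + 1)) (i + 2)) (i + 3))
  | otherwise  = loop i n f acc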

While this is fine as far as it goes (it should really be handled
within GHC), and does offer some initial speedup, Bulat pointed 
out that GCC does further optimizations after unrolling, such as 
reassociating sums to expose potential for constant folding:


http://www.haskell.org/pipermail/haskell-cafe/2009-February/056367.html

(since the ghc -ddump-simpl output doesn't show this optimization,
I assume that gcc handles it, and the *ghc* in that message is a
typo, but haven't checked - how would I do that, btw?). In this
case, GHC optimizations following the loop unrolling leave a sum
like (note the repeated variable interspersed with constants)

(GHC.Prim.+#
   (GHC.Prim.+# ww_s1lN 3)
   (GHC.Prim.+#
      (GHC.Prim.+# ww_s1lN 2)
      (GHC.Prim.+#
         (GHC.Prim.+# ww_s1lN 1)
         (GHC.Prim.+# (GHC.Prim.+# ww_s1lN 0) ww_s1lR))))

which can be simplified (assuming associativity and commutativity
of + here...) after sorting the variable references and constants into
separate groups.
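
Grouping that particular sum by hand (my arithmetic, not compiler
output): ww_s1lN occurs four times and the constants sum to 3+2+1+0 = 6,
so the whole expression is equivalent to

   GHC.Prim.+# (GHC.Prim.*# ww_s1lN 4) (GHC.Prim.+# ww_s1lR 6)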

We currently inherit such optimizations when using -fvia-C, even 
though GHC sometimes produces C code that GCC can't handle 
optimally. If I understand correctly, -fvia-C is on its way out - is 
that correct, and what plans are there for recovering the optimizations 
previously left to GCC?


The next thing I was looking at was rewrite rules, the obvious GHC
tool for implementing this kind of rule

   (var+const1)+(var+const2) == 2*var + const3

and I ran into more questions:

- can RULES left-hand sides test for variables (I don't want to
   reassociate sums randomly, that wouldn't terminate; instead,
   I want to float out subterms that are non-variable, and group
   repeated variables)?  (a sketch of such a rule follows these questions)

- is there any way to control the rewrite strategy, similar to
   strategy combinators (if rules are applied all over the place,
   they introduce new structure not covered by rules; if I could
   limit the strategy to top-down, or bottom-up, I could at least
   cover some special cases)?

- how would one handle this kind of optimization in GHC in full
   generality? wait for compiler plugins? are there features of rewrite 
   rules that I'm missing? would it make sense to flag rewrite rules 
   system improvements as a GHC GSoC project, given that GHC 
   will have to pull its weight there when moving away from GCC?
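
For reference, the kind of rule meant above, written as a top-level
RULES pragma (a sketch only; the metavariables a and b match arbitrary
expressions, not just literals, which is exactly the limitation the
first question is about):

{-# RULES
"reassoc/sketch" forall (x :: Int) a b.
    (x + a) + (x + b) = 2 * x + (a + b)
  #-}

In practice a rule phrased against the overloaded (+) may also fail to
fire once (+) has been inlined or specialised away, which only adds to
the strategy-control problem above.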


Claus
