Re: containing memory-consuming computations
On 19/04/2012 11:45, Herbert Valerio Riedel wrote:
> For the time-dimension, I'm already using functions such as
> System.Timeout.timeout which I can use to make sure that even a
> (forced) pure computation doesn't require (significantly) more
> wall-clock time than I expect it to.

Note that timeout uses wall-clock time, but you're really interested in
CPU time (presumably). If there are other threads running, then using
timeout will not do what you want.

You could track allocation and CPU usage per thread, but note that
laziness could present a problem: if a thread evaluates a lazy
computation created by another thread, it will be charged to the thread
that evaluated it, not the thread that created it. To get around this
you would need to use the profiling system, which tracks costs
independently of lazy evaluation.

On 19/04/2012 17:04, Herbert Valerio Riedel wrote:
> At least this seems easier than needing a per-computation or
> per-IO-thread caps. How hard would per-IO-thread caps be?

For tracking memory use, which I think is what you're asking for, it
would be quite hard. One problem is sharing: when a data structure is
shared between multiple threads, which one should it be charged to?
Both? To calculate the amount of memory use per thread you would need
to run the GC multiple times, once per thread, and observe how much
data is reachable. I can't think of any fundamental difficulties with
doing that, but it could be quite expensive. There might be some tricky
interactions with the reachability property of threads themselves: a
blocked thread is only reachable if the object it is blocked on is also
reachable.

Cheers,
Simon

_______________________________________________
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
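For reference, the wall-clock approach Herbert describes can be sketched
as below (a minimal sketch, not code from the thread: `boundedForce` is a
hypothetical helper, and `force` comes from the deepseq package). Per
Simon's caveat, this bounds elapsed time only, not the CPU time the
computation actually consumed:

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- Force a pure value to normal form, giving up after the stated number
-- of microseconds of *wall-clock* time. Returns Nothing on timeout.
boundedForce :: NFData a => Int -> a -> IO (Maybe a)
boundedForce micros x = timeout micros (evaluate (force x))

main :: IO ()
main = boundedForce 1000000 (sum [1 .. 100000 :: Integer]) >>= print
```

If other threads are hogging the CPU, the forced computation may be
charged for time it never got to use, which is exactly the mismatch
Simon points out.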
Re: containing memory-consuming computations
So, it would be pretty interesting if we could have an ST s style
mechanism, where the data structure is not allowed to escape. But I
wonder if this would be too cumbersome for anyone to use.

Edward

Excerpts from Simon Marlow's message of Fri Apr 20 06:07:20 -0400 2012:
> On 19/04/2012 17:04, Herbert Valerio Riedel wrote:
>> At least this seems easier than needing a per-computation or
>> per-IO-thread caps. How hard would per-IO-thread caps be?
>
> For tracking memory use, which I think is what you're asking for, it
> would be quite hard. One problem is sharing: when a data structure is
> shared between multiple threads, which one should it be charged to?
> Both? To calculate the amount of memory use per thread you would need
> to run the GC multiple times, once per thread, and observe how much
> data is reachable. I can't think of any fundamental difficulties with
> doing that, but it could be quite expensive. There might be some
> tricky interactions with the reachability property of threads
> themselves: a blocked thread is only reachable if the object it is
> blocked on is also reachable.
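The ST-style escape prevention Edward has in mind can be sketched with a
rank-2 quantifier, in the manner of runST (a hypothetical sketch only:
`Region`, `Tracked`, `allocate`, and `withTracked` are invented names, and
a real version would hide the constructors behind a module boundary):

```haskell
{-# LANGUAGE RankNTypes #-}

-- 'Tracked s a' is a region-allocated value tagged with a phantom 's'.
newtype Tracked s a = Tracked a
newtype Region s a = Region { unRegion :: IO a }

instance Functor (Region s) where
  fmap f (Region m) = Region (fmap f m)
instance Applicative (Region s) where
  pure = Region . pure
  Region f <*> Region x = Region (f <*> x)
instance Monad (Region s) where
  Region m >>= k = Region (m >>= unRegion . k)

allocate :: a -> Region s (Tracked s a)
allocate = pure . Tracked

-- Tracked values can be used, but only inside the same region.
withTracked :: Tracked s a -> (a -> b) -> Region s b
withTracked (Tracked a) f = pure (f a)

-- The rank-2 type forbids 'a' from mentioning 's', so no 'Tracked s'
-- value can be smuggled out of the region.
runRegion :: (forall s. Region s a) -> IO a
runRegion r = unRegion r

main :: IO ()
main = do
  n <- runRegion (do t <- allocate (40 :: Int)
                     withTracked t (+ 2))
  print n
```

Returning `t` itself from the `do` block would be rejected by the
typechecker, which is the whole point.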
Build failure with HEAD
Hi,

Building GHC HEAD (git HEAD SHA1 = 88f476b98709731d997ab57612cce4753cb65a0a)
this morning, I've encountered 2 build failures that seem to be a result
of some recent changes in the past day or two.

If I build a clean repository with `make -j13` on my 12 core machine, I get:

---
compiler/hsSyn/Convert.lhs:178:30:
    `td_fvs' is not a (visible) field of constructor `TyData'

compiler/hsSyn/Convert.lhs:190:31:
    `td_fvs' is not a (visible) field of constructor `TyData'
make[1]: *** [compiler/stage2/build/Convert.o] Error 1
make[1]: *** Waiting for unfinished jobs
---

However, reinvoking `make` with no parallelism and attempting to
continue the build (there's no race condition, obviously; I was just
curious), I get another (possibly related) build failure as well:

---
compiler/deSugar/DsMeta.hs:326:18:
    Constructor `FamInstD' should have 2 arguments, but has been given 1
    In the pattern: FamInstD fi_decl
    In the pattern: L loc (FamInstD fi_decl)
    In an equation for `repInstD':
        repInstD (L loc (FamInstD fi_decl))
          = do { dec <- repFamInstD fi_decl; return (loc, dec) }
make[1]: *** [compiler/stage2/build/DsMeta.o] Error 1
---

I would assume someone forgot to push something, perhaps, but I figured
naturally someone should be aware so they can fix it. :)

--
Regards,
Austin
Re: containing memory-consuming computations
On Fri, Apr 20, 2012 at 12:56, Edward Z. Yang ezy...@mit.edu wrote:
> So, it would be pretty interesting if we could have an ST s style
> mechanism, where the data structure is not allowed to escape. But I
> wonder if this would be too cumbersome for anyone to use.

Isn't this what monadic regions are for?

--
brandon s allbery                                  allber...@gmail.com
wandering unix systems administrator (available)   (412) 475-9364 vm/sms
Re: containing memory-consuming computations
Excerpts from Brandon Allbery's message of Fri Apr 20 19:31:54 -0400 2012:
>> So, it would be pretty interesting if we could have an ST s style
>> mechanism, where the data structure is not allowed to escape. But I
>> wonder if this would be too cumbersome for anyone to use.
>
> Isn't this what monadic regions are for?

That's right! But we have a hard enough time convincing people it's
worth it, just for file handles.

Edward
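The file-handle case Edward mentions can be sketched the same way (a
hypothetical sketch after Kiselyov and Shan's "Lightweight monadic
regions", not their actual API: `RHandle`, `withFileRegion`, and
`rPutStrLn` are invented names, and `region-demo.txt` is a scratch
file). The handle is closed when the region exits, and the phantom `s`
stops it escaping:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Exception (bracket)
import System.IO (Handle, IOMode (..), hClose, hPutStrLn, openFile)

-- A handle tagged with a phantom region parameter 's'.  A real library
-- would hide the constructor so the raw Handle cannot be extracted.
newtype RHandle s = RHandle Handle

-- Open a file for the duration of a region; the rank-2 quantifier means
-- the result type 'a' cannot mention 's', so the RHandle cannot escape,
-- and 'bracket' guarantees the handle is closed on the way out.
withFileRegion :: FilePath -> IOMode
               -> (forall s. RHandle s -> IO a) -> IO a
withFileRegion path mode body =
  bracket (openFile path mode) hClose (\h -> body (RHandle h))

rPutStrLn :: RHandle s -> String -> IO ()
rPutStrLn (RHandle h) = hPutStrLn h

main :: IO ()
main = do
  withFileRegion "region-demo.txt" WriteMode
    (\h -> rPutStrLn h "hello, region")
  readFile "region-demo.txt" >>= putStr
```

Full monadic regions go further (nested regions, handle duplication into
parent regions), which is where the convincing gets hard.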
Prelude for type-level programming
Hello,

To experiment with some of GHC's new features in HEAD, I have started
porting part of the Prelude from values and functions to types and
constraints. For example, comparing lists:

    instance Compare '[] '[] EQ
    instance Compare '[] (x ': xs) LT
    instance Compare (x ': xs) '[] GT
    instance (Compare x y o,
              Case o [ EQ --> Compare xs ys r,
                       o  --> o ~ r ])
          => Compare (x ': xs) (y ': ys) r

    Prelude.Type> T :: (Compare '[I 1, I 2] '[I 1, I 3] a) => T a
    LT

Sometimes I get nice error messages:

    Prelude.Type> T :: If (I 1 == I 2) (a ~ "hello") (a ~ 3) => T a
    Kind mis-match
    The left argument of the equality predicate had kind `Symbol',
    but `3' has kind `Nat'

But often GHC refuses to type large constraints when there are too many
constraint kinds and free variables. It also gets confused if I pretend
to use higher-order kinds:

    instance ((a = Const b) c) => (a b) c

    Kind mis-match
    The first argument of `' should have kind `k0 k1',
    but `a' has kind `k0 k1'
    In the instance declaration for `(a b) c'

I'm not sure yet how useful this is, but I hope it can at least
entertain a few people and stress-test the typechecker.

http://code.atnnn.com/projects/type-prelude/repository/entry/Prelude/Type.hs

    darcs get http://code.atnnn.com/darcs/type-prelude/

Etienne Laurin
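The same lexicographic list comparison can be written with closed type
families and GHC.TypeLits (a sketch of the same idea, not code from the
type-prelude repository: `CompareList` and `Lex` are invented names, and
this specialises the element type to Nat rather than staying
kind-polymorphic like the constraint-based encoding above):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}
import GHC.TypeLits (CmpNat, Nat)

-- Lexicographic comparison of type-level lists of Nats.
type family CompareList (xs :: [Nat]) (ys :: [Nat]) :: Ordering where
  CompareList '[]       '[]       = 'EQ
  CompareList '[]       (y ': ys) = 'LT
  CompareList (x ': xs) '[]       = 'GT
  CompareList (x ': xs) (y ': ys) = Lex (CmpNat x y) xs ys

-- If the heads compare EQ, recurse on the tails; otherwise keep the result.
type family Lex (o :: Ordering) (xs :: [Nat]) (ys :: [Nat]) :: Ordering where
  Lex 'EQ xs ys = CompareList xs ys
  Lex o   xs ys = o

-- Compile-time check, in the spirit of the GHCi session above.
check :: (CompareList '[1, 2] '[1, 3] ~ 'LT) => ()
check = ()

main :: IO ()
main = case check of () -> putStrLn "'[1,2] `compare` '[1,3] = LT"
```

If the constraint on `check` were wrong, the module would simply fail to
compile, which makes these equations easy to unit-test.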