Functional Reactive Programming can model this sort of 'change over time'
incremental computation, but I doubt you'd get a performance benefit from it
unless your operations are considerably more expensive than '+' on numbers.
Look into the 'Reactive' library, and Conal Elliott's paper on it
Hi Benjamin,
My question is, roughly, is there already an existing framework for
incremental evaluation in Haskell?
We at Utrecht have done some work on this:
http://people.cs.uu.nl/andres/Incrementalization/
Simply put, if your computation is a fold/catamorphism, then you can easily take
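A minimal sketch of that idea (illustrative names only, not the Incrementalization library's actual API): cache the fold's result alongside the data, so adding an element costs one step instead of a full re-fold.

```haskell
-- Sketch: a fold's result can be updated incrementally when a new
-- element arrives, instead of recomputing from scratch.
-- These names are illustrative, not the Utrecht library's API.

-- Cache the fold's result alongside the data.
data Cached a b = Cached { values :: [a], result :: b }

-- Build the initial cache with one full fold.
fromListC :: (b -> a -> b) -> b -> [a] -> Cached a b
fromListC step z xs = Cached xs (foldl step z xs)

-- Adding an element costs one 'step', not a whole re-fold.
insertC :: (b -> a -> b) -> a -> Cached a b -> Cached a b
insertC step x (Cached xs r) = Cached (x : xs) (step r x)

main :: IO ()
main = do
  let c  = fromListC (+) 0 [1 .. 100 :: Int]
      c' = insertC (+) 5 c
  print (result c)   -- 5050
  print (result c')  -- 5055
```

This only pays off when 'step' is cheap relative to the whole fold, which echoes the point above about '+' on numbers being too cheap to benefit.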
David Barbour wrote:
Benjamin Redelings wrote:
My question is, roughly, is there already an existing framework for
incremental evaluation in Haskell?
Functional Reactive Programming can model this sort of 'change over
time' incremental computation, but I doubt you'd get a performance
benefit
Hello,
I'm trying to make the following faster:
Data.Vector.Generic.fromList list
where 'list' is some expression yielding a list.
(For example: map (+1) $ map (*2) [1..100])
Note that Data.Vector.Generic.fromList is defined as:
fromList :: Vector v a => [a] -> v a
{-# INLINE fromList #-}
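Because fromList is INLINE, GHC can often fuse the list producer away; an alternative sketch is to skip the intermediate list entirely and build the vector directly, since Data.Vector's own combinators participate in stream fusion:

```haskell
import qualified Data.Vector as V

-- Build the vector directly; V.enumFromTo and V.map are subject to
-- stream fusion, so no intermediate list is allocated.
direct :: V.Vector Int
direct = V.map (+1) (V.map (*2) (V.enumFromTo 1 100))

-- The list-based version from the post, for comparison.
viaList :: V.Vector Int
viaList = V.fromList (map (+1) (map (*2) [1 .. 100]))

main :: IO ()
main = print (direct == viaList)  -- True
```

Whether the list version actually fuses depends on optimization level (-O or -O2) and on the list being a good producer, so benchmarking both forms is the safe move.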
Hi Cafe,
We are lucky to have a plethora of data structures out there. But it does
make choosing one off hackage difficult at times. In this case I'm *not*
looking for a O(1) access bit vector (Data.Vector.Unboxed seems to be the
choice there), but an efficient representation for a list of bits
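One compact representation to compare against hackage offerings: pack the bits into an Integer plus a length. This is a hand-rolled sketch for illustration, not a library recommendation.

```haskell
import Data.Bits (setBit, testBit)

-- A compact bit list: an Integer holding the bits, plus a length.
-- Sketch only; measure against hackage alternatives before committing.
data BitList = BitList { len :: Int, bits :: Integer }

fromBools :: [Bool] -> BitList
fromBools bs = BitList (length bs) (foldl set 0 (zip [0 ..] bs))
  where
    set acc (i, True)  = setBit acc i
    set acc (_, False) = acc

toBools :: BitList -> [Bool]
toBools (BitList l n) = map (testBit n) [0 .. l - 1]

main :: IO ()
main = do
  let xs = [True, False, True, True, False]
  print (toBools (fromBools xs) == xs)  -- True
```

Indexing is O(word count) on an Integer rather than O(1), so this trades access speed for space; an unboxed vector of Bool sits at the other end of that trade.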
On Fri, Oct 7, 2011 at 3:17 AM, Heinrich Apfelmus apfel...@quantentunnel.de
wrote:
FRP is somewhat orthogonal to incremental computation because FRP is
focusing on expressiveness while incremental computation focuses on
performance. You can formulate some incremental algorithms in terms of
Hi all,
Is it a known issue that -fllvm doesn't produce split objs?
I tried ghc 7.0.3 with llvm 2.8 and ghc 7.2.1 with llvm 2.9.
Thanks,
Mathijs
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
I have made a dummy program that seems to exhibit the same GC
slowdown behavior, minus the segmentation faults. Compiling with -threaded
and running with -N12 I get very bad performance (3x slower than -N1),
running with -N12 -qg it runs approximately 3 times faster than -N1. I don't
know if I
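For reference, the compile and run lines implied above would look roughly like this (flag names are from the post; -rtsopts is assumed so the binary accepts RTS flags):

```shell
# Compile with the threaded runtime, allowing RTS options at run time.
ghc -O2 -threaded -rtsopts Main.hs

# 12 capabilities with the parallel GC (the slow case reported above):
./Main +RTS -N12

# 12 capabilities with the parallel GC disabled (-qg), the fast case:
./Main +RTS -N12 -qg
```

-qg turns off parallel garbage collection entirely; the intermediate option -qg1 (parallel GC for the old generation only) may also be worth measuring.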
From: Peter Gammie pete...@gmail.com Oct 6, 2011 6:58 PM
Ben,
On 07/10/2011, at 8:55 AM, Benjamin Redelings I wrote:
My question is, roughly, is there already an existing framework for
incremental
evaluation in Haskell?
Magnus Carlsson did something monadic several years ago.
I couldn't find one on hackage that isn't better described as a RegEx
library.
I'm looking for things like minimization, completion, etc. kinda like
http://www.cis.upenn.edu/~cis639/docs/xfst.html
However, I may have just missed it.
--
Alex R
I'm not sure if this is at all related, but if I run a small Repa program
with more threads than I have cores/CPUs then it gets drastically slower, I
have a dual core laptop - and -N2 makes my small program take approximately
0.6 of the time. Increasing to -N4 and we're running about 2x the time,
I am guessing that it is slowdown caused by GC needing to co-ordinate with
blocked threads. That requires lots of re-scheduling to happen in the
kernel.
This is a hard problem I think, but also increasingly important as
virtualization becomes more important and the number of schedulable cores
It's GHC, and partly the OS scheduler in some sense. Oversaturating,
i.e. using an -N option greater than your number of logical cores
(including hyperthreads), will typically slow down your program. This
isn't uncommon, and is well known - GHC's lightweight threads have an
M:N threading model, but for good
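One way to avoid oversaturation from inside the program is to cap the capability count at the machine's processor count rather than trusting the -N value. This sketch uses GHC.Conc; note setNumCapabilities requires GHC >= 7.4, so it would not help on the 7.0/7.2 versions mentioned earlier.

```haskell
import GHC.Conc (getNumProcessors, setNumCapabilities)

-- Cap capabilities at the number of processors the machine reports,
-- regardless of what -N was passed on the command line.
-- (Requires GHC >= 7.4; earlier GHCs fix -N at startup.)
main :: IO ()
main = do
  procs <- getNumProcessors
  setNumCapabilities procs
  putStrLn ("Using " ++ show procs ++ " capabilities")
```

On older GHCs the equivalent is simply passing a sane -N at startup, e.g. -N2 on the dual-core laptop described above.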