http://www.vex.net/~trebla/haskell/lazy.xhtml
It is half done.
Even though they are advertised as parallel programming tools, parMap and other
such functions work in parallel over *sequential*-access data
structures (i.e. linked lists). We want flat, strict, unpacked data
structures to get good performance out of parallel algorithms. DPH,
repa, and even vector
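For illustration, a minimal sketch (assuming the parallel and vector packages; the workload and chunk size are made up) of parallelising over a linked list versus keeping the data flat and unboxed:

import Control.Parallel.Strategies (parListChunk, rdeepseq, using)
import qualified Data.Vector.Unboxed as U

-- list version: easy to parallelise with strategies, but the spine is a
-- boxed, sequential-access linked list
sumSquaresList :: [Double] -> Double
sumSquaresList xs = sum (map (^ 2) xs `using` parListChunk 10000 rdeepseq)

-- vector version: flat, strict and unboxed, but evaluated here in one
-- sequential pass; parallel array processing is what DPH and repa target
sumSquaresVec :: U.Vector Double -> Double
sumSquaresVec = U.sum . U.map (^ 2)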
Using insertWith' gets the time down to 30-40 secs (thus only 3-4
times slower than PHP).
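For context, a hedged sketch of the difference (the word-counting map is a hypothetical stand-in, not the original program): the lazy insertWith piles up (+) thunks in the map's values, while insertWith' (Data.Map.Strict.insertWith in newer containers) forces each combined value on insertion.

import qualified Data.Map as M
import Data.List (foldl')

-- lazy: values accumulate as (1 + (1 + ...)) thunks
countLazy :: [String] -> M.Map String Int
countLazy = foldl' (\m w -> M.insertWith (+) w 1 m) M.empty

-- strict: each count is forced when inserted, so values stay plain Ints
countStrict :: [String] -> M.Map String Int
countStrict = foldl' (\m w -> M.insertWith' (+) w 1 m) M.empty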
PHP is still at 13 secs, does not require installing libraries, does
not require compilation, and is trivial to write.
A trivial C++ application takes 11-12 secs and, even with some googling,
was trivial to
On Tue, Jan 31, 2012 at 6:05 AM, Marc Weber marco-owe...@gmx.de wrote:
I didn't say that I tried your code. I gave the enumerator package a try
counting lines, which I expected to behave similarly to conduits
because both serve a similar purpose.
Then I hit the issue that sourceFile returns chunked lines
jsonLines :: C.Resource m => C.Conduit B.ByteString m Value
jsonLines = C.sequenceSink () $ do
    val <- CA.sinkParser json'
    CB.dropWhile isSpace_w8
    return $ C.Emit () [val]
Adding a \state -> (the way Felipe Lessa told me) makes it work, and
it runs in about 20 sec, even though some conduit overhead is likely
to take place.
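Presumably the fixed version looks roughly like this (same module aliases as the snippet above; conduit 0.2's sequenceSink expects a function of the state, which is simply threaded through unchanged here):

jsonLines :: C.Resource m => C.Conduit B.ByteString m Value
jsonLines = C.sequenceSink () $ \state -> do
    val <- CA.sinkParser json'
    CB.dropWhile isSpace_w8
    return $ C.Emit state [val]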
On Tue, Jan 31, 2012 at 1:36 PM, Marc Weber marco-owe...@gmx.de wrote:
Adding a \state -> (the way Felipe Lessa told me) makes it work, and
it runs in about 20 sec, even though some conduit overhead is likely
to take place.
Just out of curiosity: did you use conduit 0.1 or 0.2?
Cheers! =)
Excerpts from Felipe Almeida Lessa's message of Tue Jan 31 16:49:52 +0100 2012:
Just out of curiosity: did you use conduit 0.1 or 0.2?
I updated to 0.2 today because I was looking for a Monad instance for
SequenceSink, but didn't find it because I tried using it the wrong way
(\state ->, see last
Hi Everyone,
I had a similar experience with a similar type of problem. The
application was analyzing web pages that our web crawler had collected;
well, not the pages themselves, but metadata about when each page was
collected.
The basic query was:
SELECT Domain, Date, COUNT(*)
FROM Pages
GROUP BY Domain, Date
On Tue, Jan 31, 2012 at 9:19 PM, Steve Severance
ssevera...@alphaheavy.com wrote:
The other thing is that deepseq is very important. IMHO this needs to be
a first-class language feature, with all major libraries shipping with
deepseq instances. There seems to have been some movement on this
On Tue, Jan 31, 2012 at 1:22 PM, Gregory Collins
g...@gregorycollins.net wrote:
I completely agree on the first part, but deepseq is not a panacea either.
It's a big hammer and overuse can sometimes cause wasteful O(n) no-op
traversals of already-forced data structures. I also definitely
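To make that concrete, a hedged sketch with a hypothetical record (hand-written instance, as generic deriving for NFData came later): shipping an NFData instance lets callers deep-force a value, but calling force/deepseq again on data that is already fully evaluated is exactly the O(n) no-op traversal mentioned above.

import Control.DeepSeq

data Entry = Entry { entryKey :: String, entryCount :: Int }

instance NFData Entry where
    rnf (Entry k c) = rnf k `seq` rnf c

-- force deep-evaluates the whole list as soon as the result is demanded;
-- doing this repeatedly on already-forced data only wastes a traversal
buildEntries :: [(String, Int)] -> [Entry]
buildEntries xs = force (map (uncurry Entry) xs)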
On Tue, Jan 31, 2012 at 12:19 PM, Steve Severance
ssevera...@alphaheavy.com wrote:
The webpage data was split across tens of thousands of compressed
binary files. I used enumerator to load these files and select the appropriate
columns. This step was performed in parallel using parMap and
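A simplified sketch of that shape of pipeline, without enumerator and with hypothetical file names; a per-file row count stands in for the real column selection, and every file is read strictly up front:

import Control.Parallel.Strategies (parMap, rdeepseq)
import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
    let files = ["pages-00001.bin", "pages-00002.bin"]   -- hypothetical paths
    contents <- mapM B.readFile files                     -- strict reads
    let rowCounts = parMap rdeepseq (length . B.lines) contents
    print (sum rowCounts)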
On Sun, 2012-01-29 at 23:47 +0100, Marc Weber wrote:
So maybe the JSON parsing library also kept too
many unevaluated things in memory. So I could either start writing my
own JSON parsing library (being more strict)
Jfyi, aeson gained strict parser variants json' and value' [1]
some time ago
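For example, something along these lines (file name is hypothetical; module layout as in aeson/attoparsec) uses the strict json' parser so the whole Value is forced while parsing instead of accumulating thunks:

import qualified Data.Aeson.Parser as A (json')
import qualified Data.Attoparsec.ByteString as AP (parseOnly)
import qualified Data.ByteString as B

main :: IO ()
main = do
    s <- B.readFile "input.json"               -- hypothetical input
    case AP.parseOnly A.json' s of
        Left err  -> putStrLn ("parse error: " ++ err)
        Right val -> print val                 -- fully evaluated Value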
On Mon, Jan 30, 2012 at 6:21 AM, Herbert Valerio Riedel h...@gnu.org wrote:
On Sun, 2012-01-29 at 23:47 +0100, Marc Weber wrote:
So maybe the JSON parsing library also kept too
many unevaluated things in memory. So I could either start writing my
own JSON parsing library (being more strict)
On Sun, Jan 29, 2012 at 11:25:09PM +0100, Ertugrul Söylemez wrote:
First of all, /learning/ to optimize Haskell can be difficult. The
optimizing itself is actually fairly easy in my experience, once you
understand how the language works.
Given the fact that you have obviously mastered the
On 29 Jan 2012, at 22:25, Ertugrul Söylemez wrote:
A strict-by-default Haskell comes with the
implication that you can throw away most of the libraries, including the
base library. So yes, a strict-by-default Haskell is very well
possible, but the question is whether you actually want that.
On Mon, Jan 30, 2012 at 6:24 AM, Malcolm Wallace malcolm.wall...@me.com wrote:
On 29 Jan 2012, at 22:25, Ertugrul Söylemez wrote:
A strict-by-default Haskell comes with the
implication that you can throw away most of the libraries, including the
base library. So yes, a strict-by-default
Alexander Bernauer alex-hask...@copton.net wrote:
On Sun, Jan 29, 2012 at 11:25:09PM +0100, Ertugrul Söylemez wrote:
First of all, /learning/ to optimize Haskell can be difficult. The
optimizing itself is actually fairly easy in my experience, once you
understand how the language works.
Replying to all replies at once:
Malcolm Wallace
At work, we have a strict version of Haskell
:-) which proves that it is worth thinking about.
Ertugrul
If you want to save the time it takes to learn how to write efficient Haskell
programs, you may want to have a look at the Disciple language.
On Mon, Jan 30, 2012 at 2:12 PM, Marc Weber marco-owe...@gmx.de wrote:
@ Felipe Almeida Lessa (suggesting conduits and attoparsec)
I mentioned that I already tried it. Counting lines only was a lot slower than
counting lines and parsing JSON using PHP.
Then please take a deeper look into my
Marc Weber wrote:
Replying to all replies at once:
Malcolm Wallace
At work, we have a strict version of Haskell
:-) which proves that it is worth thinking about.
But doesn't necessarily prove that it's a good idea.
Just (Item id ua t k v) ->
Hi,
On Monday, 30 January 2012, at 10:52 +0100, Alexander Bernauer wrote:
On Sun, Jan 29, 2012 at 11:25:09PM +0100, Ertugrul Söylemez wrote:
First of all, /learning/ to optimize Haskell can be difficult. The
optimizing itself is actually fairly easy in my experience, once you
understand how
Generally, strict Haskell means using strict data types (vectors, arrays,
bytestrings, intmaps) where required.
However, you usually don't want all code and data strict all the time,
since laziness/on-demand evaluation is critical for deferring non-essential work.
Summary: -fstrict wouldn't magically make your code good.
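As a small, hypothetical illustration of that style (strict, unpacked fields plus a strict container, with the surrounding code left lazy; no global flag involved):

import qualified Data.IntMap.Strict as IM
import Data.List (foldl')

-- strict, unpacked fields: no hidden thunks inside the record
data Hit = Hit { hitDomain :: {-# UNPACK #-} !Int
               , hitCount  :: {-# UNPACK #-} !Int
               }

-- strict left fold into a strict IntMap; the values never become thunk chains
countByDomain :: [Hit] -> IM.IntMap Int
countByDomain = foldl' step IM.empty
  where
    step m (Hit d c) = IM.insertWith (+) d c m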
On Mon, Jan 30, 2012 at 10:13 AM, Marc Weber marco-owe...@gmx.de wrote:
A lot of work has gone into GHC and its libraries.
However, for some use cases C is still preferred, for obvious speed
reasons, because optimizing a Haskell application can take a lot of time.
As much as any other
Marc Weber marco-owe...@gmx.de wrote:
A lot of work has gone into GHC and its libraries.
However, for some use cases C is still preferred, for obvious speed
reasons, because optimizing a Haskell application can take a lot of time.
Is there any document describing why there is no ghc -fstrict flag
The strict-ghc-plugin (under my maintenance) is just a continuation of
one of the original demos Max had for plugin support in the compiler.
The idea is fairly simple: 'let' and 'case' are the forms for creating
lazy/strict bindings in Core. It just systematically replaces all
occurrences of 'let' with 'case'.
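Roughly, at the source level the rewrite corresponds to something like the following (in Core a 'case' always evaluates its scrutinee; in source Haskell a bang pattern is needed to get the same effect):

{-# LANGUAGE BangPatterns #-}

expensive :: Int -> Int
expensive = (* 2)           -- hypothetical placeholder

-- lazy: the 'let' binds a thunk for x
lazyVersion :: Int -> Int
lazyVersion n = let x = expensive n in x + 1

-- strict: the Core-style 'case' evaluates the bound expression first
strictVersion :: Int -> Int
strictVersion n = case expensive n of !x -> x + 1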
Excerpts from Don Stewart's message of Sun Jan 29 22:55:08 +0100 2012:
Summary: -fstrict wouldn't magically make your code good.
No, you're right. I don't expect that to happen. I agree that it is
always the programmer's fault for using the wrong tools or not knowing the tools
well enough to get the job done.