Greg Buchholz [EMAIL PROTECTED] writes:
I've been looking at the other shootout results (with the hope of
learning something about making haskell programs faster/less memory
hungry) and I couldn't quite figure out why the Hashes, part II test
consumes so much memory (
Ketil Malde [EMAIL PROTECTED] writes:
To get memory consumption down, I tried a strict update function:
update k fm = let x = (get hash1 k + get fm k)
in x `seq` addToFM fm k x
which slowed the program down(!),
I wonder if this isn't due to never evaluating the values
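For comparison, the same strict-update idea written against today's Data.Map API might look like this (a sketch; `hash1` and a findWithDefault-style `get` are assumptions standing in for the shootout code):

```haskell
import qualified Data.Map as M

-- Assumed stand-in for the shootout's `get`: lookup with a default of 0.
get :: M.Map String Int -> String -> Int
get m k = M.findWithDefault 0 k m

-- Force the sum before inserting, so no chain of thunks builds up
-- behind the keys of the map.
update :: M.Map String Int -> String -> M.Map String Int -> M.Map String Int
update hash1 k fm = let x = get hash1 k + get fm k
                    in x `seq` M.insert k x fm
```

The `seq` only helps if the inserted value is what was being left unevaluated; if the values are never demanded at all, as suggested above, forcing them eagerly is pure extra work, which would explain the slowdown.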
Keith Wansbrough wrote:
I just saw this on the OCaml list (in a posting by Rafael 'Dido'
Sevilla [EMAIL PROTECTED] in the Observations on OCaml vs. Haskell
thread). I can't believe that a simple wc implementation should be
570 times slower in Haskell than OCaml - could someone investigate
Just out of interest, I ran all of these suggested variations of
the word count solution in Haskell head-to-head against each other.
Here are the results, in seconds, on my machine (2.4GHz x86/Linux)
for the suggested input (N=500) from the shootout site. All Haskell
versions were compiled with
On Thu, Sep 30, 2004 at 11:26:15AM +0100, Malcolm Wallace wrote:
Those marked with a * gave the wrong number of words. The really
interesting thing is that Tomasz's solution is twice as fast as the
standard GNU implementation!
That's probably because GNU wc is locale-aware.
Best regards,
Tom
On Thu, Sep 30, 2004 at 11:26:15AM +0100, Malcolm Wallace wrote:
Just out of interest, I ran all of these suggested variations of
the word count solution in Haskell head-to-head against each other.
Here are the results, in seconds, on my machine (2.4GHz x86/Linux)
for the suggested input
On Thu, Sep 30, 2004 at 09:49:46AM -0400, Kevin Everets wrote:
I took Georg's, fixed the word count logic and added prettier
printing, and then combined it with Sam's main (which I find more
elegant, but others may find less straightforward). I think it
strikes a good balance between
Sam Mason wrote:
You probably want some strictness annotations in there...
Now we're getting somewhere. When I replace the tuples with my own
(strict) data structure, it gets about 7.5 times faster than the original
shootout example (or about 24 times faster than the version with
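A strict counting structure of the kind described here typically looks like the following (a sketch under the obvious assumptions, not Greg's actual code):

```haskell
import Data.Char (isSpace)
import Data.List (foldl')

-- Strict fields (together with -funbox-strict-fields) let GHC keep the
-- counters unboxed instead of allocating a thunk per character.
data Counts = Counts !Int !Int !Int !Bool   -- lines, words, chars, inside-word flag

step :: Counts -> Char -> Counts
step (Counts l w c inW) ch
  | ch == '\n' = Counts (l + 1) w (c + 1) False
  | isSpace ch = Counts l w (c + 1) False
  | inW        = Counts l w (c + 1) True
  | otherwise  = Counts l (w + 1) (c + 1) True

wc :: String -> (Int, Int, Int)
wc s = case foldl' step (Counts 0 0 0 False) s of
         Counts l w c _ -> (l, w, c)
```

For example, `wc "hello world\n"` gives `(1, 2, 12)`. The lazy tuple version accumulates one thunk per character in each component; the strict fields force the counters at every step instead.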
At 16:56 30/09/04 +0200, Tomasz Zielonka wrote:
Then how about a solution like this: I took your program but used
my fast fileIterate instead of "foldl over getContents".
I also added {-# OPTIONS -funbox-strict-fields #-}, and played a bit
to get the best optimisations from GHC.
It's about 7
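fileIterate is Tomasz's own combinator, not a library function; a minimal version of the assumed shape (strictly folding a function over the contents of a handle) could be:

```haskell
import System.IO

-- Strictly fold `f` over a handle's characters without building a list.
-- (The real version presumably reads blocks; this character-at-a-time
-- loop just shows the shape of the interface.)
fileIterate :: Handle -> (a -> Char -> a) -> a -> IO a
fileIterate h f = go
  where
    go acc = acc `seq` do
      eof <- hIsEOF h
      if eof
        then return acc
        else do c <- hGetChar h
                go (f acc c)
```

Forcing the accumulator before each read is what keeps the loop in constant space regardless of what `f` does.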
On Thu, Sep 30, 2004 at 05:40:58PM +0100, Graham Klyne wrote:
2. Your fileIterator certainly looks nicer (to me) than your other
solution, but...
It looks nicer to me too.
Tagging along with this debate, I found myself wondering if, in order to
get performance comparable to other
At 19:39 30/09/04 +0200, Tomasz Zielonka wrote:
What I like about GHC is that I can start from simple, high-level,
sometimes slow solutions, but if there are efficiency problems, there is
a big chance that I can solve them without switching the language.
That's a very good point, I think. One to
Hi folks,
On Thu, 30 Sep 2004 01:02:54 +0100, Sam Mason [EMAIL PROTECTED] wrote:
Greg Buchholz wrote:
The algorithm isn't correct (it counts spaces instead of words), but
anyone have advice for improving its performance?
You probably want some strictness annotations in there...
snip
Last night
Malcolm Wallace wrote:
Here are the results, in seconds, on my machine (2.4GHz x86/Linux)
for the suggested input (N=500) from the shootout site. All Haskell
versions were compiled with ghc-5.04.2 -O2.
I thought I'd take a stab at timing a few of the examples with
different compiler
Hi Greg
Anyone have an explanation for the 2x speed
increase for running Kevin's version with '+RTS -G1'?
+RTS -Sstderr -RTS and +RTS -sstderr -RTS will probably indicate why.
I'd be surprised if the amount of data copied for the semi-space
collector isn't much less than for the generational.
Andrew Cheadle wrote:
+RTS -Sstderr -RTS and +RTS -sstderr -RTS will probably indicate why.
I'd be surprised if the amount of data copied for the semi-space
collector isn't much less than for the generational.
Ahh. Data copied with '-G1' = 58MB vs. 203MB without. For posterity's
sake,
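To reproduce that comparison, the usual incantation is something like the following (program name illustrative; newer GHCs also require -rtsopts at compile time):

```shell
ghc -O2 -o wc wc.hs
./wc +RTS -sstderr -RTS < input.txt      # GC summary, default generational collector
./wc +RTS -G1 -sstderr -RTS < input.txt  # same, forcing the single-generation collector
```

`-sstderr` prints a one-page allocation and copying summary; `-Sstderr` prints a line per collection, which is what shows where the copied data goes.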
Georg Martius wrote:
Some more general comment: The code for the shootout doesn't need to be
extremely fast in my eyes, it needs to be elegant and reasonable in
performance and memory consumption (in this order). I don't want to say
that Tomasz's solution is bad, but it is not a typical
At 10:55 28/09/04 +0100, Malcolm Wallace wrote:
Keith Wansbrough [EMAIL PROTECTED] writes:
I can't believe that a simple wc implementation should be
570 times slower in Haskell than OCaml - could someone investigate and
fix the test?
With code like this, I'm not surprised!
main =
On Wed, Sep 29, 2004 at 01:41:03PM +0100, Graham Klyne wrote:
With code like this, I'm not surprised!
main = do file - getContents
putStrLn $ show (length $ lines file) ++ ++
show (length $ words file) ++ ++
show
Graham Klyne [EMAIL PROTECTED] writes:
main = do file <- getContents
putStrLn $ show (length $ lines file) ++ " " ++
show (length $ words file) ++ " " ++
show (length file)
Space-leak or what?
I can see that this
Greg Buchholz wrote:
The algorithm isn't correct (it counts spaces instead of words), but
anyone have advice for improving its performance?
You probably want some strictness annotations in there... When I
tried the same thing, I came up with something like:
import Char;
cclass c | isSpace
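Sam's snippet is cut off above; the character-classification idea it starts presumably continues along these lines (an assumed reconstruction, not his actual code):

```haskell
import Data.Char (isSpace)

-- Classify each character so word boundaries fall out of a
-- transition from White to Word.
data CClass = White | Word deriving Eq

cclass :: Char -> CClass
cclass c | isSpace c = White
         | otherwise = Word
```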
I just saw this on the OCaml list (in a posting by Rafael 'Dido'
Sevilla [EMAIL PROTECTED] in the Observations on OCaml vs. Haskell
thread). I can't believe that a simple wc implementation should be
570 times slower in Haskell than OCaml - could someone investigate and
fix the test?
--KW 8-)
Keith Wansbrough [EMAIL PROTECTED] writes:
I can't believe that a simple wc implementation should be
570 times slower in Haskell than OCaml - could someone investigate and
fix the test?
With code like this, I'm not surprised!
main = do file <- getContents
On Tue, Sep 28, 2004 at 10:46:14AM +0100, Keith Wansbrough wrote:
I just saw this on the OCaml list (in a posting by Rafael 'Dido'
Sevilla [EMAIL PROTECTED] in the Observations on OCaml vs. Haskell
thread). I can't believe that a simple wc implementation should be
570 times slower in
On Tue, Sep 28, 2004 at 12:01:11PM +0200, Tomasz Zielonka wrote:
On Tue, Sep 28, 2004 at 10:46:14AM +0100, Keith Wansbrough wrote:
I just saw this on the OCaml list (in a posting by Rafael 'Dido'
Sevilla [EMAIL PROTECTED] in the Observations on OCaml vs. Haskell
thread). I can't believe
On Tue, Sep 28, 2004 at 12:49:52PM +0200, Tomasz Zielonka wrote:
On Tue, Sep 28, 2004 at 12:01:11PM +0200, Tomasz Zielonka wrote:
On Tue, Sep 28, 2004 at 10:46:14AM +0100, Keith Wansbrough wrote:
I just saw this on the OCaml list (in a posting by Rafael 'Dido'
Sevilla [EMAIL PROTECTED]
On Tue, 28 Sep 2004, Tomasz Zielonka wrote:
Changed readArray to unsafeRead, and it is 47 times faster now.
I must say I am pleasantly surprised that GHC managed to unbox
everything there was to unbox without much annotations. For 5MB file the
program allocated only 192KB in the heap.
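For reference, the readArray-to-unsafeRead change is mechanical; against an unboxed mutable array it looks like this (a sketch; the shootout's actual array shape is assumed):

```haskell
import Data.Array.Base (unsafeRead)
import Data.Array.IO (IOUArray)

-- Sum the first n elements of an unboxed Int array.
-- unsafeRead skips the range check that readArray performs on every access.
sumArr :: IOUArray Int Int -> Int -> IO Int
sumArr arr n = go 0 0
  where
    go i acc
      | i >= n    = return acc
      | otherwise = do x <- unsafeRead arr i
                       go (i + 1) $! acc + x
```

The tight loop over unboxed Ints is what lets GHC avoid heap allocation almost entirely, consistent with the 192KB figure quoted above.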