On Sun, 2007-12-23 at 11:52 +, Ian Lynagh wrote:
On Thu, Dec 20, 2007 at 10:58:17AM +, Malcolm Wallace wrote:
Nobench does already collect code size, but does not yet display it in
the results table. I specifically want to collect compile time as well.
Not sure what the best way to measure allocation and peak memory use
are?
This:
Don, and others,
This thread triggered something I've had at the back of my mind for some time.
The traffic on Haskell Cafe suggests that there is a lot of interest in the
performance of Haskell programs. However, at the moment we don't have any good
*performance* regression tests for GHC. We
* Simon Peyton-Jones wrote:
Does anyone feel like doing this? It'd be a great service. No need to
know anything much about GHC.
I'd like to do that. For a lecture I've already generated performance tests
for various sorting algorithms.
It's designed around a function performance :: Size -> IO
Simon Peyton-Jones [EMAIL PROTECTED] wrote:
What would be v helpful would be a regression suite aimed at
performance, that benchmarked GHC (and perhaps other Haskell
compilers) against a set of programs, regularly, and published the
results on a web page, highlighting regressions.
Something
* Malcolm Wallace wrote:
Something along these lines already exists - the nobench suite.
Ok, your turn.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Malcolm.Wallace:
Simon Peyton-Jones [EMAIL PROTECTED] wrote:
What would be v helpful would be a regression suite aimed at
performance, that benchmarked GHC (and perhaps other Haskell
compilers) against a set of programs, regularly, and published the
results on a web page, highlighting
On Thu, 2007-11-08 at 22:54 -0800, Don Stewart wrote:
bulat.ziganshin:
definitely, it's a whole new era in low-level ghc programming
victory!
Now I want a way of getting (well-used) SIMD instructions and such, and
with some luck some high-level approach as well.
On Nov 8, 2007 10:28 PM, Bulat Ziganshin [EMAIL PROTECTED] wrote:
just for curiosity, can you try to manually unroll loop and see
results?
I don't get any noticeable performance change when I unroll the loop
by 2 by hand. I suspect that on a modern CPU loop unrolling probably
isn't that much
I see lots of shootout examples where Haskell programs seem to perform
comparably with C programs, but I find it hard to reproduce anything
like those figures when testing with my own code. So here's a simple
case:
I have this C program:
#include <stdio.h>
#define n 1
double a[n];
int
Hello Dan,
Thursday, November 8, 2007, 9:33:12 PM, you wrote:
main = do
a <- newArray (0,n-1) 1.0 :: IO (IOUArray Int Double)
forM_ [0..n-2] $ \i -> do { x <- readArray a i; y <- readArray a
(i+1); writeArray a (i+1) (x+y) }
x <- readArray a (n-1)
print x
1. ghc doesn't implement
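Dan's quoted benchmark lost its `<-` and `->` arrows in the archive; de-garbled into a self-contained sketch it might look like this (the array size here is a placeholder, since the constant in the original post was garbled):

```haskell
import Control.Monad (forM_)
import Data.Array.IO (IOUArray, newArray, readArray, writeArray)

-- placeholder size; the constant in the original post was lost
n :: Int
n = 1000000

main :: IO ()
main = do
  -- mutable unboxed array of Doubles, all initialised to 1.0
  a <- newArray (0, n - 1) 1.0 :: IO (IOUArray Int Double)
  -- running left-to-right sum: a[i+1] += a[i]
  forM_ [0 .. n - 2] $ \i -> do
    x <- readArray a i
    y <- readArray a (i + 1)
    writeArray a (i + 1) (x + y)
  x <- readArray a (n - 1)
  print x
```

Each element ends up holding a prefix sum of ones, so the final element equals `fromIntegral n`.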
Dan Piponi [EMAIL PROTECTED] writes:
Even though 'n' is 10 times bigger in the C program it runs much
faster than the Haskell program on my MacBook Pro with Haskell 6.6.1.
I've tried lots of different combinations of flags that I've found in
various postings to haskell-cafe but to no avail.
On Fri, 2007-11-09 at 00:51 +0600, Mikhail Gusarov wrote:
Dan Piponi [EMAIL PROTECTED] writes:
Even though 'n' is 10 times bigger in the C program it runs much
faster than the Haskell program on my MacBook Pro with Haskell 6.6.1.
I've tried lots of different combinations of flags that
On Nov 8, 2007 2:48 PM, Duncan Coutts [EMAIL PROTECTED] wrote:
You really do not need happy to build ghc. Just ignore the extralibs
tarball.
Well that was the crucial fact I needed. 6.8.1 is now built. ghci
doesn't work, it complains about an unknown symbol '_environ' in
HSbase-3.0.0.0.o but I
On Fri, Nov 09, 2007 at 01:39:55AM +0100, Thomas Schilling wrote:
On Thu, 2007-11-08 at 16:24 -0800, Stefan O'Rear wrote:
On Thu, Nov 08, 2007 at 07:57:23PM +0100, Thomas Schilling wrote:
$ ghc --make -O2 ghc-bench.hs
Even for GCC (/not/ G_H_C)?
No, GCC implements -Ox properly.
I
On Thu, Nov 08, 2007 at 05:03:54PM -0800, Stefan O'Rear wrote:
On Fri, Nov 09, 2007 at 01:39:55AM +0100, Thomas Schilling wrote:
On Thu, 2007-11-08 at 16:24 -0800, Stefan O'Rear wrote:
On Thu, Nov 08, 2007 at 07:57:23PM +0100, Thomas Schilling wrote:
$ ghc --make -O2 ghc-bench.hs
On Thu, 2007-11-08 at 13:00 -0800, Dan Piponi wrote:
It looks like my whole question might become moot with ghc 6.8.1, but
so far I've been unable to build it due to the cyclic happy
dependency.
You really do not need happy to build ghc. Just ignore the extralibs
tarball. You can install any
Hello Don,
Thursday, November 8, 2007, 10:53:28 PM, you wrote:
a <- newArray (0,n-1) 1.0 :: IO (IOUArray Int Double)
forM_ [0..n-2] $ \i -> do { x <- readArray a i; y <- readArray a
(i+1); writeArray a (i+1) (x+y) }
oh, i was stupid. obviously, first thing you need to do is to use
nominolo:
On Thu, 2007-11-08 at 10:33 -0800, Dan Piponi wrote:
I see lots of shootout examples where Haskell programs seem to perform
comparably with C programs, but I find it hard to reproduce anything
like those figures when testing with my own code. So here's a simple
case:
I have
On Nov 8, 2007 11:34 AM, Jason Dusek [EMAIL PROTECTED] wrote:
Can you show us your compilation options and timings?
I was simply using -O3. I tried a bunch of other flags (copied from
the shootout examples) but they made no appreciable difference.
I was getting about 1.5s for the Haskell
Bulat Ziganshin [EMAIL PROTECTED] writes:
Hello Dan,
Thursday, November 8, 2007, 9:33:12 PM, you wrote:
main = do
a <- newArray (0,n-1) 1.0 :: IO (IOUArray Int Double)
forM_ [0..n-2] $ \i -> do { x <- readArray a i; y <- readArray a
(i+1); writeArray a (i+1) (x+y) }
x <- readArray a
On Thu, Nov 08, 2007 at 07:57:23PM +0100, Thomas Schilling wrote:
$ ghc --make -O2 ghc-bench.hs
and got:
$ time ./ghc-bench
2.0e7
real    0m0.714s
user    0m0.576s
sys     0m0.132s
$ time ./ghcbC
2000.00
real    0m0.305s
user    0m0.164s
sys     0m0.132s
This
Don Stewart [EMAIL PROTECTED] writes:
xj2106:
I used `unsafePerformIO' with `INLINE', because I don't know
where `inlinePerformIO' is now. And also the `-optc-march'
is changed to `nocona'.
Using unsafePerformIO here would break some crucial inlining.
(the same trick is used in
dpiponi:
On Nov 8, 2007 11:34 AM, Jason Dusek [EMAIL PROTECTED] wrote:
Can you show us your compilation options and timings?
I was simply using -O3. I tried a bunch of other flags (copied from
the shootout examples) but they made no appreciable difference.
Argh, -O2 please. -O3 does
Dan Piponi [EMAIL PROTECTED] writes:
My example wasn't intended to represent the problem that I'm trying to solve,
but the approach I want to take. The problems that I do want to solve
don't lend themselves to this kind of approach.
My real situation is that I want to write code that has both a
On Thu, 2007-11-08 at 10:33 -0800, Dan Piponi wrote:
I see lots of shootout examples where Haskell programs seem to perform
comparably with C programs, but I find it hard to reproduce anything
like those figures when testing with my own code. So here's a simple
case:
I have this C program:
On Thu, 2007-11-08 at 10:33 -0800, Dan Piponi wrote:
I see lots of shootout examples where Haskell programs seem to perform
comparably with C programs, but I find it hard to reproduce anything
like those figures when testing with my own code. So here's a simple
case:
I have this C program:
Bulat,
The strictness gave me something like a 10% performance increase
making the Haskell code more than 10 times slower than the C. Is this
the right type of array to use for performance?
--
Dan
On Nov 8, 2007 10:36 AM, Bulat Ziganshin [EMAIL PROTECTED] wrote:
Hello Dan,
Thursday, November
Mikhail,
main = do
print $ foldl' (+) 0 $ take 1 [1.0,1.0..]
works 10 times faster than your C version. You just need to adapt to the
radically different style of programming.
My example wasn't intended to represent the problem that I'm trying to solve,
but the approach I want to take.
On Nov 8, 2007 11:24 AM, Thomas Schilling [EMAIL PROTECTED] wrote:
Wow. You should *really* try using GHC 6.8.1:
I was hoping you weren't going to say that :-) As soon as I find a
suitable 64-bit Intel binary for MacOSX, or can bootstrap my way out
of happy needing happy in my attempted source
Can you show us your compilation options and timings?
--
_jsn
dpiponi:
I see lots of shootout examples where Haskell programs seem to perform
comparably with C programs, but I find it hard to reproduce anything
like those figures when testing with my own code. So here's a simple
case:
I have this C program:
#include <stdio.h>
#define n 1
On Nov 8, 2007 11:36 AM, Paul Brown [EMAIL PROTECTED] wrote:
All that said, I'm not sure where I got the GHC that I used to build
the 6.6.1 via MacPorts; I think it shipped with MacOS once upon a time.
sudo port install ghc
. . .
configure: error: GHC is required unless bootstrapping from .hc
On Nov 8, 2007 12:16 PM, Don Stewart [EMAIL PROTECTED] wrote:
If you can post the code somewhere, that would be great, with examples
of how to reproduce your timings.
The code is exactly what I posted originally (but note that n is 10
times larger in the C code). I compiled using ghc -O3 -o
dpiponi:
On Nov 8, 2007 12:16 PM, Don Stewart [EMAIL PROTECTED] wrote:
If you can post the code somewhere, that would be great, with examples
of how to reproduce your timings.
The code is exactly what I posted originally (but note that n is 10
times larger in the C code). I compiled
On Nov 8, 2007 12:36 PM, Don Stewart [EMAIL PROTECTED] wrote:
dpiponi:
Can you start by retrying with flags from the spectral-norm benchmark:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=spectralnorm&lang=ghc&id=0
Actually, that was my starting point for investigating how to
Hello Dan,
Thursday, November 8, 2007, 10:12:04 PM, you wrote:
The strictness gave me something like a 10% performance increase
making the Haskell code more than 10 times slower than the C. Is this
the right type of array to use for performance?
yes. but ghc is especially slow doing FP
Hello Xiao-Yong,
Thursday, November 8, 2007, 10:41:11 PM, you wrote:
forM_ [0..n-2] $ \i -> do { return $! i;
x <- readArray a i; return $! x;
y <- readArray a (i+1); return $! y;
writeArray a (i+1) (x+y) }
Don Stewart [EMAIL PROTECTED] writes:
Can you start by retrying with flags from the spectral-norm benchmark:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=spectralnorm&lang=ghc&id=0
The interaction with gcc here is quite important, so forcing -fvia-C
will matter.
Clearly
bulat.ziganshin:
Hello Don,
Thursday, November 8, 2007, 10:53:28 PM, you wrote:
a <- newArray (0,n-1) 1.0 :: IO (IOUArray Int Double)
forM_ [0..n-2] $ \i -> do { x <- readArray a i; y <- readArray a
(i+1); writeArray a (i+1) (x+y) }
oh, i was stupid. obviously, first thing you
On Thu, 2007-11-08 at 16:24 -0800, Stefan O'Rear wrote:
On Thu, Nov 08, 2007 at 07:57:23PM +0100, Thomas Schilling wrote:
$ ghc --make -O2 ghc-bench.hs
and got:
$ time ./ghc-bench
2.0e7
real    0m0.714s
user    0m0.576s
sys     0m0.132s
$ time ./ghcbC
xj2106:
Don Stewart [EMAIL PROTECTED] writes:
Can you start by retrying with flags from the spectral-norm benchmark:
http://shootout.alioth.debian.org/gp4/benchmark.php?test=spectralnorm&lang=ghc&id=0
The interaction with gcc here is quite important, so forcing -fvia-C
will
Hello Dan,
Friday, November 9, 2007, 3:58:42 AM, you wrote:
HSbase-3.0.0.0.o but I was able to rerun the timings for my code. With
-O2 the run time went from about 1.5s to 0.2s. With unsafeRead and
unsafeWrite that becomes 0.16s.
cool! seems that small loops now run in registers; in 6.6
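The unsafeRead/unsafeWrite variant that produced the 0.16s figure isn't shown in the thread; a sketch of what it presumably looked like follows. Data.Array.Base exports the bounds-check-free operations; the array size is again a placeholder for the lost constant.

```haskell
import Control.Monad (forM_)
import Data.Array.Base (unsafeRead, unsafeWrite)
import Data.Array.IO (IOUArray, newArray)

-- placeholder size; the original constant was lost
n :: Int
n = 1000000

main :: IO ()
main = do
  a <- newArray (0, n - 1) 1.0 :: IO (IOUArray Int Double)
  forM_ [0 .. n - 2] $ \i -> do
    -- unsafeRead/unsafeWrite skip bounds checks, so the indices
    -- must be known in-range (they are, by the loop bounds)
    x <- unsafeRead a i
    y <- unsafeRead a (i + 1)
    unsafeWrite a (i + 1) (x + y)
  x <- unsafeRead a (n - 1)
  print x
```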
bulat.ziganshin:
definitely, it's a whole new era in low-level ghc programming
victory!
-- Don :D
On 10/8/06, Udo Stenzel u.stenzel-at-web.de wrote:
Yang wrote:
type Poly = [(Int,Int)]
addPoly1 :: Poly -> Poly -> Poly
addPoly1 p1@(p1h@(p1c,p1d):p1t) p2@(p2h@(p2c,p2d):p2t)
| p1d == p2d = (p1c + p2c, p1d) : addPoly1 p1t p2t
| p1d < p2d = p1h : addPoly1 p1t p2
|
I think your first try looks good. The only thing to worry about
would be the + being too lazy. But that's easy to fix at the same
time as improving your code in another respect.
It's usually good to use real types instead of synonyms, so let's do
that.
data Nom = Nom Int Int
type
Lennart Augustsson wrote:
I think your first try looks good.
[snip]
...
addPoly1 p1@(p1h@(Nom p1c p1d):p1t) p2@(p2h@(Nom p2c p2d):p2t)
| p1d == p2d = Nom (p1c + p2c) p1d : addPoly1 p1t p2t
| p1d < p2d = p1h : addPoly1 p1t p2
| p1d > p2d = p2h : addPoly1 p1 p2t
...
The last comparison
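Lennart's rewrite is truncated above; a completed, self-contained sketch follows. The strict fields, the final guard, and the base cases are assumptions filled in from the surrounding discussion.

```haskell
-- strict coefficient and degree, so (+) can't build up thunks
data Nom = Nom !Int !Int

-- polynomials as degree-ascending lists of terms
addPoly1 :: [Nom] -> [Nom] -> [Nom]
addPoly1 p1@(p1h@(Nom p1c p1d) : p1t) p2@(p2h@(Nom p2c p2d) : p2t)
  | p1d == p2d = Nom (p1c + p2c) p1d : addPoly1 p1t p2t
  | p1d <  p2d = p1h : addPoly1 p1t p2
  | otherwise  = p2h : addPoly1 p1 p2t
addPoly1 p1 [] = p1
addPoly1 [] p2 = p2

-- tiny smoke test: (1 + 2x^2) + (3x + 4x^2)
main :: IO ()
main = print [ (c, d) | Nom c d <- addPoly1 [Nom 1 0, Nom 2 2]
                                            [Nom 3 1, Nom 4 2] ]
```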
This email actually turned out much longer than I expected, but I hope
it sparks interesting (and hopefully, thorough!) discussion on points
that weren't touched on by previous threads on this topic. What
follows describes my journey thus far exploring what I see (as a
newcomer to Haskell) as a
On 10/8/06, Yang [EMAIL PROTECTED] wrote:
And do most (experienced) Haskell
users sacrifice cleanliness for speed, or speed for cleanliness?
Keep the internals of your code--that which will be looked at a
lot--fast and ugly, while the rest can be clean. If you have a
function that does
On 10/8/06, ihope [EMAIL PROTECTED] wrote:
Keep the internals of your code--that which will be looked at a
lot--fast and ugly, while the rest can be clean.
Sorry. Meant that which will be used a lot.
On 10/8/06, ihope [EMAIL PROTECTED] wrote:
On 10/8/06, Yang [EMAIL PROTECTED] wrote:
And do most (experienced) Haskell
users sacrifice cleanliness for speed, or speed for cleanliness?
Keep the internals of your code--that which will be looked at a
lot--fast and ugly, while the rest can be
On Sun, 2006-10-08 at 15:25 -0700, Jason Dagit wrote:
Another good idea when you have a pretty version which is easy to
verify for correctness and an ugly version that is harder to verify is
to use QuickCheck or SmallCheck and define a property that says both
versions are equal for all
duncan.coutts:
On Sun, 2006-10-08 at 15:25 -0700, Jason Dagit wrote:
Another good idea when you have a pretty version which is easy to
verify for correctness and an ugly version that is harder to verify is
to use QuickCheck or SmallCheck and define a property that says both
versions are
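Jason's suggestion can be sketched concretely: write the obviously-correct version and the optimized version, then let QuickCheck search for inputs on which they disagree. The two sum functions below are hypothetical stand-ins for the "pretty" and "ugly" code.

```haskell
import Test.QuickCheck (quickCheck)

-- clear, obviously-correct reference version
sumPretty :: [Int] -> Int
sumPretty = foldr (+) 0

-- hand-optimized accumulator version (stand-in for the "ugly" code)
sumFast :: [Int] -> Int
sumFast = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs

-- the property: both versions agree on every input
prop_equiv :: [Int] -> Bool
prop_equiv xs = sumPretty xs == sumFast xs

main :: IO ()
main = quickCheck prop_equiv  -- tries 100 random lists by default
```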
Yang wrote:
type Poly = [(Int,Int)]
addPoly1 :: Poly -> Poly -> Poly
addPoly1 p1@(p1h@(p1c,p1d):p1t) p2@(p2h@(p2c,p2d):p2t)
| p1d == p2d = (p1c + p2c, p1d) : addPoly1 p1t p2t
| p1d < p2d = p1h : addPoly1 p1t p2
| p1d > p2d = p2h : addPoly1 p1 p2t
addPoly1 p1 [] = p1
addPoly1 [] p2 = p2
Alberto Ruiz has developed a linear algebra library which could be
seen as an alternative to Matlab/Octave, using the GSL, ATLAS, LAPACK,
etc. IIRC.
http://dis.um.es/~alberto/GSLHaskell/
I've optimized it in some places, and added an interface which
guarantees operand conformability through the
To be honest, the documentation for Arrows blows my
mind. I think a few
examples would go a long way.
John Hughes' original paper on arrows is full of examples.
Additionally, Hughes wrote a tutorial on programming with arrows, for the
2004 AFP summer school in Tartu, which is very accessible. Both
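For a taste of the style those references teach, here is the classic combinator from Hughes' paper, sketched for the plain function arrow:

```haskell
import Control.Arrow (Arrow, arr, (&&&), (>>>))

-- run f and g on the same input, then add their results;
-- works for any Arrow, including ordinary functions
addA :: Arrow a => a b Int -> a b Int -> a b Int
addA f g = f &&& g >>> arr (uncurry (+))

main :: IO ()
main = print (addA (+ 1) (* 2) (10 :: Int))  -- (10+1) + (10*2) = 31
```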
Joel Reymont wrote:
Is anyone using Haskell for heavy numerical computations? Could you
share your experience?
My app will be mostly about running computations over huge amounts of
stock data (time series) so I'm thinking I might be better of with OCaml.
If you are really serious about
Hello Joel,
Friday, July 7, 2006, 2:03:11 AM, you wrote:
Is anyone using Haskell for heavy numerical computations? Could you
share your experience?
My app will be mostly about running computations over huge amounts of
stock data (time series) so I'm thinking I might be better of with
Is anyone using Haskell for heavy numerical computations? Could you
share your experience?
My app will be mostly about running computations over huge amounts of
stock data (time series) so I'm thinking I might be better of with
OCaml.
Thanks, Joel
--
http://wagerlabs.com/
I have tried. Using just normal lists, I found it difficult to avoid huge space leaks, but once things are working, the end result is easy to reason about. I'm considering giving it another try using Stream Processors or some form of Arrows instead of lists. I think this strategy might allow me to