Jeff Polakow [EMAIL PROTECTED] writes:
Besides anything else, sequence will diverge on an infinite list.
Argh, of course. Thanks!
sequence must run every computation in the list before it can
return any element of the resulting pure list.
Replacing sequence with sequence', given as:
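The definition of sequence' did not survive the digest; as a sketch only, here is one plausible lazy variant, specialised to the Identity monad (the name sequence' follows the message, everything else is my assumption):

```haskell
import Data.Functor.Identity (Identity (..))

-- A lazy variant of sequence, sketched for Identity only: it emits each
-- element before forcing the rest of the list, so it can stream an
-- infinite input where a strict sequence would diverge.
sequence' :: [Identity a] -> Identity [a]
sequence' []       = Identity []
sequence' (m : ms) =
  Identity (runIdentity m : runIdentity (sequence' ms))

main :: IO ()
main = print (take 3 (runIdentity (sequence' (map Identity [1 ..]))))
```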
Has anyone tried to embed GHC as a library recently?
What is the size of the resulting binary?
I'm assuming a bare minimum of needed libraries.
Thanks, Joel
--
http://wagerlabs.com
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Austin Seipp has written about this in his blog:
http://austin.youareinferior.net/?q=node/29
I will take this time to point out that using the GHC API in your
applications results in *large* executables. The Interact example
above when compiled with vanilla --make options resulted in a
Hello all,
I, along with some friends, have been looking to Haskell lately. I'm
very happy with Haskell as a language, however, a friend sent me the
link:
http://shootout.alioth.debian.org/gp4/
which enables you to compare several language implementations. Haskell
seems to lag behind Clean.
Hello Bjorn,
Wednesday, October 31, 2007, 11:54:33 AM, you wrote:
data fields. Data Field Haskell had a function to force the evaluation of
all elements in a data field at once.
this looks like parallel arrays?
--
Best regards,
Bulat  mailto:[EMAIL PROTECTED]
From what I've seen of Clean it seems almost like Haskell. It even
distributes a Haskell-to-Clean translator, so the obvious question is,
why is Haskell slower?
It's also something I've wondered about, and I'm curious about the
answer...
One of the differences between Haskell and Clean is
Paulo J. Matos wrote:
Hello all,
I, along with some friends, have been looking to Haskell lately. I'm
very happy with Haskell as a language, however, a friend sent me the
link:
http://shootout.alioth.debian.org/gp4/
which enables you to compare several language implementations. Haskell
seems to
I'm curious what experts think too.
So far I just guess it is because Clean's type system gives the
compiler better hints for optimization:
* it is easy to mark stuff strict (even in function signatures
etc.), so it is possible to avoid unnecessary CAF creation
* uniqueness types allow one to do
Add to that better unbox/box annotations; this may make an even
bigger difference than the strictness stuff, because it allows
you to avoid a lot of indirect references to data.
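For comparison, GHC's closest analogues to these Clean annotations (a sketch of my own, not from the thread) are strict constructor fields, which -funbox-strict-fields can unbox, and bang patterns on function arguments:

```haskell
{-# LANGUAGE BangPatterns #-}

-- strict fields: both Doubles are forced when a Point is built,
-- and -funbox-strict-fields can store them unboxed
data Point = Point !Double !Double

-- the bang forces the accumulator at every step, so no thunk chain builds up
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 100)
```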
Anyway, if Haskell did some kind of whole-program analysis
and transformation, it could probably mitigate all the
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote:
Anyway, if Haskell did some kind of whole-program analysis
and transformation, it could probably mitigate all the problems
to a certain degree.
I think JHC is supposed to do whole-program optimisations. Rumour has
it that its Hello
Hi Brad,
On Tue, Oct 30, 2007 at 10:10:17PM -0700, brad clawsie wrote:
i have decided to take on the task of packaging-up (for hackage) and
documenting the curl bindings as available here:
http://code.haskell.org/curl/
if the originators of this code are reading this and do not wish me
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote:
Add to that better unbox/box annotations; this may make an even
bigger difference than the strictness stuff, because it allows
you to avoid a lot of indirect references to data.
Anyway, if Haskell did some kind of whole-program
Paulo J. Matos wrote:
So the slowness of Haskell (compared to Clean) is a consequence of
its type system. OK, I'll stop; I did not write the Clean or Haskell
optimizers or anything like that :-D
The type system? Why is that? Shouldn't the type system in fact speed
up the generated code, since it will
Paulo J. Matos wrote:
The type system? Why is that? Shouldn't the type system in fact speed
up the generated code, since it will know all types at compile time?
The *existence* of a type system is helpful to the compiler.
Peter was referring to the differences between Haskell and Clean.
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean [EMAIL PROTECTED] wrote:
Specifically, Clean's uniqueness types allow for a certain kind of
zero-copy mutation optimisation which is much harder for a Haskell
compiler to automatically infer. It's not clear to me that it's
actually worth it, but
Robin Green wrote:
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean [EMAIL PROTECTED] wrote:
Specifically, Clean's uniqueness types allow for a certain kind of
zero-copy mutation optimisation which is much harder for a Haskell
compiler to automatically infer. It's not clear to me that it's
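The zero-copy mutation being discussed can be spelled out explicitly in Haskell with the ST monad, which plays a role similar to uniqueness types by guaranteeing the mutable structure is single-threaded; a minimal sketch (function name hypothetical):

```haskell
import Data.Array (Array, elems, listArray)
import Data.Array.ST (readArray, runSTArray, thaw, writeArray)

-- Inside runSTArray the mutable array cannot escape or be shared,
-- so the writes below update it in place, with one copy made by thaw
-- rather than one copy per update.
bumpFirst :: Array Int Int -> Array Int Int
bumpFirst a = runSTArray $ do
  m <- thaw a
  x <- readArray m 0
  writeArray m 0 (x + 1)
  return m

main :: IO ()
main = print (elems (bumpFirst (listArray (0, 2) [10, 20, 30])))
```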
Peter Hercek wrote:
* it is easy to mark stuff strict (even in function signatures
etc.), so it is possible to avoid unnecessary CAF creation
Also, the Clean compiler has a strictness analyzer. The compiler will
analyze code and find many (but not all) cases where a function argument
can
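GHC ships a strictness analyser as well; where any analyser falls short, the programmer can force evaluation by hand. A small illustration (my example, not from the thread):

```haskell
import Data.List (foldl')

-- foldl with a lazy accumulator builds a chain of thunks unless the
-- strictness analyser (enabled with -O) catches it; foldl' forces the
-- accumulator at every step regardless of optimisation level.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumLazy [1 .. 100], sumStrict [1 .. 100])
```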
On Wed, Oct 31, 2007 at 01:36:40PM +, Ian Lynagh wrote:
otherwise i was wondering if people had good examples to point me to
for providing the cross-platform support needed for a FFI-based module
such as this. i have made the necessary changes to compile the code on
freebsd, but for
On 10/29/07, Tim Newsham [EMAIL PROTECTED] wrote:
I would love to have the ability to define binary operator modifiers.
For example:
f \overline{op} g = liftM2 op f g
f \overleftarrow{op} n = liftM2 op f (return n)
n \overrightarrow{op} g = liftM2 op (return n) g
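Decorated operators like \overline{op} aren't legal Haskell, but a named lifted operator gets close; a sketch over Maybe (the name .+. is made up):

```haskell
import Control.Monad (liftM2)

-- a named stand-in for the proposed \overline{+}: lift (+) into any monad
(.+.) :: (Monad m, Num a) => m a -> m a -> m a
(.+.) = liftM2 (+)

main :: IO ()
main = print (Just 1 .+. Just 2)   -- demonstrated with Maybe
```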
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I
ndmitchell:
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get
On 10/30/07, Felipe Lessa [EMAIL PROTECTED] wrote:
On 10/30/07, Tim Chevalier [EMAIL PROTECTED] wrote:
ppos = pi/len2: pi and len2 are both Ints, and (/) isn't defined for
Int. To get a Double, convert each operand: ppos = fromIntegral pi /
fromIntegral len2.
(Type :t fromIntegral in ghci to see what else
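A minimal sketch of the conversion (both operands converted, so the fractional part survives; ratio is a hypothetical name):

```haskell
-- Converting each Int operand before dividing keeps the fractional part;
-- converting the result of an integer `div` would have already lost it.
ratio :: Int -> Int -> Double
ratio a b = fromIntegral a / fromIntegral b

main :: IO ()
main = print (ratio 1 4)
```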
I often use this in my cabal ghc-options:
ghc-options: -funbox-strict-fields -O2 -fasm -Wall -optl-Wl,-s
the last passes -s to ld, which strips the binary automatically.
mnislaih:
Austin Seipp has written about this in his blog:
http://austin.youareinferior.net/?q=node/29
I will take this time to point out
So in a few years time when GHC has matured we can expect performance to be
on par with current Clean? So Clean is a good approximation to peak
performance?
--ryan
On 10/31/07, Don Stewart [EMAIL PROTECTED] wrote:
ndmitchell:
Hi
I've been working on optimising Haskell for a little while
goalieca:
So in a few years time when GHC has matured we can expect performance to
be on par with current Clean? So Clean is a good approximation to peak
performance?
The current Clean compiler, for micro benchmarks, seems to be rather
good, yes. Any slowdown wrt. the same program
liftM2 (/) sum length. Anyway, the closest you can get in Haskell is
something like this, using the infix expressions of Ken Shan and Dylan
Thurston http://www.haskell.org/pipermail/haskell-cafe/2002-July/003215.html:
[]
It works (?), but it's pretty ugly and hardly seems worth it,
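For the record, the liftM2 (/) sum length one-liner needs genericLength to typecheck, since length returns Int; it then works in the function monad (a sketch, mean is my name for it):

```haskell
import Control.Monad (liftM2)
import Data.List (genericLength)

-- liftM2 in the ((->) [Double]) monad feeds the same list to both
-- sum and genericLength, then divides the results
mean :: [Double] -> Double
mean = liftM2 (/) sum genericLength

main :: IO ()
main = print (mean [1, 2, 3, 4])
```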
On 31/10/2007, Don Stewart [EMAIL PROTECTED] wrote:
goalieca:
So in a few years time when GHC has matured we can expect performance to
be on par with current Clean? So Clean is a good approximation to peak
performance?
The current Clean compiler, for micro benchmarks, seems to
On Wed, Oct 31, 2007 at 08:35:42AM -0700, brad clawsie wrote:
On Wed, Oct 31, 2007 at 01:36:40PM +, Ian Lynagh wrote:
otherwise i was wondering if people had good examples to point me to
for providing the cross-platform support needed for a FFI-based module
such as this. i have made
Hi Thomas,
On Wed, Oct 31, 2007 at 03:27:20PM -0400, Thomas Hartman wrote:
I have a situation where
... stuff...
$(expose ['setState, 'getState])
f = SetState
compiles but
f = SetState
$(expose ['setState, 'getState])
doesn't compile, with error: Not in scope: data constructor
Hi
So in a few years time when GHC has matured we can expect performance to be
on par with current Clean? So Clean is a good approximation to peak
performance?
No. The performance of many real world programs could be twice as fast
at least, I'm relatively sure. Clean is a good short term
I have a situation where
... stuff...
$(expose ['setState, 'getState])
f = SetState
compiles but
f = SetState
$(expose ['setState, 'getState])
doesn't compile, with error: Not in scope: data constructor `SetState'.
Is this a bug?
expose is defined in HAppS.State.EventTH
On 10/31/07, Neil Mitchell [EMAIL PROTECTED] wrote:
in the long run Haskell should be aiming for equivalence with highly
optimised C.
Really, that's not very ambitious. Haskell should be setting its
sights higher. :-)
When I first started reading about Haskell I misunderstood what
currying was
order matters. But I hope people are transitioning to using mkCommand
instead of expose as it provides more functionality.
-Alex-
Thomas Hartman wrote:
I have a situation where
... stuff...
$(expose ['setState, 'getState])
f = SetState
compiles but
f = SetState
$(expose ['setState,
bf3:
Are these benchmarks still up-to-date? When I started learning FP, I had
to choose between Haskell and Clean, so I made a couple of little
programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes
a lot faster... I don't have the source code anymore, but it was based
on
On Tue, 30 Oct 2007, noa wrote:
I have the following function:
theRemainder :: [String] -> [String] -> Double
theRemainder xs xt = sum (map additional (unique xs))
  where
    additional x = poccur * inf [ppos, pneg]  -- inf takes [Double]
      where
        xsxt = zip
Are these benchmarks still up-to-date? When I started learning FP, I had
to choose between Haskell and Clean, so I made a couple of little
programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes
a lot faster... I don't have the source code anymore, but it was based
on the book
On Wed, 31 Oct 2007, Dan Piponi wrote:
But every day, while coding at work (in C++), I see situations where
true partial evaluation would give a big performance payoff, and yet
there are so few languages that natively support it. Of course it
would require part of the compiler to be present
There are many ways to implement currying. And even with GHC you can get it
to do some work given one argument if you write the function the right way.
I've used this in some code where it was crucial.
But yeah, a code generator at run time is a very cool idea, and one that has
been studied, but
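One way to "write the function the right way" is to bind the per-argument work before taking the second argument, so partial application shares it across calls (a sketch with a stand-in computation; names are hypothetical):

```haskell
-- The let sits between the two lambdas, so `factor` is computed once
-- when `scaleBy x` is applied and then reused for every y.
-- (x * x + 1 stands in for genuinely expensive per-x work.)
scaleBy :: Double -> Double -> Double
scaleBy x =
  let factor = x * x + 1
  in  \y -> factor * y

main :: IO ()
main = print (map (scaleBy 2) [1, 2, 3])
```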
I'd like to see Supero and Jhc - compiled examples in the language shootout.
The site claims it is quite up to date:
about Haskell GHC
The Glorious Glasgow Haskell Compilation System, version 6.6
Examples were compiled mostly in the middle of this year, and
at least -O was used. Each test has a log available. They
are good at documenting what they do.
Peter.
On Wed, 2007-10-31 at 23:44 +0100, Henning Thielemann wrote:
On Wed, 31 Oct 2007, Dan Piponi wrote:
But every day, while coding at work (in C++), I see situations where
true partial evaluation would give a big performance payoff, and yet
there are so few languages that natively support
On Wed, Oct 31, 2007 at 03:37:12PM +, Neil Mitchell wrote:
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language,
Hi
I don't think the register allocator is being rewritten so much as it is
being written:
From talking to Ben, who rewrote the register allocator over the
summer, he said that the new graph based register allocator is pretty
good. The thing that is holding it back is the CPS conversion bit,
On Thu, Nov 01, 2007 at 02:30:17AM +, Neil Mitchell wrote:
Hi
I don't think the register allocator is being rewritten so much as it is
being written:
From talking to Ben, who rewrote the register allocator over the
summer, he said that the new graph based register allocator is
On 01/11/2007, at 2:37 AM, Neil Mitchell wrote:
My guess is that the native code generator in Clean beats GHC, which
wouldn't be too surprising as GHC is currently rewriting its CPS and
Register Allocator to produce better native code.
I discussed this with Rinus Plasmeijer (chief designer
G'day all.
Quoting Derek Elkins [EMAIL PROTECTED]:
Probably RuntimeCompilation (or something like that and linked from the
Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage.
I didn't keep a copy, but if someone wants to retrieve it from the Google
cache and put it on the