On Feb 27, 2007, at 1:59 PM, Sven Panne wrote:
On Tuesday 27 February 2007 13:44, Andrzej Jaworski wrote:
I have learned logic from much deeper sources;-)
My statement was:
The guys started in Haskell and came to the conclusion that for performance reasons it is better to move to C. The guys know
On Feb 27, 2007, at 3:51 PM, Andrzej Jaworski wrote:
[...]
Nevertheless my point is still valid: when on the compiler side the heap is stretched and on the program side you need Ockham's Razor in action, Haskell chokes. I hoped at least to stimulate interest in repeating the GP experiment with the latest GHC version.
until that happens, I'd be wary of drawing too many conclusions for today's applications from this paper. A two-orders-of-magnitude difference would seem to imply programming problems to me (though the
On Tue, 2007-02-27 at 16:51 +, Claus Reinke wrote:
okay, profiling was not available for the Haskell version back then, but using ML profiling to improve a Haskell version sounds highly dangerous to me, even more so if the authors do not even mention any awareness of this danger. in
On 2/26/07, Kirsten Chevalier honored me with his attention:
Can you clarify what you mean by this? How do you formally prove that
a programming language (rather than a specific implementation of one)
performs better for a given problem? (..)
It is about my saying: SML was exhaustively proved to
It'd be interesting to get the real code for this. Partly to just try
optimising it but more so as a real test case for list/array fusion.
As far as I can see, there's no reason that consuming an assoc list of a bool vector with a foldl' (the ' is probably essential) should be slow. If it's fused
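The point about foldl' can be made concrete. This is a hypothetical sketch (the real benchmark code is not available in the thread): consuming an assoc list of Bool entries with a strict left fold, where foldl' forces the accumulator at each step so no thunk chain builds up.

```haskell
import Data.List (foldl')

-- Hypothetical example: count the True entries of an assoc list.
-- foldl' (not foldl) forces the accumulator on every step, keeping
-- the fold in constant space -- this is why "the ' is probably
-- essential" in the quote above.
countTrue :: [(Int, Bool)] -> Int
countTrue = foldl' step 0
  where
    step acc (_, b) = if b then acc + 1 else acc

main :: IO ()
main = print (countTrue (zip [0 ..] [True, False, True]))  -- prints 2
```

Whether this fuses with a producer depends on GHC's build/foldr fusion firing, which is exactly the kind of thing the thread suggests testing on a current compiler.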
[redirecting to haskell-cafe, since this is getting to be a long discussion]
On 2/26/07, Andrzej Jaworski [EMAIL PROTECTED] wrote:
The examples I pointed to seem to share strong and relatively consistent
logic of a program. In case of large GA (e.g. Royal Road Problem) and IFP
(e.g. ADATE) SML
Hello Neil,
Friday, January 26, 2007, 3:06:18 AM, you wrote:
One could point to OCaml or others in the ML family, or even more interesting is the case of Clean, whose syntax heavily borrows from Haskell.
ML is strict, which makes a big difference. Things that Haskell compilers do easily
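The strict/lazy gap mentioned here is easy to demonstrate. A minimal sketch, assuming nothing beyond the standard library: a lazy left fold builds a chain of thunks, while foldl' gives the ML-like strict evaluation the quote alludes to.

```haskell
import Data.List (foldl')

-- In a strict language like ML, a left fold evaluates its
-- accumulator at every step. Haskell's lazy foldl instead builds
-- up unevaluated thunks (0+1, (0+1)+2, ...), which can exhaust the
-- stack; foldl' forces the accumulator and runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0  -- thunk chain; risky on large inputs
strictSum = foldl' (+) 0  -- strict, ML-like evaluation

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- prints 500000500000
```

The two definitions denote the same function; only their evaluation behaviour differs, which is the kind of operational detail a strict-language compiler never has to reason about.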
simonmarhaskell:
Forwarding on behalf of Andrzej Jaworski [EMAIL PROTECTED]:
Original Message
From: Andrzej Jaworski [EMAIL PROTECTED]
Dear fellows,
It is ironic that just after SPJ disclosed "Comments from Brent Fulgham on Haskell and the shootout" the situation has
On 1/25/07, Donald Bruce Stewart [EMAIL PROTECTED] wrote:
The degradation is due to two things:
* several entries have been disqualified (some fairly, some unfairly). Fix: submit more
* the shootout Haskellers stopped submitting once it was clear we'd need
Hi
I have to disagree with this. That is, I don't object to Don's
explanation of why the shootout entries degraded in this particular
case, but I do think that Andrzej was right to point this out:
Perhaps making a collective effort towards benchmarking Haskell programs and
analyzing the results
On Thu, 25 Jan 2007, Kirsten Chevalier wrote:
Anything better than staring at intermediate code would be an
improvement, since time spent staring at intermediate code usually is
time spent narrowing down the 2 lines out of 1000 that are relevant.
Maybe it's possible to design tools that could
Hi
Sorry for being unclear. I agree with your comments on GHC, and one
thing I was suggesting was that somebody should think about profiling
tools for improving our understanding of how those transformations
interact with each other, not just profiling tools for understanding
the end result.
Neil Mitchell wrote:
The problem is that something like GHC is very complex, with lots of
transformations. When transformations are firing other
transformations, which in turn fire other transformations, it doesn't
take a great deal to disrupt this flow of optimisation and end up with
a
Neil Mitchell wrote:
That would be very neat. Another neat trick would be generalising optimisations so that there are fewer and more independent passes; this would make it easier to understand (and is what I was working on for Yhc).
Well, it's the nature of repeatedly applying local
Hi
Yhc has intermediate code that is substantially more Haskell-like, and with the command:
Wow, the core looks really cool! One look and you see it all. I would even rename the local variables to single letters like a, b, c, because the cryptic numbers are quite hard to track. This is
Hi
Although there may not be a lot of optimizing Haskell compilers, there are compilers for languages similar to Haskell that consistently perform well. One could point to OCaml or others in the ML family, or even more interesting is the case of Clean, whose syntax heavily borrows from
Uniqueness types do give some extra optimisation potential, such as destructive updates if you can guarantee a variable is only referred to once. But even with that, the language that has impressed me most on the shootout is Clean. Where the Haskell community spends significant time they
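Haskell has no uniqueness types, but the destructive updates Clean gets from them can be recovered in a limited way by confining mutation inside the ST monad, where the type system guarantees the mutable state cannot escape. A small illustrative sketch (the function name is hypothetical, not from the thread):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Clean's uniqueness types let the compiler update a value in place
-- when exactly one reference to it exists. In Haskell, the ST monad
-- offers a comparable guarantee: the STRef below is mutated
-- destructively, yet runST exposes a pure result.
sumInPlace :: [Int] -> Int
sumInPlace xs = runST $ do
  acc <- newSTRef 0                        -- one private mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- strict in-place updates
  readSTRef acc

main :: IO ()
main = print (sumInPlace [1 .. 100])  -- prints 5050
```

The difference is that Clean's compiler infers such opportunities from uniqueness annotations across ordinary code, whereas in Haskell the programmer must opt in explicitly per data structure.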