Isaac Gouy <[EMAIL PROTECTED]> writes:
> Ketil Malde wrote:
>> [LOC vs gz as a program complexity metric]
> Do either of those make sense as a "program /complexity/ metric"?
Sorry, bad choice of words on my part.
-k
--
If I haven't seen further, it is by standing in the footprints of giants
_
On Nov 3, 2007 5:00 AM, Ryan Dickie <[EMAIL PROTECTED]> wrote:
> Lossless file compression, a.k.a. entropy coding, attempts to maximize the
> amount of information per bit (or byte), getting as close to the entropy as
> possible. Basically, gzip is measuring (approximating) the amount of
> "information"
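That intuition is easy to check directly: gzip's output size tracks how much non-redundant content a snippet carries, not how many lines it occupies. A minimal sketch (the snippets and the `gz_size` helper are invented for illustration, not from the thread):

```python
import gzip

def gz_size(source: str) -> int:
    # approximate "information content" as the gzipped byte count
    return len(gzip.compress(source.encode("utf-8")))

# highly repetitive boilerplate vs. superficially similar but varied code
boilerplate = "int get_x() { return x; }\n" * 40
varied = "".join("int get_%c%d() { return %d; }\n" % (chr(97 + i % 26), i, i * 7)
                 for i in range(40))

# comparable raw sizes, but the repetitive snippet compresses far better
print(len(boilerplate), gz_size(boilerplate))
print(len(varied), gz_size(varied))
```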
On 11/2/07, Sterling Clover <[EMAIL PROTECTED]> wrote:
>
> As I understand it, the question is what you want to measure for.
> gzip is actually pretty good at, precisely because it removes
> boilerplate, reducing programs to something approximating their
> complexity. So a higher gzipped size means
As I understand it, the question is what you want to measure for.
gzip is actually pretty good at, precisely because it removes
boilerplate, reducing programs to something approximating their
complexity. So a higher gzipped size means, at some level, a more
complicated algorithm (in the cas
--- Greg Fitzgerald <[EMAIL PROTECTED]> wrote:
> >> while LOC is not perfect, gzip is worse.
> > the gzip change didn't significantly alter the rankings
>
> Currently the gzip ratio of C++ to Python is 2.0, which at a glance,
> wouldn't sell me on a "less code" argument.
a) you're looking at a
On Friday 02 November 2007 23:53, Isaac Gouy wrote:
> > > Best case you'll end up concluding that the added complexity had
> > > no adverse effect on the results.
>
> Best case would be seeing that the results were corrected against bias
> in favour of long lines, and ranked programs in a way that
>> while LOC is not perfect, gzip is worse.
> the gzip change didn't significantly alter the rankings
Currently the gzip ratio of C++ to Python is 2.0, which at a glance,
wouldn't sell me on a "less code" argument. Although the rank stayed the
same, did the change reduce the magnitude of the vict
On Friday 02 November 2007 20:29, Isaac Gouy wrote:
> ...obviously LOC doesn't tell you anything
> about how much stuff is on each line, so it doesn't tell you about the
> amount of code that was written or the amount of code the developer can
> see whilst reading code.
Code is almost ubiquitousl
igouy2:
>
> --- Sebastian Sylvan <[EMAIL PROTECTED]> wrote:
> -snip-
> > It still tells you how much content you can see on a given amount of
> > vertical space.
>
> And why would we care about that? :-)
>
>
> > I think the point, however, is that while LOC is not perfect, gzip is
> > worse.
--- Sebastian Sylvan <[EMAIL PROTECTED]> wrote:
-snip-
> It still tells you how much content you can see on a given amount of
> vertical space.
And why would we care about that? :-)
> I think the point, however, is that while LOC is not perfect, gzip is
> worse.
How do you know?
> > Best
On 11/2/07, Isaac Gouy <[EMAIL PROTECTED]> wrote:
> How strange that you've snipped out the source code shape comment that
> would undermine what you say - obviously LOC doesn't tell you anything
> about how much stuff is on each line, so it doesn't tell you about the
> amount of code that was wri
--- Jon Harrop <[EMAIL PROTECTED]> wrote:
> On Friday 02 November 2007 19:03, Isaac Gouy wrote:
> > It's slightly interesting that, while we're happily opining about
> LOCs
> > and gz, no one has even tried to show that switching from LOCs to
> gz
> > made a big difference in those "program bulk"
On Friday 02 November 2007 19:03, Isaac Gouy wrote:
> It's slightly interesting that, while we're happily opining about LOCs
> and gz, no one has even tried to show that switching from LOCs to gz
> made a big difference in those "program bulk" rankings, or even
> provided a specific example that th
On 11/2/07, Isaac Gouy <[EMAIL PROTECTED]> wrote:
> Ketil Malde wrote:
>
> > [LOC vs gz as a program complexity metric]
>
> Do either of those make sense as a "program /complexity/ metric"?
You're right! We should be using Kolmogorov complexity instead!
I'll go write a program to calculate it fo
Ketil Malde wrote:
> [LOC vs gz as a program complexity metric]
Do either of those make sense as a "program /complexity/ metric"?
Seems to me that's reading a lot more into those measurements than we
should.
It's slightly interesting that, while we're happily opining about LOCs
and gz, no one
On 02/11/2007, Bulat Ziganshin <[EMAIL PROTECTED]> wrote:
> Hello Sebastian,
>
> Thursday, November 1, 2007, 9:58:45 PM, you wrote:
>
> > the ideal. Token count would be good, but then we'd need a parser for
> > each language, which is quite a bit of work to do...
>
> i think that wc (word count) w
Hello Sebastian,
Thursday, November 1, 2007, 9:58:45 PM, you wrote:
> the ideal. Token count would be good, but then we'd need a parser for
> each language, which is quite a bit of work to do...
i think that wc (word count) would be a good enough approximation
--
Best regards,
Bulat
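A `wc -w`-style count does have one nice property that LOC lacks: it does not change when the same tokens are re-wrapped across lines. A quick illustrative check (invented snippets, with Python's `str.split` standing in for `wc -w`):

```python
def word_count(src: str) -> int:
    # crude token proxy: whitespace-separated words, like `wc -w`
    return len(src.split())

dense = "a = 5; b = 6; c = 7\n"    # one line
spread = "a = 5\nb = 6\nc = 7\n"   # same statements, three lines

# LOC is 1 vs 3, but the word count is identical either way
print(word_count(dense), word_count(spread))
# still only an approximation: e.g. "f(x)+1" would count as a single word
```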
"Sebastian Sylvan" <[EMAIL PROTECTED]> writes:
[LOC vs gz as a program complexity metric]
>> Obviously no simple measure is going to satisfy everyone, but I think the
>> gzip measure is more even-handed across a range of languages.
>> It probably more closely approximates the amount of mental ef
Quoting Justin Bailey <[EMAIL PROTECTED]>:
Done: http://www.haskell.org/haskellwiki/RuntimeCompilation . Please
update it as needed.
Thanks!
Cheers,
Andrew Bromage
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman
Yes, of course. But they don't do partial evaluation.
On 11/1/07, Bulat Ziganshin <[EMAIL PROTECTED]> wrote:
>
> Hello Lennart,
>
> Thursday, November 1, 2007, 2:45:49 AM, you wrote:
>
> > But yeah, a code generator at run time is a very cool idea, and one
> > that has been studied, but not enoug
On 01/11/2007, Tim Newsham <[EMAIL PROTECTED]> wrote:
> > Unfortunately, they replaced line counts with bytes of gzip'ed code --
> > while the former certainly has its problems, I simply cannot imagine
> > what relevance the latter has (beyond hiding extreme amounts of
> > repetitive boilerplate in
Unfortunately, they replaced line counts with bytes of gzip'ed code --
while the former certainly has its problems, I simply cannot imagine
what relevance the latter has (beyond hiding extreme amounts of
repetitive boilerplate in certain languages).
Sounds pretty fair to me. Programming is a jo
On 10/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I didn't keep a copy, but if someone wants to retrieve it from the Google
> cache and put it on the new wiki (under the new licence, of course), please
> do so.
>
> Cheers,
> Andrew Bromage
Done: http://www.haskell.org/haskellwiki/Runtime
Ketil Malde wrote:
Python used to do pretty well here compared
to Haskell, with rather efficient hashes and text parsing, although I
suspect ByteString IO and other optimizations may have changed that
now.
It still does just fine. For typical "munge a file with regexps, lists,
and maps" tas
| Subject: Re: [Haskell-cafe] Re: Why can't Haskell be faster?
|
| On 01/11/2007, Simon Peyton-Jones <[EMAIL PROTECTED]> wrote:
| > Yes, that's right. We'll be doing a lot more work on the code generator in
the rest of this year and 2008.
| Here "we" includ
On 01/11/2007, Simon Peyton-Jones <[EMAIL PROTECTED]> wrote:
> Yes, that's right. We'll be doing a lot more work on the code generator in
> the rest of this year and 2008. Here "we" includes Norman Ramsey and John
> Dias, as well as past interns Michael Adams and Ben Lippmeier, so we have
> re
Neil wrote:
The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I don't think the difference in performance
is a language thing, but a compiler thing. The
uniqueness type stuff may g
Bernie wrote:
I discussed this with Rinus Plasmeijer (chief designer of Clean) a
couple of years ago, and if I remember correctly, he said that the
native code generator in Clean was very good, and a significant
reason why Clean produces (relatively) fast executables. I think he
said that
Yes, that's right. We'll be doing a lot more work on the code generator in the
rest of this year and 2008. Here "we" includes Norman Ramsey and John Dias, as
well as past interns Michael Adams and Ben Lippmeier, so we have real muscle!
Simon
| > I don't think the register allocator is being r
I assume the reason they switched away from LOC is to prevent
programmers artificially reducing their LOC count, e.g. by using
a = 5; b = 6;
rather than
a = 5;
b = 6;
in languages where newlines aren't syntactically significant. When
gzipped, I guess that the ";\n" string will be represented about
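gzip does flatten exactly that trick: the two layouts differ threefold in LOC but compress to almost the same size, because the separator is just one more short repeated string. A rough check (invented snippets):

```python
import gzip

one_line = "a = 5; b = 6; c = 7\n" * 30    # statements crammed onto few lines
multi_line = "a = 5\nb = 6\nc = 7\n" * 30  # same statements, one per line

def loc(src: str) -> int:
    return src.count("\n")

def gz(src: str) -> int:
    return len(gzip.compress(src.encode("utf-8")))

print(loc(one_line), loc(multi_line))   # 30 vs 90: LOC rewards cramming
print(gz(one_line), gz(multi_line))     # gzipped sizes are nearly identical
```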
Hello Lennart,
Thursday, November 1, 2007, 2:45:49 AM, you wrote:
> But yeah, a code generator at run time is a very cool idea, and one
> that has been studied, but not enough.
vm-based languages (java, c#) have runtimes that compile bytecode to
native code at runtime
--
Best regards,
Bulat
Don Stewart <[EMAIL PROTECTED]> writes:
> goalieca:
>>So in a few years time when GHC has matured we can expect performance to
>>be on par with current Clean? So Clean is a good approximation to peak
>>performance?
If I remember the numbers, Clean is pretty close to C for most
benchm
G'day all.
Quoting Derek Elkins <[EMAIL PROTECTED]>:
Probably RuntimeCompilation (or something like that and linked from the
Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage.
I didn't keep a copy, but if someone wants to retrieve it from the Google
cache and put it on th
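For readers who haven't seen the page: the Knuth-Morris-Pratt construction it links from is itself a small instance of the runtime-compilation idea, in that the pattern is "compiled" once into a failure table and then reused against any number of texts. A hedged sketch of that idea (not the wiki page's actual code, and in Python rather than Haskell):

```python
def compile_matcher(pattern: str):
    # "compile" step: build the KMP failure table once per pattern
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    def search(text: str) -> int:
        # "run" step: scan text in one pass using the precomputed table
        k = 0
        for i, c in enumerate(text):
            while k and c != pattern[k]:
                k = fail[k - 1]
            if c == pattern[k]:
                k += 1
            if k == len(pattern):
                return i - k + 1   # index of first match
        return -1

    return search

find_abab = compile_matcher("abab")   # table built here, once
print(find_abab("xxabab"))            # 2
```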
On 01/11/2007, at 2:37 AM, Neil Mitchell wrote:
My guess is that the native code generator in Clean beats GHC, which
wouldn't be too surprising as GHC is currently rewriting its CPS and
Register Allocator to produce better native code.
I discussed this with Rinus Plasmeijer (chief designer of
On Thu, Nov 01, 2007 at 02:30:17AM +, Neil Mitchell wrote:
> Hi
>
> > I don't think the register allocator is being rewritten so much as it is
> > being written:
>
> From talking to Ben, who rewrote the register allocator over the
> summer, he said that the new graph-based register allocator
Hi
> I don't think the register allocator is being rewritten so much as it is
> being written:
From talking to Ben, who rewrote the register allocator over the
summer, he said that the new graph-based register allocator is pretty
good. The thing that is holding it back is the CPS conversion bit,
On Wed, Oct 31, 2007 at 03:37:12PM +, Neil Mitchell wrote:
> Hi
>
> I've been working on optimising Haskell for a little while
> (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
> on this. The Clean and Haskell languages both reduce to pretty much
> the same Core languag
On Wed, 2007-10-31 at 23:44 +0100, Henning Thielemann wrote:
> On Wed, 31 Oct 2007, Dan Piponi wrote:
>
> > But every day, while coding at work (in C++), I see situations where
> > true partial evaluation would give a big performance payoff, and yet
> > there are so few languages that natively sup
There are many ways to implement currying. And even with GHC you can get it
to do some work given one argument if you write the function the right way.
I've used this in some code where it was crucial.
But yeah, a code generator at run time is a very cool idea, and one that has
been studied, but
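The "some work given one argument" point can be imitated even without a clever compiler: write the two-argument function so that everything depending only on the first argument is computed once, before the second arrives. An illustrative sketch (invented example, Python closures standing in for partial application):

```python
def make_power(n: int):
    # "static" work: done once, when only the exponent is known
    bits = [int(b) for b in bin(n)[2:]]

    def power(x: int) -> int:
        # "dynamic" work: square-and-multiply driven by the precomputed bits
        r = 1
        for b in bits:
            r = r * r
            if b:
                r = r * x
        return r

    return power

cube = make_power(3)      # the bit decomposition of 3 happens here, once
print(cube(2), cube(5))   # 8 125
```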
I'd like to see Supero and Jhc - compiled examples in the language shootout.
The site claims it is quite up to date:
about Haskell GHC
The Glorious Glasgow Haskell Compilation System, version 6.6
Examples were compiled mostly in the middle of this year and
at least -O was used. Each test has a log available. They
are good at documenting what they do.
Peter.
Peter Vers
On Wed, 31 Oct 2007, Dan Piponi wrote:
> But every day, while coding at work (in C++), I see situations where
> true partial evaluation would give a big performance payoff, and yet
> there are so few languages that natively support it. Of course it
> would require part of the compiler to be prese
On 10/31/07, Neil Mitchell <[EMAIL PROTECTED]> wrote:
> in the long run Haskell should be aiming for equivalence with highly
> optimised C.
Really, that's not very ambitious. Haskell should be setting its
sights higher. :-)
When I first started reading about Haskell I misunderstood what
currying
Hi
> So in a few years time when GHC has matured we can expect performance to be
> on par with current Clean? So Clean is a good approximation to peak
> performance?
No. The performance of many real-world programs could be twice as fast
at least, I'm relatively sure. Clean is a good short-term ta
On 31/10/2007, Don Stewart <[EMAIL PROTECTED]> wrote:
> goalieca:
> >So in a few years time when GHC has matured we can expect performance to
> >be on par with current Clean? So Clean is a good approximation to peak
> >performance?
> >
>
> The current Clean compiler, for micro benchmark
goalieca:
>So in a few years time when GHC has matured we can expect performance to
>be on par with current Clean? So Clean is a good approximation to peak
>performance?
>
The current Clean compiler, for micro benchmarks, seems to be rather
good, yes. Any slowdown wrt. the same progra
So in a few years time when GHC has matured we can expect performance to be
on par with current Clean? So Clean is a good approximation to peak
performance?
--ryan
On 10/31/07, Don Stewart <[EMAIL PROTECTED]> wrote:
>
> ndmitchell:
> > Hi
> >
> > I've been working on optimising Haskell for a litt
ndmitchell:
> Hi
>
> I've been working on optimising Haskell for a little while
> (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
> on this. The Clean and Haskell languages both reduce to pretty much
> the same Core language, with pretty much the same type system, once
> yo
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I do
Peter Hercek wrote:
> * it is easy to mark stuff strict (even in function signatures
> etc), so it is possible to save on unnecessary CAF creations
Also, the Clean compiler has a strictness analyzer. The compiler will
analyze code and find many (but not all) cases where a function argument
can
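The cost being saved can be imitated outside Haskell: a lazy accumulator builds a chain of suspended computations that must all be forced at the end, while a strict one keeps a plain evaluated value at every step. A rough simulation (invented example, with thunks modelled as zero-argument closures):

```python
def lazy_sum(xs):
    # each step wraps the previous accumulator in a new suspension (thunk)
    acc = lambda: 0
    for x in xs:
        acc = (lambda prev, y: lambda: prev() + y)(acc, x)
    return acc()  # forcing the final thunk walks the whole chain

def strict_sum(xs):
    # the accumulator is evaluated at every step; no chain is built
    acc = 0
    for x in xs:
        acc += x
    return acc

print(lazy_sum(range(100)), strict_sum(range(100)))  # 4950 4950
```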
Robin Green wrote:
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean <[EMAIL PROTECTED]> wrote:
Specifically, Clean's uniqueness types allow for a certain kind of
zero-copy mutation optimisation which is much harder for a Haskell
compiler to automatically infer. It's not clear to me that it's
act
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean <[EMAIL PROTECTED]> wrote:
> Specifically, Clean's uniqueness types allow for a certain kind of
> zero-copy mutation optimisation which is much harder for a Haskell
> compiler to automatically infer. It's not clear to me that it's
> actually worth it
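The optimisation in question is the difference between copying a structure to update it and overwriting it in place, which is only safe when the old version is provably not shared. A small illustration (invented helpers; Python lists standing in for the arrays being updated):

```python
def set_persistent(xs, i, v):
    # no uniqueness known: copy, then write, so old versions stay valid
    ys = list(xs)
    ys[i] = v
    return ys

def set_unique(xs, i, v):
    # uniqueness known: the sole reference may be mutated in place, no copy
    xs[i] = v
    return xs

a = [1, 2, 3]
b = set_persistent(a, 0, 9)   # a is untouched
c = set_unique(a, 0, 9)       # a itself is updated; c is the same object
print(a, b, c is a)           # [9, 2, 3] [9, 2, 3] True
```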
Paulo J. Matos wrote:
type system? Why is that? Shouldn't the type system in fact speed up the
generated code, since it will know all types at compile time?
The *existence* of a type system is helpful to the compiler.
Peter was referring to the differences between haskell and clean.
Specifically,
Paulo J. Matos wrote:
So the slowness of Haskell (compared to Clean) is a consequence of
its type system. OK, I'll stop, I did not write Clean nor Haskell
optimizers or stuff like that :-D
type system? Why is that? Shouldn't the type system in fact speed up the
generated code, since it will
On 31/10/2007, Peter Hercek <[EMAIL PROTECTED]> wrote:
> Add to that better unbox / box annotations; this may make an even
> bigger difference than the strictness stuff because it allows
> you to avoid a lot of indirect references to data.
>
> Anyway, if Haskell would do some kind of whole program
On 31/10/2007, Peter Hercek <[EMAIL PROTECTED]> wrote:
> Anyway, if Haskell did some kind of whole-program analyses
> and transformations it could probably mitigate all the problems
> to a certain degree.
>
I think JHC is supposed to do whole-program optimisations. Rumour has
it that its H
Add to that better unbox / box annotations; this may make an even
bigger difference than the strictness stuff because it allows
you to avoid a lot of indirect references to data.
Anyway, if Haskell did some kind of whole-program analyses
and transformations it could probably mitigate all the p
I'm curious what experts think too.
So far I just guess it is because of the Clean type system giving
better hints for optimizations:
* it is easy to mark stuff strict (even in function signatures
etc), so it is possible to save on unnecessary CAF creations
* uniqueness types allow to do in-plac