Sebastian Sylvan wrote:
On 11/10/06, Henk-Jan van Tuyl <[EMAIL PROTECTED]> wrote:


On Fri, 10 Nov 2006 01:44:15 +0100, Donald Bruce Stewart
<[EMAIL PROTECTED]> wrote:

> So back in January we had lots of fun tuning up Haskell code for the
> Great Language Shootout[1]. We did quite well at the time, at one point
> ranking overall first[2]. [...]

Haskell suddenly dropped several places in the overall score when the
size measurement changed from line count to number of bytes after
gzipping. Maybe it's worth studying why this is; Haskell programs are
often much more compact than programs in other languages, but after
gzipping, other languages do much better. One reason I can think of is
that for very short programs, the import statements weigh heavily.


I think the main factor is that languages with large syntactic
redundancy get that compressed away. I.e. if you write:

MyVeryLongAndConvolutedClassName myVeryLargeAndConvolutedObject =
    new MyVeryLongAndConvolutedClassName( someOtherLongVariableName );

or something like that, it makes the code clumsy and difficult to
read, but it won't affect the gzipped byte count very much.
Their current way of measuring is pretty much pointless, since the
main thing the gzipping does is remove the impact of clunky syntax.
Measuring lines of code is certainly not perfect, but IMO it's a lot
more useful as a metric than gzipped bytes.
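A quick sketch of this effect (the snippets below are made up for illustration,
not taken from the shootout): gzip a verbose, identifier-heavy fragment and a
terse one, and compare how the size ratio between them shrinks after compression.

```python
import gzip

# Hypothetical snippets: one syntactically redundant, one terse.
# Neither comes from the shootout; they only illustrate the effect.
verbose = (
    b"MyVeryLongAndConvolutedClassName myVeryLargeAndConvolutedObject =\n"
    b"    new MyVeryLongAndConvolutedClassName(someOtherLongVariableName);\n"
    b"myVeryLargeAndConvolutedObject.doSomething(someOtherLongVariableName);\n"
)
terse = b"let o = mkThing x\ndoSomething o x\n"

raw_ratio = len(verbose) / len(terse)
gz_ratio = len(gzip.compress(verbose)) / len(gzip.compress(terse))

# The repeated long identifiers compress to cheap back-references,
# so the verbose version's size penalty shrinks after gzipping.
print(f"raw ratio: {raw_ratio:.1f}, gzipped ratio: {gz_ratio:.1f}")
```

The verbose fragment is several times larger than the terse one in raw bytes,
but after gzipping the gap narrows considerably, which is exactly why the
metric stops punishing clunky syntax.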

Sure, since gzip is the metric, we can optimise for that. For example, instead of writing a higher-order function, just copy it out N times, instantiating the higher-order argument differently each time. There should be no gzipped-code-size penalty for doing that, and it'll be faster :-)
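One can check this claim directly (the Haskell fragments here are invented
examples of the kind of duplication being described, embedded as byte strings
so gzip can measure them): compare one higher-order definition against five
near-identical specialised copies.

```python
import gzip

# A hypothetical higher-order definition, and five near-identical
# specialised copies of the kind described above.
higher_order = b"sumWith f xs = foldl (\\a x -> a + f x) 0 xs\n"
copies = b"".join(
    b"sumWith%d xs = foldl (\\a x -> a + f%d x) 0 xs\n" % (i, i)
    for i in range(5)
)

# gzip turns the repeated bodies into back-references, so five copies
# gzip to far less than five times one copy, and less than their raw size.
print(len(copies), len(gzip.compress(copies)), len(gzip.compress(higher_order)))
```

The duplicated version grows almost linearly in raw bytes but only marginally
in gzipped bytes, so under the gzip metric the copy-paste "optimisation" is
nearly free.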

Cheers,
        Simon
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
