> Following quite a bit of discussion at a meeting at ICFP,
> and subsequent discussion with a smaller group at Yale,
> I must say that I am now strongly inclined to adopt (2); that is,
> to make Haskell 98 be the same as Haskell 1.4 on Int vs Integer matter.
> (This differs from the view put forth on the "state of play" web page.)

Phew!  I am sure that this is the right thing to do.  If we can make Haskell 98
as close to 1.4 (and Thompson, Bird, etc.) as possible, and encourage compiler
writers to support this standard for a while, then ordinary users can benefit
from stability while the committee stewards the impending backlog of changes
through.  Not only will this help teachers, students and ordinary Haskell
programmers, but I am sure it will make other future developments more viable.
I doubt if I would have tried to make a proper release of TclHaskell if Haskell
and its compilers hadn't been so stable; it just wouldn't have been worth the
candle with all the other things I am trying to get done.  (BTW, I am nearly
finished preparing TclHaskell for general release.)

On another note, I have started playing around with benchmarking and sorting
algorithms and have to say that I am extremely impressed with GHC.  I have been
a wee bit sceptical about the general emphasis placed on code improvement to
make programs go fast, as distinct from writing your program right, something
which could be made more difficult if your operational model is complicated by
a sophisticated compiler.  Take, for example, the recent question on one of the
mailing lists about `seq1 and seq2' (where one program had no space leak but a
minor variant of it did, apparently a level 3 problem on the Marlow-Coffee
scale).  Arguably, by eliminating the space leak in the first program but not
in its minor variation, GHC undermined the programmer's confidence in the
compiler; if the space leak had been present in both programs then the
programmer would have been forced to go back to basics and review the dodgy
practice that led to the problem in the first place.  Anyway, I am playing
devil's advocate here and digressing.
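
In case a concrete picture helps, here is roughly the sort of thing I mean.
This is a sketch of the general phenomenon, not the actual programs from that
thread: the lazy accumulator leaks space unless the compiler's strictness
analysis happens to rescue it, while the seq'd variant is safe regardless.

    -- A sketch of seq-sensitive space behaviour (illustration only;
    -- not the programs discussed on the list).

    -- Lazy accumulator: without optimisation this builds a long chain
    -- of (acc + x) thunks before any addition is performed.
    sumLazy :: [Int] -> Int
    sumLazy = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

    -- Minor variant: seq forces the accumulator at each step, so it
    -- runs in constant space whatever the compiler does.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go acc []     = acc
        go acc (x:xs) = let acc' = acc + x
                        in acc' `seq` go acc' xs

    main :: IO ()
    main = print (sumLazy [1 .. 1000000], sumStrict [1 .. 1000000])

With -O, GHC's strictness analysis will typically make the two behave alike,
which is exactly the sort of silent rescue I was grumbling about above.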

As I was saying, when I switched on the code-improvement flag (-O2) for the
benchmarks it was like engaging afterburners: without optimisation I can sort
160,000 Ints with my preferred sorting method in 6.4 seconds; with -O2, on the
same program, that drops to 5.1 seconds.  It is also turning the code round
quite quickly (on an UltraSPARC).  Furthermore, it responds fairly predictably
to tweaks, and it is quite easy to get good information from GC status reports
which make sense (e.g., after forcing a full GC just after reading in the data
but before sorting, the heap contains 3200232 bytes, 232 bytes more than I
would expect it to use to store the input list of integers, by a
back-of-an-envelope calculation); this kind of thing is extremely reassuring.
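
The envelope, for the record.  The per-object word counts below are my
assumptions about GHC's heap layout on a 32-bit machine, so take them with a
pinch of salt:

    -- Expected heap use for a fully evaluated [Int] of n elements,
    -- assuming a 32-bit GHC heap: each (:) cell is 3 words (header,
    -- head pointer, tail pointer) and each boxed Int is 2 words
    -- (header, payload).
    expectedListBytes :: Int -> Int
    expectedListBytes n = n * (consWords + intWords) * bytesPerWord
      where
        consWords    = 3
        intWords     = 2
        bytesPerWord = 4

    -- expectedListBytes 160000 == 3200000, within 232 bytes of the
    -- 3200232 the GC reported.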

Finally, my experiments are leading me to believe that optimising compilers
work best with small, simple definitions, which is surely a good thing.
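
By way of illustration, here is the shape I have in mind: a generic bottom-up
merge sort built from tiny, separately comprehensible pieces.  (This is just
an illustration, not necessarily the sorting method I benchmarked.)

    -- "Small simple definitions": a bottom-up merge sort in three
    -- little functions, each easy for both reader and compiler.
    msort :: Ord a => [a] -> [a]
    msort = mergeAll . map (:[])
      where
        mergeAll []   = []
        mergeAll [xs] = xs
        mergeAll xss  = mergeAll (mergePairs xss)

        mergePairs (xs:ys:rest) = merge xs ys : mergePairs rest
        mergePairs xss          = xss

        merge [] ys = ys
        merge xs [] = xs
        merge (x:xs) (y:ys)
          | x <= y    = x : merge xs (y:ys)
          | otherwise = y : merge (x:xs) ys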

BTW, do you think this would be worth writing up?  I have had a lot of fun
doing it and have learnt a bit, but I am not sure where it would go: Software
P&E, or would the JFP be interested?

Thanks very much for the generous reduction on the workshop fee; I will send
Helen a sterling draft tomorrow.

Cheers,

Chris

