Fergus Henderson <[EMAIL PROTECTED]> writes:
> This mail is in reply to something posted to the Standard Haskell
> WWW page / mailing list. If this is not the best forum for such
> responses, please let me know what is.
I think it's the only forum available.
In http://www.cs.chalmers.se/~rjmh/Haskell/Messages/Display.cgi?id=425, I
wrote:
> > So newtype fails to provide a __ZERO COST__ way of changing types.
Fergus replied:
> With *current* compilers, yes. But the problem you refer to may be
> forgotten about next year, if compiler optimization gets a little better.
>
> If this problem is really important enough to worry about, then
> it would not be difficult to do the necessary compiler optimization.
> A good compiler will already specialize `map mkNat'.
> The problem is just that the compiler may not notice that the
> specialized version just traverses a list and reconstructs it.
> But a quite simple bottom-up analysis could identify functions which
> happen to be identity functions (modulo do-nothing newtype type conversions),
> and optimize them away.
>
> Implementing this would be easier than adding support for your
> proposed language extensions, I think.
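For concreteness, the kind of code under discussion looks something like
this (the names are mine, not from either message):

    newtype Nat = MkNat Int

    toNats :: [Int] -> [Nat]
    toNats = map MkNat

Since MkNat has no run-time effect, toNats is an identity function modulo
the newtype: it traverses the whole list and rebuilds it just to change
the type of the elements.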
I don't think it's as simple as you suggest:
1. Your optimisation isn't just trying to spot that a function does
   nothing: it has to spot that a function does nothing provided some of
   its arguments are do-nothing functions.
2. Your optimisation has to handle recursive functions (which needs a
   little more than bottom-up detection), indirectly recursive do-nothing
   functions, functions that call other functions across module
   boundaries, and perhaps even mutually recursive functions defined in a
   pair of mutually recursive modules.
3. There are a number of different ways of writing the typecast
   functions, using higher-order functions, overloading, etc. to varying
   degrees (a few variations are sketched below).
   Can we easily detect all reasonable variations that programmers might
   write?  Can we agree on which styles will be recognised?  How brittle
   will the detection be?  Could a small change break it and incur a huge
   overhead?  Will all implementations detect the same set of variations,
   or will it vary from one implementation to another?  Will programmers
   end up adding big comments next to their coercion code that say
   "don't change a single character of this code, I've carefully checked
   that it works with GHC version 4.76 and hbc 0.9999999999"?
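To illustrate issue 3, here are a few of the variations I have in mind.
All of them are identities at run time, but they look rather different to
an optimiser.  (The overloaded one uses a multi-parameter class, so it is
an extension rather than standard Haskell.)

    newtype Nat = MkNat Int   -- as above

    -- directly, with map
    toNats1 :: [Int] -> [Nat]
    toNats1 = map MkNat

    -- by hand, with explicit recursion
    toNats2 :: [Int] -> [Nat]
    toNats2 []     = []
    toNats2 (x:xs) = MkNat x : toNats2 xs

    -- via a fold
    toNats3 :: [Int] -> [Nat]
    toNats3 = foldr (\x ns -> MkNat x : ns) []

    -- via an overloaded conversion class
    class Cast a b where
      cast :: a -> b
    instance Cast Int Nat where
      cast = MkNat
    instance Cast a b => Cast [a] [b] where
      cast = map cast

    toNats4 :: [Int] -> [Nat]
    toNats4 = cast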
Issues 1 and 2 can be solved with sufficient effort. In fact, you
can probably go a long, long way to solving them by implementing
cross-module inlining and a few simple optimisations.
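For what it's worth, the core of the bottom-up check is easy to sketch;
the hard part is precisely the interaction with higher-order arguments,
recursion and separate compilation described above.  (The representation
below is a toy of my own, not any particular compiler's intermediate
language.)

    data Expr
      = Var String
      | Lam String Expr
      | App Expr Expr
      | Wrap Expr       -- newtype constructor: no run-time effect
      | Unwrap Expr     -- newtype selector:    no run-time effect

    -- Strip the do-nothing coercions ...
    erase :: Expr -> Expr
    erase (Wrap e)   = erase e
    erase (Unwrap e) = erase e
    erase (Lam v e)  = Lam v (erase e)
    erase (App f a)  = App (erase f) (erase a)
    erase e          = e

    -- ... and then look for \x -> x.  A real analysis must also
    -- recognise, say, `map f' as an identity whenever `f' is (issue 1),
    -- and must cope with recursion and module boundaries (issue 2).
    isIdentity :: Expr -> Bool
    isIdentity e = case erase e of
                     Lam v (Var w) -> v == w
                     _             -> False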
Issue 3 is more of a problem - do we want a guarantee that there's no
overhead or do we want to hope that Simon and Lennart can figure it out
for us?
Personally, I think this sounds like it'll be hard to implement right,
and it won't be implemented in all compilers (e.g. neither Hugs nor NHC
does any real optimisation, and I don't think hbc does much cross-module
optimisation).
Adding a new typeclass, providing a trivial deriving for it, and deleting
uses of its only method sounds pretty easy in comparison.  Note that
there's no need to actually derive the method, since no one ever
uses it.
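To be clear about what I mean (names and details are illustrative; this
is a sketch of the idea, not the precise proposal), the class itself is
tiny - again a multi-parameter class, so an extension:

    -- A class whose sole method witnesses the coercion.  The compiler
    -- supplies instances for newtypes via a trivial deriving clause and
    -- erases every call of the method, so no code need ever be generated
    -- for it.
    class Coerce a b where
      coerce :: a -> b      -- guaranteed to cost nothing at run time

    -- hypothetical usage, with instances lifted through containers so
    -- that coerce also works at [Int] -> [Nat]:
    --   newtype Nat = MkNat Int deriving (Coerce Int)
    --   toNats :: [Int] -> [Nat]
    --   toNats = coerce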
Of course, it might be that neither approach is justified by the
performance improvement it would bring.  Are there any examples
where the amortised cost of the coercion is more than a small
constant?
Alastair
PS: Of course, the whole Standard Haskell discussion seems to be
irrelevant, since no one on the committee seems to be doing anything at
the moment.