Now that Haskell 98 is a done deal (for good or for ill -- what on earth
were people thinking of with 'isAlphaNum'?), can we hope that the committee
will revisit the Monomorphism Restriction, and somewhat more seriously
than it's done to date?

Having just wasted an hour or so chasing MR-induced errors through
a moderate-sized module after introducing a relatively minor change,
I feel the need to vent on this topic, yet again.  Why is it considered
acceptable to force users to add oodles of type signatures to an
otherwise well-typed program in a language which is happy to make a
big deal about supporting type inference?  Does it seem stylistically
reasonable to encourage "eta-expanding" compositional-style function
definitions to get around this problem-masquerading-as-a-solution?
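For what it's worth, here's a minimal illustration (a made-up binding,
not the module in question) of the sort of definition the MR bites, and
the two workarounds grumbled about above:

```haskell
module MRDemo where

-- A point-free definition like
--
--   total = foldr (+) 0
--
-- is a "simple pattern binding", so the MR refuses to generalise it:
-- the type gets defaulted (or the program is rejected outright if no
-- default applies), even though plain Hindley-Milner inference would
-- happily give it the polymorphic type below.

-- Workaround 1: add the type signature the MR demands.
total :: Num a => [a] -> a
total = foldr (+) 0

-- Workaround 2: eta-expand.  A function binding (one with arguments on
-- the left of the '=') is outside the MR's reach, so this is inferred
-- polymorphic with no signature at all.
total' xs = foldr (+) 0 xs
```

Semantically identical definitions, treated differently purely on
syntactic grounds -- which is precisely the complaint.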

What on earth is wrong with:  a) flagging instances of possible
overloading inefficiency, hopefully in a more sophisticated manner
than does the MR;  and  b) where actual type ambiguities arise,
reporting those as at present?


In all the instances I've just had to stomp on, the MR seems like
using a type-theoretic sledgehammer to squish a nut, and what's
more, it wasn't even the right nut.

Slan libh,
Alex.


