Jón Fairbairn wrote:
"Brian Hulley" <[EMAIL PROTECTED]> writes:
I imagine that almost every editor at least does lexical
fontification, and if so, then I don't think there could be
much confusion in practice between these uses of '-'.

I think that unnecessarily disadvantages people with poorer
than average (including zero) eyesight.

For people lacking good eyesight, the equivalent of fontification could simply be some text-to-speech system which reads "-2" as "negative 2" and "x - y" as "x minus y".


>> Yes, a typeclass for exp would be ideal
>
> Well, so long as you call it “exponent” or “expt”.

I'd completely forgotten about the normal (exp) function. I should have written (power) or (pow), though as Cale pointed out, a typeclass may not be a suitable solution due to the lack of a functional dependency to help the compiler choose the correct overloading; in that case I'd go back to advocating (powNat), (powInt), etc.
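To make Cale's objection concrete, here's a rough sketch of the kind of class I had in mind (the names Power, powInt and powNat are just mine for illustration, and Integer stands in for a Natural type):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Illustrative only: a multi-parameter class that dispatches on the
-- type of the exponent.
class Power a b where
    power :: a -> b -> a

instance Power Double Int where
    power = (^)           -- repeated multiplication

instance Power Double Double where
    power = (**)          -- floating-point exponentiation

-- Without a functional dependency tying the two parameters together,
-- a call like  power 2.0 3  is ambiguous: the literal 3 could be an
-- Int or a Double, so the compiler cannot choose an instance.
-- Monomorphic names sidestep the problem:
powInt :: Double -> Int -> Double
powInt = (^)

powNat :: Double -> Integer -> Double   -- Integer standing in for Natural
powNat = (^)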


>> (and a newtype for Natural).
>
> Here's a design principle for you: if an error can be
> detected at compile time, it should be. If we have literals
> for naturals and not negative integers, “negate 100” causes
> no problem, it just means “negate (fromNatural 100)”. If we
> have literals for integers and try to restrict them to get
> naturals, “-100 :: Natural” becomes shorthand for
> “integralToNatural (-100)”, and would (in the absence of
> somewhat arbitrary special-casing in the compiler) give a
> runtime error.

Ok, I'm slowly coming round to the view that having negative literals is not ideal.
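For concreteness, here's roughly how I read the distinction (the names Natural, fromNatural and integralToNatural are just placeholders, not from any existing library):

newtype Natural = Natural Integer
    deriving (Eq, Ord, Show)

-- The safe direction: every natural is an integer, so this can never
-- fail and needs no special-casing in the compiler.
fromNatural :: Natural -> Integer
fromNatural (Natural n) = n

-- The unsafe direction: the check can only happen at runtime, so
--   integralToNatural (-100)
-- blows up when the program runs rather than when it compiles.
integralToNatural :: Integer -> Natural
integralToNatural n
    | n >= 0    = Natural n
    | otherwise = error "integralToNatural: negative argument"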


>> I also agree with Tamas's suggestion that an empirical
>> analysis of Haskell source code could be useful to determine
>> the practical implications of unary minus,
>
> It has merit and I would laud anyone who got round to doing
> it, but there's a danger of measuring the wrong thing. What
> we want to know is not what is more frequent, but what
> causes the greater number of misreadings and which pieces of
> code had the most syntax errors before they were completed,
> and that's harder to measure. Though if unary minus turned
> out to be very rare, we could just drop it. Using “(0-)”
> wouldn't be much of a hardship in that case.
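For anyone who hasn't met the section Jón mentions, a tiny illustration:

-- (0-) is an ordinary left section of binary subtraction, so it can
-- stand in for unary negation wherever a function is expected:
negatedAll :: [Integer]
negatedAll = map (0-) [1, 2, 3]      -- [-1,-2,-3]

-- equivalent, with today's negate:
negatedAll' :: [Integer]
negatedAll' = map negate [1, 2, 3]   -- [-1,-2,-3]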

>> Anyway no doubt I've posted enough emails about unary minus
>> and negative literals so I'll be quiet now ;-)
>
> :-) ... ?

I think the main problem with unary negation is that it's the only place in Haskell where the same symbol is used to represent two different functions (i.e. functions that are not overloads of each other).

I can see why people were tempted to do this, because there is such an intimate relationship between unary minus and binary subtraction. However, I feel it is a slippery slope: convenience has been put before uniformity, leading to confusion.

While such things might be justified in a domain-specific language like Mathematica or MATLAB, for a general-purpose language like Haskell it seems less reasonable to make an exception for just one arithmetic function.
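To spell out the two readings (these are just the standard examples, nothing new):

x :: Integer
x = 5

subtracted :: Integer
subtracted = x - 1       -- binary subtraction: the infix function (-)

negated :: Integer
negated = -1             -- unary negation: shorthand for  negate 1

-- The clash shows up in practice:
--   f -1        parses as  f - 1,  not  f (-1)
--   (-1)        is a parenthesised negative literal, not a section,
--               so "subtract one from each" has to be written as:
decrementAll :: [Integer] -> [Integer]
decrementAll = map (subtract 1)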

Regards, Brian.
--
Logic empowers us and Love gives us purpose.
Yet still phantoms restless for eras long past,
congealed in the present in unthought forms,
strive mightily unseen to destroy us.

http://www.metamilk.com