On Thu, Jan 12, 2012 at 19:38, Donn Cave <[email protected]> wrote:
> >> > Seems obvious to me: on the one hand, there should be a plain-ASCII
> >> > version of any Unicode symbol; on the other, the ASCII version has
> >> > shortcomings the Unicode one doesn't (namely the existing conflict
> >> > between use as composition and use as module and now record
> >> > qualifier). So, the Unicode one requires support but avoids weird
> >> > parse issues.
> >>
> >> OK. To me, the first hand is all you need - if there should be a
> >> plain-ASCII version of any Unicode symbol anyway, then you can avoid
> >> some trouble by just recognizing that you don't need Unicode symbols
> >> (let alone with different parsing rules.)
> >
> > What? The weird parsing rules are part of the ASCII one; it's what the
> > Unicode is trying to *avoid*. We're just about out of ASCII; weird
> > parsing is going to be required at some point.
>
> What what? Are you not proposing to allow both ways to write
> composition, "." and "<unicode symbol>", at the same time, but
> with different syntactical requirements? Unicode characters as
> code would be bad enough, but mixing them with a hodge-podge of
> ASCII aliases with different parsing rules isn't going to win
> any prizes for elegance.
Backward compatibility is rarely elegant, and this is in any case piggybacking on already existing (indeed, longstanding) parser horkage. The point of the Unicode is a first step at getting away from said horkage, which hopefully can be completed someday.

-- 
brandon s allbery                                  [email protected]
wandering unix systems administrator (available)   (412) 475-9364 vm/sms
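(For readers joining the thread: the "parser horkage" at issue is that ASCII `.` is overloaded, so whether it means composition or qualification depends on whitespace and what's in scope. A minimal sketch below; the `∘` operator is defined locally here as an illustration, it is not something GHC ships as a built-in alias.)

```haskell
import qualified Data.List as L

-- 'L.sort' lexes as a single qualified name, NOT as (L . sort);
-- only the surrounding whitespace distinguishes the two readings.
sorted :: [Int] -> [Int]
sorted = L.sort

-- With spaces around it, '.' is ordinary function composition.
sortedRev :: [Int] -> [Int]
sortedRev = reverse . L.sort

-- A Unicode composition operator (hypothetical alias, defined here
-- for illustration) has no qualified-name reading, so it needs no
-- whitespace rule at all:
(∘) :: (b -> c) -> (a -> b) -> a -> c
(∘) = (.)

sortedRev' :: [Int] -> [Int]
sortedRev' = reverse ∘ L.sort
```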
_______________________________________________
Glasgow-haskell-users mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
