> Marco Cimarosti scripsit:
>
> > The same should be true for the £ sign.
> >
> > But unluckily, for some obscure reason, Unicode thinks that currencies
> > called "pound" should have one bar and be encoded with U+00A3, while
> > currencies called "lira" should have two bars and be encoded with U+20A4.
>
> "Every character has its own story."
>
> Can the old farts^W^Wtribal elders shed any light on this one?
Not much. The proximate cause of the inclusion of U+20A4 LIRA SIGN in 10646 was: WG2 N708, 1991-06-14, Table of Replies (to the ballot on 10646 DIS, "DIS-1"). That document contains the U.S. comments asking for all the additions which would synchronize the DIS repertoire with the Unicode 1.0 repertoire, and that included U+20A4 LIRA SIGN.

It is a deeper subject to figure out how the LIRA SIGN got into Unicode 1.0 in the first place, and I don't have all the relevant documents to hand to track it down. It was certainly already in the April 1990 pre-publication draft of Unicode 1.0 which was widely circulated.

I do recall the issue of the one-bar versus two-bar yen/yuan sign being researched in detail and being explicitly decided. I also recall explicit (and tedious) discussions about the various dollar sign glyphs. I do not, however, recall any time spent in discussing the analogous problem of glyph alternates for the pound/lira sign, although it was probably mentioned in passing. So it is possible that the lira sign simply derives from a draft list that was standardized without anyone ever spending time to debate the pound/lira symbol unification first. It was probably in the same lists that distinguished the yen/yuan sign before it was determined that distinguishing those two as a *character* was untenable. Those were heady days.

It is generally much easier to track down why something was added post-Unicode 1.0 than it is to figure out how something got into Unicode 1.0 in the first place.

To quote from a particularly memorable email I sent around on April 4, 1991 about an unrelated mistake that was almost made:

"The High Ogonek is symptomatic of one of the things wrong about the character standardization business, which encourages the blithe perpetuation of mistaken 'characters' from standard to standard, like code viruses. At least, in the past, the epidemic was constrained by the fact that the encoding bodies only had 256 cells which could get infected by such abominations as half-integral signs. Now, however,... the number of cells available for infection is vast, and the temptation to encode everybody else's junk just seems to have become irresistible...

"...I don't think I would be telling any tales out of school if I revealed that Unicode almost got a 'High ogonek', too, since Unicode was busy incorporating all the 10646 mistakes in Unicode while 10646 was busy incorporating all the Unicode mistakes in 10646. ..."

--Ken
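
For reference, both code points under discussion remain separately encoded today. A minimal check with Python's standard unicodedata module (an illustrative snippet, not part of the original exchange) shows the two characters, their names, and their general category:

```python
import unicodedata

# U+00A3 and U+20A4 are both encoded, both with general category Sc
# (currency symbol); only their names and reference glyphs differ.
for cp in (0x00A3, 0x20A4):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch)}  {unicodedata.category(ch)}")

# Expected output:
#   U+00A3  £  POUND SIGN  Sc
#   U+20A4  ₤  LIRA SIGN  Sc
```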

