Bob continued:

> I didn't want unicode people to do math, but it might even be apparent
> to those who encode symbols that the overwhelming usage of the single
> symbol pi is in the conceptually natural combination 2 pi (are there
> any other examples)
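As an aside, the "half as many bits" arithmetic is easy to check. A minimal Python sketch, using tau (U+03C4) purely as a hypothetical stand-in single code point for the proposed symbol (the symbol itself is not encoded):

```python
# Compare byte counts for the two-character spelling "2<pi>" versus a
# single stand-in code point "<tau>" in two common encoding forms.
# NOTE: U+03C4 is used only as an illustrative placeholder here.
for text in ("2\u03c0", "\u03c4"):
    utf8 = text.encode("utf-8")
    utf16 = text.encode("utf-16-be")
    print(repr(text), len(utf8), "UTF-8 bytes,", len(utf16), "UTF-16 bytes")
```

The "half" claim holds in UTF-16 (4 bytes versus 2) but not in UTF-8, where "2" is 1 byte and a Greek-block code point is 2 (3 bytes versus 2).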
This is apparent to those of us familiar with math and/or those who took the time to read your article -- but that is irrelevant to any decision about whether it is appropriate even to start the process of encoding a character for your suggested symbol for this constant.

> and this would better be done in half as many bits ;-)

But this is not apparent at all.

> The two points below raise the question to me, is popularity/use
> the criterion or not.

^^^ Presuppositional error. These encoding decisions are not made on the basis of a single, axiomatic criterion.

Demonstration of use in published or otherwise printed material is an important criterion. So is demonstration of use in implemented software, which may not be the same thing at all. (Some characters are invisible units of processing, after all, and may have no direct manifestation in print.) So is indication of support by a relevant community of users -- sometimes in advance of published use. So is use in textual interchange, or conversion, or interworking with legacy character encodings -- again, sometimes in the absence of printed publications.

Architectural necessity to solve some problem in the encoding is another possible criterion. (In case anybody is interested in a bit of trivia, that is how U+0229 LATIN SMALL LETTER E WITH CEDILLA got in, by the way.) Preexisting encoding in other character sets, even if badly conceived in the first place, is another criterion used for lots of characters.

Popularity, per se, is generally not a good criterion, though it depends in part on the nature of the constituency. Certainly, for example, the Klingon script had a certain passion and popularity among its supporters, but it failed other criteria for suitability for encoding. But "popularity" among the voting members of the UTC and the active voting members of WG2 is absolutely vital.
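For anyone who wants to check the U+0229 trivia against the Unicode Character Database, Python's stdlib unicodedata module reflects it directly:

```python
import unicodedata

ch = "\u0229"
# Character name as recorded in the Unicode Character Database.
print(unicodedata.name(ch))           # LATIN SMALL LETTER E WITH CEDILLA
# Canonical decomposition: e (U+0065) + combining cedilla (U+0327).
print(unicodedata.decomposition(ch))  # 0065 0327
```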
The existence of well-documented proposals, and the presence of advocates -- especially advocates who have clout (earned or unearned) -- in the relevant standards committees, is also a factor. And just plain horsetrading during resolution of national body ballot comments in ISO voting is also a factor. Lots of things that probably shouldn't be characters got encoded as characters that way.

> From one hand, I am hearing that it is THE
> criterion, and from another it is not. I've always felt the majority
> is not always right, nor is history.

Spoken like a true reformer. But let me try once more: the mission of Unicode is not to fix the writing systems of the world, nor to advocate for reforms in symbolic usage. The mission of Unicode is to provide an enumerated set of characters sufficient to represent the vast majority of what is (or was) in regular use in the writing systems and character-like symbol systems of the world, so that IT systems can be used to process, store, and render them.

> I'd even support the inclusion of a copyleft symbol ahead of \newpi!

If and when the copyleft symbol and/or \newpi rise far enough above the noise level to demonstrate validity for encoding on one (or better, many) of the kinds of criteria listed above, then the UTC is likely to encode it/them quickly as characters. But strong advocacy by an individual that a character should be encoded for a symbol which *should* be used because it is logically better than what *is* used just doesn't cut it.

--Ken