Karl,

I've published similar "surveys" in the past, where the object was to get feedback on the desirability of further action. I stick by my recommendation in favor of keeping "raw data" out of the document registry and of doing the committee a favor by "adding value" in the form of a sifting or "analysis" of such data.

Previewing the data is not the same as making a character encoding proposal, and there aren't any procedural rules for "non-proposals", so there's nothing that prevents doing that. I have always provided some level of analysis, and I have not always chosen to register all such documents - for the reasons I gave you earlier.

The original rationale for encoding certain symbols had been their widespread use. The word "widespread" is key here. At the time Unicode was first created, symbol sets associated with printers defined widespread use. After those sets were baked into the 2600 and 2700 blocks, the phenomenal rise of Windows made the W/W-Dings sets even more widespread.

As you and WG2 evaluate additional such widely disseminated fonts, you will need to come up with your own criteria for what constitutes "widespread". Those criteria should be applied both to the fonts considered as potential sources of symbols and to each category of symbols within those fonts.

I'll be interested in looking at a list of Apple symbols once it's categorized a bit better by symbol function and/or gives a better idea of which (and how many) symbols extend existing sets (e.g. by adding directional variants) and which (and how many) might only be variants of existing symbols. (Unlike with a full character encoding proposal, I would not expect definitive answers on these points, but some tentative or approximate information would be nice.)

A./
