Ted Hopp wrote:

Let me rephrase the point as a question:

 What in the encoding of 'Phoenician' characters in Unicode
 obliges anyone to use those characters for ancient Canaanite
 texts?

An analogous statement can be made of any script in Unicode. We can all
continue to use code pages or the myriad Hebrew fonts that put the glyphs at
Latin-1 code points. If the proposed Phoenician block can be so easily
ignored in encoding ancient Canaanite texts, then is the block really
needed?

It is ironic to find myself arguing the other side of this debate, having been broadly sympathetic to the semiticist objections to the proposal, but here goes...


Note that I was never suggesting the use of myriad code pages, font hacks or other methods to encode ancient Canaanite texts. My point was that *within Unicode* one would have the option of encoding these texts using either the Hebrew characters or the 'Phoenician' characters. The option, of course, may be a source of confusion, as choices often are. But my point is that no one is forced to choose one or the other.

There are people who do not want to distinguish the encoding of ancient Canaanite from square Aramaic. But there are also people who do want to distinguish them. Both groups of people include respected scholars and experts in their fields.

Somehow (how?) forcing the former group of people to use Phoenician characters for their texts would make them unhappy.

Not separately encoding 'Phoenician' characters, so that there was no way to distinguish in plain text, would make the latter group of people unhappy.



What was insincere about my posting? Forgive me, but it seemed to me that
when you claim that Semiticists will be able to ignore the Phoenician block,
there is an implication that they will use something else. I never said that
they would have to ignore Unicode altogether, but they will have to develop
their own standards (agreements, if you prefer) for what that "something
else" will be.

But the whole basis of the discussion to that point had been that some semiticists wanted to use the existing Hebrew block. The 'something else' is Hebrew, already encoded in Unicode and supported by much existing software. As far as I could tell, no one was suggesting developing some 'new standard'.


This frames the discussion in a way that ignores the coercive power of
Unicode in the marketplace.

One could, with only a little imagination, foresee that there will be
software packages that will only display Palaeo-Hebrew fonts for text
encoded in the 'Phoenician' block...

This frames the discussion in a way that ignores basic concepts of font and software interaction. A software package has no way of knowing whether the glyph a font maps to U+05D4 is Aramaic square script, Stam, Rashi, modern cursive or palaeo-Hebrew. If your *text* is encoded using Hebrew characters, you can display it in any font that supports those characters, regardless of the glyph shapes mapped to those characters in the font. If your text is encoded using Phoenician characters, the same applies: any font that supports those characters can be used.
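To illustrate the point (a sketch of my own, not anything from the proposal): plain text stores code points, not glyph shapes, so the Hebrew/Phoenician distinction exists at the text level only if the characters are separately encoded. The Phoenician code point below assumes the block is allocated in the supplementary plane at U+10900 and following, per the proposal.

```python
import unicodedata

hebrew_he = "\u05D4"          # HEBREW LETTER HE (existing Hebrew block)
phoenician_he = "\U00010904"  # PHOENICIAN LETTER HE (assumed allocation)

# The same Hebrew code point can be rendered with a square, Rashi,
# cursive or palaeo-Hebrew glyph; the stored text is identical either
# way, and no software can recover the glyph style from the text.
print(unicodedata.name(hebrew_he))

# Only separate encoding makes the two distinguishable in plain text:
print(hebrew_he == phoenician_he)
```

The choice of font changes only the rendering; the equality test above is the whole of what a plain-text process can see.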


Moreover, if anyone wanted to use Phoenician in some future http protocol,
Unicode conformance is required (at least so says the standard).

What does that have to do with how semiticists decide to encode *texts*? If you want to encode Palaeo-Hebrew texts using Hebrew characters, you are going to have a Hebrew document. Phoenician is only relevant at all if you decide to use Phoenician characters and produce a Phoenician document. This is what I mean when I say there is no reason not to ignore the Phoenician characters if they do not suit your purpose.



Now, all that said, I still remain concerned that the people who want to distinguish 'Phoenician' from Aramaic square script and other Hebrew script styles in plain text have not thought through the larger implications of encoding 'significant' nodes from a script continuum. Encoding a single 'Ancient Near-Eastern 22-letter Alphabet', whether you're one of the people who wants to use it or not, doesn't strike me as a significant problem. Encoding half a dozen of these 'nodes' might be, because with each additional structurally identical script the number of choices and the likely confusion increase.


John Hudson

--

Tiro Typeworks        www.tiro.com
Vancouver, BC        [EMAIL PROTECTED]

Currently reading:
Typespaces, by Peter Burnhill
White Mughals, by William Dalrymple
Hebrew manuscripts of the Middle Ages, by Colette Sirat
