There was a discussion about finding a short correction for the "widespread
belief" that Unicode is a 16-bit character set containing 65536 characters.

Now I have noticed this statement by Roman Czyborra (taken from the last
paragraph of http://czyborra.com/charsets/iso646.html), and I find it one of
the most compact, precise, and understandable explanations I have seen so
far:
 
        "Unicode [...] encodes all the world's characters in a 16bit space
and a 20bit extension zone for everything that did not fit into the 16bit
space."
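For the record, the numbers behind that sentence can be checked with a short
script (my own sketch, not part of Czyborra's page): code points run from
U+0000 to U+10FFFF, so everything above U+FFFF lives in the "extension zone"
and needs a surrogate pair when encoded in UTF-16.

```python
BMP_MAX = 0xFFFF        # top of the "16bit space" (Basic Multilingual Plane)
UNICODE_MAX = 0x10FFFF  # top of the whole Unicode code space

def utf16_surrogates(cp):
    """UTF-16 surrogate pair for a code point beyond the 16-bit space."""
    assert BMP_MAX < cp <= UNICODE_MAX
    v = cp - 0x10000            # a 20-bit value, hence the "20bit extension zone"
    high = 0xD800 + (v >> 10)   # high (lead) surrogate
    low = 0xDC00 + (v & 0x3FF)  # low (trail) surrogate
    return high, low

# Example: U+1F600 sits outside the 16-bit space ...
print(hex(0x1F600), ">", hex(BMP_MAX))
# ... so UTF-16 represents it as a pair of 16-bit units:
print([hex(u) for u in utf16_surrogates(0x1F600)])
```

So "65536 characters" understates the code space by a factor of 17: there are
0x110000 code points in total, 16 supplementary planes beyond the BMP.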

_ Marco
