On Sat, Jul 24, 2010 at 1:00 PM, Michael Everson <[email protected]> wrote:

> Digits can be scattered randomly about the code space and it wouldn't make 
> any difference.

Having written a library for performing conversions between Unicode
strings and numbers, I disagree. It is not all that hard to handle
digits that are scattered about the code space, but when they occupy a
contiguous set of code points in order of their value, as they do in
ASCII, it simplifies both the conversion itself and such tasks as
identifying the numeral system of a numeric string and checking the
validity of a string as a number in a particular numeral system.
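
To illustrate the point (a minimal sketch in Python; the names and the
assumption of a decimal block are illustrative, not taken from my
library): with a contiguous, value-ordered digit block, both digit
validation and conversion reduce to simple arithmetic on code points,
with no per-character lookup table.

    # Assumes a contiguous block of ten digits in order of value,
    # starting at `zero` (e.g. U+0030 for ASCII, U+0966 for Devanagari).
    def digit_value(ch, zero):
        v = ord(ch) - ord(zero)      # position within the block = value
        if 0 <= v <= 9:
            return v
        raise ValueError(f"{ch!r} is not a digit in this block")

    def to_int(s, zero):
        n = 0
        for ch in s:
            n = n * 10 + digit_value(ch, zero)
        return n

    print(to_int("42", "0"))                  # ASCII digits      -> 42
    print(to_int("\u0967\u0968", "\u0966"))   # Devanagari digits -> 12

If the digits were scattered, digit_value would instead need a table
mapping each code point to its value, and identifying which numeral
system a string uses would no longer be a simple range check.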

It may well be that adopting such a policy is not realistic, but there
would be advantages to it if it were.
