Jonathan M Davis:

> It's not necessarily a bad idea,

I don't know if it's a good idea.

> but I'm not sure that we want to encourage code that assumes ASCII. It's
> far too easy for English-speaking programmers to end up making that
> assumption in their code and then they run into problems later when they
> unexpectedly end up with unicode characters in their input, or they have
> to change their code to work with unicode.

On the other hand, there are situations where you know you are dealing only
with digits, or with a few predetermined symbols like ()+-*/", or when you
process very large biological strings that are composed of a restricted set
of ASCII chars.

Bye,
bearophile
