On Mon, Sep 13, 2010 at 4:50 AM, bearophile <[email protected]> wrote:
> Jonathan M Davis:
>
>> It's not necessarily a bad idea,
>
> I don't know if it's a good idea.
>
>
>> but I'm not sure that we want to encourage code that assumes ASCII.
>> It's far too easy for English-speaking programmers to end up making
>> that assumption in their code and then they run into problems later
>> when they unexpectedly end up with Unicode characters in their input,
>> or they have to change their code to work with Unicode.
>
> On the other hand there are situations when you know you are dealing 
> just with digits, or with a few predetermined symbols like ()+-*/", or 
> when you process very large biological strings composed of a restricted 
> set of distinct ASCII chars.
>
> Bye,
> bearophile
>

Can't you just use byte[] for that? If you're 100% sure your string
only contains ASCII characters, you can just cast it to byte[], feed
that into algorithms and cast it back to char[] afterwards, I guess.
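
For example, here is a minimal sketch of that approach (using ubyte[]
rather than byte[], since char is unsigned in D; the variable names and
the example string are just illustrative):

    import std.stdio;

    void main()
    {
        string s = "ACGTACGT";              // known to be pure ASCII
        // Reinterpret the same memory as raw bytes; no UTF-8 decoding happens.
        auto bytes = cast(immutable(ubyte)[]) s;

        size_t count;
        foreach (b; bytes)
            if (b == 'A')                   // plain byte-level comparison
                ++count;
        writeln(count);                     // prints 2

        // Cast back to a string once the byte-level processing is done.
        string t = cast(string) bytes;
        assert(t == s);
    }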

Cheers,
- Daniel
