On 13.10.2011 23:55, Marco Leise wrote:
> I recently wrote JavaScript code that generates a Greek letter starting
> with α (alpha) from an integer. It's just the shortest way to do it that
> works in most languages based on C syntax. I assume I could do it
> another way. Like initializing with a character and then incrementing by
> the index of the letter I want.
That's the kind of hack that works nicely on certain ranges of Unicode but makes no sense in a language-independent program. Having it in a quick-and-dirty piece of code may be acceptable, but experience shows that such quick-and-dirty solutions tend to survive much longer than intended and bite you years later. Modern languages with full Unicode support have to prevent quick-and-dirty conversions wherever possible. This means that string handling gets more difficult for everyone: you always have to do proper conversions, even if you don't intend to handle international text. However, it also means that your code will still work if it turns out later on that non-English speakers want to use it.

IMO, D should make a strict separation between numbers and Unicode characters.
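To make the point concrete, here is roughly what the hack looks like in D (a minimal sketch of my own; the explicit cast is how I would spell the conversion, not something D currently forces on you):

import std.stdio : writeln;

void main()
{
    // The hack from the quote, transcribed to D: derive the i-th
    // lowercase Greek letter by integer arithmetic on a code point.
    // It works only because 'α' (U+03B1) through 'ω' (U+03C9) happen
    // to be contiguous in Unicode; most scripts offer no such range.
    foreach (i; 0 .. 5)
    {
        // The cast spells out the int -> dchar conversion; a strict
        // separation of numbers and characters would always demand it.
        dchar letter = cast(dchar)('α' + i);
        writeln(letter); // prints α, β, γ, δ, ε
    }
}

With a strict separation, the arithmetic expression alone would not compile, and the programmer would have to state the number-to-character conversion explicitly, as above.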
