On Thu, Jan 31, 2013 at 3:17 PM, Joó Ádám <[email protected]> wrote:
>> What do you mean "no problem would arise"? ASCII would have been
>> unimplementable if they had tried to insist that the dot be explicitly
>> encoded.
>
> Yes, this is why ASCII was a failure as an international character
> encoding format.
Design a better one in 7 bits for the hardware ASCII was designed for.
Demonstrate that it doesn't slow down the programs using it. (Okay, so
the first might not be hard, but it wouldn't touch this problem.)

>> That whole view is putting Turkish and a couple minor cases
>> over the rest of the users of the Latin alphabet, where i naturally
>> uppercases to I, whether i be dotted or undotted.
>
> How is it a challenge to replace two characters with one?

I take it you've never programmed in C, where one character is a
primitive data type that can be passed around trivially, but multiple
characters are strings allocated on the heap whose allocation must be
tracked manually? If you're working with fixed buffers, the fact that
you can't predict the length of a string after processing is a big
cost. It's not a huge problem in the 21st century, but it wasn't one
you could handwave away in the 20th.

>> (Really; it's not
>> common typography except in fi ligatures, but fancy fonts wouldn't
>> hesitate to leave it out, and English speakers wouldn't miss a beat
>> reading a text without dots over the eyes.)
>
> Sorry, I lost context on this sentence.

The English i is not a dotted i. It may or may not have a dot,
depending on the needs of the typography. Adding an explicit dot on
top forces English (and other Latin-script languages) to conform to a
model that doesn't really fit it, in order to have a model that fits
Turkish well.

--
Kie ekzistas vivo, ekzistas espero.
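
To make the fixed-buffer point above concrete, here is a minimal C
sketch. It assumes UTF-8 output, and turkish_upper_len() is a
hypothetical, deliberately simplified helper that only handles the
Turkish i expansion; it is not a real Unicode case-mapping API.

#include <ctype.h>
#include <stdio.h>

/* ASCII uppercasing: every mapping is one byte to one byte, so the
 * result always fits in the buffer that held the input. */
static void ascii_upper(char *buf)
{
    for (char *p = buf; *p != '\0'; p++)
        *p = (char)toupper((unsigned char)*p);
}

/* Turkish-style uppercasing changes lengths: 'i' (U+0069) maps to
 * 'İ' (U+0130), which takes two bytes in UTF-8. This is a
 * hypothetical, simplified length pass; real case mapping has more
 * such expansions (e.g. 'ß' -> "SS"). */
static size_t turkish_upper_len(const char *s)
{
    size_t n = 0;
    for (; *s != '\0'; s++)
        n += (*s == 'i') ? 2 : 1;
    return n;
}

int main(void)
{
    char word[9] = "istanbul";     /* 8 chars + NUL, exactly full */

    ascii_upper(word);             /* in place; length is unchanged */
    printf("%s\n", word);          /* ISTANBUL */

    /* Under Turkish rules "istanbul" needs 9 bytes plus NUL, so the
     * same in-place rewrite would overflow: the caller has to
     * measure first and allocate, which is the cost described
     * above. */
    printf("%zu bytes needed\n", turkish_upper_len("istanbul"));
    return 0;
}

With a one-byte-per-character encoding and 1:1 case mappings, the
first function is all you ever need; the moment a mapping can expand,
every fixed-size buffer in the call chain becomes a potential
overflow.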

