> Design a better one in 7-bits for the hardware ASCII was designed for.
> Demonstrate that it doesn't slow down the programs using it. (Okay, so
> the first might not be hard, but it wouldn't touch this problem.)
I don’t blame ASCII for failing; it’s just a fact: international usage demands more than 7 bits.

> I take it you've never programmed in C, where one character is a
> primitive data type that can be passed around trivially, but multiple
> characters are strings allocated on the heap whose allocation must be
> tracked manually? If you're working with fixed buffers, the fact that
> you can't predict the length of a string after processing is a big
> cost. It's not a huge problem in the 21st century, but not one you
> could handwave in the 20th.

Actually, I’ve been programming in C professionally for some time, including embedded medical applications. The obvious solution to that problem would have been a two-byte character.

> The English i is not a dotted i. It may or may not have a dot,
> depending on the needs of the typography. Adding an explicit dot on
> top is forcing English (and other Latin-script languages) to conform
> to a model that doesn't really fit it in order to have a model that
> fits Turkish well.

The fi ligature’s i is a dotted i: its dot is ligated with the ascender of the f. Had the i been dotless, there would have been no point in ligating in the first place. Adding an explicit dot on top forces English and other Latin-script languages to conform to a unified model, one that fits the whole set of languages written in the modern Latin script, including Turkish.
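To make the two-byte-character point concrete, here’s a minimal sketch of what I mean, not code from any real project: a plain char is a value type you can pass around freely, a UTF-8 character degenerates into a small buffer, and a fixed 16-bit unit (char16_t here, assuming a C11 compiler; UCS-2 in spirit) gets the pass-by-value property back. The names are made up for illustration.

    /* Sketch only: illustrative names, assumes a C11 compiler (<uchar.h>). */
    #include <stdio.h>
    #include <string.h>
    #include <uchar.h>   /* char16_t */

    /* One ASCII character is a primitive: passed and returned by value. */
    static char to_upper_ascii(char c) {
        return (c >= 'a' && c <= 'z') ? (char)(c - 'a' + 'A') : c;
    }

    /* A fixed two-byte character keeps that property. */
    static char16_t pass_by_value(char16_t c) {
        return c;
    }

    int main(void) {
        /* In UTF-8 the same "character" becomes a buffer whose length you
         * can't predict without looking at its content: e with acute
         * (U+00E9) is two bytes, other characters may be three or four. */
        const char e_acute_utf8[] = "\xC3\xA9";
        printf("'a' -> '%c'\n", to_upper_ascii('a'));
        printf("UTF-8 e-acute takes %zu bytes\n", strlen(e_acute_utf8));

        /* As a single 16-bit code unit it is one value again. */
        printf("16-bit code unit: U+%04X\n", (unsigned)pass_by_value(0x00E9));
        return 0;
    }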

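And for the dotted/dotless discussion, these are the code points in play. A small sketch (mine, assuming a UTF-8 terminal) that just prints them: the four-way Turkish distinction, plus the fi ligature, whose compatibility decomposition is a plain f followed by the dotted i (U+0066 U+0069).

    /* Sketch only: prints raw UTF-8 bytes, so it assumes a UTF-8 terminal. */
    #include <stdio.h>

    int main(void) {
        /* The dotted i shared by English and Turkish, and its Turkish capital. */
        printf("U+0069 dotted i:           %s\n", "i");
        printf("U+0130 capital dotted I:   %s\n", "\xC4\xB0");
        /* The Turkish dotless i, whose capital is the plain ASCII I. */
        printf("U+0131 dotless i:          %s\n", "\xC4\xB1");
        printf("U+0049 capital I:          %s\n", "I");
        /* The fi ligature: one code point whose glyph merges f with a
         * dotted i; compatibility normalization turns it back into "fi". */
        printf("U+FB01 fi ligature:        %s\n", "\xEF\xAC\x81");
        return 0;
    }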
