Alan Manuel Gloria:
> Well, it's very easy to describe ENLIST informally:
Well, it's *slightly* easier to describe - maybe - but the losses are really substantial. And "easy to describe" is an illusion once you re-add position calculation for arbitrary encodings, coding systems, and typefaces. The "ASCII-only universe" stays simpler, but it just doesn't describe the world as it is or will be.

And "ease of description" is NOT the best measure anyway. We want "ease of reading and use, at scale"... not just simplicity of description. Most programming languages - and math - have shown that humans are willing to learn some syntax, if it's something they use often enough to amortize the learning time. And people read more than they write. A notation that is more pleasant to *read* is, I think, more important than "shortest possible description" or "smallest number of rules". There's a balance, of course, and reasonable people can differ on where it is best placed. But after reading these ideas, I'm convinced this approach would be much *worse* than what we have now.

> We *could* argue that for 90% of the code you'd want to write, the
> ASCII-only restriction is not a big problem, and for 90% of the
> environments you'd want to program in, having a fixed-width font is a
> given. Then we could say that for international text, you can't have
> ":" after any international parts (not portably, anyway). We lose
> some code density (due to loss of SPLIT and SUBLIST) and the ability
> to read code meaningfully when presented in a variable-width font, but
> gain a very simple (informally) semantic, which is (relatively) easy
> for the uninitiated to grasp.

Sure, that can be argued, but it seems a precarious position to me. We know that these assumptions can be (and are) falsified a thousand ways. The semantics don't actually appear all that much simpler to me, and the losses of the other capabilities are substantial. And I still don't see the strong use case.
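To make the position-calculation problem concrete, here is a short Python sketch (my own illustration, not from the thread) showing that a character's "position" depends on whether you count bytes, code points, or normalized characters - and that some code points are invisible altogether:

```python
import unicodedata

# The same visible text, "café", written two ways:
s = "caf\u00e9"     # precomposed: 'é' is one code point (U+00E9)
t = "cafe\u0301"    # decomposed: 'e' plus COMBINING ACUTE ACCENT (U+0301)

print(len(s), len(t))              # 4 vs. 5 code points
print(len(s.encode("utf-8")),
      len(t.encode("utf-8")))      # 5 vs. 6 UTF-8 bytes

# Normalization (NFC) maps both spellings to the same sequence, so any
# "column" computed before normalizing can be off by one per accent:
print(unicodedata.normalize("NFC", t) == s)   # True

# Direction-control code points add to the count but display nothing:
line = "abc\u202Edef"              # U+202E RIGHT-TO-LEFT OVERRIDE
print(len(line))                   # 7 code points, only 6 visible glyphs
print(unicodedata.category("\u202E"))   # 'Cf' -- an invisible format character
```

So even "count code points up to the colon" gives different answers for visually identical lines, which is exactly the kind of magic the existing notation avoids.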
> As an aside, I have no idea how right-to-left text works in Unicode
> (arabic text, I think hebrew text too). I do know there are
> "direction changed" code points in Unicode.

http://xkcd.com/1137/

> So, more complexity in order to keep track of "real characters". And
> then there's text normalization, where multiple code points should
> end up being treated as single characters semantically....

Yes indeed. Yet another problem area. The whole idea of knowing what a "position" is seems pointless when differing sequences of code points can represent the same characters. If there were no other way to handle it, then we'd have to handle it, but we *already* have a notation that does not require this kind of magic.

--- David A. Wheeler

_______________________________________________
Readable-discuss mailing list
Readable-discuss@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/readable-discuss