On 11/27/2013 08:53 AM, Jakob Ovrum wrote:
On Wednesday, 27 November 2013 at 16:18:34 UTC, Wyatt wrote:
I agree with the assertion that people SHOULD know how Unicode works if they want to work with it, but the way our docs are now is off-putting enough that most probably won't learn anything. If they know, they know; if they don't, the wall of jargon is intimidating and hard to grasp (it would help to have more examples up front of things you'd actually use std.uni for). Even though I'm decently familiar with Unicode, I was having trouble following all that (e.g. isn't "noe\u0308l" a grapheme cluster according to std.uni?). On the flip side, std.utf has a serious dearth of examples, and the relationship between the two isn't clear.

I thought it was nice that std.uni had a proper terminology section, complete with links to Unicode documents to kick-start beginners to Unicode. It mentions its relationship with std.utf right at the top.

Maybe the first paragraph is just too thin, and it's hard to see the big picture. Maybe it should include a small leading paragraph detailing the three levels of Unicode granularity that D/Phobos chooses: arrays of code units -> ranges of code points -> std.uni for graphemes and algorithms.
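
To make that concrete, here is a small sketch of those three levels on one string (counts assume UTF-8 source; byGrapheme and walkLength are existing Phobos functions):

import std.range : walkLength;
import std.stdio : writeln;
import std.uni : byGrapheme;

void main()
{
    string s = "noe\u0308l"; // 'e' followed by a combining diaeresis (U+0308)

    // Level 1: code units -- .length on a string counts UTF-8 bytes.
    writeln(s.length);                 // 6

    // Level 2: code points -- foreach over dchar decodes the UTF-8.
    size_t codePoints;
    foreach (dchar c; s) ++codePoints;
    writeln(codePoints);               // 5 ('e' and U+0308 are separate)

    // Level 3: graphemes -- std.uni.byGrapheme clusters them.
    writeln(s.byGrapheme.walkLength);  // 4 ('e' plus its mark is one grapheme)
}

Incidentally, that also answers the "noe\u0308l" question above: the 'e' plus U+0308 pair does form one grapheme cluster.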

Yes, please. While operations on single code points and characters seem pretty robust (i.e. you can do lots of things with and to them), it feels like it just falls apart when you try to work with strings. It honestly surprised me how many things in std.uni don't seem to work on ranges.

-Wyatt

Most string code is Unicode-correct as long as it works on code points and all inputs are in the same normalization form; explicit grapheme awareness is rarely a necessity. By that I mean that the most common string operations, such as searching, getting a substring, etc., will work without any special grapheme decoding (beyond normalization).
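
A minimal sketch of what that looks like in practice, using normalize from std.uni and canFind from std.algorithm (the escapes just make the two encodings of 'ë' explicit):

import std.algorithm.searching : canFind;
import std.uni : NFC, normalize;

void main()
{
    string decomposed = "noe\u0308l";   // 'e' + combining diaeresis
    string needle     = "\u00EBl";      // precomposed 'ë' + 'l'

    // A raw code point search misses the match because the sequences differ...
    assert(!decomposed.canFind(needle));

    // ...but succeeds once both sides are in the same normalization form.
    assert(normalize!NFC(decomposed).canFind(normalize!NFC(needle)));
}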

The hiccups appear when code points are shuffled around or their order is changed. Apart from these rare string-manipulation cases, grapheme awareness is mostly a concern for rendering code.
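
Reversal is the classic example of that kind of shuffling. A sketch using retro, byGrapheme and byCodePoint (all existing Phobos functions):

import std.array : array;
import std.conv : to;
import std.range : retro;
import std.uni : byCodePoint, byGrapheme;

void main()
{
    string s = "noe\u0308l"; // 'e' followed by a combining diaeresis

    // Reversing by code point detaches the mark from its base:
    assert(s.retro.to!string == "l\u0308eon");  // the diaeresis now sits on 'l'

    // Reversing by grapheme keeps "e + mark" together:
    assert(s.byGrapheme.array.retro.byCodePoint.to!string == "le\u0308on");
}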

I would put things a bit more emphatically. The code point is analogous to assembler, where the character is analogous to a high-level language (and the encoded code units are analogous to the binary machine code). The desire is to make characters easy to use in a way that is cheap to do. To me this means that the high-level language (i.e., D) should make it easy to deal with characters, possible to deal with code points, and let you deal with the binary representation if you really want to. (Also note that the isomorphism between assembler code and binary is matched by an isomorphism between code points and their binary representation.)

To do this cheaply, D needs to know what normalization form each string is in. This is likely to cost one byte per string, unless there's some slack in the current representation.
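
Purely as a hypothetical sketch (nothing like this exists in Phobos; the type, its name, and the Norm tag are invented here), carrying the form alongside the data could look something like this, built on std.uni.normalize:

// Hypothetical: a string that remembers which normalization form it is in,
// at the cost of the one extra byte mentioned above.
enum Norm : ubyte { unknown, nfc, nfd, nfkc, nfkd }

struct NormString
{
    string data;
    Norm form = Norm.unknown;

    // Normalize only when the form isn't already known to be NFC.
    string asNFC()
    {
        import std.uni : NFC, normalize;
        if (form != Norm.nfc)
        {
            data = normalize!NFC(data);
            form = Norm.nfc;
        }
        return data;
    }
}

void main()
{
    auto s = NormString("noe\u0308l");
    assert(s.asNFC() == "no\u00EBl"); // decomposed input comes out precomposed
    assert(s.form == Norm.nfc);       // and the form is recorded for next time
}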

But is this worthwhile? This is the direction that things will eventually go, but that doesn't really mean we need to push them in that direction today. Still, if D had a default normalization that occurred during I/O operations, the cost of the normalization would probably be lost in the impedance matching between RAM and storage. (Again, however, any default requires the ability to be overridden.)
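
For illustration, normalizing at the I/O boundary is already a one-liner today (readText and normalize are existing Phobos functions; making it the default is only the proposal here):

import std.file : readText;
import std.uni : NFC, normalize;

// Read a text file and hand the rest of the program a known normalization form.
string readNormalized(string path)
{
    return normalize!NFC(readText(path));
}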

Also, of course, none of this is of any significance for plain ASCII text.

--
Charles Hixson
