2010/3/10 Jonathan S. Shapiro <[email protected]>:
> Do people think that is a sensible position?

Honestly, I don't see a lot of arguments in favor of the 16-bit char
there. :-) There's the interop thing, and well... a 16-bit char has no
other use: it doesn't represent anything meaningful, it's just a
uint16. To satisfy interop requirements, adding a separate type for
16-bit code units seems by far the most sensible thing to do, and I
don't see any real downsides. Interoperation between BitC and CTS
isn't going to be straightforward in any case.

Additionally, IMO Kevin is right that the main string type shouldn't
prefer any particular way of indexing or iterating: graphemes, code
points and UTF-8 bytes are all equally important views. Applications
in particular typically have no reason to use code points (nor code
units!) at all; even input is better handled as strings. In fact,
other than for implementing unicode-aware lexers, I don't know where
code points are useful.
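
To make the "three equally important views" point concrete, here is a
minimal sketch in Python (not BitC, which I can't show here): the same
string has a different "length" depending on whether you count code
points, UTF-16 code units, or UTF-8 bytes, which is exactly why a bare
16-bit char doesn't represent anything meaningful on its own.

```python
# One ASCII letter plus an emoji outside the Basic Multilingual Plane.
s = "a\U0001F642"

code_points = len(s)                           # 2 code points
utf16_units = len(s.encode("utf-16-le")) // 2  # 3 units: the emoji needs a surrogate pair
utf8_bytes = len(s.encode("utf-8"))            # 5 bytes: 1 + 4

print(code_points, utf16_units, utf8_bytes)    # prints "2 3 5"
```

(Graphemes are yet another count: "e" + U+0301 COMBINING ACUTE ACCENT is
two code points but one grapheme; Python's stdlib can't count graphemes
directly, which rather proves the point about languages privileging one
view.)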
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev