On Thu, Sep 5, 2013 at 10:08 PM, Brendan Eich <[email protected]> wrote:
> Thanks for the reminders -- we've been over this.
It's not clear the arguments were carefully considered, though. Shawn Steele raised the same concerns I did. The unicode.org thread also suggests that the ideal value space for a string is Unicode scalar values (i.e. what UTF-8 can encode), not code points. It did indicate that they have code points because of legacy, but JavaScript has 16-bit code units due to legacy. If we are going to offer a higher level of abstraction over the basic string type, we can very well make it a UTF-8-safe layer. If you need anything for tests, you can just ignore the higher level of abstraction and operate on 16-bit code units instead.

--
http://annevankesteren.nl/

_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss

