On Wednesday, 7 March 2018 at 13:24:25 UTC, Jonathan M Davis wrote:
I'd actually argue that that's the lesser of the problems with
auto-decoding. The big problem is that it's auto-decoding. Code
points are almost always the wrong level to be operating at.
For me the fundamental problem is having char in the language at
all, i.e. treating char[] as a Unicode string. Arbitrary slicing
and indexing are not Unicode-compatible; if we revisit this, we
need a String type that doesn't support those operations. There
is also the issue of string validation: a Unicode string type
should be assumed to have valid contents, so untrusted data
should be checked only at string construction time, and iterating
should therefore always be nothrow.
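For what it's worth, this is roughly the design Rust's String already has, so it can serve as a sketch of the idea (this is an analogy, not a D proposal): bytes are validated exactly once at the construction boundary, there is no integer indexing into the string, and iterating over code points can never fail.

```rust
fn main() {
    // UTF-8 encoding of '€' (U+20AC).
    let bytes = vec![0xE2, 0x82, 0xAC];

    // Validation happens once, at construction: invalid data is
    // rejected here, not discovered later during iteration.
    let s = String::from_utf8(bytes).expect("invalid UTF-8 rejected up front");

    // Iteration is infallible: the contents are known-valid, so
    // decoding can't throw. There is also no s[0] indexing into
    // code units; you must go through an explicit iterator.
    for c in s.chars() {
        println!("{}", c); // prints '€'
    }

    // A bad byte sequence fails at the boundary instead.
    assert!(String::from_utf8(vec![0xFF]).is_err());
}
```

The key consequence is exactly the one argued above: because the only way to obtain a String is through a validating constructor, every downstream consumer can assume validity and iteration needs no error path.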