On 5/13/16 5:25 PM, Alex Parrill wrote:
> On Friday, 13 May 2016 at 16:05:21 UTC, Steven Schveighoffer wrote:
>> On 5/12/16 4:15 PM, Walter Bright wrote:
>>> 10. Autodecoded arrays cannot be RandomAccessRanges, losing a key
>>> benefit of being arrays in the first place.
>>
>> I'll repeat what I said in the other thread.
>>
>> The problem isn't auto-decoding. The problem is hijacking the char[]
>> and wchar[] (and variants) array type to mean autodecoding non-arrays.
>>
>> If you think this code makes sense, then my definition of sane varies
>> slightly from yours:
>>
>> static assert(!hasLength!R && is(typeof(R.init.length)));
>> static assert(!is(ElementType!R == typeof(R.init[0])));
>> static assert(!isRandomAccessRange!R && is(typeof(R.init[0])) &&
>>               is(typeof(R.init[0 .. $])));
>>
>> I think D would be fine if string meant some auto-decoding struct with
>> an immutable(char)[] array backing. I can accept and work with that. I
>> can transform that into a char[] that makes sense if I have no use for
>> auto-decoding. As of today, I have to use byCodeUnit, or
>> .representation, etc. and it's very unwieldy.
>>
>> If I ran D, that's what I would do.

> Well, the "auto" part of autodecoding means "automatically doing it for
> plain strings", right? If you explicitly do decoding, I think it would
> just be "decoding"; there's no "auto" part.

No, the problem isn't the auto-decoding. The problem is having *arrays*
do that. Sometimes.
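
To spell out the "sometimes": a minimal sketch of the split personality
(plain std.range.primitives, the literal is just an example):

import std.range.primitives : front, walkLength;

void main()
{
    immutable(char)[] s = "héllo"; // 6 code units, 5 code points

    // The built-in array view works in code units.
    assert(s.length == 6);
    static assert(is(typeof(s[0]) == immutable(char)));

    // The range view Phobos imposes decodes to code points.
    static assert(is(typeof(s.front) == dchar));
    assert(s.walkLength == 5);

    // foreach gives you whichever element type you ask for.
    foreach (c; s) { static assert(is(typeof(c) == immutable(char))); }
    foreach (dchar c; s) {} // decoded on the fly by the compiler
}
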
I would be perfectly fine with a custom string type that all string
literals were typed as, as long as I can get a sanely behaving array out
of it.
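
Roughly this kind of thing (DString and its representation method are
made up purely for illustration here, not a concrete proposal):

import std.utf : decode;

// Hypothetical wrapper: the decoding lives in the type, not in the array.
struct DString
{
    immutable(char)[] data;

    // Input-range interface that yields decoded code points.
    bool empty() { return data.length == 0; }
    dchar front() { size_t i = 0; return decode(data, i); }
    void popFront() { size_t i = 0; decode(data, i); data = data[i .. $]; }

    // Escape hatch: the sanely behaving array stays reachable.
    immutable(char)[] representation() { return data; }
}

unittest
{
    import std.range.primitives : walkLength;
    auto s = DString("héllo");
    assert(s.walkLength == 5);            // iteration decodes
    assert(s.representation.length == 6); // the backing array is an array
}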

> I doubt anyone is going to complain if you add in a struct wrapper
> around a string that iterates over code units or graphemes. The issue
> most people have, as you say, is the fact that the default for strings
> is to decode.

I want to clarify that I don't really care if strings by default
auto-decode. I think that's fine. What I dislike is that
immutable(char)[] auto-decodes.
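
(For reference, the escape hatches mentioned above look like this today;
byCodeUnit and representation are existing Phobos functions, the literal
is just an example:)

import std.range.primitives : walkLength;
import std.string : representation;
import std.utf : byCodeUnit;

void main()
{
    string s = "héllo";

    assert(s.walkLength == 5);            // plain array: auto-decoded dchars
    assert(s.byCodeUnit.walkLength == 6); // opt out: raw char range
    assert(s.representation.length == 6); // opt out: immutable(ubyte)[] view
}
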
-Steve