On 6/3/16 1:51 PM, Patrick Schluter wrote:
On Friday, 3 June 2016 at 11:24:40 UTC, ag0aep6g wrote:
This is mostly me trying to make sense of the discussion.

So everyone hates autodecoding. But Andrei seems to hate it a good bit
less than everyone else. As far as I could follow, he has one reason
for that, which might not be clear to everyone:

char converts implicitly to dchar, so the compiler lets you search for
a dchar in a range of chars. But that gives nonsensical results. For
example, you won't find 'ö' in "ö".byChar, but you will find '¶' in
there ('¶' is U+00B6, 'ö' is U+00F6, and 'ö' is encoded as 0xC3 0xB6
in UTF-8).
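The byte-level picture behind that example can be sketched outside of D as well. This is an illustrative Python sketch (Python only because the encoding facts are language-agnostic): widening each UTF-8 code unit to a code point, the way D's implicit char-to-dchar conversion does, "finds" '¶' inside "ö".

```python
# "ö" (U+00F6) encodes to two UTF-8 code units: 0xC3 0xB6.
units = "ö".encode("utf-8")
assert list(units) == [0xC3, 0xB6]

# Treating each code unit as if it were a code point (the effect of
# char implicitly converting to dchar) and comparing to a target:
found_pilcrow = any(b == ord('¶') for b in units)   # ord('¶') == 0xB6
found_oumlaut = any(b == ord('ö') for b in units)   # ord('ö') == 0xF6

assert found_pilcrow is True    # nonsense: '¶' is not in "ö"
assert found_oumlaut is False   # and the real 'ö' is never matched
```

The second code unit of "ö" happens to equal the code point of '¶', which is exactly the false positive described above.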

You mean that '¶' is represented internally as the single byte 0xB6 and that it
can be handled as such without error? That would mean char literals
are broken. The only valid way to represent '¶' in UTF-8 memory is 0xC2 0xB6.
Sorry if I misunderstood, I'm only starting to learn D.

Not if '¶' is a dchar.

What is happening in the example is that find looks at the "ö".byChar range and says "hm... can I compare dchar('¶') to char? Well, char implicitly converts to dchar, so I'm good!" But widening a char's bits to a dchar does NOT produce the same character; the code units have to go through a decoding first.

The real problem here is that char implicitly casts to dchar. That should not be allowed.
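For contrast, a minimal sketch (again in Python, since the idea is the same in any language) of what "going through a decoding first" buys: once the code units are decoded back into code points, the comparison gives the expected answers.

```python
# Decode the UTF-8 code units into code points before comparing,
# which is what a correct character-level find has to do:
units = "ö".encode("utf-8")      # b'\xc3\xb6'
decoded = units.decode("utf-8")  # the one-character string "ö"

assert '¶' not in decoded   # no false match anymore
assert 'ö' in decoded       # and the real character is found
```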

-Steve
