09-Mar-2014 23:40, Andrei Alexandrescu wrote:
> On 3/9/14, 12:25 PM, Dmitry Olshansky wrote:
>> Okay, putting potential breakage aside.
>> Let me sketch out an additive way of improving the current situation.
> Now you're talking.
>> 1. Say we recognize any indexable entity of char/wchar/dchar that,
>> however, has .front returning a dchar as a "narrow string". Nothing
>> fancy - it's just a generalization of isNarrowString. At least a range
>> over Array!char would then work as a string.
> Wait, why is dchar[] a narrow string?
Indeed `...entity of char/wchar/dchar` --> `...entity of char/wchar`.
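Roughly what I have in mind, as a sketch (isNarrowStringLike is a placeholder name, not a concrete proposal for Phobos):

```d
import std.range : isInputRange, ElementType;
import std.traits : Unqual;

// Sketch only: the name isNarrowStringLike is a placeholder. It accepts
// any indexable entity whose code units are char/wchar but whose .front
// decodes to a dchar - a generalization of isNarrowString.
template isNarrowStringLike(R)
{
    static if (isInputRange!R && is(typeof(R.init[size_t.init]) U))
        enum isNarrowStringLike =
            (is(Unqual!U == char) || is(Unqual!U == wchar))
            && is(Unqual!(ElementType!R) == dchar);
    else
        enum isNarrowStringLike = false;
}

static assert( isNarrowStringLike!string);    // indexes as char, .front is dchar
static assert(!isNarrowStringLike!(dchar[])); // not narrow, per the fix above
```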
>> 3. ElementEncodingType is too verbose and misleading. Something more
>> explicit would be useful. ItemType/UnitType maybe?
> We're stuck with that name.
Too bad, but we have renamed imports... if only they worked correctly.
But let's not derail.
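(For the record, the distinction the name is trying to capture:)

```d
import std.range : ElementType, ElementEncodingType;

// Iteration over a narrow string yields decoded dchars, while the
// underlying code units are immutable chars - two different "elements".
static assert(is(ElementType!string == dchar));
static assert(is(ElementEncodingType!string == immutable(char)));
```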
[snip]
Great, so this may be turned into a smallish DIP or a few Bugzilla enhancements.
>> 6. Take into account ASCII and maybe other alphabets? Should be as
>> trivial as .assumeASCII, and then off you march with all of std.algo/etc.
> Walter is against that. His main argument is that UTF already covers
> ASCII with only a marginal cost
He certainly doesn't have things like case-insensitive matching or
collation on his list. Some cute tables are what "directly to the UTF"
algorithms require for almost anything beyond a simple-minded "find me a
substring". Walter would certainly have a different stance the moment he
observed the extra bulk of object code these require.
> (that can be avoided)
How? I'm not talking about `x < 0x80` branches; those wouldn't cost a dime.
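To be concrete, this is the kind of branch I mean (a sketch of a typical decode-front fast path, not how Phobos spells it):

```d
import std.utf : decode;

// The one-compare ASCII fast path at the top of a decode routine;
// assumes a non-empty slice. This is the whole "marginal cost".
dchar frontDecoded(const(char)[] s)
{
    if (s[0] < 0x80)        // ASCII: the code unit is the code point
        return s[0];
    size_t i = 0;
    return decode(s, i);    // full UTF-8 decoding for multi-byte sequences
}
```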
I don't feel strongly about the 6th point, though. I see it as a good
idea to allow custom alphabets and reap the performance benefits where
that makes sense, but the need for it is less urgent.
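For reference, .assumeASCII need not be more than this (the name and the ubyte representation are just a sketch):

```d
import std.algorithm : all;
import std.string : representation;

// Hypothetical sketch: after an assertion-only check, expose the string
// as raw code units, so one unit == one character and nothing decodes.
auto assumeASCII(const(char)[] s)
{
    assert(s.representation.all!(b => b < 0x80), "non-ASCII input");
    return s.representation; // const(ubyte)[]: random access, no decoding
}
```

After that, std.algorithm marches over it with no decoding in sight.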
> and that we should
> go farther into the future instead of catering to an obsolete
> representation.
That is something I agree with.
--
Dmitry Olshansky