On 06/01/2016 06:25 AM, Marc Schütz wrote:
On Tuesday, 31 May 2016 at 21:01:17 UTC, Andrei Alexandrescu wrote:
On 05/31/2016 04:01 PM, Jonathan M Davis via Digitalmars-d wrote:
Wasn't the whole point of operating at the code point level by default to make it so that code would be operating on full characters by default instead of chopping them up, as is so easy to do when operating at the code unit level?
The point is to operate on representation-independent entities (Unicode code points) instead of low-level, representation-specific artifacts (code units).
_Both_ are low-level representation-specific artifacts.
Maybe this is a misunderstanding. Representation = how things are laid out in memory. What does associating numbers with various Unicode symbols have to do with representation? -- Andrei
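Marc's objection -- that code points, like code units, fall short of "full characters" -- can be illustrated with a short sketch (in Python for brevity, though the thread concerns D's string ranges): the same user-perceived character can occupy several code units *and* several code points, so iterating by code point can still split what a user sees as one character.

```python
# "é" written in decomposed form: 'e' followed by U+0301 COMBINING ACUTE ACCENT.
s = "e\u0301"

print(len(s.encode("utf-8")))  # 3 -- three UTF-8 code units (bytes)
print(len(s))                  # 2 -- two Unicode code points

# Iterating by code point detaches the accent from its base letter,
# even though the user perceives a single character (one grapheme cluster):
for cp in s:
    print(hex(ord(cp)))        # 0x65, then 0x301
```

So a loop over code points, while independent of the UTF-8/UTF-16 encoding, still chops up user-perceived characters; only grapheme-cluster segmentation (UAX #29) operates on "full characters".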