Hi Joseph,

On Thu, 12 Jul 2001, Joseph S. Myers wrote:
> On 12 Jul 2001, Christoph Rohland wrote:
> 
>> I think this documentation describes the feature (not the
>> implementation) sufficiently:
>>
>> http://wwwold.dkuug.dk/jtc1/sc22/wg20/docs/n830-utf-16-c.txt
> 
> It doesn't. 

At least it was sufficient for the Unicode Consortium.

> Why is it required to be 16-bit rather than just at least 16-bit?

Because a good compromise is needed (and IMO it is a compromise)
between the memory and CPU requirements of long-lived,
string-intensive applications. This is explained in the document.

> GCC supports c4x with 32-bit chars, and support for pdp10 with 9-bit
> chars is being worked on.  For C++, is utf16_t special like wchar_t,
> or a typedef?  Are the strings NUL-terminated?  In C++, is there a
> deprecated conversion to a pointer to a non-const-qualified type?
> What arrays can be initialised from these strings?  Do they
> concatenate with each other; with narrow strings; with wide strings;
> and what sort of strings result?  Is the quiet change to
> interpretation of programs in which u is a macro and is immediately
> followed by a string literal justified, or should the specification
> use a macro defined in a header to form these string literals?
> (Some proposals of the latter form - a macro defined in a header -
> were being discussed on the WG14 reflector at the point I
> subscribed.)

I see there is a lot of room for discussion (and, as I said, I am not
an expert on language or Unicode internals). But I would like to
invite those experts to discuss these issues without prejudice.
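
To make at least one of these points concrete, here is a minimal,
compilable sketch of the quiet change Joseph mentions; utf16_t is the
type from the proposal, while the U() macro and its header are purely
hypothetical:

    #include <stdio.h>

    #define u "pre"                 /* perfectly legal in current C */

    int main(void)
    {
        /* Today 'u' is an ordinary identifier: the macro expands and
           string concatenation yields "prefix".  If the lexer instead
           recognised u"..." as a single UTF-16 string-literal token,
           the macro would never be expanded and this line would change
           meaning silently.  A macro defined in a header, e.g.
           const utf16_t *s2 = U("fix"); (U and its header being
           hypothetical), would leave the lexer alone and avoid that.  */
        const char *s = u"fix";
        puts(s);                    /* prints: prefix */
        return 0;
    }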

Greetings
                Christoph


-
Linux-UTF8:   i18n of Linux on all levels
Archive:      http://mail.nl.linux.org/linux-utf8/