On Fri, 13 Jul 2001, Eble, Markus wrote:
> > Why is it required to be 16-bit rather than just at least 16-bit?
>
> This is required to save memory space and for efficient communication
> with Java programs. If we said "at least 16-bit", we could stay with
> wchar_t.
Think in the C standard context, not the Unicode or Java context. Why is
the specification of uint_least16_t inappropriate here? Why should your
specification be impossible to support on systems with 9-bit or 32-bit
char? The C standard does not address communication with Java. As a
quality-of-implementation issue, an implementation might well find it
desirable to make the type exactly 16 bits, but why should you prohibit
an implementation on a Cray, say, from choosing a wider type if 16-bit
accesses are unavailable or inefficient and memory is not so critical?
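For concreteness, here is a minimal sketch of what the "at least 16-bit"
alternative could look like, using only what C99 already provides; the
names utf16_char and utf16_len are hypothetical, not taken from your
proposal or from the standard:

  #include <stdint.h>
  #include <stddef.h>

  /* Hypothetical character type with at least 16 value bits.
   * uint_least16_t is required to exist by C99 <stdint.h>, even on
   * hosts with 9-bit or 32-bit char, and an implementation (a Cray,
   * say) is free to make it wider than 16 bits if that is faster. */
  typedef uint_least16_t utf16_char;

  /* Length in code units of a NUL-terminated string of such units. */
  static size_t utf16_len(const utf16_char *s)
  {
      size_t n = 0;
      while (s[n] != 0)
          n++;
      return n;
  }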
Do you mean 16 bits total, or 16 value bits (with possibly some padding
bits)?
You could not stay with wchar_t if you want UTF-16 (rather than UCS-2 or
UCS-4), since UTF-16 is not a valid encoding for wchar_t, although it
would be usable as a multibyte encoding with 16-bit char.
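To make it concrete why UTF-16 cannot be a wchar_t encoding: each wchar_t
value must represent a complete character on its own, whereas UTF-16 needs
two code units (a surrogate pair) for any character above U+FFFF. A
minimal sketch, reusing the hypothetical utf16_char typedef above (the
function name is mine, not from any proposal):

  /* Encode one Unicode scalar value (assumed valid: not a surrogate,
   * not above 0x10FFFF) as UTF-16.  Returns the number of code units
   * written: 1 for the BMP, 2 (a surrogate pair) otherwise. */
  static int utf16_encode(unsigned long c, utf16_char out[2])
  {
      if (c < 0x10000) {
          out[0] = (utf16_char) c;
          return 1;
      }
      c -= 0x10000;                                  /* 20-bit offset   */
      out[0] = (utf16_char) (0xD800 + (c >> 10));    /* high surrogate  */
      out[1] = (utf16_char) (0xDC00 + (c & 0x3FF));  /* low surrogate   */
      return 2;
  }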
> - NUL-terminated strings (2-byte NUL)
Do you really mean 2-byte? Remember that a byte might be 16 bits.
> - if concatenated with narrow or wide strings, the result should be
>   the largest occurring string type
What exactly do you mean by "largest"?
--
Joseph S. Myers
[EMAIL PROTECTED]
-
Linux-UTF8: i18n of Linux on all levels
Archive: http://mail.nl.linux.org/linux-utf8/