On Mon, Feb 25, 2002 at 02:56:09PM -0500, Jimmy Kaplowitz wrote:
> I haven't tested this, nor really done anything relating to programming
> with i18n, but based on looking at man pages, you can use one of three
> functions (mbstowcs, mbsrtowcs, or mbsnrtowcs) to convert your multibyte
> string to a wide character string (an array of type wchar_t, one wchar_t
> per *character*), and then use the many wcs* functions to do various
> tests. My recollection of the consensus on this list is that for

That's extremely cumbersome for everyday ops.

Doing conversions at every turn is expensive, too.

> internal purposes, wchar_t is the way to go, and conversion to multibyte
> strings of char is necessary only for I/O, and there only when you can't
> use functions like fwprintf. However, wchar_t is only guaranteed to be

Not always.  Some people use the locale encoding internally; some use
UTF-8 internally.  Each approach has its advantages.
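One reason UTF-8 internally is attractive: the encoding is self-describing enough that simple character-level operations need no conversion at all, since continuation bytes always match the bit pattern 10xxxxxx.  A minimal sketch (assumes the input is already valid UTF-8):

```c
#include <stddef.h>

/* Count UTF-8 characters by skipping continuation bytes (0x80-0xBF).
 * Every character starts with exactly one non-continuation byte, so
 * counting those gives the character count directly. */
static size_t utf8_strlen(const char *s)
{
    size_t count = 0;
    for (; *s; s++)
        if (((unsigned char)*s & 0xC0) != 0x80)
            count++;
    return count;
}
```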

wchar-based programs are still harder to debug; gdb doesn't handle
wide strings properly yet.

I expect there'll be a lot more libraries that expect locale-encoded
char * strings in their APIs than ones providing an alternate
wide-character interface.

Using locale encodings internally is the quickest way to start, but then
you know nothing about your strings and need to convert for most
character-level operations (if you really want them to work).
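Even truncating a string to its first N characters can't be done by byte index when the internal encoding is an arbitrary locale encoding; you have to step through it one multibyte character at a time.  A sketch with mbrtowc (the helper name bytes_for_chars is mine):

```c
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

/* Hypothetical helper: byte length of the first `nchars` characters of
 * a locale-encoded string.  mbrtowc with a NULL destination just
 * measures each character; -1/-2 signal an invalid or incomplete
 * sequence, at which point we stop. */
static size_t bytes_for_chars(const char *s, size_t nchars)
{
    mbstate_t st;
    memset(&st, 0, sizeof st);

    size_t off = 0, len = strlen(s);
    while (nchars-- > 0 && off < len) {
        size_t n = mbrtowc(NULL, s + off, len - off, &st);
        if (n == (size_t)-1 || n == (size_t)-2 || n == 0)
            break;
        off += n;
    }
    return off;
}
```

Truncation is then memcpy of that many bytes plus a NUL; nearly every other character-level operation needs the same kind of stepping loop.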

Converting existing programs is a case where wchar is particularly
difficult.

-- 
Glenn Maynard
--
Linux-UTF8:   i18n of Linux on all levels
Archive:      http://mail.nl.linux.org/linux-utf8/