> Also, I would forget about wchar_t. Nobody uses that. 'char*' is
> better than 'wchar_t*': both are locale dependent, but the 'char*'
> strings can be more easily communicated to stdout.

Bruno,

Hold on. Reset please. I'm NOT trying to normalize to a specific
_encoding_. I just want to normalize on one particular string
_representation_ that has all the string manipulation routines to go
with it (e.g. strstr, strlen, printf).  If I serialize a string as
encoding X and send it to a different machine, then as long as I read
it back in as encoding X it doesn't matter how that string is
represented in memory. All I need to know is what that encoding is.
I will let iconv take care of figuring out how to convert whatever the
serialized encoding was into the internal representation.  I am not
going to serialize strings as wchar_t. If wchar_t characters are UCS
with or without the __STDC_ISO_10646__ macro on one machine, and rot13
mixed with locales and OS dependencies on another, I don't see how
that has anything to do with serialization functions.
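
To make that concrete, here is a minimal sketch of the kind of thing I
mean, assuming for the sake of the example that the serialized form is
UTF-8 and that the internal char* representation is the current
locale's codeset (via nl_langinfo). The recode() helper and both
encoding names are just placeholders for illustration, not a proposal:

#include <iconv.h>
#include <langinfo.h>
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Convert a serialized buffer (encoding `from`) into a freshly
 * allocated NUL-terminated char* string in encoding `to`.
 * Returns NULL on failure; the caller frees the result. */
static char *recode(const char *from, const char *to,
                    const char *in, size_t inlen)
{
    iconv_t cd = iconv_open(to, from);   /* iconv_open(tocode, fromcode) */
    if (cd == (iconv_t)-1)
        return NULL;

    size_t outsize = inlen * 4 + 1;      /* rough upper bound on output size */
    char *out = malloc(outsize);
    if (!out) {
        iconv_close(cd);
        return NULL;
    }

    char *inp = (char *)in;
    char *outp = out;
    size_t inleft = inlen, outleft = outsize - 1;

    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
        free(out);
        iconv_close(cd);
        return NULL;
    }
    *outp = '\0';
    iconv_close(cd);
    return out;
}

int main(void)
{
    setlocale(LC_ALL, "");               /* pick up the user's locale */

    /* "naive" with i-diaeresis, serialized as UTF-8 on the wire */
    const char *wire = "na\xc3\xafve";
    char *internal = recode("UTF-8", nl_langinfo(CODESET),
                            wire, strlen(wire));
    if (internal) {
        /* from here on the ordinary char* routines apply */
        printf("%s is %zu bytes in the internal representation\n",
               internal, strlen(internal));
        free(internal);
    }
    return 0;
}

The point being that the serialized encoding only shows up in that one
iconv_open() call; everything after it is plain char* again.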

I don't understand why people are telling me to use one particular
encoding over another, to check for the __STDC_ISO_10646__ macro, etc.
Please help. I'm hopelessly confused to the point of just giving up on
this whole project and finding another one to work on.

Mike

-- 
Wow a memory-mapped fork bomb! Now what on earth did you expect? - lkml
-
Linux-UTF8:   i18n of Linux on all levels
Archive:      http://mail.nl.linux.org/linux-utf8/
