> NC> Neko uses plain old ANSI C file IO routines. Using UTF8 encoding methods
> NC> would require a lot of OS-specific code. Might be more appropriate in a
> NC> separate library (?)
> 
> I see... And when I use neko.Utf8.encode, it makes UTF8 assuming the
> source was in UTF16 (binary Unicode), right? 

No, it encodes ISO (Latin-1) bytes into UTF8 characters.

So, for example, the French ISO character é (byte 0xE9) is encoded as
the UTF8 representation of code point U+00E9.
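To make the byte-level behavior concrete, here is a small sketch in Python (not Neko; it only illustrates the Latin-1-to-UTF8 mapping described above, not Neko's API):

```python
# ISO-8859-1 byte 0xE9 ("é") corresponds to Unicode code point U+00E9;
# its UTF8 encoding is the two-byte sequence 0xC3 0xA9.
iso = bytes([0xE9])                  # the single ISO byte
ch = iso.decode("latin-1")           # decode to a Unicode character
print(hex(ord(ch)))                  # -> 0xe9 (code point U+00E9)
print(ch.encode("utf-8").hex())      # -> c3a9 (UTF8 byte sequence)
```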

> I.e. there is no easy way
> to convert from a native codepage to UTF8?

By chance, French (ISO-8859-1) byte values map directly onto Unicode
code points, so they translate naturally to UTF8. I'm not sure about
other languages. In particular, if you want to create UTF8 characters
with code points above 0xFF, you can't use UTF8.encode; use a
UTF8Buffer instead (the primitives exist in Neko but are not wrapped
in haXe yet).
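The limitation above follows from the mapping itself: each ISO-8859-1 byte equals its Unicode code point, so a byte-wise encode can only ever reach U+00FF. A quick Python illustration (again a sketch of the principle, not Neko code):

```python
# Every ISO-8859-1 byte 0x00..0xFF decodes to the code point of the
# same numeric value, so byte-wise encoding tops out at U+00FF.
assert all(bytes([b]).decode("latin-1") == chr(b) for b in range(256))

# A character beyond that range, e.g. the euro sign (U+20AC), must be
# built from its code point directly rather than from a Latin-1 byte.
euro = chr(0x20AC)
print(euro.encode("utf-8").hex())  # -> e282ac (three UTF8 bytes)
```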

Nicolas

-- 
Neko : One VM to run them all
(http://nekovm.org)
