I have trouble understanding how endianness works for UTF-16.

For example, the UTF-16 code unit for the 'ł' character is 0x0142, but this
program suggests otherwise:

import std.stdio;

public void main () {
    ubyte[] properOrder = [0x01, 0x42];
    ubyte[] reverseOrder = [0x42, 0x01];
    writefln( "proper: %s, reverse: %s",
              cast(wchar[])properOrder,
              cast(wchar[])reverseOrder );
}

output:

proper: 䈁, reverse: ł

Is there anything I should know about UTF endianess?

-- 
Marek Janukowicz
