When a UTF-16 string is treated as an array of bytes, it's supremely important to know the byte order.
Well, but normally you only look at it as an array of bytes when you are trying to byte-serialize or -deserialize it, which gets into the encoding _schemes_.
Otherwise, looking at in-process UTF-16 as bytes is no different from looking at any other 16-bit data as bytes: of course you will see the bytes in the CPU's byte order. Unlike the encoding form vs. scheme issue, though, this is not Unicode-specific.
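To illustrate the distinction, here is a small Python sketch (the string "hi" is just an arbitrary example): the "utf-16-le" and "utf-16-be" codecs are encoding schemes with a fixed byte order, while reinterpreting the in-memory 16-bit code units as bytes simply exposes the CPU's native order, as it would for any 16-bit integers.

```python
import array
import sys

s = "hi"

# Encoding *schemes* pin down the byte order explicitly:
print(s.encode("utf-16-le").hex())  # 68006900
print(s.encode("utf-16-be").hex())  # 00680069

# The plain "utf-16" codec emits a BOM and uses the platform's order:
print(s.encode("utf-16").hex())

# In process, the code units are just 16-bit integers. Serializing them
# naively shows the CPU's byte order, like any other 16-bit data:
units = array.array("H", (ord(c) for c in s))
print(sys.byteorder, units.tobytes().hex())
```

On a little-endian machine the last line matches the UTF-16LE bytes; on a big-endian machine it matches UTF-16BE.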
Regards, markus

