Because all well-formed sequences (and subsequences) are interpreted according to the corresponding UTF. That is quite different from a random byte stream with no declared semantics, or from a byte stream with different declared semantics.
Thus if you are given a Unicode 8-bit string <61, 62, 80, 63>, you know that the interpretation is <a, b, <fragment>, c>, and *not* the EBCDIC US interpretation </, Â, Ø, Ä> = U+002F U+00C2 U+00D8 U+00C4.

Mark <https://plus.google.com/114199149796022210033>

"The best is the enemy of the good"

On Mon, Jan 7, 2013 at 12:44 PM, Philippe Verdy <[email protected]> wrote:

> Well then I don't know why you need a definition of a "Unicode 16-bit
> string". For me it just means exactly the same as "16-bit string", and
>
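A minimal sketch (not part of the original message) of the two readings of <61, 62, 80, 63> described above, assuming Python and its "cp037" codec as the EBCDIC US table:

    # Sketch only: the byte values come from the example above; using Python's
    # "cp037" codec to stand in for "EBCDIC US" is an assumption.
    data = bytes([0x61, 0x62, 0x80, 0x63])

    # Declared as a Unicode 8-bit (UTF-8) string: 61, 62, 63 decode to 'a', 'b',
    # 'c', while the lone 0x80 is an ill-formed fragment, shown here as U+FFFD.
    print(data.decode("utf-8", errors="replace"))  # ab\ufffdc

    # Declared as EBCDIC US (code page 037): the very same bytes read as /, Â, Ø, Ä.
    print(data.decode("cp037"))  # /ÂØÄ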

