Mike Schwab wrote:

>Would UTF-16 to UTF-8 be a better conversion?  You still have to be
>certain of the source character set.  And is supported by some z/OS
>software.

 

As Cameron indicated, your comment doesn't quite make sense. UTF-8 is just a 
variable-length encoding of Unicode, in which basic ASCII* characters (0-127) 
are single-byte, some characters are two-byte, some three-, and some four-. 
It is more efficient than UTF-16, especially in the "mostly basic ASCII" case.
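A quick illustration (a minimal sketch in Python, not z/OS-specific) of the byte counts the two encodings actually produce for a few sample characters:

```python
# Compare encoded lengths under UTF-8 and UTF-16 (big-endian, no BOM).
# UTF-8: ASCII is 1 byte; other characters take 2, 3, or 4 bytes.
# UTF-16: most characters take 2 bytes; supplementary-plane ones take 4.
for ch in ["A", "é", "€", "𝄞"]:
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-be")
    print(f"U+{ord(ch):04X}: UTF-8 = {len(utf8)} byte(s), UTF-16 = {len(utf16)} byte(s)")
```

For mostly-ASCII text, UTF-8 roughly halves the size relative to UTF-16, since each ASCII character costs 1 byte instead of 2.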

 

*Yes, I know, talking about "ASCII" in the context of Unicode is dangerous and 
arguably incorrect. You know what I mean.


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN