On Mon, 19 Feb 2018 12:12:07 -0600, Alan Altmark wrote:
>
>I've been doing code page and translation table development and analysis since 
>about 1987.  The term "ASCII" is just as ambiguous as "EBCDIC", as without 
>qualification each term only sets an expectation for the 8-bit encoding of a 
>somewhat vague set of glyphs.
> 
How about USASCII?  Is that unambiguously the 7-bit set?

I've encountered two EBCDIC C implementations.  One of them returned "true" if
the EBCDIC character translated to ASCII was a USASCII character.  The other
returned "true" simply if the EBCDIC code point was less than 128.
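
For concreteness, here is a sketch of the two behaviors in C, assuming the test
in question is an isascii()-style classification.  The function names and the
translation-table argument are mine, not the vendors':

    /* Behavior 1: translate the EBCDIC byte to ASCII first, then ask
     * whether the result lands in the 7-bit USASCII range.  The
     * 256-entry translation table (e.g. CP037 to ISO 8859-1) is
     * assumed, not shown. */
    int is_ascii_by_translation(unsigned char ebcdic_ch,
                                const unsigned char ebcdic_to_ascii[256])
    {
        return ebcdic_to_ascii[ebcdic_ch] < 128;
    }

    /* Behavior 2: ignore translation and test the raw EBCDIC code
     * point, which mostly selects control characters, since EBCDIC
     * letters and digits live at 0xC1 and above. */
    int is_ascii_by_code_point(unsigned char ebcdic_ch)
    {
        return ebcdic_ch < 128;
    }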

>The original 7-bit ASCII established a full 95-glyph character set that 
>remains invariant today among all 8-bit ASCII code pages.   EBCDIC wasn't 
>quite so lucky, as it has only 56 invariant characters.  It would be 82, but 
>lower case a-z can vary or be non-existent.  If your data is composed of only 
>the 56 common invariant characters, any EBCDIC and ASCII code page will 
>suffice.  All "Latin" EBCDIC code pages will work for lower case a-z.
>
That's *so* 20th century!  And 95 is better than 56.  And luck had little to do
with it; it was more lackadaisical design.  7-bit ASCII was extant when EBCDIC
was conceived.  Prudence should have dictated that the EBCDIC code points
corresponding to those 95 glyphs be kept invariant.
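
As a minimal sketch of the check Alan's last sentence implies: data confined to
the 56-character invariant set survives any EBCDIC/ASCII code-page pairing.  The
character list below is my reading of that set (A-Z, 0-9, space, and 19
punctuation marks), and the function name is mine:

    #include <string.h>

    /* Returns 1 if every byte of s is in the assumed 56-character
     * EBCDIC invariant set.  Compiled by an EBCDIC C compiler, these
     * character literals are already the invariant EBCDIC code points,
     * so no translation table is needed for the test itself. */
    int uses_only_invariant_ebcdic(const char *s)
    {
        static const char invariant[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789"
            " +<=>%&*\"'(),_-./:;?";
        for (; *s != '\0'; s++)
            if (strchr(invariant, *s) == NULL)
                return 0;   /* variant character: translation may differ */
        return 1;           /* safe under any EBCDIC/ASCII pairing */
    }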

If the domain of practically any problem is sufficiently restricted, the solution
becomes trivial.  And mostly useless.

-- gil

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
