On 16-Feb-16 10:58, [email protected] wrote:
> On Tue, 16 Feb 2016 10:28:27 -0500
> Clem Cole <[email protected]> wrote:
>
>> Using EBCDIC instead of ASCII was a sad thing.   IBM had been heavy in
>> the development of ASCII and I
>> believe even chaired the ANSI committee creating it.   In fact if you look
>> at marketing literature, S360 was supposed to be the
>> first commercial system to support it.   But with OS/360 being so late,
>> Brooks was said to have made the decision to keep the primary code EBCDIC (for
>> compatibility).   Until the switch to POWER 25+ years later, IBM
>> (and its users) would pay that price.
> There was no switch to POWER. The very latest z Architecture machines still
> use EBCDIC and it has never been an issue. Yeah, we prefer big-endian too.
>
Not to fan the architecture religious wars, but that's a rather insular
point of view.

EBCDIC *is* an issue for users outside the IBM sphere.

It's a pain for interchange with the rest of the world.

The different collating sequence confounds people who expect digits
before letters (or vice-versa).
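To make that concrete, here's a small sketch using Python's cp037 codec
(one common EBCDIC code page - picked only for illustration, not anything
from the discussion above).  The same two strings sort in opposite orders
depending on whose byte values you compare:

```python
# In ASCII, digits (0x30-0x39) sort before letters (0x41+).
# In EBCDIC cp037, letters ('A' = 0xC1) sort before digits ('0' = 0xF0).
words = ["A1", "1A"]

ascii_order = sorted(words, key=lambda s: s.encode("ascii"))
ebcdic_order = sorted(words, key=lambda s: s.encode("cp037"))

print(ascii_order)   # ['1A', 'A1'] - digit-first
print(ebcdic_order)  # ['A1', '1A'] - letter-first
```

Same data, same comparison, different answer - which is exactly what
confounds people moving files between the two worlds.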

Languages end up with different pain points as a result of trying to
paper this over.   Look at Perl, which is reasonably portable.  EBCDIC
still trips up library modules regularly. 

Regular expressions frequently are coded as though ASCII were the only
codeset.  So are sorts.  The few characters in one but not the other are
an annoyance - now lessened by Unicode.  But that has its own issues.
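A sketch of why the ASCII-only habit bites (again assuming the cp037
code page, just for illustration): an [A-Z] byte range is 26 contiguous
values in ASCII, but in EBCDIC the alphabet has gaps, so the "same"
range spans 41 byte values and matches things that aren't letters:

```python
# ASCII: 'A'..'Z' is one contiguous run of 26 byte values.
a_ascii, z_ascii = ord("A"), ord("Z")
print(z_ascii - a_ascii + 1)  # 26

# EBCDIC (cp037): 'A' = 0xC1, 'Z' = 0xE9, with non-letter gaps
# between A-I, J-R, and S-Z.
a_ebcdic = "A".encode("cp037")[0]
z_ebcdic = "Z".encode("cp037")[0]
print(z_ebcdic - a_ebcdic + 1)  # 41 - a naive [A-Z] over-matches
```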

Yes, people in the EBCDIC world learn to code 'properly' to accommodate
these differences.  People in the rest of the world are regularly
tripped up by them.  Writing codeset-independent code is a lot harder
than writing portable C.  Neither is fun.  But at least portable C has
lots more tools for detecting the issues.  And few non-IBM development
platforms support an EBCDIC locale for people to validate their code -
if they are even aware they should care.

I knowingly code stuff for myself with alarm bells going off "that won't
work for EBCDIC".  But I don't care to take the time for code that will
"never leave my house".  Until it does :-)

That's not to say that one codeset is better than the other.  At one
time, either was a sane trade-off between legacy compatibility and
technology.  IBM made one choice.  Unfortunately, the rest of the world
didn't follow IBM.   And IBM, for sensible business, if not technical,
reasons stuck with its choice.

Today, there still is pain.  It's not which one is 'better'.  It's that
they're *different*.

Unicode has its own coding and migration issues.  But at least in
UTF-8, the collating sequence doesn't change: byte-wise comparison
gives the same order as code-point comparison.  And you can bring in
applications without a porting effort.
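A quick sketch of that property - UTF-8 was designed so that sorting
the encoded bytes gives the same order as sorting by code point (the
sample strings here are made up for illustration):

```python
# Byte-wise order of UTF-8 encodings == code-point order of the strings.
strings = ["café", "cafe", "Z", "a", "日本"]

by_codepoint = sorted(strings)
by_bytes = sorted(strings, key=lambda s: s.encode("utf-8"))

print(by_codepoint == by_bytes)  # True
```

So a naive byte-comparing sort keeps working when ASCII data grows into
UTF-8 - unlike an ASCII-assuming sort fed EBCDIC.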

I'm endian-neutral.  I like bit 0 always being a sign bit & network
compatibility of BE.  And I like the easy multi/variable-precision math
of LE.
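The LE convenience is that the limb at offset i always has weight
base**i no matter how wide the number grows - a toy sketch (my own
illustration, not anything from the thread):

```python
# Little-endian limbs: least significant first, so position alone
# determines weight, and widening the number never moves existing limbs.
def to_int(limbs, base=2**8):
    return sum(d * base**i for i, d in enumerate(limbs))

print(hex(to_int([0x34, 0x12])))        # 0x1234
print(hex(to_int([0x34, 0x12, 0x00])))  # still 0x1234 - widened in place
```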

I'm also a PDP-10 person - where byte size and codeset are whatever you
want.  (1-36 bits for the byte size.)  And COBOL supported ASCII, EBCDIC
and SIXBIT equally as far back as the 70s... As did sort, and tape labels.


_______________________________________________
Simh mailing list
[email protected]
http://mailman.trailing-edge.com/mailman/listinfo/simh
