Bernd,

I can't agree that mainframe applications are more stable because of packed
decimal arithmetic.

Rather, I would argue that mainframe developers generally have a
philosophically different approach to software development than
distributed/workstation developers.  In particular, I would argue that they
generally take more time, both in the development process and in the code
review process, and that this additional time is what makes the difference.

By the way, assuming all bit patterns are equally likely, a 5-byte field
capable of containing a 9-digit packed decimal value has a 0.55% probability
of containing a valid packed decimal value if all six (6) valid sign
representations are accepted, and a 0.18% probability if only the two (2)
preferred sign representations are accepted.
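For anyone who wants to verify the arithmetic, here is a quick sketch in
Python (not COBOL, just for brevity):

```python
# A 5-byte packed decimal field is 10 nibbles: 9 digit nibbles plus
# one sign nibble. Each nibble can take 16 values.
total_patterns = 16 ** 10

# Digit nibbles must each be 0-9; the sign nibble comes from the
# accepted sign set.
valid_6_signs = 10 ** 9 * 6   # all six valid signs: A, B, C, D, E, F
valid_2_signs = 10 ** 9 * 2   # preferred signs only: C (+) and D (-)

print(f"all six signs:   {valid_6_signs / total_patterns:.2%}")  # 0.55%
print(f"preferred signs: {valid_2_signs / total_patterns:.2%}")  # 0.18%
```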

John P. Baker
NGSSA, LLC

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[email protected]] On
Behalf Of Bernd Oppolzer
Sent: Saturday, July 14, 2012 4:42 PM
To: [email protected]
Subject: Re: COBOL packed decimal

There is one thing I like very much about packed decimal data: its
redundancy.

With packed decimal data, the probability that the use of an uninitialized
variable will lead to a run time error (0C7 abend) is very high. Take a nine
digit decimal variable - if all bit patterns are equally likely, the
probability that it contains a valid decimal representation is not very high
(on the order of 0.1 percent).
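The estimate can be checked empirically. A rough Monte Carlo sketch in
Python, with a hypothetical is_valid_packed() helper standing in for the
hardware check:

```python
import random

def is_valid_packed(data: bytes) -> bool:
    """True if the byte string is a valid packed decimal value:
    every nibble but the last must be a digit (0-9), and the last
    (sign) nibble must be one of the six valid codes A-F."""
    nibbles = [n for b in data for n in (b >> 4, b & 0x0F)]
    return all(n <= 9 for n in nibbles[:-1]) and nibbles[-1] >= 0x0A

# Simulate uninitialized 5-byte fields full of random garbage; a
# decimal instruction on such a field would raise a 0C7 data
# exception whenever is_valid_packed() returns False.
random.seed(42)
trials = 200_000
hits = sum(is_valid_packed(random.randbytes(5)) for _ in range(trials))
print(f"valid by pure chance: {hits / trials:.2%}")   # roughly 0.55%
```

So roughly 199 out of every 200 garbage patterns would be trapped at first
use.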

With all binary data types, all bit patterns are valid, and you have no way
to detect the use of an uninitialized variable (the German Telefunken
hardware had tag bits in storage which allowed for the detection of such
errors even in the binary case - the storage was initially tagged with a
"strange" data type, and so the hardware produced a so-called
"Typenkennungs-Alarm", a type-tag alarm).

I believe that most of the rumour that mainframe applications are more
"stable" than applications on other platforms comes from this property of
the decimal data type, and from the fact that most applications are written
in COBOL and use this data type, so that such errors are detected during the
testing stage.

The more decimal data you have in your applications, the more stable your
applications are. This could also be one of the reasons why the programming
language C is not so stable (among others, like pointer arithmetic, no
checking of subscripts, etc.): the absence, normally, of decimal arithmetic
in C.

I'm not sure if I have told you already: in our shop, we have run the PL/1
applications in production with SUBSCRIPTRANGE enabled for some years now,
and we are very happy with this. The performance is acceptable.
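As an aside, the run-time subscript check that SUBSCRIPTRANGE provides is
what languages with mandatory bounds checking do by default. A trivial
illustration in Python (not PL/1 semantics, just the same idea):

```python
# PL/1 with SUBSCRIPTRANGE raises a condition on an out-of-range
# subscript instead of silently reading adjacent storage, as C would.
# Python lists always behave this way:
table = [10, 20, 30]

try:
    x = table[5]          # out-of-range subscript
except IndexError:
    print("subscript out of range caught at run time")
```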

Kind regards

Bernd

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
