In my opinion, there is one important truth about mainframe stability and decimal arithmetic.
I'm not talking about hardware and so on; maybe the mainframes are more stable than the other platforms there too. But on the software side: the use of packed decimal arithmetic leads to more stable software, because not every bit pattern in storage is a valid decimal number. In fact, for a decimal field of, say, six bytes, it is very unlikely that six arbitrary bytes in storage form a valid representation (a probability on the order of 0.1-0.2%, depending on which sign codes you count). So most uses of uninitialized packed decimal fields are detected in the test stage of the application, which is not true for applications using only binary numbers. This is, in my opinion, one of the top reasons why mainframe (application) software appears to be stable and reliable: the use of uninitialized storage is flagged by a 0C7 data exception with high probability. You get much more safety this way, and the performance price you pay for it is small.

BTW, on some older (non-IBM) machines there were concepts like storage tags, which made it possible to detect the use of uninitialized variables even for binary values. I don't understand why these concepts never reached the market. They would make software development and testing easier and maybe cheaper.

Kind regards

Bernd

On Friday, 8 September 2006 01:16, you wrote:
> Ooops, I had too many brief statements. What I was trying to say was:
>
> 1. Even a mainframe can be insecure and unreliable if the facilities
> aren't used properly. The application has to use the security
> facilities in order to be secure. Thus the application (or the system
> installation process or the security administration) may be the least
> reliable component.
>
> 2. I am saying that COBOL is required to deliver the same results on
> decimal arithmetic regardless of platform and presence or absence of
> decimal arithmetic on that platform. Thus the HP Superdomes in this
> case should still get the same results in any given computation if
> compatible compiler options are chosen to match what was done on the z
> series.
>
> 3.
> Packed decimal arithmetic is much slower than binary on the z
> series. True decimal arithmetic becomes even more painful,
> compute-time-wise, on those platforms that don't have a decimal
> arithmetic instruction set. The greater speed of the other processors
> offsets this to at least some extent.
>
> > Or something else?

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

