On 2019-05-02 9:51 AM, Jesse 1 Robinson wrote:
This is almost nutty enough to be a weekend post, but it's a live production
environment, so here goes. We have a prod job (batch Db2) that has run daily
for years. Suddenly on 14 April it started abending with this message from
Fault Analyzer:
IEW2541S 471A MEMBER CUA625 IDENTIFIED BY DDNAME JOBLIB WITH CONCATENATION
NUMBER 1 CONTAINS A BLOCK OF SIZE 32760 WHICH IS LONGER THAN THE
DATA SET BLKSIZE.
IDI0010E IEWBIND error INCLUDE CUA625 rc=83000507
IDI0002I Module CUA625, program CUA625, offset X'7712': Abend U3003
So this is all absolutely true. The module *is* 32760 while the PDS *is*
19069, the ancient 3350 track size that was fairly standard for load libraries
in the Dark Ages. So what's the mystery? How on earth did the 13 April and *all
previous* runs work OK?
Here's my theory...
So you have this set up all ticking over nicely, and it works for years.
And then one day the application has a problem (an abend?) which causes
Fault Analyzer to wake up and think: "I better look into this!"
So then FA decides to make some Binder API calls to get the low-down on
this application program, and it is this I/O that fails, because the
block size in the data set label is smaller than an actual block in the
member.
Of course, program fetch does not care about such niceties: it builds
its own channel programs from lengths recorded in the member's control
information, and therefore ignores the DCB attributes in the VTOC entry.
So, no application problems means that the block size mismatch is not
exposed. Application problems means that the automatic application
problem looker-at-er tries to do "normal" I/O to the library which
exposes the block size mismatch.
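The two access styles can be sketched in miniature. This is Python and purely an analogy, not real z/OS I/O; the function names, the two-block layout, and the error wording are invented to mirror the idea that one reader trusts the label's BLKSIZE while the other trusts the member's own control information:

```python
# Illustrative analogy only: a "load library member" stored as two blocks.
# The data set label (DSCB) claims BLKSIZE 19069, but one actual block
# on disk is 32760 bytes, as in the IEW2541S message above.
LABEL_BLKSIZE = 19069
blocks = [b"\x00" * 19069, b"\x00" * 32760]  # actual blocks as written

def read_like_binder(blocks, blksize):
    """Access-method style I/O: the buffer is sized from the label's
    BLKSIZE, so a longer block is an error (cf. IEW2541S)."""
    for i, blk in enumerate(blocks):
        if len(blk) > blksize:
            raise IOError(
                f"block {i} is {len(blk)} bytes, longer than BLKSIZE {blksize}")
        yield blk

def read_like_fetch(blocks):
    """Program-fetch style: each block is read at the length recorded in
    the member's control information; the label's BLKSIZE is never consulted."""
    for blk in blocks:
        yield blk

# Fetch-style reading succeeds, so the job runs clean for years...
total = sum(len(b) for b in read_like_fetch(blocks))

# ...until something does label-driven I/O and trips over the big block.
try:
    list(read_like_binder(blocks, LABEL_BLKSIZE))
except IOError as err:
    print(err)  # block 1 is 32760 bytes, longer than BLKSIZE 19069
```

Same data either way; only the reader that believes the VTOC entry notices the mismatch, which is exactly why the problem stayed hidden until FA went looking.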
Cheers,
Greg
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN