Is the file RECFM=V? In my experience, variable-length records seem to vex ISPF Browse 
and Edit more than RECFM=F files do; ISPF runs out of storage sooner with the 
variable-length files. That is with a TSO region size of 131072.

If alternative editors/viewers are available, they may fare better. IBM File 
Manager works, but it can have storage issues when a copybook is mapped to large 
records; however, there are ways to have it work with a subset of the file. At 
$JOB - 1, I seem to recall File-AID and InSync worked well with large files.

Alan

-----Original Message-----
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>
Sent: May 23, 2023 4:37 PM
To: <IBM-MAIN@LISTSERV.UA.EDU>
Subject: Re: Why does ISPF BROWSE abend with S878 searching a large sequential 
file?

Your system is a lot less busy than the one I use. Just pressing MAX down to the bottom 
of a smaller 8,778-cylinder generation of the same file set I mentioned earlier in 
BROWSE didn't complete in over 30 minutes. I had to ATTN out of it, and that 
logged me off too.

Then again, the nightly batch window has already opened here, and my TSO session is 
competing with all of that work.

Peter

-----Original Message-----
From: IBM Mainframe Discussion List On Behalf Of Schmitt, Michael
Sent: Tuesday, May 23, 2023 6:30 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Why does ISPF BROWSE abend with S878 searching a large sequential 
file?

I just tried a browse of a sequential file with 6,355,209 records, using 6,621 
cylinders, searching for a 31-byte key at the beginning of the record; the key 
is found at record number 6,355,208.

I normally have a 256 MB region, but for this test I changed it to 48000K.

Before Browse:

             Limit   In-use    Avail
Below 16M:   9060K     400K    8660K
Above 16M:  49024K    5240K   43784K


After entering Browse of data set:

             Limit   In-use    Avail
Below 16M:   9060K     468K    8592K
Above 16M:  49024K    5280K   43744K


After the F 1 'I000065800232001005001GIORFIORC' command completed (in about 44 
seconds):

             Limit   In-use    Avail
Below 16M:   9060K     468K    8592K
Above 16M:  49024K    6876K   42148K


After exiting Browse:

             Limit   In-use    Avail
Below 16M:   9060K     440K    8620K
Above 16M:  49024K    5240K   43784K


This is on z/OS 2.4 on a z14.

-----Original Message-----
From: IBM Mainframe Discussion List On Behalf Of Farley, Peter
Sent: Tuesday, May 23, 2023 4:19 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Why does ISPF BROWSE abend with S878 searching a large sequential 
file?

Possibly. But if so, we are in a lot more trouble than annoying abends out of 
TSO, as this is customer data with legal archiving requirements. If it is 
somehow corrupted by compression (I'm not sure at this point whether that file is 
using IBM software compression or zEDC hardware compression), we have MUCH larger 
issues.

IMHO it is probably just BAD (broken as designed) BROWSE code, but with OCO 
there is no way for an ordinary customer to know.

Peter

-----Original Message-----
From: IBM Mainframe Discussion List On Behalf Of Schmitt, Michael
Sent: Tuesday, May 23, 2023 4:56 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Why does ISPF BROWSE abend with S878 searching a large sequential 
file?

I agree that Browse isn't supposed to require enough memory to hold the file, 
so it should work.

But.

Is "compressed data" the key? How is it compressed? Does the fact that it is 
compressed mean that more data has to be in memory? Or is it going wild on 
decompression, such as if the data is corrupted so the length is wrong? (Like 
how it is possible to create a tiny .ZIP file that decompresses to terabytes.)


-----Original Message-----
From: IBM Mainframe Discussion List On Behalf Of Farley, Peter
Sent: Tuesday, May 23, 2023 3:13 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Why does ISPF BROWSE abend with S878 searching a large sequential file?

This has happened to me twice this afternoon, and several other times in the 
last few months. I am trying to browse (from ISPF 3.4) a quite large 
sequential file (more than 14,500 cylinders of compressed data) for a record with a 
specific 31-byte key at the beginning of the record, and Browse abends with 
S878 and throws me off TSO entirely, requiring me to log in again each time.

My TSO logon region size is set to 48000, so what in the world is making browse 
consume so much memory that it runs out and crashes my TSO session entirely?

I know, OCO prevents anyone from knowing for sure, but if you have any clue I'd 
appreciate hearing it.

I've been forced to search that file using SORT in a batch job to keep the 
frustration level lower.
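
For reference, that batch search is just a SORT copy with INCLUDE and STOPAFT. A 
minimal sketch, with a placeholder data set name and the key value borrowed from the 
earlier example; the positions assume the 31-byte key starts in column 1 of 
fixed-length records (for RECFM=V it would start at position 5 because of the RDW):

//SEARCH  EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DISP=SHR,DSN=MY.BIG.SEQ.FILE
//SORTOUT DD SYSOUT=*
//SYSIN   DD *
* Copy only records whose first 31 bytes match the key; STOPAFT=1
* stops reading the input after the first matching record.
  OPTION COPY,STOPAFT=1
  INCLUDE COND=(1,31,CH,EQ,C'I000065800232001005001GIORFIORC')
/*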

Peter
--


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

