Re: dataset allocation

2020-10-06 Thread Wayne Bickerdike
Give us an idea of how big each file is. OPEN/CLOSE is expensive. QSAM with large buffers should go pretty quickly. LOCATE mode instead of MOVE mode can speed things up when you are reading. On a different note, I just compared EDIT macro performance versus IPOUPDTE. IPOUPDTE was about 600 times

Re: IEASYS problem

2020-10-06 Thread R.S.
On 06.10.2020 at 13:39, Barbara Nitz wrote: In either case, it is possible to "functionally replace" IEASYS00 with other members. We have ieasys00 in our regular parmlib, overriding the IBM-delivered one. All our ieasys00 contains is CLPA, meaning that we always IPL with CLPA. All the rest

Re: IEASYS problem

2020-10-06 Thread Peter Relson
>Try ieasynck from sys1.samplib. I don't know of such a member. 'sys1.samplib(SPPINST)' installs a tool for which the front exec is SYSPARM. It has two main purposes -- help you to build a syntactically correct LOADxx and let you view what your parmlib members would look like with the

Re: IEASYS problem

2020-10-06 Thread Allan Staller
Classification: The Init/Tuning ref is quite specific about the order of precedence between SYS00 and SYSxx. Remember, there may be more than one SYSxx (e.g. SYSPARM=(aa,bb,cc,00) specified in LOADxx). I don't remember if it is the 1st hit or last hit that wins. In either case, it is possible to

Re: IEASYS problem

2020-10-06 Thread Allan Staller
Classification: You could try having no IEASYS00 in the parmlib concat -Original Message- From: IBM Mainframe Discussion List On Behalf Of Attila Fogarasi Sent: Monday, October 5, 2020 11:42 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: IEASYS problem [CAUTION: This Email is from

Re: IEASYS problem

2020-10-06 Thread Barbara Nitz
>In either case, it is possible to "functionally replace" IEASYS00 with other >members. We have ieasys00 in our regular parmlib, overriding the IBM-delivered one. All our ieasys00 contains is CLPA, meaning that we always IPL with CLPA. All the rest of the parms are in sysplex-/system-specific

Re: IEASYS problem

2020-10-06 Thread R.S.
You want something unnecessary and impossible. IEASYS00 has to be read. But... why do you want a 0B member? a) you always choose 0B => just rename it to 00. b) you want to have a choice (0B, AA, 04, 0C...). In this case you have a further choice: b.1) you may create an empty or "almost empty"

Re: IEASYS problem

2020-10-06 Thread Allan Staller
Classification: Agreed! -Original Message- From: IBM Mainframe Discussion List On Behalf Of R.S. Sent: Tuesday, October 6, 2020 7:18 AM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: IEASYS problem [CAUTION: This Email is from outside the Organization. Unless you trust the sender, Don’t

Re: IEASYS problem

2020-10-06 Thread Gadi Ben-Avi
I tried that. It complained, and the IPL stopped. -Original Message- From: IBM Mainframe Discussion List On Behalf Of Allan Staller Sent: Tuesday, October 6, 2020 2:24 PM To: IBM-MAIN@LISTSERV.UA.EDU Subject: Re: IEASYS problem Classification: You could try having no IEASYS00 in the

How to Refresh System REXX Libraries

2020-10-06 Thread Harris Morgenstern
To refresh the System REXX libraries, you'll need to stop System REXX, update AXRxx parmlib members and restart it. There is an open RFE against System REXX (#49562) to address this issue (Dynamic Reconfiguration of System REXX). Harris Morgenstern z/OS Storage Management and System REXX

Re: DAE Dataset - Compression

2020-10-06 Thread Jousma, David
How big is your DAE dataset? We've got sysplex shared DAE, and it’s a whopping 45 tracks, and only 20% utilized. _ Dave Jousma AVP | Director, Technology Engineering  Fifth Third Bank  |  1830

Re: DAE Dataset - Compression

2020-10-06 Thread Mark Jacobs
Point noted. Thanks. Mark Jacobs Sent from ProtonMail, Swiss-based encrypted email. GPG Public Key - https://api.protonmail.ch/pks/lookup?op=get=markjac...@protonmail.com ‐‐‐ Original Message ‐‐‐ On Tuesday, October 6th, 2020 at 10:37 AM, Jousma, David

Re: DAE Dataset - Compression

2020-10-06 Thread Tom Conley
On 10/6/2020 10:06 AM, Mark Jacobs wrote: Before I open up a ticket with IBM I wanted to ask if the DAE dataset can be allocated as compressed? I tried to migrate our shared DAE dataset to a newly allocated one with compression enabled. It didn't go well. One SVCDUMP we received was

Re: DAE Dataset - Compression

2020-10-06 Thread Jousma, David
I guess my point was why even bother with dataset compression, even if it was 100 cylinders. _ Dave Jousma AVP | Director, Technology Engineering  Fifth Third Bank  |  1830 East Paris Ave, SE  | 

Re: How to Refresh System REXX Libraries

2020-10-06 Thread Lionel B Dyck
I tried to look at that RFE so I could vote for it but was told I wasn't authorized ☹ Lionel B. Dyck < Website: https://www.lbdsoftware.com "Worry more about your character than your reputation. Character is what you are, reputation merely what others think you are." - John Wooden

Re: DAE Dataset - Compression

2020-10-06 Thread Jim Mulder
DAE uses QSAM - the DCB is DAEDCB DCB MACRF=(GL,PM),DSORG=PS,RECFM=FB,LRECL=255,EROPT=ACC I don't know offhand of anything that would preclude using compressed data set for DAE. Jim Mulder z/OS Diagnosis, Design, Development, Test IBM Corp. Poughkeepsie NY "IBM Mainframe Discussion
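Given Jim's point that nothing obvious precludes a compressed data set here, a hedged sketch of allocating a replacement DAE dataset as a compressed extended-format sequential file. The dataset name, data-class name, and space values are placeholders; whether compression actually happens depends on your site's SMS data class requesting extended format with compaction:

```jcl
//ALLOCDAE JOB ...
//STEP1    EXEC PGM=IEFBR14
//* Hypothetical names: pick your own DSN and a site-defined,
//* compression-enabled SMS data class.
//DAENEW   DD  DSN=SYS1.DAE.NEW,DISP=(NEW,CATLG),
//             RECFM=FB,LRECL=255,DSORG=PS,
//             DSNTYPE=EXTREQ,DATACLAS=COMPRESS,
//             SPACE=(CYL,(10,5)),UNIT=SYSDA
```

The RECFM/LRECL/DSORG values match the DAEDCB macro shown above; everything else is an assumption to be adapted locally.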

Re: DAE Dataset - Compression

2020-10-06 Thread Mark Jacobs
I issued a T DAE=01 to stop it across the sysplex, renamed the DAE datasets and then restarted DAE, T DAE=00. Mark Jacobs Sent from ProtonMail, Swiss-based encrypted email. GPG Public Key - https://api.protonmail.ch/pks/lookup?op=get=markjac...@protonmail.com ‐‐‐ Original Message ‐‐‐

Re: DAE Dataset - Compression

2020-10-06 Thread Mark Jacobs
The old one is 6 cylinders in size and is 100% full. We're a software development organization so we tend to get more software errors than average. Mark Jacobs Sent from ProtonMail, Swiss-based encrypted email. GPG Public Key -

Re: DAE Dataset - Compression

2020-10-06 Thread Jousma, David
I guess I don’t agree with that, Lizette. I mean, we can agree to disagree, there are many ways to run our environments, but clearing out DAE will just retrigger DUMPs for repetitive issues all over again. It may be old technology, but it works! 1400+ unique entries in our 20%-utilized DAE

Re: DAE Dataset - Compression

2020-10-06 Thread Lizette Koehler
If you fix the issue and the DAE entry is still in place, you might not see whether the issue is really gone. So as always, it depends. By removing the entry in DAE, it might show whether you actually fixed it. Lizette -Original Message- From: IBM Mainframe Discussion List On Behalf Of Jousma, David

DAE Dataset - Compression

2020-10-06 Thread Mark Jacobs
Before I open up a ticket with IBM I wanted to ask if the DAE dataset can be allocated as compressed? I tried to migrate our shared DAE dataset to a newly allocated one with compression enabled. It didn't go well. One SVCDUMP we received was this; COMPID=SC143,ISSUER=ADYTRNS FAILURE IN THE

Re: DAE Dataset - Compression

2020-10-06 Thread Lizette Koehler
I have not read this whole thread, so I apologize if I cover the same ground. The DAE dataset is very old technology. It is a sequential file and I am not sure you can make it compressed. That could be an RFE. Second, shutting down DAE like you did is the correct process. There is no reason to

Re: dataset allocation

2020-10-06 Thread Farley, Peter x23353
Joseph, I agree with Michael: if you are trying to do this in a TSO session, then stop doing that. Run it as a batch job. It still may not get done very quickly; it is common for the initiators that allow a programmer to run large-CPU / long-elapsed-time batch jobs to also get bumped way down

Re: dataset allocation

2020-10-06 Thread Joseph Reichman
Thanks. With the concatenation it seemed to go a lot quicker. I could be wrong. > On Oct 6, 2020, at 3:19 PM, Charles Mills wrote: > > Try kicking up BUFNO. I think QSAM is generally about 98% as good as it > gets. I could be wrong. > > Reading a big pile of big files is going to take some time

Re: dataset allocation

2020-10-06 Thread Joseph Reichman
It seemed like I processed the 100 concatenated files a lot quicker, but I didn’t do any exact testing; you may be right. > On Oct 6, 2020, at 3:30 PM, Paul Gilmartin > <000433f07816-dmarc-requ...@listserv.ua.edu> wrote: > > On Tue, 6 Oct 2020 14:56:21 -0400, Joseph Reichman wrote: >> >> I

Re: dataset allocation

2020-10-06 Thread Paul Gilmartin
On Tue, 6 Oct 2020 14:56:21 -0400, Joseph Reichman wrote: > >I posted a problem last week regarding allocating a concatenated dataset a >few of you (Seymour,Paul Gilmartin) suggested that when processing the 4,608 >VB (huge) files > (I believe Lizette offered a similar suggestion.) >That rather

Re: dataset allocation

2020-10-06 Thread Charles Mills
Allocation takes time (everything does, of course) but you were allocating either way, right? OPEN takes time, and you are now doing 'n' OPENs rather than one -- but OPEN is not "slow" -- not as slow as it was once -- and with concatenation you are doing a "mini-OPEN" under the covers every

Re: IEASYS problem

2020-10-06 Thread Matthew Stitt
It is the last one processed that takes precedence. Each one is processed from left to right, with each parameter overriding the previous specification. On my system I specify (00,HW,H8). Note IEASYS00 is the first one processed. Matthew On Tue, 6 Oct 2020 11:32:34 +, Allan Staller
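To illustrate the left-to-right, last-one-wins behavior Matthew describes, here is a hedged sketch of the corresponding LOADxx statement (member suffixes HW and H8 are from his example; the parameter shown in the comment is hypothetical):

```
SYSPARM  (00,HW,H8)
*
* IEASYS00 is read first, then IEASYSHW, then IEASYSH8.
* A parameter coded in more than one member (say, MAXUSER=)
* ends up with the value from the last member that specifies it.
```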

dataset allocation

2020-10-06 Thread Joseph Reichman
Hi. I posted a problem last week regarding allocating a concatenated dataset. A few of you (Seymour, Paul Gilmartin) suggested that when processing the 4,608 VB (huge) files, rather than concatenate them and deconcatenate them when I reach the limit, I just process one file at a time

Re: DAE Dataset - Compression

2020-10-06 Thread Roger Lowe
On Tue, 6 Oct 2020 09:31:26 -0700, Lizette Koehler wrote: >If you fix the issue, and the DAE entry is still in place, you might not see >if there is an issue. > >So as always, it depends. > >By removing the entry in DAE it might show if you actually fixed it > >Lizette > > >-Original

Re: dataset allocation

2020-10-06 Thread Charles Mills
Try kicking up BUFNO. I think QSAM is generally about 98% as good as it gets. I could be wrong. Reading a big pile of big files is going to take some time no matter what you do. Charles -Original Message- From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf
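One way to experiment with Charles's BUFNO suggestion without touching the program is to override it on the DD statement (sketch; the DD name, dataset name, and buffer count are placeholders to tune for your files):

```jcl
//* Hypothetical DD: give QSAM more buffers for sequential read.
//INPUT    DD  DSN=MY.BIG.VBFILE,DISP=SHR,
//             DCB=BUFNO=64
```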

Re: dataset allocation

2020-10-06 Thread Michael Stein
On Tue, Oct 06, 2020 at 03:34:51PM -0400, Joseph Reichman wrote: > Seemed like I processed 100 files concatenated a lot quicker > > But I didn’t do any exact testing you may be right I'd get or build a subroutine which captures the current real and CPU time (TIMEUSED macro?) and call it
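The measurement pattern Michael suggests, bracketing the work with captures of real and CPU time, would use the TIMEUSED macro on z/OS; as a portable illustration of the same idea, here is a minimal Python sketch (the `measure` helper is hypothetical, not part of any library):

```python
import time

def measure(func, *args, **kwargs):
    """Run func and return (result, real_seconds, cpu_seconds).

    Analogous to bracketing a file-processing loop with calls that
    capture elapsed real and CPU time (TIMEUSED on z/OS); here we
    use Python's perf_counter/process_time clocks instead.
    """
    real0 = time.perf_counter()
    cpu0 = time.process_time()
    result = func(*args, **kwargs)
    real1 = time.perf_counter()
    cpu1 = time.process_time()
    return result, real1 - real0, cpu1 - cpu0

if __name__ == "__main__":
    # Compare two approaches by timing each and printing the deltas.
    result, real_s, cpu_s = measure(sum, range(1_000_000))
    print(f"result={result} real={real_s:.4f}s cpu={cpu_s:.4f}s")
```

With a helper like this around each variant (one big concatenation versus one-file-at-a-time), the "seemed quicker" question becomes a pair of numbers instead of an impression.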