Re: REXX SDSF

2024-05-21 Thread Paul Feller
There is another possibility to look at: the BPXWUNIX interface for REXX might 
be useful.  I don't have a system to test whether the "tn3270,t,conn,max=*" 
command could be executed through BPXWUNIX.

This is an example of how I used it to work with the NETSTAT command.  I was 
trying to capture idle time information for IP connections.

I executed this from my TSO session, but I don't see any reason it couldn't 
also work in, say, a batch job.

ADDRESS USS 
CALL BPXWUNIX 'netstat -b IDLETIME',,REC.,STDE. 
Address TSO 
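
If it helps, here is a slightly fuller sketch of the same BPXWUNIX pattern.  I 
can't test it at the moment, so treat it as a sketch only; the "netstat -b 
IDLETIME" command string is just carried over from the example above, and the 
stem names are arbitrary.

/* REXX - run a z/OS UNIX command and capture its output             */
cmd = 'netstat -b IDLETIME'         /* shell command to run           */
drop out. err.                      /* stems for stdout and stderr    */
shrc = bpxwunix(cmd,,out.,err.)     /* stdin omitted                  */
if shrc <> 0 then do
  say 'BPXWUNIX returned rc='shrc
  do e = 1 to err.0                 /* show anything written to stderr */
    say err.e
  end
  exit 8
end
do i = 1 to out.0                   /* walk the captured output lines  */
  say out.i                         /* parse out the idle times here   */
end
exit 0

BPXWUNIX returns the command's exit code, and the captured stdout/stderr lines 
come back in the stems with the counts in OUT.0 and ERR.0.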


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of Tom 
Marchant
Sent: Tuesday, May 21, 2024 2:02 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: REXX SDSF

I suppose it might be nice for the DISPLAY command to have an option that means 
"return the results only to the requesting console and do not write to the 
log." It isn't something that I will request.

--
Tom Marchant

On Tue, 21 May 2024 12:48:14 -0500, Paul Gilmartin  wrote:

>On Tue, 21 May 2024 12:24:14 -0500, Tom Marchant wrote:
>
>>I certainly hope not. The system log is supposed to be a log of things done 
>>to the system.
>>
>I'd hardly regard DISPLAY, for example, as "things done to".  If 
>there's a security concern, the command should be rejected, not executed and 
>logged.
>
>But what was the OP trying to do, and what alternatives exist?
>
>
>>On Tue, 21 May 2024 13:19:03 -0400, Roberto Halais wrote:
>>
>>>I have a rexx that issues console commands thru ISFEXEC. Is there a 
>>>way to prevent the command output from appearing in the system syslog?
>>>I capture the output in my rexx, but it also appears in the system log.
--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: WHERE in debugger?

2024-05-17 Thread Paul Feller
This is a link to the PDF versions of the IBM Debugger manuals.  I personally
have not used the debugging tool, but I'm sure there are commands/displays
that will map out variables; I know I've seen that in the Fault Analyzer tool.

It looks like there are manuals for versions 16.0, 15.0 and 14.2 of the
software.  The link points to the 16.0 version.

z/OS® Debugger PDF versions - IBM Documentation
 

I hope this helps.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Seymour J Metz
Sent: Friday, May 17, 2024 4:11 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: WHERE in debugger?

Do the IDF and z/OS Debugger have commands that, given an address, will return
a load module, CSECT and symbol for that address? What about storage mapped by
a DSECT?

--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
עַם יִשְׂרָאֵל חַי
נֵ֣צַח יִשְׂרָאֵ֔ל לֹ֥א יְשַׁקֵּ֖ר

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Inquiry Regarding Sudden Increase in DFSMSrmm CDS Utilization

2024-05-10 Thread Paul Feller
Jason, I've never used DFSMSrmm, but if you got the EDG2120W message, it looks
like you could possibly get space back by following the suggestion under the
system programmer response.

Here are some links to manuals that might help you understand what is going on
in your DFSMSrmm world.

z/OS DFSMS - IBM Documentation

idard00_v3r1.pdf (ibm.com)  (DFSMSrmm Diagnosis Guide)

idarr00_v3r1.pdf (ibm.com)  (DFSMSrmm Reporting)


EDG2120W  CDS THRESHOLD = threshold_value% REACHED - CDS IS percentage_value% FULL

Explanation:

During CDS write activity, the CDS data set reached the percentage full
threshold defined on the CDSFULL operand in the EDGRMMxx parmlib member.
DFSMSrmm also issues this message during startup if the CDS is already at or
past the threshold specified. Each time the CDS fills up an additional 1%,
DFSMSrmm issues the EDG2122W message again. 

Note that it can be possible to add new records into the CDS even after it
reaches 100% utilization. Still, the High Used RBA/High Allocated RBA ratio
used for the CDS Utilization value is the best indicator of possible
shortage of space in a VSAM data set such as the RMM CDS. 

In the message text: 
threshold_value Indicates the current percentage full threshold for the
DFSMSrmm CDS data set. 
percentage_value Indicates how full the CDS data set is, in percentage
terms. 

System action 
Processing continues. 

Operator response 
Inform the system programmer. 

System programmer response
Use EDGBKUP to reorganize the control data set. The DFSMSrmm subsystem must
be stopped or quiesced when you reorganize the active control data set.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Mike Schwab
Sent: Friday, May 10, 2024 6:59 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Inquiry Regarding Sudden Increase in DFSMSrmm CDS Utilization

Is CA-Reclaim active to reuse empty control areas?  You can add it to existing
datasets, but it won't reclaim CAs that are already empty.

On Fri, May 10, 2024 at 5:00 AM Jason Cai <ibmm...@foxmail.com> wrote:
>
> Dear all
>
>  We have encountered a situation that requires your expertise and
guidance.
>
> Today, we have noticed a sudden increase in the utilization of our
DFSMSrmm Control Data Set (CDS) by 20%, from 55% to 75%, exceeding our
warning threshold of 70%. This surge is puzzling since our daily data backup
volume remains unchanged.
>
> Our question revolves around the possible DFSMSrmm activities, other than
backup operations, that could contribute to an increase in the RMM CDS
utilization. Currently, we only have the RMM journal .
>
> Could you kindly advise us on which messages we should be looking into to
identify the potential cause of this issue in the RMM journal?
>
> We would greatly appreciate any suggestions or recommendations that you
could provide.
>
> Thank you in advance for your time and assistance.
>
> Kind Regards,
>
> Jason Cai
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Consultation on the Potential Risks of Deleting Specific Datasets

2024-05-08 Thread Paul Feller
Jason, I did something similar to what David suggested.  I created a list of 
datasets from DCOLLECT that had been allocated but never opened.  I then ran a 
REXX routine that read the list to open/close the datasets.  At that point I 
let HSM do its thing.  In the back of my mind, I'm thinking that HSM might not 
touch a dataset if it has never been opened.  Now I don't recall if you can 
force HSM to migrate a dataset that has never been opened.
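
For what it's worth, below is a minimal sketch of the kind of REXX routine I
mean.  The DD names are made up for illustration (DSNLIST for the
DCOLLECT-based list, one fully qualified dataset name per record, and TOUCH as
a work DD), and it assumes it runs under TSO, either in the foreground or under
IKJEFT01 in batch.

/* REXX - open and close each dataset named in an input list               */
"EXECIO * DISKR DSNLIST (STEM dsn. FINIS"      /* read the list             */
do i = 1 to dsn.0
  name = strip(dsn.i)
  if name = '' then iterate                    /* skip blank records        */
  "ALLOC FI(TOUCH) DA('"name"') SHR REUSE"
  if rc = 0 then do
    "EXECIO 0 DISKR TOUCH (OPEN FINIS"         /* open, read nothing, close */
    "FREE FI(TOUCH)"
  end
  else say 'Allocate failed for' name 'rc='rc
end
exit 0

As I said, whether a simple open/close like this is enough to get HSM to treat
the dataset as referenced is the part I'm not sure about.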


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Jousma, David
Sent: Wednesday, May 8, 2024 7:46 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Consultation on the Potential Risks of Deleting Specific Datasets

Do you have DFHSM or some other management tool?   Just migrate them all with 
an expiration date.   Those that get recalled are still referenced somewhere.

Dave Jousma
Vice President | Director, Technology Engineering





From: IBM Mainframe Discussion List  on behalf of 
Jason Cai 
Date: Wednesday, May 8, 2024 at 8:17 AM
To: IBM-MAIN@LISTSERV.UA.EDU 
Subject: Consultation on the Potential Risks of Deleting Specific Datasets


Dear all



I am reaching out to discuss a specific operation we are considering for our 
z/OS DCOLLECT reports. Currently, we are planning to delete all the datasets 
where LASTREF=NONE and DSORG=PS. This operation seems crucial for system 
maintenance and optimization, yet I wanted to clarify any potential risks 
associated with this action.

As far as I understand, LASTREF refers to the last accessed date of the file. 
Hence, LASTREF=NONE implies that the file has not been accessed since its 
creation. Could you confirm if my understanding is correct?

Additionally, I have been attempting to search for more information about 
LASTREF=NONE in the IBM manual, but my efforts have yielded limited success. If 
possible, could you kindly guide me on how to navigate the manual to find 
specific information like this one or recommend any reliable resources where 
such details may be more readily available?



Lastly, does IBM have any specific requirements regarding the closure of PS 
datasets? By this I mean if a PS has been opened and never closed, would the 
LASTREF also be none?




--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: Mainframe performance tool replacement

2024-05-07 Thread Paul Feller
I'll go a little different route.  If the real issue is the dollars for the
software, there is an interesting approach you could look at.

The place I worked at had, some years ago, set up several LPARs that were
grouped together in a softcap capacity group.  We then forced jobs to run there
based on the software they ran.  A good example for us was SAS and SAS/MXG
stuff; we had other software that also got forced there.  This allowed us to
save money, since we didn't have to pay SAS the full machine price for their
software.

So, there are some things to consider.

You would need to have some spare MIPS and memory for two or three lpars on the 
same CEC.  We used three lpars.  One prod, one test and one for the systems 
programmers.

You would have to set up some type of routing scheme to get the jobs over to
those LPARs.  We used JES2 exits to do that, so we didn't have to change JCL to
get the jobs to run on those LPARs.  Naturally, you would use either shared
spool or an NJE connection to get jobs routed and run.

It would be best if you are in a sysplex so you could properly set up things
like WLM, GRS and your security product.

There may be some work for your scheduling software and maybe your spool
offload product (if you have one).

You have to look at all the resources your SAS jobs need and how you would
share those resources across LPARs.

I'm sure I missed something to mention.

Yes, this is a bit of work to setup.

The upside is you now have a place to run things like SAS that may have low
usage but high dollars.  I think we had somewhere between 6 and 10 software
products that we pushed to that environment; basically, low-usage software
that we needed but didn't like paying full price for.

Also, this assumes the software vendor plays nicely and agrees to charge based 
on the softcap.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of Bob 
Bridges
Sent: Tuesday, May 7, 2024 4:23 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Mainframe performance tool replacement

I think I'm about to reveal my obsolescence:  Where my clients didn't use SAS, 
they mostly used DYL-280II or QuikJob.  Or REXX, of course.

---
Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313

/* Jonny snorted. "You mean out among the decadence of the big worlds? Come on, 
Jame, you don't really believe that sophistication implies depravity, do you?" 
/ "Of course not. But someone's bound to try and convince you that depravity 
implies sophistication."  From _Cobra_ by Timothy Zahn */

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
raji ece
Sent: Tuesday, May 7, 2024 8:20 AM

We have been running with SAS and MICS software to analyze system performance
and to produce reports on a daily basis.

There is a situation to come out of using SAS due to many reasons.

We would like to know about alternative products to SAS and MICS.

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: Big LPAR vs small LPAR and DataSharing.

2024-04-19 Thread Paul Feller
First, I'm not a performance expert.

Here are my thoughts.

Based on the information you have provided, it would seem you are talking
about one CEC.  In the environment where I last worked, we had two CECs and we
did Db2 data sharing with Db2 systems spread across both CECs.  From time to
time, we would look at how well the CF environment was performing - things
like how busy the CFs were overall.  Also, we would look to see if there were
any issues related to lock and buffer pool structure sizes.

I'm sorry but the "it depends" really does come into play.  The overall
workload might determine the proper number of CPs that a given lpar might
need to support the workload.

As for the statement that it is better to do 4 LPARs with 8 CPs compared to 1
LPAR with 32 CPs: it seems you are saying you would have a CEC with 32 general
CPs.  So, from what I can see, you are basically dedicating the CPs to the 4
LPARs.  So yes, based on what I know, the overhead for any given LPAR managing
8 CPs would be less than managing all 32 CPs in one LPAR.

Now, if the CEC does not have 32 CPs and you still want 4 LPARs with 8 CPs
each, then you start adding overhead for the logical-to-physical CP
management.

Now there is overhead in a Db2 data sharing environment.  The CF
environment has gotten better over the years related to overall performance.
How much overhead you will have is going to be determined by how busy/active
the data sharing environment is.  There is the CF overhead and there is the
Db2 shared buffer pool management.  If you have a lot of lock contention
that could impact your overall Db2 performance.  

Also, if you are running several lpars in a sysplex that are sharing a
workload you might encounter some overhead from things like WLM and GRS.
This might add a little to the amount of CPU usage and CF usage.

The fog of retirement is starting to set in, but I seem to remember there is
an IBM tool that can be used for modeling LPAR/CEC layout.  I've normally seen
it used to help determine the size of a new CEC and how the LPARs would be
laid out, but I'm guessing you might use it to help determine a new LPAR
layout for an existing CEC.  Someone from IBM (or someone more familiar with
the tool) might have a better answer.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Radoslaw Skorupka
Sent: Friday, April 19, 2024 8:06 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Big LPAR vs small LPAR and DataSharing.

On 19.04.2024 at 10:32, Massimo Biancucci wrote:
> Hi everybody,
>
> In a presentation at GSE I saw a slide with a graph about the 
> advantage of having more small sysplex LPARs versus a bigger one.
> So for instance, it's better to have 5 LPARs with 4 processors than 
> one with 20.
>
> There was a sentence: "It doesn't take an extremely large number of 
> CPUs before a single-image system will deliver less capacity than a 
> sysplex configuration of two systems, each with half as many CPUs".
> And: "In contrast to a multiprocessor, sysplex scaling is near linear.
> Adding another system to the sysplex may give you more effective 
> capacity than adding another CP to an existing system."
>
> We've been told (IBM Labs, it seems) that a 4 ways DataSharing with 8 
> CPUs perform 20% better than a single LPARs with 32 CPUs.
> The same (at another customer site) with "having more than 8 CPUs in a 
> single LPAR is counterproductive".
>
> Putting these infos all together, it seems it's better to have more 
> small partitions (how small ???) in data sharing than, let me say, 
> four bigger ones (in data sharing too).
>
> Anybody there has direct experience on doing and measuring such scenarios?
> Mainly standard CICS/Batch/DB2 application.
> Of course I'm talking about well defined LPARs with High Polarization 
> CPUs, so don't think about that.
>
> Could you imagine and share your thoughts (direct experiences would be
> better) about where the inefficiency comes from ?
> Excluding HW issues (Polarization and so on), could it come from zOS 
> related inefficiency (WLM queue management) ?
> If so, do zIIP CPUs participate in inefficiency growth ?
>
> I know that the usual response is "it depends", anyway I'm looking for 
> general guidelines that allow me to choose.
>
> Thanks a lot in advance for your valuable time.
> Max

Well, it is not my problem (I use smaller configurations).
However I do have some remarks:
1. Sysplex overhead. Parallel Sysplex has a lot of advantages, except one:
CPU. That means it is more effective to assign 1000 MSU to a single LPAR than
to spread it across 2 or more LPARs.
2. Parallel Sysplex history - many years ago IBM introduced a new CPU
technology, CMOS. However, the new CMOS CPs were significantly less powerful
than the old ECL ones, and there was no way to add more CPs to the CPC or LPAR
(LPAR was quite a new concept at the time). So Parallel Sysplex was the only
way to scale the machines. However, today a single CPC can have up to 200 CPs -
a lot, more than you need. So 

Re: ./ ADD - which utility?

2024-04-13 Thread Paul Feller
If you have access to the DFSMS manuals, look in DFSMSdfp Utilities; there is
a whole section on IEBUPDTE, including examples.

Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
ITschak Mugzach
Sent: Saturday, April 13, 2024 9:40 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ./ ADD - which utility?

IEBUPDTE. JCL can be found in google

ITschak Mugzach
*|** IronSphere Platform* *|* *Information Security Continuous Monitoring for 
z/OS, x/Linux & IBM I **| z/VM coming soon  *




On Sat, Apr 13, 2024 at 5:30 PM <0619bfe39560-dmarc-requ...@listserv.ua.edu> wrote:

> Which utility do you use for control statement/input:
> ./ ADD
>
> A jcl for that would be nice too.
>
> ...Embarrassed by my lack of memory after 8 years out of this 
> environment...
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: SMP/E

2024-03-29 Thread Paul Feller
I'll ask if you did the UPGRADE in the same run as the RECEIVE process.  The 
example from the z/OS SMP/E Commands manual shows the UPGRADE command as part 
of the same run.

The following is from the SMP/E command manual.

SET BDY(ZOSTGT).
 UPGRADE.
 BYPASS(HOLDSYS)
 CHECK.

In this example, the UPGRADE command indicates incompatible changes may be made 
to SMP/E data sets.  If UPGRADE is not specified, then SMP/E stops any 
processing that would make incompatible changes to SMP/E data sets.  In this 
example, the APPLY command would very likely fail if the UPGRADE command were 
not first run.  However, once the UPGRADE command is run for a particular zone, 
then all SMP/E commands are authorized to make incompatible changes to all 
SMP/E data sets in that zone.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Steely.Mark
Sent: Friday, March 29, 2024 5:49 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SMP/E

I did perform the UPGRADE command on the global and target zone and nothing
happened.  The SMP/E level stayed the same and the receive still failed with the 
same error. 



-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of Tom 
Marchant
Sent: Friday, March 29, 2024 5:20 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SMP/E




Just issue the UPGRADE command. IIRC FIXCAT support was added to SMP/E around 
2008.
Your global zone must have been created with a release of SMP/E before that.

--
Tom Marchant

On Fri, 29 Mar 2024 21:59:49 +, Steely.Mark  wrote:

>I recently received this message:
>
>   GIM58903W  SMP/E COULD NOT PROCESS A ++HOLD FIXCAT MCS BECAUSE IT WOULD
>   HAVE MADE A CHANGE TO THE GLOBAL ZONE THAT CANNOT BE PROCESSED COMPLETELY
>   BY PRIOR LEVELS OF SMP/E. USE THE UPGRADE COMMAND TO ALLOW SMP/E TO MAKE
>   SUCH CHANGES.
>
>The message says you can execute the upgrade command.  ( I am not sure 
>if I can do that without having some type of maintenance applied)
>
>It has been a while since I upgraded SMP/e independently of a Service Pack or 
>some other type of installation.
>
>I thought I would just install the new release of SMP/e - for some reason I 
>can't find it on ShopzSeries.
>
>I have tried to look all over the internet for the answer, but I always get 
>hits on installing other products using SMP/e.
>
>We are currently at SMP/E 37.13
>Our SMP/E is 2.4 FMID - HMP1K00.
>
>What should I do ?

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: How can I determine MVS FQDSN from DD Name in Batch COBOL Program?

2024-03-26 Thread Paul Feller
I'm going to suggest something a little different.  Let me say that I'm not
against what Cameron is trying to do.  I've done the chase-the-control-blocks
thing and I've done the RDJFCB thing.

In the original email the statement was made.

"But in emergency, support could override SYSOUT=* with SYSOUT=mydatasetName
and my program will be able to determine we can honour the DISPLAYs."

I'm assuming that the override would be some type of JCL change.  Well, what
about just adding a parm to the JCL that indicates whether or not to do the
DISPLAYs?  Keep the SYSOUT DD as an FQDN, maybe a GDG that only keeps two or
three generations.

So now the COBOL code checks the parm to decide how to handle the DISPLAY
stuff.

Just a little different way of looking at things.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Kirk Wolf
Sent: Tuesday, March 26, 2024 10:46 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: How can I determine MVS FQDSN from DD Name in Batch COBOL
Program?

IMO, an assembler subroutine that does RDJFCB is a better option than chasing
control blocks.   I wrote one that we call from our C++ product and it's about
120 lines of assembler code, with XPLINK linkage.   However, we use EDCDSECT
to convert the JFCB DSECT to a C header so that you can look at anything you
want from C/C++.    It's been very reliable for many years in all sorts of use
cases.

Kirk Wolf
Dovetailed Technologies
http://coztoolkit.com


--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: CICS suspended wait time increased

2024-03-24 Thread Paul Feller
If you use the link that Colin gave you for the CICS VSAM string waits, you
can "back track" to all the documentation you need to help you with things.

https://www.ibm.com/docs/en/cics-ts/5.3?topic=waits-resource-types-fcpssusp-fcsrsusp-vsam-strings

Also, if I recall correctly, SYSVIEW should be able to show you the string
counts and other things like buffer counts for non-RLS and RLS VSAM datasets.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Colin Paice
Sent: Sunday, March 24, 2024 9:12 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: CICS suspended wait time increased

Ceda ?

On Sun, Mar 24, 2024, 13:47 raji ece <
05ff2ba04c83-dmarc-requ...@listserv.ua.edu> wrote:

> May I know how to check the string value in file definition?
>
> Looks like we have wait time for KSDS dataset.
>
>
>
> On Sun, Mar 24, 2024, 17:57 Colin Paice < 
> 059d4daca697-dmarc-requ...@listserv.ua.edu> wrote:
>
> >
> >
> > https://www.ibm.com/docs/en/cics-ts/5.6?topic=facility-monitoring-by-using-cics-db2-statistics
> > says
> > In the Db2 Connection statistics, check the field “Peak number of tasks on
> > Pool Readyq”, and also the field “Peak number of Tasks on TCB Readyq”. If
> > the latter is nonzero, tasks were queued waiting for a Db2 connection to
> > use with their open TCBs, rather than waiting for a thread. The tasks were
> > queued because the TCBLIMIT, the maximum number of TCBs that can be used
> > to control threads into Db2, had been reached. This shows that the number
> > of threads available (the sum of the THREADLIMIT values for the pool, for
> > command threads and for all DB2ENTRYs) exceeds the number of TCBs allowed.
> > TCBLIMIT or the THREADLIMIT values should be adjusted in this case.
> >
> > You may not have enough threads in the DB2 pool
> >
> >
> >
> > https://www.ibm.com/docs/en/cics-ts/5.3?topic=waits-resource-types-fcpssusp-fcsrsusp-vsam-strings
> > says
> > For non-RLS mode, the number of strings defined for a VSAM data set 
> > (STRINGS parameter in the FILE resource definition) determines how 
> > many tasks can use the data set concurrently. STRINGS can have a 
> > value in the range 1–255. For RLS mode, strings are automatically 
> > allocated as needed up to a maximum of 1024. When all the strings are in
> > use, any other task wanting to access the data set must wait until a
> > string has been released.
> >
> > Check the strings in the FILE definition
> >
> > Colin
> >
> > On Sun, 24 Mar 2024 at 11:33, raji ece < 
> > 05ff2ba04c83-dmarc-requ...@listserv.ua.edu> wrote:
> >
> > > We don't have any CPU constraint for cics address space.
> > >
> > > Below values are high for some transactions in the business hours, but
> > > in off hours we don't see much wait time with any other parameters.
> > >
> > > DB2 readyq wait time and FC VSAM string wait time have high values for
> > > some transactions during the mid hours, and we don't see any wait time
> > > for off hours.
> > >
> > >
> > >
> > > On Sun, Mar 24, 2024, 13:33 Massimo Biancucci < 
> > > 05a019256424-dmarc-requ...@listserv.ua.edu> wrote:
> > >
> > > > Hi,
> > > >
> > > > let me joke a bit.
> > > >
> > > > A man went to the doctor saying: "Doctor, if I use my finger and touch
> > > > my head I feel pain, if I touch my arm I feel pain, if I touch my
> > > > chest I feel pain, if I touch my leg I feel pain. What do you think
> > > > the problem is?"
> > > > The doctor: "Your finger is the problem, maybe it's broken"
> > > >
> > > > The pain points are at different levels, and for each one there's
> > > > something to look at.
> > > > You didn't report any "values", so we can assume the problem is
> > > > equally spread over all those indicators.
> > > >
> > > > Do you have any CPU constraints for CICS Address Spaces ?
> > > >
> > > > Best regards.
> > > > Max
> > > >
> > > >
> > > > Il giorno dom 24 mar 2024 alle ore 07:08 raji ece < 
> > > > 05ff2ba04c83-dmarc-requ...@listserv.ua.edu> ha scritto:
> > > >
> > > > > Hello Team,
> > > > >
> > > > > Good day,
> > > > >
> > > > > We have noticed some of our CICS transactions are getting delayed in
> > > > > recent days. When we check in SYSVIEW, the suspended time is high
> > > > > and the execution time is very short.
> > > > >
> > > > > In the suspended segregation list, we could see the below list with
> > > > > more wait time:
> > > > >
> > > > > Temp storage wait time
> > > > > File I/O wait time
> > > > > CICS exception wait time
> > > > > CICS TCB change mode delay time
> > > > > FC VSAM string wait time
> > > > > DB2 readyq wait time
> > > > > Resources manager interface time
> > > > > Resources manager suspended time
> > > > > (What are the aspects for all this wait time?)
> > > > >
> > > > > Also, I have noticed that the CICS exception and VSAM string wait
> > > > > times have the same value.
> > > > 

Re: Ideas for less-distruptive disruptions - Netmaster:Solve and CICS

2024-03-20 Thread Paul Feller
Tom, I think whoever is responsible for this project is going to have to come 
up with some help around the CICS stuff.  It sounds like Solve is similar to 
CL/Supersession.  I don't think the Solve software will help you much in this 
situation.  

Technically the CICS program does not need a MAP to send out the message.  It 
can do a simple EXEC CICS SEND (without the MAP option) of a text message to 
the screen with the needed information and then the program would end.  Then in 
CICS you would point the transaction that is being decommissioned to this 
simple program.  If you wanted to get tricky you could put in code to see what 
transaction is calling the program and customize the message based on the 
transaction.  If it was me, I would just keep the program simple and use a 
single generic message.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Mike Schwab
Sent: Wednesday, March 20, 2024 4:57 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Ideas for less-distruptive disruptions - Netmaster:Solve and CICS

Here is a set of CICS transactions to perform DogeCoin transactions via CICS.
The first screen could be simplified to be display only.

https://github.com/mainframed/DOGECICS

On Wed, Mar 20, 2024 at 4:01 PM Tom Longfellow 
<03e29b607131-dmarc-requ...@listserv.ua.edu> wrote:
>
> Paul
>
> The answer to your question is BOTH - Individual apps are being yanked before 
> the eventual complete shutdown of everything the region does.
>
> Our internal thoughts parallel your ideas for CICS.   One of the hurdles is 
> that since the mainframe is marked for death, we have no real access to 
> application programmers to write the new transaction.  I am too old to learn 
> all the skills required to write the code and screen maps for a new program.
>
> Solve is a VTAM session switcher. If we ever get a dedicated region with 
> only the "landing page" transaction, I would redirect SOLVE to send the 
> switching definition to the 'death zone' CICS.
>
> Has anybody developed an 'Out of Service' transaction for use during periods 
> of extended application or data base maintenance?
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: Ideas for less-distruptive disruptions - Netmaster:Solve and CICS

2024-03-20 Thread Paul Feller
Hi Tom..

Let me start by saying I don't know anything about Solve.

Now to my question.  Are you talking about individual CICS transactions going 
away or are you talking about the whole CICS region going away?

If you are talking about individual CICS transactions then you could handle 
things in the following way.  Have someone write a simple CICS program that 
sends out a screen with the needed information.  Then you could point the 
transaction that is going away to that simple program.

If you are talking about the whole CICS region going away, then I'm not sure
what you could do within Solve.  I guess if the CICS region only supported one
transaction, and you used Solve to log the person on to that region and then
automatically launch the transaction, you could use my suggestion above.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of Tom 
Longfellow
Sent: Wednesday, March 20, 2024 11:32 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Ideas for less-distruptive disruptions - Netmaster:Solve and CICS

Our mainframe is scheduled for termination.  As such, bits and pieces are
being turned off.
Management edicts want "no sudden surprise screens and error messages" when a
function is killed.
A "landing screen" has been proposed that would do the required hand holding 
with messages like "Thanks for playing" "This is gone"  "Call someone who 
cares" and maybe "Counselors are on call for your withdrawal needs"

I am scrambling for ways to implement this: kill one thing and replace it with
another thing that is not dead (yet).

The SOLVE product is basically a session switcher that takes your 3270 terminal 
to another active VTAM application.
I am wondering if there is a way to change the menu item on the switching 
screen to replace it with the "landing screen".  For example, it currently 
connects you to a CICS region.  Is there something else it could do within
Solve to just blast the "landing screen" at them when they select the menu
item?

I am not a CICS programmer; I just start and stop CICS regions.   I am picturing 
some kind of 3270 based transaction that could just present the "landing 
screen" and nothing else.   I would then replace my current welcome menu with 
this new transaction.   Ending this transaction could even be used to initiate 
a sign off from CICS.

Anybody have ideas on how to get from here to there and allow this mainframe
to die politely and with dignity?

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: ASVT & ASID discrepancy mystery

2024-03-14 Thread Paul Feller
Alan, I'm curious how long the LPAR (system) had been up when this issue
happened.

I've seen issues with the number of non-reusable ASIDs growing over time,
related to how long an LPAR is up and how active things are with respect to
cross-memory connects.  What I'm getting at is that if you have some tasks
that stay up for the life of the IPL and have cross-memory connections to
other tasks that get cycled from time to time, your non-reusable ASID count
could grow to the point that you run short.  I see this normally with things
like Db2 subsystems that stay up for the life of the IPL.  At least that’s how
I understand it.

Jim Mulder, please correct me if I'm incorrect in my comments. 


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Alan Haff
Sent: Thursday, March 14, 2024 5:08 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: ASVT & ASID discrepancy mystery

On Thu, 14 Mar 2024 16:49:03 -0500, Alan Haff <alan.h...@microfocus.com> wrote:

>RIDS/IEAVXSRM#L RIDS/#UNKNOWN AB/S0AC7 PRCS/001B REGS/0D218 
>REGS/041B8

Ok, I looked up 001B and yeah, it's just indicating we had an IEA059E.
So no surprise there.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: What happens in HSM when I change a Management Class

2024-03-13 Thread Paul Feller
Gadi, Mike has a good point.  Doing HSM tape recycles could help free up any
tapes with a low number of datasets on them.  This is true for both migration
tapes and backup tapes.  Be warned that if you don't do tape recycles on a
scheduled basis, the process could run for a long time; as an example, I think
that could then impact HSM's processing of recalls from tape.  I believe the
place I last worked at did tape recycles every day.

Also, I should have mentioned the information I pasted in my earlier reply came 
from the "DFSMSdfp Storage Administration" manual for z/OS 2.5.  It can be 
found in "Chapter 8. Defining management classes".


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Mike Schwab
Sent: Wednesday, March 13, 2024 1:30 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: What happens in HSM when I change a Management Class

You should be recycling tapes with a low percentage of active datasets.
This copies the remaining active datasets to a new tape and then marks the old
one as inactive.  You can manually issue a recycle command against a specific
volume.

On Wed, Mar 13, 2024 at 10:56 AM Gadi Ben-Avi  wrote:
>
> Hi Paul,
> Thanks for your detailed explanation.
>
> I would like to delete extra copies of backups, backups that have been on HSM 
> for more than a specified number of days, and as a result of that, tapes that 
> become empty will be deleted.
>
> The end result would be that every backed-up dataset that still exists could 
> have more than one backup, and datasets that do not exist anymore would have 
> only one backup.
>
> Gadi
> ____
> From: IBM Mainframe Discussion List  on 
> behalf of Paul Feller <05aa34d46684-dmarc-requ...@listserv.ua.edu>
> Sent: Wednesday, March 13, 2024 16:09
> To: IBM-MAIN@LISTSERV.UA.EDU 
> Subject: Re: What happens in HSM when I change a Management Class
>
>
> Gadi, I have to ask.  Are trying to delete backups of individual 
> datasets or the actual tapes created during dataset backup processing?
>
> If you are trying to manage the actual backups of individual datasets, 
> have you looked at the different management classes used for the datasets?
>
> The following fields in the management class affect backups taken for 
> an individual dataset.
>
> Retain Days Only Backup Ver:
>  Indicates how many days to keep the most recent backup version of a 
> deleted data set, starting from the day DFSMShsm detects it has been 
> deleted. This attribute applies only when a data set no longer exists 
> on primary (level 0) or migrated (levels 1 and 2) storage. The default 
> is 60. This field does not apply to objects. Backup copies of objects 
> are not retained when the original object is deleted.
>
> Retain Days Extra Backup Vers:
>  Indicates how many days to keep backup versions other than the most 
> recent one, starting from the day backups were created. It applies 
> only when more than one backup version exists, and when a data set has 
> low activity. This attribute applies whether the data set has been 
> deleted or not. The default is 30. The number of extra versions is the 
> number of backup versions minus one. If you specify 1 for Number of 
> Backup Vers, there are no extra versions. For example, if you specify 
> 3 for Number of Backup Vers (Data Set Deleted), the number of "extra" 
> versions for deleted data sets is 2. These 2 versions are managed 
> according to the Retain Days Extra Backup Vers attribute. Any other 
> versions that may have existed when the data set was deleted will be deleted 
> the next time EXPIREBV is processed.
>
> Number of Backup Vers:
>  Specifies the maximum number of backup versions to retain for a data set.
> The default is 2 if the data set still exists and 1 if it has been deleted.
>  Creating a new backup version when the number of backup versions 
> already equals the value specified for the appropriate Number of 
> Backup Vers attribute (Data Set Exists) causes the oldest version of 
> the appropriate type to be deleted.
>  The number of backup versions is used to determine whether OAM should 
> write one or two backup copies of the objects, when you activate the 
> SECONDBACKUPGROUP function for objects using SETOSMC in the CBROAMxx 
> member of PARMLIB. If the number of backup versions is greater than 1 
> and AUTO BACKUP is Y, OAM will create two backup copies. When the 
> original object is expired or deleted, all backup copies are also deleted.
>
>
>
> Paul
>
> -Original Message-
> From: IBM Mainframe Discu

Re: What happens in HSM when I change a Management Class

2024-03-13 Thread Paul Feller
Gadi, I have to ask: are you trying to delete backups of individual datasets,
or the actual tapes created during dataset backup processing?

If you are trying to manage the actual backups of individual datasets, have
you looked at the different management classes used for the datasets?

The following fields in the management class affect backups taken for an
individual dataset.

Retain Days Only Backup Ver:
 Indicates how many days to keep the most recent backup version of a deleted
data set, starting from the day DFSMShsm detects it has been deleted. This
attribute applies only when a data set no longer exists on primary (level 0)
or migrated (levels 1 and 2) storage. The default is 60. This field does not
apply to objects. Backup copies of objects are not retained when the
original object is deleted. 

Retain Days Extra Backup Vers:
 Indicates how many days to keep backup versions other than the most recent
one, starting from the day backups were created. It applies only when more
than one backup version exists, and when a data set has low activity. This
attribute applies whether the data set has been deleted or not. The default
is 30. The number of extra versions is the number of backup versions minus
one. If you specify 1 for Number of Backup Vers, there are no extra
versions. For example, if you specify 3 for Number of Backup Vers (Data Set
Deleted), the number of "extra" versions for deleted data sets is 2. These 2
versions are managed according to the Retain Days Extra Backup Vers
attribute. Any other versions that may have existed when the data set was
deleted will be deleted the next time EXPIREBV is processed.

Number of Backup Vers:
 Specifies the maximum number of backup versions to retain for a data set.
The default is 2 if the data set still exists and 1 if it has been deleted.
 Creating a new backup version when the number of backup versions already
equals the value specified for the appropriate Number of Backup Vers
attribute (Data Set Exists) causes the oldest version of the appropriate
type to be deleted.
 The number of backup versions is used to determine whether OAM should write
one or two backup copies of the objects, when you activate the
SECONDBACKUPGROUP function for objects using SETOSMC in the CBROAMxx member
of PARMLIB. If the number of backup versions is greater than 1 and AUTO
BACKUP is Y, OAM will create two backup copies. When the original object is
expired or deleted, all backup copies are also deleted.



Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Gadi Ben-Avi
Sent: Wednesday, March 13, 2024 6:54 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: What happens in HSM when I change a Management Class

Thanks

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Mark Jacobs
Sent: יום ד 13 מרץ 2024 13:43
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: What happens in HSM when I change a Management Class


I believe it'll take effect during your secondary space management cycle.

Mark Jacobs

Sent from ProtonMail, Swiss-based encrypted email.

GPG Public Key -
https://api.protonmail.ch/pks/lookup?op=get=markjac...@protonmail.com


On Wednesday, March 13th, 2024 at 6:46 AM, Gadi Ben-Avi 
wrote:

> Hi
> when reviewing our HSM configuration we found that the management class used
> for all SMS backups says:
>
> Expire after Days Non-usage . . NOLIMIT  (1 to 93000 or NOLIMIT)
> Expire after Date/Days  . . . . NOLIMIT  (0 to 93000, yyyy/mm/dd or NOLIMIT)
>
> Retention Limit . . . . . . . . NOLIMIT  (0 to 93000 or NOLIMIT)
>
> As I understand it, this means that backups will remain forever.
>
> If I change the values, will HSM start deleting backups?
> Will it happen immediately, during nightly processing or do I have to tell
HSM to do it?
>
> Gadi
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: EXTERNAL EMAIL: ZOS Sending Logs to Sumologic Experience?

2024-03-04 Thread Paul Feller
Steve, to add to what Jerry and Charles have said: I don't have any experience
with Sumologic, but I'm going to guess it will need data sent to it in a
format it understands.  The place that I retired from was using the BMC
product to send data to Splunk.  The BMC product allowed us to pick which SMF
records to look at and which fields in those records to format and send to
Splunk.  We ran an agent on several LPARs to capture data.  One of the SMF
record types we looked at was related to RACF information.  We also looked at
SMF record types related to CICS activity and batch processes.

As a side note to Charles: we started out with the product when it was called
Correlog.  We had looked at several products and went with Correlog.  My
impression is that it is a nice product.

Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Charles Mills
Sent: Monday, March 4, 2024 7:05 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: EXTERNAL EMAIL: ZOS Sending Logs to Sumologic Experience?

Thanks for the shout-out, Jerry! (I was the principal developer of said 
product.) I think BMC now calls the product AMI Defender. (I have no financial 
interest in BMC or the product.)

I am pretty much of an expert on this topic. Feel free to reach out to me 
off-line if you have any questions.

Charles


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Jerry Whitteridge
Sent: Monday, March 4, 2024 12:12 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: EXTERNAL EMAIL: ZOS Sending Logs to Sumologic Experience?

We used a product called Correlog to send syslog/SMF data to Splunk - it has
since been acquired by BMC and I don't know its new name. I don't think you
will have any success in doing this without some agent on the mainframe that
can extract and 
then send the data.

Jerry Whitteridge
Sr Manager Managed Services
jerry.whitteri...@albertsons.com
480 578 7889

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Steve Estle
Sent: Monday, March 4, 2024 11:43 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: EXTERNAL EMAIL: ZOS Sending Logs to Sumologic Experience?

All,

We are embarking on an endeavor to explore sending logs to a tool called
Sumologic (sumologic.com).  For those who are unaware, Sumologic is a
competitor to Splunk and contains a very powerful real-time log parsing and
analytics engine which can be used to build dashboards, alerts, and more.  My
basic question is: has anyone heard of, or actually been involved in, devising
ways to send z/OS logs into Sumologic?  Our initial efforts will be security
related, but for now I am just asking whether anyone has any experience in
this realm at all.  Or maybe you are doing something similar with Splunk?  If
so, you can post in the forum or feel free to reach out to me directly:

Thanks much,

Steve Estle
sest...@gmail.com
303-817-9954

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN 




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Getting rid of a z14 zr1 - any value in the host cards?

2024-02-27 Thread Paul Feller
If you have access to IBM Redbooks, you can look for things like the IBM z15
(8561) Technical Guide or IBM z15 (8562) Technical Guide to get information
about which I/O features can be carried over from one model to the next.
There are similar manuals for the z16-A01 and z16-A02.

But as Ed has mentioned, depending on the feature code, the card(s) may not
work.

Paul


_
From: IBM Mainframe Discussion List  On Behalf Of Ed Jaffe
Sent: Monday, February 26, 2024 10:27 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Getting rid of a z14 zr1 - any value in the host cards?


On 2/26/2024 7:54 PM, Laurence Chiu wrote:
> Somebody said to me the z14 cards cannot be used in a Z15 or z16
> because of a difference in form factor!  This seemed like an
> ill-informed comment to me since all the cards probably use some sort
> of PCI connector and IBM would not change them between model Z's. But
> it would be nice to be able to quote some authoritative source as this
> person appears to have the ear of some of our senior managers.

Most cards work, some cards don't.

We put a bunch of cards from our decommissioned z13s into our z15-T02 --
skipping past the z14 generation entirely.

Doing so was far, Far, FAR cheaper than buying them "net new..."

--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Something keeps releasing space on a large (annual) DS

2024-02-21 Thread Paul Feller
I need to correct myself.  The limit is 16 extents per volume for standard 
datasets and for extended it is 123 extents per volume.  I guess I was typing 
faster than my brain was thinking.


From the DFSMSdfp Storage Administration manual (z/OS 2.5):

The maximum number of extents per volume and the maximum number of volumes per 
data set vary
depending on data set type as follows:
 • A basic-format or large-format sequential data set and a direct data set can 
have up to 16 extents
 per volume and up to 59 volumes.
 • An extended-format sequential data set can have up to 123 extents per volume 
and up to 59
 volumes. Either all or none of these volumes can be arranged into stripes for 
parallel processing.
 • A non-system-managed VSAM data set can have up to 255 extents per component 
and up to 59
 volumes.
 • A system-managed VSAM data set can have up to 255 extents per stripe and up 
to 59 volumes. This
 extent limit can be removed if the associated data class has extent constraint 
removal specified. Up
 to 16 volumes at a time can be read or written in parallel due to striping.
 • A PDS can have up to 16 extents and only one volume.
 • A PDSE can have up to 123 extents and only one volume.
 • An HFS data set can have up to 123 extents per volume and up to 59 volumes.

Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Michael Watkins
Sent: Wednesday, February 21, 2024 4:15 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Something keeps releasing space on a large (annual) DS

DSNTYPE=LARGE allows the 65,535 track (4,369 cylinder) limit to be exceeded. 
This should be restricted to SPOOL datasets and the like.

DSNTYPE=EXTENDED does NOT allow this size limit to be exceeded.


-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Paul Feller
Sent: Wednesday, February 21, 2024 4:10 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Something keeps releasing space on a large (annual) DS


Bob, sorry we should have answered some of your questions at the end of your 
email

Let me start by saying your storage management team should be able to answer 
all your questions.  That said I'll answer some of your questions based on what 
I know.  These are general answers.  The SMS environment you are running under 
could have overrides that may affect what happens.

You asked "What IS extended PS, anyway?  I'm told it allows more than 16 
extents, but a) how many more? And b) how else is it different?"

Basically, an extended format dataset is allowed to be bigger than a standard 
(or none extended) in several ways.
Extended format allows for 123 volumes where non-extended is 16 volumes.
Extended format allows for far larger allocation of a dataset on one volume 
then a non-extended.  If I recall a non-extended max size is 65,535 tracks.


As far as I know using 3.2 to allocate the data should not be an issue.  It 
should (as far as I know) drive the same SMS ACS routines that are used in 
batch.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of Bob 
Bridges
Sent: Wednesday, February 21, 2024 11:45 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Something keeps releasing space on a large (annual) DS

I'm not a sysprog (just a security geek), but I can at least allocate datasets, 
and at the start of this year it fell to me to allocate a new dataset in which 
are logged all changes made in the security system.  Past year's log are in the 
12000-track range, so I started with a smaller allocation while I took the time 
to talk to our sysprog about space requirements.  It's populated from a daily 
production job, by the way.

When I re-allocated it, on his advice I tried a multi-volume and extended 
allocation (PS-E).  Almost immediately the job started bombing, claiming that 
the first four volumes it tried didn't have the necessary space to add an 
extension.  The sysprog is puzzled - says it should have looked in volumes that 
DO have the space, not the ones that don't.

Second attempt (I don't count the temporary smaller allocation) I kept PS-E but 
dropped the multi-volume requirement.  I've never done one of those anyway, and 
don't trust 'em.  The system promptly dropped the extra tracks I allocated, and 
a day or two later the job started bombing with a B37-04.

Third attempt: Forget PS-E (I'm unfamiliar with that too) and just used 
SPACE=(TRK,(9000,1000)).  That seemed to work for a whole week, but I just 
noticed that something, somewhere, has released extra space AGAIN; 3.4 tells me 
it's now 1960 tracks and 83%.  The job isn’t bombing yet; some time later in 
the year I'm guessing it's going to.

Pardon my frustration: WHAT THE HECK IS GOING ON?  Why does it keep releasing 
space although I never specified RLSE?  The sysprog

Re: Something keeps releasing space on a large (annual) DS

2024-02-21 Thread Paul Feller
Bob, sorry we should have answered some of your questions at the end of your 
email

Let me start by saying your storage management team should be able to answer 
all your questions.  That said I'll answer some of your questions based on what 
I know.  These are general answers.  The SMS environment you are running under 
could have overrides that may affect what happens.

You asked "What IS extended PS, anyway?  I'm told it allows more than 16 
extents, but a) how many more? And b) how else is it different?"

Basically, an extended format dataset is allowed to be bigger than a standard
(or non-extended) one in several ways.
Extended format allows for 123 volumes where non-extended is 16 volumes.
Extended format allows for far larger allocation of a dataset on one volume
than a non-extended one.  If I recall, a non-extended max size is 65,535 tracks.


As far as I know using 3.2 to allocate the data should not be an issue.  It 
should (as far as I know) drive the same SMS ACS routines that are used in 
batch.
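
For what it's worth, here is a minimal, untested sketch of how the yearly
allocation could ask for extended format explicitly from the REXX side.  The
data class name (EXTLOG), the dataset naming pattern and the space numbers are
made up for illustration - your storage team would supply the real values, or
you could simply let the ACS routines assign them:

/* REXX - hypothetical one-time allocation of the yearly log dataset */
/* EXTLOG and the DSN pattern are placeholders, not real names.      */
yr  = Left(Date('S'),4)                /* current year, e.g. 2024    */
dsn = "'SEC.CHGLOG.Y"yr"'"             /* made-up naming convention  */
"ALLOC DDN(CHG$$OT) DSN("dsn") NEW CATALOG REUSE",
  "SPACE(800,100) CYLINDERS RECFM(V,B) LRECL(304)",
  "DATACLAS(EXTLOG) DSNTYPE(EXTREQ)"
Say 'ALLOC RC='rc
"FREE DDN(CHG$$OT)"

If your TSO/E level does not accept DSNTYPE(EXTREQ) on ALLOCATE, the data class
itself can carry the extended-format attribute instead.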


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of Bob 
Bridges
Sent: Wednesday, February 21, 2024 11:45 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Something keeps releasing space on a large (annual) DS

I'm not a sysprog (just a security geek), but I can at least allocate datasets, 
and at the start of this year it fell to me to allocate a new dataset in which 
are logged all changes made in the security system.  Past year's log are in the 
12000-track range, so I started with a smaller allocation while I took the time 
to talk to our sysprog about space requirements.  It's populated from a daily 
production job, by the way.

When I re-allocated it, on his advice I tried a multi-volume and extended 
allocation (PS-E).  Almost immediately the job started bombing, claiming that 
the first four volumes it tried didn't have the necessary space to add an 
extension.  The sysprog is puzzled - says it should have looked in volumes that 
DO have the space, not the ones that don't.

Second attempt (I don't count the temporary smaller allocation) I kept PS-E but 
dropped the multi-volume requirement.  I've never done one of those anyway, and 
don't trust 'em.  The system promptly dropped the extra tracks I allocated, and 
a day or two later the job started bombing with a B37-04.

Third attempt: Forget PS-E (I'm unfamiliar with that too) and just used 
SPACE=(TRK,(9000,1000)).  That seemed to work for a whole week, but I just 
noticed that something, somewhere, has released extra space AGAIN; 3.4 tells me 
it's now 1960 tracks and 83%.  The job isn’t bombing yet; some time later in 
the year I'm guessing it's going to.

Pardon my frustration: WHAT THE HECK IS GOING ON?  Why does it keep releasing 
space although I never specified RLSE?  The sysprog doesn't know either - but 
he's an external contractor who just took over the system a few months ago and 
if it's something simple he may not be aware yet of ... I dunno, something in 
SMS maybe?

Some wrinkles that may or may not be relevant:

1) The dataset is written using a REXX exec that calculates the DSN by 
reference to the current year.  This relieves folks from having to update the 
JCL every year, but maybe something about the way the exec does the allocate is 
causing the problem?  I'm guessing not, because as far as I now this job has 
run correctly for years.  But just in case:

  "ALLOC DDN(CHG$$OT) DSN('') MOD CATALOG REUSE",
  "SPACE(300,30) CYLINDERS RECFM(V,B) LRECL(304) BLKSIZE(27998)"

2) I don't know anything about SMS, but could something there be releasing 
space?

3) What IS extended PS, anyway?  I'm told it allows more than 16 extents, but 
a) how many more? And b) how else is it different?

4) I allocated the dataset each time using not batch JCL but 3.2 ... expecting 
there's no difference.

---
Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313

/* Law #6 of combat operations:  If it's stupid but it works, it isn't stupid. 
*/

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Something keeps releasing space on a large (annual) DS

2024-02-21 Thread Paul Feller
Bob, as Steve said, you might want to talk to your storage management team.  What
I think is happening is your dataset is getting a management class that has 
Partial Release set and then HSM is doing space management and releasing any 
unused space.  I've seen this happen before and it has happened to me.

Good luck.

Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Steve Thompson
Sent: Wednesday, February 21, 2024 1:59 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Something keeps releasing space on a large (annual) DS

Ask the storage guys about the preferred method to allocate a file that will 
get very large during production runs. And you don't want production to fail 
with a storage ABEND. And let them know about the current behavior. They may 
have made a change that is the cause of your problem.

Then you can modify the REXX code to include the class info they give you.

Also, I suggest you allocate in CYLS not TRKS, in case they are doing
compression of space on data sets allocated in tracks...

I've seen various odd things done to recover space for files that can't be on 
the tracks after 65xxx (forgot the last track accessible by the old NOTE/POINT 
that causes products like Panvalet to break).

And if I remember correctly, compression/decompression is done by the access 
method. Double check to make sure this isn't a VSAM thing.

I've not had to deal with these features for a few years, so things slip
from one's mind.

Steve Thompson

On 2/21/2024 2:33 PM, Bob Bridges wrote:
> Ooh, now that's interesting!  The content of this file would lend 
> itself well to compression - all alphanumeric with a few parens, 
> colons and the like.  But what happens when someone needs to view it?  
> Does it compress automatically or is another step required?
>
> It's not something I can bring up now, because everyone's busy with a 
> z/OS upgrade.  But next month...
>
> ---
> Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313
>
> /* For Sale: Parachute.  Only used once, never opened, small stain. */
>
> -Original Message-
> From: IBM Mainframe Discussion List  On 
> Behalf Of Michael Oujesky
> Sent: Wednesday, February 21, 2024 13:49
>
> You might consider SMS compression to reduce the physical size of the file.
> If you do, change the BLKSIZE to 32760 as SMS compression writes full 
> tracks and the BLKSIZE becomes logical (the size of the buffer used in 
> passing data to/from the application).
>
> --- At 11:44 AM 2/21/2024, Bob Bridges wrote:
>> I'm not a sysprog (just a security geek), but I can at least allocate 
>> datasets, and at the start of this year it fell to me to allocate a 
>> new dataset in which are logged all changes made in the security system.
>> Past year's log are in the 12000-track range, so I started with a 
>> smaller allocation while I took the time to talk to our sysprog about 
>> space requirements.  It's populated from a daily production job, by 
>> the way.
>>
>> When I re-allocated it, on his advice I tried a multi-volume and 
>> extended allocation (PS-E).  Almost immediately the job started 
>> bombing, claiming that the first four volumes it tried didn't have 
>> the necessary space to add an extension.  The sysprog is puzzled - 
>> says it should have looked in volumes that DO have the space, not the 
>> ones that don't.
>>
>> Second attempt (I don't count the temporary smaller allocation) I 
>> kept PS-E but dropped the multi-volume requirement.  I've never done 
>> one of those anyway, and don't trust 'em.  The system promptly 
>> dropped the extra tracks I allocated, and a day or two later the job 
>> started bombing with a B37-04.
>>
>> Third attempt: Forget PS-E (I'm unfamiliar with that too) and just 
>> used SPACE=(TRK,(9000,1000)).  That seemed to work for a whole week, 
>> but I just noticed that something, somewhere, has released extra 
>> space AGAIN;
>> 3.4 tells me it's now 1960 tracks and 83%.  The job isn't bombing 
>> yet; some time later in the year I'm guessing it's going to.
>>
>> Pardon my frustration: WHAT THE HECK IS GOING ON?  Why does it keep 
>> releasing space although I never specified RLSE?  The sysprog doesn't 
>> know either - but he's an external contractor who just took over the 
>> system a few months ago and if it's something simple he may not be 
>> aware yet of ... I dunno, something in SMS maybe?
>>
>> Some wrinkles that may or may not be relevant:
>>
>> 1) The dataset is written using a REXX exec that calculates the DSN 
>> by reference to the current year.  This relieves folks from having to 
>> update the JCL every year, but maybe something about the way the exec 
>> does the allocate is causing the problem?  I'm guessing not, because 
>> as far as I now this job has run correctly for years.  But just in case:
>>
>>"ALLOC DDN(CHG$$OT) DSN('') MOD CATALOG REUSE",
>>"SPACE(300,30) CYLINDERS RECFM(V,B) LRECL(304) BLKSIZE(27998)"
>>
>> 2) I don't know anything about SMS, but 

Re: Nanosecond resolution timestamps for HLL's?

2024-02-18 Thread Paul Feller
Peter, I'll start by saying I don't have access to a system that I can try this
on.

Have you looked at the FORMATTED-TIME function?  This looks to be part of COBOL 
6.3.  My concern would be that you still might not get the uniqueness you are 
looking for.

FORMATTED-TIME: The FORMATTED-TIME function uses a format to convert a value 
that represents seconds past midnight to a formatted time of day in the 
requested format.


From the Enterprise COBOL for z/OS 6.3 Language Reference manual:

Format
FUNCTION FORMATTED-TIME ( argument-1 argument-2 argument-3)

argument-1
Must be a national, a UTF-8, or an alphanumeric literal.
The content of argument-1 must be a time format. For details, see “Date and 
time formats” on page 468.

argument-2
Must be a numeric value in standard numeric time form. For details, see 
“Standard numeric time form” on page 468.
A value in standard numeric time form is a numeric value that represents 
seconds past midnight.

argument-3
Argument-3 is an integer representation of the offset from Coordinated 
Universal Time (UTC) expressed in minutes. If argument-3 is specified, the 
magnitude of the value must be less than or equal to 1439. For details, see 
“UTC offset value” on page 468.
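
Below is a minimal, untested sketch of what a call might look like.  The data
names, picture clauses and the "hh:mm:ss.sss" format string are my own guesses,
not taken from the manual text above, and because SECONDS-PAST-MIDNIGHT returns
whole seconds the fractional digits would always come back as zeros - which is
exactly the uniqueness concern I mentioned:

      * Sketch only (untested); names and format string are assumptions.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. TIMEDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-SECONDS   PIC S9(9)V9(9) COMP-3.
       01  WS-FMT-TIME  PIC X(20).
       PROCEDURE DIVISION.
           COMPUTE WS-SECONDS = FUNCTION SECONDS-PAST-MIDNIGHT
      * Whole seconds only, so the ".sss" digits below will be zero.
           MOVE FUNCTION FORMATTED-TIME
                ("hh:mm:ss.sss", WS-SECONDS)
             TO WS-FMT-TIME
           DISPLAY "TIMEDEMO FORMATTED-TIME=" WS-FMT-TIME
           GOBACK.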


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Peter Farley
Sent: Sunday, February 18, 2024 6:23 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Nanosecond resolution timestamps for HLL's?

I have been reviewing all the documentation I can find to provide nano-second 
resolution timestamps from a calling HLL batch program.  STCK and STCKE 
instructions of course provide this (and more) resolution, but using them from 
any HLL besides C/C++ requires an assembler subroutine (however simple that may 
be for those of us who are already comfortable in assembler).  In shops where 
any new assembler functionality is proscribed or strongly discouraged can't or 
would strongly prefer not to use assembler for this functionality.

The only HLL-callable function already provided in z/OS that I can find that 
provides anything near that resolution is the LE Callable Services function 
CEEGMT, but two calls to that service from a COBOL program in a row separated 
by only a few calculations and a DISPLAY to SYSOUT produce identical values.  
This is not good enough for high-volume processing needs.  Every request for a 
time value needs to generate a new higher value.

Is there any other place I am not yet looking which provides nano-second 
resolution like STCK/STCKE and the linux function clock_gettime() besides an 
assembler invocation of STCK/STCKE?  z/OS Unix has not yet implemented the 
clock_gettime() function anyway, so that is off the table.  The calling HLL 
here will be COBOL, so the C/C++ builtin functions "__stck" and "__stcke" are 
not available.  Would that they were, but they are not at this time.  (Maybe 
that calls for a new "idea" to IBM . . . ?)

HTH for any pointers or RTFM you can provide.

Peter

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Tn3270 back door

2024-02-16 Thread Paul Feller
This is why I have set up a few 3270 sessions for each lpar in the OSA-ICC
environment.  That was my back door into the lpars if something went wrong with 
TCPIP/TN3270 and associated stuff.  As for updating the cert I'm sorry I can't 
help with that.  That type of activity is handled by only a small group of 
people and I was not part of that group.

If you have session manager running on another lpar that allows cross access to 
the test lpar, I think that might bypass the whole cert stuff.  Not 100% sure 
about it.

Good luck getting things fixed.

Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Keith Gooding
Sent: Friday, February 16, 2024 8:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Tn3270 back door

My understanding is that a policy agent refresh only reloads the definitions if 
something has changed in the policy. I have certainly had a problem when a 
keyring had been changed - policy agent did not recognise a change so the 
cached keyring remains. The solution was to increment the connection instance 
value in the policy before the refresh. Have you tried restarting pagent ?

Keith

> On 16 Feb 2024, at 10:54, James Cradesh 
> <05a6576c6fa2-dmarc-requ...@listserv.ua.edu> wrote:
> 
> I’m locked out of my test lpar.  The ssl cert expired.  A new cert was 
> uploaded but the tn3270 doesn’t see it. I did refresh Pagent but it didn’t 
> help.  How do you get around this situation?  Is there a way to enable the 
> non ssl port?
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: How can I keep JES2 from being SYSPLEXed?

2024-01-21 Thread Paul Feller
Okay this wakes up some retired brain cells.  Because tasks communicate across 
XCF (even when you don't know about it) they have to have a unique identifier 
for things to work properly.  Some tasks will create a unique identifier by 
default and some will not.  I forgot about that little item.  I'm glad things 
are now working for you.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Wendell Lovewell
Sent: Sunday, January 21, 2024 3:14 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: How can I keep JES2 from being SYSPLEXed?

Thanks for your help Bruce and Paul.

I was able to ask this to the IBM support team and they told me about the 
"XCFGRPNM" parm on the JES2 MASDEF statement.  

I hadn't specified a value for this, so both were using the default of "JES2".  
This was causing the conflict, even though my intent was that they not be part 
of the same MAS. 

Adding “XCFGRPNM=someuniqueval” to the MASDEF statements allowed both systems 
to come up, apparently independently of each other.  (I used JES2Z3 on my S0W3 
system and JES2Z4 on my S0W1 system.)

I did a cold start on the S0W3 JES, and these Groups/Members were used:
GROUP SYSJES MEMBER S0W3  
GROUP JES2Z3  MEMBER JES2$S0W3 
GROUP SYSJ2$XD MEMBER JES2Z3$S0W3$  

I did not do a cold start on the S0W1 JES, and I believe it used the 
former/default value: 
GROUP SYSJES MEMBER S0W1
GROUP JES2  MEMBER JES2$S0W1   
GROUP SYSJ2$XD MEMBER JES2$S0W1$$$


Thanks again,
Wendell

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Masking SMF data internally

2024-01-21 Thread Paul Feller
Jake, I agree you need to identify what record types are needed for the
sizing operation.  After you know which record types (and subtypes) you may
not need to do anything.  As an example, I can't think of any sensitive data
that might be in the SMF type 7x records.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Binyamin Dissen
Sent: Sunday, January 21, 2024 1:30 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Masking SMF data internally

On Sun, 21 Jan 2024 09:58:29 +0400 Jake Anderson 
wrote:

:>We have a requirement of sharing our SMF data to vendor for a sizing
:>operation of our hardware connected to our mainframe

:>Our organization has a policy of masking the critical values before
sharing :>it. I see SMF datasets are are editable from ISPF.

:>Is there a way or someone has undergone this exercise of masking the
:>confidential values inside SMF output Dataset?

First identify which types and subtypes are required. That will reduce the
job and may make it trivial.

--
Binyamin Dissen  http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: How can I keep JES2 from being SYSPLEXed?

2024-01-19 Thread Paul Feller
I'm not sure I have an answer for you at this time.  But I do have a few 
questions.

Are the JES2 checkpoint datasets for the two systems the same name?
Are the VOLSERs the same name?

I don't have access to any z/OS anymore but when I did, we had three different 
JES2 MAS in one sysplex and didn't have any issues.  The checkpoint (and spool)
datasets had different names, and the volumes everything was on had
different VOLSERs.  We even had a checkpoint structure for each MAS in our CF
environment.

We had full shared DASD between the different z/OS lpars.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Wendell Lovewell
Sent: Friday, January 19, 2024 3:33 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: How can I keep JES2 from being SYSPLEXed?

I have two systems that I want to share dasd and allow for VSAM RLS between the 
systems, but I don’t want to SYSPLEX JES2.  I have MASDEF SHARED=NOCHECK.  What 
else can I do to un-sysplex JES?  
 
Is it safe to delete the JES2 and/or SYSJES group definitions from XCF?  I 
don't have a JES_CKPT_1 structure defined at all.
 
Background:
 
I’m trying to add a z/OS 3.1 ( S0W1) to a sysplex where z/OS 2.5 
(Z3- S0W3) is running, but ONLY for VSAM RLS.  I don’t want to “plex” 
anything more than is needed for VSAM/RLS—not JES, not SYSLOG, not VTAM, not 
SMF or anything else.   

The systems are running fine stand-alone (each with their own set of disks, 
including the volume with the CDS), but when I try to use the same CDS disk 
between the systems, the first system comes up fine, but JES2 on the system 
brought up last complains that it cannot find the checkpoint datasets of the 
OTHER system: 

Z4  : H *$HASP284 JES2 INITIALIZATION CHECKPOINT DIALOG STARTED
Z4  : H *$HASP277 JES2 CAN NOT FIND OR USE THE CHECKPOINT DATA SET(S)
Z4  : H   THAT WERE IN USE WHEN THE MAS WAS LAST ACTIVE BECAUSE 
VOLUME
Z4  : H   FOR CKPT1 NOT MOUNTED
Z4  : H *
Z4  : H   CURRENT CHECKPOINT VALUES:
Z4  : H   CKPT1=(DSNAME=SYS1.HASPCKPT,VOLSER=B5SYS1,INUSE=YES),
Z4  : H   CKPT2=(DSNAME=SYS1.HASPCKP2,VOLSER=B5CFG1,INUSE=YES)
Z4  : H *
Z4  : H   VALID RESPONSES ARE:
Z4  : HCKPT1=...  - UPDATE CURRENT CHECKPOINT SPECIFICATION WITH
Z4  : HCKPT2=...THE VALUES USED WHEN JES2 MAS WAS LAST 
ACTIVE
Z4  : H'RECONFIG' - THE CHECKPOINT DATA SET(S) THAT WERE IN USE
Z4  : H WHEN JES2 WAS LAST ACTIVE ARE NO LONGER
Z4  : H AVAILABLE (ALL-MEMBER WARM START ONLY)
Z4  : H'CONT' - ATTEMPT INITIALIZATION WITH THE VALUES 
LISTED
Z4  : H'TERM' - TERMINATE JES2 INITIALIZATION ON THIS MEMBER
Z4  : H *04 $HASP272 ENTER RESPONSE (ISSUE D R,MSG=$HASP277 FOR RELATED MSG)

"Z4" is the z/OS 3.1 system.  The “B5SYS1” and “B5CFG1” volumes belong to the 
z/OS 2.5 system.  If I bring up z/OS 3.1 with z/OS 2.5 down and then try to 
bring up z/OS 2.5, the same message is displayed except it identifies the 
volumes belonging to the other system.

I can RECONFIG with the correct CKPT volumes, but it always ends with:

Z4  :  $HASP478 INITIAL CHECKPOINT READ IS FROM CKPT1
Z4  :   (SYS1.HASPCKPT ON A3SYS1)
Z4  :   LAST WRITTEN THURSDAY, 18 JAN 2024 AT 21:57:08 (GMT)

Z4  :  $HASP792 JES2 HAS JOINED XCF GROUP JES2 THAT INCLUDES ACTIVE MEMBERS
Z4  :   THAT ARE NOT PART OF THIS MAS
Z4  :   MEMBER=JES2$S0W3,REASON=DIFFERENT COLD START TIME

Z4  :  $HASP428 CORRECT THE ABOVE PROBLEMS AND RESTART JES2
Z4  :  IXZ0002I CONNECTION TO JESXCF COMPONENT DISABLED,
Z4  :   GROUP JES2 MEMBER JES2$S0W1
Z4  :  $HASP9085 JES2 MONITOR ADDRESS SPACE STOPPED FOR JES2
Z4  :  $HASP085 JES2 TERMINATION COMPLETE

Like I said, I don't want a MAS at all--each JES2 should be separate.  


(This has already gone too long, but here are some displays that might help:)

Both JES parms use the same MASDEF definition: 
MASDEF   SHARED=NOCHECK,   
 CKPTLOCK=INFORM,  
 RESTART=NO,   
 DORMANCY=(25,300),
 HOLD=10,  
 LOCKOUT=1200  
 
Z3 $dmasdef  (If other system is down, after a cold start.)
   $HASP843 MASDEF
> $HASP843 MASDEF  OWNMEMB=S0W3,AUTOEMEM=OFF,CKPTLOCK=INFORM,
   $HASP843 COLDTIME=(2022.313,18:50:45),COLDVRSN=z/OS 2.5,
   $HASP843 ENFSCOPE=SYSPLEX,DORMANCY=(25,300),HOLD=10,
> $HASP843 LOCKOUT=1200,RESTART=NO,SHARED=NOCHECK,  <==
   $HASP843 SYNCTOL=120,WARMTIME=(2024.018,22:05:36),
   $HASP843 XCFGRPNM=JES2,QREBUILD=0,CYCLEMGT=MANUAL,
   $HASP843 ESUBSYS=HASP

Z4 $DMASDEF  (If other system is down, after a cold start.)
Z4  :  $HASP843 MASDEF
Z4  :  $HASP843 MASDEF  OWNMEMB=S0W1,AUTOEMEM=OFF,CKPTLOCK=INFORM,
Z4  :  $HASP843 

Re: Questions about COBOL debugging lines in subroutines

2024-01-01 Thread Paul Feller
Peter, I'm glad it worked.  Don't you just hate it when you try something and
it fails, only to try again later and have it work?  I've had that happen to me a
time or two over the years.


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Farley, Peter
Sent: Monday, January 1, 2024 1:05 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Questions about COBOL debugging lines in subroutines

Steve and Paul,

I tried using the “SOURCE-COMPUTER” paragraph and WITH DEBUGGING clause in the 
subroutine in an earlier iteration of this testing, but at that time I saw that 
the compiler did not allow that – I gave a severe error if more than one 
CONFIGURATION SECTION occurred in the same compiler input file.

However, I just re-added the CONFIGURATION SECTION and WITH DEBUGGING clause to 
the subroutine as posted earlier and it Just Works (tm).

Probably PEBCAK.  Mea culpa for wasting bandwidth.

Peter

From: IBM Mainframe Discussion List  On Behalf Of 
Steve Thompson
Sent: Sunday, December 31, 2023 8:10 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Questions about COBOL debugging lines in subroutines


Each program/subprogram, that you need to debug, you must specify

in its source



"WITH DEBUGGING MODE"



That "Activates a compile-time switch for debugging lines written

in the source text.



"A debugging line is a statement that is compiled only when the

compile-time switch is activated."



So If you only need to debug a subprogram, then you only compile

and link it with the debug logic generated/activated.



HTH

Steve Thompson



On 12/31/2023 7:22 PM, Paul Feller wrote:

> Peter, I'll start by saying I've never used this option.  It does 
> sound

> interesting.  From what I have read would you not also need the "WITH

> DEBUGGING MODE" setup in the called program.

>

>

> Paul

>

> -Original Message-

> From: IBM Mainframe Discussion List 
> mailto:IBM-MAIN@LISTSERV.UA.EDU>> On Behalf 
> Of

> Farley, Peter

> Sent: Sunday, December 31, 2023 5:50 PM

> To: IBM-MAIN@LISTSERV.UA.EDU<mailto:IBM-MAIN@LISTSERV.UA.EDU>

> Subject: Re: Questions about COBOL debugging lines in subroutines

>

> P.S. - COBOL compiler is Enterprise COBOL V6.4.

>

> From: IBM Mainframe Discussion List 
> mailto:IBM-MAIN@LISTSERV.UA.EDU>> On Behalf 
> Of

> Farley, Peter

> Sent: Sunday, December 31, 2023 6:46 PM

> To: IBM-MAIN@LISTSERV.UA.EDU<mailto:IBM-MAIN@LISTSERV.UA.EDU>

> Subject: Questions about COBOL debugging lines in subroutines

>

>

> I have a little mystery concerning debugging lines ("D" in column 7) 
> in

> COBOL subroutines compiled in the same input file as the main program.

> Sample output and code are pasted below.

>

> The execution JCL for this sample program includes a CEEOPTS DD with 
> the LE

> "DEBUG" option set so debugging lines SHOULD display on SYSOUT.  In my

> little test, only the debugging line in the main program displays on SYSOUT.

>

> Q1: Can anyone tell me why the debugging line in the subroutine does 
> not

> execute at run time?

>

> Q2: Is there any way I can adjust the code or the compile process to 
> cause

> the subroutine debugging line to execute at run time?

>

> Peter

>

> Sample SYSOUT output:

>

> DBGSAMPL I=+3,J=+4,K=+2

>

> Sample COBOL code compiled as a single SYSIN file to the compiler, 
> using

> options 'AR(EX),DS(S),NOSEQ':

>

> IDENTIFICATION DIVISION.

> PROGRAM-ID. DBGSAMPL.

> ENVIRONMENT DIVISION.

> CONFIGURATION SECTION.

> SOURCE-COMPUTER.

> Z-SYSTEM

> WITH DEBUGGING MODE

> .

> DATA DIVISION.

> LOCAL-STORAGE SECTION.

> 01  IPIC S9(9) BINARY VALUE 1.

> 01  JPIC S9(9) BINARY VALUE 2.

> 01  KPIC S9(9) BINARY VALUE 3.

> PROCEDURE DIVISION.

> MAIN-PARAGRAPH.

> CALL "SUBSAMP1" USING I, J, K

>DDISPLAY "DBGSAMPL I=" I ",J=" J ",K=" K

> GOBACK

> .

> END PROGRAM DBGSAMPL.

>

> IDENTIFICATION DIVISION.

> PROGRAM-ID. SUBSAMP1.

> ENVIRONMENT DIVISION.

> DATA DIVISION.

> LINKAGE SECTION.

> 01  I1   PIC S9(9) BINARY VALUE 1.

> 01  J1   PIC S9(9) BINARY VALUE 2.

> 01  K1   PIC S9(9) BINARY VALUE 3.

> PROCEDURE DIVISION USING I1, J1, K1.

> MAIN-PARAGRAPH.

> MOVE K1 TO I1

> MOVE J1 TO K1

> MOVE 4  TO J1

>DDISPLAY "SUBSAMP1 I=" I1 "

Re: Questions about COBOL debugging lines in subroutines

2023-12-31 Thread Paul Feller
Peter, I'll start by saying I've never used this option.  It does sound
interesting.  From what I have read, would you not also need the "WITH
DEBUGGING MODE" setup in the called program?


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Farley, Peter
Sent: Sunday, December 31, 2023 5:50 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Questions about COBOL debugging lines in subroutines

P.S. - COBOL compiler is Enterprise COBOL V6.4.

From: IBM Mainframe Discussion List  On Behalf Of
Farley, Peter
Sent: Sunday, December 31, 2023 6:46 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Questions about COBOL debugging lines in subroutines


I have a little mystery concerning debugging lines ("D" in column 7) in
COBOL subroutines compiled in the same input file as the main program.
Sample output and code are pasted below.

The execution JCL for this sample program includes a CEEOPTS DD with the LE
"DEBUG" option set so debugging lines SHOULD display on SYSOUT.  In my
little test, only the debugging line in the main program displays on SYSOUT.

Q1: Can anyone tell me why the debugging line in the subroutine does not
execute at run time?

Q2: Is there any way I can adjust the code or the compile process to cause
the subroutine debugging line to execute at run time?

Peter

Sample SYSOUT output:

DBGSAMPL I=+3,J=+4,K=+2

Sample COBOL code compiled as a single SYSIN file to the compiler, using
options 'AR(EX),DS(S),NOSEQ':

   IDENTIFICATION DIVISION.
   PROGRAM-ID. DBGSAMPL.
   ENVIRONMENT DIVISION.
   CONFIGURATION SECTION.
   SOURCE-COMPUTER.
   Z-SYSTEM
   WITH DEBUGGING MODE
   .
   DATA DIVISION.
   LOCAL-STORAGE SECTION.
   01  IPIC S9(9) BINARY VALUE 1.
   01  JPIC S9(9) BINARY VALUE 2.
   01  KPIC S9(9) BINARY VALUE 3.
   PROCEDURE DIVISION.
   MAIN-PARAGRAPH.
   CALL "SUBSAMP1" USING I, J, K
  DDISPLAY "DBGSAMPL I=" I ",J=" J ",K=" K
   GOBACK
   .
   END PROGRAM DBGSAMPL.

   IDENTIFICATION DIVISION.
   PROGRAM-ID. SUBSAMP1.
   ENVIRONMENT DIVISION.
   DATA DIVISION.
   LINKAGE SECTION.
   01  I1   PIC S9(9) BINARY VALUE 1.
   01  J1   PIC S9(9) BINARY VALUE 2.
   01  K1   PIC S9(9) BINARY VALUE 3.
   PROCEDURE DIVISION USING I1, J1, K1.
   MAIN-PARAGRAPH.
   MOVE K1 TO I1
   MOVE J1 TO K1
   MOVE 4  TO J1
  DDISPLAY "SUBSAMP1 I=" I1 ",J=" J1 ",K=" K1
   GOBACK
   .
   END PROGRAM SUBSAMP1.
--
This message and any attachments are intended only for the use of the
addressee and may contain information that is privileged and confidential.
If the reader of the message is not the intended recipient or an authorized
representative of the intended recipient, you are hereby notified that any
dissemination of this communication is strictly prohibited. If you have
received this communication in error, please notify us immediately by e-mail
and delete the message and any attachments from your system.

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: What is the PDS command?

2023-12-15 Thread Paul Feller
Greetings Bob,

I was looking through my old JCL library and ran across several examples of
scans using ISRSUPC.  Depending on what you want to do you could try
ISRSUPC.  If you have access to JOBSCAN you could try it.  If your client has
DAF, you can use that to scan SMF records to see if any executing jobs are
touching the dataset.


//SEARCH02 EXEC PGM=ISRSUPC,PARM=('SRCHCMP,ANYC,LPSF')
//NEWDD    DD DSN=D0PCPN.JCLLIB.CA7PROD,DISP=(SHR,KEEP,KEEP)
//         DD DSN=D0PCPN.JCLLIB.OVERRIDE,DISP=(SHR,KEEP,KEEP)
//         DD DSN=D0PCPN.JCLLIB.ALTERNAT,DISP=(SHR,KEEP,KEEP)
//         DD DSN=D0PCPN.JCLLIB.FREEZE,DISP=(SHR,KEEP,KEEP)
//         DD DSN=D0PCPN.JCLLIB.ABEND,DISP=(SHR,KEEP,KEEP)
//OUTDD    DD SYSOUT=X
//SYSIN    DD *
  SRCHFOR  'UNIT=TAPE'
/*


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
John Pratt
Sent: Friday, December 15, 2023 5:09 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: What is the PDS command?

Hi Bob,

If I remember correctly =3.14 has a batch option and you can concatenate all
your JCL libraries into the generated job.

John.

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Bob Bridges
Sent: Saturday, 16 December 2023 8:55 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: What is the PDS command?

Long ago I wrote - I'm pretty sure I wrote - a REXX exec that would do a
3.14 search through multiple libraries for a character string.  I'm looking
for it now, and I find one in my archives that uses the PDS command to do
the search.

But what's the PDS command?  I've a strong suspicion that I wrote this at a
client that had a popular CBTTAPE utility, and if so it's not appropriate
for my current location.  Can someone confirm?

If you care, what I really want to do is search through a list of JCL
libraries for certain DSN fragments.  There's a job we're probably going to
shut down, and I want to be sure the datasets it produces are not used
anywhere else in production.

---
Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313

/* "Bother", said the Borg, "we've assimilated a Pooh". */

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SQA overflow condition

2023-12-13 Thread Paul Feller
Peter and others,

This is a very interesting situation.  My curiosity is piqued as to how that
storage would have shown up in a display of CSA/SQA.  Would the storage be
"listed" under the MASTER address space or would it be attributed to SYSTEM or 
would it be under CONSOLE?


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
kekronbekron
Sent: Wednesday, December 13, 2023 7:11 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SQA overflow condition

Yup, I've used V CN(*),ROUT=ALL and V CN(*),ROUT=NONE right before and right 
after IPLs to keep tabs on what's going on.



On Wednesday, December 13th, 2023 at 18:32, Steve Horein 
 wrote:


> System Automation can use SYSCONS with the Processor Operations 
> (ProcOps) functionality. I take advantage of that, having SYSCONS 
> activated in PD mode 24/7, but issue VARY CN(),ROUT=NONE on that 
> console once the system is fully up (the MONITOR JOBNAMES/SESS/STATUS 
> console attribute is honored regardless of ROUT settings) . At system 
> shutdown time, another VARY
> CN(),ROUT=(1,2,10) is issued for monitoring progress and "external"
> automation when the time comes. D C,CN= may show some
> 
> undesirable routing codes or DEL attributes included in the CONSOLxx 
> "DEVNUM(SYSCONS)" definitions.
> 
> https://www.ibm.com/docs/en/zos/2.5.0?topic=rc-routing-code-meaning-1
> https://www.ibm.com/docs/en/zos/2.5.0?topic=consolxx-syntax-parameters
> -console-statement
> 
> On Wed, Dec 13, 2023 at 4:29 AM Peter dbajava...@gmail.com wrote:
> 
> > Finally found the reason for this condition
> > 
> > Our HMC operating system message(SYSCONS) were flooding with a 
> > product error message
> > 
> > After resetting the SYSCONS
> > 
> > ESQA got a relief and deactivated SYSCONS from operating system 
> > message console in HMC
> > 
> > On Tue, Dec 12, 2023, 2:34 PM Peter dbajava...@gmail.com wrote:
> > 
> > > Are there any tools available in cbttape to view 78-2 ?
> > > 
> > > On Tue, Dec 12, 2023, 2:01 PM Martin Packer 
> > > martin_pac...@uk.ibm.com
> > > wrote:
> > > 
> > > > Right. To Allan’s point it’s CSA that shows up by key. Though 
> > > > SQA subpools are in the 78-2.
> > > > 
> > > > I also agree with Paul’s point that a longitudinal view can 
> > > > prove helpful. Even Time Of Day could be helpful. Even comparing 
> > > > one system to another, likewise.
> > > > 
> > > > Cheers, Martin
> > > > 
> > > > From: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU on 
> > > > behalf of Paul Feller prjfeller1...@gmail.com
> > > > Date: Monday, 11 December 2023 at 14:20
> > > > To: IBM-MAIN@LISTSERV.UA.EDU IBM-MAIN@LISTSERV.UA.EDU
> > > > Subject: [EXTERNAL] Re: SQA overflow condition Peter, several 
> > > > people have given you some good suggestions. There are a few 
> > > > things you need to think about.
> > > > 
> > > > 1) As others have said, EQSA overflow is not a bad thing as long 
> > > > as your ECSA is okay. At the place I last worked at we routinely 
> > > > saw ESQA overflow on some of our larger lpars that had lots of 
> > > > activity.
> > > > 2) Has you ESQA always been "running" high and now it finaly has 
> > > > statred to overflowing?
> > > > 3) If you have RMF and have SMF history data you can look back 
> > > > at how your CSA/SQA usage has been doing. You can use the batch 
> > > > reporting function of RMF. I think the manual is "z/OS Resource 
> > > > Measurement Facility Report Analysis" that should help you.
> > > > 4) I would suggest you talk to the vendor about your question 
> > > > around the SVC module.
> > > > 
> > > > Paul
> > > > 
> > > > -Original Message-
> > > > From: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU On 
> > > > Behalf Of Allan Staller
> > > > Sent: Monday, December 11, 2023 7:39 AM
> > > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > > Subject: Re: SQA overflow condition
> > > > 
> > > > Classification: Confidential
> > > > 
> > > > RMF will do this provided VSTOR(D) is specified in ERBRMFxx. It 
> > > > will show the alllocations, but not necessarily the "actual" user.
> > > > E,g. VTAM, TCPIP,.
> > > > 
> > > > HTH
> > > > 
> > > > -Original Me

Re: SQA overflow condition

2023-12-11 Thread Paul Feller
Peter, several people have given you some good suggestions.  There are a few 
things you need to think about.

1) As others have said, ESQA overflow is not a bad thing as long as your ECSA
is okay.  At the place I last worked at we routinely saw ESQA overflow on some
of our larger lpars that had lots of activity.
2) Has your ESQA always been "running" high, and has it only now finally
started overflowing?
3) If you have RMF and have SMF history data, you can look back at how your
CSA/SQA usage has been doing.  You can use the batch reporting function of RMF
(see the rough job sketched after this list).  I think the manual "z/OS
Resource Measurement Facility Report Analysis" should help you.
4) I would suggest you talk to the vendor about your question around the SVC 
module.
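
Here is the rough, untested shape of the RMF postprocessor job I have in mind.
The SMF dump dataset name is just a placeholder, and you should check the exact
DD names and the REPORTS/VSTOR control statement syntax against the Report
Analysis manual for your release:

//RMFPP    EXEC PGM=ERBRMFPP
//* MFPINPUT points at dumped SMF data containing the type 7x records.
//* The dataset name below is only a placeholder.
//MFPINPUT DD DISP=SHR,DSN=YOUR.SMF.DUMP.DATA
//MFPMSGDS DD SYSOUT=*
//SYSIN    DD *
  REPORTS(VSTOR)
  SUMMARY(INT)
/*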


Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Allan Staller
Sent: Monday, December 11, 2023 7:39 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SQA overflow condition

Classification: Confidential

RMF will do this provided VSTOR(D) is specified in ERBRMFxx. It will show the
allocations, but not necessarily the "actual" user.
E.g. VTAM, TCPIP, etc.

HTH

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Peter
Sent: Sunday, December 10, 2023 10:29 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SQA overflow condition

[CAUTION: This Email is from outside the Organization. Unless you trust the 
sender, Don’t click links or open attachments as it may be a Phishing email, 
which can steal your Information and compromise your Computer.]

The ESQA usage has gone to 108%.

Is there any tool available in CBTTAPEA which can tell me or trace SQA users 
and who are not releasing the storage?

On Mon, Nov 27, 2023, 5:37 PM Allan Staller < 
0387911dea17-dmarc-requ...@listserv.ua.edu> wrote:

> Classification: Confidential
>
> 100% concur w/Martin
>
> -Original Message-
> From: IBM Mainframe Discussion List  On 
> Behalf Of Martin Packer
> Sent: Sunday, November 26, 2023 2:39 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: SQA overflow condition
>
> [CAUTION: This Email is from outside the Organization. Unless you 
> trust the sender, Don’t click links or open attachments as it may be a 
> Phishing email, which can steal your Information and compromise your 
> Computer.]
>
> (This is not specific advice but a way of thinking about things.)
>
> SQA can, of course, overflow into CSA - with no real harm done. Unless 
> it causes CSA to go short. (CSA can't overflow into SQA, of course.)
>
> The above statements are true for both 24-bit and 31-bit.
>
> 1409K below the line, though, is pretty extreme - for 24 bit. If you 
> made SQA larger so that it only overflowed, say, by 100K there would 
> be no wasted virtual storage.
>
> More importantly, check out the "free CSA" picture. You really don't 
> want to run out of that. For 24-bit you want a few hundred K free.
> (But to achieve that might require losing 1MB of 24-bit private, which 
> might not be consequence free.)
>
> For 31 bit I like to see at least 100MB free ECSA, preferably more. 
> The reason is because ECSA is - in my experience - more volatile.
>
> Speaking of volatility, you need to plan defensively - as a problem 
> can lead to surge in SQA and CSA usage .
>
> Final point: I would advocate using SMF 78-2 to build a picture of 
> common storage usage - and how variable it is. Here is a blog post I 
> wrote on the
> matter:
>
> htt ps://
> mainframeperformancetopics.com/2020/01/05/how-i-look-at-virtual-storag
> e
>
> (Take out the space to follow the URL - as my mail client turned it 
> into an attachment.) 
>
> Cheers, Martin
>
> Sent from my iPad
>
> > On 26 Nov 2023, at 05:40, Peter  wrote:
> >
> > Hello
> >
> > I am able to see the below alert condition under RMF postprocessor 
> > III
> >
> >
> >
> > Name Reason Critical val. Possible cause or action
> >
> > *STOR TSQAO > 0 1409K bytes SQA overflow into CSA 1409K.
> >
> >
> >
> >
> >
> > Our SQA and CSA set up in our IEASYSxx is as below
> >
> >
> >
> > CSA=(2000,30)
> >
> >
> >
> > SQA=(16,192)
> >
> >
> > Hardware: z14
> > LPAR : 16gb memory
> > zOS 2.4
> >
> > Do I have think about tunning the SQA parameter ?
> >
> > Regards
> > Peter
> >
> > 
> > -- For IBM-MAIN subscribe / signoff / archive access instructions, 
> > send email to lists...@listserv.ua.edu with the message: INFO 
> > IBM-MAIN
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598 Registered office: 
> PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> ::DISCLAIMER::
> 
> The contents of this e-mail and any attachment(s) are confidential and 

Re: SQA overflow condition

2023-11-26 Thread Paul Feller
Peter, I've also taken the attitude that SQA or ESQA overflow is not
necessarily a bad thing as long as you are not running short of CSA or ECSA.  
As Martin mentioned you don't want to run out of CSA/ECSA.  

There are a few things that I've done over the years.  

There is a set of IRA messages that you could look for in the SYSLOG to get an 
idea of the date/time of when the overflow happens.  There is also an IRA
message that indicates when the overflow is relieved.  

Another thing I looked at was who was using up the SQA/ESQA.  Did something 
"new" show up that is allocating SQA/ESQA.  As a side note I would look at what 
the new thing might also be doing to the CSA/ECSA.

As Martin mentioned the SMF type 78 record can be helpful.  I had some SAS code 
I ran that looked at the SMF type 78 records to get a picture of the ups and 
downs of SQA/ESQA and CSA/ECSA usage.  I'm not very good at coding SAS but I'm 
able to make things work.  I got the original code off the CBT website and 
modified it for my needs.  Unfortunately, I don't have access to the z/OS 
system anymore because I'm retired so I don't have any additional details I can 
give you around the SAS code.

Paul

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Martin Packer
Sent: Sunday, November 26, 2023 2:39 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SQA overflow condition

(This is not specific advice but a way of thinking about things.)

SQA can, of course, overflow into CSA - with no real harm done. Unless it 
causes CSA to go short. (CSA can't overflow into SQA, of course.)

The above statements are true for both 24-bit and 31-bit.

1409K below the line, though, is pretty extreme - for 24 bit. If you made SQA 
larger so that it only overflowed, say, by 100K there would be no wasted 
virtual storage.

More importantly, check out the "free CSA" picture. You really don't want to 
run out of that. For 24-bit you want a few hundred K free. (But to achieve that 
might require losing 1MB of 24-bit private, which might not be consequence 
free.)

For 31 bit I like to see at least 100MB free ECSA, preferably more. The reason 
is because ECSA is - in my experience - more volatile.

Speaking of volatility, you need to plan defensively - as a problem can lead to 
surge in SQA and CSA usage .

Final point: I would advocate using SMF 78-2 to build a picture of common 
storage usage - and how variable it is. Here is a blog post I wrote on the 
matter:

htt ps://mainframeperformancetopics.com/2020/01/05/how-i-look-at-virtual-storage

(Take out the space to follow the URL - as my mail client turned it into an 
attachment.) 

Cheers, Martin

Sent from my iPad

> On 26 Nov 2023, at 05:40, Peter  wrote:
> 
> Hello
> 
> I am able to see the below alert condition under RMF postprocessor III
> 
> 
> 
> Name Reason Critical val. Possible cause or action
> 
> *STOR TSQAO > 0 1409K bytes SQA overflow into CSA 1409K.
> 
> 
> 
> 
> 
> Our SQA and CSA set up in our IEASYSxx is as below
> 
> 
> 
> CSA=(2000,30)
> 
> 
> 
> SQA=(16,192)
> 
> 
> Hardware: z14
> LPAR : 16gb memory
> zOS 2.4
> 
> Do I have think about tunning the SQA parameter ?
> 
> Regards
> Peter
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

Unless otherwise stated above:

IBM United Kingdom Limited
Registered in England and Wales with number 741598 Registered office: PO Box 
41, North Harbour, Portsmouth, Hants. PO6 3AU


--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN