Re: how-to Sysplex? - the LOGR and exploiters part

2009-08-14 Thread Barbara Nitz
We had problems with OPERLOG, using a structure, so now we only enable it
on one subplex that shares DASD.  We have the odd problem with EJES users
on other systems trying to connect to the logstream from images outside the
duplex.  Perhaps we should try moving to DASD-only to resolve it.

We have had no issues with RRS or CICS logstreams - they are DASD-only and
system-specific.

Remember - we do NOT share DASD other than the three volumes with CDSs, 
no common catalog *at all*.

LOGR had been set up on both subplexes to use SMS. Operlog is only active in 
one half, as is RRS. Both operlog and RRS require structures (and cannot go to 
DASD only) as RRS is needed for some DB2s that do data sharing.

We *thought* we were safe on this front. Until we found out that the operlog 
logstream gets corrupted on a regular basis because it gets offloaded on the 
wrong subsysplex where the offload datasets cannot be found from the 'other 
side'. One would have thought (and it came as a BIG surprise to me) that an 
SDSF session cannot access operlog if operlog is NOT enabled on that 
system. Boy, have I been wrong here!

It is extremely easy to access the operlog of one subplex from the other 
side that actually should not access it. Just type a simple sdsf/log o on the 
system that does NOT have operlog enabled, and you'll get it, because SDSF 
simply connects to the operlog structure and reads it out. There is no check in 
place whether operlog is actually *enabled* on that system!

Again, *I* am the main user of operlog (because I have to look after 
shutdown problems *after* JES2 was shut down), so I just live with that 
corrupted log stream and delete the offload datasets on the wrong system 
periodically.

RRS is another matter entirely. If any of the needed RRS log streams gets 
offloaded on the wrong system (and thus corrupts the log stream), one is in 
for an RRS cold start. Not really advisable in a heavy load/usage production 
environment just because an AD system in the same plex was IPL'd.

Again with RRS, you're safe as long as there aren't any connections from the 
wrong system. Unfortunately here, too, it is extremely easy (using the 
IBM-provided RRS application) to force a corrupted log stream, given enough 
activity even on separate structures/separate RRS groups for the separate 
subplexes! Just take the RRS subgroup from the 'wrong' side and browse the 
data. That connection from the wrong side is short, but long enough on bigger 
streams to fall into an offload window. Corruption on the other side, cold 
start. 

Which is why I fight tooth and nail NOT to activate RRS on the second 
subplex. And that is especially hard, as there are DB2 functions now that 
*require* RRS, and the concern is not with RRS, it is with Logger offload 
processing. Try to get *that* across. My colleagues just roll their eyes and 
think I want to make their lives unnecessarily hard.

So, it is a good thing we're IMS and not CICS. And that is why I say everything LOGR 
that needs sysplex sharing also MUST share the DASD necessary to offload 
this, no matter which LOGR exploiter it is. I'll take a real hard look at SMF 
sharing once we get to 1.10 to see if we can live with it or not. 

Given the problems offload causes for just about all exploiters, offload 
processing should really get redesigned, as it was NOT intended for price-
plexes!

Regards, Barbara Nitz

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: The Shame Approach

2009-08-14 Thread Timothy Sipples
Speaking for myself also includes the fact that I'm in Japan. I am
constantly reminded that Japan is different. Getting a coffee
(コーヒーおねがいします - "coffee, please") here is a full service experience. :-)

- - - - -
Timothy Sipples
IBM Consulting Enterprise Software Architect
Based in Tokyo, Serving IBM Japan / Asia-Pacific
E-Mail: timothy.sipp...@us.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex? - The TSO user part

2009-08-14 Thread Bruce Hewson
Hi Barbara,

This is what I run after TSO logon and before ISPF is started for real.

The code runs as an ISPF dialog...better if the ISPF LOG is set to 0 pages.

We use 5 character user IDs.

Address TSO
sysclone_name = Strip(Left('MVSVAR'("SYMDEF","SYSCLONE"),2))    /* this system's SYSCLONE value  */
console_name_suffix = Right("@" || sysclone_name, 8 - Length(Userid()))
console_name = Left(Strip(Userid()) || Strip(console_name_suffix), 8)   /* userid + @ + clone     */

Address ISPEXEC
isfcons = console_name || "N"          /* trailing "N" as in the original exec          */

"VPUT (ISFCONS) PROFILE"               /* save ISFCONS in the ISPF profile for SDSF     */
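
(Illustration only, not part of Bruce's exec: with a hypothetical 5-character
userid and two systems whose SYSCLONE values are assumed to be P1 and P2, the
generated names come out unique per system.)

uid = 'USR01'                              /* hypothetical 5-character userid */
do i = 1 to 2
   clone  = Word('P1 P2', i)               /* assumed SYSCLONE values         */
   suffix = Right('@' || clone, 8 - Length(uid))
   Say 'Console name on' clone':' uid || suffix    /* USR01@P1 and USR01@P2   */
end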
   


On Thu, 13 Aug 2009 00:00:48 -0500, Barbara Nitz nitz-...@gmx.net 
wrote:

snip
If this is not what you mean, can you elaborate a bit on what you do mean?
Check out Marks page on what needs to be done to allow this. As I explained
above, the broadcast issue *and* shared EMCS consoles via SDSF finally
made me uncustomize the use of the same userid.

'shared EMCS consoles via SDSF': Whenever SDSF.LOG (the syslog/operlog) or
ULOG is called, an EMCS console is established. Unless explicitly customized to
use a different name on every system for that EMCS console (that was the
part I could not get the TSO users to grasp, not even some sysprogs), the
default name for that console is the TSO userid.  Inevitably they were in ULOG on
one system and did not get responses to commands on the system they were
currently working on. (Yes, I know there is a route command, yes, I know SDSF
can customize the name of the EMCS console, no, I was unable to get *that*
concept across.)

snip

Barbara Nitz


Regards
Bruce Hewson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Concatenations and blocksizes

2009-08-14 Thread Binyamin Dissen
On Thu, 13 Aug 2009 17:44:32 -0500 Patrick O'Keefe patrick.oke...@wamu.net
wrote:

:On Thu, 13 Aug 2009 16:58:21 -0500, Paul Gilmartin 
:paulgboul...@aim.com wrote:

:My understanding/conjecture is that when the Assembler
:(for example), using BPAM, encounters a COPY nested
:within another COPY member, it:

:o Does a NOTE to mark the current block.

:o Saves the NOTE word _and_ the offset of the current
:  source record relative to that block

:o Does a FIND to open the referenced member.

:At the end of that member, it reverses the process
:with a POINT and displacing to the saved offset.

:If the motivation of Q*AM is to make blocking and
:unblocking transparent to the program, the putative
:QNOTE would need to save both the TTR and the record
:offset in an opaque data object.  QPOINT would need
:to employ both.
:...

:Ok.  I agree.   A Q*AM version of NOTE and POINT would have to
:include record number or block offset or some such (just as that 
:info has to be added to the NOTE data by a B*AM program).

As most QSAM programs pay no attention to blocks, and their data files
are more freely available for reblocking, the result of the NOTE would have
to be a relative record number and not a TTR+RecordOffset. The system would
have to calculate a TTR from it - which may require reading the entire dataset
(also for POINT) up to that point to make sure there ain't no short blocks.

:All those about to submit a formal request for QPAM should take
:note (so to speak).

--
Binyamin Dissen bdis...@dissensoftware.com
http://www.dissensoftware.com

Director, Dissen Software, Bar & Grill - Israel


Should you use the mailblocks package and expect a response from me,
you should preauthorize the dissensoftware.com domain.

I very rarely bother responding to challenge/response systems,
especially those from irresponsible companies.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ron Hawkins
Ted,

I'm eating some humble pie while typing this.

I found an old article by Cheryl Watson published in August 1988 that
described IO Service Units, I presume with IOSERV=TIME, as 1 IO Service
Unit = 8.32msec of connect time (about 1/2 revolution of most DASD
devices).

Being 1988 the DASD would have been 3380 which spun at 3600 RPM, which is an
average latency of 8 1/3 msec. 3390s spun at 4200 RPM. 8.32 msec does divide
evenly by 128 microseconds (65 * 128 microseconds = 8,320 microseconds). I would be surprised if
the proximity to average latency is anything more than a coincidence though.
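
Laying the arithmetic out (an illustration in REXX, nothing here comes from SRM
itself; the only inputs are the numbers quoted above):

/* Illustrative arithmetic only - not an SRM algorithm                 */
rpm         = 3600                     /* 3380 rotational speed        */
ms_per_rev  = 60000 / rpm              /* 16.667 ms per revolution     */
half_rev    = ms_per_rev / 2           /* average latency = half turn  */
io_su       = 65 * 0.128               /* 65 units of 128 usec, in ms  */
Say 'Half a revolution:' Format(half_rev,,2) 'ms'
Say '65 x 128 usec    :' Format(io_su,,2) 'ms'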

Ron



 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 Ted MacNEIL
 Sent: Thursday, August 13, 2009 7:23 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] Degraded I/O performance in 1.10?
 
 Ted,
 
 I like to see the documentation. The Channel measurement block records
 connect time and SRM in turn converts that to IO service units. Are you
 saying that 8.3ms was equivalent to 1 IO service Unit?
 
 Gotta look over 20 years ago.
 
 IOSRV=COUNT
 IOSRV=TIME
 
 was an option, a long time ago.
 I believe XA, but (as always) I could be wrong.
 And, back then, it was documented as 8.3ms.
 
 Or was it IOSRVC?
 
 -
 Too busy driving to stop for gas!
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS activation - RACF AUDIT

2009-08-14 Thread Robert S. Hansel (RSH)
Jennifer,

Unfortunately, it is WAD (working as designed). The ISMF programs do not use the FACILITY class
STGADMIN profiles for governing user authority. To control ISMF, you either
have to restrict access to the ISMF program library or restrict access to the
ISMF programs using PROGRAM class profiles. Some organizations choose to
only protect program DGTFPF05 which allows you to switch to 'Storage
Administrator Mode', but this is not a rigorous control measure since the
mode is actually governed by a bit in your ISPF profile that acts as a
switch.
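
If anyone wants to see the mechanics of the PROGRAM class approach Bob
describes, here is a sketch driven from REXX. The library name SYS1.DGTLLIB
and the group STGADMIN are placeholders I made up; substitute your own ISMF
load library and access list, and Bob's caveat that this is not a rigorous
control still applies.

/* Sketch only - the library and group names are made-up placeholders  */
Address TSO
"RDEFINE PROGRAM DGTFPF05 ADDMEM('SYS1.DGTLLIB'//NOPADCHK) UACC(NONE)"
"PERMIT DGTFPF05 CLASS(PROGRAM) ID(STGADMIN) ACCESS(READ)"
"SETROPTS WHEN(PROGRAM) REFRESH"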

For more information, see our presentation titled "RACF and Storage
Administration", available through our website:

http://www.rshconsulting.com/racfres.htm

Regards, Bob

-
Robert S. Hansel   | 2009 RACF Training
Lead RACF Specialist   |  Intro & Basic Admin - Boston - SEPT 22-24
RSH Consulting, Inc.   |  Audit for Results   - Boston - NOV 3-5
www.rshconsulting.com  | Visit our website for registration & details
617-969-8211   |
-

-Original Message-
Date:Thu, 13 Aug 2009 05:18:52 -0500
From:Jennifer Currell jennifer_curr...@standardlife.com
Subject: Re: SMS activation - RACF AUDIT

Hi there
Thanks for the tip. It looks like it is the other way around. If you activate via
ISMF then it doesn't get logged into RACF type 80. But if I issue SETSMS
SCDS(dsname) it does get picked up. I think I will raise a question with IBM.

Thanks

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Java question

2009-08-14 Thread Tom Grieve
On Wed, 12 Aug 2009 12:47:19 +0200, Hunkeler Peter (KIUP 4)
peter.hunke...@credit-suisse.com wrote:

Denis,
Thanks for the excellent argumentation. I basically concur with you.

Yes, it was an excellent reply, certainly better than I could have done.

I'd like to reply to a few arguments, though:

1. Today's JVMs offer the option to have the byte code compiled on the
fly when certain conditions are met. So, these JVMs already have the
capability to run machine code instead of byte code. This is the
runtime environment you'd need to run Java code that has been compiled
at the will of the programmer instead of at the will of the JVM. All
you need is an option to tell the JVM where to find and/or how to
recognize pre-compiled java class files.

The JIT compiler can do things that a static compiler can't. The more
frequently a method is used, the more optimisations can be applied, such as
inlining other methods and branch table reorganisation. This results in code
which can actually be faster than statically-compiled code.


No need for a new runtime environment, no need for application
programmers
to care any more about memory management as they need to care about with
today's Java environment. You still instantiate a JVM and tell it which
Java class file to run.

BTW, programmers don't need to care about memory management in other
HLL languages, do they? It's the HLL's runtime that manages this.


That *is* meant to be an ironic statement, isn't it? If you use malloc() in
a C program, then it's a good idea to care about free() as well, otherwise
you can end up out of memory. In Java, the JVM (or the Garbage Collector, to
be precise) automatically releases the storage occupied by objects which are
no longer referenced. The programmer has no direct control over memory
management, so doesn't generally need to worry about it.

Tom Grieve
CICS Development
IBM Hursley Park

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: HLLAPI support in Pcomm ?

2009-08-14 Thread Nuttall, Peter (P.)
Thanks Chris,

Just needed to check  Can't believe I'm supporting a screen scraping
app again :-) ... 

Regards,
Peter 

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Chris Mason
Sent: 13 August 2009 17:48
To: IBM-MAIN@bama.ua.edu
Subject: Re: HLLAPI support in Pcomm ?

Peter

The evidence I can glean from "Personal Communications for Windows, Version
5.9, Emulator Programming" is that HLLAPI, in the guise of EHLLAPI, is alive and
well.

The IBM web site which directs you to all things PCOMM is the following:

http://www-01.ibm.com/software/network/pcomm/

I guess it helps to know that the manuals can be located somewhere in
IBM 
web pages to prompt persistence in finding them!

Chris Mason

On Thu, 13 Aug 2009 16:23:03 +0100, Nuttall, Peter (P.) 
pnutt...@jaguarlandrover.com wrote:

Hi All,

Used to lurk on this list quite a while ago, but been away for awhile
for various reasons 

Can anyone tell me if HLLAPI is still supported in the newer version(s)
of pcomm ? ... Preferably point me to a website where I can find this
info ...

Have tried the archives, but no joy ...

Many thanks in advance for your help,
Peter

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Gerhard Postpischil

Ron Hawkins wrote:

Being 1988 the DASD would have been 3380 which spun at 3600 RPM, which is an
average latency of 8 1/3 msec. 3390s spun at 4200 RPM. 8.32 msec does divide
evenly by 128 microseconds (128 * 6500 = 832000). I would be surprised if
the proximity to average latency is anything more than a coincidence though.


I would be surprised if it isn't, although I'd use a value a 
smidgen higher. The typical I/O requires some setup to 
get IOS to handle the request; it needs to be queued, wait until 
the device is available, position the heads, search or seek the 
record, transfer the data, and clean up. Processors in the late 
eighties were fast enough so that only the search or seek 
processing took any significant time compared to processing 
time. If the disks were favorably positioned at the time of 
request, there would be no overhead, vs. maximum overhead if it 
just passed the requested record. So half the latency represents 
an average; I'd also add a correction factor for arm 
positioning, but if you're the only one running, after the first 
I/O that also becomes negligible.



Gerhard Postpischil
Bradford, VT

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex? - the LOGR and exploiters part

2009-08-14 Thread Mark Zelden
On Fri, 14 Aug 2009 01:01:43 -0500, Barbara Nitz nitz-...@gmx.net wrote:

We *thought* we were safe on this front. Until we found out that the operlog
logstream gets corrupted on a regular basis because it gets offloaded on the
wrong subsysplex where the offload datasets cannot be found from the 'other
side'. One would have thought (and it came as a BIG surprise to me) that an
SDSF session cannot access operlaog if operlog is NOT enabled on that
system. Boy, have I been wrong here!


Long ago (prior to Y2K) I was consulting at the same shop I'm at now.  
We had similar issues due to consolidations and disparate systems in
the sysplex.   But the logger problem (for operlog / logrec) was still
easily solved by creating a shared SMS pool (even though there were
separate SMSplexes), a shared catalog on one of the logger volumes
and a new logger HLQ.  The priceplex was born!

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


ISPF Panel - REXX usage

2009-08-14 Thread Terri Shaffer
Hi,
  I was wondering if someone could help. 

 I am building an exec that displays a panel, from this panel a person can 
choose a system and a date and under the covers we will display the 
appropriate syslog for the 100 systems we maintain.

Where I am having trouble, since I do not work with ISPF Dialogs all that often, 
is in the panel/rexx interfacing.  So my questions are:

1. How does the exec know when the person hits the PF3 (END) key as they 
are done viewing logs?  The way this is designed is they can view multiple logs 
with 1 invocation of the exec.

2. In looking at the )INIT, )REINIT and )PROC statements I am also having 
trouble clearing out a field for the redisplay (like system name).  I can blank 
the field out, but the panel still shows it filled in.  Then the person 
actually 
has to enter it twice for the next display.  Does anyone have a sample that 
would help or maybe I am doing this in the wrong place?

Any other ideas/help would be greatly appreciated?

Thanks
Ms Terri Shaffer

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Vernooij, CP - SPLXM


Gerhard Postpischil gerh...@valley.net wrote in message
news:4a855dee.4060...@valley.net...
 Ron Hawkins wrote:
  Being 1988 the DASD would have been 3380 which spun at 3600 RPM,
which is an
  average latency of 8 1/3 msec. 3390s spun at 4200 RPM. 8.32 msec
does divide
  evenly by 128 microseconds (128 * 6500 = 832000). I would be
surprised if
  the proximity to average latency is anything more than a coincidence
though.
 
 I would be surprised if it isn't, although I'd use a value a 
 little smidgen higher. The typical I/O requires some setup to 
 get IOs to handle the request, it needs to be queued, wait until 
 the device is available, position the heads, search or seek the 
 record, transfer the data, and clean up. Processors in the late 
 eighties were fast enough so that only the search or seek 
 processing took any significant time compared to processing 
 time. If the disks were favorably positioned at the time of 
 request, there would be no overhead, vs. maximum overhead if it 
 just passed the requested record. So half the latency represents 
 an average; I'd also add a correction factor for arm 
 positioning, but if you're the only one running, after the first 
 I/O that also becomes negligible.
 
 
 Gerhard Postpischil
 Bradford, VT
 

Please, this is, as often in this group, far Off-Topic.
Is there anybody who can say something On-Topic, meaning answer David's
question? We are going to 1.10 soon and are very interested in this
thread's topic.

Kees.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Bill Fairchild
You're right.  It spun at 70 revolutions per second.  The 3380 spun at 60 rps, 
so its revolution took 16.67 ms.

The average latency of a disk drive was useful for calculating connect time 
when every I/O probably involved a real seek (disconnect time) and a real 
partial revolution for the search loop to find the correct record (which was 
all connect time).  But with today's hardware, caching, RAID, channel speed, 
controller buffering, etc., the connect time component should consist almost 
totally of data transfer.  1/2 revolution's worth of data transfer indicates 
the average amount of data to be transferred per I/O is 1/2 of a full track.  
Since EXCP tells SMF to add one to its I/O counters not for every I/O request 
but rather for every block being transferred, then RMF's reported connect time 
for these I/Os should vary widely if BUFNO is varied widely, say from one to 
ten, while the EXCP count reported by SMF would be constant.
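
To put rough numbers on that (purely made-up figures, assuming QSAM chains
roughly BUFNO buffers into each channel program): the block count, which is
what EXCP reflects, stays constant, while the number of channel programs drops
and the connect time per I/O grows as BUFNO grows.

/* Made-up numbers, only to illustrate the paragraph above             */
blocks       = 1000                /* blocks moved - the EXCP count    */
ms_per_block = 0.5                 /* assumed connect time per block   */
do bufno = 1 to 10 by 3
   ios = (blocks + bufno - 1) % bufno             /* channel programs issued */
   Say 'BUFNO='Right(bufno,2) ' I/Os='Right(ios,4) ,
       ' connect/IO='Format(bufno * ms_per_block,,1)'ms' ' EXCPs='blocks
end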

I don't doubt the validity of the IBM number at the time it was published 
(aeons ago).  I doubt its validity for today's hardware.  I am only trying to 
guess why IBM recommended that number aeons ago in the face of its obvious 
inapplicability today.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Ron Hawkins
Sent: Thursday, August 13, 2009 8:56 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

Bill,

My memory, and follow up calculation, says that a 3390 rotated every 14.2ms,
not 16.67ms.

Even so, it would hardly seem a good move to multiply or divide a metric
based on transfer time by the avg latency of a disk drive. I don't see the
relationship.

Ron

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Rexx Question

2009-08-14 Thread Baraniecki, Ray
The reason for the outtrap is that the program scans the output from the 
RMM command and I display some, but not all, of the output.
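
The usual OUTTRAP-and-filter pattern looks roughly like this; the RMM
subcommand, the volume serial and the search string are placeholders, not
Ray's actual code:

x = Outtrap('line.')                        /* start trapping line-mode output   */
Address TSO "RMM LISTVOLUME A00001"         /* placeholder subcommand and volser */
x = Outtrap('OFF')                          /* stop trapping                     */
do i = 1 to line.0
   if Pos('EXPIRATION', line.i) > 0 then    /* keep only the lines of interest   */
      Say line.i
end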

Thanks, 
 
Ray Baraniecki 
Morgan Stanley Smith Barney
18th Floor 
1 New York Plaza 
New York, NY 10004 
Office - 212-276-5641
   Cell - 917-597-5692 
ray.baranie...@morganstanley.com  
BE CARBON CONSCIOUS. PLEASE CONSIDER OUR ENVIRONMENT BEFORE PRINTING THIS 
E-MAIL.

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Mike Wood
Sent: Thursday, August 13, 2009 7:54 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Rexx Question

Ray,   Why are you using outtrap for an rmm subcommand?  Do you really 
want to trap the line mode output, and if so, why?
Have you perhaps also set SYSAUTH.EDGDATE so that all command output is 
via rexx variables - hence no line mode output.

Mike Wood   RMM Development

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ISPF Panel - REXX usage

2009-08-14 Thread Steve Comstock

Terri Shaffer wrote:

Hi,
  I was wondering if someone could help. 

 I am building an exec that displays a panel, from this panel a person can 
choose a system and a date and under the covers we will display the 
appropraite syslog for the 100 systems we maintain.


Were I am having trouble, since I do not work with ISPF Dialogs all that often 
is in the panel/rexx interfacing.  So my question are?


1. How does the exec know when the person hits the PF3 (END) key as they 
are done viewing logs?  The way this is designed is they can view multiple logs 
with 1 invokation of the exec.


2. In looking at the )INIT, )REINIT and )PROC statements I am also having 
trouble clearing out a field for the redisplay (like system name).  I can blank 
the field out, but the panel still shows it filled in.  Then the person actually 
has to enter it twice for the next display.  Does anyone have a sample that 
would help or maybe I am doing this in the wrong place?


Any other ideas/help would be greatly appreciated?

Thanks
Ms Terri Shaffer


The short answer to 1) is: a return code of 8 indicates END was selected

The answer to 2) is a little more complex because we can't see what
you've already coded; there are several possibilities.
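
One of those possibilities, sketched here with made-up names (panel SYSLOGSEL
and field SYSNAME are stand-ins for whatever the real dialog uses): blank the
REXX variable before every DISPLAY, so the function pool no longer carries the
previous value into the panel.

Address ISPEXEC
do forever
   sysname = ''                        /* clear the input field before each display */
   "DISPLAY PANEL(SYSLOGSEL)"          /* made-up panel name                        */
   if rc = 8 then leave                /* END / PF3 pressed - return code 8         */
   Say 'Would now browse the syslog for system' sysname
end

If the panel VGETs the field from a shared or profile pool, it would also need
to be cleared there.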

<ad>
I would strongly recommend some training (Of course, that's
my business). There are lots of details that have to fit together
to create a useable, robust Dialog Manager application.

We have a five day class, Developing Dialog Manager Applications in
z/OS that covers what you need. Take a look at

  http://www.trainersfriend.com/TSO_Clist_REXX_Dialog_Mgr/a810descrpt.htm

for the description of the content; then at the top and the bottom of
that page you will find links; follow the link to course outline
and you will see a very detailed topical outline. This will also
give you an idea of the concepts and skills you need to work with
to develop Dialog Manager applications.


We would be delighted to come to your company and teach this,
live, to up to 16 students. Dialog Manager is one of the most
enjoyable / fun topics to work with: instant gratification
for most labs!


If you're the only one at your shop who needs this training, we
have a Remote Contact Training offering where you take the
course as a self-study, run the labs on your system, and have
the course author as a mentor-by-email to answer questions,
provide hints and clues, and to evaluate your solutions. See
  http://www.trainersfriend.com/Policies/RCT_OverView.htm
to get more information on that process.

</ad>

Kind regards,

-Steve Comstock
The Trainer's Friend, Inc.

303-393-8716
http://www.trainersfriend.com

  z/OS Application development made easier
* Our classes include
   + How things work
   + Programming examples with realistic applications
   + Starter / skeleton code
   + Complete working programs
   + Useful utilities and subroutines
   + Tips and techniques

== Ask about being added to our opt-in list:  ==
==   * Early announcement of new courses  ==
==   * Early announcement of new technical papers ==
==   * Early announcement of new promotions   ==

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Bill Fairchild
I'll let you talk to IBM, since I don't do I/O performance measurements any 
more.  I believe their number was perfectly correct at one time, but not now 
(see my post in reply to Ron Hawkins for details).  If I were doing I/O 
performance measurement and tuning today, I would most definitely not use that 
number.  Since you are using the number, you should verify its accuracy and, if 
not accurate any more, ask IBM yourself or else find a more modern analysis of 
average I/O service time.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Ted MacNEIL
Sent: Thursday, August 13, 2009 6:52 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

It was probably a good value to use aeons ago when it took a real SLED 3390 
16.67 ms. to spin around once, so 8.3 ms. was 1/2 revolution.  Today, 
however, is aeons later as far as the hardware is concerned, especially 
channel speed when delivering data from controller cache instead of straight 
from the platter.

Talk to IBM!
I didn't make up the number.
It's just what it is.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Bill Fairchild
The connect time estimate of 8.3 ms. is apparently 1/2 revolution of a 3380.  
Over 20 years ago (before 1989) was before the 3390 was first introduced, so a 
3380's values would still be a correct value in whatever year that value was 
published.  Whatever is reported by RMF will always be an integral multiple of 
128 microseconds after rounding.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Ted MacNEIL
Sent: Thursday, August 13, 2009 9:23 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

Ted,

I like to see the documentation. The Channel measurement block records connect 
time and SRM in turn converts that to IO service units. Are you saying that 
8.3ms was equivalent to 1 IO service Unit?

Gotta look over 20 years ago.

IOSRV=COUNT
IOSRV=TIME

was an option, a long time ago.
I believe XA, but (as always) I could be wrong.
And, back then, it was documented as 8.3ms.

Or was it IOSRVC?

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread R.S.

Vernooij, CP - SPLXM pisze:
[...]

Please, this is, as often in this group, far Off-Topic.
Is there anybody who can say something On-Topic, meaning answer Davids
question? We are going to 1.10 soon and are very interested in this
threads Topic.


IMHO it is on topic (mainframes) in the IBM-MAIN list context, and it is 
off-topic when considering the thread's topic.


In other words I think it is justified to keep the discussion on the 
forum, but maybe it would be a good idea to change the message topic.


Just my $0.02
BTW: I don't like topic deviations to recollections of S/360 models ;-)

Regards
--
Radoslaw Skorupka
Lodz, Poland



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Jousma, David
Herman,

Thanks for the response.  The files in question are VSAM.  I will
re-check the migration guide for info.

_
Dave Jousma
Assistant Vice President, Mainframe Services
david.jou...@53.com
1830 East Paris, Grand Rapids, MI  49546 MD RSCB1G
p 616.653.8429
f 616.653.8497


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Stocker, Herman
Sent: Thursday, August 13, 2009 1:13 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

Hi David,

Look into increasing the VSAM index buffers.  Some catalog changes have
occurred that may be the cause of your slow response.  Also check SMF and
Logrec buffering.

Regards, 
Herman Stocker 
--- snip---
All,

I realize this is a really open ended question.  We completed our 1.8 to
1.10 upgrade in June, with no known problems.  Everything seems to be
running fine.  However, I have various people (mostly developers)
occasionally complaining that they think the system is slower since
the upgrade.   Of course, the upgrade gets blamed for everything.  By
"slower", they are referring to their batch jobs, those that do a lot of
I/O.

Interestingly, several people, who do not work in the same area(and most
likely do not talk to each other), asked if file buffering has changed
somehow with the upgrade.  I tell them, not that I am aware of, and ask
them
for specifics to research, and in most cases I compare the jobs running
before and after the upgrade, and the EXCPs all seem to be inline.

So, all I can say is that there is this gut feeling that something isn't
quite right, but can't put a finger on it.

Has anyone else noticed anything, or have idea's on what to look for?

_
Dave Jousma
---/snip---



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Jousma, David
Herman,  Do you have any links to this info?  I only find the changes to
CA-sizes in the migration guide.

_
Dave Jousma
Assistant Vice President, Mainframe Services
david.jou...@53.com
1830 East Paris, Grand Rapids, MI  49546 MD RSCB1G
p 616.653.8429
f 616.653.8497


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Jousma, David
Sent: Friday, August 14, 2009 9:40 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

Herman,

Thanks for the response.  The files in question are VSAM.  I will
re-check the migration guide for info.

_
Dave Jousma
Assistant Vice President, Mainframe Services
david.jou...@53.com
1830 East Paris, Grand Rapids, MI  49546 MD RSCB1G
p 616.653.8429
f 616.653.8497


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Stocker, Herman
Sent: Thursday, August 13, 2009 1:13 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

Hi David,

Look into increasing the buffers VSAM index.  Some catalog changes have
occurred that may be the cause of your slow response.  Also SMF and
Logrec
buffering.

Regards, 
Herman Stocker 
--- snip---
All,

I realize this is a really open ended question.  We completed our 1.8 to
1.10 upgrade in June, with no known problems.  Everything seems to be
running fine.  However, I have various people(mostly developers)
occasionally complaining that they think the system is slower since
the upgrade.   Of course, the upgrade gets blamed for everything.  By
slower, the are referring to their batch jobs, those that do a lot of
I/O.  

Interestingly, several people, who do not work in the same area(and most
likely do not talk to each other), asked if file buffering has changed
somehow with the upgrade.  I tell them, not that I am aware of, and ask
them
for specifics to research, and in most cases I compare the jobs running
before and after the upgrade, and the EXCPs all seem to be inline.

So, all I can say is that there is this gut feeling that something isn't
quite right, but can't put a finger on it.

Has anyone else noticed anything, or have idea's on what to look for?

_
Dave Jousma
---/snip---



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex?

2009-08-14 Thread R.S.

Arthur Gutowski pisze:
[...]
RACF is a sticky wicket.  In addition to profile and protection differences, 
sysplex communication requires either a shared database, or unique dataset 
names.  Because we chose not to rename our existing subplex DSNames (too 
many ICHRDSNT's to manage), and not to merge our databases (not trivial), 
we suffered the loss of command propagation.  Our admins did get over it, in 
time...


Side note: this is a result of bad sysplex design. I don't want to 
criticize anyone, however sysplex is not a way for connecting (merge) 
several independent systems and applications together.


Sysplex is meant rather to expand single system. So, the proper way is 
to merge the systems before and then create sysplex. I mean real 
sysplex, not some flavor of price-plex.
In such a case, there wouldn't be differences in RACF DBs because there 
would be a single RACF database.


BTW: Management of ICHRDSNT's is a piece of cake IMHO.


My $0.02
--
Radoslaw Skorupka
Lodz, Poland



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Stocker, Herman
Sorry Dave, I, unlike a number of the listers, do not keep references after I
have used them.
Regards, 
Herman Stocker 

- Snip-
Herman,  Do you have any links to this info?  I only find the changes to
CA-sizes in the migration guide.
-/Snip-



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ISPF Panel - REXX usage

2009-08-14 Thread Mike Myers

Terri:

If you are not aware of the MODEL command in ISPF, I suggest you give it 
a try to at least answer your first question (the answer is a return 
code of 8, but check out MODEL anyway, if you are not familiar with it). 
To use it, while in the ISPF editor editing a REXX procedure, type MODEL 
on the command line and enter the line command A (After) or B (Before) 
at the point where you would like to insert the specific command. In 
response to the prompt panel generated by MODEL, select D1 (Display). 
This will generate the ISPF DISPLAY command source for you, along with 
comments describing the fields and will also give you source code in 
REXX that would test the return code. That's where you'll find the 
answer to your first question, which, as I said is 8.


As for the other question, it would help to see the panel details and 
probably what you have in REXX already, to answer that one.


Mike Myers
Mentor Services Corporation (oh yes, we offer training in ISPF - 
outright blatant plug - :-) )


Terri Shaffer wrote:

Hi,
  I was wondering if someone could help. 

 I am building an exec that displays a panel, from this panel a person can 
choose a system and a date and under the covers we will display the 
appropraite syslog for the 100 systems we maintain.


Were I am having trouble, since I do not work with ISPF Dialogs all that often 
is in the panel/rexx interfacing.  So my question are?


1. How does the exec know when the person hits the PF3 (END) key as they 
are done viewing logs?  The way this is designed is they can view multiple logs 
with 1 invokation of the exec.


2. In looking at the )INIT, )REINIT and )PROC statements I am also having 
trouble clearing out a field for the redisplay (like system name).  I can blank 
the field out, but the panel still shows it filled in.  Then the person actually 
has to enter it twice for the next display.  Does anyone have a sample that 
would help or maybe I am doing this in the wrong place?


Any other ideas/help would be greatly appreciated?

Thanks
Ms Terri Shaffer

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

  


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ron Hawkins
Gerhard,

Nothing wrong with what you said, but IOSERV uses connect time, which is
handshake and transfer and represents work being done by the CEC. If
everything else you mentioned was to be included then why not use the sum of
Connect, Disconnect and Pend (Service Time) to calculate IO Service Units?


Ron

 
 I would be surprised if it isn't, although I'd use a value a
 little smidgen higher. The typical I/O requires some setup to
 get IOs to handle the request, it needs to be queued, wait until
 the device is available, position the heads, search or seek the
 record, transfer the data, and clean up. Processors in the late
 eighties were fast enough so that only the search or seek
 processing took any significant time compared to processing
 time. If the disks were favorably positioned at the time of
 request, there would be no overhead, vs. maximum overhead if it
 just passed the requested record. So half the latency represents
 an average; I'd also add a correction factor for arm
 positioning, but if you're the only one running, after the first
 I/O that also becomes negligible.
 
 
 Gerhard Postpischil
 Bradford, VT
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ted MacNEIL
If I were doing I/O performance measurement and tuning today, I would most 
definitely not use that number.

Why not?
That is what it is -- constant.
I'm pretty sure it's derived from the equation 65 * 128 mics = 8,320 mics = 8.32 ms.

Since you are using the number, you should verify its accuracy and, if not 
accurate any more, ask IBM yourself or else find a more modern analysis of 
average I/O service time.

The number is good for the 'quick and dirty'.
I never said that Ron's suggestion for the analysis of I/O from RMF (etc) was 
wrong.
Nor did I say I was using the number, myself.
I was just disputing the comment that EXCP's were only block counts.
That depends on the setting (TIME or COUNT).
And, I believe COUNT is still the default.
-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ron Hawkins
Bill,

With XA I don't think that RPS was ever included in connect time. I admit I
only started working on XA in 1984, but everything I had from back then by
Beretvas and Freisenborg uses disconnect time to estimate if there is a seek
problem based on RPS being counted in Disconnect time.

Of course this was focused on 3880 Controllers. I have no idea if it was
different for earlier models that required reconnect for handling TIC.

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 Bill Fairchild
 Sent: Friday, August 14, 2009 6:24 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] Degraded I/O performance in 1.10?
 
 You're right.  It spun at 70 revolutions per second.  The 3380 spun at 60
rps,
 so its revolution took 16.67 ms.
 
 The average latency of a disk drive was useful for calculating connect
time
 when every I/O probably involved a real seek (disconnect time) and a real
 partial revolution for the search loop to find the correct record (which
was
 all connect time).  But with today's hardware, caching, RAID, channel
speed,
 controller buffering, etc., the connect time component should consist
almost
 totally of data transfer.  1/2 revolution's worth of data transfer
indicates
 the average amount of data to be transferred per I/O is 1/2 of a full
track.
 Since EXCP tells SMF to add one to its I/O counters not for every I/O
request
 but rather for every block being transferred, then RMF's reported connect
time
 for these I/Os should vary widely if BUFNO is varied widely, say from one
to
 ten, while the EXCP counted reported by SMF would be constant.
 
 I don't doubt the validity of the IBM number at the time it was published
 (aeons ago).  I doubt its validity for today's hardware.  I am only trying
to
 guess why IBM recommended that number aeons ago in the face of its obvious
 inapplicability today.
 
 Bill Fairchild
 
 Software Developer
 Rocket Software
 275 Grove Street * Newton, MA 02466-2272 * USA
 Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
 Email: bi...@mainstar.com
 Web: www.rocketsoftware.com
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ed Finnell
 
In a message dated 8/14/2009 9:01:30 A.M. Central Daylight Time,
ron.hawkins1...@sbcglobal.net writes:

handshake and transfer and represents work being done by the CEC.  If
everything else you mentioned was to be included then why not use the sum of
Connect, Disconnect and Pend (Service Time) to calculate IO Service Units?


Guess I'd go for more of a macro level approach first.

1) Open a PMR with IBM. They may be able to suggest remedial maint.

2) Look at the rudimentary RMF (or RMFPP) reports for channels and
controllers %Busy. May be that in doing the conversion the PROD config
lost paths.

3) Get help. www.perfassoc.com or www.watsonwalker.com offer tuning services.





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
Hi List,

I haven't been able to find an answer in the archives, so I'll ask the
list what I'm missing.  I'm trying to get DFSMShsm to delete expired
datasets based on management class.  To this end, I changed the expire
non-usage field for one of the test management classes to 5 days then
allocated a few new datasets in this management class.  I did this 7
days ago.  Today when I checked, the datasets were still there.  My
EXPIREDDATASET parameter is set to SCRATCH.  What am I missing?  How do
I get these datasets to delete automatically?

Test dataset catalog entry:

NONVSAM --- U05.RRP.JUNK
     IN-CAT --- CATALOG.WSCTEST.UCAT
     HISTORY
       DATASET-OWNER-(NULL)     CREATION2009.218
       RELEASE2                 EXPIRATION--.000
       ACCOUNT-INFO---(NULL)
     SMSDATA
       STORAGECLASS --SCSTD     MANAGEMENTCLASS-MCINTUAT
       DATACLASS -DCSTD         LBACKUP ---.000.
     VOLUMES
       VOLSERZD2000             DEVTYPE--X'3010200F'     FSEQN--0
     ASSOCIATIONS(NULL)
     ATTRIBUTES

***

SMS constructs (with extra lines deleted):

 LINE        MGMTCLAS   EXPIRE      EXPIRE      RET
 OPERATOR    NAME       NON-USAGE   DATE/DAYS   LIMIT
 ---(1)---   --(2)--    ---(3)---   ---(4)---   --(5)--
             MCINTUAT   5           NOLIMIT     NOLIMIT

 PARTIAL       PRIMARY   LEVEL 1   CMD/AUTO
 RELEASE       DAYS      DAYS      MIGRATE
 ---(6)---     --(7)--   --(8)--   --(9)---
 CONDITIONAL   10        35        BOTH


A couple of other items in the SMS construct for this mgmt class are that it
should have 3 backups if the dataset exists and 1 backup for a deleted
dataset.  This dataset apparently hasn't been backed up because I just
created it but didn't open it.  Could this be the reason it isn't being
deleted?  I thought the 1 backup for a deleted dataset simply says that
if there are backups when the dataset is deleted, to keep 1 backup copy
for a period of time.  Is this a mistaken thought?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex?

2009-08-14 Thread Ted MacNEIL
sysplex is not a way for connecting (merging) 
several independent systems and applications together.

I disagree.

Sysplex is meant rather to expand a single system. So, the proper way is to 
merge the systems first and then create the sysplex.

Again, I disagree.

Where I first had implemented Parallel SYSPLEX (Oct 1994), we started by 
creating the PLEX, and once we had that we started merging our two large IMS 
sub-systems.
We couldn't do it before because of Virtual Storage Constraint, and limits to 
31-bit Central Storage.
We also managed to reduce the sub-systems running from four (XRF) to three 
(IMSPLEX fail over).
We also created a DB2PLEX to support the new IMSPLEX, by merging the two (four) 
existing DB2's.

For our Web/PC-Serving CICS/DB2 application, we did it your 'proper' way (sort 
of).
We expanded the existing sub-system(s) into two CICS and two DB2.
But, after creating the same SYSPLEX.
(This also negated the [originally] planned implementation of XRF for CICS). 

The proper way is to meet your business needs and implement SYSPLEX, and 
application merging, in a low-risk and properly planned way.

What makes sense at one shop does not necessarily make sense at another.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ISPF Panel - REXX usage

2009-08-14 Thread Dave Salt
 1. How does the exec know when the person hits the PF3 (END) key
 
Check the return code immediately after you display the panel, like this:
 
exit_code = 0
Address ISPEXEC "CONTROL ERRORS RETURN"      /* get non-zero RCs returned   */
do while exit_code = 0
   Address ISPEXEC "DISPLAY PANEL(MYPANEL)"
   select
      when rc = 0 then call PROCESS_INPUT    /* Enter pressed               */
      when rc = 8 then exit_code = 8         /* END/RETURN, e.g. PF3        */
      otherwise do   /* Unexpected error; e.g. panel not found */
         exit_code = 16
         Address ISPEXEC "SETMSG MSG("zerrmsg")"
      end
   end
end
EXIT exit_code
 
 
 2. In looking at the )INIT, )REINIT and )PROC statements I am also having
 trouble clearing out a field for the redisplay (like system name).
 
You might need to use the ISPF panel REFRESH command.
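For example, here is a rough sketch of the redisplay loop with the field cleared each time around; the panel name MYPANEL and the field/variable name SYSNAME are made-up examples, and if the panel's )INIT or )REINIT section re-primes the field, a REFRESH(SYSNAME) statement in )REINIT is the panel-side equivalent:
 
/* REXX - clear the input field before every redisplay (sketch only)   */
exit_code = 0
do while exit_code = 0
   sysname = ''                             /* blank the dialog variable */
   Address ISPEXEC "DISPLAY PANEL(MYPANEL)"
   if rc = 0 then call PROCESS_INPUT        /* use the value just typed  */
   else exit_code = rc                      /* END (PF3) leaves the loop */
end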
 
 
HTH,

Dave Salt

SimpList(tm) - try it; you'll get it!
http://www.mackinney.com/products/SIM/simplist.htm







 Date: Fri, 14 Aug 2009 08:01:08 -0500
 From: terri.e.shaf...@jpmchase.com
 Subject: ISPF Panel - REXX usage
 To: IBM-MAIN@bama.ua.edu

 Hi,
 I was wondering if someone could help.

 I am building an exec that displays a panel; from this panel a person can
 choose a system and a date and under the covers we will display the
 appropriate syslog for the 100 systems we maintain.

 Where I am having trouble, since I do not work with ISPF Dialogs all that often,
 is in the panel/rexx interfacing. So my questions are:

 1. How does the exec know when the person hits the PF3 (END) key as they
 are done viewing logs? The way this is designed is they can view multiple logs
 with 1 invocation of the exec.

 2. In looking at the )INIT, )REINIT and )PROC statements I am also having
 trouble clearing out a field for the redisplay (like system name). I can blank
 the field out, but the panel still shows it filled in. Then the person 
 actually
 has to enter it twice for the next display. Does anyone have a sample that
 would help or maybe I am doing this in the wrong place?

 Any other ideas/help would be greatly appreciated?

 Thanks
 Ms Terri Shaffer

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread John Kelly
snip
I'm trying to get DFSMShsm to delete expired datasets based on management 
class
unsnip

Does SECONDARY SPACE MANAGEMENT run? That's where they should get deleted 
if in fact they have not been 'accessed' in 5 days. Also look in the HSM 
SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and see 
if they give you an error for the DSNs that should be scratched.
I don't think that HSM will backup a DSN that hasn't been opened, or at 
least given the basics by a Data Class.

Jack Kelly
202-502-2390 (Office)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Bill Fairchild
Ron,

It's been so long that I had forgotten about RPS.  My comments about a 
connected search loop became obsolete with the advent of RPS.  Then the average 
value of 1/2 rotation was to compute the disconnect time waiting for RPS to 
cause a reconnect to the channel, assuming that the sector value had been 
computed correctly.  Some more milliseconds of disconnect time were added in to 
account for average seek.  After RPS' advent, connect time was 100% due to data 
transfer.  Today it is different thanks to FICON and controller microcode.  At 
one time, all the handshaking necessary to get the I/O started was lumped into 
pend time.  With RPS, connect time was reliably used for calculating work done, 
and pend and disconnect time were attributed to queueing and thus 
non-repeatable.  Another component of queueing that is not visible and usually 
ignored is I/O interrupt pending time, caused by not having a CPU available to 
field an interrupt as soon as the I/O ends.  This component is still with us.

After 20+ years our memories have trouble recalling all the details.  Like you, 
I would want to see the original quote, its context, and the year when it was 
published.

Bill Fairchild

Software Developer 
Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
Email: bi...@mainstar.com 
Web: www.rocketsoftware.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Ron Hawkins
Sent: Friday, August 14, 2009 9:13 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Degraded I/O performance in 1.10?

Bill,

With XA I don't think that RPS was ever included in connect time. I admit I
only started working on XA in 1984, but everything I had from back then by
Beretvas and Freisenborg uses disconnect time to estimate if there is a seek
problem based on RPS being counted in Disconnect time.

Of course this was focused on 3880 Controllers. I have no idea if it was
different for earlier models that required reconnect for handling TIC.

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 Bill Fairchild
 Sent: Friday, August 14, 2009 6:24 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] Degraded I/O performance in 1.10?
 
 You're right.  It spun at 70 revolutions per second.  The 3380 spun at 60 rps,
 so its revolution took 16.67 ms.
 
 The average latency of a disk drive was useful for calculating connect time
 when every I/O probably involved a real seek (disconnect time) and a real
 partial revolution for the search loop to find the correct record (which was
 all connect time).  But with today's hardware, caching, RAID, channel speed,
 controller buffering, etc., the connect time component should consist almost
 totally of data transfer.  1/2 revolution's worth of data transfer indicates
 the average amount of data to be transferred per I/O is 1/2 of a full track.
 Since EXCP tells SMF to add one to its I/O counters not for every I/O request
 but rather for every block being transferred, then RMF's reported connect time
 for these I/Os should vary widely if BUFNO is varied widely, say from one to
 ten, while the EXCP count reported by SMF would be constant.
 
 I don't doubt the validity of the IBM number at the time it was published
 (aeons ago).  I doubt its validity for today's hardware.  I am only trying to
 guess why IBM recommended that number aeons ago in the face of its obvious
 inapplicability today.
 
 Bill Fairchild
 
 Software Developer
 Rocket Software
 275 Grove Street * Newton, MA 02466-2272 * USA
 Tel: +1.617.614.4503 * Mobile: +1.508.341.1715
 Email: bi...@mainstar.com
 Web: www.rocketsoftware.com
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
Jack,

Secondary Space Mgmt runs at 23:00 every night.  I see no errors in the
HSM sysouts.  

Is it possible that HSM won't touch a dataset that hasn't been opened?
For my test, I merely allocated some datasets then left them sit.  I
just opened one today and changed it to see if that would make any
difference.

Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of John Kelly
Sent: Friday, August 14, 2009 10:14 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and deleting expired datasets by management class

snip
I'm trying to get DFSMShsm to delete expired datasets based on
management 
class
unsnip

Does SECONDARY SPACE MANAGEMENT run? That's where they should get
deleted 
if in fact they have not been 'accessed' in 5 days. Also look in the HSM

SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and see

if they give you an error for the DSNs that should be scratched.
I don't think that HSM will backup a DSN that hasn't been opened, or at 
least given the basics by a Data Class.

Jack Kelly
202-502-2390 (Office)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Gibney, Dave
  Not so much as not opened. But, it won't mess with invalid datasets.
An unopened dataset could be incompletely defined (usually an unknown
DSORG). 
   And, usually DFHSM won't migrate or delete without a backup existing.

Dave Gibney
Information Technology Services
Washington State University


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Pommier, Rex R.
 Sent: Friday, August 14, 2009 10:41 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 Jack,
 
 Secondary Space Mgmt runs at 23:00 every night.  I see no errors in
the
 HSM sysouts.
 
 Is it possible that HSM won't touch a dataset that hasn't been opened?
 For my test, I merely allocated some datasets then left them sit.  I
 just opened one today and changed it to see if that would make any
 difference.
 
 Rex
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of John Kelly
 Sent: Friday, August 14, 2009 10:14 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 snip
 I'm trying to get DFSMShsm to delete expired datasets based on
 management
 class
 unsnip
 
 Does SECONDARY SPACE MANAGEMENT run? That's where they should get
 deleted
 if in fact they have not been 'accessed' in 5 days. Also look in the
 HSM
 
 SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and
 see
 
 if they give you an error for the DSNs that should be scratched.
 I don't think that HSM will backup a DSN that hasn't been opened, or
at
 least given the basics by a Data Class.
 
 Jack Kelly
 202-502-2390 (Office)
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Convert Service Units to MIPS and calculate cost

2009-08-14 Thread Colleen Gordon
1. To calculate the cost of CPU, find out the normalized cost for 1 MIPS and 
find out what machine type they normalize it to, such as Z9 2094 S54 745.
2. Look up the service units per second (26727.5780) for the machine type at 
the following address: 
http://publib.boulder.ibm.com/infocenter/zos/v1r10/index.jsp?topic=/com.ibm.zos.r10.ieae100/paio.htm
3. Find out how many service units the function is using and divide by the service 
units per second; this tells you how much CPU time was used
4. If you have CPU time and not service units, multiply the CPU seconds by the 
service units per second to compute the service units
5. Divide by 50 to get the MIPS

Questions:

 1.  Are these calculations correct?
 2.  Does this mean that each CPU second consumes all of the MIPS on the 
machine?
*   As an example: 35 seconds * 26727.5780 = 935,465.23 divided by 50 
=18,709.30 MIPS were used in the processing?
 3.  To figure cost; multiply the number of MIPS by their normalized per MIPS 
number?

Your help/verification/correction is very much appreciated
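
For what it's worth, here is a small REXX sketch that just mechanizes the arithmetic in the steps above; the 26727.5780 SU/second figure is the one quoted in step 2, and the service-unit value and the divide-by-50 normalization come from the example and are exactly the parts being questioned:

/* REXX - sketch of the service unit arithmetic described above            */
su_per_sec    = 26727.5780                 /* SUs per CPU second (step 2)  */
service_units = 935465.23                  /* measured SUs (the example)   */
cpu_seconds   = service_units / su_per_sec                /* step 3        */
say 'CPU seconds   :' format(cpu_seconds,,2)              /* roughly 35    */
say 'Service units :' format(cpu_seconds * su_per_sec,,2) /* step 4        */
say 'MIPS per step 5:' format(service_units / 50,,2)      /* disputed bit  */

This only restates the steps as written; whether dividing service units by a flat 50 gives a meaningful MIPS figure is, of course, the question being asked.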


Colleen Gordon



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Any Utilities to Archive Members of a PDS/PDSE - Load Libraries, Not Source

2009-08-14 Thread Stocker, Herman
G'day,

I have been tasked with finding a utility that can archive members of a
PDS/PDSE data set, not the entire data set but individual members.  The
utility should be able to keep a number of copies or generations of the load
modules.

I thought I had heard of a product once a long time ago; however, I have lost
all reference to the product and vendor.

Thank you for your assistance.

Regards, 
Herman Stocker


The sender believes that this E-mail and any attachments were free of any
virus, worm, Trojan horse, and/or malicious code when sent. This message and
its attachments could have been infected during transmission. By reading the
message and opening any attachments, the recipient accepts full
responsibility for taking protective and remedial action about viruses and
other defects. The sender's employer is not liable for any loss or damage
arising in any way from this message or its attachments.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Can CICS region share more than one processor

2009-08-14 Thread Terry Draper
Tommy,
   You say you are VSAM.
 
  There is a sub-task for some VSAM functions which is an option within a CICS 
address space. I do not think it takes a lot away from the main task, but make 
sure you are using this option.
 
   You could set up a TOR and route to multiple AORs, and for VSAM read-only 
files, these could be shared between AORs. For update files create a FOR and 
ship the requests from the AOR to the FOR. You could ship all VSAM requests to 
the FOR if you want.
 
   This will add to the overall CPU but will split the load across several 
tasks. It should not require application code changes. You need to decide how 
to split the load between the AORs and implement in the TOR the appropriate 
routing. You can use CICSPLEX SM to control the routing or create your own 
routing exit.
 
   I thought most users use the TOR/AOR structure now, to avoid your problem.
 
  I would also look at data tables to try to reduce the CPU load for VSAM 
accesses. 
 
  Or you could go out and buy a big engine z10.
 
Terry Draper
zSeries Performance Consultant
w...@btopenworld.com
mobile:  +966 556730876

--- On Thu, 13/8/09, Tommy Tsui tommyt...@gmail.com wrote:


From: Tommy Tsui tommyt...@gmail.com
Subject: Re: Can CICS region share more than one processor
To: IBM-MAIN@bama.ua.edu
Date: Thursday, 13 August, 2009, 11:40 PM


Our data access for applications is VSAM only, without DB2. We have not
implemented OTE yet, which means we only have the QR TCB. As you know, we
would have to re-write many user programs to either switch to CICSplex with
RLS or to DB2 access.

On Fri, Aug 14, 2009 at 1:01 AM, Terry Draperw...@btopenworld.com wrote:
 Tommy,

 Can I ask a couple of fundamental questions?

 What is the data access for the applications?
 Is it DB2, DL/1, VSAM or something else.
 If DB2 (and I think DL/1) these will already be running on threads and these 
 use their own TCBs, 1 per thread. If so I cannot understand your problem.

 Also do you have a TOR and AORs structure. If not I suggest you go that way.


 Terry Draper
 zSeries Performance Consultant
 w...@btopenworld.com
 mobile:  +966 556730876

 --- On Wed, 12/8/09, Tommy Tsui tommyt...@gmail.com wrote:


 From: Tommy Tsui tommyt...@gmail.com
 Subject: Can CICS region share more than one processor
 To: IBM-MAIN@bama.ua.edu
 Date: Wednesday, 12 August, 2009, 2:44 PM


 Hi ,

 We hit a problem that our CICS cannot utilize more than one CPU
 processor, and IBM recommends our shop upgrade to CICSplex. Apart from this,
 is there any other way to solve this problem?


 any comment will be appreciated

 best regards

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Archive members of a PDS/PDSE

2009-08-14 Thread Colleen Gordon
There is a product that does this.  It is sold by Mainstar.  It's called 
SYSchange.  It backs up members of a PDS/PDSE every time they are changed 
(automatically).  It keeps however many copies you want to keep in an archive 
database so you can restore them.
Check out the Mainstar web site at www.mainstar.com (http://www.mainstar.com/) 
under products; select Catalog and System Management, then click on SYSchange.

Colleen Gordon


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
Dave,

I know DFHSM won't migrate a dataset without it being backed up
beforehand, but it won't delete one either?  The manual I'm looking at
says that DFHSM processes expiration attributes before it processes
migration attributes.  And in the backup section of the same manual it
says the number of backup copies of a deleted dataset specifies how many to
retain.  It says nothing about needing to have backups before allowing a
delete.  

I double-checked and the datasets I'm testing with have all their
attributes.

I guess I'll wait a week and see if the one I updated will go away.

Thanks.

Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Gibney, Dave
Sent: Friday, August 14, 2009 12:45 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and deleting expired datasets by management class


  Not so much as not opened. But, it won't mess with invalid datasets.
An unopened dataset could be incompletely defined (usually an unknown
DSORG). 
   And, usually DFHSM won't migrate or delete without a backup existing.

Dave Gibney
Information Technology Services
Washington State University


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Pommier, Rex R.
 Sent: Friday, August 14, 2009 10:41 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 Jack,
 
 Secondary Space Mgmt runs at 23:00 every night.  I see no errors in
the
 HSM sysouts.
 
 Is it possible that HSM won't touch a dataset that hasn't been opened?
 For my test, I merely allocated some datasets then left them sit.  I
 just opened one today and changed it to see if that would make any
 difference.
 
 Rex
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of John Kelly
 Sent: Friday, August 14, 2009 10:14 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 snip
 I'm trying to get DFSMShsm to delete expired datasets based on
 management
 class
 unsnip
 
 Does SECONDARY SPACE MANAGEMENT run? That's where they should get
 deleted
 if in fact they have not been 'accessed' in 5 days. Also look in the
 HSM
 
 SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and
 see
 
 if they give you an error for the DSNs that should be scratched.
 I don't think that HSM will backup a DSN that hasn't been opened, or
at
 least given the basics by a Data Class.
 
 Jack Kelly
 202-502-2390 (Office)
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSMS and deleting expired data sets by management class

2009-08-14 Thread Colleen Gordon
HSM cannot manage a data set with DSORG unknown.  You must assign a dataclass 
for HSM to manage it.  If a data set is eligible for expiration but space 
management is not expiring it, this indicates that there could be a problem 
with the records in the MCDS.  You'll need to audit the MCDS to see what the 
errors are and correct them.

If the data sets are not eligible for expiration but you want to expire them 
anyway, you'll have to HDELETE them.


Colleen Gordon

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSMS and deleting expired data sets by management class

2009-08-14 Thread Colleen Gordon
By the way, data doesn't need to be backed up by HSM to be migrated.  There is 
an option in the management class you can select that REQUIRES a backup before 
migration but if you don't have that turned on; you can successfully migrate 
data without a backup.  There is no relationship between deleting data with 
space management and whether a backup exists or not.

Colleen Gordon




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Any Utilities to Archive Members of a PDS/PDSE - Load Libraries, Not Source

2009-08-14 Thread Traylor, Terry
CA-Panexec is one 


Terry Traylor 
charlesSCHWAB 
TIS Mainframe Storage Management 
Remedy Queue: tis-hs-mstg
(602) 977-5154

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Stocker, Herman
Sent: Friday, August 14, 2009 10:57 AM
To: IBM-MAIN@bama.ua.edu
Subject: Any Utilities to Archive Members of a PDS/PDSE - Load
Libraries, Not Source

G'day,

I have been tasked with finding a utility that can archive members of a
PDS/PDSE data set, not the entire data set but individual members.  The
utility should be able to keep a number of copies or generations of the
load modules.

I thought I had heard of a product once a long time ago; however, I have
lost all reference to the product and vendor.

Thank you for your assistance.

Regards,
Herman Stocker


The sender believes that this E-mail and any attachments were free of
any
virus, worm, Trojan horse, and/or malicious code when sent. This message
and
its attachments could have been infected during transmission. By reading
the
message and opening any attachments, the recipient accepts full
responsibility for taking protective and remedial action about viruses
and
other defects. The sender's employer is not liable for any loss or
damage
arising in any way from this message or its attachments.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Batch connection to CSKL (was: Trigger CICS transaction from Batch Job)

2009-08-14 Thread Gil, Victor x28091
Thanks, Jantje.

I vaguely remember coding a simple child server circa 1997 and having an
issue with the need to start it on a terminal to enable password
maintenance via EXEC CICS SIGNON.

But I've never touched the batch part. Will take a look.

Thanks again,
-Victor- 

===
Date:Thu, 13 Aug 2009 07:16:24 -0500
From:Jan MOEYERSONS jan.moeyers...@adelior.be
Subject: Re: Batch connection to CSKL (was: Trigger CICS transaction
from Batch Job)

snip

You can find the fine manual that describes the CICS side of such at z/OS 
Communications Server IP CICS Sockets Guide 
(http://www.elink.ibmlink.ibm.com/publications/servlet/pbi.wss?CTY=US&FNC=SRX&PBL=SC31-8807-05). 
The batch side of things is described in z/OS Communications Server IP Sockets 
Application Programming Interface Guide and Reference 
(http://www.elink.ibmlink.ibm.com/publications/servlet/pbi.wss?CTY=US&FNC=SRX&PBL=SC31-8788-07).

I did code a child server, but I am afraid I cannot give you that code,
because 
it was developed for a specific customer who paid for it and who
actually owns 
that code. But the documentation is fairly good and there are samples.

The code for the client was for Windows.

The point, however, is that CICS comes with infrastructure (the CSKL 
transaction) that makes managing a listener in CICS quite simple and
effective. 

All you need to do is to code the child server transaction in your
programming 
language of choice. The book gives you guidelines on how to structure
the 
program for this transaction. And you do not have to worry about how to 
listen for an incoming connection (because that is the difficult part
that CICS 
has already done for you...). Your transaction is started by CSKL and
receives 
the socket number that represents the connection. All you need to do is
to 
TAKESOCKET and start receiving (and sending) data over it.

For the batch program, you just do simple sockets programming, again in
the 
programming language of your choice. 

The connection is set up by opening a socket to the port where the CSKL is 
listening and sending the trancode plus some security-related information (if 
you need it). CICS is listening for incoming connections; it spawns the 
transaction corresponding to the trancode you ask for and GIVESOCKETs the 
socket to it.

Then your batch program and your CICS transaction engage in a TCP 
conversation and can pass data back and forth among them without 
restriction.
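
To make the batch side a little more concrete, here is a very rough REXX sketch of the client flow just described, assuming the REXX SOCKET external function documented in the IP Sockets API book referenced above; the host address, port and trancode are made up, and the exact message layout the listener expects is described in the CICS Sockets Guide:

/* REXX - rough sketch of a batch client talking to the CICS listener   */
host     = '10.1.1.1'                   /* CICS host - illustrative only */
port     = 3010                         /* port CSKL listens on - ditto  */
trancode = 'CSRV'                       /* child server tran - made up   */
say Socket('Initialize', 'BATCHCLI')
parse value Socket('Socket', 'AF_INET', 'STREAM') with s_rc sockid .
if s_rc = 0 then do
   say Socket('Connect', sockid, 'AF_INET' port host)
   say Socket('Send', sockid, trancode)     /* listener starts the tran  */
   say Socket('Recv', sockid, 1000)         /* then converse as designed */
   say Socket('Close', sockid)
end
say Socket('Terminate', 'BATCHCLI')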

Do read the book; it is all in there.

If you need more information, please ask me specific, detailed questions
off-
list.

Cheers,

Jantje.
This message and any attachments are intended only for the use of the addressee 
and
may contain information that is privileged and confidential. If the reader of 
the 
message is not the intended recipient or an authorized representative of the
intended recipient, you are hereby notified that any dissemination of this
communication is strictly prohibited. If you have received this communication in
error, please notify us immediately by e-mail and delete the message and any
attachments from your system.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS batch query

2009-08-14 Thread Neil Duffee
On 2009-08-13 at 12:08, concerning DFSMS batch query 
bre..sn...@fan..m...com wrote to IBM-Main:

 Is there a way to query DFSMS volume status with a batch job? 

DFSMSdfp v1r3.0 Storage Admin Ref SC26-7402-01, Appendix E : Using 
NaviQuest; Performing Storage Administration Tasks in Batch - pg 343.

SYS1.SACBCNTL Sample JCL Library - pg 345
ACBJBAI8 - Generate DASD volume list, ...
ACBJBAIA - Generate ISMF mountable tape volume list, ...
ACBJBAIX - Generate Storage Group list, ...

I schedule ACBJBAIX each morning before I get to the office so I can 
see if anything overflowed into my SPILL group for special 
consideration.  I also FDR Move that group each morning since most of 
my overflows occur because of over-generous SPACE= values.  90% of 
the time, the finalized dataset fits back in its original, proper 
StorGrp.  

ps.  I also use ACBJBAG1/ACBJBAIC, et al, monthly to test my ACS 
routines and the planned upgrades.  (My rules have been slowly 
evolving over the last 6 years.)

--  signature = 6 lines follows --
Neil Duffee, Joe SysProg, U d'Ottawa, Ottawa, Ont, Canada
telephone:1 613 562 5800 x4585 fax:1 613 562 5161
mailto:NDuffee of uOttawa.ca http:/ /aix1.uottawa.ca/ ~nduffee
How *do* you plan for something like that? Guardian Bob, Reboot
For every action, there is an equal and opposite criticism.
Systems Programming: Guilty, until proven innocent John Norgauer 
2004

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: 0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA SET

2009-08-14 Thread Schwarz, Barry A
What program and what control cards are you using to accomplish this?

Any mismatch between the SMS status of a volume and the SMS status of a
dataset on that volume is a problem you need to fix as quickly as
possible, independent of any cloning efforts.

If you go to 3.4, list all the datasets on the volume, and then type 
 LISTCAT ENT(/) ALL
to the left of the first, you can check the resulting display for SMS
data.  A managed dataset will have at least a STORCLAS value.  Repeat
the process using the = command for each dataset on the volume, one at a
time.
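
If there are more than a few to check, the same LISTCAT can be driven from a
quick REXX exec (in batch TSO or from ISPF); a rough sketch, with made-up
dataset names:

/* REXX - flag which datasets LISTCAT shows as SMS-managed (sketch only) */
dsn.1 = 'SYS2.SOME.DATASET'             /* illustrative names only       */
dsn.2 = 'SYS2.ANOTHER.ONE'
dsn.0 = 2
do i = 1 to dsn.0
   call outtrap 'line.'                           /* capture the output  */
   address tso "LISTCAT ENTRIES('"dsn.i"') ALL"
   call outtrap 'off'
   sms = 'non-SMS'
   do j = 1 to line.0
      if pos('STORAGECLASS', line.j) > 0 then sms = 'SMS-managed'
   end
   say left(dsn.i, 44) sms
end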


Do you really have a VVDS on your SYSRES?  In my limited experience, it
makes cloning a lot more difficult.

-Original Message-
From: Glen Gasior 
Sent: Thursday, August 13, 2009 3:58 PM
To: IBM-MAIN@bama.ua.edu
Subject: 0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA
SET

*
I tried cloning a SYSRES volume for the first time at this site and
received this message.

0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA SET

My suspicion is that there are 8 SMS datasets on the non-sms SYSRES.

I am hoping someone has encountered this before and can recommend how to
correct the situation. I would like to correct the status of the datasets
before the ADRDSSU clone, but if the best way to go is correct it in the
cloned copy I imagine that will suffice for now.

I imagine an IDCAMS alter might get the SMS information out of the catalog
entry, but I am also imagining there is something in the VVDS or VTOC that
would need to be corrected.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ron Hawkins
Ted,

Isn't the statement "I'm pretty sure it's derived from the equation 128
mics * 6500 = 8.32 ms" a little arse about? That's how service units are
derived, but Connect time and EXCP counts are not derived, they are
recorded.

And connect time is definitely not constant. With FICON the attribution of
connect time varies from vendor to vendor, and transfer time varies with
path activity such that connect time can be double accounted. I'd go as far
as to say that in going from ESCON to FICON connect time became one of the
more unreliable IO metrics.

I still use connect time for some things, but I agree with Bill that IO
service units derived from Connect Time are somewhat useless.

I think you should restate what you are disputing, or show me which EXCP
count field is recorded incorrectly when IOSERV=TIME is used. EXCP counts
are EXCP counts. Connect Time is Connect Time. The only thing that IOSERV
changes is whether IO services units are derived from EXCP counts or Connect
Time. If for some reason you ignored block count fields and created a block
count from IO service units then that result would change, but who would do
that in the first place?

Finally, my Guru has spoken! I recalled that Barry calculated both EXCP and
IOTM from Service Units in the MXG Type72 record, and while checking that I
found this interesting note:

 /* NOTE: PRIOR TO MVS/ESA 5.2, IO SERVICE UNITS COULD BE BASED ON   */
 /*  EITHER EXCP COUNT OR IO CONNECT TIME, AND MXG CALCULATED TWO*/
 /*  VARIABLES, PGPEXCP AND PGPIOTM TO GIVE THE RAW IO UNITS.  AS*/
 /*  THERE WAS NO FLAG IN TYPE72 TO IDENTIFY WHICH UNITS WERE USED,  */
 /*  BOTH VARIABLES WERE CALCULATED KNOWING ONLY ONE WAS VALID.  */
 /*  WHEN DEVICE CONNECT TIME WAS USED FOR SERVICE UNITS, A SERVICE  */
 /*  UNIT WAS DEFINED AS 65 CONNECT TIME UNITS, AND A CONNECT TIME   */
 /*  UNIT IS 128 MICROSECONDS, HENCE THE 8320E-6 FACTOR IN PGPIOTM.  */
 /* BUT BEGINNING WITH MVS/ESA 5.2, IO SERVICE UNITS CAN ONLY BE */
 /*  BASED ON EXCP COUNT, SO PGPIOTM IS FORCED MISSING FOR 5.2+. */

So based on that it would seem that IOSERV=TIME is no longer honoured and IO
Service Units are always based on EXCP count. It also corrects my 6500* 128
calculation - it should be 65.
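
Spelled out, that is just:

/* REXX - the connect-time-based IO service unit factor, spelled out */
say 65 * 128              /* 8320 microseconds per IO service unit   */
say 65 * 128 / 1000000    /* = 0.00832 seconds, i.e. 8.32 ms         */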

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of
 Ted MacNEIL
 Sent: Friday, August 14, 2009 7:12 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: [IBM-MAIN] Degraded I/O performance in 1.10?
 
 If I were doing I/O performance measurement and tuning today, I would
most
 definitely not use that number.
 
 Why not?
 That is what it is -- constant.
 I'm pretty sure it's derived from the equation 128 mics * 6500 = 8.32 ms.
 
 Since you are using the number, you should verify its accuracy and, if
not
 accurate any more, ask IBM yourself or else find a more modern analysis of
 average I/O service time.
 
 The number is good for the 'quick and dirty'.
 I never said that Ron's suggestion for the analysis of I/O from RMF (etc)
was
 wrong.
 Nor did I say I was using the number, myself.
 I was just disputing the comment that EXCP's were only block counts.
 That depends on the setting (TIME or COUNT).
 And, I believe COUNT is still the default.
 -
 Too busy driving to stop for gas!
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Ted MacNEIL
So based on that it would seem that IOSERV=TIME is no longer honoured and IO
Service Units are always based on EXCP count. It also corrects my 6500* 128
calculation - it should be 65.

I honestly don't know, but the last doc I looked at was circa 1.7  and the 
distinction of COUNT/TIME was still there, with nothing saying 'no longer 
honoured'.
When I get to a PC I'm going to look it up in the current INITTuna.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Difference between Linklist, LPA, MLPA, PLPA

2009-08-14 Thread Donald Johnson
I have never been asked to explain this before, and now I am not sure I have
it right.

Can someone point me to a comprehensive but short discussion on what
Linklist, LPA, MLPA and PLPA are, and the advantages/disadvantages of one
over the other?

Thanks!
Don

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Difference between Linklist, LPA, MLPA, PLPA

2009-08-14 Thread Patrick Lyon
On Fri, 14 Aug 2009 14:42:47 -0400, Donald Johnson dej@gmail.com 
wrote:

I have never been asked to explain this before, and now I am not sure I have
it right.

Can someone point me to a comprehensive but short discussion on what
Linklist, LPA, MLPA and PLPA are, and the advantages/disadvantages of one
over the other?

Don - try the Introduction to the New Mainframe: z/OS Basics, page 513.

http://www.redbooks.ibm.com/redbooks/pdfs/sg246366.pdf

HTH, 
Pat Lyon

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Gibney, Dave
  I honestly don't know about back-up before expire. Colleen is probably
correct. I was just thinking logically. I know I would be disturbed
if a dataset I expected to have a backup for was expired before the
backup was made. But, I can't cite a case either way right now.

Dave Gibney
Information Technology Services
Washington State University


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Pommier, Rex R.
 Sent: Friday, August 14, 2009 11:12 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 Dave,
 
 I know DFHSM won't migrate a dataset without it being backed up
 beforehand, but it won't delete one either?  The manual I'm looking at
 says that DFHSM processes expiration attributes before it processes
 migration attributes.  And in the backup section of the same manual it
 says the number of backup copies of a deleted dataset says how many to
 retain.  It says nothing about needing to have backups before allowing
 a
 delete.
 
 I double-checked and the datasets I'm testing with have all their
 attributes.
 
 I guess I'll wait a week and see if the one I updated will go away.
 
 Thanks.
 
 Rex
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Gibney, Dave
 Sent: Friday, August 14, 2009 12:45 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 
   Not so much as not opened. But, it won't mess with invalid datasets.
 An unopened dataset could be incompletely defined (usually an unknown
 DSORG).
And, usually DFHSM won't migrate or delete without a backup
 existing.
 
 Dave Gibney
 Information Technology Services
 Washington State University
 
 
  -Original Message-
  From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
  Behalf Of Pommier, Rex R.
  Sent: Friday, August 14, 2009 10:41 AM
  To: IBM-MAIN@bama.ua.edu
  Subject: Re: DFSMS and deleting expired datasets by management class
 
  Jack,
 
  Secondary Space Mgmt runs at 23:00 every night.  I see no errors in
 the
  HSM sysouts.
 
  Is it possible that HSM won't touch a dataset that hasn't been
 opened?
  For my test, I merely allocated some datasets then left them sit.  I
  just opened one today and changed it to see if that would make any
  difference.
 
  Rex
 
  -Original Message-
  From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
  Behalf Of John Kelly
  Sent: Friday, August 14, 2009 10:14 AM
  To: IBM-MAIN@bama.ua.edu
  Subject: Re: DFSMS and deleting expired datasets by management class
 
  snip
  I'm trying to get DFSMShsm to delete expired datasets based on
  management
  class
  unsnip
 
  Does SECONDARY SPACE MANAGEMENT run? That's where they should get
  deleted
  if in fact they have not been 'accessed' in 5 days. Also look in the
  HSM
 
  SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and
  see
 
  if they give you an error for the DSNs that should be scratched.
  I don't think that HSM will backup a DSN that hasn't been opened, or
 at
  least given the basics by a Data Class.
 
  Jack Kelly
  202-502-2390 (Office)
 
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Darth Keller
From the manual DFSMS Storage Administration Reference:

The Expire after Days Non-Usage field specifies how much time must elapse 
since last access before a data set or object becomes ELIGIBLE for 
expiration.

The keyword here is ELIGIBLE.   IIRC, deletion of datasets occurs during 
Primary Space Management.  PSM does not automatically occur for every 
volume in a storage group.  You have to know what your AutoMigration 
threshold values are for the storage group and then look at how that works 
with the Thresholds defined for that storage group.  This behavior is 
covered in the DFHSM Storage Administration Guide.

dd keller 
**
This e-mail message and all attachments transmitted with it may contain legally 
privileged and/or confidential information intended solely for the use of the 
addressee(s). If the reader of this message is not the intended recipient, you 
are hereby notified that any reading, dissemination, distribution, copying, 
forwarding or other use of this message or its attachments is strictly 
prohibited. If you have received this message in error, please notify the 
sender immediately and delete this message and all copies and backups thereof.

Thank you.
**

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: CA Mainframe 2.0

2009-08-14 Thread Gibney, Dave
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Scott Fagen
 Sent: Thursday, July 23, 2009 3:15 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: CA Mainframe 2.0
 
 Dave,
 
 The best way to obtain CA MSM is to contact the AD/AM on your account.
 
 On Wed, 22 Jul 2009 19:21:29 -0700, Gibney, Dave gib...@wsu.edu
 wrote:
Hi Scott,
 
 One question, when can we have it. We might have tried today :)
  Sorry to revive this thread, maybe.

  Scott, we made such a request that same day. No response as of yet.

Dave Gibney
Information Technology Services
Washington State University

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS batch query

2009-08-14 Thread Darth Keller
Hallelujah!  Another NaviQuest believer! 

After several years of going to Share's, I was beginning to believe I was 
the only one who saw the value of a really nice tool.  I've totally 
re-written SMS environments in several very large shops and swear by 
NaviQuest. 

Welcome, brother!
dd keller


**
This e-mail message and all attachments transmitted with it may contain legally 
privileged and/or confidential information intended solely for the use of the 
addressee(s). If the reader of this message is not the intended recipient, you 
are hereby notified that any reading, dissemination, distribution, copying, 
forwarding or other use of this message or its attachments is strictly 
prohibited. If you have received this message in error, please notify the 
sender immediately and delete this message and all copies and backups thereof.

Thank you.
**

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Naviquest

2009-08-14 Thread Colleen Gordon
Hey Darth,

It's been a while since I've seen a Naviquest presentation at Share.  How about 
signing up to present on this topic?  It is a great tool!

Colleen Gordon


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Naviquest

2009-08-14 Thread Schwartz, Alan
Even I'd attend that !!


Alan Schwartz

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Colleen Gordon
Sent: Friday, August 14, 2009 2:12 PM
To: IBM-MAIN@bama.ua.edu
Subject: Naviquest

Hey Darth,

It's been a while since I've seen a Naviquest presentation at Share.
How about signing up to present on this topic?  It is a great tool!

Colleen Gordon


--
For IBM-MAIN subscribe / signoff / archive access instructions, send
email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search
the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS activation - RACF AUDIT

2009-08-14 Thread Ulrich Boche

Robert S. Hansel , RSH wrote:

Jennifer,

Unfortunately, it is WAD. The ISMF programs do not use the FACILITY class
STGADMIN profiles for governing user authority. To control ISMF, you either
have to restrict access to the ISMF program library or restrict access to the
ISMF programs using PROGRAM class profiles. Some organizations choose to
only protect program DGTFPF05 which allows you to switch to 'Storage
Administrator Mode', but this is not a rigorous control measure since the
mode is actually governed by a bit in your ISPF profile that acts as a
switch.

For more information, see our presentation titled RACF and Storage
Administration available through our website at url:

http://www.rshconsulting.com/racfres.htm



We wrote about the security problem with the ISMF dialog and the (rather 
awkward) circumvention using PROGRAM class profiles in the redbook 
GG24-3378 DFSMS and RACF Usage Considerations which was published in 1998.


It is amazing that, in all these years, DFSMS development never bothered 
to spend the few lines of code necessary to bring this piece to 
reasonable security (although they implemented the STGADMIN profiles).

--
Ulrich Boche
SVA GmbH, Germany
IBM Premier Business Partner

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex?

2009-08-14 Thread R.S.

Ted MacNEIL pisze:
[...]

The proper way is to meet your business needs and implement SYSPLEX, 
application merging, in a low risk and properly planned way.


Technical possibilities may not always reflect business needs.
For example, there is no painless and easy way to merge RACF db's, and 
for sure IRRUT400 is not the recommended way to do this.


Sysplex was meant as a way for the systems to grow and to achieve 
higher levels of availability. It wasn't a method to merge several 
applications under one system.
Parallel sysplex means data sharing, but people often ask how to keep 
sysplex members separated. As separated as possible. Why? Because 
sometimes consolidation means price-plex - a sysplex built for financial 
reasons only, just to lower license fees. I don't want to criticize such 
an approach; however, such unnatural sysplexes produce untypical problems.



In the case of two IMS databases too big to fit in one system, the suggested 
way would be to create a sysplex from one monoplex and then merge in the 
other IMS application (not the system).

This is my understanding, however I heard similar opinions in Montpellier.

Regards
--
Radoslaw Skorupka
Lodz, Poland


--
BRE Bank SA
ul. Senatorska 18
00-950 Warszawa
www.brebank.pl

District Court for the Capital City of Warsaw, 
12th Commercial Division of the National Court Register, 
entrepreneurs' register number KRS 025237

NIP: 526-021-50-88
As of 01.01.2009 the share capital of BRE Bank SA (fully paid up) amounts to 
118,763,528 zloty. In connection with the conditional increase of the share 
capital, on the basis of resolution XXI of the General Meeting of 16 March 
2008 and resolution XVI of the Extraordinary General Meeting of 27 October 
2008, it may be increased to 123,763,528 zloty. The shares in the increased 
share capital of BRE Bank SA will be fully paid up.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: 0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA SET

2009-08-14 Thread John Laubenheimer
Try allocating any dataset on the existing SYSRES volume.  If there are any 
datasets in transition (partial SMS conversion/deconversion), you won't be 
able to allocate anything.

Most likely, your ACS routine (STORCLAS) is assigning a storage class to these 
datasets while trying to clone the volume.  If this is the case, then you need 
to execute ADRDSSU with the ADMINISTRATOR, BYPASSACS(**) and 
NULLSTORCLAS keywords.  All require some appropriate level of RACF (or other 
security package) authority to use.

On Thu, 13 Aug 2009 17:57:31 -0500, Glen Gasior 
glen.manages@gmail.com wrote:

*
I tried cloning a SYSRES volume for the first time at this site and received
this message.

0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA SET

My suspicion is that there are 8 SMS datasets on the non-sms SYSRES.

I am hoping someone has encountered this before and can recommend how to
correct the situation. I would like to correct the status of the datasets
before the ADRDSSU clone, but if the best way to go is correct it in the
cloned copy I imagine that will suffice for now.

I imagine an IDCAMS alter might get the SMS information out of the catalog
entry, but I am also imagining there is something in the VVDS or VTOC that
would need to be corrected.

Thanks for any help.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: NaviQuest

2009-08-14 Thread Darth Keller
Hey Darth,
It's been a while since I've seen a NaviQuest presentation at Share. How 
about signing up to present on this topic?  It is a great tool!
Colleen Gordon

Hi Colleen - 

If I can get past budget constraints and can get both permission to go to 
Share next year and then also permission from Corporate to do the 
presentation (required as a condition of employment), I think that would 
actually be fun to do.  I'll start talking to my boss about it.

ddk



**
This e-mail message and all attachments transmitted with it may contain legally 
privileged and/or confidential information intended solely for the use of the 
addressee(s). If the reader of this message is not the intended recipient, you 
are hereby notified that any reading, dissemination, distribution, copying, 
forwarding or other use of this message or its attachments is strictly 
prohibited. If you have received this message in error, please notify the 
sender immediately and delete this message and all copies and backups thereof.

Thank you.
**

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex?

2009-08-14 Thread Ted MacNEIL
Technical possibilities may not always reflect business needs.

True, but we, as technicians, have a responsibility to meet, or compromise 
with, as much of the business need as possible.


For example, there is no painless and easy way to merge RACF db's and for sure 
it is IRRUT400 is not recommended way to do this.

I've worked with RACF since 1984, and we never had different DB's across 
systems.
We used whatever sharing methodologies were supported by IBM.

Sysplex was meant as a way for the systems to grow up and to achieve higher 
levels of availability.

You are limiting what can be done with a proper exploitation of SYSPLEX.

It wasn't a method to merge several 
applications under one system.

I disagree, as I've already stated.
We couldn't merge our two IMS applications, and get rid of XRF, without SYSPLEX.
-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


shutting down a 2105's Linux console

2009-08-14 Thread McKown, John
How does one do this? Logging in as SERVICE and looking at the selections in 
Gnome didn't seem to have a way to shutdown the Linux system. I remembered that 
some Linux systems will shutdown if you tap the power button quickly. On the 
2105, this crashed Linux and it had to do an fsck() due to a dirty /.

John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Neal Scheffler

Rex,

Since you are not receiving any errors in your MIGLOGs regarding the 
dataset in question, space management is probably not running for that 
storage group.  You must have Auto Migrate set to something other than 
NO to get space management to run and expire datasets.


Neal Scheffler

Pommier, Rex R. wrote:

Jack,

Secondary Space Mgmt runs at 23:00 every night.  I see no errors in the
HSM sysouts.  


Is it possible that HSM won't touch a dataset that hasn't been opened?
For my test, I merely allocated some datasets then left them sit.  I
just opened one today and changed it to see if that would make any
difference.

Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of John Kelly
Sent: Friday, August 14, 2009 10:14 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and deleting expired datasets by management class

snip
I'm trying to get DFSMShsm to delete expired datasets based on
management 
class

unsnip

Does SECONDARY SPACE MANAGEMENT run? That's where they should get deleted 
if in fact they have not been 'accessed' in 5 days. Also look in the HSM 
SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and see 
if it gives you an error for the DSNs that should be scratched.
I don't think that HSM will back up a DSN that hasn't been opened, or at 
least given the basics by a Data Class.


Jack Kelly
202-502-2390 (Office)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


  


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired data sets by management class

2009-08-14 Thread Pommier, Rex R.
Hi Colleen,

Both of the datasets that I allocated are defined as PS.  From a 3.4
listing of one of the test datasets:


Data Set Name . . . . : I.RRP.JUNK

General Data                             Current Allocation
 Management class . . : MCINTUAT          Allocated tracks  . : 1
 Storage class  . . . : SCSTD             Allocated extents . : 1
 Volume serial . . . : ZD2001
 Device type . . . . : 3390
 Data class . . . . . : DCSTD            Current Utilization
 Organization  . . . : PS                 Used tracks . . . . : 0
 Record format . . . : FB                 Used extents  . . . : 0
 Record length . . . : 80
 Block size  . . . . : 27920
 1st extent tracks . : 1
 Secondary tracks  . : 1
 Data set name type  : SMS               Compressible  . . . : NO

 Creation date . . . : 2009/08/06        Referenced date . . : 2009/08/07
 Expiration date . . : ***None***

 


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Colleen Gordon
Sent: Friday, August 14, 2009 1:17 PM
To: IBM-MAIN@bama.ua.edu
Subject: DFSMS and deleting expired data sets by management class

HSM cannot manage a data set with DSORG unknown.  You must assign a
data class for HSM to manage it.  If a data set is eligible for
expiration but space management is not expiring it, this indicates that
there could be a problem with the records in the MCDS.  You'll need to
audit the MCDS to see what the errors are and correct them.

If the data sets are not eligible for expiration but you want to expire
them anyway, you'll have to HDELETE them.
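
If you do go the manual route, a minimal sketch of that command from TSO, using the 
test data set name that shows up later in this thread (substitute your own name, of 
course):

   HDELETE 'I.RRP.JUNK'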


Colleen Gordon

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Any Utilities to Archive Members of a PDS/PDSE - Load Libraries, Not Source

2009-08-14 Thread Staller, Allan
Archiver on the CBT Tape  WWW.CBTTAPE.ORG FILE 147

SNIP
I have been tasked with finding a utility that can archive members of a
PDS/PDSE data set, not the entire data set but individual members.  The
utility should be able to keep a number of copies or generations of the load
modules.
/snip

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
I've got just the opposite thought.  I would probably be disturbed if
a dataset that I wanted to delete wouldn't go because it doesn't have a
backup.  I'm thinking of the situation where, for example, at my site we
use a specific naming convention for SMS-managed datasets that are
temporary in that they are only used for a job or a few jobs then are
deleted.  They aren't permanent datasets that have to hang around for
days or longer.  We keep them around for restarts and reruns, but at the
end of their needed time in the batch cycle, they are deleted.  It would
cause misery to revamp our batch stream to change it to accommodate the
idea that these short-term datasets couldn't be deleted until they have
been backed up.

I told you to delete, now go away already!!!  :-)


Rex




-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Gibney, Dave
Sent: Friday, August 14, 2009 1:55 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and deleting expired datasets by management class


  I honestly don't know about back-up before expire. Colleen is probably
correct. I was just thinking logically. I know I would be disturbed
if a dataset I expected to have a backup for was expired before the
backup was made. But I can't cite a case either way right now.

Dave Gibney
Information Technology Services
Washington State University


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Pommier, Rex R.
 Sent: Friday, August 14, 2009 11:12 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 Dave,
 
 I know DFHSM won't migrate a dataset without it being backed up
 beforehand, but it won't delete one either?  The manual I'm looking at
 says that DFHSM processes expiration attributes before it processes
 migration attributes.  And in the backup section of the same manual it
 says the number of backup copies of a deleted dataset says how many to
 retain.  It says nothing about needing to have backups before allowing
 a
 delete.
 
 I double-checked and the datasets I'm testing with have all their
 attributes.
 
 I guess I'll wait a week and see if the one I updated will go away.
 
 Thanks.
 
 Rex
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Gibney, Dave
 Sent: Friday, August 14, 2009 12:45 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 
   Not so much that it hasn't been opened; rather, it won't mess with invalid datasets.
  An unopened dataset could be incompletely defined (usually an unknown
  DSORG). And usually DFHSM won't migrate or delete without a backup
  existing.
 
 Dave Gibney
 Information Technology Services
 Washington State University
 
 
  -Original Message-
  From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
  Behalf Of Pommier, Rex R.
  Sent: Friday, August 14, 2009 10:41 AM
  To: IBM-MAIN@bama.ua.edu
  Subject: Re: DFSMS and deleting expired datasets by management class
 
  Jack,
 
  Secondary Space Mgmt runs at 23:00 every night.  I see no errors in
 the
  HSM sysouts.
 
  Is it possible that HSM won't touch a dataset that hasn't been
 opened?
  For my test, I merely allocated some datasets then left them sit.  I
  just opened one today and changed it to see if that would make any
  difference.
 
  Rex
 
  -Original Message-
  From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
  Behalf Of John Kelly
  Sent: Friday, August 14, 2009 10:14 AM
  To: IBM-MAIN@bama.ua.edu
  Subject: Re: DFSMS and deleting expired datasets by management class
 
  snip
  I'm trying to get DFSMShsm to delete expired datasets based on
  management
  class
  unsnip
 
  Does SECONDARY SPACE MANAGEMENT run? That's where they should get
  deleted
  if in fact they have not been 'accessed' in 5 days. Also look in the
  HSM
 
  SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and
  see
 
  if it gives you an error for the DSNs that should be scratched.
  I don't think that HSM will back up a DSN that hasn't been opened, or at
  least given the basics by a Data Class.
 
  Jack Kelly
  202-502-2390 (Office)
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Gibney, Dave
-Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Pommier, Rex R.
 Sent: Friday, August 14, 2009 12:50 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class
 
 I've got just the opposite thought.  I would probably be disturbed
if
 a dataset that I wanted to delete wouldn't go because it doesn't have
a
 backup.  I'm thinking of the situation where, for example, at my site
 we
 use a specific naming convention for SMS-managed datasets that are
 temporary in that they are only used for a job or a few jobs then
are
 deleted.  They aren't permanent datasets that have to hang around
for
 days or longer.  We keep them around for restarts and reruns, but at
 the
 end of their needed time in the batch cycle, they are deleted.  It
 would
 cause misery to revamp our batch stream to change it to accommodate
the
 idea that these short-term datasets couldn't be deleted until they
have
 been backed up.
 
 I told you to delete, now go away already!!!  :-)

  I should be quiet until I think and look. I have just such a MGMTCLAS:

LINE   MGMTCLAS EXPIRE EXPIRERETPARTIAL  PRIMARY
LEVEL 1  CMD/AUTO  # GDG ON  ROLLED-OFF  BACKUP   
 OPERATOR   NAME NON-USAGE  DATE/DAYSLIMIT   RELEASE  DAYS
DAYS MIGRATE   PRIMARY   GDS ACTION  FREQUENCY
---(1)  --(2)--- ---(3)---  ---(4)  --(5)--  (6)
---(7)--  --(8)--  --(9)---  --(10)--  ---(11)---  --(12)---
MCTEMP NOLIMIT   50  YES
22  BOTH   ---  EXPIRE  0
MCWEEK   7   70  YES
37  BOTH   ---  EXPIRE  1
 LINE   MGMTCLAS # BACKUPS# BACKUPS RETAIN DAYS  RETAIN DAYS
ADM/USER  AUTOBACKUP COPY   LAST MOD
 OPERATOR   NAME (DS EXISTS)  (DS DELETED)  ONLY BACKUP  EXTRA
BACKUPS  BACKUPBACKUP  TECHNIQUE USERID  
---(1)  --(2)--- ---(13)  (14)  ---(15)
(16)-  --(17)--  -(18)-  (39)  --(19)--
MCTEMP 1 0  ---
---  NONE  NO  STANDARD  GIBNEY  
MCWEEK 2 1   10
1  BOTH  YES STANDARD  GIBNEY  

  MCTEMP works just fine so does the MCWEEK which I do backup. Sorry
about the ugly wrap.


Dave Gibney
Information Technology Services
Washington State University

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
Neal,

Storage group is set to YES for the storage these datasets are sitting
on.

Thanks for the hint.

Rex



-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Neal Scheffler
Sent: Friday, August 14, 2009 2:32 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and deleting expired datasets by management class

Rex,

Since you are not receiving any errors in your MIGLOGs regarding the 
dataset in question, space management is probably not running for that 
storage group.  You must have Auto Migrate set to something other than 
NO to get space management to run and expire datasets.

Neal Scheffler

Pommier, Rex R. wrote:
 Jack,

 Secondary Space Mgmt runs at 23:00 every night.  I see no errors in
the
 HSM sysouts.  

 Is it possible that HSM won't touch a dataset that hasn't been opened?
 For my test, I merely allocated some datasets then left them sit.  I
 just opened one today and changed it to see if that would make any
 difference.

 Rex

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of John Kelly
 Sent: Friday, August 14, 2009 10:14 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS and deleting expired datasets by management class

 snip
 I'm trying to get DFSMShsm to delete expired datasets based on
 management 
 class
 unsnip

 Does SECONDARY SPACE MANAGEMENT run? That's where they should get
 deleted 
 if in fact they have not been 'accessed' in 5 days. Also look in the
HSM

 SYSOUTs for the SECONDARY SPACE MANAGEMENT (or process the FSRs) and
see

 if it gives you an error for the DSNs that should be scratched.
 I don't think that HSM will back up a DSN that hasn't been opened, or at
 least given the basics by a Data Class.

 Jack Kelly
 202-502-2390 (Office)

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


   

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS batch query

2009-08-14 Thread Gibney, Dave
  It's useful, but running ISPF in Batch can be quite doggy :)
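
For anyone who hasn't tried it, a rough sketch of the usual batch TSO/ISPF wrapper. 
The ISPF library names are the IBM defaults and the exec/library names are 
placeholders; adjust both for your site:

//ISPBATCH EXEC PGM=IKJEFT01,REGION=0M
//SYSEXEC  DD DISP=SHR,DSN=YOUR.REXX.LIBRARY         <== placeholder
//ISPPLIB  DD DISP=SHR,DSN=ISP.SISPPENU
//ISPMLIB  DD DISP=SHR,DSN=ISP.SISPMENU
//ISPSLIB  DD DISP=SHR,DSN=ISP.SISPSENU
//ISPTLIB  DD DISP=SHR,DSN=ISP.SISPTENU
//ISPPROF  DD UNIT=SYSDA,SPACE=(TRK,(5,5,5)),
//            RECFM=FB,LRECL=80,BLKSIZE=6160,DSORG=PO
//ISPLOG   DD SYSOUT=*,RECFM=VA,LRECL=125
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  ISPSTART CMD(%YOUREXEC) NEWAPPL(ISR)    <== placeholder exec name
/*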

Dave Gibney
Information Technology Services
Washington State University


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Darth Keller
 Sent: Friday, August 14, 2009 12:07 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: DFSMS batch query
 
 Hallelujah!  Another NaviQuest believer!
 
 After several years of going to Share's, I was beginning to believe I
 was
 the only one who saw the value of a really nice tool.  I've totally
 re-written SMS environments in several very large shops and swear by
 NaviQuest.
 
 Welcome, brother!
 dd keller
 
 

 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSMS and deleting expired data sets by management class

2009-08-14 Thread Colleen Gordon
Data Set Name . . . . : I.RRP.JUNK


Here's the problem, Rex: the expiration date is NONE.  When do you want the data 
sets to expire?  X days after creation or X days after last reference date?  
Set up the management class to expire the data sets one way or the other and 
they'll expire.

General Data                             Current Allocation
 Management class . . : MCINTUAT          Allocated tracks  . : 1
 Storage class  . . . : SCSTD             Allocated extents . : 1
 Volume serial . . . : ZD2001
 Device type . . . . : 3390
 Data class . . . . . : DCSTD            Current Utilization
 Organization  . . . : PS                 Used tracks . . . . : 0
 Record format . . . : FB                 Used extents  . . . : 0
 Record length . . . : 80
 Block size  . . . . : 27920
 1st extent tracks . : 1
 Secondary tracks  . : 1
 Data set name type  : SMS               Compressible  . . . : NO

 Creation date . . . : 2009/08/06        Referenced date . . : 2009/08/07
 Expiration date . . : ***None***



Colleen Gordon



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
Darth,

From what I see in the primary space management messages coming from
DFHSM, the volumes these datasets are on do get processed by PSM.  From
the Storage Admin Guide, all the datasets on the volume are scanned for
eligibility for being deleted.  Pass 1 of PSM also deletes all datasets
that are eligible based on MGMT class - regardless of whether they are
backed up or not.  

This is definitely puzzling to me.  From what I can see, I have
everything defined properly, but these things just won't go away.


Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Darth Keller
Sent: Friday, August 14, 2009 2:03 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DFSMS and deleting expired datasets by management class

From the Manual DFSMS Storage Administration Reference:

The Expire after Days Non-Usage field specifies how much time must elapse 
since last access before a data set or object becomes ELIGIBLE for 
expiration.

The keyword here is ELIGIBLE.   IIRC, deletion of datasets occurs during 
Primary Space Management.  PSM does not automatically occur for every 
volume in a storage group.  You have to know what your Auto Migration 
settings are for the storage group and then look at how they work with the 
thresholds defined for that storage group.  This behavior is 
covered in the DFSMShsm Storage Administration Guide.

dd keller 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired data sets by management class

2009-08-14 Thread Gibney, Dave
  On this one, I do know that the ***NONE*** is not a factor in HSM
EXPIRE processing as Rex desires. I had my MCTEMP and MCWEEK working for
years before I expanded my DFHSM set-up to honor EXPDT and RETPD from
JCL.

Dave Gibney
Information Technology Services
Washington State University


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of Colleen Gordon
 Sent: Friday, August 14, 2009 1:31 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: DFSMS and deleting expired data sets by management class
 
 Data Set Name . . . . : I.RRP.JUNK
 
 
 Here's the problem, Rex: the expiration date is NONE.  When do you want
 the data sets to expire?  X days after creation or X days after last
 reference date?  Set up the management class to expire the data sets
 one way or the other and they'll expire.
 
 General Data                             Current Allocation
  Management class . . : MCINTUAT          Allocated tracks  . : 1
  Storage class  . . . : SCSTD             Allocated extents . : 1
  Volume serial . . . : ZD2001
  Device type . . . . : 3390
  Data class . . . . . : DCSTD            Current Utilization
  Organization  . . . : PS                 Used tracks . . . . : 0
  Record format . . . : FB                 Used extents  . . . : 0
  Record length . . . : 80
  Block size  . . . . : 27920
  1st extent tracks . : 1
  Secondary tracks  . : 1
  Data set name type  : SMS               Compressible  . . . : NO

  Creation date . . . : 2009/08/06        Referenced date . . : 2009/08/07
  Expiration date . . : ***None***
 
 
 
 Colleen Gordon
 
 
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired data sets by management class

2009-08-14 Thread Pommier, Rex R.
Colleen,

I thought the expiration date in this screen was only populated if I put
an EXPDT or RETPD in the JCL I used to create it.  From what I read in
the manuals, the MGMT class expire non-usage is used when the dataset
doesn't have an expiration date on it.  From my SMS construct, I have
the expire non usage set to 5 days:

SMS constructs (with extra lines deleted):

 LINE      MGMTCLAS   EXPIRE     EXPIRE     RET
 OPERATOR  NAME       NON-USAGE  DATE/DAYS  LIMIT
 --(1)---  --(2)----  ---(3)---  --(4)----  -(5)---
           MCINTUAT   5          NOLIMIT    NOLIMIT
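
Just to illustrate the EXPDT/RETPD point above: an allocation that carries an 
explicit retention period is the kind that does fill that field in, something 
like this (hypothetical data set name, not one of the test files):

//JUNK2    DD DSN=I.RRP.JUNK2,DISP=(NEW,CATLG),RETPD=5,
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            RECFM=FB,LRECL=80,BLKSIZE=27920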



Rex


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Colleen Gordon
Sent: Friday, August 14, 2009 3:31 PM
To: IBM-MAIN@bama.ua.edu
Subject: DFSMS and deleting expired data sets by management class

Data Set Name . . . . : I.RRP.JUNK


Here's the problem, Rex: the expiration date is NONE.  When do you want
the data sets to expire?  X days after creation or X days after last
reference date?  Set up the management class to expire the data sets
one way or the other and they'll expire.

General Data                             Current Allocation
 Management class . . : MCINTUAT          Allocated tracks  . : 1
 Storage class  . . . : SCSTD             Allocated extents . : 1
 Volume serial . . . : ZD2001
 Device type . . . . : 3390
 Data class . . . . . : DCSTD            Current Utilization
 Organization  . . . : PS                 Used tracks . . . . : 0
 Record format . . . : FB                 Used extents  . . . : 0
 Record length . . . : 80
 Block size  . . . . : 27920
 1st extent tracks . : 1
 Secondary tracks  . : 1
 Data set name type  : SMS               Compressible  . . . : NO

 Creation date . . . : 2009/08/06        Referenced date . . : 2009/08/07
 Expiration date . . : ***None***



Colleen Gordon

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Tom Marchant
On Fri, 14 Aug 2009 18:40:58 +, Ted MacNEIL wrote:

Ron Hawkins wrote:
So based on that it would seem that IOSERV=TIME is no longer 

I think you mean IOSRVC, a parameter in IEAIPSxx.

honoured and IO Service Units are always based on EXCP count. 
It also corrects my 6500* 128 calculation - it should be 65.

I honestly don't know, but the last doc I looked at was circa 1.7 
 and the distinction of COUNT/TIME was still there, with nothing 
saying 'no longer honoured'.

This note appears in the Summary of Changes to the Initialization and Tuning
Reference for z/OS 1.6:

quote
Beginning with z/OS V1R3, workload management (WLM) compatibility mode is no
longer available. Information about WLM compatibility mode has been removed
throughout this document, including descriptions of the IEAICSxx parmlib
member, the IEAIPSxx member, and many options of the IEAOPTxx member.
/quote

IIRC, IEAIPSxx is not used when in goal mode.

-- 
Tom Marchant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired datasets by management class

2009-08-14 Thread Pommier, Rex R.
OK, this is tacky, replying to my own post.  I just found a paragraph in
the DFHSM SAG that I missed earlier.  It appears as though these
datasets need to be backed up for DFHSM to automatically delete them.
From the SAG, 

DFSMShsm provides a patch byte that enables users to override the
requirement that an SMS-managed data set have a backup copy before it is
expired. For more information about this patch, refer to Chapter 16,
Tuning DFSMShsm in the z/OS DFSMShsm Implementation and Customization
Guide.  

I have not set this patch byte so it appears as though I have to
actually open and change the dataset to force a backup before it will
get deleted.  I'll do more playing with this one and let everybody know
if this is my problem.  

Thanks again, everybody, for your help and input.

Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Pommier, Rex R.
Sent: Friday, August 14, 2009 9:31 AM
To: IBM-MAIN@bama.ua.edu
Subject: DFSMS and deleting expired datasets by management class

Hi List,

I haven't been able to find an answer in the archives, so I'll ask the
list what I'm missing.  I'm trying to get DFSMShsm to delete expired
datasets based on management class.  To this end, I changed the expire
non-usage field for one of the test management classes to 5 days then
allocated a few new datasets in this management class.  I did this 7
days ago.  Today when I checked, the datasets were still there.  My
EXPIREDDATASET parameter is set to SCRATCH.  What am I missing?  How do
I get these datasets to delete automatically?
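
For reference, that parameter is the SETSYS value in ARCCMDxx; the line looks like 
this (the keyword as documented is EXPIREDDATASETS):

   SETSYS EXPIREDDATASETS(SCRATCH)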

Test dataset catalog entry:

NONVSAM ------- U05.RRP.JUNK
     IN-CAT --- CATALOG.WSCTEST.UCAT
     HISTORY
       DATASET-OWNER-----(NULL)     CREATION--------2009.218
       RELEASE----------------2     EXPIRATION------0000.000
       ACCOUNT-INFO-----------------------------------(NULL)
     SMSDATA
       STORAGECLASS ------SCSTD     MANAGEMENTCLASS-MCINTUAT
       DATACLASS ---------DCSTD     LBACKUP ---0000.000.0000
     VOLUMES
       VOLSER------------ZD2000     DEVTYPE------X'3010200F'     FSEQN------------------0
     ASSOCIATIONS--------(NULL)
     ATTRIBUTES

***

SMS constructs (with extra lines deleted):

 LINE      MGMTCLAS   EXPIRE     EXPIRE     RET       PARTIAL      PRIMARY  LEVEL 1  CMD/AUTO
 OPERATOR  NAME       NON-USAGE  DATE/DAYS  LIMIT     RELEASE      DAYS     DAYS     MIGRATE
 --(1)---  --(2)----  ---(3)---  --(4)----  -(5)---   ---(6)-----  --(7)--  --(8)--  --(9)---
           MCINTUAT   5          NOLIMIT    NOLIMIT   CONDITIONAL  10       35       BOTH


A couple of other items in the SMS construct for this mgmt class are that it
should have 3 backups if the dataset exists and 1 backup for a deleted
dataset.  This dataset apparently hasn't been backed up because I just
created it but didn't open it.  Could this be the reason it isn't being
deleted?  I thought the 1 backup for a deleted dataset simply meant that
if there are backups when the dataset is deleted, to keep 1 backup copy
for a period of time.  Is this a mistaken thought?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSMS and deleting expired data sets by management class

2009-08-14 Thread Colleen Gordon
Hi Rex,

Yes, you're right.  Sorry about that.  Can you cut and paste the entire 
management class definition?  Backup shouldn't have anything to do with it 
unless you have backup turned on in the management class.

Colleen Gordon



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired data sets by management class

2009-08-14 Thread Pommier, Rex R.
Colleen,

Not a problem.  I followed that rabbit earlier which is how I found the
page in the manual that explained the use of **none** in the expiration
date.

Here is the mgmt class def'n.  Hopefully line wrap won't mess it up too
bad.

 MGMTCLAS NAME (2)  . . . . . . . . . : MCINTUAT
 EXPIRE NON-USAGE (3) . . . . . . . . : 5
 EXPIRE DATE/DAYS (4) . . . . . . . . : NOLIMIT
 RET LIMIT (5)  . . . . . . . . . . . : NOLIMIT
 PARTIAL RELEASE (6)  . . . . . . . . : CONDITIONAL
 PRIMARY DAYS (7) . . . . . . . . . . : 10
 LEVEL 1 DAYS (8) . . . . . . . . . . : 35
 CMD/AUTO MIGRATE (9) . . . . . . . . : BOTH
 # GDG ON PRIMARY (10)  . . . . . . . : ---
 ROLLED-OFF GDS ACTION (11) . . . . . : ---
 BACKUP FREQUENCY (12)  . . . . . . . : 0
 # BACKUPS, DS EXISTS (13)  . . . . . : 3
 # BACKUPS, DS DELETED (14) . . . . . : 1
 RETAIN DAYS ONLY BACKUP (15) . . . . : 2
 RETAIN DAYS EXTRA BACKUPS (16) . . . : 1
 ADM/USER BACKUP (17) . . . . . . . . : BOTH
 AUTO BACKUP (18) . . . . . . . . . . : YES
 LAST MOD USERID (19) . . . . . . . . : RRP
 LAST DATE MODIFIED (20)  . . . . . . : 2009/08/05
 LAST TIME MODIFIED (21)  . . . . . . : 11:49

 Fields 22-38 are all blank.

 BACKUP COPY TECHNIQUE (39) . . . . . : STANDARD
 ABACKUP COPY TECHNIQUE (40)  . . . . : STANDARD


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Colleen Gordon
Sent: Friday, August 14, 2009 4:03 PM
To: IBM-MAIN@bama.ua.edu
Subject: DFSMS and deleting expired data sets by management class

Hi Rex,

Yes, you're right.  Sorry about that.  Can you cut and paste the entire
management class definition?  Backup shouldn't have anything to do with
it unless you have backup turned on in the management class.

Colleen Gordon



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DFSMS and deleting expired data sets by management class

2009-08-14 Thread Colleen Gordon
Hi Rex,

You have auto backup turned on and you have # of backups (data set deleted) set 
to 1.  Turn backup off and they'll expire, or you'll need to use the patch you 
found.

If you are only wanting to keep them for 5 days after last reference, then I'm 
guessing that a backup isn't necessary.

Colleen Gordon




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Multi-file tape

2009-08-14 Thread Paul Gilmartin
This worked OK on a virtual tape.  Now I must make
it work on a physical tape.  My EXEC says:

 do File = 0 + 1 to 99
   File = right( File, 3, 0 )
   say
   say ' File' File '='
   InDD = 'F'File
   DynArg = ( 'alloc dd('InDD') dsn(''TAPE.FILE'File''')' ,
              'expdt(98000) recfm(U) blksize(32760)' ,
              'label(BLP) position('File')' ,
              'unit(AB2)' VolArg 'shr reuse' )
   address 'TSO' DynArg
   if RC <> 0 then leave File

   RC = ProcessFile()  /* FREEs InDD */
   if RC <> 0 then leave File
 end File

To my utter dismay, it dismounts the tape after each file
and I must re-mount it.

What's the ALLOCATE analogue of RETAIN?

This takes longer each time.

Do I need to code a DD statement in JCL with RETAIN for
each file?

Should I count my blessings because auto ops is at least
replying to the VSN messages?

Long ago, a colleague caused a tape volume to be mounted,
even across job boundaries.  This is outside my operator's
and my current skill set.  Is that an alternative?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMS and deleting expired data sets by management class

2009-08-14 Thread Pommier, Rex R.
Colleen,

I'll try turning backup off and see if they go away tonight.

Just for my own knowledge, with the number of datasets deleted set to 1,
I have to have a backup before it will allow expiration?  I was under
the impression that this was more of a maximum number of backups to keep
for a deleted dataset.  IOW, in this case, I have # backups of deleted
set to 1, # of backups of active dataset set to 3, and retain days only
backup set to 2.  Given these numbers I thought the system would keep up
to 3 backups of primary datasets, and when the primary gets deleted, it
would also delete any backups I have greater than one, if they exist,
and then after 2 days delete the last backup.  I didn't think it
required a backup before deleting the primary.

Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Colleen Gordon
Sent: Friday, August 14, 2009 4:39 PM
To: IBM-MAIN@bama.ua.edu
Subject: DFSMS and deleting expired data sets by management class

Hi Rex,

You have auto backup turned on and you have # of backups (data set
deleted) set to 1.  Turn backup off and they'll expire, or you'll need to
use the patch you found.

If you are only wanting to keep them for 5 days after last reference
then I'm guessing  that a backup isn't necessary.

Colleen Gordon




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Multi-file tape

2009-08-14 Thread Thompson, Steve
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Paul Gilmartin
Sent: Friday, August 14, 2009 4:44 PM
To: IBM-MAIN@bama.ua.edu
Subject: Multi-file tape

This worked OK on a virtual tape.  Now I must make
it work on a physical tape.  My EXEC says:

 do File = 0 + 1 to 99
   File = right( File, 3, 0 )
   say
   say ' File' File '='
   InDD = 'F'File
   DynArg = ( 'alloc dd('InDD') dsn(''TAPE.FILE'File''')' ,
              'expdt(98000) recfm(U) blksize(32760)' ,
              'label(BLP) position('File')' ,
              'unit(AB2)' VolArg 'shr reuse' )
   address 'TSO' DynArg
   if RC <> 0 then leave File

   RC = ProcessFile()  /* FREEs InDD */
   if RC <> 0 then leave File
 end File

To my utter dismay, it dismounts the tape after each file
and I must re-mount it.

What's the ALLOCATE analogue of RETAIN?

This takes longer each time.

Do I need to code a DD statement in JCL with RETAIN for
each file?

Should I count my blessings because auto ops is at least
replying to the VSN messages.

Long ago, a colleague caused a tape volume to be mounted,
even across job boundaries.  This is outside my operator's
and my current skill set.  Is that an alternative?
SNIP

Yes.

MOUNT [devaddr],vol=(sl,volser)

Where devaddr is the device address to hold the tape
  volser  is the actual Volume Serial of the tape

I believe that this will work on an ATL -- although some others may not
like it much (particularly if you only have two heads and the system is
rather busy).

Regards,
Steve Thompson

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Multi-file tape

2009-08-14 Thread Pommier, Rex R.
Gil,

I can't help with your question about ALLOCATE equivalents, but I think
you should be able to have your operator simply issue a MOUNT command to
permanently mount the tape on a drive.  Check the system commands
manual.  S/he would have to either do an UNLOAD or VARY OFFLINE to get
the tape unmounted.
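
Something along these lines, with a made-up device address and volser just to show 
the shape of it:

   MOUNT 0B40,VOL=(SL,GIL001)     keep GIL001 mounted on 0B40 until told otherwise
   ...
   UNLOAD 0B40                    when finished (or VARY 0B40,OFFLINE)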

Rex

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Paul Gilmartin
Sent: Friday, August 14, 2009 4:44 PM
To: IBM-MAIN@bama.ua.edu
Subject: Multi-file tape

This worked OK on a virtual tape.  Now I must make
it work on a physical tape.  My EXEC says:

 do File = 0 + 1 to 99
   File = right( File, 3, 0 )
   say
   say ' File' File '='
   InDD = 'F'File
   DynArg = ( 'alloc dd('InDD') dsn(''TAPE.FILE'File''')' ,
              'expdt(98000) recfm(U) blksize(32760)' ,
              'label(BLP) position('File')' ,
              'unit(AB2)' VolArg 'shr reuse' )
   address 'TSO' DynArg
   if RC <> 0 then leave File

   RC = ProcessFile()  /* FREEs InDD */
   if RC <> 0 then leave File
 end File

To my utter dismay, it dismounts the tape after each file
and I must re-mount it.

What's the ALLOCATE analogue of RETAIN?

This takes longer each time.

Do I need to code a DD statement in JCL with RETAIN for
each file?

Should I count my blessings because auto ops is at least
replying to the VSN messages.

Long ago, a colleague caused a tape volume to be mounted,
even across job boundaries.  This is outside my operator's
and my current skill set.  Is that an alternative?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Cap software CPU utilization

2009-08-14 Thread Tommy Tsui
Hi,
Are there any tools that can cap software (such as a CA product) CPU
utilization, so that we can control the software cost?

many thanks

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Any Utilities to Archive Members of a PDS/PDSE - Load Libraries, Not Source

2009-08-14 Thread Rick Fochtman

-snip


G'day,

I have been tasked with finding a utility that can archive members of a
PDS/PDSE data set, Not the entire data set but individual members.  The
utility should be able to keep a number of copies or generations of the load
modules.

I thought I had heard of a product once a long time ago however, have lost
all reference to the product and vendor.

Thank you for your assistance.

Regards, 
Herman Stocker
 


unsnip-
Shameless plug here: go look at the ARCHIVER, in file 147 of the 
CBTTAPE. As long as the module isn't scatter-loadable, ARCHIVER will 
handle load modules, including overlays (if anyone still uses them) just 
fine. (I sweat blood getting the overlay handling right; the note list 
threw me for quite a loop until i understood how it worked.) LMOD blocks 
close to the 32,760 limit will cause a problem; the compaction routine 
sometimes makes a block larger than the input block, overflowing the 
Archive buffer.


Rick

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Joel Wolpert
You can soft cap the lpar through the HMC; or you can set up a wlm resource 
group to cap specific workloads.
- Original Message - 
From: Tommy Tsui tommyt...@gmail.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Friday, August 14, 2009 7:46 PM
Subject: Cap software CPU utilization



Hi,
Are there any tools that can cap software (such as a CA product) CPU
utilization, so that we can control the software cost?

many thanks

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Ted MacNEIL
You can soft cap the lpar through the HMC; or you can set up a wlm resource 
group to cap specific workloads.

The HMC is a hard cap.
The WLM can soft cap.
Plus, there are resource groups.

This capability has been around for years.
There is also the IBM utility for reporting on this.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread John McKown
On Sat, 15 Aug 2009, Tommy Tsui wrote:

 Hi,
 Are there any tools that can cap software (such as a CA product) CPU
 utilization, so that we can control the software cost?
 
 many thanks
 

IF you are on a z9 or z10 AND you are running z/OS 1.8 or above THEN you
can use a facility called GROUP CAPACITY. This is where you set the MSU
value for a group of LPARs to a given value using the HMC. WLM on the z/OS
systems in the LPARs in that group talks to PR/SM, which will then 
dynamically adjust the LPAR dispatching so that the MSU value is not 
exceeded for the 4 hour rolling average.

-- 
Trying to write with a pencil that is dull is pointless.

Maranatha!
John McKown

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Degraded I/O performance in 1.10?

2009-08-14 Thread Brian Westerman
Back to the original question/problem.  I'm assuming that your programmers
are not complaining that the number of I/Os or EXCPs has gone up, because
they could probably check those figures for themselves in the actual JOB
output, but rather that it feels to them like jobs that do a lot of I/O
are taking longer to run.

This could be any of several issues related to your parmlib settings or WLM
settings where you are penalizing high I/O, or could be a hardware issue
that coincided with your OS upgrade.  I couldn't even count the number of
problems that I have searched on during and after upgrades that turned out
to be something that the site's CE decided to implement during the outage.
 So don't limit your searching to z/OS 1.10 possibilities as it could very
well be a hardware issue that you had very little control over.

Check to be sure that your WLM settings have not changed in an unwarranted
manner.  This may not be an issue of everything being bad, just that some
jobs are now taking longer while a lot of others are running faster.  I
think you should probably err on the side of caution and assume that they
have a point until you can prove otherwise.  They won't believe you anyway
without proof. If you were allowed to function without proof, you would be
one of them. :)

Have you checked to be sure that your PAV settings are still there?  You may
have lost your dynamic PAV in the quest for HyperPAV.  Also, you may want to
see if your CE (IBM or other) has made changes to your RAID.  It's possible
that you may have lost some cache, or some of the features are not set as
they were previously.

Is it only certain datasets, or certain volumes (or subsets of volumes) that
appear to be affected?  For instance, is it only a few VSAM files that may
exhibit the perceived problem?  What has changed (if anything) about their
location?  Once you can quantify something concrete, it will make the job
much easier.  Once you locate some common threads you can start to zoom in
on where the issue is presenting itself and figure out what may have changed.

It's also completely possible that there may not be a problem, but
programmers, (being what they are), will need you to prove that nothing
has changed.  If you check everything and see absolutely no difference in
the jobs, then you can move into that response.

If you need to contact me offline about this, feel free to do so and let me
know what I can do to help.

Brian

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Joel Wolpert
You set the service units for the soft cap thru the hmc. That is what I 
meant.
- Original Message - 
From: Ted MacNEIL eamacn...@yahoo.ca

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Friday, August 14, 2009 9:07 PM
Subject: Re: Cap software CPU utilization


You can soft cap the lpar through the HMC; or you can set up a wlm 
resource

group to cap specific workloads.

The HMC is a hard cap.
The WLM can soft cap.
Plus, there are resource groups.

This capability has been around for years.
There is also the IBM utility for reporting on this.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Multi-file tape

2009-08-14 Thread Robert A. Rosenberg

At 16:43 -0500 on 08/14/2009, Paul Gilmartin wrote about Multi-file tape:


This worked OK on a virtual tape.  Now I must make
it work on a physical tape.  My EXEC says:

 do File = 0 + 1 to 99
   File = right( File, 3, 0 )
   say
   say ' File' File '='
   InDD = 'F'File
   DynArg = ( 'alloc dd('InDD') dsn(''TAPE.FILE'File''')' ,
              'expdt(98000) recfm(U) blksize(32760)' ,
              'label(BLP) position('File')' ,
              'unit(AB2)' VolArg 'shr reuse' )
   address 'TSO' DynArg
   if RC <> 0 then leave File

   RC = ProcessFile()  /* FREEs InDD */
   if RC <> 0 then leave File
 end File

To my utter dismay, it dismounts the tape after each file
and I must re-mount it.

What's the ALLOCATE analogue of RETAIN?

This takes longer each time.

Do I need to code a DD statement in JCL with RETAIN for
each file?

Should I count my blessings because auto ops is at least
replying to the VSN messages.

Long ago, a colleague caused a tape volume to be mounted,
even across job boundaries.  This is outside my operator's
and my current skill set.  Is that an alternative?

-- gil


To keep the tape from unmounting, use the operator MOUNT command as 
others have responded. To keep the tape from taking longer to start 
reading with each file (since it will rewind after each file), make 
the VolArg (which I assume is the equivalent of the JCL DISP) SHR 
PASS. Do a help on ALLOC to see if this is supported and if there is a 
RETAIN parm.
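
If the JCL route turns out to be simpler, a rough sketch of the DD for one file; 
the volser and file number are placeholders, and RETAIN (which asks the system to 
keep the volume mounted for later steps in the same job) is positional within the 
VOL parameter:

//FILE002  DD UNIT=AB2,VOL=(,RETAIN,SER=GIL001),
//            LABEL=(2,BLP,,IN,EXPDT=98000),
//            DISP=(OLD,KEEP),
//            DCB=(RECFM=U,BLKSIZE=32760)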


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Tommy Tsui
Can it really cap the usage for a CA product? Our software charge always
ends up overpaid each time we adjust our HMC CPU hard cap ratio...


On Sat, Aug 15, 2009 at 11:54 AM, Joel Wolpertj...@perfconsultant.com wrote:
 You set the service units for the soft cap thru the hmc. That is what I
 meant.
 - Original Message - From: Ted MacNEIL eamacn...@yahoo.ca
 Newsgroups: bit.listserv.ibm-main
 To: IBM-MAIN@bama.ua.edu
 Sent: Friday, August 14, 2009 9:07 PM
 Subject: Re: Cap software CPU utilization


 You can soft cap the lpar through the HMC; or you can set up a wlm
  resource
 group to cap specific workloads.

 The HMC is a hard cap.
 The WLM can soft cap.
 Plus, there are resource groups.

 This capability has been around for years.
 There is also the IBM utility for reporting on this.

 -
 Too busy driving to stop for gas!

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Joel Wolpert

What specific products are you trying to cap?
- Original Message - 
From: Tommy Tsui tommyt...@gmail.com

Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Saturday, August 15, 2009 12:07 AM
Subject: Re: Cap software CPU utilization



Can it really cap the usage for a CA product? Our software charge always
ends up overpaid each time we adjust our HMC CPU hard cap ratio...


On Sat, Aug 15, 2009 at 11:54 AM, Joel Wolpertj...@perfconsultant.com 
wrote:

You set the service units for the soft cap thru the hmc. That is what I
meant.
- Original Message - From: Ted MacNEIL eamacn...@yahoo.ca
Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@bama.ua.edu
Sent: Friday, August 14, 2009 9:07 PM
Subject: Re: Cap software CPU utilization



You can soft cap the lpar through the HMC; or you can set up a wlm
 resource
group to cap specific workloads.

The HMC is a hard cap.
The WLM can soft cap.
Plus, there are resource groups.

This capability has been around for years.
There is also the IBM utility for reporting on this.

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Michael W. Moss
All of the previous advice has been correct, but maybe there are other options.

Have you investigated the AutoSoftCapping product from zCOST 
Management?  This solution introduces a soft capping technique, so you 
know that you will never exceed the high-watermark MSU setting.  
Thus it is a way to safeguard that your software bill is never higher than 
expected.  It also dynamically allocates MSU resources to the workload 
requiring them the most (e.g. Production), based on user-customizable 
parameters, safeguarding mission-critical performance characteristics and SLA 
objectives.

Try the following links:

http://www.zcostmanagement.com/pages/products.php
http://www.value-4it.com/products/zCOST.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html