Re: NY Metro NaSPA Chapter Meeting: Tuesday, 20 March 2012

2012-04-02 Thread John Laubenheimer
The handouts are located as follows:

ftp://ftp.software.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_03_avoiding_logger_pitfalls.pdf
ftp://ftp.software.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_03_high_availability_zlinux_architectures.pdf
ftp://ftp.software.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_03_how_do_you_do_what_you_do_z196_cpu.pdf
ftp://ftp.software.ibm.com/eserver/zseries/zos/racf/pdf/ny_metro_naspa_2012_03_pfa_rtd_zos.pdf

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


ABCs of z/OS System Programming Volume 4

2010-11-03 Thread John Laubenheimer
I just noticed that a draft of the long missing ABCs of z/OS System
Programming Volume 4 is now available on the redbooks web site.  The final
version should be available by the end of the year.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: The new POO (Props / ProP ) is available

2010-09-07 Thread John Laubenheimer
On Tue, 7 Sep 2010 12:15:32 -0300, Clark Morris cfmpub...@ns.sympatico.ca
wrote:

On 7 Sep 2010 01:59:58 -0700, in bit.listserv.ibm-main you wrote:

if you find a solution please post it or send it to me.

Post it because I am having the same problem and I am registered.

Clark Morris

-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Paul Gilmartin
Sent: Saturday, September 04, 2010 6:16 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: The new POO (Props / ProP ) is available

On Sep 3, 2010, at 16:47, Steve Comstock wrote:

 http://www-01.ibm.com/support/docview.wss?uid=isg2b9de5f05a9d578198525
 71c500428f9a

Can't get there.  I thought I was registered, long ago.


--
For IBM-MAIN subscribe / signoff / archive access instructions,

Don't click on the link in the post.  Instead, highlight and copy the link
and paste it to the address line.  This link is too long for this forum;
hence the remainder of the link (i.e., the part on the second line) isn't
carried forward.  (And remember to remove the   characters when wrapping
the line.)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Catalog question

2010-02-18 Thread John Laubenheimer
On Thu, 18 Feb 2010 09:20:00 -0600, Darth Keller 
darth.kel...@assurant.com wrote:

 3. It rather does not depend on the number of aliases. BTW: The number is
 limited (by a field in the MCAT), but the limit is quite big (several
 hundreds AFAIR).


-- Several thousands in fact.
-- The sum of the lengths of all aliases cannot exceed 32300.

Haven't seen where anyone's mentioned that catalogs are standard KSDSs and 
are therefore limited to 4GB.  Is that not still true?

A SHARE requirement was submitted to relieve the 4GB limit for catalogs.  
(Yes ... the process does work!)  This appears to be addressed in z/OS 1.12.  
Check the announcement preview.

You also want to consider recoverability issues -  if all your production
aliases are in one catalog and there's a catalog error (and they do still
happen), this could result in ALL of your production applications being
down.  One might argue increasing the # of catalogs increases the risk of
catalog errors, but it also limits your exposure and can shorten the
amount of time being spent in recovery.

Recovery is the major reason NOT to place all of your entries in a single 
catalog.  When that one catalog breaks (it's not if, but when), think about 
how you will manage to perform recovery.  Can you do this without relying on 
some cataloged dataset somewhere?  (There are ways; but, you need to plan 
for this event.)

Another consideration may be what happens when that catalog fills, for 
whatever reason.  (Lots of dead CAs, say.)  You may not be able to catalog 
anything new, until you re-organize the catalog.  Can you afford that outage 
to your entire shop?

I once worked in a major shop where every production ds started P.** - I
honestly believe that shop was one of the reasons multi-level aliases were
designed.
dd keller

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: VTOC ERROR - Index entry not included because VTOC entry does not exist.

2010-02-11 Thread John Laubenheimer
On Thu, 11 Feb 2010 10:02:28 -0800, John Dawes 
jhn_da...@yahoo.com.au wrote:

John,
 
I tried your suggestion and the job was successful.  However, when I do a 3.4 
I get the same error message.
In the ICKDSF JOB the only messages posted for both steps were :
 
ICK00002I ICKDSF PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 0

Would I need to delete the OSVTOC before I do the BUILDIX?

--- On Fri, 12/2/10, McKown, John john.mck...@healthmarkets.com 
wrote:


From: McKown, John john.mck...@healthmarkets.com
Subject: Re: VTOC ERROR - Index entry not included because VTOC entry 
does not exist.
To: IBM-MAIN@bama.ua.edu
Received: Friday, 12 February, 2010, 4:46 AM


First change it to an OSVTOC, then back to an IXVTOC. I've had to do that 
on rare occasion in the past.

BUILDIX OSVTOC

BUILDIX IXVTOC

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets®

9151 Boulevard 26 . N. Richland Hills . TX 76010
(817) 255-3225 phone . (817)-961-6183 cell
john.mck...@healthmarkets.com . www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets® is the brand name for products underwritten and issued by 
the insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life 
Insurance Company®, Mid-West National Life Insurance Company of 
TennesseeSM and The MEGA Life and Health Insurance Company.SM

 -Original Message-
 From: IBM Mainframe Discussion List 
 [mailto:ibm-m...@bama.ua.edu] On Behalf Of John Dawes
 Sent: Thursday, February 11, 2010 11:35 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: VTOC ERROR - Index entry not included because VTOC 
 entry does not exist.
 
 I notice that when I do a 3.4 via TSO on VOLUME PROD01 I 
 receive the error message 'Incomplete VTOC List'.  To get 
 more info I hit PF1 and received the message 'Index entry not 
 included because VTOC entry does not exist'.  I checked the 
 contents of the volume and it does show SYS1.VTOCIX.PROD01.  
 I executed the ICKDSF utility BUILDIX IXVTOC, but it was unsuccessful:
 ICK31505I 9B3F VTOC FORMAT IS CURRENTLY IXFORMAT, REQUEST REJECTED
 ICK31515I 9B3F BUILDIX COMMAND FAILED.
 
 I verified the volume status via ISMF and the INDEX STATUS 
 is ENABLED.
 
 Is there anything else I can try to fix this problem?
  
 Thanks to you all in advance.
 
 
 
 
       
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
 Search the archives at http://bama.ua.edu/archives/ibm-main.html
 
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

Don't try deleting the OSVTOC ... you'll be sorry if you actually succeed!  
Speculating ... this sounds like you may have run out of space in the index, 
and the rebuild is not successful.  You need to delete the index and create a 
new/larger one.

To delete:

//BUILDOS  EXEC PGM=ICKDSF,REGION=4M,PARM='NOREPLYU'
//SYSPRINT DD  SYSOUT=*
//DISK     DD  DISP=OLD,UNIT=SYSALLDA,VOL=SER=volume
//SYSIN    DD  *
  BUILDIX OSVTOC DDNAME(DISK) PURGE
/*

To rebuild:

//BUILDIX  EXEC PGM=ICKDSF,REGION=4M,PARM='NOREPLYU'
//SYSPRINT DD  SYSOUT=*
//DISK     DD  DISP=OLD,UNIT=SYSALLDA,VOL=SER=volume
//INDEX    DD  DISP=(NEW,KEEP),DSN=SYS1.VTOCIX.volume,
//             UNIT=SYSALLDA,VOL=SER=volume,
//             SPACE=(TRK,more)       no secondary ... more = a larger value than before
//SYSIN    DD  *
  BUILDIX IXVTOC DDNAME(DISK)
/*

Make sure that you vary the disk device offline to all MVS images other than 
the one where you plan to run the jobs.  (This depends on z/OS release; but 
varying offline always works.)
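For example (assuming the systems are in the same sysplex so ROUTE works; 9B3F 
is just the device number from your messages), something like this, issued from 
or routed to the other images, should do it:

  RO *OTHER,V 9B3F,OFFLINE

and, once the rebuild is done:

  RO *OTHER,V 9B3F,ONLINE

If the images aren't in the same sysplex, issue the V 9B3F,OFFLINE / 
V 9B3F,ONLINE commands on each of the other systems individually.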

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

Re: VTOC ERROR - Index entry not included because VTOC entry does not exist.

2010-02-11 Thread John Laubenheimer
My thought is that the index ran out of space, and either didn't issue a 
message, or you missed it.  The BUILDIX OSVTOC PURGE will convert to an 
OSVTOC and delete the existing index.  The following JCL will allocate a new 
index (size of your choosing), and convert it back to indexed format.  The 
OSVTOC itself is described by the format 4 DSCB (plus the other DSCBs along 
with it); there is no interface to delete it.  (Other than ICKDSF INIT, that 
is.)  It is not a separate dataset.  If you actually could delete this, you 
would have essentially deleted all datasets on the volume, and the BUILDIX 
command would have nothing to work with.  Now, since reallocating the index 
will probably result in the index being located in a different place on the 
volume, you need to get other MVS images out of the picture until the rebuild 
is complete by varying the device offline to the other MVS images.  Once 
complete, you can vary the device back online to the other images.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: PDS vs. PDSE

2010-02-10 Thread John Laubenheimer
On Wed, 10 Feb 2010 11:46:57 -0800, John R. Ehrman (408-463-3543 T/543-
) ehr...@vnet.ibm.com wrote:

PDSEs have been available for a long time, and provide many
advantages over PDSs. Why are people reluctant to use PDSEs?
John Ehrman

--
Mostly, this is FUD (Fear, Uncertainty & Doubt).  PDSEs developed a bad 
reputation due to problems which have been corrected by service (which has 
been included in the base of the latest z/OS releases).  A bad reputation is 
difficult to fight off, especially when upper management gets to have its 
say.  There is no valid reason to avoid PDSEs today.  You still need to keep 
on top of your service, though.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: MOUNT COMMAND FAILING

2010-02-01 Thread John Laubenheimer
On Mon, 1 Feb 2010 10:04:00 -0800, John Dawes 
jhn_da...@yahoo.com.au wrote:

I am trying to change the status of an online volume.  It is mounted as 
PRIVATE and I want to change it to PUBLIC.  I issued the following mount 
command:
M A3FD,VOL=(SL,WRKC14),USE=PUBLIC 
 
However, it failed due to a :
ACF9CCCD USERID INFSTC   IS ASSIGNED TO THIS JOB - MOUNT 
IEF403I MOUNT - STARTED - TIME=12.14.07  
IEF453I MOUNT - JOB FAILED - JCL ERROR - TIME=12.14.07   
$HASP395 MOUNT    ENDED  
IEE134I MOUNT COMMAND DEVICE ALLOCATION ERROR    

I varied the vol offline and tried the mount command again, to no avail.  
 
Anybody spot my error?
 
Thanks in advance.

Try:

M /A3FD,VOL=(SL,WRKC14),USE=PUBLIC   --- note the slash

In its infinite wisdom, MVS thinks that A3FD is a system esoteric device 
name.  Originally, the system only allowed for 3-digit addresses; so, when 
4-digit addresses came around, they conflicted with the existing rules for 
esoterics.  If you want to specify a 4-digit address (anywhere), you need to 
precede the address with a slash.  Also note that you can specify an esoteric 
device name in place of the address on the mount command; such as:

M SYSDA,VOL=(SL,WRKC14),USE=PUBLIC -or-
M SYSALLDA,VOL=(SL,WRKC14),USE=PUBLIC 
  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: PAGE datasets -- few large or more small

2010-01-20 Thread John Laubenheimer
The answer partly depends on whether (or not) you have PAV support in your 
DASD hardware.  Last I knew, only dynamic PAVs will provide a performance 
benefit if you have multiple page datasets on a single volume (UCB); static 
PAVs will not, and hyper-PAV support is still on the drawing board.  IBM has 
recommended a minimum of 4 locals (or 4 paging paths).

Now, for most environments, paging is not much of an issue today ... storage 
is plentiful and cheap.  Issues can arise when an SVC dump occurs.  The SVC 
dump is first written to a dataspace.  This may cause frames from other 
applications to get paged out.  (The system will probably go disabled for some 
time around here ... not exactly sure when.  You need a frozen picture of the 
system to be included with the dump.)  Once the dataspace is populated, the 
system will begin processing again, and the dataspace is written to the dump 
datasets.  Storage will then become available for the frames that were paged 
out to make room for the dataspace.  (How much room?  It depends ... think 
of a large CICS region using a multi-gigabyte DB2 region ... you get the idea.)

You really want this to happen quickly and non-disruptively; this means many 
page datasets and many ways to get to them.

So, unless you have dynamic PAVs, lots of locals is still better.  DS/8000s 
will let you carve up the DASD however you see fit.  (I like 3390-2s for 
locals ... YMMV.  I have hyper-PAVs and z/OS 1.9.  I don't think that the 
dynamic PAV support is yet available for hyper-PAVs ... IBM?)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: IEFU83 SDSSMF83 module

2009-12-03 Thread John Laubenheimer
On Thu, 3 Dec 2009 09:04:17 -0600, Mary Anne Matyaz 
maryanne4...@gmail.com wrote:

Hello group. I just installed 1.11 and I have some hincky SMF problems, which
I'm leaning toward blaming on an SMF IEFU83 exit.

Here are some of the random errors we are getting:
DFHST0103 CIAPZZAB An SMF error has occurred with return code X'14'
IST435I UNABLE TO RECORD ON TUNSTATS FILE, CODE= 20
TMNT010E SMF WRITE FAILED - SMFWTM RETURN CODE IS 0014
CSQW133E MQT2 CSQWVSMF - TRACE DATA LOST, SMF  NOT ACCESSIBLE
RC=14


D PROG,EXIT,EN=SYSSTC.IEFU83,DIAG
CSV464I 07.18.34 PROG,EXIT DISPLAY 011
EXIT SYSSTC.IEFU83
MODULE    STATE  EPADDR    LOADPT    LENGTH    JOBNAME
IEFU83  A   888367C0      *
SDSSMF83A   96EDA300      *


I see that the module's entry point address is 96EDA300, and IBM QA
told me to go into IPCS browse to look at that and see if the eye catchers
indicate whose module it is, but IPCS browse of active says the storage is not
available.

I also tried taking a console dump of NET and SMF, and couldn't find the
storage in there either.

Does anyone recognize the module and know to whom it might belong? If not,
any suggestions on  how to browse the storage?

Thanks!
Mary Anne

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

Just a guess here, but I might think that SDSSMF83 belongs to some vendor 
product from SDS SOFTWARE.  This exit is (probably) dynamically installed in 
your system when this product initializes.  Also, you should be checking the 
code at address 16EDA300, not 96EDA300.  The high-order bit actually 
indicates that this is a 31-bit address, not a 24-bit address.
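(As a worked example of that last point, nothing here is specific to this exit: 
X'96EDA300' is just X'16EDA300' with the high-order bit turned on.  Turn bit 0 
off, i.e. AND the address with X'7FFFFFFF', and you get X'16EDA300', which is 
the storage address to actually browse in IPCS.)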

Or not .

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: criteria during PRIMARY SPACE MANAGEMENT (PSM)

2009-12-03 Thread John Laubenheimer
This is from the z/OS V1R9.0 DFSMShsm Storage Administration Guide 
(SC35-0421-06), page 65.  (Other releases may have changed the manual 
name/number and/or page.)  This discussion is for tape mount management 
datasets ... the description of such is not that clear; but, it really means 
direct-to-ML2 migration(s).  These behave differently.  Note the Trigger 
Threshold description here.  This may (or may not) explain what you are 
seeing and why.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLM Imp1

2009-11-24 Thread John Laubenheimer
On Tue, 24 Nov 2009 02:23:16 -0600, R Hey sys...@yahoo.com wrote:

Hi,

It's been recommended NOT to use Imp 1 in WLM.

My client has been using Imp1 for online CICS.

I’m planning to change it to use Imp2 by changing all Imp to Imp+1.

Can you think of any potential problems/issues?

TIA,
Rez

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

One thought here is that, when your management comes to you and says to 
increase priority (meaning throughput) of an IMP 1 address space, your only 
option will be to decrease the importance of all other IMP 1 address spaces.  
(It's not a matter of IF; it's a matter of WHEN!)  A rather nasty undertaking!  
If you leave some room at the top, you will have room to increase the 
importance level of a single address space.  This is not to say that you 
shouldn't have IMP 1 service classes; just leave a little wiggle room for when 
that time comes.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Beating a dead horse but

2009-11-20 Thread John Laubenheimer
On Fri, 20 Nov 2009 12:34:12 -0600, Tim Hare 
tim.h...@ssrc.myflorida.com wrote:

In a 2008 thread on people using IEFBR14 to do deletes there was this:

Indeed, how should Allocation know whether the program about to execute
wants to do something with the dataset(s) before deleting it/them?
Perhaps Allocation could be educated to issue HDELETE iff the dataset
is migrated *AND* DISP=(,DELETE) *AND* PGM=IEFBR14.

I have a set of application folks who use this method for tape datasets
(reusing the dataset names, rather than making them unique for whatever
reason), and of course no one wants to change the batch application 
because
of testing requirements, busy with other things, etc.

I believe HSM installs a catalog locate exit, as does ABR, to handle
the 'dynamic recall' .

Why can't the exit check for this specific set of conditions:

1. jobstep program is IEFBR14
2. At offset 0 of the location of IEFBR14 in storage, we have '1BFF07FE' :  SR
15,15 followed by BCR 15,14
3. Status is OLD or MOD
4. normal disposition AND conditional disposition are DELETE

If all of the above are met, then do the equivalent of HDELETE for the dataset

Seems to me that verifies in most cases that we're dealing with the real
IEFBR14 and not a replacement (although I grant it's possible to leave those
bytes in place by coding an IEFBR14 replacement with a different entry point,
I believe it in practice to be pretty unlikely), and that the dataset is 
intended
to be deleted no matter what (both dispositions are DELETE).


The workaround for now is to try to make sure the dataset to be deleted is
either retained until the delete reference (often the next day), OR to do
HRECALLs ahead of the job.

As I said, this is probably beating a dead horse but I didn't see this specific
proposal mentioned.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

Have you checked out z/OS 1.11 yet?  EXEC PGM=IEFBR14 with 
DISP=(OLD,DELETE) will no longer recall a dataset if migrated.  The dataset 
will be deleted (via HDELETE or equivalent interface) directly from the 
migration volume, without an intervening recall.  I believe that this is an 
installation option if you don't want this to happen.
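If I recall correctly (please verify against the MVS Initialization and Tuning 
Reference for your release before relying on it), the installation switch is on 
the SYSTEM statement in the ALLOCxx parmlib member, something like:

SYSTEM IEFBR14_DELMIGDS(NORECALL)

NORECALL deletes the migrated dataset in place; LEGACY restores the old 
recall-then-delete behavior.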

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Setting the Date/Time on One LPAR

2009-11-20 Thread John Laubenheimer
On Fri, 20 Nov 2009 14:50:11 -0600, Mark Zelden 
mark.zel...@zurichna.com wrote:

On Fri, 20 Nov 2009 15:29:01 -0500, Baraniecki, Ray
ray.baranie...@morganstanley.com wrote:

Is it possible to isolate one LPAR in a CEC and set the date/time to a
different value than the other LPARs in the same CEC?


Yes.  This has been possible since the 9672 G5 (I think... could
have been prior).  This was needed for Y2K testing.  Of course
you have to have your CLOCKxx member set up to not use
the sysplex timer.

Search the archives or google for Y2K and TOD.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at 
http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

While setting a different time is possible, it may not be the best of ideas.

Shared DASD concerns may take precedence.  If a system with a higher date 
touches any dataset(s), HSM (or an equivalent package) may start behaving in 
an unexpected way.  That dataset's last reference date would be changed to 
a new (higher) date, and impact any incremental backups in an undesired 
manner.  (Also, GRS will not allow 2 systems with different dates; so, 
integrity is out the window.)

Think it through first.  A completely isolated LPAR should work fine.  Concerns 
will arise if anything is shared.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: High CPU / channel ovhd w/3592 and DFDSS

2009-11-19 Thread John Laubenheimer
Two thoughts here.

1) Your new tape drives accept data at a faster rate than your old tape 
drives.  Therefore, you might expect that DFHSM is reading the DASD at a 
faster rate; hence, DFHSM gets dispatched more frequently.  This would 
increase the apparent CPU utilization of DFHSM.  However, since your backup 
completes in a shorter amount of time, the average CPU utilization of DFHSM 
should remain the same (or similar).

2) Check your HSM parameters.  Use SETSYS TAPEHARDWARECOMPACT to 
notify DFHSM that your tape drives have this feature.  And, check your 
SETSYS COMPACT parameter:
SETSYS COMPACT(DASDMIGRATE NOTAPEMIGRATE DASDBACKUP 
NOTAPEBACKUP) 
  /* USE COMPACTION FOR:*/
  /*MIGRATION TO DASD   */
  /*BACKUPTO DASD   */
  /* DO NOT USE COMPACTION FOR: */
  /*MIGRATION TO TAPE   */
  /*BACKUPTO TAPE   */
You really don't need DFHSM to compact your data; the tape hardware does 
this for you.  (Of course, if you do compact your tape backup in software, 
you're doing much more of this operation per second, since you are processing 
more data.)

Of course, this may/may not be your problem; and, as always, YMMV.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Change system date

2009-11-19 Thread John Laubenheimer
On Thu, 19 Nov 2009 12:52:50 -0600, Raquel Calvo Olmos 
raquel_ca...@everis-outsourcing.com wrote:

Hi there,

We are doing some tests with date change in our systems.

Has anyone tried, through JCL, etc., to change the system date?

We would cheat the system to indicate that we are in another year, for
example, 2010.

Thanks in advance.

Regards,
Raquel Calvo.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

This was done on a regular basis about 10 years (or so) ago, when everybody 
was testing for Y2K.

What you are proposing is not a good idea.  Various system components may 
not react well when the clock is turned back.

There were many date and time testing packages available for Y2K testing ... 
HOURGLASS 2000, from a company named MAINWARE, was just one of these.  
I'm sure that a few might still be available.  You can check the internet 
(google) for their web site.  Else, search on Y2K TESTING MAINFRAME for 
other related software packages.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Change system date

2009-11-19 Thread John Laubenheimer
On Thu, 19 Nov 2009 14:08:09 -0600, Mark Zelden 
mark.zel...@zurichna.com wrote:

On Thu, 19 Nov 2009 13:06:28 -0600, John Laubenheimer
jlaubenhei...@doitt.nyc.gov wrote:

On Thu, 19 Nov 2009 12:52:50 -0600, Raquel Calvo Olmos
raquel_ca...@everis-outsourcing.com wrote:

Hi there,

We are doing some tests with date change in our systems.

Has anyone tried, through JCL, etc., to change the system date?

We would cheat the system to indicate that we are in another year, for
example, 2010.

Thanks in advance.

Regards,
Raquel Calvo.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

This was done on a regular basis about 10 years (or so) ago, when 
everybody
was testing for Y2K.

What you are proposing is not a good idea.  Various system components may
not react well when the clock is turned back.

There were many date and time testing packages available for Y2K 
testing ...
HOURGLASS 2000, from a company named MAINWARE, was just one of 
these.
I'm sure that a few might still be available.  You can check the internet
(google) for their web site.  Else, search on Y2K TESTING MAINFRAME for
other related software packages.


I though HourGlass was from Princeton Softech.  Anyway, IBM owns it now.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at 
http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

I just went into the SAMPLIB dataset ... it said MAINWARE there.  Oh well ... 
chained acquisitions!  I guess you can't tell who owns something anymore 
without a scorecard.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Use of RETAIN

2009-09-10 Thread John Laubenheimer
The originally specified JCL isn't exactly correct.  If you were to call for a 
2nd tape sometime before reaching file 10, the system would try to place that 
file on the 1st (full) tape, due to the way this refer-back is coded.  Also, 
if allowed, there would be gaps in the file numbers (such as file 10 following 
file 8 with no file 9 present).  The correct specification would be to refer 
back to the DD for the previous file on the tape.

Original:

//TAPE1    DD  DSN=CHGE.VOL1..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(1,SL),
//             VOLUME=(,RETAIN,SER=),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)
//TAPE2    DD  DSN=CHGE.VOL2..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(2,SL),
//             VOLUME=(,RETAIN,REF=*.STEP01.TAPE1),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)
 .  .  .
//TAPE10   DD  DSN=CHGE.VOL10..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(10,SL),
//             VOLUME=(,RETAIN,REF=*.STEP01.TAPE1),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)

Corrected:

//TAPE1    DD  DSN=CHGE.VOL1..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(1,SL),
//             VOLUME=(,RETAIN,SER=),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)
//TAPE2    DD  DSN=CHGE.VOL2..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(2,SL),
//             VOLUME=(,RETAIN,REF=*.STEP01.TAPE1),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)
 .  .  .
//TAPE9    DD  DSN=CHGE.VOL9..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(9,SL),
//             VOLUME=(,RETAIN,REF=*.STEP01.TAPE8),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)
//TAPE10   DD  DSN=CHGE.VOL10..MAGSTAR(+1),DISP=(NEW,CATLG,DELETE),
//             UNIT=3590-1,LABEL=(10,SL),
//             VOLUME=(,RETAIN,REF=*.STEP01.TAPE9),
//             DCB=(MODEL,RECFM=U,LRECL=0,BLKSIZE=32760)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: IDC3009I RC=110

2009-08-18 Thread John Laubenheimer
Also, check SETROPTS for the PROTECTALL option.  If you had a PAGE.* 
profile, which only covers the 2nd level, a dataset named PAGE.A.another_level 
is NOT protected, and RACF (actually DFSMS) would fail any access to the 
dataset.  When you created PAGE.**, you then covered any number of levels 
after PAGE., and RACF/DFSMS would then allow access.

And, as previously stated (I think), when you rename a RACF protected 
dataset, the dataset name that it is renamed to must also be protected.  
(Level of protection doesn't really matter ... just covered by a profile.)
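To check and set this up, something along these lines should do (a sketch 
only; it assumes generic profile processing is already active for the DATASET 
class, and SETR LIST will show you whether PROTECTALL is in effect):

ADDSD 'PAGE.**' UACC(NONE)
SETR GENERIC(DATASET) REFRESH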

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: 0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA SET

2009-08-14 Thread John Laubenheimer
Try allocating any dataset on the existing SYSRES volume.  If there are any 
datasets in transition (partial SMS conversion/deconversion), you won't be 
able to allocate anything.

Most likely, your ACS routine (STORCLAS) is assigning a storage class to these 
datasets while trying to clone the volume.  If this is the case, then you need 
to execute ADRDSSU with the ADMINISTRATOR, BYPASSACS(**) and 
NULLSTORCLAS keywords.  All require some appropriate level of RACF (or other 
security package) authority to use.
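Something along these lines is what I have in mind (a sketch only ... the 
volsers oldres/newres are placeholders, and you will want to review the other 
COPY keywords for your environment):

//CLONE    EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  COPY DATASET(INCLUDE(**))    -
       LOGINDYNAM((oldres))    -
       OUTDYNAM((newres))      -
       ADMINISTRATOR           -
       BYPASSACS(**)           -
       NULLSTORCLAS
/*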

On Thu, 13 Aug 2009 17:57:31 -0500, Glen Gasior 
glen.manages@gmail.com wrote:

*
I tried cloning a SYSRES volume for the first time at this site and received 
this
message.

0ADR713E (001)-ALLOC(01), UNABLE TO ALLOCATE SMS MANAGED DATA SET

My suspicion is that there are 8 SMS datasets on the non-sms SYSRES.

I am hoping someone has encountered this before and can recommend how to
correct the situation. I would like to correct the status of the datasets 
before
the ADRDSSU clone, but if the best way to go is correct it in the cloned copy 
I
imagine that will suffice for now.

I imagine an IDCAMS alter might get the SMS information out of the catalog
entry, but I am also imagining there is something in the VVDS or VTOC that
would need to be corrected.

Thanks for any help.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Info on Setting Up an Lpar for Flashcopy

2008-09-05 Thread John Laubenheimer
Assuming that you are planning to flash all of your production DASD at one 
time, you can do something like this:

 PRODUCTION LPAR  BACKUP LPAR

 ------
 | || |
 |  PRODUCTION ||  PRODUCTION |
 | DASD|| DASD|
 | || |
 |  OFFLINE=NO || not included|
 |   in IODF   ||   in IODF   |
 | || |
 ------

 ------
 | || |
 |BACKUP   ||BACKUP   |
 | DASD|| DASD|
 | || |
 | OFFLINE=YES ||  OFFLINE=NO |
 |   in IODF   ||   in IODF   |
 | || |
 ------

(You may have to cut this out and paste it into WORD with a COURIER NEW 
font to see this correctly.)

The production LPAR would FLASH the OFFLINE=NO DASD to the OFFLINE=YES 
DASD.  You need both sets of addresses genned somewhere to do the 
flashcopy from MVS.  (It can also be done from the DS/8000 HMC; you really 
don't want to go there!)

This is just one method; there are others that will work just as well.
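One way to drive the flash itself from MVS is a DFSMSdss full-volume copy per 
pair, along these lines (a sketch only ... PRD001/BKP001 are placeholder 
volsers, and you should check the FASTREPLICATION and COPYVOLID behavior 
against the DFSMSdss manual for your release):

//FLASH    EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  COPY FULL                        -
       INDYNAM(PRD001)             -
       OUTDYNAM(BKP001)            -
       FASTREPLICATION(REQUIRED)   -
       ADMINISTRATOR               -
       COPYVOLID                   -
       PURGE
/*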

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Non-SMS GDG Issue

2008-06-26 Thread John Laubenheimer
I wouldn't use IEHPROGM for anything any more.  IDCAMS is the way to go.
  
Also, for tapes, you really should be including the file-sequence number in the 
catalog entry.

 DEFINE NONVSAM(NAME(PSI.TRN.SI0095C.G2671V00) -
DEVICETYPES(3490) VOLUMES(183323) FSEQN(1))

Note: the FSEQN of 1 assumes that it is the first file on the 
standard-labeled 
tape.

Note (2): It looks OK from what you posted, but make sure that your O's 
(letters) and 0's (zeros) are correct.  This could potentially lead to the 
dataset name being cataloged as NOT part of the proper GDG.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Moving RACF databases

2008-06-11 Thread John Laubenheimer
The official supported method to move a RACF database is by using the 
utility IRRUT400.  This utility provides the proper ENQ and locking mechanisms 
to prevent updates to the database while the copy is in progress.  And, it's 
simple enough to use.

However, if you can afford the downtime, and have a 2nd completely 
independent operating system image, the database can be moved using 
FDR/DFSMSdss and/or IEBGENER/ICEGENER/SYNCGENR.  This is 
officially unsupported, but works, provided the databases are not in use (in 
any way) while the copy is in progress.

My opinion, always use IRRUT400.
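A minimal IRRUT400 sketch (the dataset names are just examples; the output 
dataset must be pre-allocated with appropriate attributes, and you still have 
to switch over to the new copy afterwards, e.g. via RVARY or the dataset name 
table):

//RACFCOPY EXEC PGM=IRRUT400,PARM='LOCKINPUT'
//SYSPRINT DD  SYSOUT=*
//INDD1    DD  DISP=SHR,DSN=SYS1.RACF.PRIMARY
//OUTDD1   DD  DISP=OLD,DSN=SYS1.RACF.NEWPRIM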

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Workstation compliance

2008-05-28 Thread John Laubenheimer
IBM is well aware of this issue with VISTA.

The PDF files are a good alternative.  If you plan on using the PDF files, make 
sure that you install the IBM Advanced Linguistic Search facility plug-in for 
ACROBAT.  This is located on the tools CD, in the PLUGINS directory, with a 
name of HCLXINST.EXE.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Question on defining a profile to facility class in RACF

2008-05-20 Thread John Laubenheimer
On Tue, 20 May 2008 13:14:05 -0500, David Day [EMAIL PROTECTED] 
wrote:

rdef facility abc.addrspac.** uacc(none).

RACF does not gripe about this.  Says everything is fine, as far as I can tell.

next I execute

setropts generic(facility) refresh

I then execute a PERMIT as follows:

permit abc.addrspac.dad* access(read) class(facility) id(dad) 

And get the following:

ICH06004I ABC.ADDRSPAC.DAD* NOT DEFINED TO RACF   

You created a profile abc.addrspac.** in the facility class; not 
abc.addrspac.dad*.  You need to issue your permit against abc.addrspac.**, 
or create a profile abc.addrspac.dad* in the facility class.

In the absence of profile abc.addrspac.dad*, any RACHECK against this profile 
will resolve itself to profile abc.addrspac.**; that access list is what is 
being used.  This access list may be more general than what you want.  
(Eventually, you might want to issue a RACHECK against a profile called 
abc.addrspac.mom*, and have a different access list for this profile.)  So, you 
just might want to create a more specific profile, and populate that profile 
with whatever access list you want.
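In other words, something like this (the profile and user names are just the 
ones from your example; if your installation RACLISTs the FACILITY class, you 
would refresh with SETROPTS RACLIST(FACILITY) REFRESH instead):

rdef facility abc.addrspac.dad* uacc(none)
permit abc.addrspac.dad* class(facility) id(dad) access(read)
setropts generic(facility) refresh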

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: 3 Page Datasets on one Volume

2008-05-07 Thread John Laubenheimer
On Wed, 7 May 2008 14:58:21 -0500, Staller, Allan [EMAIL PROTECTED] 
wrote:

It depends.

In the old SLED days this could be performance crippling, especially if
there was a decent paging rate(remember only one actuator/volume). With
little or no paging this would not be a large problem.

Fast forward 20 years:

Data is mapped transparently to many many small drives, accessed by many
actuators. Again if there is no significant paging, no problem. If
significant paging occurs, this MAY BE a problem, maybe not.

Having said all of the above, my distinct preference would be for one
page ds the size of the 3 currently occupying the volume.

HTH,


snip
I noticed again today that we have 6 local page datasets on 2 volumes -
each a 3390-3.  I'm sure they have PAVs, or the equivalent on the dasd,
but I'm just wondering if that is a good practice or not.
/snip


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
It is best to have a minimum of four (4) local page datasets; more is better.  
The heavy hitter that gets lost in the mix is an SVC dump.  This can put quite 
a load on the paging configuration when it needs to page-in a lot of inactive 
frames, just to write them out to a dump dataset.  The system will be disabled 
while the copy process copies this to a dataspace (which, in turn, can stress 
the real storage configuration, and thus cause more page-out activity).  You 
really want to be disabled for as short a time as possible.  So, more 
concurrent paging I/Os is better; the way to get that is more local datasets.  
With PAVs, they need not be on different volumes.  Without PAVs, separate 
volumes are strongly recommended.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Forms Control Buffer (FCB)

2008-05-02 Thread John Laubenheimer
Try looking at the MVS DFP Utilities manual, SC26-7414.  The utility that you 
need is IEBIMAGE.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SHARE Future Conferences

2008-04-22 Thread John Laubenheimer
I don't have the dates, although they are in line with the normal SHARE 
conference dates.  For 2010, the locations that have been stated are:

Winter (S114) - Seattle, WA
Summer (S115) - Boston, MA

Things can (and often do) change.  These locations were reported by the BOD 
at Team Time in San Diego (Summer 2007/S109).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: JES2 DD Concatenation issue

2008-03-28 Thread John Laubenheimer
Just curious: can the PROCLIB concatenation contain PDSEs?

Yes.  That eliminates the COMPRESS issue or the problem you might
run into if the library takes an additional extent.

Yes, you can use PDSEs in the JES2 PROCLIB concatenation.  LINKLIST too.  
But, not such a great idea, IMHO.

Maybe I'm wrong here (and I could be), but I somehow remember that the 
empty space in a PDSE is not reclaimed until the dataset is closed.  Now, you 
can't easily close a LINKLIST dataset; system shutdown doesn't quite do it.  
Removing it from the active LINKLIST probably will.  A JES2 PROCLIB is easier 
to close; a temporary switch to a new concatenation usually works.

However, the caveat here is that it must be closed on ALL systems sharing it, 
AT THE SAME TIME!  Otherwise, the free space clean operation will not take 
place.

I don't think that this has (yet) been addressed by IBM development.

Additional extents are not as large of a problem with PDSEs as they are with 
PDSs.  If a PDS (in a PROCLIB or a LINKLIST) takes a new extent, the only 
address space to know about it is the initiator in which the copy job executed 
(or the TSO user that took the new extent).  Neither JES2 nor the LINKLIST 
will be aware.  With a PDSE, all of this activity takes place within the PDSE 
address space.  This address space can communicate this information to other 
images within the same SYSPLEX.  All JES2s and LINKLISTs become aware.

Corrections, if necessary, to the above are welcome!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: JES2 DD Concatenation issue

2008-03-28 Thread John Laubenheimer
If JES2 is up, then:

o Submit a job with:

  - STEP1 IDCAMS ALTER NAME

  - STEP2 IEFBR14 with DISP=OLD on the PROCLIB catenands.

o The job waits on PROCLIB ENQ

o Stop JES2

o Start JES2

o JES2 waits on PROCLIB ENQ

o The RENAME job completes freeing the ENQs

o JES2 allocates PROCLIB and continues.

In the above scenario, you would never get JES2 to shut down cleanly while a 
job is still executing.  (It would need to be an STC running SUBSYS=MASTER.)  
And, even if this did work, you better hope that the job is successful; else, 
you are left without the PROCLIB and JES2 won't start!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Paging Subsystem -- Comfortable Size and Configuration?

2008-03-12 Thread John Laubenheimer
As a Rule of Thumb (and thumbs come in various shapes and sizes ... yours 
may vary):

1 PLPA dataset
1 COMMON dataset
4 (or more) LOCAL datasets

LOCAL datasets should peak at no more than 30 (or so) percent utilization, 
else the block paging algorithm starts to have problems (the probability of 
finding a block with sufficient contiguous free slots drops below .5)

With DYNAMIC (only) PAVs, you may be able to get away with 2 LOCAL page 
datasets, since you will get 2 addresses assigned to each dataset

Best to have the PLPA and COMMON datasets on separate volumes, 
sufficiently sized.  Next best (for a no-PAV environment) is a 1 cylinder PLPA 
and a COMMON dataset sized somewhere around 2GB (2000 cylinders is 
probably OK).  With PAVs, we need someone else to chime in here, since you 
probably don't want to assign 2 PAV addresses to a 1 cylinder PLPA dataset!

Note that HYPER-PAVs don't play in the paging subsystem yet.  Still awaiting 
word on this issue from IBM (Jim Mulder, maybe?).

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: CATALOG Quesiton

2008-03-12 Thread John Laubenheimer
First, the good news.  A catalog can have up to 123 (if I recall correctly) 
extents.  If you haven't reached this limit, you should be OK.

If you are at (or near) this limit, it gets a bit fuzzy.  If, by creating a 
new entry in the catalog, you cause a CI split, which in turn causes a CA 
split, you may need to take another extent.  It only takes one entry to cause 
the CA split; thus, the inability to create this entry makes it look like the 
catalog is full.  On the other hand, there may be room for many thousands of 
catalog entries which fit into existing CIs and/or CAs.  It depends on the 
name of the entry to be cataloged (and, usually, the high-level qualifier).

Also, if I recall correctly, a catalog is limited to 4GB in size (the same as 
for any other non-extended format KSDS; catalogs do not have an extended 
format counterpart).

You should allocate catalogs with secondary extents.  And, if you reach some 
threshold (determined by your installation ... say 100 extents), you should 
schedule a re-org.  A catalog re-org really doesn't require a stand-alone 
environment.  You just don't want things initiating while the export and import 
are running.  Export the catalog, close and unallocate the catalog, delete and 
re-define the catalog (larger?), and import (into empty) the catalog; it's 
rather quick.
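The IDCAMS sequence is roughly as follows (a sketch only ... UCAT.PROD01, 
CATVOL and the sizes are placeholders, and you should verify the keywords and 
the alias handling against the Managing Catalogs manual for your release):

//REORG    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//PORTDS   DD  DSN=BACKUP.UCAT.PROD01,DISP=(NEW,CATLG),
//             UNIT=SYSALLDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD  *
  EXPORT UCAT.PROD01 OUTFILE(PORTDS) TEMPORARY
  DELETE UCAT.PROD01 USERCATALOG RECOVERY
  DEFINE USERCATALOG(NAME(UCAT.PROD01) ICFCATALOG -
         VOLUME(CATVOL) CYLINDERS(100 20))
  IMPORT INFILE(PORTDS) OUTDATASET(UCAT.PROD01) INTOEMPTY
/*

The close/unallocate step above would be F CATALOG,CLOSE(UCAT.PROD01) and 
F CATALOG,UNALLOCATE(UCAT.PROD01) on each sharing system first.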

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFSMSDSS V1R07.0 and the CONCURRENT parameter

2008-03-05 Thread John Laubenheimer
All of your suggestions make sense.  However, you really should address the 
main culprit here, namely the HW*.** in your include.  This, effectively, 
causes DSS to search all of your catalogs via SVC 26.  This approach, as I 
recall from SHARE, causes catalog reorientation, security processing, 
serialization, etc., for each SVC 26 call.  This is an expensive process, 
which, among other things, ties up the DASD cache for the duration of the 
backup process when concurrent is specified.  If you can reduce the high-level 
qualifiers specified (i.e., HW* replaced by HW1, HW2, ..., HWzz), this 
would help.  Also, segregating the include into multiples, then running them 
in parallel (using multiple output tapes, of course), would also help.
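For example, one of the parallel jobs might look something like this (a sketch 
only; the qualifiers, tape dataset name and unit are placeholders, and the 
remaining DUMP keywords are up to your own standards):

//DUMP1    EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD  SYSOUT=*
//TAPE     DD  DSN=BACKUP.HW1.HW2,DISP=(NEW,CATLG),UNIT=3590-1
//SYSIN    DD  *
  DUMP DATASET(INCLUDE(HW1.**,HW2.**))  -
       OUTDDNAME(TAPE)                  -
       CONCURRENT
/*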

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: IBM PR: Almost Introducing the Extraordinary New XXXXX

2008-02-13 Thread John Laubenheimer
At SHARE in Orlando, MVSE Sessions 2832, 2833 and 2836 on Wednesday 
morning have abstracts which tend to make one think that IBM will be talking 
about something new.  So, keep an eye on these slots.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: PLPA and COMMON PAGESPACE Size

2007-09-13 Thread John Laubenheimer
On Wed, 12 Sep 2007 20:01:29 -0400, Jim Mulder [EMAIL PROTECTED] 
wrote:

IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 
09/12/2007
12:46:24 PM:

 Mark Zelden wrote:
  ...  I think the
  decision to remove suspend/resume was based on issues that
  kept cropping up with pav and paging.
 

 Must have been a fairly serious issue of some sort. Why else would they
 change the behavior via APAR and not on a release boundary?


  There is some prior discussion in the archives:

http://bama.ua.edu/cgi-bin/wa?A2=ind0602L=ibm-mainP=R27468I=1X=-

  We had already been planning to remove ASM use of suspend/resume
via the HyperPAV support APARs, since HyperPAV didn't mesh well with
suspend resume.  Then another problem cropped up with dynamic PAV and
suspend/resume, and we did not have an elegant solution.  Since we
we going to be removing suspend/resume anyway, we simply removed
it a little earlier.

Jim Mulder   z/OS System Test   IBM Corp.  Poughkeepsie,  NY

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

Jim ...

Perhaps you can clarify something.  If I understand correctly, 2 PAVs are 
assigned to each page dataset; 1 PAV for single-page requests, and 1 PAV for 
block requests.  If so, is it possible to have any block requests against the 
PLPA or COMMON page datasets?  I understand that if both are on the same 
volume, each will have its own path to the dataset, but will it have both?  
Just 
curious.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Storage Group for non-SMS volumes?

2007-06-21 Thread John Laubenheimer
On Thu, 21 Jun 2007 12:02:14 -0400, Tim Hare [EMAIL PROTECTED] 
wrote:

This is one of those things that _ought_ to exist, but I can't find it
documented anywhere so far.

Is there, anywhere a 'reserved name' storage group which contains all of
the non-SMS volumes?

If it doesn't exist, has it been suggested through the requirements
process that anyone knows of?

If I were implementing it, it would contain all volumes which are online
to the current system I'm using it on, and which are not part of a storage
group.

The advantage to having this would be to allow me to use utilities /
commands that have storagegroup operands for selecting groups of volumes
(for example, ABR's MOUNT STORGRP=).


Tim Hare
Senior Systems Programmer
Florida Department of Transportation
(850) 414-4209

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

I have never seen such a requirement.  One does not exist currently in the 
SHARE requirement database.  Best to try to submit a requirement at least a 
month in advance of the next SHARE meeting (August 12-17 in San Diego) for 
the quickest turn-around.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: determining cause of deadlock

2007-05-01 Thread John Laubenheimer
On Tue, 1 May 2007 07:22:09 -0500, Debbie Mitchell 
[EMAIL PROTECTED] wrote:

Is there any way to determine who was holding a reserve after the fact?  I
had a situation where systemA was holding an exclusive enq on a catalog but
systemB had the volume reserved.  The job holding the catalog enq on 
systemA
was cancelled, thus resolving the contention, before I could issue the
appropriate commands to determine who was holding the reserve.  I'd still
like to know, however, who/what was holding the reserve on systemB so I 
can
possibly avoid the problem in the future.  systemA is at z/OS 1.7 and
systemB is at z/OS 1.4.

TIA,
Debbie Mitchell

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

If you are running RMF with ENQ(DETAIL) as an option, you should be able to 
execute the RMF post-processor with REPORTS(ENQ).  This won't tell you 
exactly what is causing the problem, but it should get you close enough that 
you can surmise what's going on.
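Something along these lines should produce the report (a sketch only; the 
input SMF dataset name is a placeholder, and the RMF enqueue records, type 77 
if I recall correctly, must of course be present in it):

//RMFPP    EXEC PGM=ERBRMFPP
//MFPINPUT DD  DISP=SHR,DSN=your.dumped.smf.data
//MFPMSGDS DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  REPORTS(ENQ)
/*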

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: FEKFRSRV Service Class

2007-04-25 Thread John Laubenheimer
Two possibilities that I can think of (perhaps more) ...

The FEKFRSRV task is not really an STC, but either (1) an APPC task or (2) a 
USS task.  In case (1), this task name should be placed in the ASCH subsystem 
entry under WLM option 6.  In case (2), this task should be placed in the 
OMVS subsystem entry under WLM option 6.

IMHO, nothing should ever run with SYSOTHER, which you are correctly trying 
to fix.  Everything should be well known, and classified accordingly.

On Wed, 25 Apr 2007 07:01:06 -0400, Bob Shannon 
[EMAIL PROTECTED] wrote:

In our shop FEKFRSRV is in a service class of SYSOTHER even though I
explicitly defined it to go into another service class. I assumed it was
an STC. What do I have to do to get this thing out of SYSOTHER? TIA.

 

Bob Shannon

Rocket Software


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


RMF Post-Processor and LBI Tapes

2007-04-12 Thread John Laubenheimer
Has anyone successfully used the RMF post-processor to read SMF data from 
LBI tapes?  (BLKSIZE > 32K)

Currently, I receive an S013-E1 abend.  z/OS is at 1.7.

I can read the tape back into DF/SORT and have it convert the data back to a 
block size less than 32K, and all is well.  But, I just need to get as much SMF 
data onto a tape as possible (especially a monthly roll-up tape).  RMF 
reporting is just the first thing that I've run across; I'm sure there must be 
others.
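The DF/SORT copy described above would look something like this (dataset 
names, volser and blocksize are just examples):

//COPY     EXEC PGM=SORT
//SYSOUT   DD  SYSOUT=*
//SORTIN   DD  DISP=OLD,DSN=smf.monthly.rollup,UNIT=3590-1,
//             VOL=SER=LBITAP,LABEL=(1,SL)
//SORTOUT  DD  DSN=smf.temp.copy,DISP=(NEW,CATLG),
//             UNIT=SYSALLDA,SPACE=(CYL,(500,500),RLSE),
//             DCB=(RECFM=VBS,LRECL=32767,BLKSIZE=27998)
//SYSIN    DD  *
  OPTION COPY
/*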

And, does anyone know of any other utilities that one normally runs with SMF 
data that can't read an LBI tape yet?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ADR793E -PDSE indicators in the VTOC and VVDS do not match

2007-04-04 Thread John Laubenheimer
This is really a wild guess here, and may need Mark Thomen to chime in.

It sounds like you might have a duplicate NVR in the VVDS on the volume.  Try 
to salvage what you can from the PDSE in question by copying it to another 
dataset with a different name on a different volume.  Delete the PDSE in 
question.  Then, use IDCAMS to print the VVDS.  Search the VVDS printout for 
the dataset name of the PDSE that you just deleted.  If it's not there, then 
this is not your problem.  If it is there, you should be able to do a DELETE 
dataset NVR FILE(xxx) to correct this situation for the next time.  (Point a 
DD card xxx to the volser in question.)

If you do have a duplicate NVR, and can't identify how it got there, you should 
still open a PMR, since something is wrong, and this problem will only crop up 
again.  And, if you can identify how it got there, you now know not to do 
(whatever you did) again!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Max Allocation w/ Space= DD

2006-12-26 Thread John Laubenheimer
This assumes that space constraint relief is NOT specified in the DATACLAS 
being used.

By specifying a primary extent of 4,000 cylinders, you have essentially 
eliminated the (12) 3390 mod-3 volumes from consideration, since none of 
these volumes can satisfy this request.  (At most, you will have 3,300 
cylinders available on each of these volumes.)  This leaves the (2) 3390 
mod-9 volumes for consideration.  With a limit of 65,535 tracks per volume 
for a standard sequential dataset, you have limited the size of your 
dataset to 8,000 cylinders (1 primary allocation of 4,000 cylinders by 2 
candidate volumes ... no secondaries).  By specifying (CYL,(2000,1000)), 
you can get (maybe ... depending on other usage of the volumes) one (1) 
primary and one (1) secondary extent on each 3390 mod-3 volume, and one (1) 
primary and eight (8) secondary extents on a 3390 mod-9 volume.  Assuming 
that your data class allows for all fourteen (14) volumes to be used, your 
max dataset size here is 44,000 cylinders!  Note that each extent taken 
must reside entirely on one (1) volume before proceeding to another.

Hopefully, this clarifies the situation.
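
To put the two allocations side by side, something like this shows the 
difference; the dataset names and SMS classes are placeholders:

//ALLOC   EXEC PGM=IEFBR14
//* A 4,000-cylinder primary rules out the mod-3 volumes entirely
//BIG      DD DSN=MY.LARGE.FILE1,DISP=(NEW,CATLG),
//            DATACLAS=DCLARGE,STORCLAS=SCPOOL,
//            SPACE=(CYL,(4000,1000),RLSE)
//* Smaller extents can be satisfied on any volume in the pool
//BETTER   DD DSN=MY.LARGE.FILE2,DISP=(NEW,CATLG),
//            DATACLAS=DCLARGE,STORCLAS=SCPOOL,
//            SPACE=(CYL,(2000,1000),RLSE)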

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Max Allocation w/ Space= DD

2006-12-26 Thread John Laubenheimer
and one (1) primary and eight (8) secondary extents on a 3390 mod-9 volume

Correction ... and one (1) primary and two (2) secondary extents on a 3390 
mod-9 volume.

The maximum dataset size would then be 12 times 3,000 cylinders plus 2 times 
4,000 cylinders (36,000 + 8,000), or 44,000 cylinders.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SYS1.MAN allocation question

2006-11-15 Thread John Laubenheimer
There's been some disagreement over the specification of CISIZE for the SMF 
datasets.  A CISIZE of 26624 gives you 52K per track on a 3390 (2 CIs) ... 
slightly better performance.  A CISIZE of 18432 will give you 54K per track 
on a 3390 (3 CIs) ... slightly better space utilization.  Make your own 
choice.  And, as always, YMMV.

The BUFSP parameter should typically be 5 times the CISIZE.

The average (first number in) RECSZ value pretty much doesn't matter (in 
this case); the maximum (second number in) RECSZ value MUST be 32767.
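
For reference, a DEFINE that lines up with the numbers above might look like 
this; the name, volume, and size are placeholders, and the 26624 CISIZE is the 
performance-oriented choice:

//DEFMAN  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(SYS1.MAN1) -
         VOLUMES(SMF001) -
         CYLINDERS(200) -
         NONINDEXED -
         REUSE -
         SPANNED -
         SHAREOPTIONS(2) -
         CONTROLINTERVALSIZE(26624) -
         RECORDSIZE(4086 32767) -
         BUFFERSPACE(133120))
/*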

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SHARE 2006

2006-01-19 Thread John Laubenheimer
On Wed, 11 Jan 2006 06:45:55 -0600, Chase, John [EMAIL PROTECTED] wrote:

 -Original Message-
 From: IBM Mainframe Discussion List On Behalf Of Gene Muszak

 Esteemed colleagues

 Does anyone know where the SHARE expo is after SEATTLE in March?

According to http://www.share.org/Events/future_conf.cfm:

SHARE User-Driven Training Event  Expo
August 13-18, 2006
Baltimore, Maryland

I see nothing has been published yet for 2007 and beyond

-jc-

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

The SHARE website has finally been updated to include the dates and
locations for the 2007 conferences.


Future Conferences

SHARE User-Driven Training Event  Expo (SHARE 106)
March 5-10, 2006
Seattle, Washington

SHARE User-Driven Training Event  Expo (SHARE 107)
August 13-18, 2006
Baltimore, Maryland

SHARE User-Driven Training Event  Expo (SHARE 108)
February 11-16, 2007
Tampa, Florida

SHARE User-Driven Training Event  Expo (SHARE 109)
August 12-17, 2007
San Diego, California


In particular, note SHARE 108, which is a bit earlier than is normally
scheduled.  SHARE 108 was originally scheduled for the week before Mardi
Gras in New Orleans.  After Katrina, the Board of Directors concluded that 
it may not be in SHARE's interest to stay in New Orleans at this time.
The dates still reflect the week before Mardi Gras timing, albeit in a
different location.  As I understand it, New Orleans will be re-evaluated
for a future conference location.  (And, this change of venue was why the
2007 conferences haven't been posted until now.)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: (semi-OT) - for IBM, SHARE members

2005-11-02 Thread John Laubenheimer
An easier mechanism to vote on SHARE requirements is coming soon!

It's currently in beta test on 4 projects, and we hope that it will be
available by the next meeting.  (Seattle, first week of March ... mark
your calendars!)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Access to the SHARE requirements database

2005-09-06 Thread John Laubenheimer
Ed ...

   Each project handled the merge of the GUIDE requirements in their own
fashion.  I can only speak for MVS Storage requirements, since I was the
coordinator at that time.  (And, even though I am Manager of Requirements
for SHARE right now, I can't answer what happened to all of the GUIDE
requirements, since I didn't have this position at that time.)  The MVSS
project still has 46 GUIDE requirements in ACTIVE status.  81 of the GUIDE
storage requirements were retired, either because they were rejected by
IBM, announced or available in some form, voted down by project membership
or considered non-strategic by a requirements review committee (i.e., no
way was IBM going to do this, no matter what we thought).

   SHARE still views the requirements process as important, and is
currently in the process of trying to revitalize it.  We welcome new
requirements from SHARE members.  But, the process doesn't work well
without the participation of the SHARE community.  Register for the
requirements system; participate in the discussion (it's all carried out
on-line via a browser interface); and, most important, VOTE (again, on-
line via a browser interface).  Requirements are the primary method of 
influencing IBM's direction, so, please, PARTICIPATE!

John

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Acquiring TSO Userid in Logon Proc

2005-08-12 Thread John Laubenheimer
On Fri, 12 Aug 2005 09:49:49 -0400, Earnie Allen
[EMAIL PROTECTED] wrote:

Is there a way to pick up the TSO userid [sysuid] during logon and
utilize
it in the specific TSO logon proc being used?

I would like to use it as part of a DSN which I would like to be able to
create during logon and go away at logoff --- a monitoring-type dataset.

Thanks!


Earnie Allen
Senior Systems Programmer
MVS Systems Software
WORLDSPAN, LP
Phone: 404-322-2700  FAX: 404-322-4653
E-Mail:  [EMAIL PROTECTED]

Remember: It takes teamwork to make the dream work.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

You can specify something like:

//$TSOSM  EXEC PGM=ADFMDF03,DYNAMNBR=200  (or PGM=IKJEFT01)
//ISPPROF   DD DISP=SHR,DSN=&SYSUID..&SYSNAME..ISPF.ISPPROF
  . . .

You can insert the basic system symbols.  The above example inserts the
TSO userid via &SYSUID, and your system ID (from IEASYSxx) via &SYSNAME.
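
And for a monitoring-type dataset that comes and goes with the session, a DD 
along these lines would work; the DD name, unit, space, and record format are 
just illustrative:

//MONITOR  DD DSN=&SYSUID..&SYSNAME..MONITOR.DATA,
//            DISP=(NEW,DELETE,DELETE),
//            UNIT=SYSALLDA,SPACE=(TRK,(5,5)),
//            RECFM=FB,LRECL=133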

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Copying GDG versions in Reverse order

2005-08-05 Thread John Laubenheimer
I checked this last night.  There was no requirement of this nature
transferred from GUIDE to SHARE.  So, a new requirement might me in
order.  All of the GUIDE storage requirements were subject to 2 separate
reviews by the MVS Storage Management requirements committee.  The
majority of the old GUIDE requirements (almost 98 percent) were deemed to
be available, no longer applicable, unlikely to ever be addressed by IBM,
or revoted by project members.  I think that 6 of the GUIDE requirements
have survived!  Anyone who has registered for MVS/MVSS requirements
through SHARE should be able to see everything on the database.  Check the
SHARE website (www.share.org) for information on registering for
requirements.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html