Yes, I should have added the detail that the LPAR referenced was a member
of a sysplex, but the DB2 SSID mentioned is a vintage legacy model that is
standalone, with no data sharing.  For this particular environment, the DB2
part of the OLTP network runs on only one LPAR at a time, so even plexed we
do have a couple of minutes when DB2 data is unavailable.

So, prior to the IPL we move DB2 from LPARA to LPARB (less than two
minutes).  We run CICS on both, so as soon as DB2 is started on LPARB the
appropriate regions attach there.

LPARA is then recycled, and DB2 is moved back to LPARA in the same manner
when practical.
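For what it's worth, the move can be sketched as a strictly ordered sequence; the step text below is illustrative only, not our actual automation, and the point is just that the data-unavailable window is confined to the stop/start pair.

```python
# Hedged sketch of the planned DB2 move described above. The step strings
# are placeholders, not real OPS/MVS rules or DB2 commands.
log = []

# 1. Stop the standalone DB2 on LPARA (no data sharing, so it runs on
#    exactly one LPAR at a time).
log.append("LPARA: stop DB2")
# 2. Start DB2 on LPARB; the couple-of-minutes outage spans steps 1-2.
log.append("LPARB: start DB2")
# 3. CICS already runs on both LPARs, so the regions attach on LPARB.
log.append("LPARB: CICS regions attach to DB2")
# 4. LPARA is now free to IPL; the steps reverse later when practical.
log.append("LPARA: IPL")

outage_window = log[0:2]   # DB2 data is unavailable only during these steps
```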

As I said, our OS guys were notably wowed with how fast the system came back
to life and were practically doing cartwheels.

We do use OPS, and yes, over the years we've extensively tuned it to deal
with most things that pop out of the startup log.  Unfortunately, I don't
know the details well enough to respond to your questions.

But I did want to point out that there is a Redpaper residency taking place
in August that should produce more information on avoiding IPLs.

The details are at
http://publib-b.boulder.ibm.com/residents.nsf/IntNumber/ZS-0005-R01?OpenDocument



-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Tim Hare
Sent: Friday, June 24, 2005 11:21 AM
To: [email protected]
Subject: Re: IPL periodicity


"these days the IPL process is
pretty much an automated cinch that takes less than 5 minutes"

The time depends upon a lot of things... and the outage is longer than the
actual IPL.  As measured at our shop, the outage runs from when the online
systems (TSO, CICS, IMS, Web applications) come down to when they are
available again.  That's longer than 5 minutes at our shop, but it brings
up an interesting (to me) topic:

Do any of you "tune" the process of bringing the system down and back up?

I'm starting to look at this - we use AF/Operator to bring everything
except JES2 up and down.  The previous "owner" of the product, instead of
building event-driven automation based on message traps, put in a lot of
15- or 30-second WAITs (e.g. "Start VTAM, wait 30 seconds, start TSO").  So
we have a lot of easy speed-ups there - but there are other, perhaps
more subtle, things I wonder about:

1. A lot of the automation issues messages like "about to start VTAM"... 
there's actually quite a flood of messages during our IPL. Are we slowing 
down waiting for the 2074/emulator to handle all of those?

2. Does VTAM process its start list sequentially? If so, would I be 
better off issuing VARY ACT commands for all of the major nodes to take 
advantage of VTAM's parallelism?
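Both tuning ideas above reduce to two generic patterns, sketched below in Python purely as an illustration - the node names and timings are made up, and this models neither AF/Operator rules nor VTAM internals: proceed on a readiness signal instead of a fixed WAIT, and fan independent activations out in parallel instead of walking a list serially.

```python
# Hedged illustration only: 'activate' is a stand-in for something like
# a VARY ACT command, and the Event plays the role of a message trap
# (proceed on "initialization complete" rather than a fixed WAIT).
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def activate(node, ready):
    """Pretend activation that takes 0.2s, then signals readiness."""
    time.sleep(0.2)
    ready.set()              # the message-trap event, vs. sleeping 30s
    return node

nodes = ["NODE01", "NODE02", "NODE03", "NODE04"]   # illustrative names
events = {n: threading.Event() for n in nodes}

# Sequential start list: total time is roughly the SUM of activations.
t0 = time.monotonic()
for n in nodes:
    activate(n, events[n])
seq_time = time.monotonic() - t0

# Parallel activation: total time is roughly the SLOWEST single one.
for e in events.values():
    e.clear()
t0 = time.monotonic()
with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
    done = list(pool.map(lambda n: activate(n, events[n]), nodes))
par_time = time.monotonic() - t0

all_ready = all(e.is_set() for e in events.values())
```

If the start list really is processed serially, the parallel fan-out is where the win would come from; the event side is the same idea as replacing those 15/30-second WAITs with message traps.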


I'm guessing that the 24/7-via-Sysplex crowd doesn't have as much of an 
issue, since their public face is always up even if one of the members of 
the plex bounces for maintenance.

<SNIP>

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
