There is no reason not to share *EVERYTHING* possible at the physical
level. 

All of the integrity/performance issues perceived by the OP's management
occur when concurrent ACCESS occurs, not from concurrent CONNECTION --
that is, not from the mere fact that the devices are cabled to two (or
more) LPARs, CECs, etc.

I agree w/John's post that the concurrent ACCESS issues are minimal to
non-existent, but what if a SYSPROG fat-fingers something and the prod
system won't come up? If concurrent CONNECTION is available, just vary
the DASD online to another image, fix the problem and retry. If not, now
there is a *BIG* mess!
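For the recovery scenario above, the console sequence from the surviving (e.g. test) image is roughly the following. This is a sketch only -- 0A23 is a made-up device number, and your shop's procedures (and whether the volume needs a reserve first) may differ:

```
V 0A23,ONLINE            <-- make the prod volume addressable from this image
D U,DASD,ONLINE,0A23,1   <-- confirm the device actually came online
  ...fix the fat-fingered member from this system...
V 0A23,OFFLINE           <-- put the volume back offline before prod re-IPLs
```

The point is simply that none of this is possible unless the device was already cabled and defined to the second image -- concurrent CONNECTION bought you the recovery path.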

If the OP's management is truly worried about performance issues
w/concurrent ACCESS, duplicate what is needed and run w/test & prod
offline to each other. 


<snip>
We fully share all of our DASD between all three of our z/OS images.
Shared DASD has never been a problem. The stuff is so fast that even if
both systems are accessing the same load library, we don't see any
measurable degradation. We convert most of the hardware reserves to
global ENQs which help prevent old-style "deadly embraces". We share the
JES2 SPOOL. Data set access integrity is assured by RACF (e.g. test jobs
cannot update anything other than test data sets due to RACF rules -
same would work with Top Secret or ACF2). We do have separate res and
master catalog volumes, but many shops share the system residence
volumes and master catalog, separating things using static system
symbols.

I really feel that not sharing is a recipe for disaster. Suppose some
system support library gets out of sync (such as DB2). You do all your
testing with a different PTF level of DB2 than you have in production.
Everything runs well in test, but "messes up" in production. Yuck! OK,
not likely, but a non-zero probability!
</snip>
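The reserve-to-global-ENQ conversion the poster mentions is done via a GRSRNLxx parmlib member. A minimal sketch, assuming GRS (ring or star) is already active between the images -- the QNAMEs shown are common conversion candidates for illustration, not a recommendation for any particular shop:

```
/* GRSRNLxx sketch: convert selected hardware RESERVEs to global ENQs */
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)  /* catalog RESERVEs  */
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSVTOC)   /* VTOC RESERVEs     */
```

Resources matched by the conversion RNL are serialized with a global ENQ instead of a hardware reserve, which is exactly what prevents the old-style "deadly embraces" described above.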

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html