Just be aware that you pretty much need to share the related catalogs (connecting a user catalog to multiple master catalogs will work, with proper catalog-sharing techniques) if you want to actually access SMS-managed datasets on SMS volumes from a different system. You may be able to see the datasets (e.g., with ISPF 3.4), but when you try to access them, most (all?) access methods will complain that the SMS dataset is not cataloged in the proper catalog and refuse. With non-SMS, non-VSAM datasets you can get by with explicit volume/unit references, or by cataloging the dataset on both systems in a catalog on each system.
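For the non-SMS, non-VSAM case, a minimal sketch of what an explicit volume/unit reference looks like in JCL (the dataset name, volser, and step are made up for illustration):

```
//READIT   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* Explicit VOL=SER/UNIT bypasses the catalog lookup entirely
//SYSUT1   DD DSN=PROD.PAYROLL.DATA,DISP=SHR,
//            VOL=SER=PRD001,UNIT=3390
//SYSUT2   DD SYSOUT=*
```

Because DISP=SHR here is only honored via GRS/reserve processing, this is safe on a non-sharing system only if you know the other system is not updating the dataset at the same time.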
   JC Ewing

On 03/23/2013 11:05 AM, mf db wrote:
Hello Groups,

Thanks for an awesome reply. Apologies for not giving a deeper description
of the environment: ours is not a sysplex and we are not sharing catalogs.
We just want to share the volumes, so that the datasets are visible across
the systems (I mean accessible via ISPF 3.4 with explicit volume references).


/Peter

On Sat, Mar 23, 2013 at 8:45 PM, Joel C. Ewing <[email protected]> wrote:

On 03/23/2013 02:07 AM, mf db wrote:

Hello Group,

We have two machines, a z800 and a z9; both use the same storage unit, an
ESS800. Currently the z800 and z9 machines don't share the DASD. Could
someone please provide some pointers or a URL which can help me share the
DASD across the two machines.

Any ideas would be much appreciated. Right now I am reading the HCD
configuration guide, but I am unable to find a topic which talks about
sharing DASD across systems.


/Peter

...

  A lot depends on your goals.
If the object is to be able to access a single volume from multiple
systems, but only have a volume on-line to one system at a time, then you
can get by with IODFs that define physical paths to the device from both
systems and define a device as initially off-line to all but one system,
together with rigid operating procedures that demand the device be placed
off-line on any other system before it is brought on-line to a different
system.  We have used this with a test/recovery system that is
restricted-use and which might need to access production volumes for
restores, but only when production z/OS is dead, and where production might
need to access test system volumes, but only when the test system is down
and being rebuilt or reconfigured.  If you have this type of
environment, you have to be very careful about accidentally getting a
volume on-line to two systems at once, as concurrent updates can destroy a
shared catalog or a shared dataset, invalidate cached information held by
the other system, etc.
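In practice that "off-line everywhere else first" procedure comes down to operator VARY commands on each side. A sketch, with made-up system names SYSA/SYSB and device number 0A80:

```
(on SYSA)  V 0A80,OFFLINE       vary the device off-line where it is in use
(on SYSA)  D U,,,0A80,1         display unit status; confirm it shows OFFLINE
(on SYSB)  V 0A80,ONLINE        only after that, bring it on-line elsewhere
```

The IODF side of this is just defining the device with paths from both systems but OFFLINE=YES in the OS configuration of all but one; the commands above are the rigid operating procedure the text describes.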

If your object is to concurrently share and potentially update data on
volumes shared among multiple systems without corrupting the data, this is
much more complicated and you need to do much research on defining devices
as "shared" to z/OS in the IODF, and on GRS and/or Sysplex concepts, to ensure that
destructive concurrent updates and inconsistent cached copies of data are
prevented.  Without properly implementing the inter-z/OS communication
needed for coordination among the systems, all sorts of random nasty things
can happen: broken catalogs, missing datasets, broken datasets, corrupted
data. I haven't looked recently, but there surely must be some Redbooks out
there that address these concepts.
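To give a flavor of that GRS configuration, here is a GRSRNLxx parmlib fragment that puts the common dataset, catalog, and VTOC serialization resources on the inclusion RNL so the enqueues are propagated globally. The QNAMEs are the real z/OS major names, but treat this as a sketch, not a complete RNL policy:

```
/* GRSRNLxx: propagate dataset, catalog, and VTOC serialization */
RNLDEF RNL(INCL) TYPE(GENERIC) QNAME(SYSDSN)
RNLDEF RNL(INCL) TYPE(GENERIC) QNAME(SYSIGGV2)
RNLDEF RNL(INCL) TYPE(GENERIC) QNAME(SYSVTOC)
```

Without a GRS ring or star (or equivalent product) actually connecting the systems, entries like these accomplish nothing, which is why the inter-system communication mentioned above has to be in place first.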

Unless all access to an alternate system with an independent security
database and shared DASD is sufficiently restricted, you must also use some
automatic or manual technique to sync the security databases across the
systems, at least to the extent that sensitive data or critical datasets
belonging to all systems are always protected from inappropriate access on
all systems with access.

--
Joel C. Ewing,    Bentonville, AR       [email protected]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN




--
Joel C. Ewing,    Bentonville, AR       [email protected] 

