Additionally, there's the general resource class DASDVOL that may be applicable/helpful.
It used to be $DASDI.<something> in FACILITY... I think?
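
For example, a minimal sketch of activating it (the PRD* mask and the SYSPROG
group are illustrative assumptions, not something from this thread):

    SETROPTS CLASSACT(DASDVOL) GENERIC(DASDVOL)
    RDEFINE DASDVOL PRD* UACC(NONE)
    PERMIT PRD* CLASS(DASDVOL) ID(SYSPROG) ACCESS(ALTER)

DASDVOL authority is what lets volume utilities like ICKDSF or DFSMSdss work
on a whole volume without access to each individual dataset, so it complements
rather than replaces dataset profiles.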

- KB

------- Original Message -------
On Friday, November 25th, 2022 at 6:11 PM, Ituriel do Neto 
<000003427ec2837d-dmarc-requ...@listserv.ua.edu> wrote:


> Hi,
> 
> In the past, I worked for a tiny shop with the same distribution you 
> indicated: only three LPARs, no Sysplex, no GRS.
> 
> At that time, we chose to make all disks available to all LPARs, but there 
> was a segregation of Production, Development, and Sysprog volumes done by 
> VOLSER.
> I don't remember the details anymore, but shared disks were labeled SHR*, 
> Production and Development disks PRD* and DEV*, and of course SYSRES, 
> page, spool, etc. had their own labels.
> 
> At IPL time, a small program was executed that searched all volumes and 
> issued V OFFLINE for those that did not belong to the appropriate LPAR. This 
> program used wildcard masks to select what should remain ONLINE, as in the 
> sketch below.
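> 
> For illustration only, here is a minimal REXX sketch of that idea. All of it
> is assumption rather than the original program: it reads "devnum volser"
> pairs from a DEVLIST DD, issues the VARY commands through the SDSF REXX
> interface, and simplifies the masks to prefix-style matching.
> 
>   /* REXX - vary off DASD whose VOLSER matches no keep mask       */
>   keep = 'SHR* DEV* SYS*'             /* masks allowed to stay online */
>   "EXECIO * DISKR DEVLIST (STEM dev. FINIS" /* 'devnum volser' lines  */
>   rc = isfcalls('ON')                 /* enable SDSF host commands    */
>   do i = 1 to dev.0
>     parse var dev.i devn volser .
>     if \matches(volser, keep) then
>       address SDSF "ISFSLASH '/V" devn",OFFLINE'"
>   end
>   rc = isfcalls('OFF')
>   exit 0
> 
>   matches: procedure     /* 1 if volser fits any trailing-* mask */
>     parse arg v, masks
>     do m = 1 to words(masks)
>       msk = word(masks, m)
>       p = pos('*', msk)
>       if p = 0 then do
>         if v = msk then return 1
>       end
>       else if left(v, p - 1) = left(msk, p - 1) then return 1
>     end
>     return 0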
> 
> And, of course, MVS commands were protected in RACF, so only authorized 
> userids could VARY a volume ONLINE.
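> 
> That protection would be done in the OPERCMDS class; a sketch, where
> MVS.VARY.DEV is my recollection of the relevant profile name and OPERGRP is
> an illustrative group (verify both against current documentation):
> 
>   RDEFINE OPERCMDS MVS.VARY.DEV UACC(NONE)
>   PERMIT MVS.VARY.DEV CLASS(OPERCMDS) ID(OPERGRP) ACCESS(UPDATE)
>   SETROPTS RACLIST(OPERCMDS) REFRESH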
> 
> It worked well for us, in that environment.
> 
> 
> Best Regards
> 
> Ituriel do Nascimento Neto
> z/OS System Programmer
> 
> On Friday, November 25, 2022 at 02:38:47 BRT, Joel C. Ewing 
> jce.ebe...@cox.net wrote:
> 
> But it's not just a case of whether you trust that they will not
> intentionally damage something, but also the ease of accidentally causing
> integrity problems by not knowing when others have touched catalogs,
> volumes, or datasets on DASD that is physically shared but not known to be
> shared by the operating system. If many people are involved, the
> coordination procedures needed to prevent damage, assuming such procedures
> are even feasible, are a disaster waiting to happen.
> 
> If volumes are SMS-managed, all datasets must be cataloged, and the
> associated catalogs must be accessible from every system that accesses
> those datasets. If the systems are not in a relationship that enables
> proper catalog sharing, access and possible modification of the catalog
> from multiple systems causes each system's cached catalog data to fall out
> of sync with the actual content on the volume whenever the catalog is
> altered from a different system, and there is a high probability the
> catalog will become corrupted on all systems.
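> 
> As a quick check of whether a catalog was even defined with sharing in
> mind, an IDCAMS listing shows its SHAREOPTIONS (the catalog name here is
> illustrative); shared user catalogs are normally SHAREOPTIONS(3 4):
> 
>   LISTCAT ENTRIES('CATALOG.PROD.UCAT') ALL
> 
> Even then, every accessing system must also treat the device itself as
> shared, or the cache staleness described above follows.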
> 
> Auditors are justified in being concerned about whether independent RACF
> databases on multiple systems will always be in sync well enough to
> protect production datasets from unintentional or unauthorized access when
> test LPARs share access to production volumes. There should always be
> multiple barriers to doing something bad, because accidents happen --
> like forgetting to change a production dataset name in what was intended
> to be test JCL.
> 
> There are just too many bad things that can happen if you try to share
> things that are only designed for sharing within a sysplex. The only
> relatively safe way to do this across independent LPARs is
> non-concurrently: have a set of volumes and a catalog for the HLQs of
> just the datasets on those volumes (with that catalog also located on one
> of those volumes), have those volumes online to only one system at a time,
> and close and deallocate all datasets and the catalog on those volumes
> before taking them offline to move them to a different system. A command
> sketch follows.
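> 
> A sketch of that hand-off (F CATALOG,CLOSE and VARY are standard commands;
> the catalog name and device range are illustrative):
> 
>   On the releasing system:
>     F CATALOG,CLOSE(UCAT.SHARED)    close the user catalog
>     V 0A00-0A0F,OFFLINE             take the shared string offline
>   On the receiving system:
>     V 0A00-0A0F,ONLINE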
> 
> A much simpler and safer solution is to not share DASD volumes across
> LPARs that are not in the same sysplex: maintain a unique copy of datasets
> on each system where they are needed, and use a high-speed communication
> link between the LPARs to transmit datasets from one system to another
> whenever they need to be resynced from the production LPAR.
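> 
> For example, assuming an NJE link between the LPARs (node, userid, and
> dataset names are illustrative):
> 
>   On the production system:
>     XMIT DEVNODE.USERID DA('PROD.APP.SOURCE')
>   On the development system:
>     RECEIVE        (RECEIVE then prompts for the target dataset name)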
> 
> Joel C Ewing
> 
> 
> On 11/24/22 21:38, Farley, Peter wrote:
> 
> > Not necessarily true in a software development environment where all 
> > members of the team need to share all their data everywhere. "Zero trust" 
> > is anathema in a development environment.
> > 
> > If you don't trust me then fire me. It's cleaner that way.
> > 
> > Shakespeare was almost right. First get rid of all the auditors, then get 
> > rid of all the lawyers.
> > 
> > Peter
> > 
> > -----Original Message-----
> > From: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU On Behalf Of 
> > Lennie Dymoke-Bradshaw
> > Sent: Thursday, November 24, 2022 5:24 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: To share or not to share DASD
> > 
> > If you were asking in a security context, I would advise against it in 
> > nearly all cases.
> > Auditors will not like that a system's data can be accessed without 
> > reference to the RACF (or ACF2, or TSS) system that is supposed to protect 
> > it.
> > 
> > Lennie Dymoke-Bradshaw
> > 
> > -----Original Message-----
> > From: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU On Behalf Of 
> > Gord Neill
> > Sent: 24 November 2022 20:55
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: To share or not to share DASD
> > 
> > G'day all,
> > I've been having discussions with a small shop (single mainframe, 3 
> > separate LPARs, no Sysplex) regarding best practices for DASD sharing. 
> > Their view is to share all DASD volumes across their 3 LPARs 
> > (Prod/Dev/Test) so their developers/sysprogs can get access to current 
> > datasets, but in order to do that, they'll need to use GRS Ring or MIM with 
> > the associated overhead. I don't know of any other serialization products, 
> > and since this is not a Sysplex environment, they can't use GRS Star. I 
> > suggested the idea of no GRS, keeping most DASD volumes isolated to each 
> > LPAR, with a "shared string"
> > available to all LPARs for copying datasets, but it was not well received.
> > 
> > Just curious as to how other shops are handling this. TIA!
> > 
> > Gord Neill | Senior I/T Consultant | GlassHouse Systems
> > --
> > ...
> 
> 
> --
> Joel C. Ewing
> 
> 
