Re: To share or not to share DASD

2023-02-03 Thread Rick Troth

Following up on a thread from November. The subject alone caught my eye.
I share DASD all the time on my home systems.

Related:
With the help of an intern (Rushal Verma), I released an automounter 
script called 'vmlink'. Works great on Linux on top of z/VM.
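If you're curious what such a script automates, the by-hand dance on Linux
under z/VM goes roughly like this. A sketch only: the owner, device numbers,
and mount point are invented, it assumes the minidisk holds a Linux
filesystem, and vmlink's actual interface may differ.

   # ask CP to link OWNER's 191 minidisk as our virtual 1191, read-only (RR)
   vmcp link OWNER 0191 1191 rr
   # bring the new virtual DASD online to Linux
   chccwdev -e 0.0.1191
   # mount it read-only
   mount -o ro /dev/disk/by-path/ccw-0.0.1191-part1 /mnt/owner191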


Mainframes have been sharing DASD at the physical layer from the 
beginning. [cue Chicago, "Only the Beginning"]
z/VM has been sharing DASD, and slices thereof, at the logical layer 
since its beginning.
So when VMware appeared and suddenly we had a hypervisor for a different 
architecture, "I wonder ...", could one share virtual disks there too?

YES
It's true for KVM too, and likely most other hypervisors.

Seymour's right ...
Best practice is to not share what you can't protect, whether from 
malice or from accident (like content corruption due to dual write; call 
that a "write fight").
CMS uses a common IPL disk and at least one other, usually several 
others. SHARED READ-ONLY
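In a z/VM directory those shared CMS disks are just plain LINK statements,
something like this (owner and device numbers illustrative, per the common
convention; your shop may differ):

   * shared CMS system disk and a products disk, read-only
   LINK MAINT 0190 0190 RR
   LINK MAINT 019E 019E RR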
That's not sexy ... or maybe some of you have a weak appreciation for 
"sexy".
Ironically, micro devices have been doing READ-ONLY longer than Unix and 
Unix-like systems (I'm talkin Linux, read on).
Your smart phone has a chunk of read-only memory. That's not DASD, but 
it works the same: it holds a filesystem.
Same for ye olde Palm Pilot, okay prolly not a "filesystem", and even 
further back. Remember PocketPC?

But I digress. With micro thingies, the content isn't shared.

So ... CMS does this all the time.
Turns out you can do it with other parts of your z/VM host system to any 
number of VM-on-VM guests.
That is, you can share host minidisks, OR full volumes (containing 
minidisks), with second level CP, doling out the same minidisks as found 
on the host to the guests ... without having to copy the lot. Yes, 
Virginia, you can share the CP IPL disk.
Not a big case for production work, but if your staging systems persist 
then this hack might have real value. (Wanna get agile?)


Linux distributions picked up on the trick of read-only op sys content, 
prolly to support flashable ROM or hand-helds.

Linux can straight away use shared R/O DASD for the op sys.
Linux can mimic CMS. hah
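What that can look like in /etc/fstab, as a sketch (device names invented;
the op sys volume is the shared one, plus one small private disk per guest):

   /dev/dasda1  /     ext4   ro        0 1   # shared op sys volume
   /dev/dasdb1  /var  ext4   rw        0 2   # per-guest writable bits
   tmpfs        /tmp  tmpfs  defaults  0 0   # scratch space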

And we haven't even talked about containers.

"But it's gotta be writable."
Does it?
Think about all the R/O content you already have. Think about all the 
stuff you don't WANT people writing or updating.

For any op sys, those things are candidates for residing on shared DASD.
It makes sense to manage such content that way even if you don't intend 
to share the residence volumes.


But if it must be R/W then there are ways to make that happen.
Just gotta go out-of-band. None of our varied I/O models can fully 
isolate mixed updates in the higher levels.
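One such trick on Linux, if guests only need the shared content to *look*
writable, is an overlay mount: the shared volume stays pristine underneath
and each guest's changes land on its own private disk. A sketch (paths
invented; upperdir and workdir must live on the same writable filesystem):

   # shared, read-only lower layer; private, writable upper layer
   mount -o ro /dev/dasda1 /sysro
   mount -t overlay overlay \
         -o lowerdir=/sysro,upperdir=/priv/upper,workdir=/priv/work /merged

Real updates to the shared content still happen out-of-band, on whichever
system owns the volume.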


I have two flavors of Linux at home for which the OS disk is shared 
across several virtual machines. These run 24x7 and are rock solid. 
Guests which don't share the op sys have to be maintained individually. 
The time spent baby-sitting them is painful.
These days I use KVM, but have also used VMware and Xen. KVM does allow 
for disks to be explicitly "shareable" and "read only". In situations 
where KVM cannot enforce R/O then my guests have to behave properly. 
(Since I control them, that's less of a problem here.)
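For the record, marking a disk that way in libvirt is just a flag on the
attach. A sketch with invented guest names and volume path:

   # attach the common op sys image to two guests, read-only
   virsh attach-disk guest1 /dev/vg0/shared-os vdb --mode readonly --persistent
   virsh attach-disk guest2 /dev/vg0/shared-os vdb --mode readonly --persistent
   # --mode shareable also exists, for the rare disk that really must be
   # written by several guests at once -- bring your own cluster filesystem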

Works ... ahhh...

Sharing DASD ... just do it.
EASY for CMS content. Doable for selected parts of VM/CP.
Commonly done with z/OS too.
Should be done more in Linux land. Has also been done with Amdahl's UTS 
and AIX/370 (but now I'm showing my gray hair).


That automounter script which I mentioned is here ...

https://github.com/trothr/vmlink/

-- R; <><


On 11/25/22 08:51, Seymour J Metz wrote:

Best practice is to not share what you can't protect. MIM, GRS ring, etc., can 
help, but sharing of PDSE or Unix files can lead to data corruption even with 
serialization, and sharing between security domains might not only lead to 
compromising data but to legal issues, both civil and criminal. If you're a 
financial or medical facility, involve the legal staff in any decision of 
sharing data between sysplexes.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Gord Neill [02ff5f18e15f-dmarc-requ...@listserv.ua.edu]
Sent: Thursday, November 24, 2022 3:54 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate LPARs, no 
Sysplex) regarding best practices for DASD sharing.  Their view is to share all DASD 
volumes across their 3 LPARs (Prod/Dev/Test) so their developers/sysprogs can get access 
to current datasets, but in order to do that, they'll need to use GRS Ring or MIM with 
the associated overhead.  I don't know of any other serialization products, and since 
this is not a Sysplex environment, they can't use GRS Star.

Re: To share or not to share DASD

2022-12-01 Thread Ed Jaffe

On 11/29/2022 9:07 PM, Brian Westerman wrote:

You are completely right, and thanks for setting things straight.


Who is right? About what?


--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/






Re: To share or not to share DASD

2022-11-29 Thread Brian Westerman
You are completely right, and thanks for setting things straight.



Re: To share or not to share DASD

2022-11-29 Thread Doug Henry
IBM calls this type of sysplex a base sysplex.

https://www.ibm.com/docs/en/zos-basic-skills?topic=sysplex-zos

Doug




Re: To share or not to share DASD

2022-11-29 Thread Dana Mitchell
On Mon, 28 Nov 2022 23:51:53 -0600, Brian Westerman wrote:

>You're incorrect: you don't need a coupling facility to share PDS/e. You can
>(and I do at several sites) use FICON CTC's just as well, and in fact it's a
>lot cheaper (unless you already have a coupling facility installed, in which
>case it would be silly not to use it).
>
>IBM does not require a CF to share PDS/e all the way down to the member
>level.  Wherever you got the information you posted, it's incorrect or at
>best misleading.  I maintain several sites that have no coupling facility
>and sysplex sharing is no problem.  It's not a "complete" sysplex, but I
>think people tend to refer to it as a "baby" sysplex.  GRS ring is not a
>problem, and you get a lot of the benefits of sysplex (shared consoles,
>command shipping, etc.); you just don't have the CF to handle it, so instead
>you use the FICON cards as CTC's.

You and the linked doc are both correct.  This is a true statement: "Every 
system that is sharing a PDSE must be a member of the sysplex and have the 
sysplex coupling facility (XCF) active."

But XCF does not require a CF; Brian's COUPLExx member has PATHIN and 
PATHOUT statements specifying FICON CTCs instead of STRNAMEs.
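For anyone who hasn't seen it, that looks roughly like this in COUPLExx. A
sketch only; the sysplex name, couple dataset name, and device numbers are
invented:

   COUPLE  SYSPLEX(PLEX1)             /* base sysplex, no CF           */
           PCOUPLE(SYS1.XCF.CDS01)
   PATHIN  DEVICE(4010)               /* FICON CTC, inbound            */
   PATHOUT DEVICE(5010)               /* FICON CTC, outbound           */
   /* with a CF you would code PATHIN/PATHOUT STRNAME(...) instead     */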

Dana



Re: To share or not to share DASD

2022-11-28 Thread Brian Westerman
You're incorrect: you don't need a coupling facility to share PDS/e. You can 
(and I do at several sites) use FICON CTC's just as well, and in fact it's a 
lot cheaper (unless you already have a coupling facility installed, in which 
case it would be silly not to use it).

IBM does not require a CF to share PDS/e all the way down to the member level.  
Wherever you got the information you posted, it's incorrect or at best 
misleading.  I maintain several sites that have no coupling facility and 
sysplex sharing is no problem.  It's not a "complete" sysplex, but I think 
people tend to refer to it as a "baby" sysplex.  GRS ring is not a problem, 
and you get a lot of the benefits of sysplex (shared consoles, command 
shipping, etc.); you just don't have the CF to handle it, so instead you use 
the FICON cards as CTC's.

If you don't have the $$s to spend on a CF, buying a couple FICON cards from a 
reseller (or even eBay) is a very inexpensive way to achieve GRS support (and 
thus PDS/e sharing).  

Brian



Re: To share or not to share DASD

2022-11-28 Thread Rob Schramm
I vote with Brian.

Rob




Re: To share or not to share DASD

2022-11-28 Thread Tom Marchant
PDSE sharing is only supported within a Sysplex. XCF signalling is required to 
maintain 
integrity. When you say that PDSEs can be fully shared, I think you are 
referring to 
Extended Sharing, not Normal Sharing. Following is from
https://www.ibm.com/docs/en/zos/2.4.0?topic=neps-specifying-extended-pdse-sharing-in-multiple-system-environment
but it has been this way for a very long time.


In a multiple-system environment, the system programmer uses 
PDSESHARING(EXTENDED) 
to share PDSEs at the member level. A system programmer must specify 
PDSESHARING(EXTENDED) in the IGDSMSxx member in the SYS1.PARMLIB on each 
system in the sysplex. Every system that is sharing a PDSE must be a member of 
the 
sysplex and have the sysplex coupling facility (XCF) active. 


People have violated these rules and gotten away with it, but that does not 
mean 
that it is safe to do so.
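The parmlib side is a single keyword, but it must be coded the same on every 
system in the sysplex. A sketch of the relevant IGDSMSxx fragment (the 
ACDS/COMMDS dataset names are invented):

   SMS  ACDS(SYS1.SMS.ACDS)
        COMMDS(SYS1.SMS.COMMDS)
        PDSESHARING(EXTENDED)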

-- 
Tom Marchant

On Fri, 25 Nov 2022 20:00:42 -0600, Brian Westerman wrote:

>The GRS ring (not star) for a small site with 3 LPARs should have no problem
>with any slowdowns, and it will allow you to run fully shared PDS/e,
>catalogs, etc.



Re: To share or not to share DASD

2022-11-28 Thread Allan Staller

I would share devices at the physical level (all devices can be brought online 
to any LPAR).
I would not share those devices at the logical level (all devices are offline 
where not normally used).

HTH,


Re: To share or not to share DASD

2022-11-26 Thread Gord Neill
Thx to all contributors on this topic, great food for thought!



Re: To share or not to share DASD

2022-11-25 Thread Brian Westerman
I think that you are missing the fact that you can VERY easily add GRS ring via 
a couple of really inexpensive FICON cards.  You may already even have them 
just sitting there unused on your processor.  In any case, you can buy them 
even on eBay now for next to nothing.  

The GRS ring (not star) for a small site with 3 LPARs should have no problem 
with any slowdowns, and it will allow you to run fully shared PDS/e, catalogs, 
etc.  

I support several sites that I converted to GRS ring (some from MIM, some from 
nothing at all) on everything down to a really small z13s (~80 mip) and there 
was no decrease in performance, and in fact, things got better since now GRS 
was handling things instead of reserves.  

In any case, NOT sharing DASD on the same processor complex is quite silly and 
makes life much harder for the users and for you to support it.

It's really simple to set up, and GRS is free, so your only cost is the FICON 
cards.  (I think the last place I upgraded ended up paying $500 each and got 3 
even though we only needed two).

If you need help setting it up, feel free to contact me and I'll help you 
through it.

Brian



Re: To share or not to share DASD

2022-11-25 Thread Leonard D Woren

Joel C. Ewing wrote on 11/24/2022 9:38 PM:

[...]

If volumes are SMS, all datasets must be cataloged and the 
associated catalogs must be accessed from any system that accesses 
those datasets.   If the systems are not in a relationship that 
enables proper catalog sharing, access and possible modification of 
the catalog from multiple systems causes the cached versions of 
catalog data to become out of sync with actual content on the drive 
when the catalog is altered from a different system, and there is a 
high probability the catalog will become corrupted on all systems.


Let me sharpen that last point.  If the catalog is being updated from 
more than one system not in a sysplex, it's not a "high probability", 
it's basically a certainty.  The really fun part is that you may not 
discover the catalog corruption for days, weeks, or even months.



/Leonard




Re: Subject: To share or not to share DASD

2022-11-25 Thread Ed Jaffe

On 11/25/2022 6:22 AM, Don Parrott wrote:

We created a DASD-only SYSPLEX about 3(?) years ago on a z14, primarily to 
facilitate PDSE sharing between the PROD and DEVL LPARs.  I would have rather 
had a coupling facility for a full sysplex, but we did not have one.  There 
was a ton of work to set up the CTC pairs between the three LPARs, the final 
one being our maintenance LPAR.  GRS will have to be reviewed carefully.  We 
have had zero issues since implementation.  Feel free to write me directly 
for specific questions.  d...@clemson.edu


No doubt setting up CTC connections is more work than simply setting up 
a CF messaging structure, but they're good to have in case you want to 
use/test GRS ring, push VTAM or XCF traffic through a dedicated 
resource, connect to non-z/OS LPARs such as z/VM or z/VSE without going 
through an OSA, etc.


Years ago, Skip Robinson gave a nice explanation (perhaps at SHARE? 
perhaps here on IBM-MAIN?) on the naming convention they used at SCE to 
keep it all straight. The upper nybble indicates whether a guzzinta or a 
guzzoutta, the next two nybbles represent the LPAR number, and the last 
nybble is the device number 0-F. Having this naming convention makes it 
trivially easy to know which LPARs the control units and devices should 
be connected to since the LPAR number is part of the device number.
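To make the convention concrete, here is one device number decoded -- a toy 
illustration only, not SCE's actual tooling:

   # device 4023 under the convention: guzzinta (4), LPAR 02, device 3
   dev=4023
   echo "direction=${dev:0:1} lpar=${dev:1:2} unit=${dev:3:1}"
   # -> direction=4 lpar=02 unit=3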


We have only six LPARs (1, 2, 3, 4, 5 & 8), so it doesn't look too bad. 
Clearly, if you have 85 LPARs on the box it will take longer to define 
them, but still be just as easy to get it right.


--Device--               ------- Control Unit Numbers -------
Number,Ct   Type CSS OS  1--- 2--- 3--- 4--- 5--- 6--- 7--- 8---
4010,16     FCTC  1   2  4010
4020,16     FCTC  1   2  4020
4030,16     FCTC  1   2  4030
4040,16     FCTC  1   2  4040
4050,16     FCTC  1   2  4050
4080,16     FCTC  1   2  4080
5010,16     FCTC  1   2  5010
5020,16     FCTC  1   2  5020
5030,16     FCTC  1   2  5030
5040,16     FCTC  1   2  5040
5050,16     FCTC  1   2  5050
5080,16     FCTC  1   2  5080

The only real drawback I can see to using CTCs like this is the "chewing 
up" of device numbers in environments with a shortage of them.



--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/






Re: To share or not to share DASD

2022-11-25 Thread Seymour J Metz
That's why off-site backups, outside the range of regional disasters, are so 
important. Data centers have been destroyed by earthquakes, industrial 
accidents and weather in the past, and RAID offers no protection.

Hot backup and its cousins are no longer arcane topics.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Colin Paice [colinpai...@gmail.com]
Sent: Friday, November 25, 2022 9:34 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

I had to explain to some people that RAID disks do not give 100%
protection.  If you delete a file or corrupt a file, then the RAID will
*reliably* make the change to delete or corrupt  all copies of the data.
We used z/VM and ran z/OS on top of it.  We could share volumes read only
and so people could not change them.
Colin



Re: To share or not to share DASD

2022-11-25 Thread Colin Paice
I had to explain to some people that RAID disks do not give 100%
protection.  If you delete a file or corrupt a file, then the RAID will
*reliably* make the change to delete or corrupt  all copies of the data.
We used z/VM and ran z/OS on top of it.  We could share volumes read only
and so people could not change them.
Colin



Subject: To share or not to share DASD

2022-11-25 Thread Don Parrott
Gord,



We created a DASD-only SYSPLEX about 3(?) years ago on a z14, primarily to 
facilitate PDSE sharing between the PROD and DEVL LPARs.  I would have rather 
had a coupling facility for a full sysplex, but we did not have one.  There 
was a ton of work to set up the CTC pairs between the three LPARs, the final 
one being our maintenance LPAR.  GRS will have to be reviewed carefully.  We 
have had zero issues since implementation.  Feel free to write me directly 
for specific questions.  d...@clemson.edu



Don




Don Parrott

zSeries Server Technical Support Team
Clemson Computing and Information Technology
Clemson University





Re: To share or not to share DASD

2022-11-25 Thread Seymour J Metz
Best practice is to not share what you can't protect. MIM, GRS ring, etc., can 
help, but sharing of PDSE or Unix files can lead to data corruption even with 
serialization, and sharing between security domains might not only lead to 
compromising data but to legal issues, both civil and criminal. If you're a 
financial or medical facility, involve the legal staff in any decision of 
sharing data between sysplexes.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Gord Neill [02ff5f18e15f-dmarc-requ...@listserv.ua.edu]
Sent: Thursday, November 24, 2022 3:54 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate 
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is to 
share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
developers/sysprogs can get access to current datasets, but in order to do 
that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
don't know of any other serialization products, and since this is not a Sysplex 
environment, they can't use GRS Star.  I suggested the idea of no GRS, keeping 
most DASD volumes isolated to each LPAR, with a "shared string" available to 
all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems






Re: To share or not to share DASD

2022-11-25 Thread Seymour J Metz
I don't even trust myself; belt and suspender policies are highly useful in a 
development environment. The key is to deploy safeguards that don't get 
underfoot. Have you never had to revert a change?

Auditors serve a useful purpose. Get rid of the bad ones, not all.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Farley, Peter [031df298a9da-dmarc-requ...@listserv.ua.edu]
Sent: Thursday, November 24, 2022 10:38 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

Not necessarily true in a software development environment where all members of 
the team need to share all their data everywhere.  "Zero trust" is anathema in 
a development environment.

If you don't trust me then fire me.  It's cleaner that way.

Shakespeare was *almost* right.  First get rid of all the auditors, *then* get 
rid of all the lawyers.

Peter

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Lennie Dymoke-Bradshaw
Sent: Thursday, November 24, 2022 5:24 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

If you were asking in a security context, I would advise against it in nearly 
all cases.
Auditors will not like that a system's data can be accessed without reference 
to the RACF (or ACF2, or TSS) system that is supposed to protect it.

Lennie Dymoke-Bradshaw

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Gord Neill
Sent: 24 November 2022 20:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate 
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is to 
share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
developers/sysprogs can get access to current datasets, but in order to do 
that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
don't know of any other serialization products, and since this is not a Sysplex 
environment, they can't use GRS Star.  I suggested the idea of no GRS, keeping 
most DASD volumes isolated to each LPAR, with a "shared string"
available to all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems


Re: To share or not to share DASD

2022-11-25 Thread kekronbekron
Additionally, there's the generic class DASDVOL that may be applicable/helpful.
Used to be $DASDI. in FACILITY... I think?

- KB


Re: To share or not to share DASD

2022-11-25 Thread Ituriel do Neto
Hi,

In the past, I used to work for a tiny shop with the same distribution you 
indicated: only three LPARs and no Sysplex, no GRS.

At that time, we chose to make all disks available to all LPARs, but there was 
a segregation of Production, Development, and Sysprog volumes done by VOLSER. 
I don't remember the details anymore, but shared disks were labeled as SHR*, 
Production and development disks as PRD* and DEV*, and of course SYSRES, Page, 
spool, etc...

At IPL time, a small program was executed, searching all volumes and issuing V 
OFFLINE to those that did not belong to the appropriate LPAR. This program used 
wildcard masks to select what should remain ONLINE.

And, of course, MVS commands were protected in RACF, so only authorized 
userids could VARY a volume ONLINE.

It worked well for us, in this reality.


Best Regards

Ituriel do Nascimento Neto
z/OS System Programmer







Re: To share or not to share DASD

2022-11-24 Thread Joel C. Ewing
But it's not just a case of whether you trust that they will not intentionally 
damage something, but the ease of accidentally causing integrity 
problems by not knowing when others have touched catalogs, volumes, or 
datasets on DASD that is physically shared but not known to be shared by 
the Operating System.  If many people are involved, the coordination 
procedures involved to prevent damage, assuming such procedures are even 
feasible, are a disaster waiting to happen.


 If volumes are SMS, all datasets must be cataloged and the associated 
catalogs must be accessed from any system that accesses those 
datasets.   If the systems are not in a relationship that enables proper 
catalog sharing, access and possible modification of the catalog from 
multiple systems causes the cached versions of catalog data to become 
out of sync with actual content on the drive when the catalog is altered 
from a different system, and there is a high probability the catalog 
will become corrupted on all systems.


Auditors are justified in being concerned whether independent RACF 
databases on multiple systems will always be in sync to properly protect 
production datasets from unintentional access or unauthorized access if 
test LPARs share access to production volumes.  There should always be 
multiple barriers to doing something bad because accidents happen -- 
like forgetting to change a production dataset name in what was intended 
to be test JCL.


There are just too many bad things that can happen if you try to share 
things that are only designed for sharing within a sysplex. The only 
relatively safe way to do this across independent LPARs is 
non-concurrently:   have a set of volumes and a catalog for HLQ's of 
just the datasets on those volumes that is also located on one of those 
volumes, and only have those volumes on-line to one system at a time and 
close, and deallocate all datasets and the catalog on those volumes 
before taking them offline to move them to a different system.


A much simpler and safer solution is to not share DASD volumes across 
LPARs not in the same sysplex, to maintain a unique copy of datasets on 
systems where they are needed, and to use a high-speed communication 
link between the LPARs to transmit datasets from one system to another 
when there is a need to resync those datasets from a production LPAR.


Joel C Ewing


On 11/24/22 21:38, Farley, Peter wrote:


Not necessarily true in a software development environment where all members of the team 
need to share all their data everywhere.  "Zero trust" is anathema in a 
development environment.

If you don't trust me then fire me.  It's cleaner that way.

Shakespeare was *almost* right.  First get rid of all the auditors, *then* get 
rid of all the lawyers.

Peter



--
Joel C. Ewing



Re: To share or not to share DASD

2022-11-24 Thread Farley, Peter
Not necessarily true in a software development environment where all members of 
the team need to share all their data everywhere.  "Zero trust" is anathema in 
a development environment.

If you don't trust me then fire me.  It's cleaner that way.

Shakespeare was *almost* right.  First get rid of all the auditors, *then* get 
rid of all the lawyers.

Peter

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Lennie Dymoke-Bradshaw
Sent: Thursday, November 24, 2022 5:24 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

If you were asking in a security context, I would advise against it in nearly 
all cases.
Auditors will not like that a system's data can be accessed without reference 
to the RACF (or ACF2, or TSS) system that is supposed to protect it. 

Lennie Dymoke-Bradshaw

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Gord Neill
Sent: 24 November 2022 20:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate 
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is to 
share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
developers/sysprogs can get access to current datasets, but in order to do 
that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
don't know of any other serialization products, and since this is not a Sysplex 
environment, they can't use GRS Star.  I suggested the idea of no GRS, keeping 
most DASD volumes isolated to each LPAR, with a "shared string"
available to all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems


Re: To share or not to share DASD

2022-11-24 Thread Lennie Dymoke-Bradshaw
If you were asking in a security context, I would advise against it in
nearly all cases.
Auditors will not like that a system's data can be accessed without
reference to the RACF (or ACF2, or TSS) system that is supposed to protect
it. 

Lennie Dymoke-Bradshaw

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of
Gord Neill
Sent: 24 November 2022 20:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is
to share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their
developers/sysprogs can get access to current datasets, but in order to do
that, they'll need to use GRS Ring or MIM with the associated overhead.  I
don't know of any other serialization products, and since this is not a
Sysplex environment, they can't use GRS Star.  I suggested the idea of no
GRS, keeping most DASD volumes isolated to each LPAR, with a "shared string"
available to all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems






Re: To share or not to share DASD

2022-11-24 Thread Gibney, Dave
Different RACF databases and fully shared DASD are a recipe for security 
problems and inconsistencies; I suppose RRSF could help.  GRS ring is 
reported to be abysmal with more than 2 nodes.
Same issues in trying to keep the catalogs in sync, which is required for 
SMS to be reliable, as well as trying to keep the SMS xCDSes (and the 
DFHSM CDSes) in sync.

I had 4 LPARs, all monoplex, and perhaps a dozen carefully shared 
volumes, including the SYSRES.  I did have separate Unix System Services 
filesystems for each.
All volumes were potentially shareable, but at IPL I varied most of them 
offline on the 3 LPARs they were not a part of.
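
For reference, one hedged way that offline-at-IPL posture gets wired up 
(member suffix and device range invented) is a VARY in COMMNDxx, so each 
LPAR drops the other systems' volumes as soon as it comes up:

   COM='V (0A90-0A9F),OFFLINE'

The same default can also be set in the HCD device definition 
(OFFLINE=YES); either way the volumes stay shareable by design but 
offline by default.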

I had separate SMS pools for the application data in each LPAR.
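
If the SMS configuration is cloned or shared across the LPARs, per-system 
pools like that might be carved out in the storage group ACS routine 
along these lines (system and group names invented):

   PROC STORGRP
     SELECT (&SYSNAME)
       WHEN ('LPARA')  SET &STORGRP = 'SGAPPLA'
       WHEN ('LPARB')  SET &STORGRP = 'SGAPPLB'
       WHEN ('LPARC')  SET &STORGRP = 'SGAPPLC'
       OTHERWISE       SET &STORGRP = 'SGCOMN'
     END
   END

New allocations then land in the pool owned by the allocating system, so 
application data never strays onto another LPAR's volumes.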

> -Original Message-
> From: IBM Mainframe Discussion List  On
> Behalf Of Gord Neill
> Sent: Thursday, November 24, 2022 1:06 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
> 
> Dave,
> Each LPAR has its own RACF and Catalogs, and they are using SMS.  This shop
> is currently running z/OS 1.9 on very old hardware, in the process of
> upgrading to current H/W and S/W.
> 
> -Original Message-
> From: IBM Mainframe Discussion List  On
> Behalf Of Gibney, Dave
> Sent: November 24, 2022 4:02 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
> 
> You can't share PDSE in such an environment.  You can "get away" with it
> by updating only from one LPAR, and rarely, while reading in the others.
> 
> Multiple RACF databases?  Are the Catalogs the same and shared between all
> 3 LPARS?
> 
> Is the site using SMS?
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List  On
> > Behalf Of Gord Neill
> > Sent: Thursday, November 24, 2022 12:55 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: To share or not to share DASD
> >
> > G'day all,
> > I've been having discussions with a small shop (single mainframe, 3
> > separate LPARs, no Sysplex) regarding best practices for DASD sharing.
> > Their view is to share all DASD volumes across their 3 LPARs
> > (Prod/Dev/Test) so their developers/sysprogs can get access to current
> > datasets, but in order to do that, they'll need to use GRS Ring or MIM
> > with the associated overhead.  I don't know of any other serialization
> > products, and since this is not a Sysplex environment, they can't use
> > GRS Star.  I suggested the idea of no GRS, keeping most DASD volumes
> isolated to each LPAR, with a "shared string"
> > available to all LPARs for copying datasets, but it was not well received.
> >
> > Just curious as to how other shops are handling this.  TIA!
> >
> >
> > Gord Neill | Senior I/T Consultant | GlassHouse Systems
> >
> >
> >
> >


Re: To share or not to share DASD

2022-11-24 Thread Gord Neill
Dave,
Each LPAR has its own RACF and Catalogs, and they are using SMS.  This shop is 
currently running z/OS 1.9 on very old hardware, in the process of upgrading to 
current H/W and S/W.

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Gibney, Dave
Sent: November 24, 2022 4:02 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

You can't share PDSE in such an environment.  You can "get away" with it 
by updating only from one LPAR, and rarely, while reading in the others.

Multiple RACF databases?  Are the Catalogs the same and shared between all 3 
LPARS?

Is the site using SMS?

> -Original Message-
> From: IBM Mainframe Discussion List  On 
> Behalf Of Gord Neill
> Sent: Thursday, November 24, 2022 12:55 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: To share or not to share DASD
> 
> G'day all,
> I've been having discussions with a small shop (single mainframe, 3 
> separate LPARs, no Sysplex) regarding best practices for DASD sharing.  
> Their view is to share all DASD volumes across their 3 LPARs 
> (Prod/Dev/Test) so their developers/sysprogs can get access to current 
> datasets, but in order to do that, they'll need to use GRS Ring or MIM 
> with the associated overhead.  I don't know of any other serialization 
> products, and since this is not a Sysplex environment, they can't use 
> GRS Star.  I suggested the idea of no GRS, keeping most DASD volumes isolated 
> to each LPAR, with a "shared string"
> available to all LPARs for copying datasets, but it was not well received.
> 
> Just curious as to how other shops are handling this.  TIA!
> 
> 
> Gord Neill | Senior I/T Consultant | GlassHouse Systems
> 
> 
> 
> 


Re: To share or not to share DASD

2022-11-24 Thread Gibney, Dave
You can't share PDSE in such an environment.  You can "get away" with it 
by updating only from one LPAR, and rarely, while reading in the others.
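
For context: the sharing scope for PDSEs is governed by PDSESHARING in 
IGDSMSxx, and even the wider setting only protects systems in the same 
sysplex (sketch; CDS names invented):

   SMS  ACDS(SYS1.SMS.ACDS)  COMMDS(SYS1.SMS.COMMDS)
   PDSESHARING(EXTENDED)

Outside the sysplex nothing serializes the PDSE directory, hence the 
update-rarely-from-one-side discipline above.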

Multiple RACF databases?  Are the Catalogs the same and shared between all 3 
LPARS?

Is the site using SMS?

> -Original Message-
> From: IBM Mainframe Discussion List  On
> Behalf Of Gord Neill
> Sent: Thursday, November 24, 2022 12:55 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: To share or not to share DASD
> 
> G'day all,
> I've been having discussions with a small shop (single mainframe, 3 separate
> LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is 
> to
> share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their
> developers/sysprogs can get access to current datasets, but in order to do
> that, they'll need to use GRS Ring or MIM with the associated overhead.  I
> don't know of any other serialization products, and since this is not a 
> Sysplex
> environment, they can't use GRS Star.  I suggested the idea of no GRS,
> keeping most DASD volumes isolated to each LPAR, with a "shared string"
> available to all LPARs for copying datasets, but it was not well received.
> 
> Just curious as to how other shops are handling this.  TIA!
> 
> 
> Gord Neill | Senior I/T Consultant | GlassHouse Systems
> 
> 
> 
> 


To share or not to share DASD

2022-11-24 Thread Gord Neill
G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate 
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is to 
share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
developers/sysprogs can get access to current datasets, but in order to do 
that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
don't know of any other serialization products, and since this is not a Sysplex 
environment, they can't use GRS Star.  I suggested the idea of no GRS, keeping 
most DASD volumes isolated to each LPAR, with a "shared string" available to 
all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems
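
For reference, the serialization choice at the heart of this thread is an 
IEASYSxx setting; a hedged sketch of the two non-Star options (annotations 
are editorial, not parmlib syntax):

   GRS=NONE       no cross-system serialization at all; only safe when
                  nothing is shared R/W between the LPARs
   GRS=TRYJOIN    GRS ring: join an existing ring or start one; every
                  global ENQ pays the ring-latency tax

GRS=STAR needs a coupling facility and a sysplex, which is exactly what 
this shop does not have.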



