How to calculate average dasd response time from WLM Period ?

2022-11-25 Thread Boesel Guillaume
Hi,
Does anyone know how to calculate the average DASD response time from a WLM 
period's connect/disconnect/wait/IOSQ times?

At my shop we use SYSVIEW. In its WLM option we can see the I/O metrics, but I 
don't understand how the average response time is calculated.

For example, how is the 0.022968-second average connect time calculated? I 
thought it was simply the total connect time in microseconds divided by the 
total I/O count, but 3527936 / 19661 = 179.4382, which is a long way from 
22968 microseconds (0.022968 seconds)...

SYSVIEW, WLM option, for a particular WLM period:
Resource                          Value     Average
Total service units for period    82495
 CPU service units                77274
 SRB service units                 5221
Swap count                            6
I/O interrupt time             0.044067
Average swapped-in transactions       0
Total frames                          0
RCT time                       0.001301
Average active transactions           0
DASD I/O count                    19661
DASD I/O connect time          3.527936   0.022968
DASD I/O disconnect time       1.666304   0.010848
DASD I/O wait time             0.000896   0.06


In the RCAERESC structure of the IWMWRCAA mapping, it is mentioned that the 
total DASD I/O count "can be used with fields RCAEIOCT, RCAEIODT, RCAEIOWT, 
RCAEIOST to determine average DASD response time for the period", but without 
further details.
 
 
https://www.ibm.com/docs/en/zos/2.5.0?topic=SSLTBW_2.5.0/com.ibm.zos.v2r5.iead300/IWMWRCAA-map.html
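
If that comment means what it appears to, the average response time is just 
the sum of the four component times divided by the I/O count. A minimal Rexx 
sketch of that reading, assuming the four fields are period totals in seconds 
(as the SYSVIEW display above suggests) and plugging in the values shown 
(IOSQ time is not displayed there, so it is assumed zero):

/* Rexx -- sketch: average DASD response time per I/O, reading the
   RCAERESC comment as (RCAEIOCT+RCAEIODT+RCAEIOWT+RCAEIOST)/count.
   Values are the ones from the SYSVIEW display above.             */
Numeric Digits 12
ioCount = 19661            /* total DASD I/O count                 */
connect = 3.527936         /* RCAEIOCT - total connect time (sec)  */
disconn = 1.666304         /* RCAEIODT - total disconnect time     */
iowait  = 0.000896         /* RCAEIOWT - total wait time           */
iosq    = 0                /* RCAEIOST - total IOSQ time (assumed) */
resp = (connect + disconn + iowait + iosq) / ioCount
Say 'Average DASD response time:' resp 'seconds per I/O'

That works out to about 0.000264 seconds per I/O, which still does not match 
SYSVIEW's 0.022968 "average", so whatever SYSVIEW divides by, it does not 
appear to be the I/O count.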

Thanks for your help in understanding this calculation.

Regards
Guillaume

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: End of several eras

2022-11-25 Thread Michael Stein
On Tue, Nov 22, 2022 at 01:40:47PM +, Seymour J Metz wrote:
> My experience was that I had to read the fiche for things that should
> be in the PLM and I had to read the PLM for things that should have been
> in the macros or services manual.

I read many a fiche when writing UCLA/IPC.

https://cbttape.org/uclamail/uclamail.htm

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread Brian Westerman
I think that you are missing the fact that you can VERY easily add GRS ring via 
a couple of really inexpensive FICON cards.  You may already even have them 
just sitting there unused on your processor.  In any case, you can buy them 
even on eBay now for next to nothing.  

The GRS ring (not star) for a small site with 3 LPARs should have no problem 
with any slowdowns, and it will allow you to run fully shared PDS/e, catalogs, 
etc.  

I support several sites that I converted to GRS ring (some from MIM, some from 
nothing at all) on everything down to a really small z13s (~80 mip) and there 
was no decrease in performance, and in fact, things got better since now GRS 
was handling things instead of reserves.  

In any case, NOT sharing DASD on the same processor complex is quite silly and 
makes life much harder for the users and for you to support it.

It's really simple to set up, and GRS is free, so your only cost is the FICON 
cards.  (I think the last place I upgraded ended up paying $500 each and got 
three even though we only needed two.)

If you need help setting it up, feel free to contact me and I'll help you 
through it.

Brian

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track

2022-11-25 Thread Dana Mitchell
On Fri, 25 Nov 2022 12:12:36 -0800, Leonard D Woren  
wrote:
>
>I've long wondered why.  And in the 1980s (I think), IBM actually had 
>a disk for S/370 which was FBA, but only supported by DOS/VS.
>

3310 and 3370.  Also supported by VM/SP.  (Certain models of 3370 could also 
attach to System/38)

3375 was the CKD flavor of 3370.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread Leonard D Woren

Joel C. Ewing wrote on 11/24/2022 9:38 PM:

[...]

If volumes are SMS, all datasets must be cataloged and the 
associated catalogs must be accessed from any system that accesses 
those datasets.   If the systems are not in a relationship that 
enables proper catalog sharing, access and possible modification of 
the catalog from multiple systems causes the cached versions of 
catalog data to become out of sync with actual content on the drive 
when the catalog is altered from a different system, and there is a 
high probability the catalog will become corrupted on all systems.


Let me sharpen that last point.  If the catalog is being updated from 
more than one system not in a sysplex, it's not a "high probability", 
it's basically a certainty.  The really fun part is that you may not 
discover the catalog corruption for days, weeks, or even months.



/Leonard


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track - reason for the question

2022-11-25 Thread Leonard D Woren

Paul Schuster wrote on 11/24/2022 11:13 PM:

TRKCALC knows everything.


Second best, I dug up this exec from the 1990s that should get it right:

/* Rexx */
Parse Upper Arg kl dl debug .
If dl = "" Then Do
   Say "Usage: BLK3390 keylen datalen [DEBUG]"
   Exit 2
   End
c = 10                              /* count field: 10 cells       */
If kl = 0 Then k = 0
  Else Do                           /* key field, if any           */
   kn = DivRU( kl+6, 232 )          /* key segments                */
   k = 9 + DivRU( kl+6*kn+6, 34 )   /* key cells                   */
   End
dn = DivRU( dl+6, 232 )             /* data segments               */
d = 9 + DivRU( dl+6*dn+6, 34 )      /* data cells                  */
blks = 1729%(c+k+d)                 /* 1729 cells per 3390 track   */
If debug = "DEBUG" Then Say 'C' c', K' k', D' d
Say blks "blocks with keylen" kl "and datalen" dl "fit on a 3390."
Exit

/**/
DivRU:  /* divide and round UP */
Return (Arg(1)+Arg(2)-1)%Arg(2)


/Leonard


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track

2022-11-25 Thread Leonard D Woren

Seymour J Metz wrote on 11/25/2022 6:23 AM:

You have 80 terabytes?


At least.  You can get there pretty fast with 10 and 12 TB disks, in 
pairs for backup purposes.  But the pressure is on to switch to Linux 
because I'm just about out of drive letters on Windows, and I already 
have some disks with no drive letters using Windows mount points.



One reason is IBM's refusal to accept and implement the SHARE requirement to 
support FBA in MVS for both access methods and IPL.


I've long wondered why.  And in the 1980s (I think), IBM actually had 
a disk for S/370 which was FBA, but only supported by DOS/VS.



I'd also ask why IBM didn't provide ACB/RPL support for all access methods, 
with compatibility and reverse compatibility interfaces as necessary, such as 
were present in OS/VS1 and VSE.


I'd also like to know why.  If they had done that, a lot of changes 
(such as the DCBE kludge) to non-VSAM access methods for 31 bit 
support would have been unnecessary because ACBs are 31 bit clean.



It's a shame that IBM dropped TSS; moving it to FBA would have been a piece of 
cake.


The grand irony of course is that VSAM and its cousins (Linear, PAGE, 
PDSE, ZFS) use FBA layered on top of CKD emulated on FBA. Surely I'm 
not the only one who thinks this is nuts?


/Leonard





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track

2022-11-25 Thread Alan Altmark
As someone who has delved deeply into this subject for different reasons, and 
without "inside" knowledge, here's what I have learned or intuited:

1. Logical volumes are entirely self-contained (think of them as files), 
allocated from the arrays with all required space needed to hold metadata, 
including the count and key fields associated with every track allocated to the 
volume. 

2. The CK fields appear to be located within the logical track construct.  This 
does not preclude the existence of other constructs to make key and ID searches 
go faster.  I speculate that such constructs do exist.  Why?  Because that's 
what I would do.

3. For thinly-provisioned volumes, space is allocated on WRITE-type operations 
and is released on ERASE-type operations.  Space is managed in chunks whose 
size is implementation dependent, not defined by the architecture, though for 
DS8K I think it's 1113 cylinders at a time (3390 Mod 1 size).

4. While the architecture allows R0 data to be more than 8 bytes, it's not a 
good idea.  There are too many things that "know" R0 is 8 bytes.

5. The space calculation capacity factors and the algorithm to use are returned 
by the device via Read Device Characteristics (RDC).  You have to go to the 
book to get the algorithm details.

6. Unformatted track capacity is a useless number to sysprogs.  

7. The best utilization of the device is when you have just one user record on 
the track (R1), and that record is 56664 bytes (see the cross-check after the 
sample output below).

8. READ FULL TRACK and WRITE FULL TRACK operations are very specialized 
operations typically used only by programs that are archiving/restoring data.

9. "Nobody cares about this stuff, Alan.  Don't you have something more 
important you should be doing?"  

I have a program that gathers capacity data.  Here's output for a 100-cylinder 
minidisk.  The only thing that changes based on cylinder count is the total 
capacity value:
Capacity formula: 2   
Capacity factors F1-F6:   34  19  9  6  116  6 
HA + R0 length: 1428
Track length:  58786
Max R0 length: 57326
Max R1 length: 56664   (architected)
Unformatted device capacity: 84.1 MB  
Formatted device capacity (4K blocks): 70.3 MB  
Formatted device capacity (56664 block):  81.1 MB  
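
As a cross-check on item 7: the cell arithmetic from the BLK3390 exec posted 
elsewhere in this digest (a sketch only, assuming its 34-byte-cell / 
232-byte-segment constants and 1729 cells per track apply here) confirms that 
a single keyless 56664-byte R1 exactly fills a track:

/* Rexx -- sketch: one keyless 56664-byte R1 fills a 3390 track.
   Constants (34, 232, 1729, count = 10 cells) come from the
   BLK3390 exec earlier in this digest, not from the RDC formula. */
dl = 56664
dn = ((dl+6)+232-1) % 232            /* data segments: 245        */
d  = 9 + ((dl+6*dn+6)+34-1) % 34     /* data cells:    1719       */
Say 1729 % (10 + d) 'block(s) of' dl 'bytes fit on the track'  /* -> 1 */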

Regards,
Alan Altmark
IBM z/VM Engineer and Consultant

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track

2022-11-25 Thread Ed Jaffe

On 11/23/2022 5:19 PM, Leonard D Woren wrote:
It's time to use this brainpower for better things than optimizing the 
arrangement of angels on a pinhead.  Just throw more hardware at it 
and move on.


We have standardized on a mixture of Mod-27 and Mod-216 volumes for all 
of our SMS-managed data.


It's pretty much worry free compared to the micro-management we used to 
do with Mod-3s and Mod-9s.


.-- VTOC Summary Information ---.
| MVS60  . : MVSEV0 |
| Command ===>  |
| |
| Unit . . : 3390   Free Space  |
| |
|  VTOC Data    Total  Tracks Cyls  |
|  Tracks  . :   540    Size  . . :   913,978 60,883  |
|  %Used . . : 6    Largest . :   392,055 26,137  |
|  Free DSCBS:    25,614 Free    |
|   Extents . : 248 |
| |
|  Volume Data  Track Managed  Tracks Cyls  |
|  Tracks . : 3,606,120 Size  . . :   471,718 31,399  |
|  %Used  . :    74 Largest . :   392,055 26,137  |
|  Trks/Cyls:    15 Free    |
|  F1=Help    F2=Split   F3=Exit    F9=Swap F12=Cancel    |
'-- VTOC Summary Information ---'

--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Subject: To share or not to share DASD

2022-11-25 Thread Ed Jaffe

On 11/25/2022 6:22 AM, Don Parrott wrote:

We created a DASD only SYSPLEX about 3(?) years ago on a z/14 primarily to facilitate 
PDSE sharing between the PROD and DEVL lpars.   I would have rather had a coupling 
facility for a full sysplex, but we did not have one.There was a ton of work to 
setup the CTC pairs between the three lpars, the final one being our maintenance 
lpar.   GRS will have to be reviewed carefully.We have had zero issues since 
implementation.   Feel free to write me directly for specific questions.   
d...@clemson.edu


No doubt setting up CTC connections is more work than simply setting up 
a CF messaging structure, but they're good to have in case you want to 
use/test GRS ring, push VTAM or XCF traffic through a dedicated 
resource, connect to non-z/OS LPARs such as z/VM or z/VSE without going 
through an OSA, etc.


Years ago, Skip Robinson gave a nice explanation (perhaps at SHARE? 
perhaps here on IBM-MAIN?) of the naming convention they used at SCE to 
keep it all straight. The upper nybble indicates whether it's a guzzinta 
or a guzzoutta, the next two nybbles are the LPAR number, and the last 
nybble is the device number 0-F. This naming convention makes it 
trivially easy to know which LPARs the control units and devices should 
be connected to, since the LPAR number is part of the device number.


We have only six LPARs (1, 2, 3, 4, 5 & 8), so it doesn't look too bad. 
Clearly, if you have 85 LPARs on the box it will take longer to define 
them, but it will still be just as easy to get right.


--Device-- --#--- Control Unit Numbers + 
Number   Type +    CSS OS 1--- 2--- 3--- 4--- 5--- 6--- 7--- 8---
4010,16  FCTC  1   2  4010       
4020,16  FCTC  1   2  4020       
4030,16  FCTC  1   2  4030       
4040,16  FCTC  1   2  4040       
4050,16  FCTC  1   2  4050       
4080,16  FCTC  1   2  4080       
5010,16  FCTC  1   2  5010       
5020,16  FCTC  1   2  5020       
5030,16  FCTC  1   2  5030       
5040,16  FCTC  1   2  5040       
5050,16  FCTC  1   2  5050       
5080,16  FCTC  1   2  5080       

The only real drawback I can see to using CTCs like this is the "chewing 
up" of device numbers in environments with a shortage of them.



--
Phoenix Software International
Edward E. Jaffe
831 Parkview Drive North
El Segundo, CA 90245
https://www.phoenixsoftware.com/




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread Seymour J Metz
That's why off-site backups, outside the range of regional disasters, are so 
important. Data centers have been destroyed by earthquakes, industrial 
accidents and weather in the past, and RAID offers no protection.

Hot backup and its cousins are no longer arcane topics.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Colin Paice [colinpai...@gmail.com]
Sent: Friday, November 25, 2022 9:34 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

I had to explain to some people that RAID disks do not give 100%
protection.  If you delete a file or corrupt a file, then the RAID will
*reliably* make the change to delete or corrupt  all copies of the data.
We used z/VM and ran z/OS on top of it.  We could share volumes read only
and so people could not change them.
Colin

On Fri, 25 Nov 2022 at 13:45, Seymour J Metz  wrote:

> I don't even trust myself; belt and suspender policies are highly useful
> in a development environment. The key is to deploy safeguards that don't
> get underfoot. Have you never had to revert a change?
>
> Auditors serve a useful purpose. Get rid of the bad ones, not all.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
> 
> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf
> of Farley, Peter [031df298a9da-dmarc-requ...@listserv.ua.edu]
> Sent: Thursday, November 24, 2022 10:38 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
>
> Not necessarily true in a software development environment where all
> members of the team need to share all their data everywhere.  "Zero trust"
> is anathema in a development environment.
>
> If you don't trust me then fire me.  It's cleaner that way.
>
> Shakespeare was *almost* right.  First get rid of all the auditors, *then*
> get rid of all the lawyers.
>
> Peter
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf
> Of Lennie Dymoke-Bradshaw
> Sent: Thursday, November 24, 2022 5:24 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
>
> If you were asking in a security context, I would advise against it in
> nearly all cases.
> Auditors will not like that a system's data can be accessed without
> reference to the RACF (or ACF2, or TSS) system that is supposed to protect
> it.
>
> Lennie Dymoke-Bradshaw
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf
> Of Gord Neill
> Sent: 24 November 2022 20:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: To share or not to share DASD
>
> G'day all,
> I've been having discussions with a small shop (single mainframe, 3
> separate LPARs, no Sysplex) regarding best practices for DASD sharing.
> Their view is to share all DASD volumes across their 3 LPARs
> (Prod/Dev/Test) so their developers/sysprogs can get access to current
> datasets, but in order to do that, they'll need to use GRS Ring or MIM with
> the associated overhead.  I don't know of any other serialization products,
> and since this is not a Sysplex environment, they can't use GRS Star.  I
> suggested the idea of no GRS, keeping most DASD volumes isolated to each
> LPAR, with a "shared string"
> available to all LPARs for copying datasets, but it was not well received.
>
> Just curious as to how other shops are handling this.  TIA!
>
>
> Gord Neill | Senior I/T Consultant | GlassHouse Systems
> --
>
> This message and any attachments are intended only for the use of the
> addressee and may contain information that is privileged and confidential.
> If the reader of the message is not the intended recipient or an authorized
> representative of the intended recipient, you are hereby notified that any
> dissemination of this communication is strictly prohibited. If you have
> received this communication in error, please notify us immediately by
> e-mail and delete the message and any attachments from your system.
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage protection keys

2022-11-25 Thread Seymour J Metz
The MI is an object-oriented language at a higher level than Pascal P-code. I 
don't know how it compares to JVM byte code. The key feature is that you can 
only call a program object and that the objects are black boxes.

You might want to read up on capability based machines and browse the manuals 
at http://bitsavers.org/pdf/ibm/system38/ and 
http://bitsavers.org/pdf/ibm/as400/.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Paul Gorlinsky [p...@atsmigrations.com]
Sent: Friday, November 25, 2022 11:12 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Storage protection keys

Would you consider that the applications were more like P-Code ( pseudo-code ) 
... not that much different in principle to JAVA today ?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage protection keys

2022-11-25 Thread Paul Gorlinsky
Would you consider that the applications were more like P-Code ( pseudo-code ) 
... not that much different in principle to JAVA today ?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SET IBM-MAIN NODIGEST

2022-11-25 Thread Mike Schwab
Send to lists...@listserv.ua.edu, not to the group.

On Fri, Nov 25, 2022 at 9:24 AM Don Parrott  wrote:
>
> SET IBM-MAIN NODIGEST
>
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage protection keys

2022-11-25 Thread Seymour J Metz
AFAIK the replacement for i still uses the same paradigm as the S/38: a program 
requests compilation of MI code (with name changes since the S/38) and the 
compiled code is a black-box object; you can invoke it, but you can't change or 
even inspect it. Whether it compiles to POWER code or to some interpreted 
intermediate language is not visible to the application.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Paul Gorlinsky [p...@atsmigrations.com]
Sent: Friday, November 25, 2022 10:28 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Storage protection keys

Thanks for the info Dana,

"For the record,  there are no i-Server, or p-Servers any more.  IBM Power 
servers can run any combination of IBMi,  AIX and Linux LPARS concurrently."

This reduces the IBM "mainframe" product line to just two; Z and Power Servers. 
( or is it one in reality ? The magic of 
software/firmware/epi-code/microcode/macrocode ... )

Are the Power Servers implementing what was the AS400/OS400 functionality as an 
emulator?

Is the Linux used compiled for the Power instruction set?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage protection keys

2022-11-25 Thread Paul Gorlinsky
Thanks for the info Dana,

"For the record,  there are no i-Server, or p-Servers any more.  IBM Power 
servers can run any combination of IBMi,  AIX and Linux LPARS concurrently."

This reduces the IBM "mainframe" product line to just two; Z and Power Servers. 
( or is it one in reality ? The magic of 
software/firmware/epi-code/microcode/macrocode ... )

Are the Power Servers implementing what was the AS400/OS400 functionality as an 
emulator? 

Is the Linux used compiled for the Power instruction set?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


SET IBM-MAIN NODIGEST

2022-11-25 Thread Don Parrott
SET IBM-MAIN NODIGEST



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Storage protection keys

2022-11-25 Thread Dana Mitchell
On Thu, 24 Nov 2022 09:27:41 -0600, Paul Gorlinsky  
wrote:
>
>It would also make good business sense that IBM would share as much tech as 
>possible between the product lines of i-Server, p-Server and z-Server... in 
>order to save costs.
>

For the record, there are no i-Servers or p-Servers any more.  IBM Power 
servers can run any combination of IBM i, AIX and Linux LPARs concurrently.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread Colin Paice
I had to explain to some people that RAID disks do not give 100%
protection.  If you delete a file or corrupt a file, then the RAID will
*reliably* make the change to delete or corrupt  all copies of the data.
We used z/VM and ran z/OS on top of it.  We could share volumes read only
and so people could not change them.
Colin

On Fri, 25 Nov 2022 at 13:45, Seymour J Metz  wrote:

> I don't even trust myself; belt and suspender policies are highly useful
> in a development environment. The key is to deploy safeguards that don't
> get underfoot. Have you never had to revert a change?
>
> Auditors serve a useful purpose. Get rid of the bad ones, not all.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>
> 
> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf
> of Farley, Peter [031df298a9da-dmarc-requ...@listserv.ua.edu]
> Sent: Thursday, November 24, 2022 10:38 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
>
> Not necessarily true in a software development environment where all
> members of the team need to share all their data everywhere.  "Zero trust"
> is anathema in a development environment.
>
> If you don't trust me then fire me.  It's cleaner that way.
>
> Shakespeare was *almost* right.  First get rid of all the auditors, *then*
> get rid of all the lawyers.
>
> Peter
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf
> Of Lennie Dymoke-Bradshaw
> Sent: Thursday, November 24, 2022 5:24 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
>
> If you were asking in a security context, I would advise against it in
> nearly all cases.
> Auditors will not like that a system's data can be accessed without
> reference to the RACF (or ACF2, or TSS) system that is supposed to protect
> it.
>
> Lennie Dymoke-Bradshaw
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf
> Of Gord Neill
> Sent: 24 November 2022 20:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: To share or not to share DASD
>
> G'day all,
> I've been having discussions with a small shop (single mainframe, 3
> separate LPARs, no Sysplex) regarding best practices for DASD sharing.
> Their view is to share all DASD volumes across their 3 LPARs
> (Prod/Dev/Test) so their developers/sysprogs can get access to current
> datasets, but in order to do that, they'll need to use GRS Ring or MIM with
> the associated overhead.  I don't know of any other serialization products,
> and since this is not a Sysplex environment, they can't use GRS Star.  I
> suggested the idea of no GRS, keeping most DASD volumes isolated to each
> LPAR, with a "shared string"
> available to all LPARs for copying datasets, but it was not well received.
>
> Just curious as to how other shops are handling this.  TIA!
>
>
> Gord Neill | Senior I/T Consultant | GlassHouse Systems
> --
>
> This message and any attachments are intended only for the use of the
> addressee and may contain information that is privileged and confidential.
> If the reader of the message is not the intended recipient or an authorized
> representative of the intended recipient, you are hereby notified that any
> dissemination of this communication is strictly prohibited. If you have
> received this communication in error, please notify us immediately by
> e-mail and delete the message and any attachments from your system.
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track

2022-11-25 Thread Seymour J Metz
You have 80 terabytes?

One reason is IBM's refusal to accept and implement the SHARE requirement to 
support FBA in MVS for both access methods and IPL. However, IBM did add new DD 
parameters to allow the OS to do some of the calculations for the user. Even 
with that, however, there would still have been issues of guaranteed space and 
locality of reference.

I'd also ask why IBM didn't provide ACB/RPL support for all access methods, 
with compatibility and reverse compatibility interfaces as necessary, such as 
were present in OS/VS1 and VSE.

It's a shame that IBM dropped TSS; moving it to FBA would have been a piece of 
cake.

--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Leonard D Woren [ibm-main...@ldworen.net]
Sent: Wednesday, November 23, 2022 8:19 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Bytes in a 3390 track

True.

Yet... why is space still such a big deal on mainframes?  I have
almost as much disk space connected to my primary PC as 10,000 3390-9
would hold.

Seeing a 3390 with 150,000 free cylinders does take some getting used to.

It's time to use this brainpower for better things than optimizing the
arrangement of angels on a pinhead.  Just throw more hardware at it
and move on.  A 4 TB disk is the equivalent of almost 5,000,000 3390
cylinders, or 500 3390-9s.  How much DASD can you add in the disk
array for the price (salary) of one additional storage management person?

/Leonard


Seymour J Metz wrote on 11/23/2022 3:20 PM:
> With essentially all z DASD being cached these days, a lot of the efficiency 
> issues no longer apply, leaving space as more dominant.
>
>
> --
> Shmuel (Seymour J.) Metz
> http://mason.gmu.edu/~smetz3
>


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Subject: To share or not to share DASD

2022-11-25 Thread Don Parrott
Gord,



We created a DASD-only SYSPLEX about 3(?) years ago on a z14, primarily to 
facilitate PDSE sharing between the PROD and DEVL LPARs.  I would rather have 
had a coupling facility for a full sysplex, but we did not have one.  There 
was a ton of work to set up the CTC pairs between the three LPARs, the final 
one being our maintenance LPAR.  GRS will have to be reviewed carefully.  We 
have had zero issues since implementation.  Feel free to write me directly 
with specific questions.  d...@clemson.edu



Don



> -Original Message-
> From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> On
> Behalf Of Gord Neill
> Sent: Thursday, November 24, 2022 12:55 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: To share or not to share DASD
>
> [EXTERNAL EMAIL]
>
> G'day all,
> I've been having discussions with a small shop (single mainframe, 3 separate
> LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is
> to share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their
> developers/sysprogs can get access to current datasets, but in order to do
> that, they'll need to use GRS Ring or MIM with the associated overhead.  I
> don't know of any other serialization products, and since this is not a
> Sysplex environment, they can't use GRS Star.  I suggested the idea of no
> GRS, keeping most DASD volumes isolated to each LPAR, with a "shared string"
> available to all LPARs for copying datasets, but it was not well received.
>
> Just curious as to how other shops are handling this.  TIA!
>
> Gord Neill | Senior I/T Consultant | GlassHouse Systems


Don Parrott

zSeries Server Technical Support Team
Clemson Computing and Information Technology
Clemson University



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


SET IBM-MAIN NODIGEST

2022-11-25 Thread Don Parrott
SET IBM-MAIN NODIGEST




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Bytes in a 3390 track

2022-11-25 Thread Seymour J Metz
No, but the controller does. When the access method uses Locate and Define 
Extent, the controller pulls full tracks into the cache.

There's also a z/OS authorized service for accessing SCSI DASD via FCP; I 
believe that works by sector but, again, the controller does things behind the 
scenes to make it more efficient.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Mike Schwab [mike.a.sch...@gmail.com]
Sent: Wednesday, November 23, 2022 8:19 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Bytes in a 3390 track

Not sure when it changed (at least a decade ago), but I think the
mainframe reads and writes full tracks at a time these days.

On Wed, Nov 23, 2022 at 7:01 PM Michael Oujesky  wrote:
>
> Actually, if you are doing sequential processing, zEDC is perhaps the
> best as it "write"s full-tracks, regardless of the BLKSIZE
> specified.  With zEDC, the BLKSIZE is just the size of data passed
> to/from the application and no longer the physical data "written" to disk.
>
> Michael
>
> At 12:14 PM 11/23/2022, Mike Schwab wrote:
>
> >If you are doing sequential reads and writes, half track is the best
> >you can do.  If you are random reading small records, I.E. 80 byte,
> >400 bytes, 2000 bytes; then smaller blocks lead to less I/O per
> >record, since you aren't using most of the data read, and the larger
> >the block the less you use.  VSAM use a 4K physical record unless
> >you specify a very large CI size.
> >
> >On Wed, Nov 23, 2022 at 11:56 AM Paul Gorlinsky  
> >wrote:
> > >
> > > Short block more efficient? Elaborate please. Space utilization
> > and efficient are not necessarily the same. Latency issues vary a
> > lot depending on the exact box being used for DASD. DS6K v DS8K.
> > DS8K with rotating v solid-state ...
> > >
> > > QSAM v BPAM v BSAM v etc...
> > >
> > > General guidelines ...
> > >
> > > --
> > > For IBM-MAIN subscribe / signoff / archive access instructions,
> > > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> >
> >
> >
> >--
> >Mike A Schwab, Springfield IL USA
> >Where do Forest Rangers go to get away from it all?
> >
> >--
> >For IBM-MAIN subscribe / signoff / archive access instructions,
> >send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread Seymour J Metz
Best practice is to not share what you can't protect. MIM, GRS ring, etc., can 
help, but sharing of PDSE or Unix files can lead to data corruption even with 
serialization, and sharing between security domains might lead not only to 
compromising data but to legal issues, both civil and criminal. If you're a 
financial or medical facility, involve the legal staff in any decision about 
sharing data between sysplexes.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Gord Neill [02ff5f18e15f-dmarc-requ...@listserv.ua.edu]
Sent: Thursday, November 24, 2022 3:54 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate 
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is to 
share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
developers/sysprogs can get access to current datasets, but in order to do 
that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
don't know of any other serialization products, and since this is not a Sysplex 
environment, they can't use GRS Star.  I suggested the idea of no GRS, keeping 
most DASD volumes isolated to each LPAR, with a "shared string" available to 
all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread Seymour J Metz
I don't even trust myself; belt and suspender policies are highly useful in a 
development environment. The key is to deploy safeguards that don't get 
underfoot. Have you never had to revert a change?

Auditors serve a useful purpose. Get rid of the bad ones, not all.


--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3


From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU] on behalf of 
Farley, Peter [031df298a9da-dmarc-requ...@listserv.ua.edu]
Sent: Thursday, November 24, 2022 10:38 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

Not necessarily true in a software development environment where all members of 
the team need to share all their data everywhere.  "Zero trust" is anathema in 
a development environment.

If you don't trust me then fire me.  It's cleaner that way.

Shakespeare was *almost* right.  First get rid of all the auditors, *then* get 
rid of all the lawyers.

Peter

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Lennie Dymoke-Bradshaw
Sent: Thursday, November 24, 2022 5:24 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: To share or not to share DASD

If you were asking in a security context, I would advise against it in nearly 
all cases.
Auditors will not like that a system's data can be accessed without reference 
to the RACF (or ACF2, or TSS) system that is supposed to protect it.

Lennie Dymoke-Bradshaw

-Original Message-
From: IBM Mainframe Discussion List  On Behalf Of 
Gord Neill
Sent: 24 November 2022 20:55
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: To share or not to share DASD

G'day all,
I've been having discussions with a small shop (single mainframe, 3 separate 
LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is to 
share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
developers/sysprogs can get access to current datasets, but in order to do 
that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
don't know of any other serialization products, and since this is not a Sysplex 
environment, they can't use GRS Star.  I suggested the idea of no GRS, keeping 
most DASD volumes isolated to each LPAR, with a "shared string"
available to all LPARs for copying datasets, but it was not well received.

Just curious as to how other shops are handling this.  TIA!


Gord Neill | Senior I/T Consultant | GlassHouse Systems
--

This message and any attachments are intended only for the use of the addressee 
and may contain information that is privileged and confidential. If the reader 
of the message is not the intended recipient or an authorized representative of 
the intended recipient, you are hereby notified that any dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please notify us immediately by e-mail and delete the message and any 
attachments from your system.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: To share or not to share DASD

2022-11-25 Thread kekronbekron
Additionally, there's the generic class DASDVOL that may be applicable/helpful.
Used to be $DASDI. in FACILITY... I think?
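
For anyone unfamiliar with the class, a minimal sketch of the idea (the 
volser mask and group name are made up; check your own standards before 
activating anything):

  RDEFINE  DASDVOL PRD* UACC(NONE)
  PERMIT   PRD* CLASS(DASDVOL) ID(SYSPROG) ACCESS(ALTER)
  SETROPTS GENERIC(DASDVOL) CLASSACT(DASDVOL)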

- KB

--- Original Message ---
On Friday, November 25th, 2022 at 6:11 PM, Ituriel do Neto 
<03427ec2837d-dmarc-requ...@listserv.ua.edu> wrote:


> Hi,
> 
> In the past, I used to work for a tiny shop with the same distribution you 
> indicated.
> Only three Lpars and no Sysplex, no GRS.
> 
> At that time, we chose to make all disks available to all Lpars, but there 
> was a segregation of Production, Development, and Sysprog volumes done by 
> VOLSER.
> I don't remember the details anymore, but shared disks were labeled as SHR*, 
> Production and development disks as PRD* and DEV*, and of course SYSRES, 
> Page, spool, etc...
> 
> At IPL time, a small program was executed, searching all volumes and issuing 
> V OFFLINE to those that do not belong to the appropriate Lpar. This program 
> used wildcard masks to select what should remain ONLINE.
> 
> And, of course, MVS commands were protected in RACF, so only authorized 
> userids can VARY ONLINE a volume.
> 
> It worked well for us, in this reality.
> 
> 
> Best Regards
> 
> Ituriel do Nascimento Neto
> z/OS System Programmer
> 
> 
> 
> 
> 
> 
> Em sexta-feira, 25 de novembro de 2022 02:38:47 BRT, Joel C. Ewing 
> jce.ebe...@cox.net escreveu:
> 
> 
> 
> 
> 
> 
> But it's not just a case of whether you trust they will not intentionally
> damage something, but the ease of accidentally causing integrity
> problems by not knowing when others have touched catalogs, volumes, or
> datasets on DASD that is physically shared but not known to be shared by
> the Operating System. If many people are involved, the coordination
> procedures involved to prevent damage, assuming such procedures are even
> feasible, are a disaster waiting to happen.
> 
> If volumes are SMS, all datasets must be cataloged and the associated
> catalogs must be accessed from any system that accesses those
> datasets. If the systems are not in a relationship that enables proper
> catalog sharing, access and possible modification of the catalog from
> multiple systems causes the cached versions of catalog data to become
> out of sync with actual content on the drive when the catalog is altered
> from a different system, and there is a high probability the catalog
> will become corrupted on all systems.
> 
> Auditors are justified in being concerned whether independent RACF
> databases on multiple systems will always be in sync to properly protect
> production datasets from unintentional access or unauthorized access if
> test LPARs share access to production volumes. There should always be
> multiple barriers to doing something bad because accidents happen --
> like forgetting to change a production dataset name in what was intended
> to be test JCL.
> 
> There are just too many bad things that can happen if you try to share 
> things that are only designed for sharing within a sysplex. The only
> relatively safe way to do this across independent LPARs is
> non-concurrently: have a set of volumes and a catalog for HLQ's of
> just the datasets on those volumes that is also located on one of those
> volumes, and only have those volumes on-line to one system at a time and
> close, and deallocate all datasets and the catalog on those volumes
> before taking them offline to move them to a different system.
> 
> A much simpler and safer solution is to not share DASD volumes across
> LPARs not in the same sysplex, to maintain a unique copy of datasets on
> systems where they are needed, and to use a high-speed communication
> link between the LPARs to transmit datasets from one system to another
> when there is a need to resync those datasets from a production LPAR.
> 
> Joel C Ewing
> 
> 
> On 11/24/22 21:38, Farley, Peter wrote:
> 
> > Not necessarily true in a software development environment where all 
> > members of the team need to share all their data everywhere. "Zero trust" 
> > is anathema in a development environment.
> > 
> > If you don't trust me then fire me. It's cleaner that way.
> > 
> > Shakespeare was almost right. First get rid of all the auditors, then get 
> > rid of all the lawyers.
> > 
> > Peter
> > 
> > -Original Message-
> > From: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU On Behalf Of 
> > Lennie Dymoke-Bradshaw
> > Sent: Thursday, November 24, 2022 5:24 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: To share or not to share DASD
> > 
> > If you were asking in a security context, I would advise against it in 
> > nearly all cases.
> > Auditors will not like that a system's data can be accessed without 
> > reference to the RACF (or ACF2, or TSS) system that is supposed to protect 
> > it.
> > 
> > Lennie Dymoke-Bradshaw
> > 
> > -Original Message-
> > From: IBM Mainframe Discussion List IBM-MAIN@LISTSERV.UA.EDU On Behalf Of 
> > Gord Neill
> > Sent: 24 November 2022 20:55
> > To: IBM-MAIN@LISTSERV.UA.EDU
> 

Re: To share or not to share DASD

2022-11-25 Thread Ituriel do Neto
Hi,

In the past, I used to work for a tiny shop with the same distribution you 
indicated. 
Only three Lpars and no Sysplex, no GRS.

At that time, we chose to make all disks available to all Lpars, but there was 
a segregation of Production, Development, and Sysprog volumes done by VOLSER. 
I don't remember the details anymore, but shared disks were labeled as SHR*, 
Production and development disks as PRD* and DEV*, and of course SYSRES, Page, 
spool, etc...

At IPL time, a small program was executed, searching all volumes and issuing V 
OFFLINE for those that did not belong to the appropriate Lpar. This program used 
wildcard masks to select what should remain ONLINE.
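
A minimal Rexx sketch of that idea (the masks and volser list are made up, 
and a real version would walk the UCBs and issue the VARY commands rather 
than just Say them):

/* Rexx -- sketch: flag volumes whose volsers don't match this
   LPAR's keep-masks. Masks and volsers are illustrative only;
   the real program scanned actual volumes at IPL time.         */
keep = 'SHR* PRD* SYS*'              /* what stays online here   */
vols = 'PRD001 DEV001 SHR001 TST001' /* stand-in for a UCB scan  */
Do i = 1 To Words(vols)
   vol = Word(vols, i)
   If \Fits(vol, keep) Then Say 'Would vary' vol 'offline'
End
Exit

Fits: Procedure                      /* volser matches any mask? */
Parse Arg v, masks
Do j = 1 To Words(masks)
   m = Word(masks, j)
   p = Pos('*', m)
   If p > 0 Then Do
      If Left(v, p-1) == Left(m, p-1) Then Return 1
      End
   Else If v == m Then Return 1
End
Return 0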

And, of course, MVS commands were protected in RACF, so only authorized 
userids could VARY a volume ONLINE.

It worked well for us, in this reality.


Best Regards

Ituriel do Nascimento Neto
z/OS System Programmer






Em sexta-feira, 25 de novembro de 2022 02:38:47 BRT, Joel C. Ewing 
 escreveu: 





But it's not just a case of whether you trust they will not intentionally 
damage something, but the ease of accidentally causing integrity 
problems by not knowing when others have touched catalogs, volumes, or 
datasets on DASD that is physically shared but not known to be shared by 
the Operating System.  If many people are involved, the coordination 
procedures involved to prevent damage, assuming such procedures are even 
feasible, are a disaster waiting to happen.

 If volumes are SMS, all datasets must be cataloged and the associated 
catalogs must be accessed from any system that accesses those 
datasets.   If the systems are not in a relationship that enables proper 
catalog sharing, access and possible modification of the catalog from 
multiple systems causes the cached versions of catalog data to become 
out of sync with actual content on the drive when the catalog is altered 
from a different system, and there is a high probability the catalog 
will become corrupted on all systems.

Auditors are justified in being concerned whether independent RACF 
databases on multiple systems will always be in sync to properly protect 
production datasets from unintentional access or unauthorized access if 
test LPARs share access to production volumes.  There should always be 
multiple barriers to doing something bad because accidents happen -- 
like forgetting to change a production dataset name in what was intended 
to be test JCL.

There are just too many bad things that can happen if you try to share 
things that are only designed for sharing within a sysplex. The only 
relatively safe way to do this across independent LPARs is 
non-concurrently:   have a set of volumes and a catalog for HLQ's of 
just the datasets on those volumes that is also located on one of those 
volumes, and only have those volumes on-line to one system at a time and 
close, and deallocate all datasets and the catalog on those volumes 
before taking them offline to move them to a different system.

A much simpler and safer solution is to not share DASD volumes across 
LPARs not in the same sysplex, to maintain a unique copy of datasets on 
systems where they are needed, and to use a high-speed communication 
link between the LPARs to transmit datasets from one system to another 
when there is a need to resync those datasets from a production LPAR.

Joel C Ewing


On 11/24/22 21:38, Farley, Peter wrote:

> Not necessarily true in a software development environment where all members 
> of the team need to share all their data everywhere.  "Zero trust" is 
> anathema in a development environment.
>
> If you don't trust me then fire me.  It's cleaner that way.
>
> Shakespeare was *almost* right.  First get rid of all the auditors, *then* 
> get rid of all the lawyers.
>
> Peter
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf Of 
> Lennie Dymoke-Bradshaw
> Sent: Thursday, November 24, 2022 5:24 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: To share or not to share DASD
>
> If you were asking in a security context, I would advise against it in nearly 
> all cases.
> Auditors will not like that a system's data can be accessed without reference 
> to the RACF (or ACF2, or TSS) system that is supposed to protect it.
>
> Lennie Dymoke-Bradshaw
>
> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf Of 
> Gord Neill
> Sent: 24 November 2022 20:55
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: To share or not to share DASD
>
> G'day all,
> I've been having discussions with a small shop (single mainframe, 3 separate 
> LPARs, no Sysplex) regarding best practices for DASD sharing.  Their view is 
> to share all DASD volumes across their 3 LPARs (Prod/Dev/Test) so their 
> developers/sysprogs can get access to current datasets, but in order to do 
> that, they'll need to use GRS Ring or MIM with the associated overhead.  I 
> don't know of any other serialization products, and since this is not a 
> Sysplex environment,