Re: maintain, distribute multiple copies same-named ZFS files

2012-10-29 Thread Staller, Allan
snip
(1) How does one maintain multiple copies of zFS files off of one master
catalog, given that you can't catalog VSAM files using symbolic values in the
catalog entry?
/snip

I use a DFSMSdss (DFDSS) logical copy from my SMP/E-maintained data sets to
the new sysres. The sample control statements below are pre-processed by a
utility that performs variable substitution prior to job submission (a sketch
of the pre-substitution template follows the sample).

 COPY DS -
    (INCLUDE -
       (SYS1.OMVS.**) -
    ) -
    RENAMEUNC -
    ( -
       (SYS1.OMVS.ETC.FVOL4,SYS1.OMVS.ETC.TVOL4) -
       (SYS1.OMVS.ROOT.FVOL4,SYS1.OMVS.ROOT.TVOL4) -
       (SYS1.OMVS.SCSDROOT.FVOL4,SYS1.OMVS.SCSDROOT.TVOL4) -
       (SYS1.OMVS.VAR.FVOL4,SYS1.OMVS.VAR.TVOL4) -
    ) -
    LOGINDYNAM(FVOL4) RECATALOG(SYS1.PRD2Z1D.MCAT)
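
For illustration, the pre-substitution template might look something like
this; &FVOL., &TVOL., and &MCAT. are hypothetical variable names that the
site utility resolves (here to FVOL4, TVOL4, and SYS1.PRD2Z1D.MCAT) before
the job is submitted:

 COPY DS -
    (INCLUDE -
       (SYS1.OMVS.**) -
    ) -
    RENAMEUNC -
    ( -
       (SYS1.OMVS.ETC.&FVOL.,SYS1.OMVS.ETC.&TVOL.) -
       (SYS1.OMVS.ROOT.&FVOL.,SYS1.OMVS.ROOT.&TVOL.) -
       (SYS1.OMVS.SCSDROOT.&FVOL.,SYS1.OMVS.SCSDROOT.&TVOL.) -
       (SYS1.OMVS.VAR.&FVOL.,SYS1.OMVS.VAR.&TVOL.) -
    ) -
    LOGINDYNAM(&FVOL.) RECATALOG(&MCAT.)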


HTH,


Al Staller | Z Systems Programmer | KBM Group | (Tel) 972 664-3565 | 
allan.stal...@kbmg.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bonno, Tuco
Sent: Friday, October 26, 2012 6:13 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: maintain, distribute multiple copies same-named ZFS files

snip
[original post quoted in full; see the 2012-10-26 thread-starter below]
/snip

Re: maintain, distribute multiple copies same-named ZFS files

2012-10-29 Thread Bonno, Tuco
snip
If you do a DELETE/NOSCRATCH on a volume, how can you then do a NEW/KEEP to
the same volume?  Did you mean you do a delete without updating the catalog?
/snip

Correction: I *did* mean to say that I do a delete WITHOUT updating the
catalog.
Thank you.



maintain, distribute multiple copies same-named ZFS files

2012-10-26 Thread Bonno, Tuco
Operating environment here is all z/OS 1.13.
How am I supposed to clone and DISTRIBUTE (with emphasis on *distribute*) zFS
files?

CURRENTLY, I have a situation where 6 LPARs, 3 production and 3 test, all
share the same master catalog.  In that master catalog, the entry for
OMVS.ROOT reads, inter alia:   volser = &HFSV1 ;   and there are 6 3390s out
there, one for each LPAR, each of which has a copy of OMVS.ROOT on it.
Currently, each such OMVS.ROOT is an HFS file.  Whenever any LPAR is IPLed,
the value of &HFSV1 is set by means of an entry in an appropriate IEASYMxx in
an appropriate PARMLIB, and each LPAR runs with its own copy of the root,
called OMVS.ROOT.

Whenever I have a NEW COPY of OMVS.ROOT that needs to be distributed, I
(1) drain the 3 test LPARs; (2) go to the pack in each test LPAR which
contains OMVS.ROOT and do a DELETE/NOSCRATCH for the OMVS.ROOT on that pack;
(3) from a 7th LPAR, copy the new version of OMVS.ROOT to the pack in each
test LPAR which is supposed to have a copy of OMVS.ROOT on it, using
DISP=(NEW,KEEP).  Later on, those 3 test LPARs are IPLed as production LPARs.
In this methodology, the entry in the master catalog is never touched, and
nothing in any of the 3 production LPARs is ever impacted by whatever I may
be doing in any test LPAR.  (This general-purpose methodology is how I
maintain ALL the O/S image DSNs.)  (And, BTW, I use this methodology to
distribute a couple DOZEN OMVS/UNIX System Services DSNs, not just
OMVS.ROOT.)
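
A minimal sketch of steps (2) and (3) as corrected in the 2012-10-29
follow-up above (the delete touches only the VTOC, never the shared master
catalog), assuming a hypothetical test-pack volser of HFSV01:

//* Step (2): scratch the old OMVS.ROOT from the test pack's VTOC
//* only; the master catalog entry is left untouched.
//SCROLD  EXEC PGM=IEHPROGM
//SYSPRINT DD SYSOUT=*
//DD1      DD UNIT=3390,VOL=SER=HFSV01,DISP=OLD
//SYSIN    DD *
  SCRATCH DSNAME=OMVS.ROOT,VOL=3390=HFSV01,PURGE
/*
//* Step (3): allocate the replacement on the same pack with
//* DISP=(NEW,KEEP), so it lands on the volume uncataloged; the
//* actual data movement into it (DFSMSdss, pax, etc.) is whatever
//* the site's copy job uses and is omitted here.
//ALLOC   EXEC PGM=IEFBR14
//NEWROOT  DD DSN=OMVS.ROOT,DISP=(NEW,KEEP),
//            UNIT=3390,VOL=SER=HFSV01,
//            SPACE=(CYL,(500,100,1)),DSNTYPE=HFS

This is exactly the part that has no zFS equivalent: a VSAM cluster can be
neither scratched nor allocated behind the catalog's back, which is the crux
of the questions below.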

The recent emphasis from IBM is, and has been, to convert one's HFS files to
zFS.

So, now, given that zFS files are really VSAM files: (1) How does one
maintain multiple copies of zFS files off of one master catalog, given that
you can't catalog VSAM files using symbolic values in the catalog entry?
(2) Next, assuming that there is some method -- unknown to me at the present
moment -- to maintain multiple copies of identically named VSAM files off of
one catalog, how do I go about distributing new copies of zFS files using the
methodology I described above?  First of all, if I follow the above-described
scenario, the first time I try to delete a zFS file (a VSAM file, now), isn't
that going to also erase the catalog entry for it (and wreak some havoc for
the remaining 5 LPARs, which are also still using the catalog)?  Second of
all, there are the VVDS entries, something I need not bother myself about in
my current methodology (because in that, no VSAM files are involved) -- how
do I keep them synced up?  (3) I tried to consult several books about this,
inter alia, "z/OS Distributed File Service zSeries File System
Implementation", "Distributed File Service zSeries File System
Administration", "ABCs of z/OS System Programming, Volume 9", and the "zFS
reorganization tool", and I just can't get this to work, at least in the
context of the methodology I currently use to distribute OMVS/UNIX System
Services DSNs.  Maybe I need to totally change my methodology?  EXACTLY how
are other people doing this kind of thing?

TIA

/s/ tuco bonno;
Graduate, College of Conflict Management;
University of SouthEast Asia;
I partied on the Ho Chi Minh Trail - tiến lên !! 




Re: maintain, distribute multiple copies same-named ZFS files

2012-10-26 Thread van der Grijn, Bart (B)
In our older environment (no real sysplex), each system has its own master
catalog, but the root file system is part of the sysres volume, which is
actually shared between systems. We don't ship new copies of the root file
system; instead, we IPL with a new version of sysres each month. The root
file system is not in the master catalog; it is in a ucat on sysres.

In our new environment (all sysplex), the 'IBM root' file system is still on
a shared sysres (in a ucat on sysres), but it has the name of the sysres in
the data set name (using &SYSR1). Again, we never replace the file system by
itself; we replace sysres every month. The real (sysplex) root file system is
shared across the plex.
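
A sketch of what "the name of the sysres in the data set name" can look like
in BPXPRMxx, assuming the IBM root is mounted as the version file system in a
shared-file-system setup (&SYSR1 is the system-defined symbol for the IPL
volume serial; the data set name is illustrative):

MOUNT FILESYSTEM('OMVS.&SYSR1..ROOT')
      MOUNTPOINT('/$VERSION')
      TYPE(ZFS) MODE(READ)

Because the symbol resolves to the current sysres volser at IPL, swapping in
next month's sysres automatically picks up that sysres's uniquely named,
ucat-resident root.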

Bart

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Bonno, Tuco
Sent: Friday, October 26, 2012 7:13 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: maintain, distribute multiple copies same-named ZFS files

snip
[original post quoted in full; see the 2012-10-26 thread-starter above]
/snip


Re: maintain, distribute multiple copies same-named ZFS files

2012-10-26 Thread retired mainframer
You may not be able to create a VSAM data set using a symbol for the volume,
but after it is created you can alter the catalog entry to specify the
symbol.

Since you already have system symbols that contain unique values for each
system, why not add one more that specifies a unique qualifier for each ROOT
data set?  Then use this symbol in the DSN you specify in BPXPRMxx.  Each
data set would then have its own catalog entry (which might eliminate the
need for &HFSV1).  This might let you simplify your current HFS processing.
It will eliminate the name collision in the master catalog when you convert
to zFS.  (While this method will cause master catalog updates, which should
not be an issue, it does maintain the lack of impact to the production
LPARs.)

You could use this symbol in the DSN for most common data sets, VSAM or not,
which have separate copies on each system, but it is only mandatory for VSAM.
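
A minimal sketch of that idea, assuming a hypothetical symbol named &ROOTQ
(the LPAR names, system names, and qualifier values are all made up):

/* IEASYMxx: give each system its own root qualifier */
SYSDEF LPARNAME(TLPAR1)
       SYSNAME(TEST1)
       SYMDEF(&ROOTQ='TST1')
SYSDEF LPARNAME(PLPAR1)
       SYSNAME(PROD1)
       SYMDEF(&ROOTQ='PRD1')

/* BPXPRMxx: the symbol is resolved at IPL, so each system */
/* mounts a uniquely named root with its own catalog entry */
ROOT FILESYSTEM('OMVS.&ROOTQ..ROOT')
     TYPE(ZFS) MODE(RDWR)

Each root then has a distinct name (OMVS.TST1.ROOT, OMVS.PRD1.ROOT, and so
on), which is what removes the VSAM name collision in the shared master
catalog.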

If you do a DELETE/NOSCRATCH on a volume, how can you then do a NEW/KEEP to
the same volume?  Did you mean you do a delete without updating the catalog?

:: -Original Message-
:: From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
:: Behalf Of Bonno, Tuco
:: Sent: Friday, October 26, 2012 4:13 AM
:: To: IBM-MAIN@LISTSERV.UA.EDU
:: Subject: maintain, distribute multiple copies same-named ZFS files
::
:: [original post quoted in full; see the 2012-10-26 thread-starter above]


Re: maintain, distribute multiple copies same-named ZFS files

2012-10-26 Thread Shmuel Metz (Seymour J.)
In <90ec2e798a22854ebf67a14ec3fe093fa74f2ad...@scmbxc01.bcbad.state.sc.us>,
on 10/26/2012 at 07:12 AM, Bonno, Tuco <t...@cio.sc.gov> said:

>How am I supposed to clone and DISTRIBUTE (with emphasis on
>*distribute*) zFS files?

First set up your environment in accordance with z/OS UNIX System Services
Planning, GA22-7800-16, 7.5.4 "Mounting the version file system"; in
particular, each version file system should have a unique name.
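
In BPXPRMxx terms this is the VERSION statement; a sketch, using the common
(but not required) convention of &SYSR1 as the version qualifier so that each
cloned sysres yields a uniquely named version file system:

SYSPLEX(YES)
VERSION('&SYSR1.')

The version file system's data set name then carries the sysres volser
(e.g., OMVS.&SYSR1..ROOT), so distributing a new root is just a matter of
cloning a new sysres; it can never collide with the copy the other systems
are still using.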

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2        http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)



Re: maintain, distribute multiple copies same-named ZFS files

2012-10-26 Thread Rob Schramm
Certainly the version root is a way to go.

If you want to go another way, you can place backups of the zFS files on the
res pack and restore them on the system where you want them cataloged.  Mount
all the zFS's read-only and you can share the zFS's as well as the normal res
pack.

Read-only just takes a little up-front planning.
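
A sketch of such a shared read-only mount in BPXPRMxx (the data set and
mountpoint names are made up):

MOUNT FILESYSTEM('OMVS.SHARED.TOOLS.ZFS')
      MOUNTPOINT('/usr/tools')
      TYPE(ZFS) MODE(READ)

With MODE(READ), every system can mount the same cluster concurrently, the
same way they already share the res pack itself.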

Rob Schramm
Senior Systems Consultant
Imperium Group




On Fri, Oct 26, 2012 at 12:44 PM, Shmuel Metz (Seymour J.)
<shmuel+...@patriot.net> wrote:

> [Shmuel's reply quoted in full; see above]


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN