ah YUP!
I was going to add: when I did a short jaunt with IGS, most of the IBM
sysprogs had never heard of ServerPac, and most had never provided or
worked with a CustomPac install.
The client's systems are built based on what the client needs, sometimes
just the base system, sometimes the base plus OEM products. That info is
provided to another IBM service that builds the system, which is then
restored to the client's systems using Txxxxx and Dxxxxx volumes, so the
customer is not really involved.
Curious where z/OSMF will participate, if at all, in these scenarios.
Unfortunately, and somewhat fortunately for me, I've worked at a lot of
different sites; only when I was "THE GUY" did the install process stay
the same from company to company. :)
I think, as with anything else new, once z/OSMF is embraced, installing
the OS and products will be somewhat like 'those' platforms.
Carmen
On 7/22/2021 9:38 AM, Tom Brennan wrote:
"seems everyone has a better way"
I think you hit on the root of the problem. With Windows and Linux
installs, everyone (generally) does things exactly the same way,
including filenames and directory locations. They don't have the
problems we have with mainframe installs.
On 7/22/2021 7:19 AM, Carmen Vitullo wrote:
I think I IPL'd the CPAC system that was built from the ServerPac only
once in my career. It was the company/department's standard, and we had
a small LPAR built just for that reason. Documentation was provided, and
IPLing the CPAC system was only done to proceed with the ServerPac
install.
Moving on from that company, I moved to a different process; it seems
everyone has a better way. For me that means building the target sysres
and zFS file systems, running some IVP tests, building my new master
catalog, and IPLing that system on my sandbox system.
I have a documented process to copy/migrate the new version or
maintenance to production that works well even for someone who's not a
z/OS sysprog.
Carmen
On 7/21/2021 1:19 PM, Tom Brennan wrote:
Same with me when I ran ServerPac installs - I never IPL'd using the
datasets provided by the installer, such as catalogs, RACF, spool,
SMF, page, etc. I never understood IBM's reason for doing that, and
also never understood the reason for running the system validation
jobs on the vanilla system. What was much more important for us was
IPLing the new res pack on a sandbox system with our own system
datasets, parms, and usermods - and then solving any issues that
came up.
So those IBM-supplied system datasets were never used, and although
I could not delete them using the CPP dialog, I would always set
them to 1 track or 1 cylinder before running the allocation job, just
to save space.
It just made little sense to me to prove that the vanilla system from
IBM works correctly. Of course it does; otherwise, why would they send
it to me?
On 7/20/2021 10:23 PM, Gibney, Dave wrote:
I don't know how it would work with z/OSMF, but I don't worry about
the dataset sizes of my SMP/E target datasets, because I never IPL
using them.
I copy to a new SYSRES; FDR and ADRDSSU dataset copies go to single
extents. Of course, I rarely (maybe 5 times in 30 years) put
maintenance into a running system.
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
----------------------------------------------------------------------
--
I am not bound to win, but I am bound to be true. I am not bound to
succeed, but I am bound to live by the light that I have. I must stand
with anybody that stands right, and stand with him while he is right,
and part with him when he goes wrong. - Abraham Lincoln