Hi All,
We want to move our Linux environment into an SSI cluster. We have local
resources that I need to get rid of in order for the guests to use LGR. The
most obvious are the CMS minidisks (190, 19E). Most commands in the PROFILE
EXEC are CP commands, so these can be moved into the directory.
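For example (userid, sizes, and device numbers are made up for illustration),
a directory entry along these lines could take over most of what the PROFILE
EXEC does today:
USER LNX001 XXXXXXXX 2G 4G G
 COMMAND SET RUN ON
 COMMAND SPOOL CONSOLE START
 MDISK FF00 FB-512 V-DISK 1024000 MR
 IPL 150
The COMMAND statements run at logon just as the CP commands in the PROFILE
EXEC would, and an FB-512 V-DISK MDISK even covers a swap VDISK with no CMS
involvement.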
You can still use CMS to run swapgen. When you IPL Linux, CMS is no longer
there.
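If you keep CMS just for that, the PROFILE EXEC can stay tiny. A minimal
sketch, assuming the SWAPGEN EXEC is on an accessed disk (device number and
block count are only examples):
/* PROFILE EXEC - minimal sketch */
'EXEC SWAPGEN FF00 1048576'  /* 512 MB VDISK swap at FF00 (512-byte blocks) */
'CP IPL 150'                 /* from here on CMS is gone anyway */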
On 01/22/2016 03:39 PM, van Sleeuwen, Berry wrote:
> We want to move our Linux environment into an SSI cluster. [...]
Regarding the last part of your message, zPro has no restrictions on directory
entry contents. Whatever is in the directory is replicated to the destination
server.
On 01/22/2016 03:39 PM, van Sleeuwen, Berry wrote:
> We want to move our Linux environment into an SSI cluster. [...]
On Friday, 01/22/2016 at 10:10 GMT, "van Sleeuwen, Berry" wrote:
> So the question is what would be the best way to configure the new setup.
> Either remove CMS dependency altogether or detach the disks after we don't
> need CMS anymore (in the PROFILE EXEC or after
On Friday, 01/22/2016 at 10:29 GMT, Marcy Cortes wrote:
> We do other necessary stuff from CMS. Mainly checking of where we are and
> writing various parms that a linux boot script uses to configure IP,
> hostname, etc.
I am wondering if people are looking at
"best" is also a point of view (or a religion perhaps). I was raised with CMS
so I do like CMS a lot. The CP commands in the directory remove a lot of the
need for executing command within a PROFILE EXEC. So far I have dismissed this
configuration because I am (too much?) accustomed to CMS.
I know, just as CMS is no longer available after we IPL a VSE guest. But as
long as the CMS disks are linked in the guest, that will prevent LGR. Granted,
we might build a shared CMS for our Linux guests, but I don't like a second,
separate CMS.
So the question is what would be the best way to
Thanks. It's nice to see what you do within CMS. We currently have some guests
that test a shared minidisk in a CSE environment to see if they are already
started. Obviously that has to be changed in an SSI configuration.
Actually, when searching for what I should/could do I already copied
As Dennis mentioned, do the detach. I do that as well.
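The tail end of the PROFILE EXEC is all it takes, something like this (disk
and IPL addresses are just examples):
/* end of PROFILE EXEC - minimal sketch */
'CP DETACH 190 19E'  /* drop the CMS disks so no local links block LGR */
'CP IPL 150'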
On 01/22/2016 04:08 PM, van Sleeuwen, Berry wrote:
> I know, just as CMS is no longer available after we IPL a VSE guest. [...]
Thanks, that's indeed helpful. In the past we tested a provisioning tool that
couldn't handle our CMS setup (both the Linux PROFILE EXEC and the AUTOLOG1
PROFILE EXEC), so I will take such items into account when redesigning our
environment. We currently don't use any tools other than our own processes
/etc/init.d/boot.local has this in it for the swap disks (if you are on RH
rather than SUSE, that might be a different file):
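# rebuild the swap signature on every boot (assuming these are volatile VDISK-style devices)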
/sbin/mkswap /dev/disk/by-path/ccw-0.0.ff00-part1
/sbin/mkswap /dev/disk/by-path/ccw-0.0.ff01-part1
/sbin/mkswap /dev/disk/by-path/ccw-0.0.ff02-part1
/sbin/mkswap
You're welcome.
Our prod servers can be brought up in any of 3 data centers. And they could
come up with different host names and IPs for testing recovery scenarios. Our
dev/test servers can come up in 2 data centers.
It's just easier to have one spot in CMS to carry this info.
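Just as an illustration (not our actual layout), think of a CMS file with one
record per guest and data center:
* GUESTID DC  HOSTNAME              IPADDR
LNX001    DC1 lnx001.example.com    10.1.1.10
LNX001    DC2 lnx001-dr.example.com 10.2.1.10
The PROFILE EXEC figures out where the guest woke up, picks the matching
record, and hands it to the Linux boot script.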
Liberally stolen
Alan wrote:
> I am wondering if people are looking at using persistent DHCP for this. To
> make it work you have to use layer 2 VSWITCHes and you have to manage
> MACIDs on the NICDEFs. Your automation tools have to "manufacture" a new
> MACID for every new NICDEF and put it there or on COMMAND
Hi,
Once SSI is up and running, you can do LGR using IBM Wave by dragging a server
from one node and dropping it on another node of the same SSI cluster, all
graphically. You can get the prerequisites for IBM Wave from the latest
Redbook, which is publicly available. A simple search on IBM Wave
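Under the covers that is the CP VMRELOCATE command, which you can also drive
by hand from an authorized userid (guest and member names invented):
VMRELOCATE TEST LNX001 TO MEMBER2
VMRELOCATE MOVE LNX001 TO MEMBER2
TEST is worth issuing first; it reports anything that would block the move,
such as links to local minidisks.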
What Dennis said.
Running DHCP on servers may even be prohibited by security policy. I haven't
checked, but I wouldn't be surprised if it was.
On Saturday, 01/23/2016 at 01:13 GMT, "O'Brien, Dennis L" wrote:
> Like Marcy said, I don't see anything to gain by involving the DHCP people.
> Managing MACID's is no less complicated than managing IP addresses.
Managing MACIDs is actually way *less*
> Managing MACIDs is actually way *less* complicated than IPs since you
> don't have to worry about global uniqueness or subnet boundaries.
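(In the directory that is a single operand; device number, VSWITCH name, and
MACID below are invented for illustration:
NICDEF C600 TYPE QDIO LAN SYSTEM VSW1 MACID 0A0001
CP combines the MACID with the VMLAN MACPREFIX from SYSTEM CONFIG to form the
guest's MAC address, so each MACID only has to be unique under that prefix.)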
The network people worry about global uniqueness and subnet boundaries, and
they give us an IP address range to use. We have to make sure that the guest
Our setup is similar to Marcy's. Each SSI cluster has a master list of guest
userids and IP addresses in a CMS file. Each entry contains production and DR
test IP addresses. We used to have separate addresses for real DR, but those
are now the same as production. The IP addresses are