Thank you, Yuan. That looks *very* interesting; I was actually looking at
an HA setup, too.
For this specific need, it seems like it would be problematic because this
script seems to also share data; that's specifically what I do not want to
do.
Interesting idea, Nick. I'm not sure how that would work because when the
nodes are booting up, initially they wouldn't know which VLAN to connect
to. I don't have enough control over the switch to set up port-based VLANs.
___
Hi Kevin,
We have a script, xcatha.py, that can set up a standby xCAT management node.
It is used in one of the xCAT HA solutions. We can use it to set up two inactive xCAT management nodes and then activate one of them. Your scenario is similar to our solution.
You can find the related
Why not run the mgr and data networks for the new cluster in different
vlans.
That way you can leave the dhcp on both systems running and they will only
respond to the requests for their respective clusters.
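For reference, that split could be expressed in ISC dhcpd by declaring only the new cluster's subnet on the new management node; the daemon only answers on subnets it has a declaration for. The interface and addresses below are made up:

```
# Sketch of /etc/dhcp/dhcpd.conf on the new management node.
# dhcpd ignores requests arriving on subnets it has no declaration
# for, so each server only serves its own VLAN.
subnet 10.20.0.0 netmask 255.255.0.0 {
    option routers 10.20.0.1;
    # host entries for the new cluster's nodes go here
}
```

The tagged interface itself can be created with something like `ip link add link eth0 name eth0.20 type vlan id 20`, assuming the switch trunks that VLAN to the management node.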
Nick
On Sat., 8 Dec. 2018, 8:22 am Rich Sudlow wrote:
We do something slightly different: we don't use a DHCP pool, and only allow
DHCP to answer MACs that are known.
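In ISC dhcpd terms, this is typically done with `deny unknown-clients` plus explicit host declarations; the subnet, addresses, and node name below are made up:

```
# Sketch: only answer DHCP for MACs that have a matching host entry.
subnet 10.1.0.0 netmask 255.255.0.0 {
    deny unknown-clients;        # ignore requests from unlisted MACs
    host node01 {
        hardware ethernet aa:bb:cc:dd:ee:01;
        fixed-address 10.1.1.1;
    }
}
```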
On 12/7/18 1:30 PM, David Johnson wrote:
Yes, only one can have a dynamic range. In my case neither of them do,
since I manually paste the MAC addresses into the mac table.
My issue with deleting first is that when I deleted 16 MAC addresses and then
got sidetracked and went home, those nodes later lost their leases and ended
up getting
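Hand-pasted MAC lists are easy to typo; a quick sanity check before loading them into the mac table might look like this (the file name and addresses are made up):

```shell
# Print any line that is not a well-formed colon-separated MAC address.
printf 'aa:bb:cc:dd:ee:01\nAA:BB:CC:DD:EE:2\n' > macs.txt
grep -Ev '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$' macs.txt
```

Here the second line is flagged because its last octet has only one digit.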
Thank you, Dave. That is an interesting alternative approach; I might
actually consider that.
So you are saying that the old and new DHCP servers can run in parallel? I
assume that they just can't both have dynamic ranges?
I'm not sure I understand what the problem is with deleting a node first,
We’ve kept parallel clusters on the same network for nearly a year now while
transitioning to RH7 from CentOS 6.
We initially copied the hosts, nodelist, and mac tables into the new xCAT
database, and carefully controlled use of makedhcp so that nodes moving to the
new cluster were first added to
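One way the per-node handover could be sketched with standard xCAT commands; the node name and MAC below are hypothetical, and this would need an actual xCAT management node to run:

```shell
# On the old management node: drop the node's DHCP entry
makedhcp -d node01

# On the new management node: define the node and regenerate records
chdef node01 mac=aa:bb:cc:dd:ee:01   # MAC carried over from the old mac table
makehosts node01
makedhcp node01
```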
I'm in the middle of upgrading our existing HPC (from RHEL 6 to RHEL 7).
I'm doing most of my testing on a separate "sandbox" test bed, but now I'm
close to going live. I'm trying to figure out how to do this with minimal
disruption.
My question: how can I install the new management node and keep