Okie dokie, I'll update that gist now, thanks Stephen!
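For anyone following along, the rename-but-keep-the-record approach Stephen describes might look roughly like the fragment below in a BIND-style zone file. The host name and address here are purely illustrative, not taken from the gist:

```
; illustrative only -- name and IP are made up, not from the actual zone
; old record commented out, renamed entry keeps the address claimed in DNS:
;vmhost-ocp01.mgmt         IN A 10.3.160.40
retired-vmhost-ocp01.mgmt  IN A 10.3.160.40  ; remove only once the box leaves the rack
```

That way the address stays visibly allocated until the hardware is physically pulled.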

On Thu, 15 Jul 2021 at 10:18, Stephen John Smoogen <smo...@gmail.com> wrote:

> On Thu, 15 Jul 2021 at 04:53, David Kirwan <dkir...@redhat.com> wrote:
> >
> > The previous changes to those management hosts were a name change, so I
> think we should probably leave those as they were, so they are still
> accessible on the management network. The hosts themselves are older
> hardware that Kevin was suggesting we might throw out or retire soon.
> >
> > I think the idea is, instead of using those, to use the existing
> vmhost-x86-05/06/07/11 staging hosts, as they are all free capacity/newer boxes.
> >
>
> OK, as long as the hosts ping/respond on that network, they should have
> IP addresses in DNS. Otherwise the following can happen:
> 1. We try to use those IPs for a new box and both systems go dead.
> [That usually needs someone to go onto the hardware via a KVM and
> fix it.]
> 2. We get a scan/audit from RH and they want to know what hardware is
> on that address and whether it is an attacker/etc.
> 3. The box gets turned off and then comes back on somehow. Then we may
> have an IP conflict.
>
> So until the hardware is physically removed from the racks, keep its
> mgmt IP in DNS. Maybe change the name to oldhost or retired, but
> keep it there until then.
>
> > On Thu, 15 Jul 2021 at 09:22, Stephen John Smoogen <smo...@gmail.com>
> wrote:
> >>
> >> On Thu, 15 Jul 2021 at 04:18, David Kirwan <dkir...@redhat.com> wrote:
> >> >
> >> > https://gist.github.com/davidkirwan/ecc1c135b6f2c82b1ef337ebdcd8414b
> >> >
> >> > Hi folks, if someone would be so kind as to take a look over our
> changes here: we want to revert some vmhost-ocp4 changes and rename the ocp
> hosts.
> >> >
> >> > Just two things to be aware of:
> >> >
> >> > This commented-out line is intentional, as we wish to reuse the
> bootstrap.ocp node to become the worker01.ocp node after the control plane
> has been installed.
> >> > +;worker01.ocp IN A 10.3.166.118
> >> >
> >> > Second question: we've removed the vmhost-ocp nodes from the 160.
> management network; not sure if we should have done that.
> >> >
> >>
> >> Where are the management ports for these boxes going to be? Is
> >> 10.3.160.40 really unused?
> >>
> >>
> >> > cheers,
> >> > David
> >> >
> >
>
>
>
> --
> Stephen J Smoogen.
> I've seen things you people wouldn't believe. Flame wars in
> sci.astro.orion. I have seen SPAM filters overload because of Godwin's
> Law. All those moments will be lost in time... like posts on  BBS...
> time to reboot.
>


-- 
David Kirwan
Software Engineer

Community Platform Engineering @ Red Hat

T: +(353) 86-8624108     IM: @dkirwan
_______________________________________________
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
