Thanks Jordan.

Do the Ansible playbooks provide a means of setting nodeName when performing
an Advanced Install or scaleup? If they do, I can add new nodes with the
appropriate nodeName set and then remove the existing ones.
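
Roughly what I have in mind, accepting the EBS trade-off Jason mentioned
(untested; the new node name is just an example and the playbook path is
from memory):

[new_nodes]
node3.example.com openshift_hostname=node3.example.com

# ansible-playbook playbooks/byo/openshift-node/scaleup.yml
# oadm manage-node ip-10-20-128-101.us-west-1.compute.internal --schedulable=false
# oadm manage-node ip-10-20-128-101.us-west-1.compute.internal --evacuate
# oc delete node ip-10-20-128-101.us-west-1.compute.internal

That is, scale up with the new name set via openshift_hostname, drain the
old node, then delete its node object.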

Otherwise, would you care to provide some details on which node client
credentials need to be updated? The documentation revolves largely around
assisted setups; I didn't manage to find anything on manually configuring a
node and introducing it into an existing cluster (I might look into how
scaleup works in Ansible for some ideas).
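
From a quick look, my guess is the manual route would involve generating a
new node config and client certs on a master for the new name, something
like the following (node/master names are placeholders, flags and paths are
from memory, so please treat it as a rough sketch rather than a recipe):

oadm create-node-config \
  --node-dir=/etc/origin/node-node3.example.com \
  --node=node3.example.com \
  --hostnames=node3.example.com,10.20.128.101 \
  --master=https://master.example.com:8443 \
  --signer-cert=/etc/origin/master/ca.crt \
  --signer-key=/etc/origin/master/ca.key \
  --signer-serial=/etc/origin/master/ca.serial.txt

then copying the generated directory to the node as /etc/origin/node, making
sure nodeName in node-config.yaml matches the new name, restarting the node
service, and checking that the new node shows up in oc get nodes before
deleting the old node object.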

Frank
Systems Engineer

VSee: fr...@vsee.com <http://vsee.com/u/tmd4RB> | Cell: +65 9338 0035

Join me on VSee for Free <http://vsee.com/u/tmd4RB>




On Mon, Aug 15, 2016 at 11:32 PM, Jordan Liggitt <jligg...@redhat.com>
wrote:

> Node names are immutable in the API. Changing the node name would require
> updating the node client credentials; the node would then register a new
> node object with the API when it started, and any pods scheduled to the old
> node name would get evicted once the old node object's status got stale
> enough.
>
>
>
> On Aug 15, 2016, at 11:25 AM, Frank Liauw <fr...@vsee.com> wrote:
>
> Thanks Jason.
>
> Can I update nodeName in config.yaml and restart the EC2 nodes? Will that
> update the metadata of my nodes automatically across the entire cluster?
>
> Frank
> Systems Engineer
>
> VSee: fr...@vsee.com <http://vsee.com/u/tmd4RB> | Cell: +65 9338 0035
>
> Join me on VSee for Free <http://vsee.com/u/tmd4RB>
>
>
>
>
> On Mon, Aug 15, 2016 at 11:10 PM, Jason DeTiberus <jdeti...@redhat.com>
> wrote:
>
>>
>>
>> On Mon, Aug 15, 2016 at 4:17 AM, Frank Liauw <fr...@vsee.com> wrote:
>>
>>> Hi All,
>>>
>>> I have a 5-node OpenShift cluster split across 2 AZs, our colocation
>>> center and AWS, with a master in each AZ and the rest being nodes.
>>>
>>> We set up our cluster with the Ansible script, and somewhere during the
>>> setup the EC2 instances' private hostnames were picked up and registered
>>> as the node names of the nodes in AWS. That's a bit annoying, as it
>>> deviates from our hostname conventions, is rather difficult to read, and
>>> isn't something that can be changed post-setup.
>>>
>>> It didn't help that some of the admin operations seem to use the EC2
>>> instance's private hostname, so I get errors like this:
>>>
>>> # oc logs logging-fluentd-shfnu
>>> Error from server: Get
>>> https://ip-10-20-128-101.us-west-1.compute.internal:10250/containerLogs/logging/logging-fluentd-shfnu/fluentd-elasticsearch:
>>> dial tcp 198.90.20.95:10250: i/o timeout
>>>
>>> Scheduling system-related pods on the AWS instances works (router,
>>> fluentd), though any build pod that lands on an EC2 instance never gets
>>> built and eventually times out; my suspicion is that the build monitoring
>>> depends on the hostname, which can't be reached from our colocation
>>> center master (which we use as the primary), and hence breaks.
>>>
>>> I'm unable to find much detail on this behaviour.
>>>
>>> 1. Can we manually change the hostname of certain nodes?
>>>
>>
>> The nodeName value overrides this; however, if you are relying on the
>> cloud provider integration there are limitations (see below).
>>
>>
>>>
>>> 2. How do we avoid registering EC2 nodes with their private hostnames?
>>>
>>
>> If you are willing to give up the native cloud provider integration (the
>> ability to leverage EBS volumes as PVs), then you can override this using
>> the openshift_hostname variable when installing the cluster. At least as
>> of Kubernetes/Origin 1.2, the nodeName value in the node config needed to
>> match the private DNS name of the host.
>>
>> --
>> Jason DeTiberus
>>
>
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
