Hi Carlo,

I would prefer option one, because TLS handling can be quite expensive and I would not want an application workload to block the masters in any way from doing their work.
Well, the MCO (Machine Config Operator) maintains the worker MachineConfigs, which explains why your manual deletion does not stick.
https://github.com/openshift/machine-config-operator

I would try to run 2 IngressControllers with different nodePlacement, domain and any other settings specific to the external router. The idea is untested; a rough sketch of the external controller is below.
https://docs.okd.io/latest/networking/ingress-operator.html

Default: worker nodes
External: new worker nodes

Maybe this doc will show you some more options for your setup.
https://docs.okd.io/latest/networking/configuring_ingress_cluster_traffic/overview-traffic.html
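Something like this for the external controller (untested and only to illustrate the idea; the domain, the node label and the namespace label are placeholders you would have to adapt to your cluster):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external
  namespace: openshift-ingress-operator
spec:
  # placeholder domain; must be different from the default *.apps domain
  domain: ext.apps.example.com
  replicas: 2
  nodePlacement:
    nodeSelector:
      matchLabels:
        # placeholder label that you would put on the new worker nodes
        node-role.kubernetes.io/worker-external: ""
  # only admit routes from namespaces carrying this placeholder label,
  # so the console and other internal apps never appear on the external router
  namespaceSelector:
    matchLabels:
      router: external

Note that the default controller will still admit routes from those namespaces unless you also give it a routeSelector/namespaceSelector that excludes them; I have not tested that part.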
HTH and Regards
Aleks

On 26.11.20 10:39, Carlo Rodrigues wrote:

Anyone?

From: users-boun...@lists.openshift.redhat.com <users-boun...@lists.openshift.redhat.com> On Behalf Of Carlo Rodrigues
Sent: Monday, November 23, 2020 17:46
To: users@lists.openshift.redhat.com
Subject: [EXTERNAL] Changing Workers' machine config - 2 IngressControllers

Hello All,

I'm using an OKD 4.5 cluster with 3 masters and 3 workers, installed with the oVirt IPI installer. I want to segregate the external traffic of some workloads from the rest, so I created a second IngressController, named "external". I had 2 choices:

1. Add another worker node and keep the default ingress controller on 2 worker nodes and the external ingress controller on the other 2 worker nodes.
2. Move the default ingress controller to the master nodes and use the worker nodes to host the external ingress controller.

I opted for option 2, using a nodeSelector and tolerations so that the default routers run on the master nodes (roughly as sketched in the P.S. below). So far, so good.

My problem now is that I don't want the keepalived instances for the internal API and the internal *.apps VIPs to run on the worker nodes; I want them to run only on the master nodes. So I edited the 00-worker MachineConfig and removed the /etc/kubernetes/manifests/keepalived.yaml entry, but that MachineConfig gets overwritten every time I change it, probably by the Machine Config Operator. I then deleted the file manually on the worker nodes, but I'm afraid it will come back after an upgrade or some other change. Is there any other way to accomplish what I'm trying to do?

Even if I opt for having 2 worker nodes with the default router and 2 worker nodes with the new one (external), I think I'll have the same problem, because keepalived could put the internal *.apps VIP on a worker node that hosts the external router. There would then be at least a certificate mismatch, and, because I only want to publish the routes of a few namespaces on the external router, internal apps (including the console) would not work when they hit the external router.

How do you segregate traffic, and how did you overcome these problems?

Thanks

Carlo Rodrigues
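P.S. For reference, the nodePlacement change I applied to the default IngressController looks roughly like this (typed from memory, so treat the exact label and toleration values as approximate and check them against your own cluster):

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    # schedule the default routers on the control plane nodes
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/master: ""
    # masters are tainted NoSchedule, so the router pods need a matching toleration
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule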