Oops, hit send a bit too early.

External routing would depend on the particular CNI environment (GKE does
it by default, Calico can do it (
https://docs.tigera.io/calico/latest/networking/determine-best-networking#pod-ip-routability-outside-of-the-cluster),
unsure about others...).
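
As a rough sketch of what that can look like with Calico (every name,
address and AS number below is a placeholder, not something from your
setup): you peer the nodes with a router on the shared network via a
BGPPeer resource, and keep the IPPool un-NATed and unencapsulated so the
fabric can route the pod IPs directly:

  apiVersion: projectcalico.org/v3
  kind: BGPPeer
  metadata:
    name: upstream-router
  spec:
    peerIP: 192.0.2.1        # placeholder: router on the shared network
    asNumber: 64512          # placeholder: its AS number
  ---
  apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: default-ipv4-ippool
  spec:
    cidr: 10.244.0.0/16      # placeholder: this cluster's PodCIDR
    natOutgoing: false       # keep pod IPs visible outside the cluster
    ipipMode: Never          # no encapsulation, the fabric routes pod IPs

The Calico page linked above goes through the different options in more
detail.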

On Thu, Feb 6, 2025 at 11:55 AM Natalie Klestrup Röijezon
<nat.roije...@stackable.tech> wrote:

> For the given setup, I think a separate configuration per "side" will have
> to be the way to go. There are also other issues with letting pods know the
> other side's cluster-internal addresses (there's normally no guarantee that
> there isn't an overlap in the PodCIDR subnets between different clusters,
> so there might be something completely different on those addresses!).
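>
> As a sketch of what I mean (the hostnames below are placeholders, not
> taken from your setup): cluster A's zoo.cfg would list its local members
> by their internal per-pod Service names and the remote members by their
> external addresses, and cluster B's config would do the inverse:
>
>   # zoo.cfg on cluster A (cluster B mirrors this from its own side)
>   server.1=zk-a-0-internal:2888:3888;2181
>   server.2=zk-a-1-internal:2888:3888;2181
>   server.3=zk-a-2-internal:2888:3888;2181
>   server.4=<external address of B's first member>:2888:3888;2181
>   server.5=<external address of B's second member>:2888:3888;2181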
>
> Depending on your specific environment, another option might be to enable
> external routing for your PodCIDRs ().
>
> On Thu, Feb 6, 2025 at 11:22 AM Enriquez, Victor
> <victor.a.enriq...@capgemini.com.invalid> wrote:
>
>> Hi there,
>>
>> I am working on a project where we are running ZooKeeper inside 2
>> separate Kubernetes clusters; let's call them cluster A and cluster B.
>>
>> On cluster A there are 3 instances (3 pods, each running one ZooKeeper
>> process), each getting an internal IP address from the PodCIDR (as
>> expected in Kubernetes). We also create one internal Service per pod,
>> pointed at that pod, so that each pod has its own internal DNS name.
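>>
>> For illustration, each of these per-pod Services is roughly like the
>> following (the names, and the headless Service selecting a single pod
>> by its name label, are just a sketch, not our literal manifests):
>>
>>   apiVersion: v1
>>   kind: Service
>>   metadata:
>>     name: zk-a-0-internal      # one Service per pod
>>   spec:
>>     clusterIP: None            # headless: DNS name resolves to the pod IP
>>     selector:
>>       statefulset.kubernetes.io/pod-name: zk-a-0
>>     ports:
>>       - name: client
>>         port: 2181
>>       - name: quorum
>>         port: 2888
>>       - name: election
>>         port: 3888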
>>
>> On cluster B we do the same, but only with 2 instances (2 pods).
>>
>> Now, to make the 2 Kubernetes clusters talk, we have created one external
>> LoadBalancer Service per pod; each of these ends up getting an IP address
>> from the network that is shared between the 2 clusters.
>>
>> This "public" IP is managed by Kubernetes/MetalLB, meaning it is not
>> bound to any NIC of the pods. But we want cluster B to use these IPs to
>> reach the cluster A members, and the other way around.
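>>
>> Again as a rough illustration (names are placeholders, not our literal
>> manifests), the external Service per pod is something like:
>>
>>   apiVersion: v1
>>   kind: Service
>>   metadata:
>>     name: zk-a-0-external
>>   spec:
>>     type: LoadBalancer         # MetalLB assigns an IP from the shared network
>>     selector:
>>       statefulset.kubernetes.io/pod-name: zk-a-0
>>     ports:
>>       - name: client
>>         port: 2181
>>       - name: quorum
>>         port: 2888
>>       - name: election
>>         port: 3888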
>>
>> Yesterday I discovered that we can use the zookeeper.multiAddress.enabled
>> property to define multiple addresses for each instance, separated by '|'.
>> So I tried configuring, for each server, both the internal Kubernetes
>> Service name of its pod and the external LoadBalancer Service name (which
>> points to the "public" IP), hoping to keep a single peer list in the
>> settings. The issue comes when ZooKeeper starts: it also tries to bind to
>> the external IP of the LoadBalancer, which is not present on the pod, and
>> that makes ZooKeeper crash.
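>>
>> Concretely, what I tried is roughly this (hostnames are placeholders for
>> our actual Service names):
>>
>>   # zoo.cfg (zookeeper.multiAddress.enabled is the equivalent Java
>>   # system property for the first line)
>>   multiAddress.enabled=true
>>   server.1=zk-a-0-internal:2888:3888|zk-a-0-external:2888:3888;2181
>>   server.2=zk-a-1-internal:2888:3888|zk-a-1-external:2888:3888;2181
>>   server.3=zk-a-2-internal:2888:3888|zk-a-2-external:2888:3888;2181
>>   server.4=zk-b-0-internal:2888:3888|zk-b-0-external:2888:3888;2181
>>   server.5=zk-b-1-internal:2888:3888|zk-b-1-external:2888:3888;2181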
>>
>> Is there any way of exposing an external IP so that it is only used by
>> the cluster members on the other side, while still using the internal IP
>> when communicating with the local peers? Or am I bound to use a separate
>> peer definition per site?
>>
>> Thanks in advance.
>>
>
