Normally there are two ways to achieve this: excludeNeighbors and an
AffinityBackupFilter [1].

However, excludeNeighbors won't help when several pods run on the same k8s
node: it detects neighbors by MAC address, and each pod gets its own virtual
network interface with its own MAC, so co-located pods are not recognized as
neighbors.
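For reference, excludeNeighbors is just a flag on the affinity function. A
fragment sketch (assuming the usual Ignite imports; "myCache" is a
placeholder cache name):

    CacheConfiguration<Integer, String> cacheCfg =
        new CacheConfiguration<>("myCache");

    RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
    // Neighbor detection is MAC-based: every pod has its own virtual
    // NIC/MAC, so two pods on the same k8s node are NOT treated as
    // neighbors and this flag has no effect in that layout.
    aff.setExcludeNeighbors(true);

    cacheCfg.setAffinity(aff);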

So your best bet is to use ClusterNodeAttributeAffinityBackupFilter:
* configure ClusterNodeAttributeAffinityBackupFilter to compare the
K8S_NODE_NAME node attribute, as described in [2] (see the Java sketch below)
* expose the K8S_NODE_NAME environment variable to every pod via the
Downward API, as described in [3]:

      env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

This way backups won't end up on the same k8s node.
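
For completeness, here is a minimal Java sketch of the server-side part.
The attribute key "K8S_NODE_NAME", the cache name "myCache", and the class
name are just placeholders for this example; it publishes the env var as a
node attribute and tells the affinity function not to place a backup on a
node whose attribute value matches the primary's:

    import java.util.Collections;

    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
    import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ServerStartup {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // Publish the k8s node name (injected via the Downward API
            // snippet above) as an Ignite node attribute.
            cfg.setUserAttributes(Collections.singletonMap(
                "K8S_NODE_NAME", System.getenv("K8S_NODE_NAME")));

            // Backups may only go to nodes whose K8S_NODE_NAME attribute
            // differs from the primary's, i.e. to a different k8s node.
            RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
            aff.setAffinityBackupFilter(
                new ClusterNodeAttributeAffinityBackupFilter("K8S_NODE_NAME"));

            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<>("myCache");
            cacheCfg.setBackups(1);
            cacheCfg.setAffinity(aff);

            cfg.setCacheConfiguration(cacheCfg);

            Ignition.start(cfg);
        }
    }

An equivalent Spring XML configuration works the same way; the key point is
that the attribute value differs per k8s node and is identical for all
server pods scheduled onto that node.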

[1]
https://apacheignite.readme.io/docs/affinity-collocation#crash-safe-affinity
[2]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
[3]
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables

On Tue, Jul 7, 2020 at 12:42 PM Humphrey <hmmlo...@gmail.com> wrote:

> Let's say I have 2 kubernetes nodes and 4 ignite server nodes.
> If kubernetes runs 2 pods on each node, this will result in the
> following:
>
> *kubernetes_node1:* ignite_node1, ignite_node2
> *kubernetes_node2:* ignite_node3, ignite_node4
>
> I specify that my cache backup = 1
>
> Is there a way to configure that the backup data of ignite_node1 goes on
> ignite_node3 or ignite_node4 and NOT on ignite_node2 (the same physical
> machine/kubernetes node)? Is there any configuration for this (I assume
> it's something done at runtime, because we don't know where kubernetes
> will schedule the pods)?
>
> Background:
> If kubernetes_node1 goes down, then there won't be any data loss.
>
> Humphrey
>
>
>
