I'm not clear on the role the proxies would play.  Can you clarify
that?

In general, if you had a broker with no queues then any client trying to
send a message to that broker would fail.  Neither connections nor
messages would somehow pass through that broker to another broker.


Justin

On Tue, Jun 26, 2018 at 11:25 PM, Victor <victor.rom...@gmail.com> wrote:

> I'm still playing with different topologies of ActiveMQ Artemis in
> Kubernetes. An almost satisfactory one (I'm also playing with colocated, but
> anti-affinity is difficult there) is to have masters and slaves paired in
> two StatefulSets, as in the diagram and the manifest sketch below:
>
>         +-----------------------+
>         |                       |
>         |                       |
>         |                       |
> +-------+--------+     +--------+-------+
> |artemis master 1|     |artemis master 2|
> +----------------+     +--------+-------+
>         |group-name=artemis-1   |
>         |                       |
>         v   group-name=artemis-2|
> +-------+--------+     +--------v-------+
> |artemis slave 1 |     |artemis slave 2 |
> +----------------+     +----------------+
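>
> A minimal sketch of what the masters StatefulSet above could look like
> (the name, labels and image are my assumptions, not necessarily what the
> chart actually uses):
>
> # Hypothetical StatefulSet for the two masters; names/image are assumed.
> apiVersion: apps/v1
> kind: StatefulSet
> metadata:
>   name: artemis-master
> spec:
>   serviceName: artemis-master
>   replicas: 2
>   selector:
>     matchLabels:
>       app: artemis
>       role: master
>   template:
>     metadata:
>       labels:
>         app: artemis
>         role: master
>     spec:
>       containers:
>         - name: artemis
>           image: vromero/activemq-artemis   # assumed image
>           ports:
>             - containerPort: 61616          # default Artemis acceptor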
>
> Note that this configuration also has inter-pod anti-affinity
> <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/>
> between masters and slaves so they don't end up running on the same physical
> node. There is also a disruption budget
> <https://kubernetes.io/docs/concepts/workloads/pods/disruptions/> of one, so
> only a single master or slave can be down at the same time without risking
> data loss.
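>
> For reference, the anti-affinity rule and the disruption budget could look
> roughly like this (labels and API versions are assumptions on my side):
>
> # Pod template fragment: never co-schedule two artemis pods on one node.
> affinity:
>   podAntiAffinity:
>     requiredDuringSchedulingIgnoredDuringExecution:
>       - labelSelector:
>           matchLabels:
>             app: artemis
>         topologyKey: kubernetes.io/hostname
> ---
> # Allow at most one artemis pod to be voluntarily disrupted at a time.
> apiVersion: policy/v1beta1
> kind: PodDisruptionBudget
> metadata:
>   name: artemis
> spec:
>   maxUnavailable: 1
>   selector:
>     matchLabels:
>       app: artemis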
>
> This could be acceptable as version 1, as it might be useful for many
> users. However, I found one little thing that is usually fine but turns
> out to be bothersome for Kubernetes: slaves do not open ports nor serve
> traffic while they are acting as slaves. Kubernetes has a special nuance
> in terms of load balancing: the load balancer does not check whether the
> pods are healthy. Instead, Kubernetes itself performs two checks, liveness
> (should I restart you?) and readiness (are you ready?). Readiness means
> both "I'm started" and "I'm ready to receive traffic". Given that slaves
> do not open ports, they won't typically be ready (if they were, the load
> balancer would route to them and those requests would fail). As a result,
> the Helm chart shows weird behaviors, for instance the following:
>
> helm install activemq-artemis --wait
>
> Will time out, since --wait waits for every pod to reach the ready state.
> Unless I go for a much more sophisticated balancing solution, this is
> mostly unavoidable, and undesirable.
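>
> For context, the liveness and readiness checks mentioned above would sit
> on the Artemis container roughly like this (port and timings are my
> assumptions); a slave that keeps 61616 closed can never pass the
> readinessProbe:
>
> livenessProbe:            # should Kubernetes restart this pod?
>   tcpSocket:
>     port: 61616
>   initialDelaySeconds: 30
> readinessProbe:           # should this pod receive traffic from the Service?
>   tcpSocket:
>     port: 61616
>   periodSeconds: 10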
>
> One possible solution I have contemplated might perhaps be a bit too
> creative, and I'd prefer to run it by this list before executing it. What
> if I set up a cluster of Artemis brokers with no persistence, no local
> queues, and just core connections to the real servers:
>
>
>           +-----load balancer----+
>           |                      |
>           |                      |
>           |                      |
>           |                      |
>     +--proxy 1--+         +---proxy 2--+
>     |           |         |            |
>     |           |         |            |
>     |           |         |            |
>     |           |         |            |
>     |           |         |            |
>  master 1    slave 1    master 2   slave 2
>
>
> With my limited understanding, I believe those mostly stateless Artemis
> brokers would act as the proxy I'm after, wrapping the needs of Kubernetes
> without having to write any new code.
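>
> If that assumption held, the proxy tier could be a plain Deployment behind
> the load balancer Service, since it would hold no data (again, names and
> image are assumptions):
>
> # Hypothetical stateless proxy tier: no persistence, no local queues,
> # only core connections to the real brokers.
> apiVersion: apps/v1
> kind: Deployment
> metadata:
>   name: artemis-proxy
> spec:
>   replicas: 2
>   selector:
>     matchLabels:
>       app: artemis-proxy
>   template:
>     metadata:
>       labels:
>         app: artemis-proxy
>     spec:
>       containers:
>         - name: artemis
>           image: vromero/activemq-artemis   # assumed image
>           ports:
>             - containerPort: 61616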
>
> Is this assumption right? Would there be a risk of data loss? I assume
> there would be unless I activate persistence; would there be a
> work-around for this?
>
> Thanks!
>
