Hey JB, I'm interested in this.

I know many approaches to replication have been tried, with AMQ 5 as well
as Artemis.  For example, AMQ 5 had "Replicated LevelDB storage" and "Pure
Master Slave" (where the active broker copied updates to the passive
brokers).  So I'm curious how this effort will solve the problem in an
effective manner.

I've had people ask about GlusterFS, but haven't heard of anyone
successfully using it.

The reason the shared filesystem works so well is that a synchronous write
to the shared filesystem is guaranteed "on disk" (and hence accessible by
all clients of that filesystem).  Even though the overhead of the sync
write can be significant, high-speed networking and advanced hardware help
minimize the latency it introduces.  If we use replication, would it
effectively do the same thing for all the replicated copies?  If not, then
how can message loss and duplication be prevented on change of the active
broker?
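To make that guarantee concrete: a sync write in Java ultimately comes down
to FileChannel.force(true) (fsync at the OS level), which blocks until the
data is on stable storage.  A minimal sketch (class and method names are
just for illustration):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SyncWrite {
    // Write data and block until it is physically on disk.
    static long syncWrite(Path path, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap(data));
            // force(true) returns only once data and file metadata reach
            // stable storage, so any other client of the shared
            // filesystem (e.g. the failover broker) will see the entry.
            ch.force(true);
            return ch.size();
        }
    }

    public static void main(String[] args) throws IOException {
        Path journal = Files.createTempFile("journal", ".dat");
        System.out.println(syncWrite(journal, "message-1".getBytes())); // prints 9
        Files.delete(journal);
    }
}
```

That force() call is exactly where the latency cost lands, and it is also
the entirety of the durability story for shared-filesystem failover.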

Of course, one big downside of the shared filesystem solution is that the
file server itself has to be redundant and highly available (like a Filer,
or EFS), so a distributed solution like this is appealing.
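For what it's worth, the way replicated stores typically avoid a sync
write to every copy is to acknowledge the producer once a majority of
replicas have persisted the entry (this is the ack-quorum idea BookKeeper
uses, which is relevant since you mention it below).  A toy sketch of that
idea, with all names hypothetical and the replica round-trip stubbed out:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class QuorumAck {
    // Simulate sending one entry to N replicas; the write counts as
    // durable once a majority have acknowledged it.
    static boolean writeWithQuorum(int replicas, long timeoutMs)
            throws InterruptedException {
        int quorum = replicas / 2 + 1;
        CountDownLatch acks = new CountDownLatch(quorum);
        ExecutorService pool = Executors.newFixedThreadPool(replicas);
        for (int i = 0; i < replicas; i++) {
            pool.submit(() -> {
                // stand-in for: ship the entry to a replica, the replica
                // fsyncs it locally, then replies with an ack
                acks.countDown();
            });
        }
        // Block until a majority has acked (or we time out).
        boolean durable = acks.await(timeoutMs, TimeUnit.MILLISECONDS);
        pool.shutdown();
        return durable;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(writeWithQuorum(3, 1000)); // prints true
    }
}
```

The trade-off is that a new active broker must then be chosen from the
majority that has the entry, which is where the leader election piece
comes in.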

Cheers!

Art


On Wed, Feb 17, 2021 at 1:52 PM JB Onofré <j...@nanthrax.net> wrote:

> Hi everyone
>
> On a cloud environment, our current ActiveMQ5 topologies have limitations:
>
> - master slave works fine but requires either a shared file system (for
> instance AWS EFS) or a database. It also means that we only have one
> broker active at a time.
> - network of brokers can be used to get a kind of partitioning of
> messages across several brokers. However, if we have pending messages on
> a broker and we lose that broker, the messages on it are not available
> until we restart the broker (with the same file system).
>
> The idea of replicatedKahaDB is to replicate messages from one KahaDB to
> another one. If we lose a broker, a broker with the replica is able to
> load the messages so they remain available.
>
> I started to work on this implementation:
> - adding a new configuration element as persistence adapter
> - adding a zookeeper client; zookeeper is used for topology storage,
> heartbeat, and leader election
> - I’m evaluating the use of bookkeeper as well (directly as storage)
>
> I will share a branch on my local repo with you soon.
>
> Any comment is welcome.
>
> Thanks
> Regards
> JB
>
