[ https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17287178#comment-17287178 ]

John Behm edited comment on AMQ-8149 at 2/19/21, 4:42 PM:
----------------------------------------------------------

I spent 1.5 months learning the configuration side of ActiveMQ Artemis and sadly 
came to the conclusion that it is a configuration mess that is not really 
feasible to run in a Docker container without changing Artemis itself to be 
container-ready.

One does not simply put a non-container application inside a container and 
declare it proper container-ready software.
From my experience, the configuration side of ActiveMQ Artemis needs to be 
redesigned to be container-ready.
As described in my longer comment in the issue mentioned above, I tried to run a 
high-availability Artemis cluster in a Kubernetes environment and ran into so 
many problems that we decided against using Artemis (the second aspect was 
message-order problems, mentioned in a different issue of mine).

*Quoting* my comment from the other issue, for the lazy ones:

Well, RabbitMQ does a great job at being easy to set up. It took me two days to 
get a cluster running, with hardly any problems at all.
The biggest advantage is that they provide a lot of essential examples as well 
as a Kubernetes Operator implementation that can simply be deployed to the 
Kubernetes cluster and then manages the setup of the custom Kubernetes resource 
(the RabbitMQ cluster) that it provides.
This might be overkill as a first step, but it could be a long-term target to 
look at (I'm not familiar with operator development yet).

Contrast that with Artemis, which has taken me 1.5 months so far and counting. I 
learned a lot about Artemis in the process, as well as about Docker, Kubernetes 
and Helm charts, but anyone who just wants to set up Artemis as a clustered 
high-availability deployment will not spend that much time before they simply 
say: no.

I am currently trying (when I have the time) to get this setup running in a 
Kubernetes cluster that does not really support UDP broadcasting, so one has to 
use JGroups.
The JGroups version currently in use is very old (I do not know why), something 
around 3.6.x if I recall correctly.
This old JGroups version only works with an equally old version of the JGroups 
discovery protocol KUBE_PING (0.9.3); both should be updated to be, well, up to 
date with current technology.

KUBE_PING [https://github.com/jgroups-extras/jgroups-kubernetes]

This JGroups plugin should be part of either a dedicated Kubernetes Docker image 
or of every Docker image, as it is key for peer discovery in a Kubernetes 
cluster.
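
To make this more concrete, here is a rough sketch of how JGroups-based discovery is wired up (the broker.xml element names are the real ones from the Artemis cluster documentation; the KUBE_PING attribute names and the file name jgroups-ping.xml are assumptions based on the jgroups-kubernetes README and may differ between versions):

{code:xml}
<!-- jgroups-ping.xml (sketch, abbreviated stack): TCP transport plus KUBE_PING
     instead of UDP broadcast for peer discovery; attribute names are assumptions -->
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <org.jgroups.protocols.kubernetes.KUBE_PING namespace="activemq" labels="app=artemis"/>
    <MERGE3/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <FRAG2/>
</config>

<!-- broker.xml excerpt: point the cluster broadcast/discovery groups at that stack file -->
<broadcast-groups>
    <broadcast-group name="bg-group1">
        <jgroups-file>jgroups-ping.xml</jgroups-file>
        <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
        <connector-ref>artemis</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <jgroups-file>jgroups-ping.xml</jgroups-file>
        <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>
{code}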

Setting aside the configuration mess one has to fight through, one would need a 
Docker image that lives on Docker Hub instead of having to build it yourself.
The second step for anyone who wants to use that Docker image from Docker Hub is 
knowing how to configure it to work the way they want it to work:

- What environment variables do I set?
- What configuration files do I mount (before Artemis starts), and at which path 
inside the container, so that the application inside can pick up the mounted 
configuration and run accordingly?
- What examples can I simply copy and paste from?

I think that, in order to avoid the mess in my configuration above, Artemis 
should evaluate environment variables automatically, the way values passed 
through ARTEMIS_CLUSTER_PROPS are, so that they can be used directly inside 
broker.xml instead of first being set as environment variables and then 
additionally passed as JVM arguments through ARTEMIS_CLUSTER_PROPS.
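
To illustrate the double indirection I mean (the property names here are invented for illustration): a value first has to be exported as an environment variable, then forwarded as a -D JVM argument via ARTEMIS_CLUSTER_PROPS, and only then can broker.xml pick it up through its ${...} system-property substitution:

{code:xml}
<!-- broker.xml excerpt (sketch): assumes the container is started with something like
     ARTEMIS_CLUSTER_PROPS="-Dbroker.host=artemis-0.artemis-svc -Dcluster.user=admin",
     which the launch script appends to the JVM arguments -->
<connectors>
    <connector name="artemis">tcp://${broker.host}:61616</connector>
</connectors>
<cluster-user>${cluster.user}</cluster-user>
{code}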

Another big problem, from my point of view, is that Artemis generates 
configuration files at container startup.
This goes against the container immutability "principle" (I'm no expert, don't 
quote me).

First of all: the configuration files should not be generated inside the 
container, but outside.
The container does one thing: tell Artemis where the configuration is located 
and start the Artemis broker. Nothing else.

Those hard-drive performance values that are calculated at container startup 
should not be part of broker.xml, as they are inherent to the underlying 
container/VM/machine and cannot really be known precisely before the container 
starts.
One could simply set them to some small values, but that would defeat the 
purpose of those values.

So my (uneducated) idea would be to check the environment variables for a 
non-empty string for those specific configuration values that are calculated at 
startup.
If the string is not empty, use those values and skip the calculation. If the 
string is empty, calculate the performance values at startup (and maybe also set 
them as environment variables).
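
As a sketch of that idea (the element names are the real broker.xml ones, the property names are invented for illustration), the launch script would only run the disk benchmark when the corresponding variable is empty and otherwise just pass the supplied value through:

{code:xml}
<!-- broker.xml excerpt (sketch): the disk-dependent values that are normally
     benchmarked and written into broker.xml when the instance is created,
     here fed from externally supplied properties instead
     (e.g. -Djournal.buffer.timeout=100000 via ARTEMIS_CLUSTER_PROPS) -->
<journal-buffer-timeout>${journal.buffer.timeout}</journal-buffer-timeout>
<page-sync-timeout>${page.sync.timeout}</page-sync-timeout>
{code}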

Everything else that is configured inside broker.xml is static, so it can stay 
the way it is and should simply be mounted as configuration into the container 
at a specific path that Artemis expects.
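
As a final sketch, a stripped-down static broker.xml like the one below could then be mounted read-only into the container; the exact mount path is an assumption and depends on where the image keeps the broker instance's etc/ directory:

{code:xml}
<!-- static broker.xml (sketch), mounted e.g. into the instance's etc/ directory -->
<configuration xmlns="urn:activemq">
    <core xmlns="urn:activemq:core">
        <name>artemis-0</name>
        <persistence-enabled>true</persistence-enabled>
        <acceptors>
            <acceptor name="artemis">tcp://0.0.0.0:61616</acceptor>
        </acceptors>
    </core>
</configuration>
{code}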



> Create Docker Image
> -------------------
>
>                 Key: AMQ-8149
>                 URL: https://issues.apache.org/jira/browse/AMQ-8149
>             Project: ActiveMQ
>          Issue Type: New Feature
>    Affects Versions: 5.17.0
>            Reporter: Matt Pavlovich
>            Assignee: Matt Pavlovich
>            Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> [ ] jib or jkube mvn plugin
> [ ] Create a general container that supports most use cases (enable all 
> protocols on default ports, etc)
> [ ] Provide artifacts for users to build customized containers
> Tasks:
> [Pending] Creation of Docker repository for ActiveMQ INFRA-21430
> [ ] Add activemq-docker module to 5.17.x
> [ ] Add dockerhub deployment to release process



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
