[https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17287178#comment-17287178]

John Behm edited comment on AMQ-8149 at 2/19/21, 5:03 PM:
----------------------------------------------------------

I spent 1.5 months learning the configuration aspects of ActiveMQ Artemis and 
sadly came to the conclusion that it is a configuration mess that is not 
really feasible to run in a Docker container without changing Artemis itself 
to be container-ready.

One does not simply put a non-container application inside of a container and 
call it proper container-ready software.
From my experience, the configuration side of ActiveMQ Artemis needs to be 
redesigned to be container-ready.
As my longer comment in the issue mentioned above describes, I tried to run a 
high-availability Artemis cluster in a Kubernetes environment and ran into so 
many problems that we decided against using Artemis (the second reason was 
message-ordering problems, described in a different issue of mine).

*Quoting* my comment from the other issue, for those who do not want to click through:

Well, RabbitMQ does a great job of being easy to set up. It took me two days 
to get a cluster running, with hardly any problems at all.
The biggest advantage is that they provide a lot of essential examples and a 
Kubernetes Operator implementation that can simply be deployed to the 
Kubernetes cluster and then manages the setup of the custom Kubernetes 
resource (the RabbitMQ cluster) that it provides.
This might be overkill as a first step, but could be a long-term target to 
look at (I'm not familiar with Operator development yet).

Contrast that with Artemis, which has taken me 1.5 months so far and counting. 
I learned a lot about Artemis in the process, as well as about Docker, 
Kubernetes, and Helm charts, but anyone who simply wants to set up Artemis in 
a clustered high-availability configuration will not spend that much time 
before they just say: no.

I am currently trying (when I have the time) to get this setup running in a 
Kubernetes cluster that does not support UDP broadcasting, so one has to use 
JGroups.
The version of JGroups currently shipped is, for reasons unknown to me, very 
old, around 3.6.x if I recall correctly.
This old version of JGroups only works with an equally old version of the 
JGroups discovery protocol KUBE_PING (0.9.3); both should be updated to keep 
up with current technology.

KUBE_PING [https://github.com/jgroups-extras/jgroups-kubernetes]

This JGroups plugin should be part of either a dedicated Kubernetes Docker 
image or of every Docker image, as it is key for peer discovery in a 
Kubernetes cluster.

Setting aside the configuration mess one has to fight through, there should 
be a Docker image that lives on Docker Hub, so that users do not have to 
build it themselves.
The second step for someone who wants to use the Docker Hub image is to find 
out how to configure it to work the way they want:
 * which environment variables to set
 * which configuration files to mount (before Artemis starts), and at which 
path inside of the container, so that the application picks up the mounted 
files and runs according to that custom configuration
 * which examples can simply be copied and pasted

I think that, to avoid the mess in my configuration, Artemis should evaluate 
environment variables automatically, the way it evaluates the properties 
passed through ARTEMIS_CLUSTER_PROPS. Then they could be used directly inside 
broker.xml instead of being passed once as environment variables and a second 
time as JVM arguments via ARTEMIS_CLUSTER_PROPS.
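
To make the idea concrete, here is a minimal sketch (plain example code, not 
Artemis internals) of resolving ${NAME} placeholders in broker.xml from the 
environment first, falling back to JVM system properties for the current 
behaviour:
{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvSubstitution {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([A-Za-z0-9_]+)\\}");

    /** Replace ${NAME} placeholders with environment variables,
     *  falling back to system properties (the current behaviour).
     *  Unresolvable placeholders are left untouched. */
    public static String resolve(String xml) {
        Matcher m = PLACEHOLDER.matcher(xml);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String name = m.group(1);
            String value = System.getenv(name);
            if (value == null) {
                value = System.getProperty(name, m.group());
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
{code}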

Another big problem, from my point of view, is that Artemis generates 
configuration files at container startup.
This goes against the container immutability "principle" (I'm no expert, 
don't quote me :)).

First: the configuration files should not be generated inside of the 
container, but outside.
The container does one thing: tell Artemis where the configuration is 
located and start the Artemis broker. Nothing else.

The hard-drive performance values that are calculated at container startup 
(I believe that is what they are) should not be part of broker.xml, as they 
are inherent to the underlying container/VM/machine and cannot really be 
known precisely before the container starts.
One could simply set them to some small values, but that would defeat the 
purpose of those values.

So my (uneducated) idea would be to check the environment variables for a 
non-empty string for those specific configuration values that are calculated 
at startup.
If the string is not empty, use it and skip the calculation. If the string is 
empty, calculate those performance values at startup (and maybe also export 
them as environment variables). A sketch of that fallback logic is shown 
below.
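
A sketch of that fallback (the environment variable name and the calibration 
helper are hypothetical, just to show the shape):
{code:java}
public class JournalTuning {

    /** Hypothetical: honor an operator-provided value, otherwise measure. */
    static long journalBufferTimeoutNanos() {
        String fromEnv = System.getenv("ARTEMIS_JOURNAL_BUFFER_TIMEOUT"); // name is an assumption
        if (fromEnv != null && !fromEnv.isEmpty()) {
            // value provided from outside the container: skip the calculation
            return Long.parseLong(fromEnv);
        }
        // nothing provided: fall back to measuring the disk at startup
        return measureSyncWriteTimeNanos();
    }

    /** Placeholder for the existing startup measurement. */
    static long measureSyncWriteTimeNanos() {
        return 24_000L; // e.g. a sync write every 24000 ns, as in the generated broker.xml below
    }
}
{code}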

Everything else configured inside broker.xml is static, so it can stay the 
way it is and should simply be mounted into the container at a specific path 
that Artemis expects.

*Quote END*

 

If one actually builds a Docker image from the files here: 
[https://github.com/apache/activemq-artemis/tree/master/artemis-docker]

and deploys it to a private registry (e.g. Harbor) or some openly accessible 
image repository, they will run into the Kubernetes configuration mess shown 
in the following manifests: 
{code:yaml}
apiVersion: v1
kind: ConfigMap
metadata: 
  name: artemis-config
  namespace: artemis
data: 
  bootstrap.xml: |-
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <!--
    ~ Licensed to the Apache Software Foundation (ASF) under one or more
    ~ contributor license agreements. See the NOTICE file distributed with
    ~ this work for additional information regarding copyright ownership.
    ~ The ASF licenses this file to You under the Apache License, Version 2.0
    ~ (the "License"); you may not use this file except in compliance with
    ~ the License. You may obtain a copy of the License at
    ~
    ~     http://www.apache.org/licenses/LICENSE-2.0
    ~
    ~ Unless required by applicable law or agreed to in writing, software
    ~ distributed under the License is distributed on an "AS IS" BASIS,
    ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    ~ See the License for the specific language governing permissions and
    ~ limitations under the License.
    -->

    <broker xmlns="http://activemq.org/schema">

       <jaas-security domain="activemq"/>


       <!-- artemis.URI.instance is parsed from artemis.instance by the CLI startup.
          This is to avoid situations where you could have spaces or special characters on this URI -->
       <server configuration="file:/data/broker.xml"/><!-- <<<<<<<<<<<<<<<<<<<<<<<========================= -->

       <!-- The web server is only bound to localhost by default -->
       <web bind="http://localhost:8161" path="web">
          <app url="activemq-branding" war="activemq-branding.war"/>
          <app url="artemis-plugin" war="artemis-plugin.war"/>
          <app url="console" war="console.war"/>
       </web>

    </broker>
  broker.xml: |-
    <?xml version='1.0'?>
      <!--
      Licensed to the Apache Software Foundation (ASF) under one
      or more contributor license agreements.  See the NOTICE file
      distributed with this work for additional information
      regarding copyright ownership.  The ASF licenses this file
      to you under the Apache License, Version 2.0 (the
      "License"); you may not use this file except in compliance
      with the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing,
      software distributed under the License is distributed on an
      "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
      KIND, either express or implied.  See the License for the
      specific language governing permissions and limitations
      under the License.
      -->

      <configuration xmlns="urn:activemq"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xmlns:xi="http://www.w3.org/2001/XInclude"
                     xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

         <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq:core ">

            <name>${POD_NAME}/${POD_IP}</name>


            <persistence-enabled>true</persistence-enabled>

            <!-- this could be ASYNCIO, MAPPED, NIO
               ASYNCIO: Linux Libaio
               MAPPED: mmap files
               NIO: Plain Java Files
            -->
            <journal-type>ASYNCIO</journal-type>

            <paging-directory>data/paging</paging-directory>

            <bindings-directory>data/bindings</bindings-directory>

            <journal-directory>data/journal</journal-directory>

            <large-messages-directory>data/large-messages</large-messages-directory>

            <journal-datasync>true</journal-datasync>

            <journal-min-files>2</journal-min-files>

            <journal-pool-files>10</journal-pool-files>

            <journal-device-block-size>4096</journal-device-block-size>

            <journal-file-size>10M</journal-file-size>
            
            <!--
            This value was determined through a calculation.
            Your system could perform 41.67 writes per millisecond
            on the current journal configuration.
            That translates as a sync write every 24000 nanoseconds.

            Note: If you specify 0 the system will perform writes directly to the disk.
                  We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
            -->
            <journal-buffer-timeout>24000</journal-buffer-timeout>


            <!--
            When using ASYNCIO, this will determine the writing queue depth for libaio.
            -->
            <journal-max-io>4096</journal-max-io>
            <!--
            You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
               <network-check-NIC>theNicName</network-check-NIC>
            -->

            <!--
            Use this to use an HTTP server to validate the network
               <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->

            <!-- <network-check-period>10000</network-check-period> -->
            <!-- <network-check-timeout>1000</network-check-timeout> -->

            <!-- this is a comma separated list, no spaces, just DNS or IPs
               it should accept IPV6

               Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                        Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                        You can use a list of multiple IPs; any successful ping will allow the server to continue running -->
            <!-- <network-check-list>10.0.0.1</network-check-list> -->

            <!-- use this to customize the ping used for ipv4 addresses -->
            <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->

            <!-- use this to customize the ping used for ipv6 addresses -->
            <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->



            <connectors>
               <!-- Connector used to be announced through cluster connections and notifications -->
               <connector name="artemis">tcp://${POD_IP}:61616</connector>
            </connectors>



            <!-- how often we are looking for how many bytes are being used on the disk, in ms -->
            <disk-scan-period>5000</disk-scan-period>

            <!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. -->
            <max-disk-usage>90</max-disk-usage>

            <!-- should the broker detect dead locks and other issues -->
            <critical-analyzer>true</critical-analyzer>

            <critical-analyzer-timeout>120000</critical-analyzer-timeout>

            <critical-analyzer-check-period>60000</critical-analyzer-check-period>

            <critical-analyzer-policy>HALT</critical-analyzer-policy>

            
            <page-sync-timeout>168000</page-sync-timeout>


            <!-- the system will enter into page mode once you hit this limit.
               This is an estimate in bytes of how much the messages are using in memory.

               The system will use half of the available memory (-Xmx) by default for the global-max-size.
               You may specify a different value here if you need to customize it to your needs.

               <global-max-size>100Mb</global-max-size>
            -->

            <acceptors>

               <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
               <!-- amqpCredits: The number of credits sent to AMQP producers -->
               <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
               <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                          as duplicate detection requires applicationProperties to be parsed on the server. -->
               <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                             default: 102400, -1 would mean to disable large message control -->

               <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                        "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                        See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->


               <!-- Acceptor for every supported protocol -->
               <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>

               <!-- AMQP Acceptor. Listens on the default AMQP port for AMQP traffic. -->
               <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

               <!-- STOMP Acceptor. -->
               <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

               <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
               <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

               <!-- MQTT Acceptor -->
               <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

            </acceptors>

            <!-- IMPORTANT: MUST BE CHANGED -->
            <cluster-user>${CLUSTER_USER}</cluster-user>
            <cluster-password>${CLUSTER_PASSWORD}</cluster-password>

            <broadcast-groups>
               <broadcast-group name="bg-group1">
                  <group-address>${BROADCAST_IP}</group-address>
                  <group-port>${BROADCAST_PORT}</group-port>
                  <broadcast-period>5000</broadcast-period>
                  <connector-ref>artemis</connector-ref>
               </broadcast-group>
            </broadcast-groups>

            <discovery-groups>
               <discovery-group name="dg-group1">
                  <group-address>${BROADCAST_IP}</group-address>
                  <group-port>${BROADCAST_PORT}</group-port>
                  <refresh-timeout>10000</refresh-timeout>
               </discovery-group>
            </discovery-groups>

            <cluster-connections>
                <cluster-connection name="artemis">
                   <address></address>
                   <connector-ref>artemis</connector-ref>
                   <check-period>30000</check-period>
                   <connection-ttl>60000</connection-ttl>
                   <min-large-message-size>102400</min-large-message-size>
                   <call-timeout>30000</call-timeout>
                   <retry-interval>1000</retry-interval>
                   <retry-interval-multiplier>1.0</retry-interval-multiplier>
                   <max-retry-interval>2000</max-retry-interval>
                   <initial-connect-attempts>-1</initial-connect-attempts>
                   <reconnect-attempts>-1</reconnect-attempts>
                   <use-duplicate-detection>true</use-duplicate-detection>
                   <message-load-balancing>ON_DEMAND</message-load-balancing>
                   <max-hops>1</max-hops>
                   <confirmation-window-size>1048576</confirmation-window-size>
                   <call-failover-timeout>-1</call-failover-timeout>
                   <notification-interval>1000</notification-interval>
                   <notification-attempts>100</notification-attempts>
                   <discovery-group-ref discovery-group-name="dg-group1"/>
                </cluster-connection>
            </cluster-connections>


            <ha-policy>
               <replication>
                   <colocated>
                      <request-backup>true</request-backup>
                      <max-backups>1</max-backups>
                      <backup-request-retries>-1</backup-request-retries>
                      <backup-request-retry-interval>5000</backup-request-retry-interval>
                      
                      <master>
                         <vote-on-replication-failure>true</vote-on-replication-failure>
                         <check-for-live-server>true</check-for-live-server>
                      </master>
                      <slave>
                         <scale-down/>

                         <!-- when a master returns, let clients reconnect to the master again -->
                         <!-- NEEDS TESTING
                         <allow-failback>true</allow-failback>
                         <failback-delay>10000</failback-delay>
                         -->
                      </slave>
                   </colocated>
                </replication>
             </ha-policy>

            <security-settings>
               <security-setting match="#">
                  <permission type="createNonDurableQueue" roles="amq"/>
                  <permission type="deleteNonDurableQueue" roles="amq"/>
                  <permission type="createDurableQueue" roles="amq"/>
                  <permission type="deleteDurableQueue" roles="amq"/>
                  <permission type="createAddress" roles="amq"/>
                  <permission type="deleteAddress" roles="amq"/>
                  <permission type="consume" roles="amq"/>
                  <permission type="browse" roles="amq"/>
                  <permission type="send" roles="amq"/>
                  <!-- we need this otherwise ./artemis data imp wouldn't work -->
                  <permission type="manage" roles="amq"/>
               </security-setting>
            </security-settings>

            <address-settings>
               <!-- if you define auto-create on certain queues, management has to be auto-create -->
               <address-setting match="activemq.management#">
                  <dead-letter-address>DLQ</dead-letter-address>
                  <expiry-address>ExpiryQueue</expiry-address>
                  <redelivery-delay>0</redelivery-delay>
                  <!-- with -1 only the global-max-size is in use for limiting -->
                  <max-size-bytes>-1</max-size-bytes>
                  <message-counter-history-day-limit>10</message-counter-history-day-limit>
                  <address-full-policy>PAGE</address-full-policy>
                  <auto-create-queues>true</auto-create-queues>
                  <auto-create-addresses>true</auto-create-addresses>
               </address-setting>
               <!--default for catch all-->
               <address-setting match="#">
                  <dead-letter-address>DLQ</dead-letter-address>
                  <expiry-address>ExpiryQueue</expiry-address>
                  <redelivery-delay>0</redelivery-delay>
                  <!-- with -1 only the global-max-size is in use for limiting -->
                  <max-size-bytes>-1</max-size-bytes>
                  <message-counter-history-day-limit>10</message-counter-history-day-limit>
                  <address-full-policy>PAGE</address-full-policy>
                  <auto-create-queues>true</auto-create-queues>
                  <auto-create-addresses>true</auto-create-addresses>
                  <auto-create-jms-queues>true</auto-create-jms-queues>
                  <auto-create-jms-topics>true</auto-create-jms-topics>
                  
                  <!-- redistribute messages in a queue if there is no consumer connected to the current node -->
                  <redistribution-delay>0</redistribution-delay>

               </address-setting>
            </address-settings>

            <addresses>
               <address name="DLQ">
                  <anycast>
                     <queue name="DLQ" />
                  </anycast>
               </address>
               <address name="ExpiryQueue">
                  <anycast>
                     <queue name="ExpiryQueue" />
                  </anycast>
               </address>

            </addresses>


            <!-- Uncomment the following if you want to use the standard LoggingActiveMQServerPlugin plugin to log events
            <broker-plugins>
               <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
                  <property key="LOG_ALL_EVENTS" value="true"/>
                  <property key="LOG_CONNECTION_EVENTS" value="true"/>
                  <property key="LOG_SESSION_EVENTS" value="true"/>
                  <property key="LOG_CONSUMER_EVENTS" value="true"/>
                  <property key="LOG_DELIVERING_EVENTS" value="true"/>
                  <property key="LOG_SENDING_EVENTS" value="true"/>
                  <property key="LOG_INTERNAL_EVENTS" value="true"/>
               </broker-plugin>
            </broker-plugins>
            -->

         </core>
      </configuration>
---
apiVersion: v1
kind: ReplicationController
metadata: 
  name: artemis-rc
  namespace: artemis
spec: 
  replicas: 1
  selector: 
    app: artemis-rc
  template: 
    metadata: 
      name: artemis-test-pod
      namespace: artemis
      labels: 
        app: artemis-rc
    spec: 
      containers: 
        - name: artemis
          image: artemis:2.16.0
          # command: ['sh', '-c', 'sleep 36000']
          args: ["run", "--allow-kill", "--", "xml:/data/bootstrap.xml"]
          volumeMounts: 
            - name: artemis-config
              subPath: bootstrap.xml
              mountPath: /data/bootstrap.xml
              readOnly: true
            - name: artemis-config
              subPath: broker.xml
              mountPath: /data/broker.xml
              readOnly: true
          env: 
            - name: POD_NAME
              valueFrom: 
                fieldRef: 
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom: 
                fieldRef: 
                  fieldPath: status.podIP
            - name: BROADCAST_IP
              value: 231.7.7.7
            - name: BROADCAST_PORT
              value: "9876"
            - name: "EXTRA_ARGS"
              value: "" # extra creation args, to be precise
            - name: CLUSTER_USER
              value: cluster-admin
            - name: CLUSTER_PASSWORD
              value: password-admin
            - name: ARTEMIS_CLUSTER_PROPS
              value: |-
                -DPOD_NAME=$(POD_NAME) 
                -DPOD_IP=$(POD_IP) 
                -DBROADCAST_IP=$(BROADCAST_IP) 
                -DBROADCAST_PORT=$(BROADCAST_PORT)
                -DCLUSTER_USER=$(CLUSTER_USER) 
                -DCLUSTER_PASSWORD=$(CLUSTER_PASSWORD)
      volumes: 
        - name: artemis-config
          configMap: 
            defaultMode: 420
            name: artemis-config
---
apiVersion: v1
kind: Pod
metadata: 
  name: artemis-main
  namespace: artemis
  labels: 
    app: artemis-main
spec: 
  containers: 
    - name: artemis
      image: artemis:2.16.0
      #command: ['sh', '-c', 'sleep 36000']
      args: ["run", "--allow-kill", "--", "xml:/data/bootstrap.xml"]
      volumeMounts: 
        - name: artemis-config
          subPath: bootstrap.xml
          mountPath: /data/bootstrap.xml
          readOnly: true
        - name: artemis-config
          subPath: broker.xml
          mountPath: /data/broker.xml
          readOnly: true
      env: 
        - name: POD_NAME
          valueFrom: 
            fieldRef: 
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom: 
            fieldRef: 
              fieldPath: status.podIP
        - name: BROADCAST_IP
          value: 231.7.7.7
        - name: BROADCAST_PORT
          value: "9876"
        - name: "EXTRA_ARGS"
          value: "" # extra creation args, to be precise
        - name: CLUSTER_USER
          value: cluster-admin
        - name: CLUSTER_PASSWORD
          value: password-admin
        - name: ARTEMIS_CLUSTER_PROPS
          value: |-
            -DPOD_NAME=$(POD_NAME) 
            -DPOD_IP=$(POD_IP) 
            -DBROADCAST_IP=$(BROADCAST_IP) 
            -DBROADCAST_PORT=$(BROADCAST_PORT)
            -DCLUSTER_USER=$(CLUSTER_USER) 
            -DCLUSTER_PASSWORD=$(CLUSTER_PASSWORD)
  volumes: 
    - name: artemis-config
      configMap: 
        defaultMode: 420
        name: artemis-config
{code}
The above manifests do work in a cluster that supports UDP broadcasting.
Sadly, Kubernetes usually does not support that across multiple nodes 
(machines, VPSs, etc.).
We do not use the root user inside of the Docker image, for security reasons, 
so one needs to somehow bake the KUBE_PING JGroups plugin into the Docker 
image for peer discovery to work properly in a Kubernetes environment.

Before baking anything into the image, the JGroups library should be updated 
to the latest 4.x release (preferably 5.x, though as far as I recall that is 
still in beta), and then one of the current KUBE_PING plugin versions can be 
used.

The KUBE_PING version supported by the currently shipped JGroups version 
(3.6.x) is 0.9.3. A sketch of what JGroups-based discovery could look like is 
shown below.
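
To sketch the JGroups route on the Artemis side: the broadcast and discovery 
groups reference a JGroups stack file instead of a UDP address (the 
jgroups-file/jgroups-channel elements are the documented Artemis ones; the 
JGroups stack itself is trimmed for brevity and its attribute values are 
assumptions):
{code:xml}
<!-- jgroups.xml (sketch): TCP transport with KUBE_PING discovery.
     A real stack needs the full protocol list; this is abbreviated. -->
<config xmlns="urn:org:jgroups">
   <TCP bind_port="7800"/>
   <!-- asks the Kubernetes API for peer pods; namespace/labels are assumptions -->
   <org.jgroups.protocols.kubernetes.KUBE_PING namespace="artemis" labels="app=artemis-rc"/>
   <MERGE3/>
   <FD_ALL/>
   <VERIFY_SUSPECT/>
   <pbcast.NAKACK2/>
   <UNICAST3/>
   <pbcast.STABLE/>
   <pbcast.GMS/>
</config>

<!-- broker.xml (sketch): replace the UDP groups with the JGroups channel -->
<broadcast-group name="bg-group1">
   <jgroups-file>jgroups.xml</jgroups-file>
   <jgroups-channel>artemis_broadcast</jgroups-channel>
   <connector-ref>artemis</connector-ref>
</broadcast-group>
<discovery-group name="dg-group1">
   <jgroups-file>jgroups.xml</jgroups-file>
   <jgroups-channel>artemis_broadcast</jgroups-channel>
   <refresh-timeout>10000</refresh-timeout>
</discovery-group>
{code}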

 

The proper way to go on *Kubernetes* is a "StatefulSet" instead of the 
ReplicationController/ReplicaSet that I used in the example YAML manifests 
above, in case anyone wants to continue from where I left off. A minimal 
sketch follows.
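
A minimal sketch of that shape, reusing the names from the manifests above 
(the headless Service it requires, the mount path, and the PVC sizing are 
assumptions):
{code:yaml}
# Sketch only: StatefulSet replacement for the ReplicationController above.
# Assumes a headless Service named "artemis" exists for stable pod DNS.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: artemis
  namespace: artemis
spec:
  serviceName: artemis            # the headless Service (not shown)
  replicas: 2
  selector:
    matchLabels:
      app: artemis
  template:
    metadata:
      labels:
        app: artemis
    spec:
      containers:
        - name: artemis
          image: artemis:2.16.0
          args: ["run", "--", "xml:/data/bootstrap.xml"]
          volumeMounts:
            - name: artemis-data
              mountPath: /var/lib/artemis/data   # path is an assumption
  volumeClaimTemplates:             # one stable journal volume per pod
    - metadata:
        name: artemis-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
{code}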

 

For the *Docker image*, there is already one that is (despite the 
configuration mess) well thought out, but sadly archived. The maintainer 
tried to donate it to the foundation; I don't know what happened, but the 
result is visible now.
The repository was archived only a few days ago: 
[https://github.com/vromero/activemq-artemis-docker]

I asked the maintainer to take part in the other discussion: 
[https://github.com/vromero/activemq-artemis-docker/issues/181]

 

 



> Create Docker Image
> -------------------
>
>                 Key: AMQ-8149
>                 URL: https://issues.apache.org/jira/browse/AMQ-8149
>             Project: ActiveMQ
>          Issue Type: New Feature
>    Affects Versions: 5.17.0
>            Reporter: Matt Pavlovich
>            Assignee: Matt Pavlovich
>            Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> [ ] jib or jkube mvn plugin
> [ ] Create a general container that supports most use cases (enable all 
> protocols on default ports, etc)
> [ ] Provide artifacts for users to build customized containers
> Tasks:
> [Pending] Creation of Docker repository for ActiveMQ INFRA-21430
> [ ] Add activemq-docker module to 5.17.x
> [ ] Add dockerhub deployment to release process


