Could you also provide the steps you followed to reproduce the issue? It's
important to not only have the code & configuration resources but the exact
procedure you followed so we are comparing apples to apples, as it were.

An automated process would be ideal here as that drastically speeds up the
investigation/debugging process. We use a Maven plugin in all our examples
to create broker instances (even clustered instances) and run clients
against them in an automated way. It's pretty straightforward and may be
beneficial for you. Those are found in the "examples" directory in the
broker distribution.
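
In case it helps, the plugin wiring in those example poms looks roughly
like this (a sketch from memory, so treat the goal names and configuration
as approximate and check an actual example pom for the exact details):

   <plugin>
      <groupId>org.apache.activemq</groupId>
      <artifactId>artemis-maven-plugin</artifactId>
      <executions>
         <execution>
            <id>create</id>
            <goals>
               <goal>create</goal>
            </goals>
         </execution>
         <execution>
            <id>runClient</id>
            <goals>
               <goal>runClient</goal>
            </goals>
            <configuration>
               <clientClass>org.apache.activemq.artemis.jms.example.QueueExample</clientClass>
            </configuration>
         </execution>
      </executions>
   </plugin>

The "create" execution builds a broker instance as part of the build, and
"runClient" runs a client class against it, so the whole reproduction is a
single "mvn verify".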


Justin

On Wed, Jun 2, 2021 at 8:55 PM Thai Le <[email protected]> wrote:

> Hi Justin,
>
> I have a simple project that can be used to reproduce the redistribution
> issue at https://github.com/lnthai2002/SimpleArtemisClient.
>
> Hope you have some time to take a look,
>
> Thai
>
> On Wed, Jun 2, 2021 at 1:07 PM Thai Le <[email protected]> wrote:
>
> > Hi Justin,
> >
> > I am still working on the JMS queue for now. I set the
> > connection-ttl-override = 60000 explicitly on the server:
> > ...
> >  <security-settings>
> >       <security-setting match="#">
> >         <permission type="createNonDurableQueue" roles="amq"/>
> >         <permission type="deleteNonDurableQueue" roles="amq"/>
> >         <permission type="createDurableQueue" roles="amq"/>
> >         <permission type="deleteDurableQueue" roles="amq"/>
> >         <permission type="createAddress" roles="amq"/>
> >         <permission type="deleteAddress" roles="amq"/>
> >         <permission type="consume" roles="amq"/>
> >         <permission type="browse" roles="amq"/>
> >         <permission type="send" roles="amq"/>
> >         <!-- we need this otherwise ./artemis data imp wouldn't work -->
> >         <permission type="manage" roles="amq"/>
> >       </security-setting>
> >     </security-settings>
> >
> >     <connection-ttl-override>60000</connection-ttl-override>
> >
> >     <ha-policy>
> >       <live-only>
> >         <scale-down>
> >           <connectors>
> >             <connector-ref>activemq-artemis-master-0</connector-ref>
> >             <connector-ref>activemq-artemis-master-1</connector-ref>
> >           </connectors>
> >         </scale-down>
> >       </live-only>
> >     </ha-policy>
> > ...
> > and tried killing the consumer during consumption, but I don't see any
> > log about cleaning up sessions, even after 30 min.
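> >
> > For completeness, the redistribution setting itself is in
> > address-settings, along these lines (sketched from memory, and assuming
> > a catch-all match; the actual match may be narrower):
> >
> >     <address-settings>
> >       <address-setting match="#">
> >         <redistribution-delay>0</redistribution-delay>
> >       </address-setting>
> >     </address-settings>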
> >
> > Thai
> >
> > On Tue, Jun 1, 2021 at 10:53 PM Thai Le <[email protected]> wrote:
> >
> >> I did wait for more than 30 min and I checked the web console of the old
> >> Artemis node: the consumer count was 0 while the count on the other node
> >> was 1. At some point, I saw the log of the old node print something like
> >> "clean up resource ..." but the queue still had 7 messages.
> >>
> >> I'll try to reduce the connection TTL tomorrow to see if anything changes.
> >>
> >> Thai
> >>
> >> On Tue, Jun 1, 2021, 22:30 Justin Bertram <[email protected]> wrote:
> >>
> >>> Thanks for the clarification.
> >>>
> >>> Did you wait for the connection TTL to elapse before looking for
> >>> redistribution? Given your description, the consumer was terminated
> >>> before it properly closed its connection, so the broker would still
> >>> think the consumer was active and therefore wouldn't redistribute any
> >>> messages until the dead connection's TTL elapsed and the broker closed
> >>> it. You would see logging on the broker indicating that it was cleaning
> >>> up a session.
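> >>>
> >>> (As a side note, if memory serves, the TTL can also be set from the
> >>> client side via a URL parameter, something like:
> >>>
> >>>   tcp://localhost:61616?connectionTTL=60000
> >>>
> >>> though a broker-side connection-ttl-override takes precedence over
> >>> whatever the client asks for.)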
> >>>
> >>> Also, are you using a JMS queue or topic?
> >>>
> >>>
> >>> Justin
> >>>
> >>> On Tue, Jun 1, 2021 at 9:13 PM Thai Le <[email protected]> wrote:
> >>>
> >>> > Hi Justin,
> >>> >
> >>> > It is not the same question. The question posted on Stack Overflow is
> >>> > about the case where one of the brokers crashes and comes back. This
> >>> > question is about the message consumer/queue listener dying and coming
> >>> > back.
> >>> >
> >>> > A few weeks back I was able to make this work on a cluster with 3
> >>> > masters and 3 slaves. Now I don't have the slaves.
> >>> >
> >>> > I hope that's clearer.
> >>> >
> >>> > Thai Le
> >>> >
> >>> >
> >>> > On Tue, Jun 1, 2021, 21:52 Justin Bertram <[email protected]>
> wrote:
> >>> >
> >>> > > Isn't this essentially the same question you asked on Stack Overflow
> >>> > > [1]? If so, why are you asking it again here when you have marked
> >>> > > the answer as correct? If not, please elaborate on how the two
> >>> > > use-cases differ. Thanks!
> >>> > >
> >>> > >
> >>> > > Justin
> >>> > >
> >>> > > [1]
> >>> > > https://stackoverflow.com/questions/67644488/activemq-artemis-cluster-does-not-redistribute-messages-after-one-instance-crash
> >>> > >
> >>> > > On Tue, Jun 1, 2021 at 8:42 PM Thai Le <[email protected]>
> wrote:
> >>> > >
> >>> > > > Hello guys,
> >>> > > >
> >>> > > > I have a cluster of 2 Artemis brokers (2.17.0) without HA running
> >>> > > > in Kubernetes. They are configured with redistribution-delay=0,
> >>> > > > but when the consumer dies and comes back it connects to the other
> >>> > > > Artemis node, and redistribution of leftover messages from the
> >>> > > > previous Artemis node does not happen.
> >>> > > >
> >>> > > > The client connection is defined like this:
> >>> > > >
> >>> > > > spring.artemis.broker-url=(tcp://activemq-artemis-master-0.activemq-artemis-master.n-stack-nle.svc.cluster.local:61616,tcp://activemq-artemis-master-1.activemq-artemis-master.n-stack-nle.svc.cluster.local:61616)
> >>> > > >
> >>> > > > In my test, I sent 10 messages to the queue, then I killed the
> >>> > > > consumer after it consumed the first 3. When Kubernetes revived the
> >>> > > > consumer, I saw it reconnect to the other Artemis pod (a queue with
> >>> > > > the same name was created), but that queue was empty. The queue on
> >>> > > > the previous Artemis pod still had 7 messages undelivered.
> >>> > > >
> >>> > > > Is there a config I am missing?
> >>> > > >
> >>> > > > Regards
> >>> > > >
> >>> > > > Thai Le
> >>> > > >
> >>> > >
> >>> >
> >>>
> >>
> >
> > --
> > Where there is will, there is a way
> >
>
>
> --
> Where there is will, there is a way
>
