There is nothing new in that regard.

I would check if your nodes have the same time and are configured with NTP.
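
To dig into why the topology warning shows up on 2.30 specifically, you
could also turn up logging for the cluster topology classes in
etc/log4j2.properties. A rough sketch (the logger names below are my
assumption; adjust them to whichever class actually emits the warning, and
note that DEBUG on these packages is quite verbose, so enable it only while
reproducing the restart cycle):

      # etc/log4j2.properties -- extra detail around cluster topology updates
      logger.artemis_topology.name=org.apache.activemq.artemis.core.client.impl.Topology
      logger.artemis_topology.level=DEBUG
      logger.artemis_cluster.name=org.apache.activemq.artemis.core.server.cluster
      logger.artemis_cluster.level=DEBUG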

On Sun, Sep 3, 2023 at 4:56 AM Vilius Šumskas <vilius.sums...@rivile.lt>
wrote:

> Cluster connections are configured statically:
>
>       <cluster-connections>
>          <cluster-connection name="test-cluster">
>             <connector-ref>artemis-master</connector-ref>
>             <message-load-balancing>ON_DEMAND</message-load-balancing>
>             <max-hops>0</max-hops>
>             <static-connectors>
>                <connector-ref>artemis-slave</connector-ref>
>             </static-connectors>
>          </cluster-connection>
>       </cluster-connections>
>
> Is there a way I can debug why this message appears in 2.30 and not in
> 2.28? I'm trying to avoid unforeseen consequences before upgrading
> production clusters to 2.30.
>
> --
>     Vilius
>
> -----Original Message-----
> From: Justin Bertram <jbert...@apache.org>
> Sent: Friday, September 1, 2023 6:45 PM
> To: users@activemq.apache.org
> Subject: Re: There is a possible split brain on nodeID XXXXX after upgrade
> to 2.30
>
> How is your cluster-connection configured? Are you using broadcast &
> discovery groups? If so, I've seen instances of this when stopping and
> starting the broker quickly. As far as I can tell, the UDP multicast
> datagram is somehow delivered back to the broker that originally sent it.
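>
> For reference, a UDP broadcast/discovery based setup would look roughly
> like this (the group name, multicast address and port below are just
> placeholders, not taken from your configuration):
>
>       <broadcast-groups>
>          <broadcast-group name="my-broadcast-group">
>             <group-address>231.7.7.7</group-address>
>             <group-port>9876</group-port>
>             <broadcast-period>5000</broadcast-period>
>             <connector-ref>artemis</connector-ref>
>          </broadcast-group>
>       </broadcast-groups>
>
>       <discovery-groups>
>          <discovery-group name="my-discovery-group">
>             <group-address>231.7.7.7</group-address>
>             <group-port>9876</group-port>
>             <refresh-timeout>10000</refresh-timeout>
>          </discovery-group>
>       </discovery-groups>
>
>       <cluster-connections>
>          <cluster-connection name="my-cluster">
>             <connector-ref>artemis</connector-ref>
>             <discovery-group-ref discovery-group-name="my-discovery-group"/>
>          </cluster-connection>
>       </cluster-connections>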
>
> As long as your backup broker isn't running while your primary broker is
> running, there's no issue.
>
> FWIW, shared storage configurations are relatively immune to split brain
> due to the file locks enforced by the shared store.
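>
> For illustration, a shared-store primary typically carries something like
> the following ha-policy in broker.xml, with the backup holding the
> matching <slave> block (this is just a sketch, not your actual settings):
>
>       <ha-policy>
>          <shared-store>
>             <master>
>                <failover-on-shutdown>true</failover-on-shutdown>
>             </master>
>          </shared-store>
>       </ha-policy>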
>
>
> Justin
>
> On Fri, Sep 1, 2023 at 10:25 AM Vilius Šumskas <vilius.sums...@rivile.lt>
> wrote:
>
> > Hi,
> >
> > we have upgraded some of our environments from 2.28 to 2.30, others
> > from 2.24 to 2.30, and all upgraded instances now show this strange log
> > message during the live node restart cycle:
> >
> > There is a possible split brain on nodeID
> > 165c6eec-0429-11ed-a12a-42010a961402. Topology update ignored
> >
> > Full restart cycle log: https://p.defau.lt/?q4pc6mlUXzn66QY8zHVRjg
> >
> > The instances work normally and serve the clients, though. I never saw
> > this happening on 2.28 or earlier versions; only 2.30 behaves this way.
> > Is this something to worry about?
> >
> > All our instances are configured as HA live/backup clusters consisting
> > of 2 nodes. They share data on NFSv4.1-attached storage.
> >
> > --
> >     Best Regards,
> >     Vilius
> >
> >
>
-- 
Clebert Suconic
