Alex, I always use the highest-durability sync policy for both the master
and the replica: SYNC.
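
For reference, my understanding is that SYNC/SYNC corresponds to a BDB JE
durability roughly like the sketch below (based on the JE API docs, not the
broker source; the SIMPLE_MAJORITY ack policy is my assumption for the
sketch, not necessarily what the broker uses):

    import com.sleepycat.je.Durability;
    import com.sleepycat.je.Durability.ReplicaAckPolicy;
    import com.sleepycat.je.Durability.SyncPolicy;

    public class DurabilitySketch
    {
        public static void main(String[] args)
        {
            // SYNC locally and SYNC on replicas: every commit is fsynced
            // on the master and on each replica before it is acknowledged,
            // which is the strongest combination JE offers.
            // The ack policy below is an assumption for this sketch only.
            Durability durability = new Durability(
                    SyncPolicy.SYNC,                   // master sync policy
                    SyncPolicy.SYNC,                   // replica sync policy
                    ReplicaAckPolicy.SIMPLE_MAJORITY); // assumed ack policy
            System.out.println(durability);
        }
    }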

Also, for the virtual host, I am using this custom context variable:
use_async_message_store_recovery=true

We use that because we used to have some queues with a large number of
messages.  I wanted to bring it up in case it matters, but we've been using
that setting for a while and have never had this issue.
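
In case it helps, the variable sits in the virtual host's "context" map in
the broker's JSON configuration, along these lines (a trimmed sketch; only
the context entry is our real setting, the surrounding attributes are
placeholders):

    {
      "name" : "dixonbroker",
      "context" : {
        "use_async_message_store_recovery" : "true"
      }
    }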


After I made node2 the master, node2's log file shows the following
messages, which concern me:

2020-01-24 11:00:00,211 DEBUG [Feeder Output for dixon01]
(c.s.j.r.i.n.Feeder) - dixon02 Feeder output thread for replica dixon01
started at VLSN 7,024 master at 7,023 (DTVLSN:7,022) VLSN delta=-1
socket=(dixon01(1))com.sleepycat.je.rep.utilint.net.SimpleDataChannel@34d03b0c
2020-01-24 11:00:00,243 INFO  [StateChange-dixonbroker:dixon02]
(q.m.h.role_changed) - [Broker] [grp(/dixonbroker)] HA-1010 : Role change
reported: Node : 'dixon02' (spgmqtst2:5011) : from 'WAITING' to 'MASTER'
2020-01-24 11:00:00,243 INFO  [Queue Recoverer : app_test (vh: dixonbroker)]
(q.m.t.recovered) - [Broker] [vh(/dixonbroker)/ms(ProvidedBDBMessageStore)]
TXN-1005 : Recovered 0 messages for queue app_test


See that last log message: "Recovered 0 messages for queue app_test".
Isn't that an issue?  I would expect to see 50 messages recovered.
Perhaps I'm misunderstanding the log messages.
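
For what it's worth, this is roughly how I check the queue's depth through
the broker's REST API (the port, credentials, and the virtual host node
name in the path are placeholders, not our real values):

    curl -u guest:guest \
        "http://spgmqtst2:8080/api/latest/queue/dixon02/dixonbroker/app_test"

and then I look at queueDepthMessages in the JSON response.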

Thanks
Bryan



