The recommendation is to use KahaDB, not replicated KahaDB. Replicated KahaDB was abandoned before it was completed, because replicated LevelDB was expected to be better. But then LevelDB was deprecated because no one was interested in supporting it (neither fixing bugs nor answering questions on this list), so shared-storage KahaDB is the best remaining data store. That's not to say that you can't run LevelDB, but you should be prepared to answer questions (such as the ZooKeeper one you asked here) yourself by going through the code, and to make your own bug fixes if necessary, because it's unlikely that anyone will answer your LevelDB questions on this mailing list. Sorry.
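For reference, a shared-storage KahaDB setup is just the stock kahaDB persistence adapter pointed at a directory on shared storage, run on two or more brokers. The mount path and broker name below are placeholders, not anything from this thread:

```xml
<!-- activemq.xml sketch: shared-storage (shared file system master/slave)
     KahaDB. /mnt/shared is a hypothetical NFSv4/SAN mount; whichever
     broker acquires the file lock in that directory becomes master,
     the others block as slaves until the lock is released. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <kahaDB directory="/mnt/shared/activemq/kahadb"/>
  </persistenceAdapter>
</broker>
```

Note that the shared file system must support reliable locking (NFSv4 rather than NFSv3, for example), or both brokers can end up believing they are master.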
Tim

On Mar 1, 2017 7:54 AM, "Ivan Yiu" <[email protected]> wrote:

I have run into serious trouble recently. We are starting an implementation on ActiveMQ, and the requirement is to guarantee that no message is lost from our JMS producer. The producer sends persistent messages to the broker. We are currently using a replicated LevelDB setup with 3 ZooKeeper nodes.

The first question: the official documentation states that LevelDB is deprecated and that KahaDB should be used instead. But the replicated KahaDB page claims that it is under review and not currently supported. I am confused.

The other problem: with the 3-node ZK + LevelDB replication setup, sending 60,000 requests per minute and then stopping the leading broker (shutting down the VM) results in message loss. JMeter reports 100% success, so it seems all messages were sent to the broker successfully. Failover works fine, and a slave picks up the traffic within 5 seconds, but a huge number of messages (over 10,000) are lost after the failover.

Any hints? Do the ZK "weight" or "sync" parameters need to be adjusted? If ZK ensures that at least one slave has a copy of the message in memory, I really don't understand why messages would be lost.
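On the "sync" question: it is the replicated LevelDB store's sync attribute (not a ZooKeeper setting) that controls when the master acknowledges a write back to the producer. A sketch follows; the replicas, zkAddress, and port values are placeholders, and the attribute values are from memory, so verify them against the Replicated LevelDB Store documentation before relying on them:

```xml
<!-- Hypothetical replicated LevelDB store configuration.
     sync="quorum_disk" asks the master to wait until a quorum of
     nodes has flushed the message to disk before acking, at a
     latency cost. The default (as I recall, quorum_mem) only
     requires the message to reach a quorum's memory, which leaves
     a larger loss window when the master VM is killed. -->
<persistenceAdapter>
  <replicatedLevelDB
      directory="activemq-data"
      replicas="3"
      bind="tcp://0.0.0.0:61619"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPath="/activemq/leveldb-stores"
      sync="quorum_disk"/>
</persistenceAdapter>
```

Even with stricter sync settings, given the reply above about LevelDB's unsupported status, chasing loss bugs in this store may not be worth the effort compared to moving to shared-storage KahaDB.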
