I'm not sure how you're doing this... you should have 3 pairs of active/backup for this to work... or do you not care about having slaves for 2 of the servers? It seems to me that you have shared storage and 2 servers waiting to become live? They won't be part of the topology until they get the lock. I will need some clarification on what you're doing, and the version you're using.

On Fri, May 5, 2017 at 1:16 AM, 怀守昆 <[email protected]> wrote:
> Sorry for the late response.
>
> I just updated the test plan: 3 masters and 1 slave, using static connectors.
>
> After the 4 nodes started, the slave became the backup of 1 master.
> Then I used iptables to block outgoing packets on port 61616 on the slave side, so it is a kind of isolation.
> 60 seconds later, the backup became live with the same node id.
>
> I tried the master and the "slave"; both can accept messages.
>
> One more question: why does the live stop replication before the quorum vote result comes out?
> In this case, if the vote result is to shut down, then we may lose some messages.
>
>
> ----- Original Message -----
> From: Clebert Suconic <[email protected]>
> To: [email protected], [email protected]
> Subject: Re: Artemis backup failover even quorum vote failed because of empty networkhealthcheck
> Date: May 5, 2017, 10:57
>
>
> Can you try master?
> On Thu, May 4, 2017 at 10:53 PM, 怀守昆 <[email protected]> wrote:
>> I'm testing Artemis network isolation using quorum voting.
>>
>> The backup failed over even after the quorum vote returned false.
>>
>> I checked the code and found that in SharedNothingBackupQuorum#decideOnAction,
>> after the isLiveDown() test, the networkHealthCheck is also checked.
>> If no network health check URL/address is configured, it always returns true.
>> So the backup decides it should fail over.
>>
>> Will this be an issue?
>>
>> I use Artemis 2.0.0, with 1 master and 1 slave.
>> According to the user guide, the backup should never fail over.
> --
> Clebert Suconic

--
Clebert Suconic
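
For anyone following along, here is a minimal, self-contained Java sketch of the behaviour described in the quoted messages. It is not the actual SharedNothingBackupQuorum or NetworkHealthCheck source; apart from the names quoted above, every class, method, and address name here (IsolationCheckSketch, networkCheck, shouldFailover, 10.0.0.254) is made up for illustration. The point it shows: with an empty address/URL list the health check trivially passes, so an isolated backup that already believes the live is down goes ahead and activates.

// Hypothetical sketch of the decision flow described in the thread --
// NOT the real Artemis code. Java 9+ (uses List.of).
import java.net.InetAddress;
import java.util.List;

public class IsolationCheckSketch {

   // Stand-in for the network health check: pings a configured list of hosts.
   static boolean networkCheck(List<String> hostsToPing) {
      if (hostsToPing.isEmpty()) {
         // Nothing configured to ping, so the check trivially "passes".
         // This is the case being questioned: an isolated backup with no
         // check addresses still sees the network as healthy.
         return true;
      }
      return hostsToPing.stream().allMatch(IsolationCheckSketch::reachable);
   }

   static boolean reachable(String host) {
      try {
         return InetAddress.getByName(host).isReachable(1_000);
      } catch (Exception e) {
         return false;
      }
   }

   // The decision as described in the thread: the live looks down to the
   // isolated backup, the network check passes by default, so it fails over.
   static boolean shouldFailover(boolean liveLooksDown, List<String> hostsToPing) {
      return liveLooksDown && networkCheck(hostsToPing);
   }

   public static void main(String[] args) {
      // Backup isolated via iptables: it cannot reach the live, and with no
      // network-check addresses configured the empty check returns true,
      // so the backup activates -> two lives with the same node id.
      System.out.println(shouldFailover(true, List.of()));             // true
      // With a witness address configured, an isolated backup's check fails
      // (assuming 10.0.0.254 is unreachable from it) and it stays a backup.
      System.out.println(shouldFailover(true, List.of("10.0.0.254"))); // false if unreachable
   }
}

Under that reading, configuring at least one pingable witness address (for example a gateway that only a non-isolated broker can reach) is what gives the health check something to veto the failover with; an empty list gives it nothing to veto.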
