Can anyone explain this error?

I'm updating Hibernate Search, and I have a simple test which, in a loop:

- writes to the shared index
- adds or removes a node
- waits for the joins to complete
- verifies the index state

This is expected to work, as it did with all previous Infinispan versions.

I'm using Infinispan 5.1.1.FINAL and JGroups 3.0.5.Final.
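For context, the test is roughly this shape. This is only a sketch: the helper names and the in-memory "cluster" below are hypothetical stand-ins (the real test drives actual Infinispan nodes), it just illustrates the write / resize / wait / verify cycle:

```java
// Sketch of the test structure described above. Every helper here is a
// hypothetical stand-in, not the actual Hibernate Search test code.
import java.util.ArrayList;
import java.util.List;

public class ClusterResizeSketch {
    static final List<String> members = new ArrayList<>(List.of("node-0"));
    static final List<String> index = new ArrayList<>();

    static void writeToSharedIndex(int i) { index.add("doc-" + i); }          // write to shared index
    static void addNode(int i)            { members.add("node-" + i); }       // start a new node
    static void removeNode()              { members.remove(members.size() - 1); } // kill a node
    static void waitForJoins()            { /* real test: block until the new view is installed */ }
    static boolean indexStateOk(int i)    { return index.size() == i + 1; }   // all writes visible

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            writeToSharedIndex(i);
            if (i % 2 == 0) addNode(i + 1); else removeNode();
            waitForJoins();
            if (!indexStateOk(i)) {
                throw new AssertionError("index out of sync at iteration " + i);
            }
        }
        System.out.println("members=" + members.size() + " docs=" + index.size());
        // prints: members=1 docs=6
    }
}
```

The failure below happens on the "add a node" step: while the joiner pulls state for the Lucene caches, another membership change is detected and the view preparation is rolled back.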

2012-02-07 10:42:38,668 WARN  [CacheViewControlCommand] (OOB-4,sanne-20017) ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=LuceneIndexesMetadata, type=PREPARE_VIEW, sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesMetadata, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
        at java.util.concurrent.FutureTask.get(FutureTask.java:91)
        at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
        at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
        at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
        at org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
        at org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
        at org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
        at org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
        at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
        at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
        at org.jgroups.JChannel.up(JChannel.java:716)
        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
        at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
        at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
        at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
        at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
        at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
        at org.jgroups.protocols.Discovery.up(Discovery.java:355)
        at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesMetadata, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
        at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
        at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        ... 3 more
2012-02-07 10:42:38,706 WARN  [CacheViewControlCommand] (OOB-5,sanne-20017) ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=LuceneIndexesData, type=PREPARE_VIEW, sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=3}
        (stack trace identical to the one above)
2012-02-07 10:42:38,684 WARN  [UNICAST2] (OOB-7,sanne-2794) sanne-2794: my conn_id (6) != received conn_id (1); discarding STABLE message !
2012-02-07 10:42:38,671 WARN  [CacheViewControlCommand] (OOB-3,sanne-63971) ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=LuceneIndexesMetadata, type=PREPARE_VIEW, sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesMetadata, type=APPLY_STATE, sender=sanne-63971, viewId=8, state=24}
        (stack trace identical to the one above)
2012-02-07 10:42:38,677 WARN  [CacheViewControlCommand] (OOB-4,sanne-63971) ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=LuceneIndexesData, type=PREPARE_VIEW, sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE, sender=sanne-63971, viewId=8, state=22}
        (stack trace identical to the one above)
2012-02-07 10:42:38,718 WARN  [CacheViewControlCommand] (OOB-6,sanne-25511) ISPN000071: Caught exception when handling command CacheViewControlCommand{cache=LuceneIndexesData, type=PREPARE_VIEW, sender=sanne-3158, newViewId=8, newMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075], oldViewId=7, oldMembers=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE, sender=sanne-25511, viewId=8, state=19}
        (stack trace identical to the one above)
2012-02-07 10:42:38,733 ERROR [CacheViewsManagerImpl] (CacheViewInstaller-1,sanne-3158) ISPN000172: Failed to prepare view CacheView{viewId=8, members=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075]} for cache LuceneIndexesMetadata, rolling back to view CacheView{viewId=7, members=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesMetadata, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
        at java.util.concurrent.FutureTask.get(FutureTask.java:91)
        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:319)
        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:250)
        at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:876)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesMetadata, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
        (stack trace identical to the first WARN trace above)
        ... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesMetadata, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=4}
        (stack trace identical to the one above)
        ... 3 more
2012-02-07 10:42:38,737 ERROR [CacheViewsManagerImpl] (CacheViewInstaller-3,sanne-3158) ISPN000172: Failed to prepare view CacheView{viewId=8, members=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511, sanne-30075]} for cache LuceneIndexesData, rolling back to view CacheView{viewId=7, members=[sanne-3158, sanne-63971, sanne-20017, sanne-2794, sanne-25511]}
java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=3}
        (stack trace identical to the previous ERROR trace)
Caused by: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE, sender=sanne-20017, viewId=8, state=3}
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
        at java.util.concurrent.FutureTask.get(FutureTask.java:91)
        at 
org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93)
        at 
org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139)
        at 
org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116)
        at 
org.infinispan.statetransfer.BaseStateTransferTask.performStateTransfer(BaseStateTransferTask.java:93)
        at 
org.infinispan.statetransfer.BaseStateTransferManagerImpl.prepareView(BaseStateTransferManagerImpl.java:294)
        at 
org.infinispan.cacheviews.CacheViewsManagerImpl.handlePrepareView(CacheViewsManagerImpl.java:486)
        at 
org.infinispan.commands.control.CacheViewControlCommand.perform(CacheViewControlCommand.java:125)
        at 
org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:95)
        at 
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:161)
        at 
org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:141)
        at 
org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
        at 
org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
        at 
org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
        at 
org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:543)
        at org.jgroups.JChannel.up(JChannel.java:716)
        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
        at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881)
        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
        at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:383)
        at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:697)
        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:559)
        at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:167)
        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:282)
        at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
        at org.jgroups.protocols.Discovery.up(Discovery.java:355)
        at org.jgroups.protocols.TP.passMessageUp(TP.java:1174)
        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1722)
        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1704)
        ... 3 more
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException:
One or more nodes have left the cluster while replicating command
StateTransferControlCommand{cache=LuceneIndexesData, type=APPLY_STATE,
sender=sanne-20017, viewId=8, state=3}
        at 
org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:436)
        at 
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
        at 
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
        at 
org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
        at 
org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78)
        at 
org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:249)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        ... 3 more
_______________________________________________
infinispan-dev mailing list
[email protected]
https://lists.jboss.org/mailman/listinfo/infinispan-dev
