Hi,
I have a cluster with 4 nodes. When I add a new node to join this cluster, it
fails.
Here is the trace:
[09:06:11,090][INFO][main][IgniteKernal] Config URL:
file:/home/ignite23/config/default-config.xml
[09:06:11,090][INFO][main][IgniteKernal] Daemon mode: off
[09:06:11,090][INFO][main][IgniteKernal] OS: Linux 3.10.0-123.el7.x86_64 amd64
[09:06:11,090][INFO][main][IgniteKernal] OS user: root
[09:06:11,091][INFO][main][IgniteKernal] PID: 7678
[09:06:11,091][INFO][main][IgniteKernal] Language runtime: Java Platform API
Specification ver. 1.8
[09:06:11,091][INFO][main][IgniteKernal] VM information: Java(TM) SE Runtime
Environment 1.8.0_102-b14 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM
25.102-b14
[09:06:11,092][INFO][main][IgniteKernal] VM total memory: 30.0GB
[09:06:11,092][INFO][main][IgniteKernal] Remote Management [restart: on, REST:
on, JMX (remote: on, port: 49114, auth: off, ssl: off)]
[09:06:11,094][INFO][main][IgniteKernal] IGNITE_HOME=/home/ignite23
[09:06:11,094][INFO][main][IgniteKernal] VM arguments: [-Xms30g, -Xmx30g,
-XX:+AggressiveOpts, -XX:MaxMetaspaceSize=256m, -XX:MaxDirectMemorySize=8g,
-XX:+AlwaysPreTouch, -XX:+UseG1GC, -XX:+ScavengeBeforeFullGC,
-XX:+DisableExplicitGC, -Djava.net.preferIPv4Stack=true,
-DIGNITE_SQL_MERGE_TABLE_MAX_SIZE=30000000,
-DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true, -DIGNITE_QUIET=true,
-DIGNITE_SUCCESS_FILE=/home/ignite23/work/ignite_success_01ffd588-c864-4d6d-aa45-ad16338acf93,
-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=49114,
-Dcom.sun.management.jmxremote.authenticate=false,
-Dcom.sun.management.jmxremote.ssl=false, -DIGNITE_HOME=/home/ignite23,
-DIGNITE_PROG_NAME=./ignite.sh]
[09:06:11,095][INFO][main][IgniteKernal] System cache's DataRegion size is
configured to 40 MB. Use DataStorageConfiguration.systemCacheMemorySize
property to change the setting.
[09:06:11,101][INFO][main][IgniteKernal] Configured caches [in 'sysMemPlc'
dataRegion: ['ignite-sys-cache']]
[09:06:11,104][INFO][main][IgniteKernal] 3-rd party licenses can be found at:
/home/ignite23/libs/licenses
[09:06:11,156][INFO][main][IgnitePluginProcessor] Configured plugins:
[09:06:11,156][INFO][main][IgnitePluginProcessor] ^-- None
[09:06:11,156][INFO][main][IgnitePluginProcessor]
[09:06:11,193][INFO][main][TcpCommunicationSpi] Successfully bound
communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0,
selectorsCnt=4, selectorSpins=0, pairedConn=false]
[09:06:21,237][WARNING][main][TcpCommunicationSpi] Message queue limit is set
to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
[09:06:21,257][WARNING][main][NoopCheckpointSpi] Checkpoints are disabled (to
enable configure any GridCheckpointSpi implementation)
[09:06:21,284][WARNING][main][GridCollisionManager] Collision resolution is
disabled (all jobs will be activated upon arrival).
[09:06:21,285][INFO][main][IgniteKernal] Security status [authentication=off,
tls/ssl=off]
[09:06:21,502][INFO][main][ClientListenerProcessor] Client connector processor
has started on TCP port 10800
[09:06:21,542][INFO][main][GridTcpRestProtocol] Command protocol successfully
started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[09:06:21,575][INFO][main][IgniteKernal] Non-loopback local IPs: 10.1.50.0,
10.1.50.1, 192.168.63.60
[09:06:21,575][INFO][main][IgniteKernal] Enabled local MACs: 024294C8D41F,
6AB9618820B2
[09:06:21,613][INFO][main][TcpDiscoverySpi] Successfully bound to TCP port
[port=47500, localHost=0.0.0.0/0.0.0.0,
locNodeId=8ad7a225-d6ad-4407-92fa-1035f43dfb4b]
[09:06:21,678][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted
incoming connection [rmtAddr=/192.168.63.47, rmtPort=59550]
[09:06:21,689][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning
a new thread for connection [rmtAddr=/192.168.63.47, rmtPort=59550]
[09:06:21,689][INFO][tcp-disco-sock-reader-#4][TcpDiscoverySpi] Started serving
remote node connection [rmtAddr=/192.168.63.47:59550, rmtPort=59550]
[09:06:26,657][WARNING][main][TcpDiscoverySpi] Node has not been connected to
topology and will repeat join process. Check remote nodes logs for possible
error messages. Note that large topology may require significant time to start.
Increase 'TcpDiscoverySpi.networkTimeout' configuration property if getting
this message on the starting nodes [networkTimeout=5000]
[09:06:46,753][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted
incoming connection [rmtAddr=/192.168.63.47, rmtPort=57627]
[09:06:46,753][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning
a new thread for connection [rmtAddr=/192.168.63.47, rmtPort=57627]
[09:06:46,754][INFO][tcp-disco-sock-reader-#5][TcpDiscoverySpi] Started serving
remote node connection [rmtAddr=/192.168.63.47:57627, rmtPort=57627]
[09:07:11,803][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted
incoming connection [rmtAddr=/192.168.63.47, rmtPort=60817]
[09:07:11,803][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning
a new thread for connection [rmtAddr=/192.168.63.47, rmtPort=60817]
[09:07:11,804][INFO][tcp-disco-sock-reader-#6][TcpDiscoverySpi] Started serving
remote node connection [rmtAddr=/192.168.63.47:60817, rmtPort=60817]
[09:07:16,790][INFO][tcp-disco-sock-reader-#4][TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.63.47:59550, rmtPort=59550]
[09:07:36,837][INFO][tcp-disco-sock-reader-#5][TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.63.47:57627, rmtPort=57627]
[09:07:36,867][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted
incoming connection [rmtAddr=/192.168.63.47, rmtPort=45884]
[09:07:36,868][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning
a new thread for connection [rmtAddr=/192.168.63.47, rmtPort=45884]
[09:07:36,868][INFO][tcp-disco-sock-reader-#7][TcpDiscoverySpi] Started serving
remote node connection [rmtAddr=/192.168.63.47:45884, rmtPort=45884]
Attached below is the trace from one node of the cluster.
Everything on that node is configured the same as on the other nodes.
What did I miss?
Thanks,
Lucky
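Based on the networkTimeout hint in the join warning above, I am considering raising the discovery timeouts in default-config.xml. A rough sketch of what I mean (the timeout values below are guesses, not tested on this cluster):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <!-- Guessed values, raised from the 5000 ms default flagged in the warning. -->
      <property name="networkTimeout" value="15000"/>
      <property name="ackTimeout" value="10000"/>
      <property name="socketTimeout" value="10000"/>
      <!-- existing ipFinder configuration stays as-is -->
    </bean>
  </property>
</bean>
```

Would that be the right direction, or is the timeout only a symptom here?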
[09:06:03,762][WARNING][exchange-worker-#102][diagnostic] Failed to wait for
partition map exchange [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], node=b94cf417-c03b-467f-ab27-c52e46b6cfa5]. Dumping pending
objects that might be the cause:
[09:06:13,105][WARNING][main][GridCachePartitionExchangeManager] Still waiting
for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b94cf417-c03b-467f-ab27-c52e46b6cfa5, addrs=[127.0.0.1, 192.168.63.47],
sockAddrs=[/127.0.0.1:47500, centos6-47/192.168.63.47:47500], discPort=47500,
order=11, intOrder=8, lastExchangeTime=1513213571791, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11, nodeId8=b94cf417,
msg=null, type=NODE_JOINED, tstamp=1513155330831], crd=TcpDiscoveryNode
[id=fcc47ef7-f080-4f88-93f5-2bc221dd1fcf, addrs=[127.0.0.1, 192.168.63.36],
sockAddrs=[/192.168.63.36:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1513155306680, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b94cf417-c03b-467f-ab27-c52e46b6cfa5, addrs=[127.0.0.1, 192.168.63.47],
sockAddrs=[/127.0.0.1:47500, centos6-47/192.168.63.47:47500], discPort=47500,
order=11, intOrder=8, lastExchangeTime=1513213571791, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11, nodeId8=b94cf417,
msg=null, type=NODE_JOINED, tstamp=1513155330831], nodeId=b94cf417,
evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=true, hash=240185775], init=true,
lastVer=null, partReleaseFut=PartitionReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0],
futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], futures=[]], TxReleaseFuture [topVer=AffinityTopologyVersion
[topVer=11, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0], futures=[]],
DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], futures=[]]]], exchActions=null, affChangeMsg=null,
initTs=1513155330861, centralizedAff=false, changeGlobalStateE=null,
done=false, state=SRV, evtLatch=0,
remaining=[eb9794d8-1c2a-4033-98e5-45733e564124,
66ccb735-506a-4381-91b9-9e98c3644b0e, fcc47ef7-f080-4f88-93f5-2bc221dd1fcf],
super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=1003723341]]]
[09:06:13,762][WARNING][exchange-worker-#102][diagnostic] Failed to wait for
partition map exchange [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], node=b94cf417-c03b-467f-ab27-c52e46b6cfa5]. Dumping pending
objects that might be the cause:
[09:06:23,762][WARNING][exchange-worker-#102][diagnostic] Failed to wait for
partition map exchange [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], node=b94cf417-c03b-467f-ab27-c52e46b6cfa5]. Dumping pending
objects that might be the cause:
[09:06:31,491][WARNING][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Timed out
waiting for message delivery receipt (most probably, the reason is in long GC
pauses on remote node; consider tuning GC and increasing 'ackTimeout'
configuration property). Will retry to send message with increased timeout
[currentTimeout=9919, rmtAddr=/192.168.63.60:47500, rmtPort=47500]
[09:06:33,106][WARNING][main][GridCachePartitionExchangeManager] Still waiting
for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b94cf417-c03b-467f-ab27-c52e46b6cfa5, addrs=[127.0.0.1, 192.168.63.47],
sockAddrs=[/127.0.0.1:47500, centos6-47/192.168.63.47:47500], discPort=47500,
order=11, intOrder=8, lastExchangeTime=1513213581433, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11, nodeId8=b94cf417,
msg=null, type=NODE_JOINED, tstamp=1513155330831], crd=TcpDiscoveryNode
[id=fcc47ef7-f080-4f88-93f5-2bc221dd1fcf, addrs=[127.0.0.1, 192.168.63.36],
sockAddrs=[/192.168.63.36:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1513155306680, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b94cf417-c03b-467f-ab27-c52e46b6cfa5, addrs=[127.0.0.1, 192.168.63.47],
sockAddrs=[/127.0.0.1:47500, centos6-47/192.168.63.47:47500], discPort=47500,
order=11, intOrder=8, lastExchangeTime=1513213581433, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11, nodeId8=b94cf417,
msg=null, type=NODE_JOINED, tstamp=1513155330831], nodeId=b94cf417,
evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=true, hash=240185775], init=true,
lastVer=null, partReleaseFut=PartitionReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0],
futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], futures=[]], TxReleaseFuture [topVer=AffinityTopologyVersion
[topVer=11, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0], futures=[]],
DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], futures=[]]]], exchActions=null, affChangeMsg=null,
initTs=1513155330861, centralizedAff=false, changeGlobalStateE=null,
done=false, state=SRV, evtLatch=0,
remaining=[eb9794d8-1c2a-4033-98e5-45733e564124,
66ccb735-506a-4381-91b9-9e98c3644b0e, fcc47ef7-f080-4f88-93f5-2bc221dd1fcf],
super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=1003723341]]]
[09:06:33,763][WARNING][exchange-worker-#102][diagnostic] Failed to wait for
partition map exchange [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], node=b94cf417-c03b-467f-ab27-c52e46b6cfa5]. Dumping pending
objects that might be the cause:
[09:06:36,479][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted
incoming connection [rmtAddr=/192.168.63.60, rmtPort=52311]
[09:06:36,479][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning
a new thread for connection [rmtAddr=/192.168.63.60, rmtPort=52311]
[09:06:36,480][INFO][tcp-disco-sock-reader-#20][TcpDiscoverySpi] Started
serving remote node connection [rmtAddr=/192.168.63.60:52311, rmtPort=52311]
[09:06:36,484][INFO][tcp-disco-sock-reader-#20][TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.63.60:52311, rmtPort=52311]
[09:06:41,492][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery accepted
incoming connection [rmtAddr=/192.168.63.60, rmtPort=33332]
[09:06:41,492][INFO][tcp-disco-srvr-#3][TcpDiscoverySpi] TCP discovery spawning
a new thread for connection [rmtAddr=/192.168.63.60, rmtPort=33332]
[09:06:41,493][INFO][tcp-disco-sock-reader-#21][TcpDiscoverySpi] Started
serving remote node connection [rmtAddr=/192.168.63.60:33332, rmtPort=33332]
[09:06:41,498][INFO][tcp-disco-sock-reader-#21][TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.63.60:33332, rmtPort=33332]
[09:06:43,763][WARNING][exchange-worker-#102][diagnostic] Failed to wait for
partition map exchange [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], node=b94cf417-c03b-467f-ab27-c52e46b6cfa5]. Dumping pending
objects that might be the cause:
[09:06:45,493][WARNING][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Failed to
send message to next node [msg=TcpDiscoveryNodeAddedMessage
[node=TcpDiscoveryNode [id=8ad7a225-d6ad-4407-92fa-1035f43dfb4b,
addrs=[10.1.50.0, 10.1.50.1, 127.0.0.1, 192.168.63.60],
sockAddrs=[/192.168.63.60:47500, /10.1.50.1:47500, /10.1.50.0:47500,
/127.0.0.1:47500], discPort=47500, order=0, intOrder=16,
lastExchangeTime=1513213581453, loc=false, ver=2.3.0#20171028-sha1:8add7fd5,
isClient=false],
dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@18b4b841,
discardMsgId=null, discardCustomMsgId=null, top=null, clientTop=null,
gridStartTime=1513146187062, super=TcpDiscoveryAbstractMessage
[sndNodeId=eb9794d8-1c2a-4033-98e5-45733e564124,
id=d0c9f8e4061-fcc47ef7-f080-4f88-93f5-2bc221dd1fcf,
verifierNodeId=fcc47ef7-f080-4f88-93f5-2bc221dd1fcf, topVer=0, pendingIdx=0,
failedNodes=null, isClient=false]], next=TcpDiscoveryNode
[id=8ad7a225-d6ad-4407-92fa-1035f43dfb4b, addrs=[10.1.50.0, 10.1.50.1,
127.0.0.1, 192.168.63.60], sockAddrs=[/192.168.63.60:47500, /10.1.50.1:47500,
/10.1.50.0:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=16,
lastExchangeTime=1513213581453, loc=false, ver=2.3.0#20171028-sha1:8add7fd5,
isClient=false], errMsg=Failed to send message to next node
[msg=TcpDiscoveryNodeAddedMessage [node=TcpDiscoveryNode
[id=8ad7a225-d6ad-4407-92fa-1035f43dfb4b, addrs=[10.1.50.0, 10.1.50.1,
127.0.0.1, 192.168.63.60], sockAddrs=[/192.168.63.60:47500, /10.1.50.1:47500,
/10.1.50.0:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=16,
lastExchangeTime=1513213581453, loc=false, ver=2.3.0#20171028-sha1:8add7fd5,
isClient=false],
dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@18b4b841,
discardMsgId=null, discardCustomMsgId=null, top=null, clientTop=null,
gridStartTime=1513146187062, super=TcpDiscoveryAbstractMessage
[sndNodeId=eb9794d8-1c2a-4033-98e5-45733e564124,
id=d0c9f8e4061-fcc47ef7-f080-4f88-93f5-2bc221dd1fcf,
verifierNodeId=fcc47ef7-f080-4f88-93f5-2bc221dd1fcf, topVer=0, pendingIdx=0,
failedNodes=null, isClient=false]], next=ClusterNode
[id=8ad7a225-d6ad-4407-92fa-1035f43dfb4b, order=0, addr=[10.1.50.0, 10.1.50.1,
127.0.0.1, 192.168.63.60], daemon=false]]]
[09:06:45,501][WARNING][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Local node
has detected failed nodes and started cluster-wide procedure. To speed up
failure detection please see 'Failure Detection' section under javadoc for
'TcpDiscoverySpi'
[09:06:45,524][INFO][disco-event-worker-#101][GridDiscoveryManager] Added new
node to topology: TcpDiscoveryNode [id=8ad7a225-d6ad-4407-92fa-1035f43dfb4b,
addrs=[10.1.50.0, 10.1.50.1, 127.0.0.1, 192.168.63.60],
sockAddrs=[/192.168.63.60:47500, /10.1.50.1:47500, /10.1.50.0:47500,
/127.0.0.1:47500], discPort=47500, order=26, intOrder=16,
lastExchangeTime=1513213581453, loc=false, ver=2.3.0#20171028-sha1:8add7fd5,
isClient=false]
[09:06:45,532][INFO][disco-event-worker-#101][GridDiscoveryManager] Topology
snapshot [ver=26, servers=5, clients=0, CPUs=104, heap=190.0GB]
[09:06:45,532][WARNING][disco-event-worker-#101][GridDiscoveryManager] Node
FAILED: TcpDiscoveryNode [id=8ad7a225-d6ad-4407-92fa-1035f43dfb4b,
addrs=[10.1.50.0, 10.1.50.1, 127.0.0.1, 192.168.63.60],
sockAddrs=[/192.168.63.60:47500, /10.1.50.1:47500, /10.1.50.0:47500,
/127.0.0.1:47500], discPort=47500, order=26, intOrder=16,
lastExchangeTime=1513213581453, loc=false, ver=2.3.0#20171028-sha1:8add7fd5,
isClient=false]
[09:06:45,533][INFO][disco-event-worker-#101][GridDiscoveryManager] Topology
snapshot [ver=27, servers=4, clients=0, CPUs=96, heap=160.0GB]
[09:06:53,106][WARNING][main][GridCachePartitionExchangeManager] Still waiting
for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b94cf417-c03b-467f-ab27-c52e46b6cfa5, addrs=[127.0.0.1, 192.168.63.47],
sockAddrs=[/127.0.0.1:47500, centos6-47/192.168.63.47:47500], discPort=47500,
order=11, intOrder=8, lastExchangeTime=1513213606514, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11, nodeId8=b94cf417,
msg=null, type=NODE_JOINED, tstamp=1513155330831], crd=TcpDiscoveryNode
[id=fcc47ef7-f080-4f88-93f5-2bc221dd1fcf, addrs=[127.0.0.1, 192.168.63.36],
sockAddrs=[/192.168.63.36:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1513155306680, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b94cf417-c03b-467f-ab27-c52e46b6cfa5, addrs=[127.0.0.1, 192.168.63.47],
sockAddrs=[/127.0.0.1:47500, centos6-47/192.168.63.47:47500], discPort=47500,
order=11, intOrder=8, lastExchangeTime=1513213606514, loc=true,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], topVer=11, nodeId8=b94cf417,
msg=null, type=NODE_JOINED, tstamp=1513155330831], nodeId=b94cf417,
evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=true, hash=240185775], init=true,
lastVer=null, partReleaseFut=PartitionReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0],
futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], futures=[]], TxReleaseFuture [topVer=AffinityTopologyVersion
[topVer=11, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11, minorTopVer=0], futures=[]],
DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], futures=[]]]], exchActions=null, affChangeMsg=null,
initTs=1513155330861, centralizedAff=false, changeGlobalStateE=null,
done=false, state=SRV, evtLatch=0,
remaining=[eb9794d8-1c2a-4033-98e5-45733e564124,
66ccb735-506a-4381-91b9-9e98c3644b0e, fcc47ef7-f080-4f88-93f5-2bc221dd1fcf],
super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=1003723341]]]
[09:06:53,763][WARNING][exchange-worker-#102][diagnostic] Failed to wait for
partition map exchange [topVer=AffinityTopologyVersion [topVer=11,
minorTopVer=0], node=b94cf417-c03b-467f-ab27-c52e46b6cfa5]. Dumping pending
objects that might be the cause:
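Separately, both traces warn that the communication message queue limit is 0 (unlimited), which can lead to OOMEs. In case it matters, I could cap it on TcpCommunicationSpi, along these lines (1024 is an arbitrary value I picked, not a recommendation):

```xml
<property name="communicationSpi">
  <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
    <!-- Arbitrary cap; the default of 0 (unlimited) triggers the OOME warning in the trace. -->
    <property name="messageQueueLimit" value="1024"/>
  </bean>
</property>
```

Is that warning related to the join failure at all, or safe to ignore here?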