Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/825/

1 test failed.
... and 1 other failed test.



Build Log:
[...truncated 12898 lines...]
   [junit4] Suite: org.apache.solr.cloud.MoveReplicaHDFSTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/init-core-data-001
   [junit4]   2> 607977 WARN  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 607977 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 607979 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 607980 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 607981 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 4 servers in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001
   [junit4]   2> 607981 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 607981 INFO  (Thread-1327) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 607981 INFO  (Thread-1327) [    ] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 607983 ERROR (Thread-1327) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 608081 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.c.ZkTestServer start zk server on port:39463
   [junit4]   2> 608094 INFO  (zkConnectionManagerCallback-1765-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608106 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 608107 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 608109 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 608109 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 608110 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 608110 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 608110 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 608110 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 608110 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 608111 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 608111 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 608111 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 608111 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@4a1d6aae{/solr,null,AVAILABLE}
   [junit4]   2> 608112 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@6e22dafb{SSL,[ssl, 
http/1.1]}{127.0.0.1:39439}
   [junit4]   2> 608112 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.e.j.s.Server Started @608164ms
   [junit4]   2> 608112 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=39439}
   [junit4]   2> 608113 ERROR (jetty-launcher-1762-thread-4) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 608113 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 608113 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 608113 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 608113 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 608113 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-08-27T14:53:33.248Z
   [junit4]   2> 608115 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@4c2154f3{/solr,null,AVAILABLE}
   [junit4]   2> 608115 INFO  (zkConnectionManagerCallback-1767-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608115 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@59130aec{SSL,[ssl, 
http/1.1]}{127.0.0.1:46345}
   [junit4]   2> 608115 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.e.j.s.Server Started @608167ms
   [junit4]   2> 608115 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=46345}
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@25d33a14{/solr,null,AVAILABLE}
   [junit4]   2> 608116 ERROR (jetty-launcher-1762-thread-3) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-08-27T14:53:33.251Z
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@76a072a0{SSL,[ssl, 
http/1.1]}{127.0.0.1:44679}
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.e.j.s.Server Started @608167ms
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=44679}
   [junit4]   2> 608116 ERROR (jetty-launcher-1762-thread-1) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 608116 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-08-27T14:53:33.251Z
   [junit4]   2> 608117 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 608118 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 608125 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 608125 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 608125 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.session node0 Scavenging every 660000ms
   [junit4]   2> 608126 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5c61f9f8{/solr,null,AVAILABLE}
   [junit4]   2> 608129 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@6e511a71{SSL,[ssl, 
http/1.1]}{127.0.0.1:39857}
   [junit4]   2> 608129 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.e.j.s.Server Started @608181ms
   [junit4]   2> 608129 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=39857}
   [junit4]   2> 608130 ERROR (jetty-launcher-1762-thread-2) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 608130 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 608130 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 608130 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 608130 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 608130 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-08-27T14:53:33.265Z
   [junit4]   2> 608148 INFO  (zkConnectionManagerCallback-1769-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608148 INFO  (zkConnectionManagerCallback-1771-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608148 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 608149 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 608166 INFO  (zkConnectionManagerCallback-1773-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608167 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 608342 INFO  (jetty-launcher-1762-thread-2) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39463/solr
   [junit4]   2> 608344 INFO  (zkConnectionManagerCallback-1777-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608346 INFO  (zkConnectionManagerCallback-1779-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608399 INFO  (jetty-launcher-1762-thread-1) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39463/solr
   [junit4]   2> 608403 INFO  (jetty-launcher-1762-thread-4) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39463/solr
   [junit4]   2> 608403 INFO  (zkConnectionManagerCallback-1785-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608405 INFO  (zkConnectionManagerCallback-1789-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608406 INFO  (zkConnectionManagerCallback-1791-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608415 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:44679_solr
   [junit4]   2> 608416 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.Overseer Overseer 
(id=72232599637983241-127.0.0.1:44679_solr-n_0000000000) starting
   [junit4]   2> 608425 INFO  (zkConnectionManagerCallback-1793-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608431 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:39857_solr
   [junit4]   2> 608435 INFO  (zkCallback-1778-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 608435 INFO  (zkCallback-1790-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 608436 INFO  (zkConnectionManagerCallback-1800-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608437 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 608438 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:39463/solr ready
   [junit4]   2> 608438 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 608439 INFO  
(OverseerStateUpdate-72232599637983241-127.0.0.1:44679_solr-n_0000000000) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.Overseer Starting to work on the main 
queue : 127.0.0.1:44679_solr
   [junit4]   2> 608445 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating 
transient cache for 2147483647 transient cores
   [junit4]   2> 608445 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:39439_solr
   [junit4]   2> 608446 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating 
transient cache for 2147483647 transient cores
   [junit4]   2> 608446 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:44679_solr
   [junit4]   2> 608454 INFO  (zkCallback-1790-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (3)
   [junit4]   2> 608454 INFO  (zkCallback-1778-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (3)
   [junit4]   2> 608458 INFO  (zkCallback-1799-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (3)
   [junit4]   2> 608467 INFO  (zkCallback-1792-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (3)
   [junit4]   2> 608481 INFO  (zkConnectionManagerCallback-1808-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608482 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (3)
   [junit4]   2> 608483 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:39463/solr ready
   [junit4]   2> 608486 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 608487 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 608489 INFO  (jetty-launcher-1762-thread-3) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39463/solr
   [junit4]   2> 608490 INFO  (zkConnectionManagerCallback-1813-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608491 INFO  (zkConnectionManagerCallback-1818-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608492 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (3)
   [junit4]   2> 608493 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:39463/solr ready
   [junit4]   2> 608494 INFO  (zkConnectionManagerCallback-1820-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608501 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (3)
   [junit4]   2> 608502 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 608513 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating 
transient cache for 2147483647 transient cores
   [junit4]   2> 608513 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:46345_solr
   [junit4]   2> 608515 INFO  (zkCallback-1778-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608515 INFO  (zkCallback-1790-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608521 INFO  (zkCallback-1792-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608522 INFO  (zkCallback-1799-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608526 INFO  (zkCallback-1812-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608532 INFO  (zkCallback-1819-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608532 INFO  (zkCallback-1807-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 608538 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39439.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608540 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39857.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608547 INFO  (zkConnectionManagerCallback-1828-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608550 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39439.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608552 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39439.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608553 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (4)
   [junit4]   2> 608554 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:39463/solr ready
   [junit4]   2> 608554 INFO  (jetty-launcher-1762-thread-4) 
[n:127.0.0.1:39439_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node4/.
   [junit4]   2> 608554 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 608573 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_44679.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608580 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39857.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608580 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_39857.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608581 INFO  (jetty-launcher-1762-thread-2) 
[n:127.0.0.1:39857_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node3/.
   [junit4]   2> 608582 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_44679.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608582 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_44679.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608584 INFO  (jetty-launcher-1762-thread-1) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node1/.
   [junit4]   2> 608592 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46345.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608600 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46345.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608600 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46345.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 608601 INFO  (jetty-launcher-1762-thread-3) 
[n:127.0.0.1:46345_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node2/.
   [junit4]   2> 608707 INFO  (zkConnectionManagerCallback-1831-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608710 INFO  (zkConnectionManagerCallback-1836-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 608711 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (4)
   [junit4]   2> 608712 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39463/solr ready
   [junit4]   2> 608742 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :overseerstatus with 
params action=OVERSEERSTATUS&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 608748 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={action=OVERSEERSTATUS&wt=javabin&version=2} status=0 QTime=6
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 608788 WARN  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 608794 WARN  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 608797 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 608812 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_35258_hdfs____tdqxy5/webapp
   [junit4]   2> 609203 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35258
   [junit4]   2> 609389 WARN  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 609390 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 609405 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_33767_datanode____.f7k2td/webapp
   [junit4]   2> 609812 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33767
   [junit4]   2> 609851 WARN  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 609852 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 609869 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_45212_datanode____2pzbtl/webapp
   [junit4]   2> 609909 ERROR (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-002/hdfsBaseDir/data/data1/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-002/hdfsBaseDir/data/data2/]]
  heartbeating to localhost/127.0.0.1:46439) [    ] 
o.a.h.h.s.d.DirectoryScanner 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
   [junit4]   2> 609922 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x97cb13d8430d7: from storage 
DS-976b6ffd-3d53-4133-8f64-2e44d94d951f node 
DatanodeRegistration(127.0.0.1:39727, 
datanodeUuid=990ac5ae-4c8e-42ba-8fb5-9e626065a531, infoPort=38282, 
infoSecurePort=0, ipcPort=35017, 
storageInfo=lv=-56;cid=testClusterID;nsid=2022149229;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 0 msecs
   [junit4]   2> 609922 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x97cb13d8430d7: from storage 
DS-1ec3cdf9-3cab-46b2-8507-2f6ad2292a29 node 
DatanodeRegistration(127.0.0.1:39727, 
datanodeUuid=990ac5ae-4c8e-42ba-8fb5-9e626065a531, infoPort=38282, 
infoSecurePort=0, ipcPort=35017, 
storageInfo=lv=-56;cid=testClusterID;nsid=2022149229;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 610248 INFO  
(SUITE-MoveReplicaHDFSTest-seed#[585B80C8B8C8602C]-worker) [    ] o.m.log 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45212
   [junit4]   2> 610388 ERROR (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-002/hdfsBaseDir/data/data3/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-002/hdfsBaseDir/data/data4/]]
  heartbeating to localhost/127.0.0.1:46439) [    ] 
o.a.h.h.s.d.DirectoryScanner 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
   [junit4]   2> 610423 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x97cb15b4680b5: from storage 
DS-f3a7c114-70ba-410d-8f13-01ad2206d651 node 
DatanodeRegistration(127.0.0.1:37123, 
datanodeUuid=0d8f2bfd-452f-4060-9877-f2cb80f38360, infoPort=41913, 
infoSecurePort=0, ipcPort=41599, 
storageInfo=lv=-56;cid=testClusterID;nsid=2022149229;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 3 msecs
   [junit4]   2> 610423 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x97cb15b4680b5: from storage 
DS-1272eb19-725c-4609-b019-a781f26fd42c node 
DatanodeRegistration(127.0.0.1:37123, 
datanodeUuid=0d8f2bfd-452f-4060-9877-f2cb80f38360, infoPort=41913, 
infoSecurePort=0, ipcPort=41599, 
storageInfo=lv=-56;cid=testClusterID;nsid=2022149229;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 610618 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 610619 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (4)
   [junit4]   2> 610633 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] o.e.j.s.Server 
jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 610698 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
   [junit4]   2> 610698 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] o.e.j.s.session 
No SessionScavenger set, using defaults
   [junit4]   2> 610699 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] o.e.j.s.session 
node0 Scavenging every 600000ms
   [junit4]   2> 610699 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@eaf6205{/solr,null,AVAILABLE}
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@6c1b17d8{SSL,[ssl, 
http/1.1]}{127.0.0.1:43035}
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] o.e.j.s.Server 
Started @610751ms
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=43035}
   [junit4]   2> 610700 ERROR 
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 610700 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-08-27T14:53:35.835Z
   [junit4]   2> 610722 INFO  (zkConnectionManagerCallback-1840-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 610723 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 610752 INFO  
(OverseerCollectionConfigSetProcessor-72232599637983241-127.0.0.1:44679_solr-n_0000000000)
 [n:127.0.0.1:44679_solr    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 611310 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39463/solr
   [junit4]   2> 611312 INFO  (zkConnectionManagerCallback-1844-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 611314 INFO  (zkConnectionManagerCallback-1846-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 611320 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (4)
   [junit4]   2> 611324 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 
2147483647 transient cores
   [junit4]   2> 611324 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:43035_solr
   [junit4]   2> 611325 INFO  (zkCallback-1778-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611325 INFO  (zkCallback-1812-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611325 INFO  (zkCallback-1792-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611325 INFO  (zkCallback-1807-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611325 INFO  (zkCallback-1835-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611325 INFO  (zkCallback-1799-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611325 INFO  (zkCallback-1835-thread-2) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611326 INFO  (zkCallback-1827-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611334 INFO  (zkCallback-1819-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611334 INFO  (zkCallback-1790-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611334 INFO  (zkCallback-1845-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (4) -> (5)
   [junit4]   2> 611348 INFO  (zkConnectionManagerCallback-1853-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 611349 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (5)
   [junit4]   2> 611349 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39463/solr 
ready
   [junit4]   2> 611350 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics 
history in memory.
   [junit4]   2> 611369 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_43035.solr.node' 
(registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 611379 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_43035.solr.jvm' 
(registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 611379 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr_43035.solr.jetty' 
(registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 611381 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [n:127.0.0.1:43035_solr 
   ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node5/.
   [junit4]   2> 611451 INFO  (zkConnectionManagerCallback-1856-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 611452 INFO  
(TEST-MoveReplicaHDFSTest.test-seed#[585B80C8B8C8602C]) [    ] 
o.a.s.c.MoveReplicaTest total_jettys: 5
   [junit4]   2> 611469 INFO  (qtp1667055842-6260) [n:127.0.0.1:39439_solr    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
collection.configName=conf1&maxShardsPerNode=2&autoAddReplicas=false&name=MoveReplicaHDFSTest_coll_true&nrtReplicas=2&action=CREATE&numShards=2&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 611472 INFO  
(OverseerThreadFactory-2586-thread-2-processing-n:127.0.0.1:44679_solr) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.a.c.CreateCollectionCmd Create collection 
MoveReplicaHDFSTest_coll_true
   [junit4]   2> 611581 INFO  
(OverseerStateUpdate-72232599637983241-127.0.0.1:44679_solr-n_0000000000) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"MoveReplicaHDFSTest_coll_true",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"MoveReplicaHDFSTest_coll_true_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:44679/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 611584 INFO  
(OverseerStateUpdate-72232599637983241-127.0.0.1:44679_solr-n_0000000000) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"MoveReplicaHDFSTest_coll_true",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"MoveReplicaHDFSTest_coll_true_shard1_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:46345/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 611586 INFO  
(OverseerStateUpdate-72232599637983241-127.0.0.1:44679_solr-n_0000000000) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"MoveReplicaHDFSTest_coll_true",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"MoveReplicaHDFSTest_coll_true_shard2_replica_n4",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:43035/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 611589 INFO  
(OverseerStateUpdate-72232599637983241-127.0.0.1:44679_solr-n_0000000000) 
[n:127.0.0.1:44679_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"MoveReplicaHDFSTest_coll_true",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"MoveReplicaHDFSTest_coll_true_shard2_replica_n6",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:39439/solr";,
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 611803 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr    
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation 
core create command 
qt=/admin/cores&coreNodeName=core_node3&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard1_replica_n1&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 611826 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr    
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.h.a.CoreAdminOperation 
core create command 
qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard2_replica_n6&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard2&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 611842 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr    
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.h.a.CoreAdminOperation 
core create command 
qt=/admin/cores&coreNodeName=core_node5&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard1_replica_n2&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 611853 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr    
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.h.a.CoreAdminOperation 
core create command 
qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard2_replica_n4&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard2&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 612840 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 7.5.0
   [junit4]   2> 612848 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.s.IndexSchema 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 612852 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.s.IndexSchema Loaded 
schema minimal/1.1 with uniqueid field id
   [junit4]   2> 612852 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.CoreContainer 
Creating SolrCore 'MoveReplicaHDFSTest_coll_true_shard1_replica_n1' using 
configuration from collection MoveReplicaHDFSTest_coll_true, trusted=true
   [junit4]   2> 612853 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter 
JMX monitoring for 
'solr_44679.solr.core.MoveReplicaHDFSTest_coll_true.shard1.replica_n1' 
(registry 'solr.core.MoveReplicaHDFSTest_coll_true.shard1.replica_n1') enabled 
at server: com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 612855 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 7.5.0
   [junit4]   2> 612864 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:46439/data
   [junit4]   2> 612864 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory 
Solr Kerberos Authentication disabled
   [junit4]   2> 612864 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 612864 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.SolrCore 
[[MoveReplicaHDFSTest_coll_true_shard1_replica_n1] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node1/MoveReplicaHDFSTest_coll_true_shard1_replica_n1],
 
dataDir=[hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node3/data/]
   [junit4]   2> 612865 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 7.5.0
   [junit4]   2> 612866 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node3/data/snapshot_metadata
   [junit4]   2> 612866 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 7.5.0
   [junit4]   2> 612881 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.s.IndexSchema 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n2] Schema name=minimal
   [junit4]   2> 612883 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.s.IndexSchema Loaded 
schema minimal/1.1 with uniqueid field id
   [junit4]   2> 612900 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.CoreContainer 
Creating SolrCore 'MoveReplicaHDFSTest_coll_true_shard1_replica_n2' using 
configuration from collection MoveReplicaHDFSTest_coll_true, trusted=true
   [junit4]   2> 612900 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.m.r.SolrJmxReporter 
JMX monitoring for 
'solr_46345.solr.core.MoveReplicaHDFSTest_coll_true.shard1.replica_n2' 
(registry 'solr.core.MoveReplicaHDFSTest_coll_true.shard1.replica_n2') enabled 
at server: com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 612901 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:46439/data
   [junit4]   2> 612901 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.HdfsDirectoryFactory 
Solr Kerberos Authentication disabled
   [junit4]   2> 612901 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 612901 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.SolrCore 
[[MoveReplicaHDFSTest_coll_true_shard1_replica_n2] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node2/MoveReplicaHDFSTest_coll_true_shard1_replica_n2],
 
dataDir=[hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node5/data/]
   [junit4]   2> 612902 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.s.IndexSchema 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n4] Schema name=minimal
   [junit4]   2> 612902 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.s.IndexSchema 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n6] Schema name=minimal
   [junit4]   2> 612903 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node5/data/snapshot_metadata
   [junit4]   2> 612904 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.s.IndexSchema Loaded 
schema minimal/1.1 with uniqueid field id
   [junit4]   2> 612904 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.s.IndexSchema Loaded 
schema minimal/1.1 with uniqueid field id
   [junit4]   2> 612904 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.CoreContainer 
Creating SolrCore 'MoveReplicaHDFSTest_coll_true_shard2_replica_n6' using 
configuration from collection MoveReplicaHDFSTest_coll_true, trusted=true
   [junit4]   2> 612904 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.CoreContainer 
Creating SolrCore 'MoveReplicaHDFSTest_coll_true_shard2_replica_n4' using 
configuration from collection MoveReplicaHDFSTest_coll_true, trusted=true
   [junit4]   2> 612905 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.m.r.SolrJmxReporter 
JMX monitoring for 
'solr_39439.solr.core.MoveReplicaHDFSTest_coll_true.shard2.replica_n6' 
(registry 'solr.core.MoveReplicaHDFSTest_coll_true.shard2.replica_n6') enabled 
at server: com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 612905 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.m.r.SolrJmxReporter 
JMX monitoring for 
'solr_43035.solr.core.MoveReplicaHDFSTest_coll_true.shard2.replica_n4' 
(registry 'solr.core.MoveReplicaHDFSTest_coll_true.shard2.replica_n4') enabled 
at server: com.sun.jmx.mbeanserver.JmxMBeanServer@6b7814e4
   [junit4]   2> 612905 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:46439/data
   [junit4]   2> 612905 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:46439/data
   [junit4]   2> 612905 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.HdfsDirectoryFactory 
Solr Kerberos Authentication disabled
   [junit4]   2> 612905 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 612905 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.SolrCore 
[[MoveReplicaHDFSTest_coll_true_shard2_replica_n4] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node5/MoveReplicaHDFSTest_coll_true_shard2_replica_n4],
 
dataDir=[hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node7/data/]
   [junit4]   2> 612905 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.HdfsDirectoryFactory 
Solr Kerberos Authentication disabled
   [junit4]   2> 612906 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 612906 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.SolrCore 
[[MoveReplicaHDFSTest_coll_true_shard2_replica_n6] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.cloud.MoveReplicaHDFSTest_585B80C8B8C8602C-001/tempDir-001/node4/MoveReplicaHDFSTest_coll_true_shard2_replica_n6],
 
dataDir=[hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node8/data/]
   [junit4]   2> 612907 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node7/data/snapshot_metadata
   [junit4]   2> 612907 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node8/data/snapshot_metadata
   [junit4]   2> 612929 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node3/data
   [junit4]   2> 612944 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node7/data
   [junit4]   2> 612953 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node5/data
   [junit4]   2> 612958 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node8/data
   [junit4]   2> 612985 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node3/data/index
   [junit4]   2> 613008 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node5/data/index
   [junit4]   2> 613016 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node7/data/index
   [junit4]   2> 613016 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:46439/data/MoveReplicaHDFSTest_coll_true/core_node8/data/index
   [junit4]   2> 613110 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37123 is added to 
blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-976b6ffd-3d53-4133-8f64-2e44d94d951f:NORMAL:127.0.0.1:39727|RBW],
 
ReplicaUC[[DISK]DS-1272eb19-725c-4609-b019-a781f26fd42c:NORMAL:127.0.0.1:37123|FINALIZED]]}
 size 0
   [junit4]   2> 613111 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39727 is added to 
blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-976b6ffd-3d53-4133-8f64-2e44d94d951f:NORMAL:127.0.0.1:39727|RBW],
 
ReplicaUC[[DISK]DS-1272eb19-725c-4609-b019-a781f26fd42c:NORMAL:127.0.0.1:37123|FINALIZED]]}
 size 0
   [junit4]   2> 613144 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37123 is added to 
blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-1272eb19-725c-4609-b019-a781f26fd42c:NORMAL:127.0.0.1:37123|RBW],
 
ReplicaUC[[DISK]DS-1ec3cdf9-3cab-46b2-8507-2f6ad2292a29:NORMAL:127.0.0.1:39727|RBW]]}
 size 0
   [junit4]   2> 613145 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39727 is added to 
blk_1073741827_1003{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-f3a7c114-70ba-410d-8f13-01ad2206d651:NORMAL:127.0.0.1:37123|RBW],
 
ReplicaUC[[DISK]DS-976b6ffd-3d53-4133-8f64-2e44d94d951f:NORMAL:127.0.0.1:39727|RBW]]}
 size 69
   [junit4]   2> 613145 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37123 is added to 
blk_1073741827_1003 size 69
   [junit4]   2> 613146 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39727 is added to 
blk_1073741826_1002 size 69
   [junit4]   2> 613151 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39727 is added to 
blk_1073741828_1004{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-1ec3cdf9-3cab-46b2-8507-2f6ad2292a29:NORMAL:127.0.0.1:39727|RBW],
 
ReplicaUC[[DISK]DS-f3a7c114-70ba-410d-8f13-01ad2206d651:NORMAL:127.0.0.1:37123|RBW]]}
 size 69
   [junit4]   2> 613151 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:37123 is added to 
blk_1073741828_1004 size 69
   [junit4]   2> 613205 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.UpdateHandler Using 
UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 613206 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 613206 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=3
   [junit4]   2> 613206 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.UpdateHandler Using 
UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 613206 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 613206 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=3
   [junit4]   2> 613218 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.CommitTracker Hard 
AutoCommit: disabled
   [junit4]   2> 613218 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.CommitTracker Soft 
AutoCommit: disabled
   [junit4]   2> 613218 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.CommitTracker Hard 
AutoCommit: disabled
   [junit4]   2> 613218 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.CommitTracker Soft 
AutoCommit: disabled
   [junit4]   2> 613236 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.s.SolrIndexSearcher 
Opening [Searcher@5f8716d8[MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
main]
   [junit4]   2> 613237 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.s.SolrIndexSearcher 
Opening [Searcher@545ac6ff[MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
main]
   [junit4]   2> 613239 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 613239 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 613239 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 613239 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 613240 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.h.ReplicationHandler 
Commits will be reserved for 10000ms.
   [junit4]   2> 613240 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.h.ReplicationHandler 
Commits will be reserved for 10000ms.
   [junit4]   2> 613240 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.UpdateLog Could not 
find max version in index or recent updates, using new clock 1609964315869184000
   [junit4]   2> 613241 INFO  
(searcherExecutor-2615-thread-1-processing-n:127.0.0.1:44679_solr 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.SolrCore 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n1] Registered new searcher 
Searcher@5f8716d8[MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 613241 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.UpdateLog Could not 
find max version in index or recent updates, using new clock 1609964315870232576
   [junit4]   2> 613243 INFO  
(searcherExecutor-2618-thread-1-processing-n:127.0.0.1:43035_solr 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.SolrCore 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n4] Registered new searcher 
Searcher@545ac6ff[MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 613251 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.ZkShardTerms 
Successful update of terms at 
/collections/MoveReplicaHDFSTest_coll_true/terms/shard2 to 
Terms{values={core_node7=0}, version=0}
   [junit4]   2> 613251 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.ZkShardTerms 
Successful update of terms at 
/collections/MoveReplicaHDFSTest_coll_true/terms/shard1 to 
Terms{values={core_node3=0}, version=0}
   [junit4]   2> 613256 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
shard shard2: total=2 found=1 timeoutin=9999ms
   [junit4]   2> 613257 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
shard shard1: total=2 found=1 timeoutin=9999ms
   [junit4]   2> 613608 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.u.UpdateHandler Using 
UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 613608 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 613608 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=3
   [junit4]   2> 613613 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.u.UpdateHandler Using 
UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 613613 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 613613 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=3
   [junit4]   2> 613620 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.u.CommitTracker Hard 
AutoCommit: disabled
   [junit4]   2> 613620 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.u.CommitTracker Soft 
AutoCommit: disabled
   [junit4]   2> 613627 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.u.CommitTracker Hard 
AutoCommit: disabled
   [junit4]   2> 613627 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.u.CommitTracker Soft 
AutoCommit: disabled
   [junit4]   2> 613645 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.s.SolrIndexSearcher 
Opening [Searcher@a93af2a[MoveReplicaHDFSTest_coll_true_shard1_replica_n2] main]
   [junit4]   2> 613646 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 613646 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 613647 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.h.ReplicationHandler 
Commits will be reserved for 10000ms.
   [junit4]   2> 613647 INFO  
(searcherExecutor-2616-thread-1-processing-n:127.0.0.1:46345_solr 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.SolrCore 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n2] Registered new searcher 
Searcher@a93af2a[MoveReplicaHDFSTest_coll_true_shard1_replica_n2] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 613648 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.u.UpdateLog Could not 
find max version in index or recent updates, using new clock 1609964316297003008
   [junit4]   2> 613654 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.ZkShardTerms 
Successful update of terms at 
/collections/MoveReplicaHDFSTest_coll_true/terms/shard1 to 
Terms{values={core_node3=0, core_node5=0}, version=1}
   [junit4]   2> 613655 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.s.SolrIndexSearcher 
Opening [Searcher@756bea9c[MoveReplicaHDFSTest_coll_true_shard2_replica_n6] 
main]
   [junit4]   2> 613656 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 613657 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 613657 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.h.ReplicationHandler 
Commits will be reserved for 10000ms.
   [junit4]   2> 613658 INFO  
(searcherExecutor-2617-thread-1-processing-n:127.0.0.1:39439_solr 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.SolrCore 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n6] Registered new searcher 
Searcher@756bea9c[MoveReplicaHDFSTest_coll_true_shard2_replica_n6] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 613658 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.u.UpdateLog Could not 
find max version in index or recent updates, using new clock 1609964316307488768
   [junit4]   2> 613664 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.ZkShardTerms 
Successful update of terms at 
/collections/MoveReplicaHDFSTest_coll_true/terms/shard2 to 
Terms{values={core_node7=0, core_node8=0}, version=1}
   [junit4]   2> 613757 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 613757 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 613757 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.SyncStrategy Sync 
replicas to 
https://127.0.0.1:43035/solr/MoveReplicaHDFSTest_coll_true_shard2_replica_n4/
   [junit4]   2> 613758 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 613758 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 613758 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.SyncStrategy Sync 
replicas to 
https://127.0.0.1:44679/solr/MoveReplicaHDFSTest_coll_true_shard1_replica_n1/
   [junit4]   2> 613758 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.PeerSync PeerSync: 
core=MoveReplicaHDFSTest_coll_true_shard2_replica_n4 
url=https://127.0.0.1:43035/solr START 
replicas=[https://127.0.0.1:39439/solr/MoveReplicaHDFSTest_coll_true_shard2_replica_n6/]
 nUpdates=100
   [junit4]   2> 613759 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.u.PeerSync PeerSync: 
core=MoveReplicaHDFSTest_coll_true_shard2_replica_n4 
url=https://127.0.0.1:43035/solr DONE.  We have no versions.  sync failed.
   [junit4]   2> 613759 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.PeerSync PeerSync: 
core=MoveReplicaHDFSTest_coll_true_shard1_replica_n1 
url=https://127.0.0.1:44679/solr START 
replicas=[https://127.0.0.1:46345/solr/MoveReplicaHDFSTest_coll_true_shard1_replica_n2/]
 nUpdates=100
   [junit4]   2> 613759 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.u.PeerSync PeerSync: 
core=MoveReplicaHDFSTest_coll_true_shard1_replica_n1 
url=https://127.0.0.1:44679/solr DONE.  We have no versions.  sync failed.
   [junit4]   2> 613786 INFO  (qtp2051061622-6253) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.c.S.Request 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n2]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=1
   [junit4]   2> 613786 INFO  (qtp1667055842-6255) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.c.S.Request 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n6]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=1
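
The two /get requests above are PeerSync asking the other replica for its recent update versions before the leader election proceeds (distrib=false, getVersions=100, fingerprint=false). A minimal, hypothetical SolrJ sketch of the same kind of request; the core URL is a placeholder and the https/SSL setup used by this test is omitted:

    // Hypothetical: issue a /get?getVersions=N request against one core, similar
    // to the PeerSync requests logged above. Base URL is illustrative only.
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.util.NamedList;

    public class GetVersionsSketch {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(
            "http://127.0.0.1:8983/solr/some_collection_shard1_replica_n1").build()) {
          ModifiableSolrParams params = new ModifiableSolrParams();
          params.set("distrib", false);
          params.set("getVersions", 100);
          params.set("fingerprint", false);
          NamedList<Object> rsp = client.request(
              new GenericSolrRequest(SolrRequest.METHOD.GET, "/get", params));
          // The replica answers with its recent version list; on an empty index
          // it has none, which is the "no versions" outcome in the surrounding
          // PeerSync/election log lines.
          System.out.println(rsp.get("versions"));
        }
      }
    }
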
   [junit4]   2> 613787 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.SyncStrategy 
Leader's attempt to sync with shard failed, moving to the next candidate
   [junit4]   2> 613787 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.SyncStrategy 
Leader's attempt to sync with shard failed, moving to the next candidate
   [junit4]   2> 613787 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we 
can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 613787 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we 
can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 613787 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 613787 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 613791 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:43035/solr/MoveReplicaHDFSTest_coll_true_shard2_replica_n4/ 
shard2
   [junit4]   2> 613791 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:44679/solr/MoveReplicaHDFSTest_coll_true_shard1_replica_n1/ 
shard1
   [junit4]   2> 613944 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.ZkController I am 
the leader, no recovery necessary
   [junit4]   2> 613944 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.ZkController I am 
the leader, no recovery necessary
   [junit4]   2> 613951 INFO  (qtp1183989587-6242) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node3&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard1_replica_n1&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2148
   [junit4]   2> 613951 INFO  (qtp733099540-6544) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard2_replica_n4&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard2&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2098
   [junit4]   2> 614053 INFO  (zkCallback-1790-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/MoveReplicaHDFSTest_coll_true/state.json] for collection 
[MoveReplicaHDFSTest_coll_true] has occurred - updating... (live nodes size: 
[5])
   [junit4]   2> 614053 INFO  (zkCallback-1845-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/MoveReplicaHDFSTest_coll_true/state.json] for collection 
[MoveReplicaHDFSTest_coll_true] has occurred - updating... (live nodes size: 
[5])
   [junit4]   2> 614671 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node5&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard1_replica_n2&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2829
   [junit4]   2> 614688 INFO  (qtp1667055842-6261) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&newCollection=true&name=MoveReplicaHDFSTest_coll_true_shard2_replica_n6&action=CREATE&numShards=2&collection=MoveReplicaHDFSTest_coll_true&shard=shard2&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2863
   [junit4]   2> 614691 INFO  (qtp1667055842-6260) [n:127.0.0.1:39439_solr    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 614788 INFO  (zkCallback-1790-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/MoveReplicaHDFSTest_coll_true/state.json] for collection 
[MoveReplicaHDFSTest_coll_true] has occurred - updating... (live nodes size: 
[5])
   [junit4]   2> 614788 INFO  (zkCallback-1819-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/MoveReplicaHDFSTest_coll_true/state.json] for collection 
[MoveReplicaHDFSTest_coll_true] has occurred - updating... (live nodes size: 
[5])
   [junit4]   2> 614788 INFO  (zkCallback-1792-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/MoveReplicaHDFSTest_coll_true/state.json] for collection 
[MoveReplicaHDFSTest_coll_true] has occurred - updating... (live nodes size: 
[5])
   [junit4]   2> 614788 INFO  (zkCallback-1845-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/MoveReplicaHDFSTest_coll_true/state.json] for collection 
[MoveReplicaHDFSTest_coll_true] has occurred - updating... (live nodes size: 
[5])
   [junit4]   2> 615475 INFO  
(OverseerCollectionConfigSetProcessor-72232599637983241-127.0.0.1:44679_solr-n_0000000000)
 [n:127.0.0.1:44679_solr    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 615691 INFO  (qtp1667055842-6260) [n:127.0.0.1:39439_solr    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={collection.configName=conf1&maxShardsPerNode=2&autoAddReplicas=false&name=MoveReplicaHDFSTest_coll_true&nrtReplicas=2&action=CREATE&numShards=2&wt=javabin&version=2}
 status=0 QTime=4222
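
The /admin/collections call above is the CREATE that sets up the 2x2 test collection. A minimal, hypothetical SolrJ equivalent of those parameters (conf1 configset, 2 shards, 2 NRT replicas, maxShardsPerNode=2, autoAddReplicas=false); the ZooKeeper address is a placeholder:

    // Hypothetical SolrJ version of the CREATE parameters logged above.
    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class CreateCollectionSketch {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("127.0.0.1:2181"), Optional.empty()).build()) {
          CollectionAdminRequest
              .createCollection("MoveReplicaHDFSTest_coll_true", "conf1", 2, 2)
              .setMaxShardsPerNode(2)
              .setAutoAddReplicas(false)
              .process(client);
          // The CollectionsHandler then waits for all shard replicas to become
          // active -- the "Wait for new collection to be active" line earlier.
        }
      }
    }
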
   [junit4]   2> 615709 INFO  (qtp1183989587-6244) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] o.a.s.c.ZkShardTerms 
Successful update of terms at 
/collections/MoveReplicaHDFSTest_coll_true/terms/shard1 to 
Terms{values={core_node3=1, core_node5=1}, version=2}
   [junit4]   2> 615742 INFO  (qtp2051061622-6250) [n:127.0.0.1:46345_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node5 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n2]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:44679/solr/MoveReplicaHDFSTest_coll_true_shard1_replica_n1/&wt=javabin&version=2}{add=[1
 (1609964318446583808)]} 0 29
   [junit4]   2> 615742 INFO  (qtp1183989587-6244) [n:127.0.0.1:44679_solr 
c:MoveReplicaHDFSTest_coll_true s:shard1 r:core_node3 
x:MoveReplicaHDFSTest_coll_true_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory 
[MoveReplicaHDFSTest_coll_true_shard1_replica_n1]  webapp=/solr path=/update 
params={wt=javabin&version=2}{add=[1 (1609964318446583808)]} 0 45
   [junit4]   2> 615756 INFO  (qtp733099540-6540) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] o.a.s.c.ZkShardTerms 
Successful update of terms at 
/collections/MoveReplicaHDFSTest_coll_true/terms/shard2 to 
Terms{values={core_node7=1, core_node8=1}, version=2}
   [junit4]   2> 615787 INFO  (qtp1667055842-6260) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] 
o.a.s.u.p.LogUpdateProcessorFactory 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n6]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:43035/solr/MoveReplicaHDFSTest_coll_true_shard2_replica_n4/&wt=javabin&version=2}{add=[2
 (1609964318499012608)]} 0 23
   [junit4]   2> 615787 INFO  (qtp733099540-6540) [n:127.0.0.1:43035_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node7 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n4] 
o.a.s.u.p.LogUpdateProcessorFactory 
[MoveReplicaHDFSTest_coll_true_shard2_replica_n4]  webapp=/solr path=/update 
params={wt=javabin&version=2}{add=[2 (1609964318499012608)]} 0 40
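
The /update entries above show the test adding documents 1 and 2: each add hits the shard leader and is then forwarded to the second NRT replica (update.distrib=FROMLEADER). A minimal, hypothetical SolrJ sketch of the client side of those adds; the ZooKeeper address is a placeholder:

    // Hypothetical client-side version of the adds logged above. CloudSolrClient
    // routes each document to its shard leader; the leader forwards it to the
    // other replica, which is the FROMLEADER request seen in the log.
    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class AddDocsSketch {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("127.0.0.1:2181"), Optional.empty()).build()) {
          client.setDefaultCollection("MoveReplicaHDFSTest_coll_true");
          for (int id = 1; id <= 2; id++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", String.valueOf(id));
            client.add(doc);
          }
          client.commit();
        }
      }
    }
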
   [junit4]   2> 615793 INFO  (qtp1667055842-6259) [n:127.0.0.1:39439_solr 
c:MoveReplicaHDFSTest_coll_true s:shard2 r:core_node8 
x:MoveReplicaHDFSTest_coll_true_shard2_replica_n6] 
o.a.s.u.p.LogUpdateProcessorFactory

[...truncated too long message...]



ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/null1129588180
     [copy] Copying 239 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/null1129588180
   [delete] Deleting directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/null1129588180

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: 
org.eclipse.jgit#org.eclipse.jgit-caller;working
[ivy:cachepath]         confs: [default]
[ivy:cachepath]         found 
org.eclipse.jgit#org.eclipse.jgit;4.6.0.201612231935-r in public
[ivy:cachepath]         found com.jcraft#jsch;0.1.53 in public
[ivy:cachepath]         found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath]         found org.apache.httpcomponents#httpclient;4.3.6 in 
public
[ivy:cachepath]         found org.apache.httpcomponents#httpcore;4.3.3 in public
[ivy:cachepath]         found commons-logging#commons-logging;1.1.3 in public
[ivy:cachepath]         found commons-codec#commons-codec;1.6 in public
[ivy:cachepath]         found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 22ms :: artifacts dl 1ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   8   |   0   |   0   |   0   ||   8   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[wc-checker] SLF4J: Defaulting to no-operation (NOP) logger implementation
[wc-checker] SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for 
further details.
[wc-checker] Checking working copy status...

-jenkins-base:

BUILD SUCCESSFUL
Total time: 91 minutes 5 seconds
Archiving artifacts
java.lang.InterruptedException: no matches found within 10000
        at hudson.FilePath$34.hasMatch(FilePath.java:2678)
        at hudson.FilePath$34.invoke(FilePath.java:2557)
        at hudson.FilePath$34.invoke(FilePath.java:2547)
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2918)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene
                at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
                at 
hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
                at hudson.remoting.Channel.call(Channel.java:955)
                at hudson.FilePath.act(FilePath.java:1036)
                at hudson.FilePath.act(FilePath.java:1025)
                at hudson.FilePath.validateAntFileMask(FilePath.java:2547)
                at 
hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
                at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
                at 
hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
                at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
                at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
                at hudson.model.Build$BuildExecution.post2(Build.java:186)
                at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
                at hudson.model.Run.execute(Run.java:1819)
                at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
                at 
hudson.model.ResourceController.execute(ResourceController.java:97)
                at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.FilePath$TunneledInterruptedException
        at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2920)
        at hudson.remoting.UserRequest.perform(UserRequest.java:212)
        at hudson.remoting.UserRequest.perform(UserRequest.java:54)
        at hudson.remoting.Request$2.run(Request.java:369)
        at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused: java.lang.InterruptedException: java.lang.InterruptedException: no 
matches found within 10000
        at hudson.FilePath.act(FilePath.java:1038)
        at hudson.FilePath.act(FilePath.java:1025)
        at hudson.FilePath.validateAntFileMask(FilePath.java:2547)
        at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
        at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
        at hudson.model.Build$BuildExecution.post2(Build.java:186)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
        at hudson.model.Run.execute(Run.java:1819)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
No artifacts found that match the file pattern 
"**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
Recording test results
FATAL: Failed to save the JUnit test result
java.io.IOException: Failed to create a temporary file in 
/x1/jenkins/jenkins-home/jobs/Lucene-Solr-Tests-7.x/builds/825
        at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:144)
        at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:109)
        at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:84)
        at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:74)
        at hudson.XmlFile.write(XmlFile.java:187)
        at 
hudson.tasks.junit.TestResultAction.setResult(TestResultAction.java:117)
        at hudson.tasks.junit.TestResultAction.<init>(TestResultAction.java:85)
        at 
hudson.tasks.junit.JUnitResultArchiver.parseAndAttach(JUnitResultArchiver.java:174)
        at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:153)
        at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
        at hudson.model.Build$BuildExecution.post2(Build.java:186)
        at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
        at hudson.model.Run.execute(Run.java:1819)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
Caused by: java.io.IOException: Too many open files
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.createTempFile(File.java:2024)
        at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:142)
        ... 18 more
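
The FATAL above bottoms out in "java.io.IOException: Too many open files" while Jenkins tries to create a temporary file for the JUnit report, so the build slave ran out of file descriptors; the test result itself was not the problem. A small, hypothetical diagnostic sketch (standard JMX; Unix JVMs only) for checking descriptor usage on such a node:

    // Hypothetical diagnostic: report open vs. maximum file descriptors for the
    // current JVM. Only meaningful where the platform MXBean is a
    // com.sun.management.UnixOperatingSystemMXBean (the usual case on Linux slaves).
    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdUsage {
      public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
          UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
          System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
              + " / max: " + unix.getMaxFileDescriptorCount());
        } else {
          System.out.println("File descriptor counts not available on this platform.");
        }
      }
    }

The usual remedies on the Jenkins side are raising the slave's ulimit -n or tracking down descriptor leaks in long-running test JVMs.
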
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)