Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1087/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.AssignBackwardCompatibilityTest.test

Error Message:
Expected 4 active replicas null Live Nodes: [127.0.0.1:33887_solr, 
127.0.0.1:36055_solr, 127.0.0.1:42197_solr, 127.0.0.1:44111_solr] Last 
available state: 
DocCollection(collection1//collections/collection1/state.json/7)={   
"pullReplicas":"0",   "replicationFactor":"4",   "shards":{"shard1":{       
"range":"80000000-7fffffff",       "state":"active",       "replicas":{         
"core_node2":{           "core":"collection1_shard1_replica_n1",           
"base_url":"http://127.0.0.1:33887/solr";,           
"node_name":"127.0.0.1:33887_solr",           "state":"active",           
"type":"NRT",           "leader":"true"},         "core_node4":{           
"core":"collection1_shard1_replica_n3",           
"base_url":"http://127.0.0.1:44111/solr";,           
"node_name":"127.0.0.1:44111_solr",           "state":"active",           
"type":"NRT"},         "core_node8":{           
"core":"collection1_shard1_replica_n7",           
"base_url":"http://127.0.0.1:42197/solr";,           
"node_name":"127.0.0.1:42197_solr",           "state":"active",           
"type":"NRT"},         "core_node10":{           
"core":"collection1_shard1_replica_n9",           
"base_url":"http://127.0.0.1:36055/solr";,           "state":"down",           
"node_name":"127.0.0.1:36055_solr",           "type":"NRT"}}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1000",   
"autoAddReplicas":"false",   "nrtReplicas":"4",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected 4 active replicas
null
Live Nodes: [127.0.0.1:33887_solr, 127.0.0.1:36055_solr, 127.0.0.1:42197_solr, 
127.0.0.1:44111_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"4",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node2":{
          "core":"collection1_shard1_replica_n1",
          "base_url":"http://127.0.0.1:33887/solr";,
          "node_name":"127.0.0.1:33887_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node4":{
          "core":"collection1_shard1_replica_n3",
          "base_url":"http://127.0.0.1:44111/solr";,
          "node_name":"127.0.0.1:44111_solr",
          "state":"active",
          "type":"NRT"},
        "core_node8":{
          "core":"collection1_shard1_replica_n7",
          "base_url":"http://127.0.0.1:42197/solr";,
          "node_name":"127.0.0.1:42197_solr",
          "state":"active",
          "type":"NRT"},
        "core_node10":{
          "core":"collection1_shard1_replica_n9",
          "base_url":"http://127.0.0.1:36055/solr";,
          "state":"down",
          "node_name":"127.0.0.1:36055_solr",
          "type":"NRT"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1000",
  "autoAddReplicas":"false",
  "nrtReplicas":"4",
  "tlogReplicas":"0"}
        at 
__randomizedtesting.SeedInfo.seed([23F1118A31631A7F:ABA52E509F9F7787]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
        at 
org.apache.solr.cloud.AssignBackwardCompatibilityTest.test(AssignBackwardCompatibilityTest.java:92)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestHdfsCloudBackupRestore.test

Error Message:


Stack Trace:
java.lang.AssertionError
        at 
__randomizedtesting.SeedInfo.seed([23F1118A31631A7F:ABA52E509F9F7787]:0)
        at org.junit.Assert.fail(Assert.java:92)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertTrue(Assert.java:54)
        at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:133)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 11883 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestHdfsCloudBackupRestore
   [junit4]   2> 202989 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/init-core-data-001
   [junit4]   2> 202989 WARN  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=32 numCloses=32
   [junit4]   2> 202989 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 202990 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 203309 WARN  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.h.u.NativeCodeLoader Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 203625 WARN  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 203722 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
   [junit4]   2> 203734 WARN  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 203858 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log jetty-6.1.x
   [junit4]   2> 203882 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Extract 
jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_localdomain_40517_hdfs____.ut03yf/webapp
   [junit4]   2> 203996 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:40517
   [junit4]   2> 204334 WARN  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 204336 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log jetty-6.1.x
   [junit4]   2> 204340 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Extract 
jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_34583_datanode____785ee/webapp
   [junit4]   2> 204405 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34583
   [junit4]   2> 204416 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
i.n.u.i.PlatformDependent Your platform does not provide complete low-level API 
for accessing direct buffers reliably. Unless explicitly requested, heap buffer 
will always be preferred to avoid potential system unstability.
   [junit4]   2> 204540 WARN  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 204541 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log jetty-6.1.x
   [junit4]   2> 204546 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Extract 
jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_35709_datanode____8d4ukl/webapp
   [junit4]   2> 204622 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35709
   [junit4]   2> 204870 ERROR (DataNode: 
[[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-001/hdfsBaseDir/data/data3/,
 
[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-001/hdfsBaseDir/data/data4/]]
  heartbeating to localhost.localdomain/127.0.0.1:39171) [    ] 
o.a.h.h.s.d.DirectoryScanner 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
   [junit4]   2> 204871 ERROR (DataNode: 
[[[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-001/hdfsBaseDir/data/data1/,
 
[DISK]file:/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-001/hdfsBaseDir/data/data2/]]
  heartbeating to localhost.localdomain/127.0.0.1:39171) [    ] 
o.a.h.h.s.d.DirectoryScanner 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
   [junit4]   2> 204933 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x8932542f522db: from storage 
DS-ed93d7f5-e2db-4867-974a-57eb987109aa node 
DatanodeRegistration(127.0.0.1:33859, 
datanodeUuid=f4d784a5-ff8d-49ed-85ea-fc2889b5682a, infoPort=35893, 
infoSecurePort=0, ipcPort=32773, 
storageInfo=lv=-56;cid=testClusterID;nsid=1233212690;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 2 msecs
   [junit4]   2> 204933 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x8932542e6b017: from storage 
DS-5bc826a6-1bec-480b-8d03-4dddb2658e7f node 
DatanodeRegistration(127.0.0.1:40755, 
datanodeUuid=b31d512d-ac59-4eb2-8d5b-2ab546f9d50a, infoPort=37089, 
infoSecurePort=0, ipcPort=36015, 
storageInfo=lv=-56;cid=testClusterID;nsid=1233212690;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 0 msecs
   [junit4]   2> 204933 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x8932542f522db: from storage 
DS-c40fafb0-07e8-4d44-9b80-5e8d04143908 node 
DatanodeRegistration(127.0.0.1:33859, 
datanodeUuid=f4d784a5-ff8d-49ed-85ea-fc2889b5682a, infoPort=35893, 
infoSecurePort=0, ipcPort=32773, 
storageInfo=lv=-56;cid=testClusterID;nsid=1233212690;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 204934 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x8932542e6b017: from storage 
DS-2ef24a75-fc6e-4cc7-85c2-5c8a6dadefad node 
DatanodeRegistration(127.0.0.1:40755, 
datanodeUuid=b31d512d-ac59-4eb2-8d5b-2ab546f9d50a, infoPort=37089, 
infoSecurePort=0, ipcPort=36015, 
storageInfo=lv=-56;cid=testClusterID;nsid=1233212690;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 205089 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002
   [junit4]   2> 205090 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 205090 INFO  (Thread-369) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 205090 INFO  (Thread-369) [    ] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 205091 ERROR (Thread-369) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 205190 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.c.ZkTestServer start zk server on port:43707
   [junit4]   2> 205192 INFO  (zkConnectionManagerCallback-295-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205217 INFO  (jetty-launcher-292-thread-2) [    ] 
o.e.j.s.Server jetty-9.3.20.v20170531
   [junit4]   2> 205217 INFO  (jetty-launcher-292-thread-1) [    ] 
o.e.j.s.Server jetty-9.3.20.v20170531
   [junit4]   2> 205218 INFO  (jetty-launcher-292-thread-2) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@121ecf11{/solr,null,AVAILABLE}
   [junit4]   2> 205218 INFO  (jetty-launcher-292-thread-1) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@7211c42{/solr,null,AVAILABLE}
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@246af3ca{SSL,[ssl, 
http/1.1]}{127.0.0.1:45539}
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@86b5cae{SSL,[ssl, 
http/1.1]}{127.0.0.1:40065}
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.e.j.s.Server Started @207479ms
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.e.j.s.Server Started @207479ms
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=45539}
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=40065}
   [junit4]   2> 205219 ERROR (jetty-launcher-292-thread-1) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 205219 ERROR (jetty-launcher-292-thread-2) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.3.0
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.3.0
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-12-31T09:27:26.381842Z
   [junit4]   2> 205219 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-12-31T09:27:26.381833Z
   [junit4]   2> 205222 INFO  (zkConnectionManagerCallback-298-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205222 INFO  (zkConnectionManagerCallback-299-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205223 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 205223 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 205226 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 205226 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 205227 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [    ] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from 
client sessionid 0x160abe5b52c0001, likely client has closed socket
   [junit4]   2>        at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
   [junit4]   2>        at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
   [junit4]   2>        at java.base/java.lang.Thread.run(Thread.java:844)
   [junit4]   2> 205228 INFO  (jetty-launcher-292-thread-1) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:43707/solr
   [junit4]   2> 205229 INFO  (jetty-launcher-292-thread-2) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:43707/solr
   [junit4]   2> 205230 INFO  (zkConnectionManagerCallback-304-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205230 INFO  (zkConnectionManagerCallback-307-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205232 INFO  
(zkConnectionManagerCallback-309-thread-1-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205233 INFO  
(zkConnectionManagerCallback-311-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205276 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 205276 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 205277 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:45539_solr
   [junit4]   2> 205277 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40065_solr
   [junit4]   2> 205277 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.Overseer Overseer 
(id=99268194877571077-127.0.0.1:45539_solr-n_0000000000) starting
   [junit4]   2> 205278 INFO  
(zkCallback-310-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 205278 INFO  
(zkCallback-308-thread-1-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 205282 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:45539_solr
   [junit4]   2> 205282 INFO  
(zkCallback-308-thread-1-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (1) -> (2)
   [junit4]   2> 205282 INFO  
(zkCallback-310-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (1) -> (2)
   [junit4]   2> 205299 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.b.r.BackupRepositoryFactory Added backup 
repository with configuration params {type = repository,name = hdfs,class = 
org.apache.solr.core.backup.repository.HdfsBackupRepository,attributes = 
{name=hdfs, 
class=org.apache.solr.core.backup.repository.HdfsBackupRepository},args = 
{location=/backup,solr.hdfs.home=hdfs://localhost.localdomain:39171/solr,solr.hdfs.confdir=}}
   [junit4]   2> 205299 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.b.r.BackupRepositoryFactory Default 
configuration for backup repository is with configuration params {type = 
repository,name = hdfs,class = 
org.apache.solr.core.backup.repository.HdfsBackupRepository,attributes = 
{name=hdfs, 
class=org.apache.solr.core.backup.repository.HdfsBackupRepository},args = 
{location=/backup,solr.hdfs.home=hdfs://localhost.localdomain:39171/solr,solr.hdfs.confdir=}}
   [junit4]   2> 205301 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.b.r.BackupRepositoryFactory Added backup 
repository with configuration params {type = repository,name = hdfs,class = 
org.apache.solr.core.backup.repository.HdfsBackupRepository,attributes = 
{name=hdfs, 
class=org.apache.solr.core.backup.repository.HdfsBackupRepository},args = 
{location=/backup,solr.hdfs.home=hdfs://localhost.localdomain:39171/solr,solr.hdfs.confdir=}}
   [junit4]   2> 205301 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.b.r.BackupRepositoryFactory Default 
configuration for backup repository is with configuration params {type = 
repository,name = hdfs,class = 
org.apache.solr.core.backup.repository.HdfsBackupRepository,attributes = 
{name=hdfs, 
class=org.apache.solr.core.backup.repository.HdfsBackupRepository},args = 
{location=/backup,solr.hdfs.home=hdfs://localhost.localdomain:39171/solr,solr.hdfs.confdir=}}
   [junit4]   2> 205317 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 205320 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 205326 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 205326 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 205327 INFO  (jetty-launcher-292-thread-1) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/.
   [junit4]   2> 205330 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 205330 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 205331 INFO  (jetty-launcher-292-thread-2) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/.
   [junit4]   2> 205388 INFO  (zkConnectionManagerCallback-317-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205395 INFO  (zkConnectionManagerCallback-321-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 205395 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 205396 INFO  
(SUITE-TestHdfsCloudBackupRestore-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:43707/solr ready
   [junit4]   2> 205415 INFO  
(TEST-TestHdfsCloudBackupRestore.test-seed#[23F1118A31631A7F]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 205425 INFO  (qtp1171428704-1686) [n:127.0.0.1:40065_solr    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
pullReplicas=0&replicationFactor=2&property.customKey=customValue&collection.configName=conf1&maxShardsPerNode=6&name=hdfsbackuprestore&nrtReplicas=2&action=CREATE&numShards=2&tlogReplicas=1&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 205430 INFO  
(OverseerThreadFactory-605-thread-1-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.CreateCollectionCmd Create collection 
hdfsbackuprestore
   [junit4]   2> 205430 WARN  
(OverseerThreadFactory-605-thread-1-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.CreateCollectionCmd Specified number of 
replicas of 3 on collection hdfsbackuprestore is higher than the number of Solr 
instances currently live or live and part of your createNodeSet(2). It's 
unusual to run two replica of the same slice on the same Solr-instance.
   [junit4]   2> 205533 INFO  
(OverseerStateUpdate-99268194877571077-127.0.0.1:45539_solr-n_0000000000) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"hdfsbackuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"hdfsbackuprestore_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:40065/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 205534 INFO  
(OverseerStateUpdate-99268194877571077-127.0.0.1:45539_solr-n_0000000000) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"hdfsbackuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"hdfsbackuprestore_shard1_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:45539/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 205535 INFO  
(OverseerStateUpdate-99268194877571077-127.0.0.1:45539_solr-n_0000000000) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"hdfsbackuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"hdfsbackuprestore_shard1_replica_t4",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:40065/solr",
   [junit4]   2>   "type":"TLOG",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 205536 INFO  
(OverseerStateUpdate-99268194877571077-127.0.0.1:45539_solr-n_0000000000) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"hdfsbackuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"hdfsbackuprestore_shard2_replica_n6",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:45539/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 205537 INFO  
(OverseerStateUpdate-99268194877571077-127.0.0.1:45539_solr-n_0000000000) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"hdfsbackuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"hdfsbackuprestore_shard2_replica_n8",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:40065/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 205537 INFO  
(OverseerStateUpdate-99268194877571077-127.0.0.1:45539_solr-n_0000000000) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"hdfsbackuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"hdfsbackuprestore_shard2_replica_t10",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:45539/solr",
   [junit4]   2>   "type":"TLOG",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 205756 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=hdfsbackuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 205756 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 
transient cores
   [junit4]   2> 205758 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=TLOG&property.customKey=customValue&coreNodeName=core_node7&name=hdfsbackuprestore_shard1_replica_t4&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 205779 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node11&name=hdfsbackuprestore_shard2_replica_n8&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 205790 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node9&name=hdfsbackuprestore_shard2_replica_n6&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 205790 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 
transient cores
   [junit4]   2> 205807 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=TLOG&property.customKey=customValue&coreNodeName=core_node12&name=hdfsbackuprestore_shard2_replica_t10&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 205807 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node5&name=hdfsbackuprestore_shard1_replica_n2&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 205909 INFO  
(zkCallback-308-thread-1-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 205909 INFO  
(zkCallback-310-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 205910 INFO  
(zkCallback-308-thread-2-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 206769 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.SolrConfig Using Lucene 
MatchVersion: 7.3.0
   [junit4]   2> 206776 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.s.IndexSchema 
[hdfsbackuprestore_shard1_replica_t4] Schema name=minimal
   [junit4]   2> 206778 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.s.IndexSchema Loaded schema 
minimal/1.1 with uniqueid field id
   [junit4]   2> 206778 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.CoreContainer Creating SolrCore 
'hdfsbackuprestore_shard1_replica_t4' using configuration from collection 
hdfsbackuprestore, trusted=true
   [junit4]   2> 206778 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.m.r.SolrJmxReporter JMX monitoring 
for 'solr.core.hdfsbackuprestore.shard1.replica_t4' (registry 
'solr.core.hdfsbackuprestore.shard1.replica_t4') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 206783 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 206784 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.SolrCore 
[[hdfsbackuprestore_shard1_replica_t4] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/hdfsbackuprestore_shard1_replica_t4],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/./hdfsbackuprestore_shard1_replica_t4/data/]
   [junit4]   2> 206787 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene 
MatchVersion: 7.3.0
   [junit4]   2> 206788 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.SolrConfig Using Lucene 
MatchVersion: 7.3.0
   [junit4]   2> 206801 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.s.IndexSchema 
[hdfsbackuprestore_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 206801 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.s.IndexSchema 
[hdfsbackuprestore_shard2_replica_n8] Schema name=minimal
   [junit4]   2> 206803 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema 
minimal/1.1 with uniqueid field id
   [junit4]   2> 206803 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 
'hdfsbackuprestore_shard1_replica_n1' using configuration from collection 
hdfsbackuprestore, trusted=true
   [junit4]   2> 206803 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.s.IndexSchema Loaded schema 
minimal/1.1 with uniqueid field id
   [junit4]   2> 206803 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.CoreContainer Creating SolrCore 
'hdfsbackuprestore_shard2_replica_n8' using configuration from collection 
hdfsbackuprestore, trusted=true
   [junit4]   2> 206804 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring 
for 'solr.core.hdfsbackuprestore.shard1.replica_n1' (registry 
'solr.core.hdfsbackuprestore.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 206804 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 206804 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[[hdfsbackuprestore_shard1_replica_n1] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/hdfsbackuprestore_shard1_replica_n1],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/./hdfsbackuprestore_shard1_replica_n1/data/]
   [junit4]   2> 206804 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.m.r.SolrJmxReporter JMX monitoring 
for 'solr.core.hdfsbackuprestore.shard2.replica_n8' (registry 
'solr.core.hdfsbackuprestore.shard2.replica_n8') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 206804 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 206804 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.SolrCore 
[[hdfsbackuprestore_shard2_replica_n8] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/hdfsbackuprestore_shard2_replica_n8],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node1/./hdfsbackuprestore_shard2_replica_n8/data/]
   [junit4]   2> 206808 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.c.SolrConfig Using Lucene 
MatchVersion: 7.3.0
   [junit4]   2> 206812 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.s.IndexSchema 
[hdfsbackuprestore_shard2_replica_n6] Schema name=minimal
   [junit4]   2> 206814 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.SolrConfig Using Lucene 
MatchVersion: 7.3.0
   [junit4]   2> 206819 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.c.SolrConfig Using Lucene 
MatchVersion: 7.3.0
   [junit4]   2> 206819 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.s.IndexSchema Loaded schema 
minimal/1.1 with uniqueid field id
   [junit4]   2> 206819 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.c.CoreContainer Creating SolrCore 
'hdfsbackuprestore_shard2_replica_n6' using configuration from collection 
hdfsbackuprestore, trusted=true
   [junit4]   2> 206820 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.m.r.SolrJmxReporter JMX monitoring 
for 'solr.core.hdfsbackuprestore.shard2.replica_n6' (registry 
'solr.core.hdfsbackuprestore.shard2.replica_n6') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 206820 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 206820 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.c.SolrCore 
[[hdfsbackuprestore_shard2_replica_n6] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/hdfsbackuprestore_shard2_replica_n6],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/./hdfsbackuprestore_shard2_replica_n6/data/]
   [junit4]   2> 206823 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.s.IndexSchema 
[hdfsbackuprestore_shard2_replica_t10] Schema name=minimal
   [junit4]   2> 206824 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.s.IndexSchema 
[hdfsbackuprestore_shard1_replica_n2] Schema name=minimal
   [junit4]   2> 206825 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.s.IndexSchema Loaded schema 
minimal/1.1 with uniqueid field id
   [junit4]   2> 206825 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.CoreContainer Creating SolrCore 
'hdfsbackuprestore_shard2_replica_t10' using configuration from collection 
hdfsbackuprestore, trusted=true
   [junit4]   2> 206825 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.m.r.SolrJmxReporter JMX 
monitoring for 'solr.core.hdfsbackuprestore.shard2.replica_t10' (registry 
'solr.core.hdfsbackuprestore.shard2.replica_t10') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 206825 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 206825 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.SolrCore 
[[hdfsbackuprestore_shard2_replica_t10] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/hdfsbackuprestore_shard2_replica_t10],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/./hdfsbackuprestore_shard2_replica_t10/data/]
   [junit4]   2> 206826 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.s.IndexSchema Loaded schema 
minimal/1.1 with uniqueid field id
   [junit4]   2> 206826 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.c.CoreContainer Creating SolrCore 
'hdfsbackuprestore_shard1_replica_n2' using configuration from collection 
hdfsbackuprestore, trusted=true
   [junit4]   2> 206827 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.m.r.SolrJmxReporter JMX monitoring 
for 'solr.core.hdfsbackuprestore.shard1.replica_n2' (registry 
'solr.core.hdfsbackuprestore.shard1.replica_n2') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@50aae5fe
   [junit4]   2> 206827 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 206827 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.c.SolrCore 
[[hdfsbackuprestore_shard1_replica_n2] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/hdfsbackuprestore_shard1_replica_n2],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestHdfsCloudBackupRestore_23F1118A31631A7F-001/tempDir-002/node2/./hdfsbackuprestore_shard1_replica_n2/data/]
   [junit4]   2> 206894 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.UpdateHandler Using UpdateLog 
implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 206894 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.UpdateLog Initializing 
UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 206895 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.CommitTracker Hard AutoCommit: 
disabled
   [junit4]   2> 206895 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.CommitTracker Soft AutoCommit: 
disabled
   [junit4]   2> 206896 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.UpdateHandler Using UpdateLog 
implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 206896 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.UpdateLog Initializing 
UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 206897 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.CommitTracker Hard AutoCommit: 
disabled
   [junit4]   2> 206897 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.CommitTracker Soft AutoCommit: 
disabled
   [junit4]   2> 206898 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@7a217b3b[hdfsbackuprestore_shard1_replica_t4] main]
   [junit4]   2> 206898 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.r.ManagedResourceStorage 
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 206899 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.r.ManagedResourceStorage Loaded 
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 206899 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@1becd60e[hdfsbackuprestore_shard2_replica_n8] main]
   [junit4]   2> 206899 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 206900 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.r.ManagedResourceStorage 
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 206900 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.r.ManagedResourceStorage Loaded 
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 206900 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.UpdateLog Could not find max 
version in index or recent updates, using new clock 1588291119939059712
   [junit4]   2> 206900 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 206901 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.UpdateLog Could not find max 
version in index or recent updates, using new clock 1588291119940108288
   [junit4]   2> 206902 INFO  
(searcherExecutor-610-thread-1-processing-n:127.0.0.1:40065_solr 
x:hdfsbackuprestore_shard1_replica_t4 s:shard1 c:hdfsbackuprestore 
r:core_node7) [n:127.0.0.1:40065_solr c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.SolrCore 
[hdfsbackuprestore_shard1_replica_t4] Registered new searcher 
Searcher@7a217b3b[hdfsbackuprestore_shard1_replica_t4] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 206905 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ShardLeaderElectionContext 
Waiting until we see more replicas up for shard shard1: total=3 found=1 
timeoutin=9999ms
   [junit4]   2> 206905 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ShardLeaderElectionContext 
Waiting until we see more replicas up for shard shard2: total=3 found=1 
timeoutin=9999ms
   [junit4]   2> 206906 INFO  
(searcherExecutor-612-thread-1-processing-n:127.0.0.1:40065_solr 
x:hdfsbackuprestore_shard2_replica_n8 s:shard2 c:hdfsbackuprestore 
r:core_node11) [n:127.0.0.1:40065_solr c:hdfsbackuprestore s:shard2 
r:core_node11 x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.SolrCore 
[hdfsbackuprestore_shard2_replica_n8] Registered new searcher 
Searcher@1becd60e[hdfsbackuprestore_shard2_replica_n8] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 206914 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.u.UpdateHandler Using UpdateLog 
implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 206914 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.u.UpdateLog Initializing 
UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 206914 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.u.CommitTracker Hard AutoCommit: 
disabled
   [junit4]   2> 206914 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.u.CommitTracker Soft AutoCommit: 
disabled
   [junit4]   2> 206915 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.UpdateHandler Using UpdateLog 
implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 206915 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.UpdateLog Initializing 
UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 206915 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog 
implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 206915 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.u.UpdateLog Initializing 
UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 206916 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@25422840[hdfsbackuprestore_shard2_replica_t10] main]
   [junit4]   2> 206916 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.CommitTracker Hard AutoCommit: 
disabled
   [junit4]   2> 206916 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: 
disabled
   [junit4]   2> 206916 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.CommitTracker Soft AutoCommit: 
disabled
   [junit4]   2> 206916 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: 
disabled
   [junit4]   2> 206917 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.r.ManagedResourceStorage 
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 206917 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.r.ManagedResourceStorage Loaded 
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 206917 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 206917 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.u.UpdateLog Could not find max 
version in index or recent updates, using new clock 1588291119956885504
   [junit4]   2> 206918 INFO  
(searcherExecutor-614-thread-1-processing-n:127.0.0.1:45539_solr 
x:hdfsbackuprestore_shard2_replica_t10 s:shard2 c:hdfsbackuprestore 
r:core_node12) [n:127.0.0.1:45539_solr c:hdfsbackuprestore s:shard2 
r:core_node12 x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.SolrCore 
[hdfsbackuprestore_shard2_replica_t10] Registered new searcher 
Searcher@25422840[hdfsbackuprestore_shard2_replica_t10] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 206919 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@10c55a98[hdfsbackuprestore_shard2_replica_n6] main]
   [junit4]   2> 206919 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@70c33a4[hdfsbackuprestore_shard1_replica_n1] main]
   [junit4]   2> 206919 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.r.ManagedResourceStorage 
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 206920 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.r.ManagedResourceStorage Loaded 
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 206920 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.r.ManagedResourceStorage 
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 206920 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 206920 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded 
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 206921 INFO  
(searcherExecutor-613-thread-1-processing-n:127.0.0.1:45539_solr 
x:hdfsbackuprestore_shard2_replica_n6 s:shard2 c:hdfsbackuprestore 
r:core_node9) [n:127.0.0.1:45539_solr c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.c.SolrCore 
[hdfsbackuprestore_shard2_replica_n6] Registered new searcher 
Searcher@10c55a98[hdfsbackuprestore_shard2_replica_n6] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 206921 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.UpdateLog Could not find max 
version in index or recent updates, using new clock 1588291119961079808
   [junit4]   2> 206921 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 206921 INFO  
(searcherExecutor-611-thread-1-processing-n:127.0.0.1:40065_solr 
x:hdfsbackuprestore_shard1_replica_n1 s:shard1 c:hdfsbackuprestore 
r:core_node3) [n:127.0.0.1:40065_solr c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[hdfsbackuprestore_shard1_replica_n1] Registered new searcher 
Searcher@70c33a4[hdfsbackuprestore_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 206921 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max 
version in index or recent updates, using new clock 1588291119961079808
   [junit4]   2> 206926 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.UpdateHandler Using UpdateLog 
implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 206926 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.UpdateLog Initializing 
UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 206927 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.CommitTracker Hard AutoCommit: 
disabled
   [junit4]   2> 206927 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.CommitTracker Soft AutoCommit: 
disabled
   [junit4]   2> 206929 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@378e1cda[hdfsbackuprestore_shard1_replica_n2] main]
   [junit4]   2> 206930 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.r.ManagedResourceStorage 
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 206930 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.r.ManagedResourceStorage Loaded 
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 206931 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 206931 INFO  
(searcherExecutor-615-thread-1-processing-n:127.0.0.1:45539_solr 
x:hdfsbackuprestore_shard1_replica_n2 s:shard1 c:hdfsbackuprestore 
r:core_node5) [n:127.0.0.1:45539_solr c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.c.SolrCore 
[hdfsbackuprestore_shard1_replica_n2] Registered new searcher 
Searcher@378e1cda[hdfsbackuprestore_shard1_replica_n2] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 206931 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.UpdateLog Could not find max 
version in index or recent updates, using new clock 1588291119971565568
   [junit4]   2> 207006 INFO  
(zkCallback-308-thread-2-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 207006 INFO  
(zkCallback-308-thread-3-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 207006 INFO  
(zkCallback-310-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 207405 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ShardLeaderElectionContext 
Enough replicas found to continue.
   [junit4]   2> 207405 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ShardLeaderElectionContext 
Enough replicas found to continue.
   [junit4]   2> 207406 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ShardLeaderElectionContext I may 
be the new leader - try and sync
   [junit4]   2> 207406 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ShardLeaderElectionContext I may 
be the new leader - try and sync
   [junit4]   2> 207406 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:40065/solr/hdfsbackuprestore_shard2_replica_n8/
   [junit4]   2> 207406 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:40065/solr/hdfsbackuprestore_shard1_replica_t4/
   [junit4]   2> 207406 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.PeerSync PeerSync: 
core=hdfsbackuprestore_shard1_replica_t4 url=https://127.0.0.1:40065/solr START 
replicas=[https://127.0.0.1:40065/solr/hdfsbackuprestore_shard1_replica_n1/, 
https://127.0.0.1:45539/solr/hdfsbackuprestore_shard1_replica_n2/] nUpdates=100
   [junit4]   2> 207406 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.PeerSync PeerSync: 
core=hdfsbackuprestore_shard2_replica_n8 url=https://127.0.0.1:40065/solr START 
replicas=[https://127.0.0.1:45539/solr/hdfsbackuprestore_shard2_replica_n6/, 
https://127.0.0.1:45539/solr/hdfsbackuprestore_shard2_replica_t10/] nUpdates=100
   [junit4]   2> 207419 INFO  (qtp1887539743-1683) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.S.Request 
[hdfsbackuprestore_shard2_replica_t10]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=1
   [junit4]   2> 207423 INFO  (qtp1887539743-1677) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.c.S.Request 
[hdfsbackuprestore_shard1_replica_n2]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=2
   [junit4]   2> 207425 INFO  (qtp1171428704-1742) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.c.S.Request 
[hdfsbackuprestore_shard1_replica_n1]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=1
   [junit4]   2> 207427 INFO  (qtp1887539743-1685) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.c.S.Request 
[hdfsbackuprestore_shard2_replica_n6]  webapp=/solr path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 207707 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.PeerSync PeerSync: 
core=hdfsbackuprestore_shard1_replica_t4 url=https://127.0.0.1:40065/solr DONE. 
 We have no versions.  sync failed.
   [junit4]   2> 207708 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.SyncStrategy Leader's attempt to 
sync with shard failed, moving to the next candidate
   [junit4]   2> 207708 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ShardLeaderElectionContext We 
failed sync, but we have no versions - we can't sync in that case - we were 
active before, so become leader anyway
   [junit4]   2> 207708 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ShardLeaderElectionContext Found 
all replicas participating in election, clear LIR
   [junit4]   2> 207708 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.PeerSync PeerSync: 
core=hdfsbackuprestore_shard2_replica_n8 url=https://127.0.0.1:40065/solr DONE. 
 We have no versions.  sync failed.
   [junit4]   2> 207708 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.SyncStrategy Leader's attempt to 
sync with shard failed, moving to the next candidate
   [junit4]   2> 207708 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ShardLeaderElectionContext We 
failed sync, but we have no versions - we can't sync in that case - we were 
active before, so become leader anyway
   [junit4]   2> 207708 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ShardLeaderElectionContext Found 
all replicas participating in election, clear LIR
   [junit4]   2> 207709 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ZkController 
hdfsbackuprestore_shard1_replica_t4 stopping background replication from leader
   [junit4]   2> 207712 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ShardLeaderElectionContext I am 
the new leader: 
https://127.0.0.1:40065/solr/hdfsbackuprestore_shard1_replica_t4/ shard1
   [junit4]   2> 207712 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ShardLeaderElectionContext I am 
the new leader: 
https://127.0.0.1:40065/solr/hdfsbackuprestore_shard2_replica_n8/ shard2
   [junit4]   2> 207813 INFO  
(zkCallback-310-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 207813 INFO  
(zkCallback-308-thread-2-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 207813 INFO  
(zkCallback-308-thread-3-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 207863 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.c.ZkController I am the leader, no 
recovery necessary
   [junit4]   2> 207863 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.c.ZkController I am the leader, no 
recovery necessary
   [junit4]   2> 207871 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=TLOG&property.customKey=customValue&coreNodeName=core_node7&name=hdfsbackuprestore_shard1_replica_t4&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=2113
   [junit4]   2> 207871 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node11&name=hdfsbackuprestore_shard2_replica_n8&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=2092
   [junit4]   2> 207920 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.ZkController 
hdfsbackuprestore_shard2_replica_t10 starting background replication from leader
   [junit4]   2> 207920 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.c.ReplicateFromLeader Will start 
replication from leader with poll interval: 00:00:03
   [junit4]   2> 207921 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.h.ReplicationHandler Poll 
scheduled at an interval of 3000ms
   [junit4]   2> 207921 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.h.ReplicationHandler Commits will 
be reserved for 10000ms.
   [junit4]   2> 207922 INFO  (qtp1887539743-1671) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=TLOG&property.customKey=customValue&coreNodeName=core_node12&name=hdfsbackuprestore_shard2_replica_t10&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=2115
   [junit4]   2> 207924 INFO  (qtp1887539743-1675) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node9&name=hdfsbackuprestore_shard2_replica_n6&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=2134
   [junit4]   2> 207925 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=hdfsbackuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=2168
   [junit4]   2> 207935 INFO  (qtp1887539743-1747) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node5&name=hdfsbackuprestore_shard1_replica_n2&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=2128
   [junit4]   2> 207937 INFO  (qtp1171428704-1686) [n:127.0.0.1:40065_solr    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 208036 INFO  
(zkCallback-308-thread-3-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 208036 INFO  
(zkCallback-308-thread-2-processing-n:127.0.0.1:45539_solr) 
[n:127.0.0.1:45539_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 208036 INFO  
(zkCallback-310-thread-1-processing-n:127.0.0.1:40065_solr) 
[n:127.0.0.1:40065_solr    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/hdfsbackuprestore/state.json] for collection 
[hdfsbackuprestore] has occurred - updating... (live nodes size: [2])
   [junit4]   2> 208937 INFO  (qtp1171428704-1686) [n:127.0.0.1:40065_solr    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={pullReplicas=0&replicationFactor=2&property.customKey=customValue&collection.configName=conf1&maxShardsPerNode=6&name=hdfsbackuprestore&nrtReplicas=2&action=CREATE&numShards=2&tlogReplicas=1&wt=javabin&version=2}
 status=0 QTime=3512
   [junit4]   2> 208963 INFO  (qtp1171428704-1743) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node3 
x:hdfsbackuprestore_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory 
[hdfsbackuprestore_shard1_replica_n1]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:40065/solr/hdfsbackuprestore_shard1_replica_t4/&wt=javabin&version=2}{add=[0
 (1588291122080251904), 1 (1588291122089689088), 4 (1588291122089689089), 8 
(1588291122090737664), 10 (1588291122090737665), 11 (1588291122091786240), 12 
(1588291122091786241), 13 (1588291122092834816), 14 (1588291122094931968), 15 
(1588291122094931969), ... (12 adds)]} 0 10
   [junit4]   2> 208964 INFO  (qtp1887539743-1746) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node12 
x:hdfsbackuprestore_shard2_replica_t10] o.a.s.u.p.LogUpdateProcessorFactory 
[hdfsbackuprestore_shard2_replica_t10]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:40065/solr/hdfsbackuprestore_shard2_replica_n8/&wt=javabin&version=2}{add=[2
 (1588291122086543360), 3 (1588291122097029120), 5 (1588291122097029121), 6 
(1588291122097029122), 7 (1588291122100174848), 9 (1588291122101223424), 17 
(1588291122101223425), 18 (1588291122101223426), 19 (1588291122101223427)]} 0 2
   [junit4]   2> 208969 INFO  (qtp1887539743-1677) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory 
[hdfsbackuprestore_shard1_replica_n2]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:40065/solr/hdfsbackuprestore_shard1_replica_t4/&wt=javabin&version=2}{add=[0
 (1588291122080251904), 1 (1588291122089689088), 4 (1588291122089689089), 8 
(1588291122090737664), 10 (1588291122090737665), 11 (1588291122091786240), 12 
(1588291122091786241), 13 (1588291122092834816), 14 (1588291122094931968), 15 
(1588291122094931969), ... (12 adds)]} 0 16
   [junit4]   2> 208969 INFO  (qtp1887539743-1683) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.p.LogUpdateProcessorFactory 
[hdfsbackuprestore_shard2_replica_n6]  webapp=/solr path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:40065/solr/hdfsbackuprestore_shard2_replica_n8/&wt=javabin&version=2}{add=[2
 (1588291122086543360), 3 (1588291122097029120), 5 (1588291122097029121), 6 
(1588291122097029122), 7 (1588291122100174848), 9 (1588291122101223424), 17 
(1588291122101223425), 18 (1588291122101223426), 19 (1588291122101223427)]} 0 9
   [junit4]   2> 208970 INFO  (qtp1171428704-1672) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.p.LogUpdateProcessorFactory 
[hdfsbackuprestore_shard1_replica_t4]  webapp=/solr path=/update 
params={wt=javabin&version=2}{add=[0 (1588291122080251904), 1 
(1588291122089689088), 4 (1588291122089689089), 8 (1588291122090737664), 10 
(1588291122090737665), 11 (1588291122091786240), 12 (1588291122091786241), 13 
(1588291122092834816), 14 (1588291122094931968), 15 (1588291122094931969), ... 
(12 adds)]} 0 28
   [junit4]   2> 208970 INFO  (qtp1171428704-1682) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard2 r:core_node11 
x:hdfsbackuprestore_shard2_replica_n8] o.a.s.u.p.LogUpdateProcessorFactory 
[hdfsbackuprestore_shard2_replica_n8]  webapp=/solr path=/update 
params={wt=javabin&version=2}{add=[2 (1588291122086543360), 3 
(1588291122097029120), 5 (1588291122097029121), 6 (1588291122097029122), 7 
(1588291122100174848), 9 (1588291122101223424), 17 (1588291122101223425), 18 
(1588291122101223426), 19 (1588291122101223427)]} 0 22
   [junit4]   2> 208974 INFO  (qtp1887539743-1771) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1588291122113806336,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 208974 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1588291122113806336,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 208974 INFO  (qtp1887539743-1771) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard2 r:core_node9 
x:hdfsbackuprestore_shard2_replica_n6] o.a.s.u.SolrIndexWriter Calling 
setCommitData with IW:org.apache.solr.update.SolrIndexWriter@656e2c78 
commitCommandVersion:1588291122113806336
   [junit4]   2> 208975 INFO  (qtp1171428704-1684) [n:127.0.0.1:40065_solr 
c:hdfsbackuprestore s:shard1 r:core_node7 
x:hdfsbackuprestore_shard1_replica_t4] o.a.s.u.SolrIndexWriter Calling 
setCommitData with IW:org.apache.solr.update.SolrIndexWriter@33053554 
commitCommandVersion:1588291122113806336
   [junit4]   2> 208979 INFO  (qtp1887539743-1685) [n:127.0.0.1:45539_solr 
c:hdfsbackuprestore s:shard1 r:core_node5 
x:hdfsbackuprestore_shard1_replica_n2] o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1588291122119049216,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 208979 INFO  (qtp1887539743-1

[...truncated too long message...]

it4]   2> 257347 INFO  (coreCloseExecutor-458-thread-1) [n:127.0.0.1:33887_solr 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@3413e33f: rootName = 
solr_33887, domain = solr.core.collection1.shard1.replica_n1, service url = 
null, agent id = null] for registry solr.core.collection1.shard1.replica_n1 / 
com.codahale.metrics.MetricRegistry@6eccc50b
   [junit4]   2> 257347 INFO  
(zkCallback-213-thread-3-processing-n:127.0.0.1:33887_solr) 
[n:127.0.0.1:33887_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (4) -> (0)
   [junit4]   2> 257348 INFO  
(zkCallback-215-thread-4-processing-n:127.0.0.1:42197_solr) 
[n:127.0.0.1:42197_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (4) -> (0)
   [junit4]   2> 257355 INFO  (coreCloseExecutor-455-thread-1) 
[n:127.0.0.1:44111_solr c:collection1 s:shard1 r:core_node4 
x:collection1_shard1_replica_n3] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.collection1.shard1.leader, tag=1581781992
   [junit4]   2> 257357 INFO  (coreCloseExecutor-456-thread-1) 
[n:127.0.0.1:42197_solr c:collection1 s:shard1 r:core_node8 
x:collection1_shard1_replica_n7] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.collection1.shard1.leader, tag=2126708523
   [junit4]   2> 257358 INFO  (jetty-closer-184-thread-4) [    ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@7690dbe{/solr,null,UNAVAILABLE}
   [junit4]   2> 257359 INFO  (jetty-closer-184-thread-2) [    ] 
o.a.s.c.Overseer Overseer 
(id=99268192019742730-127.0.0.1:42197_solr-n_0000000000) closing
   [junit4]   2> 257359 INFO  
(OverseerStateUpdate-99268192019742730-127.0.0.1:42197_solr-n_0000000000) 
[n:127.0.0.1:42197_solr    ] o.a.s.c.Overseer Overseer Loop exiting : 
127.0.0.1:42197_solr
   [junit4]   2> 257359 INFO  (coreCloseExecutor-458-thread-1) 
[n:127.0.0.1:33887_solr c:collection1 s:shard1 r:core_node2 
x:collection1_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.collection1.shard1.leader, tag=1658691855
   [junit4]   2> 257359 WARN  
(OverseerAutoScalingTriggerThread-99268192019742730-127.0.0.1:42197_solr-n_0000000000)
 [n:127.0.0.1:42197_solr    ] o.a.s.c.a.OverseerTriggerThread 
OverseerTriggerThread woken up but we are closed, exiting.
   [junit4]   2> 257361 WARN  
(zkCallback-219-thread-3-processing-n:127.0.0.1:36055_solr) 
[n:127.0.0.1:36055_solr    ] o.a.s.c.LeaderElector Our node is no longer in 
line to be leader
   [junit4]   2> 257361 INFO  (jetty-closer-184-thread-2) [    ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@216c6533{/solr,null,UNAVAILABLE}
   [junit4]   2> 257362 INFO  
(zkCallback-219-thread-5-processing-n:127.0.0.1:36055_solr) 
[n:127.0.0.1:36055_solr    ] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:36055_solr
   [junit4]   2> 257362 INFO  (jetty-closer-184-thread-3) [    ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@6aab6fb8{/solr,null,UNAVAILABLE}
   [junit4]   2> 262317 INFO  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.c.RecoveryStrategy RecoveryStrategy has 
been closed
   [junit4]   2> 262317 INFO  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.c.RecoveryStrategy Finished recovery 
process, successful=[false]
   [junit4]   2> 262317 INFO  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.c.SolrCore 
[collection1_shard1_replica_n9]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@31cee06d
   [junit4]   2> 262317 INFO  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.core.collection1.shard1.replica_n9, tag=835641453
   [junit4]   2> 262318 INFO  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@524b13cb: rootName = 
solr_36055, domain = solr.core.collection1.shard1.replica_n9, service url = 
null, agent id = null] for registry solr.core.collection1.shard1.replica_n9 / 
com.codahale.metrics.MetricRegistry@7e35bde2
   [junit4]   2> 262324 INFO  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.collection1.shard1.leader, tag=835641453
   [junit4]   2> 262324 WARN  
(recoveryExecutor-205-thread-1-processing-n:127.0.0.1:36055_solr 
x:collection1_shard1_replica_n9 s:shard1 c:collection1 r:core_node10) 
[n:127.0.0.1:36055_solr c:collection1 s:shard1 r:core_node10 
x:collection1_shard1_replica_n9] o.a.s.c.RecoveryStrategy Stopping recovery for 
core=[collection1_shard1_replica_n9] coreNodeName=[core_node10]
   [junit4]   2> 262326 INFO  (jetty-closer-184-thread-1) [    ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@57949819{/solr,null,UNAVAILABLE}
   [junit4]   2> 262328 ERROR 
(SUITE-AssignBackwardCompatibilityTest-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper 
server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 262328 INFO  
(SUITE-AssignBackwardCompatibilityTest-seed#[23F1118A31631A7F]-worker) [    ] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1:39543 39543
   [junit4]   2> 272374 INFO  (Thread-162) [    ] o.a.s.c.ZkTestServer 
connecting to 127.0.0.1:39543 39543
   [junit4]   2> 272374 WARN  (Thread-162) [    ] o.a.s.c.ZkTestServer Watch 
limit violations: 
   [junit4]   2> Maximum concurrent create/delete watches above limit:
   [junit4]   2> 
   [junit4]   2>        5       /solr/configs/collection1
   [junit4]   2>        5       /solr/aliases.json
   [junit4]   2>        5       /solr/configs/collection1/managed-schema
   [junit4]   2>        5       /solr/clusterprops.json
   [junit4]   2>        4       /solr/security.json
   [junit4]   2> 
   [junit4]   2> Maximum concurrent data watches above limit:
   [junit4]   2> 
   [junit4]   2>        32      /solr/collections/collection1/state.json
   [junit4]   2>        5       /solr/clusterstate.json
   [junit4]   2>        3       
/solr/collections/collection1/leader_elect/shard1/election/99268192019742730-core_node8-n_0000000001
   [junit4]   2>        2       
/solr/overseer_elect/election/99268192019742732-127.0.0.1:36055_solr-n_0000000001
   [junit4]   2> 
   [junit4]   2> Maximum concurrent children watches above limit:
   [junit4]   2> 
   [junit4]   2>        5       /solr/live_nodes
   [junit4]   2>        5       /solr/collections
   [junit4]   2> 
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.AssignBackwardCompatibilityTest_23F1118A31631A7F-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
docValues:{}, maxPointsInLeafNode=1543, maxMBSortInHeap=6.583153993051027, 
sim=RandomSimilarity(queryNorm=true): {}, locale=ms-BN, timezone=America/Juneau
   [junit4]   2> NOTE: Linux 4.10.0-40-generic amd64/Oracle Corporation 9.0.1 
(64-bit)/cpus=8,threads=1,free=262072616,total=525336576
   [junit4]   2> NOTE: All tests run in this JVM: [DocValuesTest, 
DirectoryFactoryTest, TermsComponentTest, CoreMergeIndexesAdminHandlerTest, 
IgnoreCommitOptimizeUpdateProcessorFactoryTest, 
ManagedSchemaRoundRobinCloudTest, BaseCdcrDistributedZkTest, 
TestOverriddenPrefixQueryForCustomFieldType, 
LeaderInitiatedRecoveryOnShardRestartTest, CleanupOldIndexTest, 
TestPostingsSolrHighlighter, TestSolrDeletionPolicy1, DateRangeFieldTest, 
URLClassifyProcessorTest, TestNumericTerms32, ResourceLoaderTest, 
RAMDirectoryFactoryTest, TestEmbeddedSolrServerSchemaAPI, TestLargeCluster, 
SyncSliceTest, AssignBackwardCompatibilityTest]
   [junit4] Completed [67/764 (2!)] on J0 in 110.39s, 1 test, 1 failure <<< 
FAILURES!

[...truncated 2350 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/temp/junit4-J1-20171231_092358_8437864630989356919796.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) ----
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/heapdumps/java_pid32029.hprof ...
   [junit4] Heap dump file created [142179397 bytes in 0.453 secs]
   [junit4] <<< JVM J1: EOF ----

[...truncated 8571 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:835: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml:787: Some of the tests 
produced a heap dump, but did not fail. Maybe a suppressed OutOfMemoryError? 
Dumps created:
* java_pid32029.hprof

Total time: 80 minutes 33 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]