I'm looking into this failure. I think I caused it; apologies.
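
For anyone following the trace: the assertion tripping at
AbstractCloudBackupRestoreTestCase.java:211 in both suites is the cleanup
check in testRestoreFailure - the test forces a restore to fail (via the
"poisioned" backup repository registered as the default in the logs) and
then expects the half-created collection to be gone from the cluster state.
A rough sketch of that check, pieced together from the Hamcrest message
below (method and variable names are mine, not the exact test source):

  import static org.hamcrest.CoreMatchers.hasItem;
  import static org.hamcrest.CoreMatchers.not;
  import static org.hamcrest.MatcherAssert.assertThat;

  import java.util.List;
  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.request.CollectionAdminRequest;

  public class RestoreFailureCheckSketch {
    // Sketch only: after the deliberately failed restore, the partially
    // restored collection (note both shards stuck in "construction" in the
    // dump below) should have been rolled back, i.e. no longer listed.
    static void assertFailedRestoreWasCleanedUp(SolrClient solrClient,
        String restoreCollectionName) throws Exception {
      List<String> collections = CollectionAdminRequest.listCollections(solrClient);
      assertThat("Failed collection is still in the clusterstate",
          collections, not(hasItem(restoreCollectionName)));
    }
  }

Both suites fail on the same line with the same master seed, so whatever
broke the rollback in the restore path presumably affects the HDFS and
local-FS repositories alike.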

On Thu, Aug 29, 2019 at 12:56 PM Apache Jenkins Server
<[email protected]> wrote:
>
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/491/
>
> 2 tests failed.
> FAILED:  
> org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.testRestoreFailure
>
> Error Message:
> Failed collection is still in the clusterstate: 
> DocCollection(hdfsbackuprestore_testfailure_restored//collections/hdfsbackuprestore_testfailure_restored/state.json/2)={
>    "pullReplicas":0,   "replicationFactor":1,   "shards":{     "shard2":{     
>   "range":"0-7fffffff",       "state":"construction",       
> "replicas":{"core_node2":{           
> "core":"hdfsbackuprestore_testfailure_restored_shard2_replica_n1",           
> "base_url":"https://127.0.0.1:36659/solr";,           
> "node_name":"127.0.0.1:36659_solr",           "state":"down",           
> "type":"NRT",           "force_set_state":"false"}},       
> "stateTimestamp":"1567059232049688251"},     "shard1":{       
> "range":"80000000-ffffffff",       "state":"construction",       
> "replicas":{},       "stateTimestamp":"1567059232049701653"}},   
> "router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
> "autoAddReplicas":"false",   "nrtReplicas":1,   "tlogReplicas":0} Expected: 
> not a collection containing "hdfsbackuprestore_testfailure_restored"      
> but: was <[hdfsbackuprestore_testok, hdfsbackuprestore_testfailure_restored, 
> hdfsbackuprestore_testfailure, hdfsbackuprestore_testok_restored]>
>
> Stack Trace:
> java.lang.AssertionError: Failed collection is still in the clusterstate: 
> DocCollection(hdfsbackuprestore_testfailure_restored//collections/hdfsbackuprestore_testfailure_restored/state.json/2)={
>   "pullReplicas":0,
>   "replicationFactor":1,
>   "shards":{
>     "shard2":{
>       "range":"0-7fffffff",
>       "state":"construction",
>       "replicas":{"core_node2":{
>           "core":"hdfsbackuprestore_testfailure_restored_shard2_replica_n1",
>           "base_url":"https://127.0.0.1:36659/solr";,
>           "node_name":"127.0.0.1:36659_solr",
>           "state":"down",
>           "type":"NRT",
>           "force_set_state":"false"}},
>       "stateTimestamp":"1567059232049688251"},
>     "shard1":{
>       "range":"80000000-ffffffff",
>       "state":"construction",
>       "replicas":{},
>       "stateTimestamp":"1567059232049701653"}},
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"1",
>   "autoAddReplicas":"false",
>   "nrtReplicas":1,
>   "tlogReplicas":0}
> Expected: not a collection containing "hdfsbackuprestore_testfailure_restored"
>      but: was <[hdfsbackuprestore_testok, 
> hdfsbackuprestore_testfailure_restored, hdfsbackuprestore_testfailure, 
> hdfsbackuprestore_testok_restored]>
>         at 
> __randomizedtesting.SeedInfo.seed([E037D74065656872:C94B49654D3C6B5F]:0)
>         at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>         at org.junit.Assert.assertThat(Assert.java:956)
>         at 
> org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testRestoreFailure(AbstractCloudBackupRestoreTestCase.java:211)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at java.lang.Thread.run(Thread.java:748)
>
>
> FAILED:  
> org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.testRestoreFailure
>
> Error Message:
> Failed collection is still in the clusterstate: 
> DocCollection(backuprestore_testfailure_restored//collections/backuprestore_testfailure_restored/state.json/2)={
>    "pullReplicas":0,   "replicationFactor":1,   "shards":{     "shard2":{     
>   "range":"0-7fffffff",       "state":"construction",       
> "replicas":{"core_node2":{           
> "core":"backuprestore_testfailure_restored_shard2_replica_n1",           
> "base_url":"http://127.0.0.1:33205/solr";,           
> "node_name":"127.0.0.1:33205_solr",           "state":"down",           
> "type":"NRT",           "force_set_state":"false"}},       
> "stateTimestamp":"1567060879213084847"},     "shard1":{       
> "range":"80000000-ffffffff",       "state":"construction",       
> "replicas":{},       "stateTimestamp":"1567060879213099152"}},   
> "router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
> "autoAddReplicas":"false",   "nrtReplicas":1,   "tlogReplicas":0} Expected: 
> not a collection containing "backuprestore_testfailure_restored"      but: 
> was <[backuprestore_testok, backuprestore_testfailure, 
> backuprestore_testfailure_restored, backuprestore_testok_restored]>
>
> Stack Trace:
> java.lang.AssertionError: Failed collection is still in the clusterstate: 
> DocCollection(backuprestore_testfailure_restored//collections/backuprestore_testfailure_restored/state.json/2)={
>   "pullReplicas":0,
>   "replicationFactor":1,
>   "shards":{
>     "shard2":{
>       "range":"0-7fffffff",
>       "state":"construction",
>       "replicas":{"core_node2":{
>           "core":"backuprestore_testfailure_restored_shard2_replica_n1",
>           "base_url":"http://127.0.0.1:33205/solr";,
>           "node_name":"127.0.0.1:33205_solr",
>           "state":"down",
>           "type":"NRT",
>           "force_set_state":"false"}},
>       "stateTimestamp":"1567060879213084847"},
>     "shard1":{
>       "range":"80000000-ffffffff",
>       "state":"construction",
>       "replicas":{},
>       "stateTimestamp":"1567060879213099152"}},
>   "router":{"name":"compositeId"},
>   "maxShardsPerNode":"1",
>   "autoAddReplicas":"false",
>   "nrtReplicas":1,
>   "tlogReplicas":0}
> Expected: not a collection containing "backuprestore_testfailure_restored"
>      but: was <[backuprestore_testok, backuprestore_testfailure, 
> backuprestore_testfailure_restored, backuprestore_testok_restored]>
>         at 
> __randomizedtesting.SeedInfo.seed([E037D74065656872:C94B49654D3C6B5F]:0)
>         at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>         at org.junit.Assert.assertThat(Assert.java:956)
>         at 
> org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testRestoreFailure(AbstractCloudBackupRestoreTestCase.java:211)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
>         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>         at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>         at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>         at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at java.lang.Thread.run(Thread.java:748)
>
>
>
>
> Build Log:
> [...truncated 13726 lines...]
>    [junit4] Suite: 
> org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore
>    [junit4]   1> Formatting using clusterid: testClusterID
>    [junit4]   2> 439279 WARN  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
> hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
>    [junit4]   2> 439296 WARN  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
>    [junit4]   2> 439298 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: 
> afcf563148970e98786327af5e07c261fda175d3; jvm 1.8.0_191-b12
>    [junit4]   2> 439300 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session DefaultSessionIdManager workerName=node0
>    [junit4]   2> 439300 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session No SessionScavenger set, using defaults
>    [junit4]   2> 439300 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session node0 Scavenging every 600000ms
>    [junit4]   2> 439301 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.s.ServletContextHandler@2774068b{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,AVAILABLE}
>    [junit4]   2> 439459 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.w.WebAppContext@f007949{hdfs,/,file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/jetty-localhost.localdomain-36239-hdfs-_-any-924387434669286531.dir/webapp/,AVAILABLE}{/hdfs}
>    [junit4]   2> 439460 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.AbstractConnector Started 
> ServerConnector@7f6b887c{HTTP/1.1,[http/1.1]}{localhost.localdomain:36239}
>    [junit4]   2> 439461 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.Server Started @439527ms
>    [junit4]   2> 439553 WARN  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
>    [junit4]   2> 439556 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: 
> afcf563148970e98786327af5e07c261fda175d3; jvm 1.8.0_191-b12
>    [junit4]   2> 439556 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session DefaultSessionIdManager workerName=node0
>    [junit4]   2> 439556 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session No SessionScavenger set, using defaults
>    [junit4]   2> 439557 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session node0 Scavenging every 600000ms
>    [junit4]   2> 439557 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.s.ServletContextHandler@6adf3fad{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,AVAILABLE}
>    [junit4]   2> 439714 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.w.WebAppContext@1c703108{datanode,/,file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/jetty-localhost-37543-datanode-_-any-7314119788980653551.dir/webapp/,AVAILABLE}{/datanode}
>    [junit4]   2> 439715 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.AbstractConnector Started 
> ServerConnector@7162d9c9{HTTP/1.1,[http/1.1]}{localhost:37543}
>    [junit4]   2> 439715 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.Server Started @439781ms
>    [junit4]   2> 439791 WARN  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
>    [junit4]   2> 439792 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: 
> afcf563148970e98786327af5e07c261fda175d3; jvm 1.8.0_191-b12
>    [junit4]   2> 439794 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session DefaultSessionIdManager workerName=node0
>    [junit4]   2> 439794 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session No SessionScavenger set, using defaults
>    [junit4]   2> 439794 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.session node0 Scavenging every 600000ms
>    [junit4]   2> 439795 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.s.ServletContextHandler@11532006{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,AVAILABLE}
>    [junit4]   2> 439974 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xcac9d599fa230d3: Processing first 
> storage report for DS-d29fa2ae-e164-4cca-aa51-f36bddc1bd73 from datanode 
> f9a9e1ed-6c2b-46ce-b8bb-7bae1b0f893d
>    [junit4]   2> 439974 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xcac9d599fa230d3: from storage 
> DS-d29fa2ae-e164-4cca-aa51-f36bddc1bd73 node 
> DatanodeRegistration(127.0.0.1:38477, 
> datanodeUuid=f9a9e1ed-6c2b-46ce-b8bb-7bae1b0f893d, infoPort=34367, 
> infoSecurePort=0, ipcPort=41639, 
> storageInfo=lv=-57;cid=testClusterID;nsid=968518402;c=1567059213337), blocks: 
> 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
>    [junit4]   2> 439974 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xcac9d599fa230d3: Processing first 
> storage report for DS-1c4da6b9-2544-4f1f-b527-c4142a5267fd from datanode 
> f9a9e1ed-6c2b-46ce-b8bb-7bae1b0f893d
>    [junit4]   2> 439974 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xcac9d599fa230d3: from storage 
> DS-1c4da6b9-2544-4f1f-b527-c4142a5267fd node 
> DatanodeRegistration(127.0.0.1:38477, 
> datanodeUuid=f9a9e1ed-6c2b-46ce-b8bb-7bae1b0f893d, infoPort=34367, 
> infoSecurePort=0, ipcPort=41639, 
> storageInfo=lv=-57;cid=testClusterID;nsid=968518402;c=1567059213337), blocks: 
> 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
>    [junit4]   2> 440012 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.w.WebAppContext@397cc67d{datanode,/,file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/jetty-localhost-42969-datanode-_-any-8925702212772852981.dir/webapp/,AVAILABLE}{/datanode}
>    [junit4]   2> 440012 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.AbstractConnector Started 
> ServerConnector@4892d943{HTTP/1.1,[http/1.1]}{localhost:42969}
>    [junit4]   2> 440012 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.e.j.s.Server Started @440079ms
>    [junit4]   2> 440173 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xb57ad50ac595db20: Processing first 
> storage report for DS-ae78d8eb-dd57-4c19-ae6e-ea8f8519c130 from datanode 
> d9c8819b-1365-4c42-ae05-ffe965768d2c
>    [junit4]   2> 440173 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xb57ad50ac595db20: from storage 
> DS-ae78d8eb-dd57-4c19-ae6e-ea8f8519c130 node 
> DatanodeRegistration(127.0.0.1:42413, 
> datanodeUuid=d9c8819b-1365-4c42-ae05-ffe965768d2c, infoPort=40741, 
> infoSecurePort=0, ipcPort=34355, 
> storageInfo=lv=-57;cid=testClusterID;nsid=968518402;c=1567059213337), blocks: 
> 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
>    [junit4]   2> 440173 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xb57ad50ac595db20: Processing first 
> storage report for DS-ef48ae90-a08b-4a82-9795-00787d190e45 from datanode 
> d9c8819b-1365-4c42-ae05-ffe965768d2c
>    [junit4]   2> 440173 INFO  (Block report processor) [     ] 
> BlockStateChange BLOCK* processReport 0xb57ad50ac595db20: from storage 
> DS-ef48ae90-a08b-4a82-9795-00787d190e45 node 
> DatanodeRegistration(127.0.0.1:42413, 
> datanodeUuid=d9c8819b-1365-4c42-ae05-ffe965768d2c, infoPort=40741, 
> infoSecurePort=0, ipcPort=34355, 
> storageInfo=lv=-57;cid=testClusterID;nsid=968518402;c=1567059213337), blocks: 
> 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
>    [junit4]   2> 440259 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002
>    [junit4]   2> 440260 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
>    [junit4]   2> 440260 INFO  (ZkTestServer Run Thread) [     ] 
> o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
>    [junit4]   2> 440260 INFO  (ZkTestServer Run Thread) [     ] 
> o.a.s.c.ZkTestServer Starting server
>    [junit4]   2> 440360 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.ZkTestServer start zk server on port:45147
>    [junit4]   2> 440360 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.ZkTestServer waitForServerUp: 127.0.0.1:45147
>    [junit4]   2> 440360 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:45147
>    [junit4]   2> 440360 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.ZkTestServer connecting to 127.0.0.1 45147
>    [junit4]   2> 440363 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440367 INFO  (zkConnectionManagerCallback-2523-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440367 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 440371 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440372 INFO  (zkConnectionManagerCallback-2525-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440372 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 440376 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440378 INFO  (zkConnectionManagerCallback-2527-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440378 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 440486 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
>    [junit4]   2> 440486 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
>    [junit4]   2> 440487 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
>    [junit4]   2> 440487 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
>    [junit4]   2> 440487 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
>    [junit4]   2> 440487 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
>    [junit4]   2> 440487 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: 
> afcf563148970e98786327af5e07c261fda175d3; jvm 1.8.0_191-b12
>    [junit4]   2> 440487 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: 
> afcf563148970e98786327af5e07c261fda175d3; jvm 1.8.0_191-b12
>    [junit4]   2> 440495 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.session DefaultSessionIdManager workerName=node0
>    [junit4]   2> 440495 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.session No SessionScavenger set, using defaults
>    [junit4]   2> 440496 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.session node0 Scavenging every 660000ms
>    [junit4]   2> 440496 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.session DefaultSessionIdManager workerName=node0
>    [junit4]   2> 440496 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.session No SessionScavenger set, using defaults
>    [junit4]   2> 440496 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.session node0 Scavenging every 600000ms
>    [junit4]   2> 440496 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.s.ServletContextHandler@45e0eb05{/solr,null,AVAILABLE}
>    [junit4]   2> 440496 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.h.ContextHandler Started 
> o.e.j.s.ServletContextHandler@2f3604af{/solr,null,AVAILABLE}
>    [junit4]   2> 440497 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.AbstractConnector Started ServerConnector@224c8694{SSL,[ssl, 
> http/1.1]}{127.0.0.1:36659}
>    [junit4]   2> 440497 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.s.Server Started @440564ms
>    [junit4]   2> 440497 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
> hostPort=36659}
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.AbstractConnector Started ServerConnector@341995c6{SSL,[ssl, 
> http/1.1]}{127.0.0.1:46735}
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.s.Server Started @440564ms
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
> hostPort=46735}
>    [junit4]   2> 440498 ERROR (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
> missing or incomplete.
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.s.SolrDispatchFilter Using logger factory 
> org.apache.logging.slf4j.Log4jLoggerFactory
>    [junit4]   2> 440498 ERROR (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
> missing or incomplete.
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
> 8.3.0
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.s.SolrDispatchFilter Using logger factory 
> org.apache.logging.slf4j.Log4jLoggerFactory
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port 
> null
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
> 8.3.0
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port 
> null
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
> 2019-08-29T06:13:34.605Z
>    [junit4]   2> 440498 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
> 2019-08-29T06:13:34.605Z
>    [junit4]   2> 440500 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440503 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440503 INFO  (zkConnectionManagerCallback-2530-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440503 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 440504 INFO  (zkConnectionManagerCallback-2532-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440504 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 440504 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
>    [junit4]   2> 440505 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
>    [junit4]   2> 440523 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.SolrXmlConfig MBean server found: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267, but no JMX reporters were 
> configured - adding default JMX reporter.
>    [junit4]   2> 440530 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.SolrXmlConfig MBean server found: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267, but no JMX reporters were 
> configured - adding default JMX reporter.
>    [junit4]   2> 440887 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: 
> WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=true]
>    [junit4]   2> 440888 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.s.i.Http2SolrClient Create Http2SolrClient with HTTP/1.1 transport 
> since Java 8 or lower versions does not support SSL + HTTP/2
>    [junit4]   2> 440889 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.u.s.S.config Trusting all certificates configured for 
> Client@4b3ef0b7[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440889 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
> Client@4b3ef0b7[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440890 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: 
> WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=true]
>    [junit4]   2> 440891 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.s.i.Http2SolrClient Create Http2SolrClient with HTTP/1.1 transport 
> since Java 8 or lower versions does not support SSL + HTTP/2
>    [junit4]   2> 440893 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.u.s.S.config Trusting all certificates configured for 
> Client@7637eb82[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440893 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
> Client@7637eb82[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440895 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.s.i.Http2SolrClient Create Http2SolrClient with HTTP/1.1 transport 
> since Java 8 or lower versions does not support SSL + HTTP/2
>    [junit4]   2> 440901 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.s.i.Http2SolrClient Create Http2SolrClient with HTTP/1.1 transport 
> since Java 8 or lower versions does not support SSL + HTTP/2
>    [junit4]   2> 440901 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.u.s.S.config Trusting all certificates configured for 
> Client@74bd9149[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440901 WARN  (jetty-launcher-2528-thread-2) [     ] 
> o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
> Client@74bd9149[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440902 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.u.s.S.config Trusting all certificates configured for 
> Client@18e664b8[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440902 WARN  (jetty-launcher-2528-thread-1) [     ] 
> o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for 
> Client@18e664b8[provider=null,keyStore=null,trustStore=null]
>    [junit4]   2> 440903 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:45147/solr
>    [junit4]   2> 440903 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:45147/solr
>    [junit4]   2> 440905 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440906 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 440906 INFO  (zkConnectionManagerCallback-2546-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440906 INFO  (jetty-launcher-2528-thread-1) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 440911 INFO  (zkConnectionManagerCallback-2544-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 440911 INFO  (jetty-launcher-2528-thread-2) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 441011 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.c.ConnectionManager Waiting for client 
> to connect to ZooKeeper
>    [junit4]   2> 441014 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.c.ConnectionManager Waiting for client 
> to connect to ZooKeeper
>    [junit4]   2> 441014 INFO  (zkConnectionManagerCallback-2548-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 441014 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.c.ConnectionManager Client is connected 
> to ZooKeeper
>    [junit4]   2> 441024 INFO  (zkConnectionManagerCallback-2550-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 441024 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.c.ConnectionManager Client is connected 
> to ZooKeeper
>    [junit4]   2> 441203 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.OverseerElectionContext I am going to 
> be the leader 127.0.0.1:46735_solr
>    [junit4]   2> 441205 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.Overseer Overseer 
> (id=72285712308305927-127.0.0.1:46735_solr-n_0000000000) starting
>    [junit4]   2> 441223 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.c.ConnectionManager Waiting for client 
> to connect to ZooKeeper
>    [junit4]   2> 441231 INFO  (zkConnectionManagerCallback-2559-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 441231 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.c.ConnectionManager Client is connected 
> to ZooKeeper
>    [junit4]   2> 441237 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.s.i.ZkClientClusterStateProvider 
> Cluster at 127.0.0.1:45147/solr ready
>    [junit4]   2> 441243 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.ZkController Register node as live in 
> ZooKeeper:/live_nodes/127.0.0.1:36659_solr
>    [junit4]   2> 441246 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.Overseer Starting to work on the main 
> queue : 127.0.0.1:46735_solr
>    [junit4]   2> 441247 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.c.ZkStateReader Updated live nodes from 
> ZooKeeper... (0) -> (1)
>    [junit4]   2> 441257 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.ZkController Publish 
> node=127.0.0.1:46735_solr as DOWN
>    [junit4]   2> 441259 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.TransientSolrCoreCacheDefault 
> Allocating transient cache for 2147483647 transient cores
>    [junit4]   2> 441259 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.ZkController Register node as live in 
> ZooKeeper:/live_nodes/127.0.0.1:46735_solr
>    [junit4]   2> 441262 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.PackageManager clusterprops.json 
> changed , version 0
>    [junit4]   2> 441262 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.b.r.BackupRepositoryFactory Added 
> backup repository with configuration params {type = repository,name = 
> hdfs,class = 
> org.apache.solr.core.backup.repository.HdfsBackupRepository,attributes = 
> {name=hdfs, 
> class=org.apache.solr.core.backup.repository.HdfsBackupRepository},args = 
> {location=/backup,solr.hdfs.home=hdfs://localhost.localdomain:46481/solr,solr.hdfs.confdir=}}
>    [junit4]   2> 441262 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.b.r.BackupRepositoryFactory Added 
> backup repository with configuration params {type = repository,name = 
> poisioned,class = 
> org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository,attributes
>  = {default=true, name=poisioned, 
> class=org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository},}
>    [junit4]   2> 441262 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.b.r.BackupRepositoryFactory Default 
> configuration for backup repository is with configuration params {type = 
> repository,name = poisioned,class = 
> org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository,attributes
>  = {default=true, name=poisioned, 
> class=org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository},}
>    [junit4]   2> 441267 INFO  (zkCallback-2547-thread-1) [     ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
>    [junit4]   2> 441274 INFO  (zkCallback-2558-thread-1) [     ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
>    [junit4]   2> 441277 INFO  (zkCallback-2549-thread-1) [     ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
>    [junit4]   2> 441280 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.c.ConnectionManager Waiting for client 
> to connect to ZooKeeper
>    [junit4]   2> 441301 INFO  (zkConnectionManagerCallback-2564-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 441301 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.c.ConnectionManager Client is connected 
> to ZooKeeper
>    [junit4]   2> 441302 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.c.ZkStateReader Updated live nodes from 
> ZooKeeper... (0) -> (2)
>    [junit4]   2> 441305 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.s.i.ZkClientClusterStateProvider 
> Cluster at 127.0.0.1:45147/solr ready
>    [junit4]   2> 441306 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.PackageManager clusterprops.json 
> changed , version 0
>    [junit4]   2> 441306 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.b.r.BackupRepositoryFactory Added 
> backup repository with configuration params {type = repository,name = 
> hdfs,class = 
> org.apache.solr.core.backup.repository.HdfsBackupRepository,attributes = 
> {name=hdfs, 
> class=org.apache.solr.core.backup.repository.HdfsBackupRepository},args = 
> {location=/backup,solr.hdfs.home=hdfs://localhost.localdomain:46481/solr,solr.hdfs.confdir=}}
>    [junit4]   2> 441306 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.b.r.BackupRepositoryFactory Added 
> backup repository with configuration params {type = repository,name = 
> poisioned,class = 
> org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository,attributes
>  = {default=true, name=poisioned, 
> class=org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository},}
>    [junit4]   2> 441306 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.b.r.BackupRepositoryFactory Default 
> configuration for backup repository is with configuration params {type = 
> repository,name = poisioned,class = 
> org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository,attributes
>  = {default=true, name=poisioned, 
> class=org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore$PoinsionedRepository},}
>    [junit4]   2> 441332 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.h.a.MetricsHistoryHandler No .system 
> collection, keeping metrics history in memory.
>    [junit4]   2> 441378 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.h.a.MetricsHistoryHandler No .system 
> collection, keeping metrics history in memory.
>    [junit4]   2> 441409 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
> 'solr.node' (registry 'solr.node') enabled at server: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 441436 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
> 'solr.node' (registry 'solr.node') enabled at server: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 441444 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
> 'solr.jvm' (registry 'solr.jvm') enabled at server: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 441444 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
> 'solr.jetty' (registry 'solr.jetty') enabled at server: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 441445 INFO  (jetty-launcher-2528-thread-1) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.CorePropertiesLocator Found 0 core 
> definitions underneath 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/.
>    [junit4]   2> 441454 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
> 'solr.jvm' (registry 'solr.jvm') enabled at server: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 441454 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
> 'solr.jetty' (registry 'solr.jetty') enabled at server: 
> com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 441456 INFO  (jetty-launcher-2528-thread-2) 
> [n:127.0.0.1:36659_solr     ] o.a.s.c.CorePropertiesLocator Found 0 core 
> definitions underneath 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/.
>    [junit4]   2> 441578 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.MiniSolrCloudCluster waitForAllNodes: numServers=2
>    [junit4]   2> 441579 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
>    [junit4]   2> 441584 INFO  (zkConnectionManagerCallback-2571-thread-1) [   
>   ] o.a.s.c.c.ConnectionManager zkClient has connected
>    [junit4]   2> 441585 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
>    [junit4]   2> 441588 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
>    [junit4]   2> 441607 INFO  
> (SUITE-TestHdfsCloudBackupRestore-seed#[E037D74065656872]-worker) [     ] 
> o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:45147/solr ready
>    [junit4]   2> 441715 INFO  (qtp1840676713-6927) [n:127.0.0.1:36659_solr    
>  ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> collection.configName=conf1&router.name=implicit&version=2&pullReplicas=0&shards=shard1,shard2&property.customKey=customValue&maxShardsPerNode=3&router.field=shard_s&autoAddReplicas=true&name=hdfsbackuprestore_testok&nrtReplicas=2&action=CREATE&tlogReplicas=1&wt=javabin
>  and sendToOCPQueue=true
>    [junit4]   2> 441723 INFO  
> (OverseerThreadFactory-1679-thread-1-processing-n:127.0.0.1:46735_solr) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.a.c.CreateCollectionCmd Create 
> collection hdfsbackuprestore_testok
>    [junit4]   2> 441830 WARN  
> (OverseerThreadFactory-1679-thread-1-processing-n:127.0.0.1:46735_solr) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.a.c.CreateCollectionCmd Specified 
> number of replicas of 3 on collection hdfsbackuprestore_testok is higher than 
> the number of Solr instances currently live or live and part of your 
> createNodeSet(2). It's unusual to run two replica of the same slice on the 
> same Solr-instance.
>    [junit4]   2> 441836 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.o.SliceMutator createReplica() {
>    [junit4]   2>   "operation":"ADDREPLICA",
>    [junit4]   2>   "collection":"hdfsbackuprestore_testok",
>    [junit4]   2>   "shard":"shard1",
>    [junit4]   2>   "core":"hdfsbackuprestore_testok_shard1_replica_n1",
>    [junit4]   2>   "state":"down",
>    [junit4]   2>   "base_url":"https://127.0.0.1:46735/solr";,
>    [junit4]   2>   "type":"NRT",
>    [junit4]   2>   "waitForFinalState":"false"}
>    [junit4]   2> 441841 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.o.SliceMutator createReplica() {
>    [junit4]   2>   "operation":"ADDREPLICA",
>    [junit4]   2>   "collection":"hdfsbackuprestore_testok",
>    [junit4]   2>   "shard":"shard1",
>    [junit4]   2>   "core":"hdfsbackuprestore_testok_shard1_replica_n2",
>    [junit4]   2>   "state":"down",
>    [junit4]   2>   "base_url":"https://127.0.0.1:36659/solr";,
>    [junit4]   2>   "type":"NRT",
>    [junit4]   2>   "waitForFinalState":"false"}
>    [junit4]   2> 441847 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.o.SliceMutator createReplica() {
>    [junit4]   2>   "operation":"ADDREPLICA",
>    [junit4]   2>   "collection":"hdfsbackuprestore_testok",
>    [junit4]   2>   "shard":"shard1",
>    [junit4]   2>   "core":"hdfsbackuprestore_testok_shard1_replica_t4",
>    [junit4]   2>   "state":"down",
>    [junit4]   2>   "base_url":"https://127.0.0.1:46735/solr";,
>    [junit4]   2>   "type":"TLOG",
>    [junit4]   2>   "waitForFinalState":"false"}
>    [junit4]   2> 441852 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.o.SliceMutator createReplica() {
>    [junit4]   2>   "operation":"ADDREPLICA",
>    [junit4]   2>   "collection":"hdfsbackuprestore_testok",
>    [junit4]   2>   "shard":"shard2",
>    [junit4]   2>   "core":"hdfsbackuprestore_testok_shard2_replica_n7",
>    [junit4]   2>   "state":"down",
>    [junit4]   2>   "base_url":"https://127.0.0.1:36659/solr";,
>    [junit4]   2>   "type":"NRT",
>    [junit4]   2>   "waitForFinalState":"false"}
>    [junit4]   2> 441855 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.o.SliceMutator createReplica() {
>    [junit4]   2>   "operation":"ADDREPLICA",
>    [junit4]   2>   "collection":"hdfsbackuprestore_testok",
>    [junit4]   2>   "shard":"shard2",
>    [junit4]   2>   "core":"hdfsbackuprestore_testok_shard2_replica_n8",
>    [junit4]   2>   "state":"down",
>    [junit4]   2>   "base_url":"https://127.0.0.1:46735/solr";,
>    [junit4]   2>   "type":"NRT",
>    [junit4]   2>   "waitForFinalState":"false"}
>    [junit4]   2> 441859 INFO  
> (OverseerStateUpdate-72285712308305927-127.0.0.1:46735_solr-n_0000000000) 
> [n:127.0.0.1:46735_solr     ] o.a.s.c.o.SliceMutator createReplica() {
>    [junit4]   2>   "operation":"ADDREPLICA",
>    [junit4]   2>   "collection":"hdfsbackuprestore_testok",
>    [junit4]   2>   "shard":"shard2",
>    [junit4]   2>   "core":"hdfsbackuprestore_testok_shard2_replica_t10",
>    [junit4]   2>   "state":"down",
>    [junit4]   2>   "base_url":"https://127.0.0.1:36659/solr";,
>    [junit4]   2>   "type":"TLOG",
>    [junit4]   2>   "waitForFinalState":"false"}
>    [junit4]   2> 442066 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr    
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.h.a.CoreAdminOperation 
> core create command 
> qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore_testok&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node5&name=hdfsbackuprestore_testok_shard1_replica_n2&action=CREATE&numShards=2&shard=shard1&wt=javabin
>    [junit4]   2> 442066 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr    
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] 
> o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 
> 2147483647 transient cores
>    [junit4]   2> 442078 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr    
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.h.a.CoreAdminOperation 
> core create command 
> qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore_testok&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node9&name=hdfsbackuprestore_testok_shard2_replica_n7&action=CREATE&numShards=2&shard=shard2&wt=javabin
>    [junit4]   2> 442084 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr    
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.h.a.CoreAdminOperation 
> core create command 
> qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore_testok&version=2&replicaType=TLOG&property.customKey=customValue&coreNodeName=core_node12&name=hdfsbackuprestore_testok_shard2_replica_t10&action=CREATE&numShards=2&shard=shard2&wt=javabin
>    [junit4]   2> 442098 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr    
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.h.a.CoreAdminOperation 
> core create command 
> qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore_testok&version=2&replicaType=TLOG&property.customKey=customValue&coreNodeName=core_node6&name=hdfsbackuprestore_testok_shard1_replica_t4&action=CREATE&numShards=2&shard=shard1&wt=javabin
>    [junit4]   2> 442101 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr    
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.h.a.CoreAdminOperation 
> core create command 
> qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore_testok&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=hdfsbackuprestore_testok_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin
>    [junit4]   2> 442112 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr    
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.h.a.CoreAdminOperation 
> core create command 
> qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=hdfsbackuprestore_testok&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node11&name=hdfsbackuprestore_testok_shard2_replica_n8&action=CREATE&numShards=2&shard=shard2&wt=javabin
>    [junit4]   2> 443135 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.c.SolrConfig Using 
> Lucene MatchVersion: 8.3.0
>    [junit4]   2> 443135 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.c.SolrConfig Using 
> Lucene MatchVersion: 8.3.0
>    [junit4]   2> 443138 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.c.SolrConfig Using 
> Lucene MatchVersion: 8.3.0
>    [junit4]   2> 443144 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.c.SolrConfig Using 
> Lucene MatchVersion: 8.3.0
>    [junit4]   2> 443144 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.c.SolrConfig Using 
> Lucene MatchVersion: 8.3.0
>    [junit4]   2> 443151 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.c.SolrConfig Using 
> Lucene MatchVersion: 8.3.0
>    [junit4]   2> 443188 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.s.IndexSchema 
> [hdfsbackuprestore_testok_shard1_replica_n1] Schema name=minimal
>    [junit4]   2> 443204 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.s.IndexSchema 
> [hdfsbackuprestore_testok_shard1_replica_t4] Schema name=minimal
>    [junit4]   2> 443207 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.s.IndexSchema Loaded 
> schema minimal/1.1 with uniqueid field id
>    [junit4]   2> 443208 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.c.CoreContainer Creating 
> SolrCore 'hdfsbackuprestore_testok_shard1_replica_t4' using configuration 
> from collection hdfsbackuprestore_testok, trusted=true
>    [junit4]   2> 443208 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.m.r.SolrJmxReporter JMX 
> monitoring for 'solr.core.hdfsbackuprestore_testok.shard1.replica_t4' 
> (registry 'solr.core.hdfsbackuprestore_testok.shard1.replica_t4') enabled at 
> server: com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 443215 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.s.IndexSchema 
> [hdfsbackuprestore_testok_shard2_replica_n7] Schema name=minimal
>    [junit4]   2> 443218 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.s.IndexSchema 
> [hdfsbackuprestore_testok_shard1_replica_n2] Schema name=minimal
>    [junit4]   2> 443218 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.s.IndexSchema Loaded 
> schema minimal/1.1 with uniqueid field id
>    [junit4]   2> 443218 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.c.CoreContainer Creating 
> SolrCore 'hdfsbackuprestore_testok_shard2_replica_n7' using configuration 
> from collection hdfsbackuprestore_testok, trusted=true
>    [junit4]   2> 443219 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.m.r.SolrJmxReporter JMX 
> monitoring for 'solr.core.hdfsbackuprestore_testok.shard2.replica_n7' 
> (registry 'solr.core.hdfsbackuprestore_testok.shard2.replica_n7') enabled at 
> server: com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 443222 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.s.IndexSchema 
> [hdfsbackuprestore_testok_shard2_replica_t10] Schema name=minimal
>    [junit4]   2> 443226 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.s.IndexSchema Loaded 
> schema minimal/1.1 with uniqueid field id
>    [junit4]   2> 443226 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.c.CoreContainer Creating 
> SolrCore 'hdfsbackuprestore_testok_shard1_replica_n1' using configuration 
> from collection hdfsbackuprestore_testok, trusted=true
>    [junit4]   2> 443227 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.m.r.SolrJmxReporter JMX 
> monitoring for 'solr.core.hdfsbackuprestore_testok.shard1.replica_n1' 
> (registry 'solr.core.hdfsbackuprestore_testok.shard1.replica_n1') enabled at 
> server: com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 443230 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.s.IndexSchema Loaded 
> schema minimal/1.1 with uniqueid field id
>    [junit4]   2> 443231 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.c.CoreContainer Creating 
> SolrCore 'hdfsbackuprestore_testok_shard1_replica_n2' using configuration 
> from collection hdfsbackuprestore_testok, trusted=true
>    [junit4]   2> 443231 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.c.SolrCore 
> [[hdfsbackuprestore_testok_shard2_replica_n7] ] Opening new SolrCore at 
> [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/hdfsbackuprestore_testok_shard2_replica_n7],
>  
> dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/./hdfsbackuprestore_testok_shard2_replica_n7/data/]
>    [junit4]   2> 443232 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.m.r.SolrJmxReporter JMX 
> monitoring for 'solr.core.hdfsbackuprestore_testok.shard1.replica_n2' 
> (registry 'solr.core.hdfsbackuprestore_testok.shard1.replica_n2') enabled at 
> server: com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 443232 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.c.SolrCore 
> [[hdfsbackuprestore_testok_shard1_replica_n2] ] Opening new SolrCore at 
> [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/hdfsbackuprestore_testok_shard1_replica_n2],
>  
> dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/./hdfsbackuprestore_testok_shard1_replica_n2/data/]
>    [junit4]   2> 443233 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.c.SolrCore 
> [[hdfsbackuprestore_testok_shard1_replica_t4] ] Opening new SolrCore at 
> [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/hdfsbackuprestore_testok_shard1_replica_t4],
>  
> dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/./hdfsbackuprestore_testok_shard1_replica_t4/data/]
>    [junit4]   2> 443234 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.s.IndexSchema Loaded 
> schema minimal/1.1 with uniqueid field id
>    [junit4]   2> 443234 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.c.CoreContainer 
> Creating SolrCore 'hdfsbackuprestore_testok_shard2_replica_t10' using 
> configuration from collection hdfsbackuprestore_testok, trusted=true
>    [junit4]   2> 443235 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.m.r.SolrJmxReporter JMX 
> monitoring for 'solr.core.hdfsbackuprestore_testok.shard2.replica_t10' 
> (registry 'solr.core.hdfsbackuprestore_testok.shard2.replica_t10') enabled at 
> server: com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 443235 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.c.SolrCore 
> [[hdfsbackuprestore_testok_shard2_replica_t10] ] Opening new SolrCore at 
> [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/hdfsbackuprestore_testok_shard2_replica_t10],
>  
> dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node2/./hdfsbackuprestore_testok_shard2_replica_t10/data/]
>    [junit4]   2> 443236 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.c.SolrCore 
> [[hdfsbackuprestore_testok_shard1_replica_n1] ] Opening new SolrCore at 
> [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/hdfsbackuprestore_testok_shard1_replica_n1],
>  
> dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/./hdfsbackuprestore_testok_shard1_replica_n1/data/]
>    [junit4]   2> 443240 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.s.IndexSchema 
> [hdfsbackuprestore_testok_shard2_replica_n8] Schema name=minimal
>    [junit4]   2> 443243 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.s.IndexSchema Loaded 
> schema minimal/1.1 with uniqueid field id
>    [junit4]   2> 443243 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.c.CoreContainer Creating 
> SolrCore 'hdfsbackuprestore_testok_shard2_replica_n8' using configuration 
> from collection hdfsbackuprestore_testok, trusted=true
>    [junit4]   2> 443244 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.m.r.SolrJmxReporter JMX 
> monitoring for 'solr.core.hdfsbackuprestore_testok.shard2.replica_n8' 
> (registry 'solr.core.hdfsbackuprestore_testok.shard2.replica_n8') enabled at 
> server: com.sun.jmx.mbeanserver.JmxMBeanServer@481a4267
>    [junit4]   2> 443244 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.c.SolrCore 
> [[hdfsbackuprestore_testok_shard2_replica_n8] ] Opening new SolrCore at 
> [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/hdfsbackuprestore_testok_shard2_replica_n8],
>  
> dataDir=[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/build/solr-core/test/J0/temp/solr.cloud.api.collections.TestHdfsCloudBackupRestore_E037D74065656872-001/tempDir-002/node1/./hdfsbackuprestore_testok_shard2_replica_n8/data/]
>    [junit4]   2> 443410 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.UpdateHandler Using 
> UpdateLog implementation: org.apache.solr.update.UpdateLog
>    [junit4]   2> 443410 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.UpdateLog Initializing 
> UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
> maxNumLogsToKeep=10 numVersionBuckets=65536
>    [junit4]   2> 443412 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.CommitTracker Hard 
> AutoCommit: disabled
>    [junit4]   2> 443412 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.CommitTracker Soft 
> AutoCommit: disabled
>    [junit4]   2> 443418 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.s.SolrIndexSearcher 
> Opening [Searcher@5d6f00f3[hdfsbackuprestore_testok_shard1_replica_n1] main]
>    [junit4]   2> 443427 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage 
> Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
>    [junit4]   2> 443428 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage 
> Loaded null at path _rest_managed.json using 
> ZooKeeperStorageIO:path=/configs/conf1
>    [junit4]   2> 443432 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.h.ReplicationHandler 
> Commits will be reserved for 10000ms.
>    [junit4]   2> 443432 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.UpdateLog Could not 
> find max version in index or recent updates, using new clock 
> 1643180686090174464
>    [junit4]   2> 443440 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.c.ZkShardTerms 
> Successful update of terms at 
> /collections/hdfsbackuprestore_testok/terms/shard1 to 
> Terms{values={core_node3=0}, version=0}
>    [junit4]   2> 443441 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] 
> o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
> /collections/hdfsbackuprestore_testok/leaders/shard1
>    [junit4]   2> 443444 INFO  
> (searcherExecutor-1690-thread-1-processing-n:127.0.0.1:46735_solr 
> x:hdfsbackuprestore_testok_shard1_replica_n1 c:hdfsbackuprestore_testok 
> s:shard1 r:core_node3) [n:127.0.0.1:46735_solr c:hdfsbackuprestore_testok 
> s:shard1 r:core_node3 x:hdfsbackuprestore_testok_shard1_replica_n1 ] 
> o.a.s.c.SolrCore [hdfsbackuprestore_testok_shard1_replica_n1] Registered new 
> searcher Searcher@5d6f00f3[hdfsbackuprestore_testok_shard1_replica_n1] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>    [junit4]   2> 443449 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] 
> o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
> shard shard1: total=3 found=1 timeoutin=9999ms
>    [junit4]   2> 443462 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.u.UpdateHandler Using 
> UpdateLog implementation: org.apache.solr.update.UpdateLog
>    [junit4]   2> 443462 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.u.UpdateLog Initializing 
> UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
> maxNumLogsToKeep=10 numVersionBuckets=65536
>    [junit4]   2> 443462 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.u.UpdateHandler Using 
> UpdateLog implementation: org.apache.solr.update.UpdateLog
>    [junit4]   2> 443462 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.u.UpdateLog Initializing 
> UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
> maxNumLogsToKeep=10 numVersionBuckets=65536
>    [junit4]   2> 443463 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.u.CommitTracker Hard 
> AutoCommit: disabled
>    [junit4]   2> 443463 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.u.CommitTracker Soft 
> AutoCommit: disabled
>    [junit4]   2> 443463 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.u.CommitTracker Hard 
> AutoCommit: disabled
>    [junit4]   2> 443464 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.u.CommitTracker Soft 
> AutoCommit: disabled
>    [junit4]   2> 443466 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.u.UpdateHandler Using 
> UpdateLog implementation: org.apache.solr.update.UpdateLog
>    [junit4]   2> 443466 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.u.UpdateLog Initializing 
> UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
> maxNumLogsToKeep=10 numVersionBuckets=65536
>    [junit4]   2> 443467 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.s.SolrIndexSearcher 
> Opening [Searcher@7add75ac[hdfsbackuprestore_testok_shard1_replica_n2] main]
>    [junit4]   2> 443467 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.u.CommitTracker Hard 
> AutoCommit: disabled
>    [junit4]   2> 443467 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.u.CommitTracker Soft 
> AutoCommit: disabled
>    [junit4]   2> 443470 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.s.SolrIndexSearcher 
> Opening [Searcher@abe7412[hdfsbackuprestore_testok_shard1_replica_t4] main]
>    [junit4]   2> 443473 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.r.ManagedResourceStorage 
> Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
>    [junit4]   2> 443475 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.r.ManagedResourceStorage 
> Loaded null at path _rest_managed.json using 
> ZooKeeperStorageIO:path=/configs/conf1
>    [junit4]   2> 443475 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.h.ReplicationHandler 
> Commits will be reserved for 10000ms.
>    [junit4]   2> 443476 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.u.UpdateLog Could not 
> find max version in index or recent updates, using new clock 
> 1643180686136311808
>    [junit4]   2> 443484 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.r.ManagedResourceStorage 
> Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
>    [junit4]   2> 443484 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.s.SolrIndexSearcher 
> Opening [Searcher@40bbb2b8[hdfsbackuprestore_testok_shard2_replica_n8] main]
>    [junit4]   2> 443485 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.r.ManagedResourceStorage 
> Loaded null at path _rest_managed.json using 
> ZooKeeperStorageIO:path=/configs/conf1
>    [junit4]   2> 443485 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.h.ReplicationHandler 
> Commits will be reserved for 10000ms.
>    [junit4]   2> 443485 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.u.UpdateLog Could not 
> find max version in index or recent updates, using new clock 
> 1643180686145748992
>    [junit4]   2> 443487 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.r.ManagedResourceStorage 
> Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
>    [junit4]   2> 443488 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.r.ManagedResourceStorage 
> Loaded null at path _rest_managed.json using 
> ZooKeeperStorageIO:path=/configs/conf1
>    [junit4]   2> 443488 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.h.ReplicationHandler 
> Commits will be reserved for 10000ms.
>    [junit4]   2> 443489 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.u.UpdateLog Could not 
> find max version in index or recent updates, using new clock 
> 1643180686148894720
>    [junit4]   2> 443494 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.u.UpdateHandler Using 
> UpdateLog implementation: org.apache.solr.update.UpdateLog
>    [junit4]   2> 443494 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.u.UpdateLog 
> Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
> numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
>    [junit4]   2> 443494 INFO  
> (searcherExecutor-1691-thread-1-processing-n:127.0.0.1:36659_solr 
> x:hdfsbackuprestore_testok_shard1_replica_n2 c:hdfsbackuprestore_testok 
> s:shard1 r:core_node5) [n:127.0.0.1:36659_solr c:hdfsbackuprestore_testok 
> s:shard1 r:core_node5 x:hdfsbackuprestore_testok_shard1_replica_n2 ] 
> o.a.s.c.SolrCore [hdfsbackuprestore_testok_shard1_replica_n2] Registered new 
> searcher Searcher@7add75ac[hdfsbackuprestore_testok_shard1_replica_n2] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>    [junit4]   2> 443495 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.u.CommitTracker Hard 
> AutoCommit: disabled
>    [junit4]   2> 443495 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.u.CommitTracker Soft 
> AutoCommit: disabled
>    [junit4]   2> 443496 INFO  
> (searcherExecutor-1693-thread-1-processing-n:127.0.0.1:46735_solr 
> x:hdfsbackuprestore_testok_shard2_replica_n8 c:hdfsbackuprestore_testok 
> s:shard2 r:core_node11) [n:127.0.0.1:46735_solr c:hdfsbackuprestore_testok 
> s:shard2 r:core_node11 x:hdfsbackuprestore_testok_shard2_replica_n8 ] 
> o.a.s.c.SolrCore [hdfsbackuprestore_testok_shard2_replica_n8] Registered new 
> searcher Searcher@40bbb2b8[hdfsbackuprestore_testok_shard2_replica_n8] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>    [junit4]   2> 443497 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] o.a.s.c.ZkShardTerms 
> Successful update of terms at 
> /collections/hdfsbackuprestore_testok/terms/shard1 to 
> Terms{values={core_node6=0, core_node3=0}, version=1}
>    [junit4]   2> 443498 INFO  (qtp2078506737-6920) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node6 
> x:hdfsbackuprestore_testok_shard1_replica_t4 ] 
> o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
> /collections/hdfsbackuprestore_testok/leaders/shard1
>    [junit4]   2> 443500 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.s.SolrIndexSearcher 
> Opening [Searcher@69173d6[hdfsbackuprestore_testok_shard2_replica_t10] main]
>    [junit4]   2> 443501 INFO  
> (searcherExecutor-1688-thread-1-processing-n:127.0.0.1:46735_solr 
> x:hdfsbackuprestore_testok_shard1_replica_t4 c:hdfsbackuprestore_testok 
> s:shard1 r:core_node6) [n:127.0.0.1:46735_solr c:hdfsbackuprestore_testok 
> s:shard1 r:core_node6 x:hdfsbackuprestore_testok_shard1_replica_t4 ] 
> o.a.s.c.SolrCore [hdfsbackuprestore_testok_shard1_replica_t4] Registered new 
> searcher Searcher@abe7412[hdfsbackuprestore_testok_shard1_replica_t4] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>    [junit4]   2> 443503 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] 
> o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
> /configs/conf1
>    [junit4]   2> 443504 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] 
> o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
> ZooKeeperStorageIO:path=/configs/conf1
>    [junit4]   2> 443505 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.h.ReplicationHandler 
> Commits will be reserved for 10000ms.
>    [junit4]   2> 443505 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.u.UpdateLog Could not 
> find max version in index or recent updates, using new clock 
> 1643180686166720512
>    [junit4]   2> 443505 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.c.ZkShardTerms 
> Successful update of terms at 
> /collections/hdfsbackuprestore_testok/terms/shard1 to 
> Terms{values={core_node6=0, core_node3=0, core_node5=0}, version=2}
>    [junit4]   2> 443505 INFO  (qtp1840676713-6923) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] 
> o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
> /collections/hdfsbackuprestore_testok/leaders/shard1
>    [junit4]   2> 443514 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] o.a.s.c.ZkShardTerms 
> Successful update of terms at 
> /collections/hdfsbackuprestore_testok/terms/shard2 to 
> Terms{values={core_node11=0}, version=0}
>    [junit4]   2> 443514 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] 
> o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
> /collections/hdfsbackuprestore_testok/leaders/shard2
>    [junit4]   2> 443520 INFO  
> (searcherExecutor-1692-thread-1-processing-n:127.0.0.1:36659_solr 
> x:hdfsbackuprestore_testok_shard2_replica_t10 c:hdfsbackuprestore_testok 
> s:shard2 r:core_node12) [n:127.0.0.1:36659_solr c:hdfsbackuprestore_testok 
> s:shard2 r:core_node12 x:hdfsbackuprestore_testok_shard2_replica_t10 ] 
> o.a.s.c.SolrCore [hdfsbackuprestore_testok_shard2_replica_t10] Registered new 
> searcher Searcher@69173d6[hdfsbackuprestore_testok_shard2_replica_t10] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>    [junit4]   2> 443521 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] o.a.s.c.ZkShardTerms 
> Successful update of terms at 
> /collections/hdfsbackuprestore_testok/terms/shard2 to 
> Terms{values={core_node12=0, core_node11=0}, version=1}
>    [junit4]   2> 443524 INFO  (qtp2078506737-6924) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node11 
> x:hdfsbackuprestore_testok_shard2_replica_n8 ] 
> o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
> shard shard2: total=3 found=1 timeoutin=9998ms
>    [junit4]   2> 443530 INFO  (qtp1840676713-6921) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node12 
> x:hdfsbackuprestore_testok_shard2_replica_t10 ] 
> o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
> /collections/hdfsbackuprestore_testok/leaders/shard2
>    [junit4]   2> 443530 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.u.UpdateHandler Using 
> UpdateLog implementation: org.apache.solr.update.UpdateLog
>    [junit4]   2> 443530 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.u.UpdateLog Initializing 
> UpdateLog: dataDir=null defaultSyncLevel=FLUSH numRecordsToKeep=100 
> maxNumLogsToKeep=10 numVersionBuckets=65536
>    [junit4]   2> 443532 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.u.CommitTracker Hard 
> AutoCommit: disabled
>    [junit4]   2> 443532 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.u.CommitTracker Soft 
> AutoCommit: disabled
>    [junit4]   2> 443536 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.s.SolrIndexSearcher 
> Opening [Searcher@50d2c5a[hdfsbackuprestore_testok_shard2_replica_n7] main]
>    [junit4]   2> 443538 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.r.ManagedResourceStorage 
> Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
>    [junit4]   2> 443538 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.r.ManagedResourceStorage 
> Loaded null at path _rest_managed.json using 
> ZooKeeperStorageIO:path=/configs/conf1
>    [junit4]   2> 443539 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.h.ReplicationHandler 
> Commits will be reserved for 10000ms.
>    [junit4]   2> 443539 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.u.UpdateLog Could not 
> find max version in index or recent updates, using new clock 
> 1643180686202372096
>    [junit4]   2> 443545 INFO  
> (searcherExecutor-1689-thread-1-processing-n:127.0.0.1:36659_solr 
> x:hdfsbackuprestore_testok_shard2_replica_n7 c:hdfsbackuprestore_testok 
> s:shard2 r:core_node9) [n:127.0.0.1:36659_solr c:hdfsbackuprestore_testok 
> s:shard2 r:core_node9 x:hdfsbackuprestore_testok_shard2_replica_n7 ] 
> o.a.s.c.SolrCore [hdfsbackuprestore_testok_shard2_replica_n7] Registered new 
> searcher Searcher@50d2c5a[hdfsbackuprestore_testok_shard2_replica_n7] 
> main{ExitableDirectoryReader(UninvertingDirectoryReader())}
>    [junit4]   2> 443546 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] o.a.s.c.ZkShardTerms 
> Successful update of terms at 
> /collections/hdfsbackuprestore_testok/terms/shard2 to 
> Terms{values={core_node12=0, core_node11=0, core_node9=0}, version=2}
>    [junit4]   2> 443546 INFO  (qtp1840676713-6919) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard2 r:core_node9 
> x:hdfsbackuprestore_testok_shard2_replica_n7 ] 
> o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
> /collections/hdfsbackuprestore_testok/leaders/shard2
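
(The terms updates above are plain JSON znodes under
/collections/<collection>/terms/. If it helps while digging into this,
here is a small sketch for dumping one with SolrZkClient; the ZK address
and timeout are assumptions, not values from this build:

  import java.nio.charset.StandardCharsets;

  import org.apache.solr.common.cloud.SolrZkClient;

  public class DumpShardTerms {
    public static void main(String[] args) throws Exception {
      // Assumed ZK address/timeout; point this at the cluster under test.
      try (SolrZkClient zk = new SolrZkClient("127.0.0.1:2181", 10000)) {
        byte[] data = zk.getData(
            "/collections/hdfsbackuprestore_testok/terms/shard2",
            null /* watcher */, null /* stat */, true /* retryOnConnLoss */);
        // Expect something like {"core_node12":0,"core_node11":0,"core_node9":0}
        System.out.println(new String(data, StandardCharsets.UTF_8));
      }
    }
  }

The printed map should line up with the Terms{values=..., version=...}
entries in the log.)
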
>    [junit4]   2> 443952 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] 
> o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
>    [junit4]   2> 443952 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] 
> o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
>    [junit4]   2> 443952 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.c.SyncStrategy Sync 
> replicas to 
> https://127.0.0.1:46735/solr/hdfsbackuprestore_testok_shard1_replica_n1/
>    [junit4]   2> 443953 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.PeerSync PeerSync: 
> core=hdfsbackuprestore_testok_shard1_replica_n1 
> url=https://127.0.0.1:46735/solr START 
> replicas=[https://127.0.0.1:36659/solr/hdfsbackuprestore_testok_shard1_replica_n2/,
>  https://127.0.0.1:46735/solr/hdfsbackuprestore_testok_shard1_replica_t4/] 
> nUpdates=100
>    [junit4]   2> 443954 INFO  (qtp2078506737-6918) [n:127.0.0.1:46735_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node3 
> x:hdfsbackuprestore_testok_shard1_replica_n1 ] o.a.s.u.PeerSync PeerSync: 
> core=hdfsbackuprestore_testok_shard1_replica_n1 
> url=https://127.0.0.1:46735/solr DONE.  We have no versions.  sync failed.
>    [junit4]   2> 443961 INFO  (qtp1840676713-6925) [n:127.0.0.1:36659_solr 
> c:hdfsbackuprestore_testok s:shard1 r:core_node5 
> x:hdfsbackuprestore_testok_shard1_replica_n2 ] o.a.s.c.S.Request [hdfsbac
>
> [...truncated too long message...]
>
>  loading settings :: file = 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> ivy-availability-check:
> [loadresource] Do not set property disallowed.ivy.jars.list as its length is 
> 0.
>
> -ivy-fail-disallowed-ivy-version:
>
> ivy-fail:
>
> ivy-configure:
> [ivy:configure] :: loading settings :: file = 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/lucene/top-level-ivy-settings.xml
>
> resolve:
>
> [...identical ivy-availability-check / ivy-configure / resolve output 
> repeated for the remaining modules truncated...]
>
> jar-checksums:
>     [mkdir] Created dir: 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/null699052273
>      [copy] Copying 249 files to 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/null699052273
>    [delete] Deleting directory 
> /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-8.x/solr/null699052273
>
> check-working-copy:
> [ivy:cachepath] :: resolving dependencies :: #;working@lucene1-us-west
> [ivy:cachepath]         confs: [default]
> [ivy:cachepath]         found 
> org.eclipse.jgit#org.eclipse.jgit;5.3.0.201903130848-r in public
> [ivy:cachepath]         found com.jcraft#jsch;0.1.54 in public
> [ivy:cachepath]         found com.jcraft#jzlib;1.1.1 in public
> [ivy:cachepath]         found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
> [ivy:cachepath]         found org.slf4j#slf4j-api;1.7.2 in public
> [ivy:cachepath]         found org.bouncycastle#bcpg-jdk15on;1.60 in public
> [ivy:cachepath]         found org.bouncycastle#bcprov-jdk15on;1.60 in public
> [ivy:cachepath]         found org.bouncycastle#bcpkix-jdk15on;1.60 in public
> [ivy:cachepath]         found org.slf4j#slf4j-nop;1.7.2 in public
> [ivy:cachepath] :: resolution report :: resolve 30ms :: artifacts dl 2ms
>         ---------------------------------------------------------------------
>         |                  |            modules            ||   artifacts   |
>         |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
>         ---------------------------------------------------------------------
>         |      default     |   9   |   0   |   0   |   0   ||   9   |   0   |
>         ---------------------------------------------------------------------
> [wc-checker] Initializing working copy...
> [wc-checker] Checking working copy status...
>
> -jenkins-base:
>
> BUILD SUCCESSFUL
> Total time: 118 minutes 58 seconds
> Archiving artifacts
> java.lang.InterruptedException: no matches found within 10000
>         at hudson.FilePath$ValidateAntFileMask.hasMatch(FilePath.java:2847)
>         at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2726)
>         at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2707)
>         at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
> Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene
>                 at 
> hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
>                 at 
> hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
>                 at hudson.remoting.Channel.call(Channel.java:955)
>                 at hudson.FilePath.act(FilePath.java:1072)
>                 at hudson.FilePath.act(FilePath.java:1061)
>                 at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
>                 at 
> hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
>                 at 
> hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
>                 at 
> hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
>                 at 
> hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
>                 at 
> hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
>                 at hudson.model.Build$BuildExecution.post2(Build.java:186)
>                 at 
> hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
>                 at hudson.model.Run.execute(Run.java:1835)
>                 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
>                 at 
> hudson.model.ResourceController.execute(ResourceController.java:97)
>                 at hudson.model.Executor.run(Executor.java:429)
> Caused: hudson.FilePath$TunneledInterruptedException
>         at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3088)
>         at hudson.remoting.UserRequest.perform(UserRequest.java:212)
>         at hudson.remoting.UserRequest.perform(UserRequest.java:54)
>         at hudson.remoting.Request$2.run(Request.java:369)
>         at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:744)
> Caused: java.lang.InterruptedException: java.lang.InterruptedException: no 
> matches found within 10000
>         at hudson.FilePath.act(FilePath.java:1074)
>         at hudson.FilePath.act(FilePath.java:1061)
>         at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
>         at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
>         at 
> hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
>         at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
>         at 
> hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
>         at 
> hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
>         at hudson.model.Build$BuildExecution.post2(Build.java:186)
>         at 
> hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
>         at hudson.model.Run.execute(Run.java:1835)
>         at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
>         at hudson.model.ResourceController.execute(ResourceController.java:97)
>         at hudson.model.Executor.run(Executor.java:429)
> No artifacts found that match the file pattern 
> "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
> Recording test results
> Build step 'Publish JUnit test result report' changed build result to UNSTABLE
> Email was triggered for: Unstable (Test Failures)
> Sending email for trigger: Unstable (Test Failures)
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
