Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2237/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test

Error Message:
expected:<COMPLETED> but was:<FAILED>

Stack Trace:
java.lang.AssertionError: expected:<COMPLETED> but was:<FAILED>
        at 
__randomizedtesting.SeedInfo.seed([C7BE4F0A869F4392:4FEA70D028632E6A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:327)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)
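
The assertion at AbstractCloudBackupRestoreTestCase.java:327 compares the terminal state of the asynchronous backup/restore Collections API request against COMPLETED, so the <FAILED> value above indicates the Collections API request itself reported FAILED rather than the test harness misbehaving. A minimal SolrJ sketch of that kind of check follows; it is illustrative only, and the collection name, backup name, location and timeout are placeholders, not the test's actual code:

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;
    import org.apache.solr.client.solrj.response.RequestStatusState;
    import static org.junit.Assert.assertEquals;

    public class BackupStatusCheckSketch {
      // Sketch: submit an async BACKUP request and assert its final state,
      // mirroring the expected:<COMPLETED> but was:<FAILED> comparison above.
      static void assertBackupCompleted(CloudSolrClient client) throws Exception {
        CollectionAdminRequest.Backup backup =
            CollectionAdminRequest.backupCollection("backuprestore", "mybackup");
        backup.setLocation("/tmp/backup-location");  // placeholder backup location
        // processAndWait() polls the Overseer task until it reaches a terminal state
        RequestStatusState state = backup.processAndWait(client, 60);
        assertEquals(RequestStatusState.COMPLETED, state);
      }
    }

The failure can usually be replayed locally with the master seed shown in SeedInfo.seed([...]), e.g. ant test -Dtestcase=TestLocalFSCloudBackupRestore -Dtests.method=test -Dtests.seed=C7BE4F0A869F4392.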


FAILED:  
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test

Error Message:
expected:<COMPLETED> but was:<FAILED>

Stack Trace:
java.lang.AssertionError: expected:<COMPLETED> but was:<FAILED>
        at 
__randomizedtesting.SeedInfo.seed([C7BE4F0A869F4392:4FEA70D028632E6A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:327)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test

Error Message:
expected:<COMPLETED> but was:<FAILED>

Stack Trace:
java.lang.AssertionError: expected:<COMPLETED> but was:<FAILED>
        at 
__randomizedtesting.SeedInfo.seed([C7BE4F0A869F4392:4FEA70D028632E6A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:327)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test

Error Message:
expected:<COMPLETED> but was:<FAILED>

Stack Trace:
java.lang.AssertionError: expected:<COMPLETED> but was:<FAILED>
        at 
__randomizedtesting.SeedInfo.seed([C7BE4F0A869F4392:4FEA70D028632E6A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.failNotEquals(Assert.java:647)
        at org.junit.Assert.assertEquals(Assert.java:128)
        at org.junit.Assert.assertEquals(Assert.java:147)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:327)
        at 
org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:145)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)
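
For context on the log excerpt below: the suite starts a 2-node MiniSolrCloudCluster (with an embedded ZooKeeper test server) and creates the "backuprestore" collection against the "conf1" configset before the backup/restore steps run. A rough SolrCloudTestCase-style setup sketch, with the configset path as a placeholder:

    import java.nio.file.Paths;
    import org.apache.solr.cloud.SolrCloudTestCase;
    import org.junit.BeforeClass;

    public class TwoNodeClusterSketch extends SolrCloudTestCase {
      // Sketch: start two Jetty nodes plus an embedded ZooKeeper, matching the
      // "Starting cluster of 2 servers" lines recorded in the log below.
      @BeforeClass
      public static void setupCluster() throws Exception {
        configureCluster(2)
            .addConfig("conf1", Paths.get("path/to/configset"))  // placeholder path
            .configure();
      }
    }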




Build Log:
[...truncated 14209 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/init-core-data-001
   [junit4]   2> 1396160 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1396160 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 1396161 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 1396161 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001
   [junit4]   2> 1396161 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1396161 INFO  (Thread-4757) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1396161 INFO  (Thread-4757) [    ] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1396163 ERROR (Thread-4757) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1396261 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.c.ZkTestServer start zk server on port:33559
   [junit4]   2> 1396263 INFO  (zkConnectionManagerCallback-3992-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396266 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 10.0.1+10
   [junit4]   2> 1396266 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 10.0.1+10
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.session node0 Scavenging every 600000ms
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@4af12b{/solr,null,AVAILABLE}
   [junit4]   2> 1396276 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@7bd637c1{/solr,null,AVAILABLE}
   [junit4]   2> 1396277 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@36a6f04c{SSL,[ssl, 
http/1.1]}{127.0.0.1:45625}
   [junit4]   2> 1396277 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.e.j.s.Server Started @1396320ms
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.AbstractConnector Started ServerConnector@2bdfad34{SSL,[ssl, 
http/1.1]}{127.0.0.1:46429}
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=45625}
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.e.j.s.Server Started @1396320ms
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=46429}
   [junit4]   2> 1396278 ERROR (jetty-launcher-3989-thread-2) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1396278 ERROR (jetty-launcher-3989-thread-1) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.5.0
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-07-01T16:35:23.779539Z
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1396278 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-07-01T16:35:23.779581Z
   [junit4]   2> 1396279 INFO  (zkConnectionManagerCallback-3995-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396279 INFO  (zkConnectionManagerCallback-3996-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396280 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 1396280 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 1396415 INFO  (jetty-launcher-3989-thread-2) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:33559/solr
   [junit4]   2> 1396463 INFO  (zkConnectionManagerCallback-4000-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396465 INFO  (zkConnectionManagerCallback-4002-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396468 INFO  (jetty-launcher-3989-thread-1) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:33559/solr
   [junit4]   2> 1396469 INFO  (zkConnectionManagerCallback-4006-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396470 INFO  (zkConnectionManagerCallback-4010-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396525 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1396525 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1396525 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.OverseerElectionContext I am going to be 
the leader 127.0.0.1:45625_solr
   [junit4]   2> 1396525 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:46429_solr
   [junit4]   2> 1396525 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.Overseer Overseer 
(id=72296409538428932-127.0.0.1:45625_solr-n_0000000000) starting
   [junit4]   2> 1396526 INFO  (zkCallback-4001-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1396526 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1396530 INFO  (zkConnectionManagerCallback-4017-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396531 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 1396531 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:33559/solr ready
   [junit4]   2> 1396532 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating 
transient cache for 2147483647 transient cores
   [junit4]   2> 1396532 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:45625_solr
   [junit4]   2> 1396534 INFO  (zkCallback-4001-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1396534 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1396537 INFO  (zkCallback-4016-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1396538 INFO  (zkConnectionManagerCallback-4022-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396539 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (2)
   [junit4]   2> 1396539 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster 
at 127.0.0.1:33559/solr ready
   [junit4]   2> 1396547 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 1396554 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.h.a.MetricsHistoryHandler No .system 
collection, keeping metrics history in memory.
   [junit4]   2> 1396559 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46429.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1396563 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_45625.solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1396566 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46429.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1396566 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46429.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1396567 INFO  (jetty-launcher-3989-thread-1) 
[n:127.0.0.1:46429_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/.
   [junit4]   2> 1396570 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_45625.solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1396570 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_45625.solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1396571 INFO  (jetty-launcher-3989-thread-2) 
[n:127.0.0.1:45625_solr    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node2/.
   [junit4]   2> 1396607 INFO  (zkConnectionManagerCallback-4026-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396609 INFO  (zkConnectionManagerCallback-4031-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1396610 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 1396610 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:33559/solr ready
   [junit4]   2> 1396618 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[C7BE4F0A869F4392]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 1396630 INFO  (qtp504088295-16063) [n:127.0.0.1:46429_solr    
] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
pullReplicas=0&property.customKey=customValue&collection.configName=conf1&router.field=shard_s&autoAddReplicas=true&name=backuprestore&nrtReplicas=1&action=CREATE&numShards=2&tlogReplicas=0&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1396631 INFO  (OverseerThreadFactory-6153-thread-1) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Create collection backuprestore
   [junit4]   2> 1396735 INFO  
(OverseerStateUpdate-72296409538428932-127.0.0.1:45625_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1",
   [junit4]   2>   "core":"backuprestore_shard1_replica_n1",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:46429/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 1396736 INFO  
(OverseerStateUpdate-72296409538428932-127.0.0.1:45625_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"ADDREPLICA",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard2",
   [junit4]   2>   "core":"backuprestore_shard2_replica_n2",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:45625/solr",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   "waitForFinalState":"false"} 
   [junit4]   2> 1396940 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=backuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin
   [junit4]   2> 1396941 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.c.TransientSolrCoreCacheDefault 
Allocating transient cache for 2147483647 transient cores
   [junit4]   2> 1396948 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr    
x:backuprestore_shard2_replica_n2] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node4&name=backuprestore_shard2_replica_n2&action=CREATE&numShards=2&shard=shard2&wt=javabin
   [junit4]   2> 1397947 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 1397950 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.IndexSchema [backuprestore_shard1_replica_n1] Schema name=minimal
   [junit4]   2> 1397951 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 1397951 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_replica_n1' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 1397952 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46429.solr.core.backuprestore.shard1.replica_n1' (registry 
'solr.core.backuprestore.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1397952 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 1397952 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SolrCore [[backuprestore_shard1_replica_n1] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/backuprestore_shard1_replica_n1],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/./backuprestore_shard1_replica_n1/data/]
   [junit4]   2> 1397954 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 1397958 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.s.IndexSchema [backuprestore_shard2_replica_n2] Schema name=minimal
   [junit4]   2> 1397959 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 1397959 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard2_replica_n2' using 
configuration from collection backuprestore, trusted=true
   [junit4]   2> 1397959 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_45625.solr.core.backuprestore.shard2.replica_n2' (registry 
'solr.core.backuprestore.shard2.replica_n2') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1397959 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 1397959 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.SolrCore [[backuprestore_shard2_replica_n2] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node2/backuprestore_shard2_replica_n2],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node2/./backuprestore_shard2_replica_n2/data/]
   [junit4]   2> 1397995 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 1397995 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1397996 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 1397996 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 1397997 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 1397997 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1397997 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@183b6d5c[backuprestore_shard1_replica_n1] main]
   [junit4]   2> 1397998 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 1397998 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 1397999 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1397999 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1397999 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@2f0df784[backuprestore_shard2_replica_n2] main]
   [junit4]   2> 1397999 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1398000 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1398000 INFO  
(searcherExecutor-6162-thread-1-processing-n:127.0.0.1:46429_solr 
x:backuprestore_shard1_replica_n1 c:backuprestore s:shard1 r:core_node3) 
[n:127.0.0.1:46429_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n1] Registered new searcher 
Searcher@183b6d5c[backuprestore_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1398000 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604806692570136576
   [junit4]   2> 1398000 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1398000 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1398001 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604806692571185152
   [junit4]   2> 1398001 INFO  
(searcherExecutor-6163-thread-1-processing-n:127.0.0.1:45625_solr 
x:backuprestore_shard2_replica_n2 c:backuprestore s:shard2 r:core_node4) 
[n:127.0.0.1:45625_solr c:backuprestore s:shard2 r:core_node4 
x:backuprestore_shard2_replica_n2] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n2] Registered new searcher 
Searcher@2f0df784[backuprestore_shard2_replica_n2] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1398004 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1 to Terms{values={core_node3=0}, 
version=0}
   [junit4]   2> 1398004 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node4=0}, 
version=0}
   [junit4]   2> 1398005 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1398005 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1398005 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:46429/solr/backuprestore_shard1_replica_n1/
   [junit4]   2> 1398005 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1398005 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1398005 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 1398005 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:45625/solr/backuprestore_shard2_replica_n2/
   [junit4]   2> 1398005 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:46429/solr/backuprestore_shard1_replica_n1/ has no replicas
   [junit4]   2> 1398005 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1398005 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 1398006 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:45625/solr/backuprestore_shard2_replica_n2/ has no replicas
   [junit4]   2> 1398006 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1398008 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:45625/solr/backuprestore_shard2_replica_n2/ shard2
   [junit4]   2> 1398009 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:46429/solr/backuprestore_shard1_replica_n1/ shard1
   [junit4]   2> 1398110 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1398110 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1398211 INFO  (zkCallback-4001-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1398211 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1398738 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node4&name=backuprestore_shard2_replica_n2&action=CREATE&numShards=2&shard=shard2&wt=javabin}
 status=0 QTime=1789
   [junit4]   2> 1398742 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&collection=backuprestore&version=2&replicaType=NRT&property.customKey=customValue&coreNodeName=core_node3&name=backuprestore_shard1_replica_n1&action=CREATE&numShards=2&shard=shard1&wt=javabin}
 status=0 QTime=1801
   [junit4]   2> 1398743 INFO  (qtp504088295-16063) [n:127.0.0.1:46429_solr    
] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 1398743 INFO  (qtp504088295-16063) [n:127.0.0.1:46429_solr    
] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={pullReplicas=0&property.customKey=customValue&collection.configName=conf1&router.field=shard_s&autoAddReplicas=true&name=backuprestore&nrtReplicas=1&action=CREATE&numShards=2&tlogReplicas=0&wt=javabin&version=2}
 status=0 QTime=2113
   [junit4]   2> 1398749 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard2 to Terms{values={core_node4=1}, 
version=1}
   [junit4]   2> 1398749 INFO  (qtp2035474042-16059) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n2]  
webapp=/solr path=/update params={wt=javabin&version=2}{add=[0 
(1604806693353422848), 1 (1604806693353422849), 2 (1604806693353422850), 3 
(1604806693353422851), 4 (1604806693353422852), 5 (1604806693353422853), 6 
(1604806693353422854), 7 (1604806693353422855), 8 (1604806693353422856), 9 
(1604806693353422857), ... (70 adds)]} 0 2
   [junit4]   2> 1398764 INFO  (qtp2035474042-16056) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604806693371248640,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1398764 INFO  (qtp2035474042-16056) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@173e532f 
commitCommandVersion:1604806693371248640
   [junit4]   2> 1398764 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1604806693371248640,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1398765 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1398765 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1398765 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1 r:core_node3 x:backuprestore_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard1_replica_n1]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:45625/solr/backuprestore_shard2_replica_n2/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 0
   [junit4]   2> 1398768 INFO  (qtp2035474042-16056) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@1e997662[backuprestore_shard2_replica_n2] main]
   [junit4]   2> 1398768 INFO  (qtp2035474042-16056) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1398769 INFO  
(searcherExecutor-6163-thread-1-processing-n:127.0.0.1:45625_solr 
x:backuprestore_shard2_replica_n2 c:backuprestore s:shard2 r:core_node4) 
[n:127.0.0.1:45625_solr c:backuprestore s:shard2 r:core_node4 
x:backuprestore_shard2_replica_n2] o.a.s.c.SolrCore 
[backuprestore_shard2_replica_n2] Registered new searcher 
Searcher@1e997662[backuprestore_shard2_replica_n2] 
main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.5.0):C70)))}
   [junit4]   2> 1398769 INFO  (qtp2035474042-16056) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n2]  
webapp=/solr path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:45625/solr/backuprestore_shard2_replica_n2/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 5
   [junit4]   2> 1398770 INFO  (qtp2035474042-16062) [n:127.0.0.1:45625_solr 
c:backuprestore s:shard2 r:core_node4 x:backuprestore_shard2_replica_n2] 
o.a.s.u.p.LogUpdateProcessorFactory [backuprestore_shard2_replica_n2]  
webapp=/solr path=/update 
params={_stateVer_=backuprestore:4&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=}
 0 19
   [junit4]   2> 1398770 INFO  
(TEST-TestLocalFSCloudBackupRestore.test-seed#[C7BE4F0A869F4392]) [    ] 
o.a.s.c.a.c.AbstractCloudBackupRestoreTestCase Indexed 70 docs to collection: 
backuprestore
   [junit4]   2> 1398770 INFO  (qtp504088295-16065) [n:127.0.0.1:46429_solr    
] o.a.s.h.a.CollectionsHandler Invoked Collection Action :splitshard with 
params 
action=SPLITSHARD&collection=backuprestore&shard=shard1&wt=javabin&version=2 
and sendToOCPQueue=true
   [junit4]   2> 1398778 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Split shard invoked
   [junit4]   2> 1398779 INFO  
(OverseerCollectionConfigSetProcessor-72296409538428932-127.0.0.1:45625_solr-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1398782 INFO  (qtp504088295-16052) [n:127.0.0.1:46429_solr    
] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs.usableSpace&wt=javabin&version=2&group=solr.node} 
status=0 QTime=0
   [junit4]   2> 1398783 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr    
] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.core.backuprestore.shard1.replica_n1:INDEX.sizeInBytes}
 status=0 QTime=0
   [junit4]   2> 1398784 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Creating slice shard1_0 
of collection backuprestore on 127.0.0.1:46429_solr
   [junit4]   2> 1398885 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1398885 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1399784 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Adding replica 
backuprestore_shard1_0_replica_n5 as part of slice shard1_0 of collection 
backuprestore on 127.0.0.1:46429_solr
   [junit4]   2> 1399786 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 
127.0.0.1:46429_solr for creating new replica
   [junit4]   2> 1399787 INFO  
(OverseerStateUpdate-72296409538428932-127.0.0.1:45625_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"addreplica",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1_0",
   [junit4]   2>   "core":"backuprestore_shard1_0_replica_n5",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:46429/solr";,
   [junit4]   2>   "node_name":"127.0.0.1:46429_solr",
   [junit4]   2>   "type":"NRT"} 
   [junit4]   2> 1399888 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1399888 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1399988 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&name=backuprestore_shard1_0_replica_n5&action=CREATE&collection=backuprestore&shard=shard1_0&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 1399994 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 1399997 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.IndexSchema [backuprestore_shard1_0_replica_n5] Schema name=minimal
   [junit4]   2> 1399998 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 1399999 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_0_replica_n5' 
using configuration from collection backuprestore, trusted=true
   [junit4]   2> 1399999 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46429.solr.core.backuprestore.shard1_0.replica_n5' (registry 
'solr.core.backuprestore.shard1_0.replica_n5') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1399999 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 1399999 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SolrCore [[backuprestore_shard1_0_replica_n5] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/backuprestore_shard1_0_replica_n5],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/./backuprestore_shard1_0_replica_n5/data/]
   [junit4]   2> 1400048 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 1400048 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1400049 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 1400049 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 1400050 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@d94885b[backuprestore_shard1_0_replica_n5] main]
   [junit4]   2> 1400051 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1400051 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1400052 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1400052 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604806694721814528
   [junit4]   2> 1400052 INFO  
(searcherExecutor-6172-thread-1-processing-n:127.0.0.1:46429_solr 
x:backuprestore_shard1_0_replica_n5 c:backuprestore s:shard1_0 r:core_node7) 
[n:127.0.0.1:46429_solr c:backuprestore s:shard1_0 r:core_node7 
x:backuprestore_shard1_0_replica_n5] o.a.s.c.SolrCore 
[backuprestore_shard1_0_replica_n5] Registered new searcher 
Searcher@d94885b[backuprestore_shard1_0_replica_n5] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1400053 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, 
tlog=null}
   [junit4]   2> 1400054 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1_0 to Terms{values={core_node7=0}, 
version=0}
   [junit4]   2> 1400056 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1400056 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1400056 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:46429/solr/backuprestore_shard1_0_replica_n5/
   [junit4]   2> 1400056 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 1400056 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:46429/solr/backuprestore_shard1_0_replica_n5/ has no replicas
   [junit4]   2> 1400056 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1400058 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:46429/solr/backuprestore_shard1_0_replica_n5/ shard1_0
   [junit4]   2> 1400159 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1400159 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1400208 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1400234 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_0 r:core_node7 x:backuprestore_shard1_0_replica_n5] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node7&collection.configName=conf1&name=backuprestore_shard1_0_replica_n5&action=CREATE&collection=backuprestore&shard=shard1_0&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=246
   [junit4]   2> 1400234 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Creating slice shard1_1 
of collection backuprestore on 127.0.0.1:46429_solr
   [junit4]   2> 1400335 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1400335 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1401235 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Adding replica 
backuprestore_shard1_1_replica_n6 as part of slice shard1_1 of collection 
backuprestore on 127.0.0.1:46429_solr
   [junit4]   2> 1401235 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.AddReplicaCmd Node Identified 
127.0.0.1:46429_solr for creating new replica
   [junit4]   2> 1401236 INFO  
(OverseerStateUpdate-72296409538428932-127.0.0.1:45625_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator createReplica() {
   [junit4]   2>   "operation":"addreplica",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard":"shard1_1",
   [junit4]   2>   "core":"backuprestore_shard1_1_replica_n6",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "base_url":"https://127.0.0.1:46429/solr";,
   [junit4]   2>   "node_name":"127.0.0.1:46429_solr",
   [junit4]   2>   "type":"NRT"} 
   [junit4]   2> 1401337 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1401337 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1401437 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&name=backuprestore_shard1_1_replica_n6&action=CREATE&collection=backuprestore&shard=shard1_1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 1401441 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 1401453 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.IndexSchema [backuprestore_shard1_1_replica_n6] Schema name=minimal
   [junit4]   2> 1401453 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.IndexSchema Loaded schema minimal/1.1 with uniqueid field id
   [junit4]   2> 1401453 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.CoreContainer Creating SolrCore 'backuprestore_shard1_1_replica_n6' 
using configuration from collection backuprestore, trusted=true
   [junit4]   2> 1401454 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr_46429.solr.core.backuprestore.shard1_1.replica_n6' (registry 
'solr.core.backuprestore.shard1_1.replica_n6') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@1d93e442
   [junit4]   2> 1401454 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 1401454 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SolrCore [[backuprestore_shard1_1_replica_n6] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/backuprestore_shard1_1_replica_n6],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-001/tempDir-001/node1/./backuprestore_shard1_1_replica_n6/data/]
   [junit4]   2> 1401482 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 1401482 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir=null defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1401483 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 1401483 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 1401484 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@426f456b[backuprestore_shard1_1_replica_n6] main]
   [junit4]   2> 1401484 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1401485 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1401485 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1401485 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1604806696224423936
   [junit4]   2> 1401486 INFO  
(searcherExecutor-6177-thread-1-processing-n:127.0.0.1:46429_solr 
x:backuprestore_shard1_1_replica_n6 c:backuprestore s:shard1_1 r:core_node8) 
[n:127.0.0.1:46429_solr c:backuprestore s:shard1_1 r:core_node8 
x:backuprestore_shard1_1_replica_n6] o.a.s.c.SolrCore 
[backuprestore_shard1_1_replica_n6] Registered new searcher 
Searcher@426f456b[backuprestore_shard1_1_replica_n6] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1401486 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, 
tlog=null}
   [junit4]   2> 1401488 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/backuprestore/terms/shard1_1 to Terms{values={core_node8=0}, 
version=0}
   [junit4]   2> 1401489 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1401489 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1401489 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:46429/solr/backuprestore_shard1_1_replica_n6/
   [junit4]   2> 1401489 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 1401489 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:46429/solr/backuprestore_shard1_1_replica_n6/ has no replicas
   [junit4]   2> 1401489 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1401490 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:46429/solr/backuprestore_shard1_1_replica_n6/ shard1_1
   [junit4]   2> 1401592 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1401592 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1401641 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1401694 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr 
c:backuprestore s:shard1_1 r:core_node8 x:backuprestore_shard1_1_replica_n6] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&coreNodeName=core_node8&collection.configName=conf1&name=backuprestore_shard1_1_replica_n6&action=CREATE&collection=backuprestore&shard=shard1_1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=256
   [junit4]   2> 1401694 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Asking parent leader to 
wait for: backuprestore_shard1_0_replica_n5 to be alive on: 127.0.0.1:46429_solr
   [junit4]   2> 1401694 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Asking parent leader to 
wait for: backuprestore_shard1_1_replica_n6 to be alive on: 127.0.0.1:46429_solr
   [junit4]   2> 1401695 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.PrepRecoveryOp Going to wait for 
coreNodeName: core_node7, state: active, checkLive: true, onlyIfLeader: true, 
onlyIfLeaderActive: null, maxTime: 183 s
   [junit4]   2> 1401695 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(active): collection=backuprestore, shard=shard1_0, 
thisCore=backuprestore_shard1_0_replica_n5, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=active, 
localState=active, nodeName=127.0.0.1:46429_solr, coreNodeName=core_node7, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node7:{"core":"backuprestore_shard1_0_replica_n5","base_url":"https://127.0.0.1:46429/solr","node_name":"127.0.0.1:46429_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 1401695 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.PrepRecoveryOp Waited 
coreNodeName: core_node7, state: active, checkLive: true, onlyIfLeader: true 
for: 0 seconds.
   [junit4]   2> 1401695 INFO  (qtp504088295-16132) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={nodeName=127.0.0.1:46429_solr&core=backuprestore_shard1_0_replica_n5&qt=/admin/cores&coreNodeName=core_node7&action=PREPRECOVERY&checkLive=true&state=active&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 1401696 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp Going to wait for 
coreNodeName: core_node8, state: active, checkLive: true, onlyIfLeader: true, 
onlyIfLeaderActive: null, maxTime: 183 s
   [junit4]   2> 1401697 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(active): collection=backuprestore, shard=shard1_1, 
thisCore=backuprestore_shard1_1_replica_n6, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=down, 
localState=active, nodeName=127.0.0.1:46429_solr, coreNodeName=core_node8, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node8:{"core":"backuprestore_shard1_1_replica_n6","base_url":"https://127.0.0.1:46429/solr","node_name":"127.0.0.1:46429_solr","state":"down","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 1401743 INFO  (zkCallback-4001-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1401743 INFO  (zkCallback-4009-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/backuprestore/state.json] for collection [backuprestore] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1402697 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(active): collection=backuprestore, shard=shard1_1, 
thisCore=backuprestore_shard1_1_replica_n6, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=active, 
localState=active, nodeName=127.0.0.1:46429_solr, coreNodeName=core_node8, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node8:{"core":"backuprestore_shard1_1_replica_n6","base_url":"https://127.0.0.1:46429/solr","node_name":"127.0.0.1:46429_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 1402697 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.PrepRecoveryOp Waited 
coreNodeName: core_node8, state: active, checkLive: true, onlyIfLeader: true 
for: 1 seconds.
   [junit4]   2> 1402697 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={nodeName=127.0.0.1:46429_solr&core=backuprestore_shard1_1_replica_n6&qt=/admin/cores&coreNodeName=core_node8&action=PREPRECOVERY&checkLive=true&state=active&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=1000
   [junit4]   2> 1402697 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Successfully created all 
sub-shards for collection backuprestore parent shard: shard1 on: 
core_node3:{"core":"backuprestore_shard1_replica_n1","base_url":"https://127.0.0.1:46429/solr","node_name":"127.0.0.1:46429_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 1402697 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Splitting shard 
core_node3 as part of slice shard1 of collection backuprestore on 
core_node3:{"core":"backuprestore_shard1_replica_n1","base_url":"https://127.0.0.1:46429/solr","node_name":"127.0.0.1:46429_solr","state":"active","type":"NRT","force_set_state":"false","leader":"true"}
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.h.a.SplitOp Invoked split action for 
core: backuprestore_shard1_replica_n1
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start 
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted 
changes. Skipping IW.commit.
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partitions=2 segments=0
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partition #0 partitionCount=2 range=80000000-bfffffff
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling 
setCommitData with IW:org.apache.solr.update.SolrIndexWriter@1eb1b4fc 
commitCommandVersion:-1
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexSplitter SolrIndexSplitter: 
partition #1 partitionCount=2 range=c0000000-ffffffff
   [junit4]   2> 1402698 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling 
setCommitData with IW:org.apache.solr.update.SolrIndexWriter@1962f189 
commitCommandVersion:-1
   [junit4]   2> 1402699 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of 
terms at /collections/backuprestore/terms/shard1_0 to 
Terms{values={core_node7=1}, version=1}
   [junit4]   2> 1402699 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of 
terms at /collections/backuprestore/terms/shard1_1 to 
Terms{values={core_node8=1}, version=1}
   [junit4]   2> 1402699 INFO  (qtp504088295-16066) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={core=backuprestore_shard1_replica_n1&qt=/admin/cores&action=SPLIT&targetCore=backuprestore_shard1_0_replica_n5&targetCore=backuprestore_shard1_1_replica_n6&wt=javabin&version=2}
 status=0 QTime=1
   [junit4]   2> 1402699 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Index on shard: 
127.0.0.1:46429_solr split into two successfully
   [junit4]   2> 1402699 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Applying buffered updates 
on : backuprestore_shard1_0_replica_n5
   [junit4]   2> 1402699 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Applying buffered updates 
on : backuprestore_shard1_1_replica_n6
   [junit4]   2> 1402700 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.CoreAdminOperation Applying 
buffered updates on core: backuprestore_shard1_1_replica_n6
   [junit4]   2> 1402700 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.h.a.CoreAdminOperation No buffered 
updates available. core=backuprestore_shard1_1_replica_n6
   [junit4]   2> 1402700 INFO  (qtp504088295-16144) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_1_replica_n6] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&name=backuprestore_shard1_1_replica_n6&action=REQUESTAPPLYUPDATES&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 1402700 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.CoreAdminOperation Applying 
buffered updates on core: backuprestore_shard1_0_replica_n5
   [junit4]   2> 1402700 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.h.a.CoreAdminOperation No buffered 
updates available. core=backuprestore_shard1_0_replica_n5
   [junit4]   2> 1402700 INFO  (qtp504088295-16057) [n:127.0.0.1:46429_solr    
x:backuprestore_shard1_0_replica_n5] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={qt=/admin/cores&name=backuprestore_shard1_0_replica_n5&action=REQUESTAPPLYUPDATES&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 1402700 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Successfully applied 
buffered updates on : [backuprestore_shard1_0_replica_n5, 
backuprestore_shard1_1_replica_n6]
   [junit4]   2> 1402701 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Replication factor is 1 
so switching shard states
   [junit4]   2> 1402701 INFO  (OverseerThreadFactory-6153-thread-2) [ 
c:backuprestore s:shard1  ] o.a.s.c.a.c.SplitShardCmd Successfully created all 
replica shards for all sub-slices [shard1_0, shard1_1]
   [junit4]   2> 1402701 INFO  
(OverseerStateUpdate-72296409538428932-127.0.0.1:45625_solr-n_0000000000) [    
] o.a.s.c.o.SliceMutator Update shard state invoked for collection: 
backuprestore with message: {
   [junit4]   2>   "shard1":"inactive",
   [junit4]   2>   "collection":"backuprestore",
   [junit4]   2>   "shard1_1":"active",
   [junit4]   2>   "operation":"updateshardstate",
   [

[...truncated too long message...]

 nodes from ZooKeeper... (1) -> (0)
   [junit4]   2> 30477 INFO  (coreCloseExecutor-90-thread-1) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.c.SolrCore 
[backuprestore_shard1_replica_n1]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@8590bd5
   [junit4]   2> 30477 INFO  (coreCloseExecutor-90-thread-1) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.core.backuprestore.shard1.replica_n1, tag=8590bd5
   [junit4]   2> 30478 INFO  (coreCloseExecutor-90-thread-1) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@283155ea: rootName = 
solr_36125, domain = solr.core.backuprestore.shard1.replica_n1, service url = 
null, agent id = null] for registry solr.core.backuprestore.shard1.replica_n1 / 
com.codahale.metrics.MetricRegistry@45ed9b1b
   [junit4]   2> 30486 INFO  (coreCloseExecutor-89-thread-1) 
[n:127.0.0.1:41379_solr c:backuprestore s:shard2 r:core_node4 
x:backuprestore_shard2_replica_n2] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.backuprestore.shard2.leader, tag=6e958738
   [junit4]   2> 30489 INFO  (coreCloseExecutor-90-thread-1) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1 r:core_node3 
x:backuprestore_shard1_replica_n1] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.backuprestore.shard1.leader, tag=8590bd5
   [junit4]   2> 30495 INFO  (coreCloseExecutor-90-thread-2) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_0 r:core_node7 
x:backuprestore_shard1_0_replica_n5] o.a.s.c.SolrCore 
[backuprestore_shard1_0_replica_n5]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@6f758db6
   [junit4]   2> 30495 INFO  (coreCloseExecutor-90-thread-3) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_1 r:core_node8 
x:backuprestore_shard1_1_replica_n6] o.a.s.c.SolrCore 
[backuprestore_shard1_1_replica_n6]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@554b99fc
   [junit4]   2> 30495 INFO  (coreCloseExecutor-90-thread-3) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_1 r:core_node8 
x:backuprestore_shard1_1_replica_n6] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.core.backuprestore.shard1_1.replica_n6, tag=554b99fc
   [junit4]   2> 30495 INFO  (coreCloseExecutor-90-thread-3) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_1 r:core_node8 
x:backuprestore_shard1_1_replica_n6] o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@2193bb6e: rootName = 
solr_36125, domain = solr.core.backuprestore.shard1_1.replica_n6, service url = 
null, agent id = null] for registry solr.core.backuprestore.shard1_1.replica_n6 
/ com.codahale.metrics.MetricRegistry@70ba5643
   [junit4]   2> 30505 INFO  (coreCloseExecutor-90-thread-2) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_0 r:core_node7 
x:backuprestore_shard1_0_replica_n5] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.core.backuprestore.shard1_0.replica_n5, tag=6f758db6
   [junit4]   2> 30506 INFO  (coreCloseExecutor-90-thread-2) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_0 r:core_node7 
x:backuprestore_shard1_0_replica_n5] o.a.s.m.r.SolrJmxReporter Closing reporter 
[org.apache.solr.metrics.reporters.SolrJmxReporter@4c015ef2: rootName = 
solr_36125, domain = solr.core.backuprestore.shard1_0.replica_n5, service url = 
null, agent id = null] for registry solr.core.backuprestore.shard1_0.replica_n5 
/ com.codahale.metrics.MetricRegistry@3795acae
   [junit4]   2> 30520 INFO  (jetty-closer-45-thread-1) [    ] o.a.s.c.Overseer 
Overseer (id=72296492112019460-127.0.0.1:41379_solr-n_0000000000) closing
   [junit4]   2> 30520 INFO  
(OverseerStateUpdate-72296492112019460-127.0.0.1:41379_solr-n_0000000000) [    
] o.a.s.c.Overseer Overseer Loop exiting : 127.0.0.1:41379_solr
   [junit4]   2> 30520 WARN  
(OverseerAutoScalingTriggerThread-72296492112019460-127.0.0.1:41379_solr-n_0000000000)
 [    ] o.a.s.c.a.OverseerTriggerThread OverseerTriggerThread woken up but we 
are closed, exiting.
   [junit4]   2> 30521 INFO  (coreCloseExecutor-90-thread-3) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_1 r:core_node8 
x:backuprestore_shard1_1_replica_n6] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.backuprestore.shard1_1.leader, 
tag=554b99fc
   [junit4]   2> 30521 INFO  (coreCloseExecutor-90-thread-2) 
[n:127.0.0.1:36125_solr c:backuprestore s:shard1_0 r:core_node7 
x:backuprestore_shard1_0_replica_n5] o.a.s.m.SolrMetricManager Closing metric 
reporters for registry=solr.collection.backuprestore.shard1_0.leader, 
tag=6f758db6
   [junit4]   2> 30536 INFO  (zkCallback-70-thread-1) [    ] 
o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:36125_solr
   [junit4]   2> 30536 INFO  (jetty-closer-45-thread-1) [    ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@18664bc6{/solr,null,UNAVAILABLE}
   [junit4]   2> 30537 INFO  (jetty-closer-45-thread-1) [    ] o.e.j.s.session 
node0 Stopped scavenging
   [junit4]   2> 30540 INFO  (jetty-closer-45-thread-2) [    ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@6d25633{/solr,null,UNAVAILABLE}
   [junit4]   2> 30541 INFO  (jetty-closer-45-thread-2) [    ] o.e.j.s.session 
node0 Stopped scavenging
   [junit4]   2> 30541 ERROR 
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper 
server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 30542 INFO  
(SUITE-TestLocalFSCloudBackupRestore-seed#[C7BE4F0A869F4392]-worker) [    ] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1:43953 43953
   [junit4]   2> 30650 INFO  (Thread-27) [    ] o.a.s.c.ZkTestServer connecting 
to 127.0.0.1:43953 43953
   [junit4]   2> 30650 WARN  (Thread-27) [    ] o.a.s.c.ZkTestServer Watch 
limit violations: 
   [junit4]   2> Maximum concurrent create/delete watches above limit:
   [junit4]   2> 
   [junit4]   2>        5       /solr/aliases.json
   [junit4]   2>        3       /solr/collections/backuprestore/terms/shard1_0
   [junit4]   2>        3       /solr/collections/backuprestore/terms/shard2
   [junit4]   2>        3       /solr/collections/backuprestore/terms/shard1_1
   [junit4]   2>        2       /solr/security.json
   [junit4]   2>        2       /solr/configs/conf1
   [junit4]   2>        2       /solr/collections/backuprestore/terms/shard1
   [junit4]   2> 
   [junit4]   2> Maximum concurrent data watches above limit:
   [junit4]   2> 
   [junit4]   2>        5       /solr/clusterstate.json
   [junit4]   2>        5       /solr/clusterprops.json
   [junit4]   2>        2       /solr/collections/backuprestore/state.json
   [junit4]   2> 
   [junit4]   2> Maximum concurrent children watches above limit:
   [junit4]   2> 
   [junit4]   2>        5       /solr/live_nodes
   [junit4]   2>        5       /solr/collections
   [junit4]   2> 
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.api.collections.TestLocalFSCloudBackupRestore_C7BE4F0A869F4392-002
   [junit4]   2> Jul 01, 2018 4:56:34 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 1 leaked 
thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{shard_s=PostingsFormat(name=Memory), id=Lucene50(blocksize=128)}, 
docValues:{}, maxPointsInLeafNode=1489, maxMBSortInHeap=6.2552758465144205, 
sim=RandomSimilarity(queryNorm=false): {}, locale=es-PR, 
timezone=America/Thunder_Bay
   [junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 10.0.1 
(64-bit)/cpus=8,threads=1,free=313460152,total=536870912
   [junit4]   2> NOTE: All tests run in this JVM: 
[TestLocalFSCloudBackupRestore, TestLocalFSCloudBackupRestore]
   [junit4] Completed [5/5 (5!)] on J2 in 11.89s, 1 test, 1 failure <<< 
FAILURES!

[...truncated 15 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:1568: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/common-build.xml:1092: 
There were test failures: 5 suites, 5 tests, 5 failures [seed: C7BE4F0A869F4392]

Total time: 33 seconds

[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: 
org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore
[repro] Exiting with code 256
+ mv lucene/build lucene/build.repro
+ mv solr/build solr/build.repro
+ mv lucene/build.orig lucene/build
+ mv solr/build.orig solr/build
Archiving artifacts
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
[WARNINGS] Parsing warnings in console log with parser Java Compiler (javac)
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
<Git Blamer> Using GitBlamer to create author and commit information for all 
warnings.
<Git Blamer> GIT_COMMIT=9a395f83ccd83bca568056f178757dd032007140, 
workspace=/var/lib/jenkins/workspace/Lucene-Solr-7.x-Linux
[WARNINGS] Computing warning deltas based on reference build #2236
Recording test results
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
