Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20326/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:38751/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in http://127.0.0.1:38751/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}
        at __randomizedtesting.SeedInfo.seed([EF1DFB4F6CFCBE47:6749C495C200D3BF]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
        at org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
        at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)
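The master seed `EF1DFB4F6CFCBE47` in the stack trace is what the randomizedtesting runner uses to replay this exact randomization. A minimal sketch of re-running the failing test with that seed, assembled from the seed and test name above using the Lucene/Solr ant conventions (the exact reproduce line emitted by this build is in the truncated log below and may carry additional flags such as `-Dtests.locale` or `-Dtests.timezone`, which are not shown in this report):

```shell
# Hypothetical reproduce command, built from the seed/test name in this
# report; run from solr/core in a Lucene-Solr checkout. Extra flags from
# the build's real reproduce line (locale, timezone, etc.) are omitted
# because they do not appear here.
SEED=EF1DFB4F6CFCBE47
REPRO="ant test -Dtestcase=HttpPartitionTest -Dtests.method=test -Dtests.seed=$SEED"
echo "$REPRO"
```

Note that HttpPartitionTest simulates network partitions with SocketProxy, so failures like this one often do not reproduce deterministically even under the same seed.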




Build Log:
[...truncated 12440 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> 1660859 INFO  (SUITE-HttpPartitionTest-seed#[EF1DFB4F6CFCBE47]-worker) [    ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/init-core-data-001
   [junit4]   2> 1660860 WARN  (SUITE-HttpPartitionTest-seed#[EF1DFB4F6CFCBE47]-worker) [    ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=102 numCloses=102
   [junit4]   2> 1660860 INFO  (SUITE-HttpPartitionTest-seed#[EF1DFB4F6CFCBE47]-worker) [    ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1660861 INFO  (SUITE-HttpPartitionTest-seed#[EF1DFB4F6CFCBE47]-worker) [    ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 1660861 INFO  (SUITE-HttpPartitionTest-seed#[EF1DFB4F6CFCBE47]-worker) [    ] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1660864 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1660864 INFO  (Thread-3757) [    ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1660864 INFO  (Thread-3757) [    ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1660866 ERROR (Thread-3757) [    ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1660964 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.ZkTestServer start zk server on port:37885
   [junit4]   2> 1660981 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 1660983 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
   [junit4]   2> 1660984 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1660985 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
   [junit4]   2> 1660986 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
   [junit4]   2> 1660987 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
   [junit4]   2> 1660988 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
   [junit4]   2> 1660989 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 1660990 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 1660990 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
   [junit4]   2> 1660991 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
   [junit4]   2> 1660993 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractFullDistribZkTestBase Will use TLOG replicas unless explicitly asked otherwise
   [junit4]   2> 1661063 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1661064 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@4a3c5693{/,null,AVAILABLE}
   [junit4]   2> 1661064 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.AbstractConnector Started ServerConnector@1763b40d{HTTP/1.1,[http/1.1]}{127.0.0.1:46735}
   [junit4]   2> 1661065 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server Started @1662598ms
   [junit4]   2> 1661065 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/tempDir-001/control/data, replicaType=NRT, hostContext=/, hostPort=38751, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/control-001/cores}
   [junit4]   2> 1661065 ERROR (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 1661065 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.0.0
   [junit4]   2> 1661065 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1661065 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config dir: null
   [junit4]   2> 1661065 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2017-08-16T07:17:45.694Z
   [junit4]   2> 1661067 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 1661067 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/control-001/solr.xml
   [junit4]   2> 1661070 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 1661071 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37885/solr
   [junit4]   2> 1661093 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1661093 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:38751_
   [junit4]   2> 1661093 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.c.Overseer Overseer (id=98491948433014788-127.0.0.1:38751_-n_0000000000) starting
   [junit4]   2> 1661096 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:38751_
   [junit4]   2> 1661097 INFO  (zkCallback-2136-thread-1-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1661189 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1661199 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1661199 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1661200 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:38751_    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/control-001/cores
   [junit4]   2> 1661220 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1661220 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37885/solr ready
   [junit4]   2> 1661221 INFO  (SocketProxy-Acceptor-38751) [    ] o.a.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=42294,localport=38751], receiveBufferSize:531000
   [junit4]   2> 1661221 INFO  (SocketProxy-Acceptor-38751) [    ] o.a.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=46735,localport=37426], receiveBufferSize=530904
   [junit4]   2> 1661223 INFO  (qtp229601067-15111) [n:127.0.0.1:38751_    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:38751_&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 1661226 INFO  (OverseerThreadFactory-7803-thread-1-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.CreateCollectionCmd Create collection control_collection
   [junit4]   2> 1661328 INFO  (SocketProxy-Acceptor-38751) [    ] o.a.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=42298,localport=38751], receiveBufferSize:531000
   [junit4]   2> 1661328 INFO  (SocketProxy-Acceptor-38751) [    ] o.a.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=46735,localport=37430], receiveBufferSize=530904
   [junit4]   2> 1661329 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_    ] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 1661329 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
   [junit4]   2> 1661432 INFO  (zkCallback-2136-thread-1-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 1662340 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1662352 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.IndexSchema [control_collection_shard1_replica_n1] Schema name=test
   [junit4]   2> 1662445 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1662452 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' using configuration from collection control_collection, trusted=true
   [junit4]   2> 1662452 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.control_collection.shard1.replica_n1' (registry 'solr.core.control_collection.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1662452 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 1662453 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/control-001/cores/control_collection_shard1_replica_n1], dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/control-001/cores/control_collection_shard1_replica_n1/data/]
   [junit4]   2> 1662454 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=21, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0]
   [junit4]   2> 1662455 WARN  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
   [junit4]   2> 1662475 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 1662475 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1662476 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 1662476 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 1662476 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=12, maxMergeAtOnceExplicit=14, maxMergedSegmentMB=47.0576171875, floorSegmentMB=0.84375, forceMergeDeletesPctAllowed=17.341642163683865, segmentsPerTier=35.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7773620120575228
   [junit4]   2> 1662477 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@3fad4db6[control_collection_shard1_replica_n1] main]
   [junit4]   2> 1662477 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
   [junit4]   2> 1662477 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1662478 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 1662478 INFO  (searcherExecutor-7806-thread-1-processing-n:127.0.0.1:38751_ x:control_collection_shard1_replica_n1 s:shard1 c:control_collection) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore [control_collection_shard1_replica_n1] Registered new searcher Searcher@3fad4db6[control_collection_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1662478 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1575871176619589632
   [junit4]   2> 1662482 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1662482 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1662482 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:38751/control_collection_shard1_replica_n1/
   [junit4]   2> 1662482 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 1662482 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:38751/control_collection_shard1_replica_n1/ has no replicas
   [junit4]   2> 1662482 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Found all replicas participating in election, clear LIR
   [junit4]   2> 1662483 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:38751/control_collection_shard1_replica_n1/ shard1
   [junit4]   2> 1662584 INFO  (zkCallback-2136-thread-1-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 1662633 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1662635 INFO  (qtp229601067-15113) [n:127.0.0.1:38751_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=1306
   [junit4]   2> 1662639 INFO  (qtp229601067-15111) [n:127.0.0.1:38751_    ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 30 seconds. Check all shard replicas
   [junit4]   2> 1662737 INFO  (zkCallback-2136-thread-2-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 1663229 INFO  (OverseerCollectionConfigSetProcessor-98491948433014788-127.0.0.1:38751_-n_0000000000) [n:127.0.0.1:38751_    ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may have disconnected from ZooKeeper
   [junit4]   2> 1663639 INFO  (qtp229601067-15111) [n:127.0.0.1:38751_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:38751_&wt=javabin&version=2} status=0 QTime=2416
   [junit4]   2> 1663643 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1663643 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37885/solr ready
   [junit4]   2> 1663643 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 1663644 INFO  (SocketProxy-Acceptor-38751) [    ] o.a.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=42346,localport=38751], receiveBufferSize:531000
   [junit4]   2> 1663644 INFO  (SocketProxy-Acceptor-38751) [    ] o.a.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=46735,localport=37478], receiveBufferSize=530904
   [junit4]   2> 1663644 INFO  (qtp229601067-15111) [n:127.0.0.1:38751_    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=2&createNodeSet=&stateFormat=2&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 1663646 INFO  (OverseerThreadFactory-7803-thread-2-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.CreateCollectionCmd Create collection collection1
   [junit4]   2> 1663646 WARN  (OverseerThreadFactory-7803-thread-2-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    ] o.a.s.c.CreateCollectionCmd It is unusual to create a collection (collection1) without cores.
   [junit4]   2> 1663849 INFO  (qtp229601067-15111) [n:127.0.0.1:38751_    ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 30 seconds. Check all shard replicas
   [junit4]   2> 1663850 INFO  (qtp229601067-15111) [n:127.0.0.1:38751_    ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=2&createNodeSet=&stateFormat=2&wt=javabin&version=2} status=0 QTime=205
   [junit4]   2> 1663912 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-1-001 of type TLOG
   [junit4]   2> 1663912 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server jetty-9.3.14.v20161028
   [junit4]   2> 1663913 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@5b084a76{/,null,AVAILABLE}
   [junit4]   2> 1663913 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.AbstractConnector Started ServerConnector@198d37d{HTTP/1.1,[http/1.1]}{127.0.0.1:34617}
   [junit4]   2> 1663914 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server Started @1665447ms
   [junit4]   2> 1663914 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/tempDir-001/jetty1, replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/, hostPort=42499, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-1-001/cores}
   [junit4]   2> 1663914 ERROR (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 1663914 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.0.0
   [junit4]   2> 1663914 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1663914 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config dir: null
   [junit4]   2> 1663914 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2017-08-16T07:17:48.543Z
   [junit4]   2> 1663916 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 1663916 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-1-001/solr.xml
   [junit4]   2> 1663918 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 1663920 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37885/solr
   [junit4]   2> 1663924 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1663924 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1663925 INFO  (TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:42499_
   [junit4]   2> 1663925 INFO  
(zkCallback-2136-thread-2-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1663926 INFO  (zkCallback-2143-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1663927 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1663966 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1663972 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1663972 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1663973 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:42499_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-1-001/cores
   [junit4]   2> 1663987 INFO  (qtp229601067-15112) [n:127.0.0.1:38751_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:42499_&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1663988 INFO  
(OverseerCollectionConfigSetProcessor-98491948433014788-127.0.0.1:38751_-n_0000000000)
 [n:127.0.0.1:38751_    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1663989 INFO  
(OverseerThreadFactory-7803-thread-3-processing-n:127.0.0.1:38751_) 
[n:127.0.0.1:38751_    ] o.a.s.c.AddReplicaCmd Node Identified 127.0.0.1:42499_ 
for creating new replica
   [junit4]   2> 1663989 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=40810,localport=42499], receiveBufferSize:531000
   [junit4]   2> 1663990 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=34617,localport=41478], receiveBufferSize=530904
   [junit4]   2> 1663990 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t41&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 1663990 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 1664093 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1665022 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1665033 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.s.IndexSchema 
[collection1_shard2_replica_t41] Schema name=test
   [junit4]   2> 1665128 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1665149 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard2_replica_t41' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 1665149 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard2.replica_t41' (registry 
'solr.core.collection1.shard2.replica_t41') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1665149 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 1665149 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrCore 
[[collection1_shard2_replica_t41] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-1-001/cores/collection1_shard2_replica_t41],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-1-001/cores/collection1_shard2_replica_t41/data/]
   [junit4]   2> 1665151 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=21, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=1.0]
   [junit4]   2> 1665153 WARN  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 1665181 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 1665181 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1665182 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 1665182 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 1665183 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=12, maxMergeAtOnceExplicit=14, maxMergedSegmentMB=47.0576171875, 
floorSegmentMB=0.84375, forceMergeDeletesPctAllowed=17.341642163683865, 
segmentsPerTier=35.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.7773620120575228
   [junit4]   2> 1665183 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@57987d32[collection1_shard2_replica_t41] main]
   [junit4]   2> 1665183 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1665184 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1665184 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 1665184 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1575871179457036288
   [junit4]   2> 1665185 INFO  
(searcherExecutor-7817-thread-1-processing-n:127.0.0.1:42499_ 
x:collection1_shard2_replica_t41 s:shard2 c:collection1) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrCore 
[collection1_shard2_replica_t41] Registered new searcher 
Searcher@57987d32[collection1_shard2_replica_t41] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:42499/collection1_shard2_replica_t41/
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SyncStrategy 
http://127.0.0.1:42499/collection1_shard2_replica_t41/ has no replicas
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1665188 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.ZkController 
collection1_shard2_replica_t41 stopping background replication from leader
   [junit4]   2> 1665189 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:42499/collection1_shard2_replica_t41/ shard2
   [junit4]   2> 1665291 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 1665340 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 1665341 INFO  (qtp1023703421-15160) [n:127.0.0.1:42499_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t41&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1351
   [junit4]   2> 1665342 INFO  (qtp229601067-15112) [n:127.0.0.1:38751_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:42499_&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1355
   [junit4]   2> 1665399 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-2-001
 of type TLOG
   [junit4]   2> 1665399 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 1665400 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5c38117f{/,null,AVAILABLE}
   [junit4]   2> 1665400 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@25a1338a{HTTP/1.1,[http/1.1]}{127.0.0.1:32983}
   [junit4]   2> 1665400 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server 
Started @1666933ms
   [junit4]   2> 1665401 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/tempDir-001/jetty2,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/, hostPort=40835, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-2-001/cores}
   [junit4]   2> 1665401 ERROR 
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1665401 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 1665401 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1665401 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config 
dir: null
   [junit4]   2> 1665401 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-08-16T07:17:50.030Z
   [junit4]   2> 1665403 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 1665403 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-2-001/solr.xml
   [junit4]   2> 1665405 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 1665407 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37885/solr
   [junit4]   2> 1665411 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 1665412 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1665412 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40835_
   [junit4]   2> 1665412 INFO  (zkCallback-2143-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1665412 INFO  
(zkCallback-2136-thread-2-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1665412 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1665416 INFO  
(zkCallback-2154-thread-1-processing-n:127.0.0.1:40835_) [n:127.0.0.1:40835_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1665481 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1665487 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1665487 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1665488 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:40835_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-2-001/cores
   [junit4]   2> 1665506 INFO  (SocketProxy-Acceptor-40835) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=44180,localport=40835], receiveBufferSize:531000
   [junit4]   2> 1665507 INFO  (SocketProxy-Acceptor-40835) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=32983,localport=43420], receiveBufferSize=530904
   [junit4]   2> 1665507 INFO  (qtp1186991680-15193) [n:127.0.0.1:40835_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:40835_&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1665508 INFO  
(OverseerCollectionConfigSetProcessor-98491948433014788-127.0.0.1:38751_-n_0000000000)
 [n:127.0.0.1:38751_    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000004 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1665508 INFO  
(OverseerThreadFactory-7803-thread-4-processing-n:127.0.0.1:38751_) 
[n:127.0.0.1:38751_    ] o.a.s.c.AddReplicaCmd Node Identified 127.0.0.1:40835_ 
for creating new replica
   [junit4]   2> 1665509 INFO  (SocketProxy-Acceptor-40835) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=44184,localport=40835], receiveBufferSize:531000
   [junit4]   2> 1665509 INFO  (SocketProxy-Acceptor-40835) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=32983,localport=43424], receiveBufferSize=530904
   [junit4]   2> 1665509 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t43&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 1665509 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 1665612 INFO  
(zkCallback-2154-thread-1-processing-n:127.0.0.1:40835_) [n:127.0.0.1:40835_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 1665612 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 1666520 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1666550 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.s.IndexSchema 
[collection1_shard1_replica_t43] Schema name=test
   [junit4]   2> 1666644 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1666652 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_t43' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 1666653 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_t43' (registry 
'solr.core.collection1.shard1.replica_t43') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1666653 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 1666653 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrCore 
[[collection1_shard1_replica_t43] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-2-001/cores/collection1_shard1_replica_t43],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-2-001/cores/collection1_shard1_replica_t43/data/]
   [junit4]   2> 1666655 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=21, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=1.0]
   [junit4]   2> 1666656 WARN  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 1666688 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 1666688 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1666689 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 1666689 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 1666690 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=12, maxMergeAtOnceExplicit=14, maxMergedSegmentMB=47.0576171875, 
floorSegmentMB=0.84375, forceMergeDeletesPctAllowed=17.341642163683865, 
segmentsPerTier=35.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.7773620120575228
   [junit4]   2> 1666690 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@4ea8ee2b[collection1_shard1_replica_t43] main]
   [junit4]   2> 1666706 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1666706 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1666707 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 1666708 INFO  
(searcherExecutor-7828-thread-1-processing-n:127.0.0.1:40835_ 
x:collection1_shard1_replica_t43 s:shard1 c:collection1) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrCore 
[collection1_shard1_replica_t43] Registered new searcher 
Searcher@4ea8ee2b[collection1_shard1_replica_t43] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1666708 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1575871181055066112
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:40835/collection1_shard1_replica_t43/
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SyncStrategy 
http://127.0.0.1:40835/collection1_shard1_replica_t43/ has no replicas
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1666712 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.ZkController 
collection1_shard1_replica_t43 stopping background replication from leader
   [junit4]   2> 1666713 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:40835/collection1_shard1_replica_t43/ shard1
   [junit4]   2> 1666815 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 1666815 INFO  
(zkCallback-2154-thread-1-processing-n:127.0.0.1:40835_) [n:127.0.0.1:40835_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 1666864 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 1666866 INFO  (qtp1186991680-15194) [n:127.0.0.1:40835_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t43&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1356
   [junit4]   2> 1666868 INFO  (qtp1186991680-15193) [n:127.0.0.1:40835_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:40835_&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1361
   [junit4]   2> 1666943 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 3 in directory 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-3-001
 of type TLOG
   [junit4]   2> 1666944 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 1666945 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@37167598{/,null,AVAILABLE}
   [junit4]   2> 1666945 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@408dc3e4{HTTP/1.1,[http/1.1]}{127.0.0.1:34181}
   [junit4]   2> 1666946 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] o.e.j.s.Server 
Started @1668478ms
   [junit4]   2> 1666946 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/tempDir-001/jetty3,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/, hostPort=33121, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-3-001/cores}
   [junit4]   2> 1666946 ERROR 
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1666946 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 1666946 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1666946 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config 
dir: null
   [junit4]   2> 1666946 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-08-16T07:17:51.575Z
   [junit4]   2> 1666947 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 1666948 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-3-001/solr.xml
   [junit4]   2> 1666950 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 1666952 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:37885/solr
   [junit4]   2> 1666956 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
   [junit4]   2> 1666956 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1666957 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:33121_
   [junit4]   2> 1666957 INFO  
(zkCallback-2136-thread-2-processing-n:127.0.0.1:38751_) [n:127.0.0.1:38751_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 1666957 INFO  (zkCallback-2143-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 1666958 INFO  
(zkCallback-2160-thread-1-processing-n:127.0.0.1:33121_) [n:127.0.0.1:33121_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 1666958 INFO  
(zkCallback-2154-thread-1-processing-n:127.0.0.1:40835_) [n:127.0.0.1:40835_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 1666957 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 1666987 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1666993 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1666993 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1666994 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [n:127.0.0.1:33121_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-3-001/cores
   [junit4]   2> 1667018 INFO  (qtp1186991680-15192) [n:127.0.0.1:40835_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:33121_&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1667019 INFO  
(OverseerCollectionConfigSetProcessor-98491948433014788-127.0.0.1:38751_-n_0000000000)
 [n:127.0.0.1:38751_    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000006 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1667020 INFO  
(OverseerThreadFactory-7803-thread-5-processing-n:127.0.0.1:38751_) 
[n:127.0.0.1:38751_    ] o.a.s.c.AddReplicaCmd Node Identified 127.0.0.1:33121_ 
for creating new replica
   [junit4]   2> 1667020 INFO  (SocketProxy-Acceptor-33121) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=35800,localport=33121], receiveBufferSize:531000
   [junit4]   2> 1667020 INFO  (SocketProxy-Acceptor-33121) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=34181,localport=51294], receiveBufferSize=530904
   [junit4]   2> 1667021 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t45&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 1667021 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 1667123 INFO  
(zkCallback-2154-thread-1-processing-n:127.0.0.1:40835_) [n:127.0.0.1:40835_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 1667123 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 1667123 INFO  
(zkCallback-2160-thread-1-processing-n:127.0.0.1:33121_) [n:127.0.0.1:33121_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 1668032 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1668042 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.s.IndexSchema 
[collection1_shard2_replica_t45] Schema name=test
   [junit4]   2> 1668155 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1668163 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard2_replica_t45' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 1668163 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard2.replica_t45' (registry 
'solr.core.collection1.shard2.replica_t45') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@136cfe03
   [junit4]   2> 1668164 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 1668164 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrCore 
[[collection1_shard2_replica_t45] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-3-001/cores/collection1_shard2_replica_t45],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_EF1DFB4F6CFCBE47-001/shard-3-001/cores/collection1_shard2_replica_t45/data/]
   [junit4]   2> 1668165 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=21, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=1.0]
   [junit4]   2> 1668167 WARN  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 1668196 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 1668196 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1668197 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 1668197 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 1668198 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=12, maxMergeAtOnceExplicit=14, maxMergedSegmentMB=47.0576171875, 
floorSegmentMB=0.84375, forceMergeDeletesPctAllowed=17.341642163683865, 
segmentsPerTier=35.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.7773620120575228
   [junit4]   2> 1668198 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@45b6c049[collection1_shard2_replica_t45] main]
   [junit4]   2> 1668198 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1668199 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1668199 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 1668199 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1575871182618492928
   [junit4]   2> 1668200 INFO  
(searcherExecutor-7839-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrCore 
[collection1_shard2_replica_t45] Registered new searcher 
Searcher@45b6c049[collection1_shard2_replica_t45] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1668202 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.ZkController 
Core needs to recover:collection1_shard2_replica_t45
   [junit4]   2> 1668203 INFO  
(updateExecutor-2157-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1) [n:127.0.0.1:33121_ 
c:collection1 s:shard2 r:core_node46 x:collection1_shard2_replica_t45] 
o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 1668203 INFO  (qtp1818456044-15226) [n:127.0.0.1:33121_ 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t45&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1182
   [junit4]   2> 1668203 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Starting recovery 
process. recoveringAfterStartup=true
   [junit4]   2> 1668203 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy ###### 
startupVersions=[[]]
   [junit4]   2> 1668203 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.ZkController 
collection1_shard2_replica_t45 stopping background replication from leader
   [junit4]   2> 1668203 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Begin buffering 
updates. core=[collection1_shard2_replica_t45]
   [junit4]   2> 1668203 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.u.UpdateLog Starting to buffer updates. 
FSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 1668203 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Publishing state of 
core [collection1_shard2_replica_t45] as recovering, leader is 
[http://127.0.0.1:42499/collection1_shard2_replica_t41/] and I am 
[http://127.0.0.1:33121/collection1_shard2_replica_t45/]
   [junit4]   2> 1668204 INFO  (qtp1186991680-15192) [n:127.0.0.1:40835_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:33121_&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1186
   [junit4]   2> 1668204 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Sending prep 
recovery command to [http://127.0.0.1:42499]; [WaitForState: 
action=PREPRECOVERY&core=collection1_shard2_replica_t41&nodeName=127.0.0.1:33121_&coreNodeName=core_node46&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 1668205 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=40874,localport=42499], receiveBufferSize:531000
   [junit4]   2> 1668205 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=34617,localport=41542], receiveBufferSize=530904
   [junit4]   2> 1668205 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_    ] 
o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node46, state: 
recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true, 
maxTime: 183 s
   [junit4]   2> 1668205 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_    ] 
o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, 
shard=shard2, thisCore=collection1_shard2_replica_t41, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=down, localState=active, nodeName=127.0.0.1:33121_, 
coreNodeName=core_node46, onlyIfActiveCheckResult=false, nodeProps: 
core_node46:{"core":"collection1_shard2_replica_t45","base_url":"http://127.0.0.1:33121","node_name":"127.0.0.1:33121_","state":"down","type":"TLOG"}
   [junit4]   2> 1668206 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 1668206 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase Wait for recoveries to finish - wait 
30000 for each attempt
   [junit4]   2> 1668206 INFO  
(TEST-HttpPartitionTest.test-seed#[EF1DFB4F6CFCBE47]) [    ] 
o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: 
collection1 failOnTimeout:true timeout (sec):30000
   [junit4]   2> 1668305 INFO  
(zkCallback-2160-thread-1-processing-n:127.0.0.1:33121_) [n:127.0.0.1:33121_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 1668306 INFO  
(zkCallback-2148-thread-1-processing-n:127.0.0.1:42499_) [n:127.0.0.1:42499_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 1668306 INFO  
(zkCallback-2154-thread-1-processing-n:127.0.0.1:40835_) [n:127.0.0.1:40835_    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 1669021 INFO  
(OverseerCollectionConfigSetProcessor-98491948433014788-127.0.0.1:38751_-n_0000000000)
 [n:127.0.0.1:38751_    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000008 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1669206 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_    ] 
o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, 
shard=shard2, thisCore=collection1_shard2_replica_t41, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=recovering, localState=active, nodeName=127.0.0.1:33121_, 
coreNodeName=core_node46, onlyIfActiveCheckResult=false, nodeProps: 
core_node46:{"core":"collection1_shard2_replica_t45","base_url":"http://127.0.0.1:33121","node_name":"127.0.0.1:33121_","state":"recovering","type":"TLOG"}
   [junit4]   2> 1669206 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_    ] 
o.a.s.h.a.PrepRecoveryOp Waited coreNodeName: core_node46, state: recovering, 
checkLive: true, onlyIfLeader: true for: 1 seconds.
   [junit4]   2> 1669206 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={nodeName=127.0.0.1:33121_&onlyIfLeaderActive=true&core=collection1_shard2_replica_t41&coreNodeName=core_node46&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=1000
   [junit4]   2> 1669706 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Starting Replication 
Recovery.
   [junit4]   2> 1669706 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Attempting to 
replicate from [http://127.0.0.1:42499/collection1_shard2_replica_t41/].
   [junit4]   2> 1669708 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=40894,localport=42499], receiveBufferSize:531000
   [junit4]   2> 1669708 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=34617,localport=41562], receiveBufferSize=530904
   [junit4]   2> 1669708 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_ 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1575871184200794112,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 1669708 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_ 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 1669708 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_ 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 1669708 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_ 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard2_replica_t41]  webapp= 
path=/update 
params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&commit_end_point=true&wt=javabin&version=2}{commit=}
 0 0
   [junit4]   2> 1669709 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=40898,localport=42499], receiveBufferSize:531000
   [junit4]   2> 1669710 INFO  (SocketProxy-Acceptor-42499) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=34617,localport=41566], receiveBufferSize=530904
   [junit4]   2> 1669710 INFO  (qtp1023703421-15156) [n:127.0.0.1:42499_ 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.c.S.Request [collection1_shard2_replica_t41]  webapp= path=/replication 
params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 
QTime=0
   [junit4]   2> 1669710 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.h.IndexFetcher Master's generation: 1
   [junit4]   2> 1669710 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.h.IndexFetcher Master's version: 0
   [junit4]   2> 1669710 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.h.IndexFetcher Slave's generation: 1
   [junit4]   2> 1669710 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.h.IndexFetcher Slave's version: 0
   [junit4]   2> 1669710 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Replication Recovery 
was successful.
   [junit4]   2> 1669710 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Registering as 
Active after recovery.
   [junit4]   2> 1669711 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.ZkController 
collection1_shard2_replica_t45 starting background replication from leader
   [junit4]   2> 1669711 INFO  
(recoveryExecutor-2158-thread-1-processing-n:127.0.0.1:33121_ 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:33121_ c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.ReplicateFromLeader Will start 
replication 

[...truncated too long message...]

 'a1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 207)
 [ecj-lint]     OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint]                             ^^^^
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] ----------
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 211)
 [ecj-lint]     Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]              ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 11. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/metrics/reporters/SolrJmxReporterCloudTest.java
 (at line 20)
 [ecj-lint]     import javax.management.MBeanServerFactory;
 [ecj-lint]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import javax.management.MBeanServerFactory is never used
 [ecj-lint] ----------
 [ecj-lint] 12. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/metrics/reporters/SolrJmxReporterCloudTest.java
 (at line 39)
 [ecj-lint]     import org.junit.AfterClass;
 [ecj-lint]            ^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import org.junit.AfterClass is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 13. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/response/TestJavabinTupleStreamParser.java
 (at line 72)
 [ecj-lint]     JavabinTupleStreamParser parser = new 
JavabinTupleStreamParser(new ByteArrayInputStream(bytes), true);
 [ecj-lint]                              ^^^^^^
 [ecj-lint] Resource leak: 'parser' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 14. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 243)
 [ecj-lint]     return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new 
HashDocSet(a,0,n);
 [ecj-lint]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 15. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 528)
 [ecj-lint]     DocSet a = new BitDocSet(bs);
 [ecj-lint]            ^
 [ecj-lint] Resource leak: 'a' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 16. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java (at line 76)
 [ecj-lint]     final MockPKIAuthenticationPlugin mock = new MockPKIAuthenticationPlugin(null, nodeName);
 [ecj-lint]                                       ^^^^
 [ecj-lint] Resource leak: 'mock' is never closed
 [ecj-lint] ----------
 [ecj-lint] 17. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java (at line 131)
 [ecj-lint]     MockPKIAuthenticationPlugin mock1 = new MockPKIAuthenticationPlugin(null, nodeName) {
 [ecj-lint]                                 ^^^^^
 [ecj-lint] Resource leak: 'mock1' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 18. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestRuleBasedAuthorizationPlugin.java (at line 380)
 [ecj-lint]     RuleBasedAuthorizationPlugin plugin = new RuleBasedAuthorizationPlugin();
 [ecj-lint]                                  ^^^^^^
 [ecj-lint] Resource leak: 'plugin' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 19. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestSha256AuthenticationProvider.java (at line 49)
 [ecj-lint]     BasicAuthPlugin basicAuthPlugin = new BasicAuthPlugin();
 [ecj-lint]                     ^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'basicAuthPlugin' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 20. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/spelling/SimpleQueryConverter.java (at line 42)
 [ecj-lint]     WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
 [ecj-lint]                        ^^^^^^^^
 [ecj-lint] Resource leak: 'analyzer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 21. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java (at line 139)
 [ecj-lint]     IndexWriter w = new IndexWriter(d, newIndexWriterConfig(analyzer));
 [ecj-lint]                 ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 22. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java (at line 172)
 [ecj-lint]     throw iae;
 [ecj-lint]     ^^^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 23. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java (at line 178)
 [ecj-lint]     return;
 [ecj-lint]     ^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 24. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 134)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 25. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 333)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 26. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 367)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 27. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 413)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 28. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 458)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 29. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 516)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 30. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrIndexSplitterTest.java (at line 181)
 [ecj-lint]     EmbeddedSolrServer server1 = new EmbeddedSolrServer(h.getCoreContainer(), "split1");
 [ecj-lint]                        ^^^^^^^
 [ecj-lint] Resource leak: 'server1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 31. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrIndexSplitterTest.java (at line 182)
 [ecj-lint]     EmbeddedSolrServer server2 = new EmbeddedSolrServer(h.getCoreContainer(), "split2");
 [ecj-lint]                        ^^^^^^^
 [ecj-lint] Resource leak: 'server2' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 32. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/processor/RecordingUpdateProcessorFactory.java (at line 67)
 [ecj-lint]     return recording ? new RecordingUpdateRequestProcessor(commandQueue, next) : next;
 [ecj-lint]                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 32 problems (2 errors, 30 warnings)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:810: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build.xml:689: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2013: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2046: Compile failed; see the compiler error output for details.

Total time: 74 minutes 44 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
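Nearly all of the ecj-lint warnings above are the same "Resource leak: '...' is never closed" pattern on a test-local Closeable. A minimal sketch of the usual fix, try-with-resources, using a hypothetical stand-in class rather than the actual Solr types (SolrCmdDistributor, EmbeddedSolrServer, etc.), since the exact cleanup each test needs may differ:

```java
// Sketch only: FakeDistributor is a stand-in for the Closeable test
// resources flagged above (e.g. SolrCmdDistributor, EmbeddedSolrServer).
public class ResourceLeakFixSketch {

    static class FakeDistributor implements AutoCloseable {
        boolean closed = false;
        void distribute() { /* do work; may throw */ }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        // Before (flagged): the resource is constructed but close() is
        // never reached on exception paths.
        // After: try-with-resources guarantees close() on every exit path,
        // which is exactly what silences this ecj-lint warning.
        FakeDistributor observed;
        try (FakeDistributor cmdDistrib = new FakeDistributor()) {
            cmdDistrib.distribute();
            observed = cmdDistrib;
        }
        System.out.println(observed.closed); // prints "true"
    }
}
```

For the test classes above, wrapping the construction site in try-with-resources (or closing the resource in an @After/finally block when its lifetime spans the whole test) should clear the warning without changing test behavior.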