Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20359/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:44829/_jg/q/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in http://127.0.0.1:44829/_jg/q/collMinRf_1x3 due to: Path not found: /id; rsp={doc=null}
        at __randomizedtesting.SeedInfo.seed([168542E7C10F0BFB:9ED17D3D6FF36603]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
        at org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
        at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12273 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> 2035256 INFO  
(SUITE-HttpPartitionTest-seed#[168542E7C10F0BFB]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/init-core-data-001
   [junit4]   2> 2035257 INFO  
(SUITE-HttpPartitionTest-seed#[168542E7C10F0BFB]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 2035257 INFO  
(SUITE-HttpPartitionTest-seed#[168542E7C10F0BFB]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 2035257 INFO  
(SUITE-HttpPartitionTest-seed#[168542E7C10F0BFB]-worker) [    ] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /_jg/q
   [junit4]   2> 2035259 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2035259 INFO  (Thread-7733) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2035259 INFO  (Thread-7733) [    ] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2035260 ERROR (Thread-7733) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 2035359 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ZkTestServer start zk server on port:35735
   [junit4]   2> 2035366 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2035366 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2035367 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2035367 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2035368 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 2035368 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 2035368 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 2035369 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 2035369 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 2035369 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
 to /configs/conf1/old_synonyms.txt
   [junit4]   2> 2035370 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
 to /configs/conf1/synonyms.txt
   [junit4]   2> 2035370 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase Will use TLOG replicas unless explicitly 
asked otherwise
   [junit4]   2> 2035420 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
jetty-9.3.20.v20170531
   [junit4]   2> 2035421 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@33543f0a{/_jg/q,null,AVAILABLE}
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@1ff0ac5d{HTTP/1.1,[http/1.1]}{127.0.0.1:44969}
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
Started @2037473ms
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/tempDir-001/control/data,
 hostContext=/_jg/q, hostPort=44829, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/control-001/cores}
   [junit4]   2> 2035422 ERROR 
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config 
dir: null
   [junit4]   2> 2035422 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-08-22T18:25:36.544Z
   [junit4]   2> 2035426 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 2035426 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/control-001/solr.xml
   [junit4]   2> 2035429 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 2035430 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35735/solr
   [junit4]   2> 2035453 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 2035453 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.OverseerElectionContext I am going to 
be the leader 127.0.0.1:44829__jg%2Fq
   [junit4]   2> 2035453 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.Overseer Overseer 
(id=98528548381130756-127.0.0.1:44829__jg%2Fq-n_0000000000) starting
   [junit4]   2> 2035455 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:44829__jg%2Fq
   [junit4]   2> 2035455 INFO  
(zkCallback-3180-thread-1-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 2035494 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2035500 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2035500 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2035501 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/control-001/cores
   [junit4]   2> 2035512 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 2035513 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35735/solr ready
   [junit4]   2> 2035513 INFO  (SocketProxy-Acceptor-44829) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=41186,localport=44829], receiveBufferSize:531000
   [junit4]   2> 2035513 INFO  (SocketProxy-Acceptor-44829) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44969,localport=37452], receiveBufferSize=530904
   [junit4]   2> 2035513 INFO  (qtp1137177561-22639) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:44829__jg%252Fq&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 2035514 INFO  
(OverseerThreadFactory-8544-thread-1-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.CreateCollectionCmd Create collection 
control_collection
   [junit4]   2> 2035617 INFO  (SocketProxy-Acceptor-44829) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=41190,localport=44829], receiveBufferSize:531000
   [junit4]   2> 2035617 INFO  (SocketProxy-Acceptor-44829) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44969,localport=37456], receiveBufferSize=530904
   [junit4]   2> 2035617 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 2035617 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 2035720 INFO  
(zkCallback-3180-thread-1-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 2036625 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 2036634 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.IndexSchema [control_collection_shard1_replica_n1] Schema name=test
   [junit4]   2> 2036741 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 2036759 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' 
using configuration from collection control_collection, trusted=true
   [junit4]   2> 2036760 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.control_collection.shard1.replica_n1' (registry 
'solr.core.control_collection.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2036760 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 2036760 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore 
at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/control-001/cores/control_collection_shard1_replica_n1],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/control-001/cores/control_collection_shard1_replica_n1/data/]
   [junit4]   2> 2036764 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=14, maxMergeAtOnceExplicit=40, maxMergedSegmentMB=9.40625, 
floorSegmentMB=1.55078125, forceMergeDeletesPctAllowed=14.678975364956953, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.6553940157080618
   [junit4]   2> 2036768 WARN  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 2036814 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.UpdateLog
   [junit4]   2> 2036814 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 2036816 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 2036817 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 2036819 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=38, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.27233139108897464]
   [junit4]   2> 2036820 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@12f71bf4[control_collection_shard1_replica_n1] main]
   [junit4]   2> 2036821 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 2036821 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 2036822 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 2036822 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1576456775735967744
   [junit4]   2> 2036825 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 2036826 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 2036826 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:44829/_jg/q/control_collection_shard1_replica_n1/
   [junit4]   2> 2036826 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 2036826 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy 
http://127.0.0.1:44829/_jg/q/control_collection_shard1_replica_n1/ has no 
replicas
   [junit4]   2> 2036826 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 2036842 INFO  
(searcherExecutor-8547-thread-1-processing-n:127.0.0.1:44829__jg%2Fq 
x:control_collection_shard1_replica_n1 s:shard1 c:control_collection) 
[n:127.0.0.1:44829__jg%2Fq c:control_collection s:shard1  
x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1] Registered new searcher 
Searcher@12f71bf4[control_collection_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 2036843 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:44829/_jg/q/control_collection_shard1_replica_n1/ shard1
   [junit4]   2> 2036945 INFO  
(zkCallback-3180-thread-1-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 2036993 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 2037013 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=1395
   [junit4]   2> 2037015 INFO  (qtp1137177561-22639) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at 
most 30 seconds. Check all shard replicas
   [junit4]   2> 2037116 INFO  
(zkCallback-3180-thread-1-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 2037516 INFO  
(OverseerCollectionConfigSetProcessor-98528548381130756-127.0.0.1:44829__jg%2Fq-n_0000000000)
 [n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 2038016 INFO  (qtp1137177561-22639) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:44829__jg%252Fq&wt=javabin&version=2}
 status=0 QTime=2502
   [junit4]   2> 2038023 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 2038024 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35735/solr ready
   [junit4]   2> 2038024 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection 
loss:false
   [junit4]   2> 2038024 INFO  (SocketProxy-Acceptor-44829) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=41208,localport=44829], receiveBufferSize:531000
   [junit4]   2> 2038026 INFO  (SocketProxy-Acceptor-44829) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44969,localport=37474], receiveBufferSize=530904
   [junit4]   2> 2038026 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=2&createNodeSet=&stateFormat=2&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 2038028 INFO  
(OverseerThreadFactory-8544-thread-2-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.CreateCollectionCmd Create collection 
collection1
   [junit4]   2> 2038028 WARN  
(OverseerThreadFactory-8544-thread-2-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.CreateCollectionCmd It is unusual to 
create a collection (collection1) without cores.
   [junit4]   2> 2038230 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at 
most 30 seconds. Check all shard replicas
   [junit4]   2> 2038230 INFO  (qtp1137177561-22641) [n:127.0.0.1:44829__jg%2Fq 
   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=2&createNodeSet=&stateFormat=2&wt=javabin&version=2}
 status=0 QTime=204
   [junit4]   2> 2038322 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-1-001
 of type TLOG
   [junit4]   2> 2038323 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
jetty-9.3.20.v20170531
   [junit4]   2> 2038324 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@222c90a{/_jg/q,null,AVAILABLE}
   [junit4]   2> 2038324 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@55bd721a{HTTP/1.1,[http/1.1]}{127.0.0.1:44749}
   [junit4]   2> 2038324 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
Started @2040376ms
   [junit4]   2> 2038325 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/tempDir-001/jetty1,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/_jg/q, 
hostPort=46843, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-1-001/cores}
   [junit4]   2> 2038325 ERROR 
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2038325 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 2038325 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2038325 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config 
dir: null
   [junit4]   2> 2038325 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-08-22T18:25:39.447Z
   [junit4]   2> 2038327 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 2038327 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-1-001/solr.xml
   [junit4]   2> 2038331 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 2038334 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35735/solr
   [junit4]   2> 2038339 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 2038340 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 2038341 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:46843__jg%2Fq
   [junit4]   2> 2038342 INFO  
(zkCallback-3180-thread-2-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (1) -> (2)
   [junit4]   2> 2038342 INFO  (zkCallback-3187-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 2038342 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (1) -> (2)
   [junit4]   2> 2038392 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2038399 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2038399 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2038400 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-1-001/cores
   [junit4]   2> 2038415 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=45662,localport=46843], receiveBufferSize:531000
   [junit4]   2> 2038415 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44749,localport=42246], receiveBufferSize=530904
   [junit4]   2> 2038416 INFO  (qtp1353016925-22688) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with 
params 
node=127.0.0.1:46843__jg%252Fq&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 2038417 INFO  
(OverseerCollectionConfigSetProcessor-98528548381130756-127.0.0.1:44829__jg%2Fq-n_0000000000)
 [n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 2038417 INFO  
(OverseerThreadFactory-8544-thread-3-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.AddReplicaCmd Node Identified 
127.0.0.1:46843__jg%2Fq for creating new replica
   [junit4]   2> 2038417 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=45666,localport=46843], receiveBufferSize:531000
   [junit4]   2> 2038418 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44749,localport=42250], receiveBufferSize=530904
   [junit4]   2> 2038418 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t41&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 2038418 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 2038520 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 2039429 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 2039437 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.s.IndexSchema 
[collection1_shard2_replica_t41] Schema name=test
   [junit4]   2> 2039514 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 2039520 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard2_replica_t41' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 2039520 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard2.replica_t41' (registry 
'solr.core.collection1.shard2.replica_t41') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2039520 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 2039520 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SolrCore 
[[collection1_shard2_replica_t41] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-1-001/cores/collection1_shard2_replica_t41],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-1-001/cores/collection1_shard2_replica_t41/data/]
   [junit4]   2> 2039521 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=14, maxMergeAtOnceExplicit=40, maxMergedSegmentMB=9.40625, 
floorSegmentMB=1.55078125, forceMergeDeletesPctAllowed=14.678975364956953, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.6553940157080618
   [junit4]   2> 2039523 WARN  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 2039544 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 2039544 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 2039545 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 2039545 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 2039546 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=38, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.27233139108897464]
   [junit4]   2> 2039546 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@2198acaf[collection1_shard2_replica_t41] main]
   [junit4]   2> 2039547 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 2039547 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 2039547 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 2039548 INFO  
(searcherExecutor-8558-thread-1-processing-n:127.0.0.1:46843__jg%2Fq 
x:collection1_shard2_replica_t41 s:shard2 c:collection1) 
[n:127.0.0.1:46843__jg%2Fq c:collection1 s:shard2  
x:collection1_shard2_replica_t41] o.a.s.c.SolrCore 
[collection1_shard2_replica_t41] Registered new searcher 
Searcher@2198acaf[collection1_shard2_replica_t41] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 2039548 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1576456778594385920
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:46843/_jg/q/collection1_shard2_replica_t41/
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.SyncStrategy 
http://127.0.0.1:46843/_jg/q/collection1_shard2_replica_t41/ has no replicas
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 2039552 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.ZkController 
collection1_shard2_replica_t41 stopping background replication from leader
   [junit4]   2> 2039553 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:46843/_jg/q/collection1_shard2_replica_t41/ shard2
   [junit4]   2> 2039654 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 2039704 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 2039705 INFO  (qtp1353016925-22690) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t41] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t41&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1287
   [junit4]   2> 2039706 INFO  (qtp1353016925-22688) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:46843__jg%252Fq&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1290
   [junit4]   2> 2039765 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-2-001
 of type TLOG
   [junit4]   2> 2039766 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
jetty-9.3.20.v20170531
   [junit4]   2> 2039766 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@3562fa8d{/_jg/q,null,AVAILABLE}
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@2b37398a{HTTP/1.1,[http/1.1]}{127.0.0.1:41459}
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
Started @2041818ms
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/tempDir-001/jetty2,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/_jg/q, 
hostPort=36489, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-2-001/cores}
   [junit4]   2> 2039767 ERROR 
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config 
dir: null
   [junit4]   2> 2039767 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-08-22T18:25:40.889Z
   [junit4]   2> 2039769 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 2039769 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-2-001/solr.xml
   [junit4]   2> 2039771 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 2039773 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35735/solr
   [junit4]   2> 2039777 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (2)
   [junit4]   2> 2039778 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 2039778 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:36489__jg%2Fq
   [junit4]   2> 2039779 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (2) -> (3)
   [junit4]   2> 2039779 INFO  
(zkCallback-3180-thread-2-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (2) -> (3)
   [junit4]   2> 2039779 INFO  (zkCallback-3187-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 2039779 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (2) -> (3)
   [junit4]   2> 2039818 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2039854 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2039855 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2039856 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-2-001/cores
   [junit4]   2> 2039879 INFO  (SocketProxy-Acceptor-36489) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=56676,localport=36489], receiveBufferSize:531000
   [junit4]   2> 2039881 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 2039882 INFO  (SocketProxy-Acceptor-36489) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=41459,localport=41930], receiveBufferSize=530904
   [junit4]   2> 2039883 INFO  (qtp1264010015-22722) [n:127.0.0.1:36489__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with 
params 
node=127.0.0.1:36489__jg%252Fq&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 2039884 INFO  
(OverseerCollectionConfigSetProcessor-98528548381130756-127.0.0.1:44829__jg%2Fq-n_0000000000)
 [n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000004 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 2039884 INFO  
(OverseerThreadFactory-8544-thread-4-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.AddReplicaCmd Node Identified 
127.0.0.1:36489__jg%2Fq for creating new replica
   [junit4]   2> 2039886 INFO  (SocketProxy-Acceptor-36489) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=56680,localport=36489], receiveBufferSize:531000
   [junit4]   2> 2039886 INFO  (SocketProxy-Acceptor-36489) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=41459,localport=41934], receiveBufferSize=530904
   [junit4]   2> 2039887 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
   ] o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t43&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 2039888 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
   ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 2039994 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 2039994 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 2040898 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 2040908 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.s.IndexSchema 
[collection1_shard1_replica_t43] Schema name=test
   [junit4]   2> 2040989 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 2040998 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_t43' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 2040998 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_t43' (registry 
'solr.core.collection1.shard1.replica_t43') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2040998 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 2040998 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SolrCore 
[[collection1_shard1_replica_t43] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-2-001/cores/collection1_shard1_replica_t43],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-2-001/cores/collection1_shard1_replica_t43/data/]
   [junit4]   2> 2041000 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=14, maxMergeAtOnceExplicit=40, maxMergedSegmentMB=9.40625, 
floorSegmentMB=1.55078125, forceMergeDeletesPctAllowed=14.678975364956953, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.6553940157080618
   [junit4]   2> 2041002 WARN  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 2041026 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 2041026 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 2041027 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 2041027 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 2041028 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=38, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.27233139108897464]
   [junit4]   2> 2041028 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@6b6257ad[collection1_shard1_replica_t43] main]
   [junit4]   2> 2041029 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 2041029 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 2041030 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 2041030 INFO  
(searcherExecutor-8569-thread-1-processing-n:127.0.0.1:36489__jg%2Fq 
x:collection1_shard1_replica_t43 s:shard1 c:collection1) 
[n:127.0.0.1:36489__jg%2Fq c:collection1 s:shard1  
x:collection1_shard1_replica_t43] o.a.s.c.SolrCore 
[collection1_shard1_replica_t43] Registered new searcher 
Searcher@6b6257ad[collection1_shard1_replica_t43] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 2041031 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1576456780149424128
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:36489/_jg/q/collection1_shard1_replica_t43/
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.SyncStrategy 
http://127.0.0.1:36489/_jg/q/collection1_shard1_replica_t43/ has no replicas
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 2041034 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.ZkController 
collection1_shard1_replica_t43 stopping background replication from leader
   [junit4]   2> 2041035 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:36489/_jg/q/collection1_shard1_replica_t43/ shard1
   [junit4]   2> 2041137 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 2041137 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 2041186 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 2041187 INFO  (qtp1264010015-22724) [n:127.0.0.1:36489__jg%2Fq 
c:collection1 s:shard1  x:collection1_shard1_replica_t43] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t43&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1299
   [junit4]   2> 2041188 INFO  (qtp1264010015-22722) [n:127.0.0.1:36489__jg%2Fq 
   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:36489__jg%252Fq&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1304
   [junit4]   2> 2041244 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 3 in directory 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-3-001
 of type TLOG
   [junit4]   2> 2041244 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
jetty-9.3.20.v20170531
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@42d80f68{/_jg/q,null,AVAILABLE}
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@927dfe{HTTP/1.1,[http/1.1]}{127.0.0.1:44143}
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] o.e.j.s.Server 
Started @2043297ms
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/tempDir-001/jetty3,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/_jg/q, 
hostPort=40683, 
coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-3-001/cores}
   [junit4]   2> 2041245 ERROR 
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null, Default config 
dir: null
   [junit4]   2> 2041245 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-08-22T18:25:42.367Z
   [junit4]   2> 2041247 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 2041247 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-3-001/solr.xml
   [junit4]   2> 2041250 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 2041251 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35735/solr
   [junit4]   2> 2041255 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (3)
   [junit4]   2> 2041256 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 2041256 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40683__jg%2Fq
   [junit4]   2> 2041257 INFO  (zkCallback-3187-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 2041257 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (3) -> (4)
   [junit4]   2> 2041257 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (3) -> (4)
   [junit4]   2> 2041257 INFO  
(zkCallback-3180-thread-1-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (3) -> (4)
   [junit4]   2> 2041257 INFO  
(zkCallback-3204-thread-1-processing-n:127.0.0.1:40683__jg%2Fq) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (3) -> (4)
   [junit4]   2> 2041317 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2041324 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2041324 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2041325 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-3-001/cores
   [junit4]   2> 2041357 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2041357 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2041373 INFO  (qtp1264010015-22723) [n:127.0.0.1:36489__jg%2Fq 
   ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with 
params 
node=127.0.0.1:40683__jg%252Fq&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 2041374 INFO  
(OverseerCollectionConfigSetProcessor-98528548381130756-127.0.0.1:44829__jg%2Fq-n_0000000000)
 [n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000006 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 2041374 INFO  
(OverseerThreadFactory-8544-thread-5-processing-n:127.0.0.1:44829__jg%2Fq) 
[n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.AddReplicaCmd Node Identified 
127.0.0.1:40683__jg%2Fq for creating new replica
   [junit4]   2> 2041375 INFO  (SocketProxy-Acceptor-40683) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=49148,localport=40683], receiveBufferSize:531000
   [junit4]   2> 2041375 INFO  (SocketProxy-Acceptor-40683) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44143,localport=36544], receiveBufferSize=530904
   [junit4]   2> 2041375 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
   ] o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t45&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 2041376 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
   ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 2041478 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2041478 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2041478 INFO  
(zkCallback-3204-thread-1-processing-n:127.0.0.1:40683__jg%2Fq) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2042384 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 2042395 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.s.IndexSchema 
[collection1_shard2_replica_t45] Schema name=test
   [junit4]   2> 2042509 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 2042518 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard2_replica_t45' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 2042518 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard2.replica_t45' (registry 
'solr.core.collection1.shard2.replica_t45') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@410b3c5
   [junit4]   2> 2042519 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 2042519 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.SolrCore 
[[collection1_shard2_replica_t45] ] Opening new SolrCore at 
[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-3-001/cores/collection1_shard2_replica_t45],
 
dataDir=[/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.HttpPartitionTest_168542E7C10F0BFB-001/shard-3-001/cores/collection1_shard2_replica_t45/data/]
   [junit4]   2> 2042521 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=14, maxMergeAtOnceExplicit=40, maxMergedSegmentMB=9.40625, 
floorSegmentMB=1.55078125, forceMergeDeletesPctAllowed=14.678975364956953, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.6553940157080618]
   [junit4]   2> 2042523 WARN  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 2042553 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.UpdateLog
   [junit4]   2> 2042553 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 2042577 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 2042577 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 2042578 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: 
minMergeSize=1000, mergeFactor=38, maxMergeSize=9223372036854775807, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.27233139108897464]
   [junit4]   2> 2042579 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@5ff4980f[collection1_shard2_replica_t45] main]
   [junit4]   2> 2042581 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 2042581 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 2042582 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 2042583 INFO  
(searcherExecutor-8580-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2  
x:collection1_shard2_replica_t45] o.a.s.c.SolrCore 
[collection1_shard2_replica_t45] Registered new searcher 
Searcher@5ff4980f[collection1_shard2_replica_t45] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 2042583 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1576456781776814080
   [junit4]   2> 2042585 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.c.ZkController 
Core needs to recover:collection1_shard2_replica_t45
   [junit4]   2> 2042586 INFO  
(updateExecutor-3201-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 2042586 INFO  (qtp1238269002-22756) [n:127.0.0.1:40683__jg%2Fq 
c:collection1 s:shard2  x:collection1_shard2_replica_t45] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard2_replica_t45&action=CREATE&collection=collection1&shard=shard2&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1210
   [junit4]   2> 2042605 INFO  (qtp1264010015-22723) [n:127.0.0.1:36489__jg%2Fq 
   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:40683__jg%252Fq&action=ADDREPLICA&collection=collection1&shard=shard2&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1231
   [junit4]   2> 2042606 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Starting recovery 
process. recoveringAfterStartup=true
   [junit4]   2> 2042606 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy ###### 
startupVersions=[[]]
   [junit4]   2> 2042606 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.ZkController 
collection1_shard2_replica_t45 stopping background replication from leader
   [junit4]   2> 2042606 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Begin buffering 
updates. core=[collection1_shard2_replica_t45]
   [junit4]   2> 2042606 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.u.UpdateLog Starting to buffer updates. 
FSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 2042606 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Publishing state of 
core [collection1_shard2_replica_t45] as recovering, leader is 
[http://127.0.0.1:46843/_jg/q/collection1_shard2_replica_t41/] and I am 
[http://127.0.0.1:40683/_jg/q/collection1_shard2_replica_t45/]
   [junit4]   2> 2042611 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.SolrTestCaseJ4 ###Starting test
   [junit4]   2> 2042611 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase Wait for recoveries to finish - wait 
30000 for each attempt
   [junit4]   2> 2042611 INFO  
(TEST-HttpPartitionTest.test-seed#[168542E7C10F0BFB]) [    ] 
o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: 
collection1 failOnTimeout:true timeout (sec):30000
   [junit4]   2> 2042618 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Sending prep 
recovery command to [http://127.0.0.1:46843/_jg/q]; [WaitForState: 
action=PREPRECOVERY&core=collection1_shard2_replica_t41&nodeName=127.0.0.1:40683__jg%252Fq&coreNodeName=core_node46&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 2042618 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=45720,localport=46843], receiveBufferSize:531000
   [junit4]   2> 2042619 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44749,localport=42304], receiveBufferSize=530904
   [junit4]   2> 2042619 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node46, 
state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: 
true, maxTime: 183 s
   [junit4]   2> 2042619 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): 
collection=collection1, shard=shard2, thisCore=collection1_shard2_replica_t41, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=down, localState=active, nodeName=127.0.0.1:40683__jg%2Fq, 
coreNodeName=core_node46, onlyIfActiveCheckResult=false, nodeProps: 
core_node46:{"core":"collection1_shard2_replica_t45","base_url":"http://127.0.0.1:40683/_jg/q","node_name":"127.0.0.1:40683__jg%2Fq","state":"down","type":"TLOG"}
   [junit4]   2> 2042709 INFO  
(zkCallback-3192-thread-1-processing-n:127.0.0.1:46843__jg%2Fq) 
[n:127.0.0.1:46843__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2042709 INFO  
(zkCallback-3198-thread-1-processing-n:127.0.0.1:36489__jg%2Fq) 
[n:127.0.0.1:36489__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2042709 INFO  
(zkCallback-3204-thread-1-processing-n:127.0.0.1:40683__jg%2Fq) 
[n:127.0.0.1:40683__jg%2Fq    ] o.a.s.c.c.ZkStateReader A cluster state change: 
[WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 2043375 INFO  
(OverseerCollectionConfigSetProcessor-98528548381130756-127.0.0.1:44829__jg%2Fq-n_0000000000)
 [n:127.0.0.1:44829__jg%2Fq    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000008 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 2043619 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): 
collection=collection1, shard=shard2, thisCore=collection1_shard2_replica_t41, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=recovering, localState=active, nodeName=127.0.0.1:40683__jg%2Fq, 
coreNodeName=core_node46, onlyIfActiveCheckResult=false, nodeProps: 
core_node46:{"core":"collection1_shard2_replica_t45","base_url":"http://127.0.0.1:40683/_jg/q","node_name":"127.0.0.1:40683__jg%2Fq","state":"recovering","type":"TLOG"}
   [junit4]   2> 2043620 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.h.a.PrepRecoveryOp Waited coreNodeName: core_node46, state: 
recovering, checkLive: true, onlyIfLeader: true for: 1 seconds.
   [junit4]   2> 2043620 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={nodeName=127.0.0.1:40683__jg%252Fq&onlyIfLeaderActive=true&core=collection1_shard2_replica_t41&coreNodeName=core_node46&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=1000
   [junit4]   2> 2044120 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Starting Replication 
Recovery.
   [junit4]   2> 2044120 INFO  
(recoveryExecutor-3202-thread-1-processing-n:127.0.0.1:40683__jg%2Fq 
x:collection1_shard2_replica_t45 s:shard2 c:collection1 r:core_node46) 
[n:127.0.0.1:40683__jg%2Fq c:collection1 s:shard2 r:core_node46 
x:collection1_shard2_replica_t45] o.a.s.c.RecoveryStrategy Attempting to 
replicate from [http://127.0.0.1:46843/_jg/q/collection1_shard2_replica_t41/].
   [junit4]   2> 2044121 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy accepted 
Socket[addr=/127.0.0.1,port=45724,localport=46843], receiveBufferSize:531000
   [junit4]   2> 2044122 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1576456783390572544,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 2044122 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 2044122 INFO  (SocketProxy-Acceptor-46843) [    ] 
o.a.s.c.SocketProxy proxy connection 
Socket[addr=/127.0.0.1,port=44749,localport=42308], receiveBufferSize=530904
   [junit4]   2> 2044122 INFO  (qtp1353016925-22685) [n:127.0.0.1:46843__jg%2Fq 
c:collection1 s:shard2 r:core_node42 x:collection1_shard2_replica_t41] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 2044122 INFO  (qtp1353016925-22685) [n:127.0.0.

[...truncated too long message...]

sed
 [ecj-lint] ----------
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/handler/component/DistributedDebugComponentTest.java
 (at line 221)
 [ecj-lint]     throw new AssertionError(q.toString() + ": " + e.getMessage(), 
e);
 [ecj-lint]     
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'client' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 9. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 204)
 [ecj-lint]     Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint]              ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 10. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 207)
 [ecj-lint]     OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint]                             ^^^^
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] ----------
 [ecj-lint] 11. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 211)
 [ecj-lint]     Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]              ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 12. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/response/TestJavabinTupleStreamParser.java
 (at line 72)
 [ecj-lint]     JavabinTupleStreamParser parser = new 
JavabinTupleStreamParser(new ByteArrayInputStream(bytes), true);
 [ecj-lint]                              ^^^^^^
 [ecj-lint] Resource leak: 'parser' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 13. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 243)
 [ecj-lint]     return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new 
HashDocSet(a,0,n);
 [ecj-lint]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 14. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 528)
 [ecj-lint]     DocSet a = new BitDocSet(bs);
 [ecj-lint]            ^
 [ecj-lint] Resource leak: 'a' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 15. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java (at line 76)
 [ecj-lint]     final MockPKIAuthenticationPlugin mock = new MockPKIAuthenticationPlugin(null, nodeName);
 [ecj-lint]                                       ^^^^
 [ecj-lint] Resource leak: 'mock' is never closed
 [ecj-lint] ----------
 [ecj-lint] 16. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java (at line 131)
 [ecj-lint]     MockPKIAuthenticationPlugin mock1 = new MockPKIAuthenticationPlugin(null, nodeName) {
 [ecj-lint]                                 ^^^^^
 [ecj-lint] Resource leak: 'mock1' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 17. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestRuleBasedAuthorizationPlugin.java (at line 380)
 [ecj-lint]     RuleBasedAuthorizationPlugin plugin = new RuleBasedAuthorizationPlugin();
 [ecj-lint]                                  ^^^^^^
 [ecj-lint] Resource leak: 'plugin' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 18. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/security/TestSha256AuthenticationProvider.java (at line 49)
 [ecj-lint]     BasicAuthPlugin basicAuthPlugin = new BasicAuthPlugin();
 [ecj-lint]                     ^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'basicAuthPlugin' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 19. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/spelling/SimpleQueryConverter.java (at line 42)
 [ecj-lint]     WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
 [ecj-lint]                        ^^^^^^^^
 [ecj-lint] Resource leak: 'analyzer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 20. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java (at line 139)
 [ecj-lint]     IndexWriter w = new IndexWriter(d, newIndexWriterConfig(analyzer));
 [ecj-lint]                 ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 21. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java (at line 172)
 [ecj-lint]     throw iae;
 [ecj-lint]     ^^^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 22. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java (at line 178)
 [ecj-lint]     return;
 [ecj-lint]     ^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
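Warnings 21 and 22 are flow-analysis variants: ecj flags the `throw iae;` and `return;` statements because a live `IndexWriter` can escape unclosed along those exits. A minimal sketch of the usual fix, closing the resource on the exceptional path before propagating (`FakeWriter` and `openAndCheck` are hypothetical stand-ins, not the actual TestFieldCacheVsDocValues code):

```java
class CloseOnErrorDemo {
    // Stand-in for a Closeable resource such as IndexWriter.
    static class FakeWriter implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // On success the caller takes ownership of the writer; on the
    // exceptional path we close it before rethrowing, so ecj sees
    // every exit either transfer or release the resource.
    static FakeWriter openAndCheck(boolean fail) {
        FakeWriter w = new FakeWriter();
        try {
            if (fail) {
                throw new IllegalArgumentException("simulated failure");
            }
            return w; // caller now owns w
        } catch (IllegalArgumentException iae) {
            w.close(); // was the leak: rethrowing without this trips the warning
            throw iae;
        }
    }

    public static void main(String[] args) {
        FakeWriter ok = openAndCheck(false);
        System.out.println("open=" + !ok.closed);
        ok.close();
        try {
            openAndCheck(true);
        } catch (IllegalArgumentException expected) {
            System.out.println("caught=" + expected.getMessage());
        }
    }
}
```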
 [ecj-lint] 23. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 134)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 24. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 333)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 25. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 367)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 26. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 413)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 27. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 458)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 28. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java (at line 516)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 29. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrIndexSplitterTest.java (at line 181)
 [ecj-lint]     EmbeddedSolrServer server1 = new EmbeddedSolrServer(h.getCoreContainer(), "split1");
 [ecj-lint]                        ^^^^^^^
 [ecj-lint] Resource leak: 'server1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 30. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/SolrIndexSplitterTest.java (at line 182)
 [ecj-lint]     EmbeddedSolrServer server2 = new EmbeddedSolrServer(h.getCoreContainer(), "split2");
 [ecj-lint]                        ^^^^^^^
 [ecj-lint] Resource leak: 'server2' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 31. WARNING in /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/test/org/apache/solr/update/processor/RecordingUpdateProcessorFactory.java (at line 67)
 [ecj-lint]     return recording ? new RecordingUpdateRequestProcessor(commandQueue, next) : next;
 [ecj-lint]                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 31 problems (1 error, 30 warnings)
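Most of the warnings above follow one pattern: an `AutoCloseable` (plugin, analyzer, server, distributor) is constructed in a test and `close()` is never reached on every path. A minimal sketch of the standard try-with-resources fix, using a hypothetical `FakePlugin` in place of the real closeable classes named in the log:

```java
class LeakFixDemo {
    // Hypothetical stand-in for a closeable test object such as
    // MockPKIAuthenticationPlugin or EmbeddedSolrServer.
    static class FakePlugin implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        // Before: `FakePlugin mock = new FakePlugin();` with no matching
        // close() is what triggers "Resource leak: 'mock' is never closed".
        FakePlugin observed;
        try (FakePlugin mock = new FakePlugin()) {
            observed = mock;
            // ... exercise the plugin here ...
        } // close() is guaranteed here, on both normal and exceptional exit
        System.out.println("closed=" + observed.closed);
    }
}
```

Where the object intentionally outlives the method (as some of these tests may require), registering it for cleanup in tearDown instead of try-with-resources also silences ecj's analysis.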

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:810: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build.xml:689: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2013: The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2046: Compile failed; see the compiler error output for details.

Total time: 79 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]