Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2246/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseG1GC
1 tests failed.
FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSynced node did not become leader expected:<CloudJettyRunner [url=https://127.0.0.1:42944/r_hy/collection1]> but was:<CloudJettyRunner [url=https://127.0.0.1:38852/r_hy/collection1]>

Stack Trace:
java.lang.AssertionError: PeerSynced node did not become leader expected:<CloudJettyRunner [url=https://127.0.0.1:42944/r_hy/collection1]> but was:<CloudJettyRunner [url=https://127.0.0.1:38852/r_hy/collection1]>
    at __randomizedtesting.SeedInfo.seed([BE6E9861DFC86AF9:363AA7BB71340701]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:154)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 10801 lines...]
[junit4] Suite: org.apache.solr.cloud.PeerSyncReplicationTest
[junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/init-core-data-001
[junit4] 2> 32834 INFO (SUITE-PeerSyncReplicationTest-seed#[BE6E9861DFC86AF9]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
[junit4] 2> 32835 INFO (SUITE-PeerSyncReplicationTest-seed#[BE6E9861DFC86AF9]-worker) [ ] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /r_hy/
[junit4] 2> 32837 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4] 2> 32837 INFO (Thread-44) [ ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
[junit4] 2> 32837 INFO (Thread-44) [ ] o.a.s.c.ZkTestServer Starting server
[junit4] 2> 32937 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkTestServer start zk server on port:36121
[junit4] 2> 33018 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
[junit4] 2> 33020 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
[junit4] 2> 33031 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
[junit4] 2> 33032 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
[junit4] 2> 33033 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
[junit4] 2> 33034 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
[junit4] 2> 33034 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
[junit4] 2> 33035 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
[junit4] 2> 33036 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
[junit4] 2> 33037 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
[junit4] 2> 33038 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractZkTestCase put /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
[junit4] 2> 33173 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.SolrTestCaseJ4 Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/control-001/cores/collection1
[junit4] 2> 33175 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server jetty-9.3.8.v20160314
[junit4] 2> 33176 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@faaeb7{/r_hy,null,AVAILABLE}
[junit4] 2> 33188 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.ServerConnector Started ServerConnector@1472a5b{SSL,[ssl, http/1.1]}{127.0.0.1:37772}
[junit4] 2> 33188 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server Started @35040ms
[junit4] 2> 33188 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/tempDir-001/control/data, hostContext=/r_hy, hostPort=37772, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/control-001/cores}
[junit4] 2> 33188 ERROR (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 33189 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 6.4.0
[junit4] 2> 33189 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 33189 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 33189 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2016-11-23T02:18:56.422Z
[junit4] 2> 33191 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
[junit4] 2> 33192 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/control-001/solr.xml
[junit4] 2> 33199 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true
[junit4] 2> 33199 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:36121/solr
[junit4] 2> 33276 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:37772_r_hy ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:37772_r_hy
[junit4] 2> 33277 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:37772_r_hy ] o.a.s.c.Overseer Overseer (id=96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) starting
[junit4] 2> 33284 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:37772_r_hy ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:37772_r_hy
[junit4] 2> 33291 INFO (OverseerStateUpdate-96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) [n:127.0.0.1:37772_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 33369 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:37772_r_hy ] o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/control-001/cores
[junit4] 2> 33370 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:37772_r_hy ] o.a.s.c.CorePropertiesLocator Cores are: [collection1]
[junit4] 2> 33374 INFO (OverseerStateUpdate-96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) [n:127.0.0.1:37772_r_hy ] o.a.s.c.o.ReplicaMutator Assigning new node to shard shard=shard1
[junit4] 2> 34404 WARN (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection x:collection1] o.a.s.c.Config Beginning with Solr 5.5, <mergePolicy> is deprecated, use <mergePolicyFactory> instead.
[junit4] 2> 34405 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection x:collection1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 6.4.0
[junit4] 2> 34429 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection x:collection1] o.a.s.s.IndexSchema [collection1] Schema name=test
[junit4] 2> 34644 WARN (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection x:collection1] o.a.s.s.IndexSchema [collection1] default search field in schema is text. WARNING: Deprecated, please use 'df' on request instead.
[junit4] 2> 34646 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection x:collection1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
[junit4] 2> 34658 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection x:collection1] o.a.s.c.CoreContainer Creating SolrCore 'collection1' using configuration from collection control_collection
[junit4] 2> 34659 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.SolrCore [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/control-001/cores/collection1], dataDir=[/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/control-001/cores/collection1/data/]
[junit4] 2> 34659 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.JmxMonitoredMap JMX monitoring is enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@1c213b0
[junit4] 2> 34662 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=32, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0]
[junit4] 2> 34772 WARN (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,args = {defaults={a=A,b=B}}}
[junit4] 2> 34779 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
[junit4] 2> 34780 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=1000 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 34789 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 34789 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 34790 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=30, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.3264736313076797]
[junit4] 2> 34791 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.s.SolrIndexSearcher Opening [Searcher@1b90316[collection1] main]
[junit4] 2> 34793 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 34793 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 34794 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000
[junit4] 2> 34795 INFO (searcherExecutor-152-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.SolrCore [collection1] Registered new searcher Searcher@1b90316[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 34795 INFO (coreLoadExecutor-151-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1551753583555248128
[junit4] 2> 34803 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
[junit4] 2> 34803 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 34803 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:37772/r_hy/collection1/
[junit4] 2> 34803 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 34803 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.SyncStrategy https://127.0.0.1:37772/r_hy/collection1/ has no replicas
[junit4] 2> 34808 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:37772/r_hy/collection1/ shard1
[junit4] 2> 34878 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 34879 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection loss:false
[junit4] 2> 34879 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase Creating collection1 with stateFormat=2
[junit4] 2> 34909 INFO (coreZkRegister-144-thread-1-processing-n:127.0.0.1:37772_r_hy x:collection1 s:shard1 c:control_collection r:core_node1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.ZkController I am the leader, no recovery necessary
[junit4] 2> 35009 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.SolrTestCaseJ4 Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001/cores/collection1
[junit4] 2> 35010 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001
[junit4] 2> 35011 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server jetty-9.3.8.v20160314
[junit4] 2> 35012 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@851328{/r_hy,null,AVAILABLE}
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.ServerConnector Started ServerConnector@17c348f{SSL,[ssl, http/1.1]}{127.0.0.1:38852}
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server Started @36867ms
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/tempDir-001/jetty1, solrconfig=solrconfig.xml, hostContext=/r_hy, hostPort=38852, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001/cores}
[junit4] 2> 35015 ERROR (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 6.4.0
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 35015 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2016-11-23T02:18:58.248Z
[junit4] 2> 35019 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
[junit4] 2> 35019 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001/solr.xml
[junit4] 2> 35025 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true
[junit4] 2> 35025 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:36121/solr
[junit4] 2> 35038 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 35041 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:38852_r_hy ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:38852_r_hy
[junit4] 2> 35042 INFO (zkCallback-47-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 35042 INFO (zkCallback-43-thread-1-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 35042 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 35088 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:38852_r_hy ] o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001/cores
[junit4] 2> 35088 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:38852_r_hy ] o.a.s.c.CorePropertiesLocator Cores are: [collection1]
[junit4] 2> 35097 INFO (OverseerStateUpdate-96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) [n:127.0.0.1:37772_r_hy ] o.a.s.c.o.ReplicaMutator Assigning new node to shard shard=shard1
[junit4] 2> 35206 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [2])
[junit4] 2> 36119 WARN (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 x:collection1] o.a.s.c.Config Beginning with Solr 5.5, <mergePolicy> is deprecated, use <mergePolicyFactory> instead.
[junit4] 2> 36120 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 x:collection1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 6.4.0
[junit4] 2> 36137 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema [collection1] Schema name=test
[junit4] 2> 36381 WARN (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema [collection1] default search field in schema is text. WARNING: Deprecated, please use 'df' on request instead.
[junit4] 2> 36385 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
[junit4] 2> 36402 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 x:collection1] o.a.s.c.CoreContainer Creating SolrCore 'collection1' using configuration from collection collection1
[junit4] 2> 36403 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.SolrCore [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001/cores/collection1], dataDir=[/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-1-001/cores/collection1/data/]
[junit4] 2> 36403 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.JmxMonitoredMap JMX monitoring is enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@1c213b0
[junit4] 2> 36405 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=32, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0]
[junit4] 2> 36452 WARN (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,args = {defaults={a=A,b=B}}}
[junit4] 2> 36466 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
[junit4] 2> 36466 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=1000 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 36467 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 36467 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 36468 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=30, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.3264736313076797]
[junit4] 2> 36469 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.s.SolrIndexSearcher Opening [Searcher@1c261fa[collection1] main]
[junit4] 2> 36470 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 36471 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 36471 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000
[junit4] 2> 36473 INFO (searcherExecutor-163-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.SolrCore [collection1] Registered new searcher Searcher@1c261fa[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 36474 INFO (coreLoadExecutor-162-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1551753585315807232
[junit4] 2> 36490 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
[junit4] 2> 36490 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 36490 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.SyncStrategy Sync replicas to https://127.0.0.1:38852/r_hy/collection1/
[junit4] 2> 36490 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 36490 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.SyncStrategy https://127.0.0.1:38852/r_hy/collection1/ has no replicas
[junit4] 2> 36521 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext I am the new leader: https://127.0.0.1:38852/r_hy/collection1/ shard1
[junit4] 2> 36624 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [2])
[junit4] 2> 36672 INFO (coreZkRegister-157-thread-1-processing-n:127.0.0.1:38852_r_hy x:collection1 s:shard1 c:collection1 r:core_node1) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.ZkController I am the leader, no recovery necessary
[junit4] 2> 36788 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [2])
[junit4] 2> 36807 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.SolrTestCaseJ4 Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001/cores/collection1
[junit4] 2> 36809 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001
[junit4] 2> 36811 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server jetty-9.3.8.v20160314
[junit4] 2> 36812 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@701b0b{/r_hy,null,AVAILABLE}
[junit4] 2> 36815 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.ServerConnector Started ServerConnector@1ba606e{SSL,[ssl, http/1.1]}{127.0.0.1:42944}
[junit4] 2> 36815 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server Started @38667ms
[junit4] 2> 36815 INFO
(TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/tempDir-001/jetty2, solrconfig=solrconfig.xml, hostContext=/r_hy, hostPort=42944, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001/cores} [junit4] 2> 36815 ERROR (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete. [junit4] 2> 36815 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 6.4.0 [junit4] 2> 36815 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null [junit4] 2> 36815 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null [junit4] 2> 36815 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2016-11-23T02:19:00.048Z [junit4] 2> 36840 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper) [junit4] 2> 36840 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001/solr.xml [junit4] 2> 36848 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) 
[ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true [junit4] 2> 36848 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:36121/solr [junit4] 2> 36867 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2) [junit4] 2> 36872 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:42944_r_hy ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:42944_r_hy [junit4] 2> 36873 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 36873 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 36873 INFO (zkCallback-47-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 36873 INFO (zkCallback-43-thread-2-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 36977 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 37052 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:42944_r_hy ] o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001/cores [junit4] 2> 37052 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:42944_r_hy ] o.a.s.c.CorePropertiesLocator Cores are: [collection1] [junit4] 2> 37061 INFO (OverseerStateUpdate-96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) [n:127.0.0.1:37772_r_hy ] o.a.s.c.o.ReplicaMutator Assigning new node to shard shard=shard1 [junit4] 2> 37188 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [3]) [junit4] 2> 37188 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [3]) [junit4] 2> 38076 WARN (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 x:collection1] o.a.s.c.Config Beginning with Solr 5.5, <mergePolicy> is deprecated, use <mergePolicyFactory> instead. 
[junit4] 2> 38076 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 x:collection1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 6.4.0 [junit4] 2> 38088 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema [collection1] Schema name=test [junit4] 2> 38194 WARN (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema [collection1] default search field in schema is text. WARNING: Deprecated, please use 'df' on request instead. [junit4] 2> 38197 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id [junit4] 2> 38209 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 x:collection1] o.a.s.c.CoreContainer Creating SolrCore 'collection1' using configuration from collection collection1 [junit4] 2> 38209 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.SolrCore [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001/cores/collection1], dataDir=[/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/../../../../../../../../../home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-2-001/cores/collection1/data/] [junit4] 2> 38210 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.JmxMonitoredMap JMX monitoring is 
enabled. Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@1c213b0 [junit4] 2> 38220 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=32, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0] [junit4] 2> 38297 WARN (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,args = {defaults={a=A,b=B}}} [junit4] 2> 38306 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog [junit4] 2> 38306 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=1000 maxNumLogsToKeep=10 numVersionBuckets=65536 [junit4] 2> 38315 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.CommitTracker Hard AutoCommit: disabled [junit4] 2> 38315 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.CommitTracker Soft AutoCommit: disabled [junit4] 2> 38316 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) 
[n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=30, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.3264736313076797] [junit4] 2> 38316 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.s.SolrIndexSearcher Opening [Searcher@106c7f7[collection1] main] [junit4] 2> 38317 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1 [junit4] 2> 38318 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1 [junit4] 2> 38318 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000 [junit4] 2> 38321 INFO (searcherExecutor-174-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.SolrCore [collection1] Registered new searcher Searcher@106c7f7[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())} [junit4] 2> 38321 INFO (coreLoadExecutor-173-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.UpdateLog Could not find 
max version in index or recent updates, using new clock 1551753587252527104 [junit4] 2> 38324 INFO (coreZkRegister-168-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.ZkController Core needs to recover:collection1 [junit4] 2> 38325 INFO (updateExecutor-56-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DefaultSolrCoreState Running recovery [junit4] 2> 38330 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true [junit4] 2> 38331 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy ###### startupVersions=[[]] [junit4] 2> 38331 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1] [junit4] 2> 38331 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.UpdateLog Starting to buffer updates. 
FSUpdateLog{state=ACTIVE, tlog=null} [junit4] 2> 38331 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Publishing state of core [collection1] as recovering, leader is [https://127.0.0.1:38852/r_hy/collection1/] and I am [https://127.0.0.1:42944/r_hy/collection1/] [junit4] 2> 38338 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Sending prep recovery command to [https://127.0.0.1:38852/r_hy]; [WaitForState: action=PREPRECOVERY&core=collection1&nodeName=127.0.0.1:42944_r_hy&coreNodeName=core_node2&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true] [junit4] 2> 38424 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node2, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true [junit4] 2> 38425 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp Will wait a max of 183 seconds to see collection1 (shard1 of collection1) have state: recovering [junit4] 2> 38425 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1, leaderDoesNotNeedRecovery=false, isLeader? 
true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:42944_r_hy, coreNodeName=core_node2, onlyIfActiveCheckResult=false, nodeProps: core_node2:{"core":"collection1","base_url":"https://127.0.0.1:42944/r_hy","node_name":"127.0.0.1:42944_r_hy","state":"down"} [junit4] 2> 38442 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [3]) [junit4] 2> 38442 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [3]) [junit4] 2> 38710 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.SolrTestCaseJ4 Writing core.properties file to /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001/cores/collection1 [junit4] 2> 38710 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 3 in directory /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001 [junit4] 2> 38712 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server jetty-9.3.8.v20160314 [junit4] 2> 38713 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@15f9260{/r_hy,null,AVAILABLE} [junit4] 2> 38736 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.ServerConnector Started 
ServerConnector@1e01044{SSL,[ssl, http/1.1]}{127.0.0.1:33902} [junit4] 2> 38736 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.Server Started @40588ms [junit4] 2> 38736 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/tempDir-001/jetty3, solrconfig=solrconfig.xml, hostContext=/r_hy, hostPort=33902, coreRootDirectory=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001/cores} [junit4] 2> 38736 ERROR (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete. [junit4] 2> 38736 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 6.4.0 [junit4] 2> 38737 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null [junit4] 2> 38737 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null [junit4] 2> 38740 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2016-11-23T02:19:01.973Z [junit4] 2> 38744 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper) [junit4] 2> 38744 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001/solr.xml [junit4] 2> 38749 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=340000&connTimeout=45000&retry=true [junit4] 2> 38750 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:36121/solr [junit4] 2> 38760 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:33902_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3) [junit4] 2> 38765 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:33902_r_hy ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:33902_r_hy [junit4] 2> 38779 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4) [junit4] 2> 38779 INFO (zkCallback-47-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4) [junit4] 2> 38780 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4) [junit4] 2> 38780 INFO (zkCallback-66-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4) [junit4] 2> 38780 INFO (zkCallback-43-thread-2-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... 
(3) -> (4) [junit4] 2> 38885 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 38885 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 38902 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:33902_r_hy ] o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001/cores [junit4] 2> 38902 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [n:127.0.0.1:33902_r_hy ] o.a.s.c.CorePropertiesLocator Cores are: [collection1] [junit4] 2> 38904 INFO (OverseerStateUpdate-96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) [n:127.0.0.1:37772_r_hy ] o.a.s.c.o.ReplicaMutator Assigning new node to shard shard=shard1 [junit4] 2> 39005 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 39005 INFO (zkCallback-66-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [4]) [junit4] 2> 39005 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 39425 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:42944_r_hy, coreNodeName=core_node2, onlyIfActiveCheckResult=false, nodeProps: core_node2:{"core":"collection1","base_url":"https://127.0.0.1:42944/r_hy","node_name":"127.0.0.1:42944_r_hy","state":"recovering"} [junit4] 2> 39425 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp Waited coreNodeName: core_node2, state: recovering, checkLive: true, onlyIfLeader: true for: 1 seconds. [junit4] 2> 39425 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:42944_r_hy&onlyIfLeaderActive=true&core=collection1&coreNodeName=core_node2&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=1001 [junit4] 2> 39915 WARN (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 x:collection1] o.a.s.c.Config Beginning with Solr 5.5, <mergePolicy> is deprecated, use <mergePolicyFactory> instead. 
[junit4] 2> 39917 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 x:collection1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 6.4.0 [junit4] 2> 39944 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema [collection1] Schema name=test [junit4] 2> 40088 WARN (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema [collection1] default search field in schema is text. WARNING: Deprecated, please use 'df' on request instead. [junit4] 2> 40091 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 x:collection1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id [junit4] 2> 40104 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 x:collection1] o.a.s.c.CoreContainer Creating SolrCore 'collection1' using configuration from collection collection1 [junit4] 2> 40105 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.SolrCore [[collection1] ] Opening new SolrCore at [/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001/cores/collection1], dataDir=[/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001/shard-3-001/cores/collection1/data/] [junit4] 2> 40105 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.JmxMonitoredMap JMX monitoring is enabled. 
Adding Solr mbeans to JMX Server: com.sun.jmx.mbeanserver.JmxMBeanServer@1c213b0 [junit4] 2> 40112 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=32, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0] [junit4] 2> 40187 WARN (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,args = {defaults={a=A,b=B}}} [junit4] 2> 40201 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog [junit4] 2> 40201 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=1000 maxNumLogsToKeep=10 numVersionBuckets=65536 [junit4] 2> 40212 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.CommitTracker Hard AutoCommit: disabled [junit4] 2> 40212 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.CommitTracker Soft AutoCommit: disabled [junit4] 2> 40213 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) 
[n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=30, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.3264736313076797] [junit4] 2> 40215 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.s.SolrIndexSearcher Opening [Searcher@17ec670[collection1] main] [junit4] 2> 40217 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1 [junit4] 2> 40217 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1 [junit4] 2> 40217 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000 [junit4] 2> 40220 INFO (searcherExecutor-185-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.SolrCore [collection1] Registered new searcher Searcher@17ec670[collection1] main{ExitableDirectoryReader(UninvertingDirectoryReader())} [junit4] 2> 40220 INFO (coreLoadExecutor-184-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.UpdateLog Could not find 
max version in index or recent updates, using new clock 1551753589243772928 [junit4] 2> 40225 INFO (coreZkRegister-179-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.ZkController Core needs to recover:collection1 [junit4] 2> 40226 INFO (updateExecutor-63-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DefaultSolrCoreState Running recovery [junit4] 2> 40226 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true [junit4] 2> 40226 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy ###### startupVersions=[[]] [junit4] 2> 40227 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1] [junit4] 2> 40227 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.UpdateLog Starting to buffer updates. 
FSUpdateLog{state=ACTIVE, tlog=null} [junit4] 2> 40227 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Publishing state of core [collection1] as recovering, leader is [https://127.0.0.1:38852/r_hy/collection1/] and I am [https://127.0.0.1:33902/r_hy/collection1/] [junit4] 2> 40230 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Sending prep recovery command to [https://127.0.0.1:38852/r_hy]; [WaitForState: action=PREPRECOVERY&core=collection1&nodeName=127.0.0.1:33902_r_hy&coreNodeName=core_node3&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true] [junit4] 2> 40247 INFO (qtp5302139-265) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node3, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true [junit4] 2> 40247 INFO (qtp5302139-265) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp Will wait a max of 183 seconds to see collection1 (shard1 of collection1) have state: recovering [junit4] 2> 40247 INFO (qtp5302139-265) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1, leaderDoesNotNeedRecovery=false, isLeader? 
true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:33902_r_hy, coreNodeName=core_node3, onlyIfActiveCheckResult=false, nodeProps: core_node3:{"core":"collection1","base_url":"https://127.0.0.1:33902/r_hy","node_name":"127.0.0.1:33902_r_hy","state":"down"} [junit4] 2> 40342 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 40342 INFO (zkCallback-66-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 40343 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [4]) [junit4] 2> 40405 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.SolrTestCaseJ4 ###Starting test [junit4] 2> 40405 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase Wait for recoveries to finish - wait 30 for each attempt [junit4] 2> 40405 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: collection1 failOnTimeout:true timeout (sec):30 [junit4] 2> 41249 INFO (qtp5302139-265) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:33902_r_hy, coreNodeName=core_node3, onlyIfActiveCheckResult=false, nodeProps: core_node3:{"core":"collection1","base_url":"https://127.0.0.1:33902/r_hy","node_name":"127.0.0.1:33902_r_hy","state":"recovering"} [junit4] 2> 41249 INFO (qtp5302139-265) [n:127.0.0.1:38852_r_hy ] o.a.s.h.a.PrepRecoveryOp Waited coreNodeName: core_node3, state: recovering, checkLive: true, onlyIfLeader: true for: 1 seconds. 
[junit4] 2> 41249 INFO (qtp5302139-265) [n:127.0.0.1:38852_r_hy ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:33902_r_hy&onlyIfLeaderActive=true&core=collection1&coreNodeName=core_node3&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=1002 [junit4] 2> 46427 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [https://127.0.0.1:38852/r_hy/collection1/] - recoveringAfterStartup=[true] [junit4] 2> 46431 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.PeerSync PeerSync: core=collection1 url=https://127.0.0.1:42944/r_hy START replicas=[https://127.0.0.1:38852/r_hy/collection1/] nUpdates=1000 [junit4] 2> 46450 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.IndexFingerprint IndexFingerprint millis:4.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 46450 INFO (qtp5302139-264) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.S.Request [collection1] webapp=/r_hy path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=7 [junit4] 2> 46452 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, 
numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy PeerSync stage of recovery was successful. [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Replaying updates buffered during PeerSync. 
[junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy No replay needed. [junit4] 2> 46453 INFO (recoveryExecutor-57-thread-1-processing-n:127.0.0.1:42944_r_hy x:collection1 s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Registering as Active after recovery. [junit4] 2> 46456 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 46456 INFO (zkCallback-66-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 46456 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [4]) [junit4] 2> 48252 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [https://127.0.0.1:38852/r_hy/collection1/] - recoveringAfterStartup=[true] [junit4] 2> 48252 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.PeerSync PeerSync: core=collection1 url=https://127.0.0.1:33902/r_hy START replicas=[https://127.0.0.1:38852/r_hy/collection1/] nUpdates=1000 [junit4] 2> 48261 INFO (qtp5302139-267) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 48261 INFO (qtp5302139-267) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.S.Request [collection1] webapp=/r_hy path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.PeerSync We are already in sync. 
No need to do a PeerSync [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy PeerSync stage of recovery was successful. [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Replaying updates buffered during PeerSync. [junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy No replay needed. 
[junit4] 2> 48263 INFO (recoveryExecutor-64-thread-1-processing-n:127.0.0.1:33902_r_hy x:collection1 s:shard1 c:collection1 r:core_node3) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.c.RecoveryStrategy Registering as Active after recovery. [junit4] 2> 48265 INFO (zkCallback-59-thread-1-processing-n:127.0.0.1:42944_r_hy) [n:127.0.0.1:42944_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 48265 INFO (zkCallback-66-thread-1-processing-n:127.0.0.1:33902_r_hy) [n:127.0.0.1:33902_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 48265 INFO (zkCallback-53-thread-1-processing-n:127.0.0.1:38852_r_hy) [n:127.0.0.1:38852_r_hy ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [4]) [junit4] 2> 48410 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.AbstractDistribZkTestBase Recoveries finished - collection: collection1 [junit4] 2> 48492 INFO (qtp15887303-226) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 48493 INFO (qtp15887303-226) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. 
[junit4] 2> 48493 INFO (qtp15887303-226) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 48493 INFO (qtp15887303-226) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.u.p.LogUpdateProcessorFactory [collection1] webapp=/r_hy path=/update params={waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 5 [junit4] 2> 48535 INFO (qtp5302139-270) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 48536 INFO (qtp5302139-270) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. [junit4] 2> 48545 INFO (qtp5302139-270) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 48545 INFO (qtp5302139-270) [n:127.0.0.1:38852_r_hy c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.p.LogUpdateProcessorFactory [collection1] webapp=/r_hy path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:38852/r_hy/collection1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 10 [junit4] 2> 48605 INFO (qtp4947881-292) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 48606 INFO (qtp4947881-292) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. 
[junit4] 2> 48606 INFO (qtp4947881-292) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 48606 INFO (qtp4947881-292) [n:127.0.0.1:42944_r_hy c:collection1 s:shard1 r:core_node2 x:collection1] o.a.s.u.p.LogUpdateProcessorFactory [collection1] webapp=/r_hy path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=https://127.0.0.1:38852/r_hy/collection1/&commit_end_point=true&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 1 [junit4] 2> 48607 INFO (qtp14059052-327) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DirectUpdateHandler2 start commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 48607 INFO (qtp14059052-327) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. [junit4] 2> 48607 INFO (qtp14059052-327) [n:127.0.0.1:33902_r_hy c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit [...truncated too long message...] 
[junit4] 1> "core_node1":{
[junit4] 1> "core":"collection1",
[junit4] 1> "base_url":"https://127.0.0.1:38852/r_hy",
[junit4] 1> "node_name":"127.0.0.1:38852_r_hy",
[junit4] 1> "state":"down",
[junit4] 1> "leader":"true"},
[junit4] 1> "core_node2":{
[junit4] 1> "core":"collection1",
[junit4] 1> "base_url":"https://127.0.0.1:42944/r_hy",
[junit4] 1> "node_name":"127.0.0.1:42944_r_hy",
[junit4] 1> "state":"down"},
[junit4] 1> "core_node3":{
[junit4] 1> "core":"collection1",
[junit4] 1> "base_url":"https://127.0.0.1:33902/r_hy",
[junit4] 1> "node_name":"127.0.0.1:33902_r_hy",
[junit4] 1> "state":"down"}}}}}}
[junit4] 1> /solr/collections/collection1/leader_elect (1)
[junit4] 1> /solr/collections/collection1/leader_elect/shard1 (1)
[junit4] 1> /solr/collections/collection1/leader_elect/shard1/election (0)
[junit4] 1> /solr/collections/control_collection (3)
[junit4] 1> DATA:
[junit4] 1> {"configName":"conf1"}
[junit4] 1> /solr/collections/control_collection/shards (0)
[junit4] 1> /solr/collections/control_collection/leaders (1)
[junit4] 1> /solr/collections/control_collection/leaders/shard1 (1)
[junit4] 1> /solr/collections/control_collection/leaders/shard1/leader (0)
[junit4] 1> DATA:
[junit4] 1> {
[junit4] 1> "core":"collection1",
[junit4] 1> "core_node_name":"core_node1",
[junit4] 1> "base_url":"https://127.0.0.1:37772/r_hy",
[junit4] 1> "node_name":"127.0.0.1:37772_r_hy"}
[junit4] 1> /solr/collections/control_collection/leader_elect (1)
[junit4] 1> /solr/collections/control_collection/leader_elect/shard1 (1)
[junit4] 1> /solr/collections/control_collection/leader_elect/shard1/election (1)
[junit4] 1> /solr/collections/control_collection/leader_elect/shard1/election/96984598843949061-core_node1-n_0000000000 (0)
[junit4] 1> /solr/live_nodes (1)
[junit4] 1> /solr/live_nodes/127.0.0.1:37772_r_hy (0)
[junit4] 1> /solr/overseer_elect (2)
[junit4] 1> /solr/overseer_elect/leader (0)
[junit4] 1> DATA:
[junit4] 1> {"id":"96984598843949061-127.0.0.1:37772_r_hy-n_0000000000"}
[junit4] 1> /solr/overseer_elect/election (1)
[junit4] 1> /solr/overseer_elect/election/96984598843949061-127.0.0.1:37772_r_hy-n_0000000000 (0)
[junit4] 1> /solr/security.json (0)
[junit4] 1> DATA:
[junit4] 1> {}
[junit4] 1> /solr/clusterstate.json (0)
[junit4] 1> DATA:
[junit4] 1> {"control_collection":{
[junit4] 1> "replicationFactor":"1",
[junit4] 1> "router":{"name":"compositeId"},
[junit4] 1> "maxShardsPerNode":"1",
[junit4] 1> "autoAddReplicas":"false",
[junit4] 1> "autoCreated":"true",
[junit4] 1> "shards":{"shard1":{
[junit4] 1> "range":"80000000-7fffffff",
[junit4] 1> "state":"active",
[junit4] 1> "replicas":{"core_node1":{
[junit4] 1> "core":"collection1",
[junit4] 1> "base_url":"https://127.0.0.1:37772/r_hy",
[junit4] 1> "node_name":"127.0.0.1:37772_r_hy",
[junit4] 1> "state":"active",
[junit4] 1> "leader":"true"}}}}}}
[junit4] 1> /solr/clusterprops.json (0)
[junit4] 1> DATA:
[junit4] 1> {"urlScheme":"https"}
[junit4] 1>
[junit4] 2> 126030 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ChaosMonkey monkey: stop shard! 37772
[junit4] 2> 126030 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.CoreContainer Shutting down CoreContainer instance=7852592
[junit4] 2> 126031 INFO (coreCloseExecutor-196-thread-1) [n:127.0.0.1:37772_r_hy c:control_collection s:shard1 r:core_node1 x:collection1] o.a.s.c.SolrCore [collection1] CLOSING SolrCore org.apache.solr.core.SolrCore@1e5859
[junit4] 2> 126033 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.Overseer Overseer (id=96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) closing
[junit4] 2> 126033 INFO (OverseerStateUpdate-96984598843949061-127.0.0.1:37772_r_hy-n_0000000000) [n:127.0.0.1:37772_r_hy ] o.a.s.c.Overseer Overseer Loop exiting : 127.0.0.1:37772_r_hy
[junit4] 2> 127534 WARN (zkCallback-43-thread-5-processing-n:127.0.0.1:37772_r_hy) [n:127.0.0.1:37772_r_hy ] o.a.s.c.c.ZkStateReader ZooKeeper watch triggered, but Solr cannot talk to ZK: [KeeperErrorCode = Session expired for /live_nodes]
[junit4] 2> 127535 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.ServerConnector Stopped ServerConnector@1472a5b{SSL,[ssl, http/1.1]}{127.0.0.1:0}
[junit4] 2> 127535 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@faaeb7{/r_hy,null,UNAVAILABLE}
[junit4] 2> 127536 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ChaosMonkey monkey: stop shard! 38852
[junit4] 2> 127536 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ChaosMonkey monkey: stop shard! 42944
[junit4] 2> 127536 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ChaosMonkey monkey: stop shard! 33902
[junit4] 2> 127536 INFO (TEST-PeerSyncReplicationTest.test-seed#[BE6E9861DFC86AF9]) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1:36121 36121
[junit4] 2> 127556 INFO (Thread-44) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1:36121 36121
[junit4] 2> 127557 WARN (Thread-44) [ ] o.a.s.c.ZkTestServer Watch limit violations:
[junit4] 2> Maximum concurrent create/delete watches above limit:
[junit4] 2>
[junit4] 2> 5 /solr/aliases.json
[junit4] 2> 4 /solr/security.json
[junit4] 2> 4 /solr/configs/conf1
[junit4] 2>
[junit4] 2> Maximum concurrent data watches above limit:
[junit4] 2>
[junit4] 2> 5 /solr/clusterstate.json
[junit4] 2> 5 /solr/clusterprops.json
[junit4] 2> 3 /solr/collections/collection1/state.json
[junit4] 2>
[junit4] 2> Maximum concurrent children watches above limit:
[junit4] 2>
[junit4] 2> 93 /solr/overseer/collection-queue-work
[junit4] 2> 24 /solr/overseer/queue
[junit4] 2> 13 /solr/overseer/queue-work
[junit4] 2> 5 /solr/live_nodes
[junit4] 2> 5 /solr/collections
[junit4] 2>
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=PeerSyncReplicationTest -Dtests.method=test -Dtests.seed=BE6E9861DFC86AF9 -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=PRT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[junit4] FAILURE 94.7s J2 | PeerSyncReplicationTest.test <<<
[junit4] > Throwable #1: java.lang.AssertionError: PeerSynced node did not become leader expected:<CloudJettyRunner [url=https://127.0.0.1:42944/r_hy/collection1]> but was:<CloudJettyRunner [url=https://127.0.0.1:38852/r_hy/collection1]>
[junit4] > at __randomizedtesting.SeedInfo.seed([BE6E9861DFC86AF9:363AA7BB71340701]:0)
[junit4] > at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:154)
[junit4] > at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
[junit4] > at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
[junit4] > at java.lang.Thread.run(Thread.java:745)
[junit4] 2> 127561 INFO (SUITE-PeerSyncReplicationTest-seed#[BE6E9861DFC86AF9]-worker) [ ] o.a.s.SolrTestCaseJ4 ###deleteCore
[junit4] 2> NOTE: leaving temporary files on disk at: /home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.PeerSyncReplicationTest_BE6E9861DFC86AF9-001
[junit4] 2> Nov 23, 2016 2:20:30 AM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
[junit4] 2> WARNING: Will linger awaiting termination of 1 leaked thread(s).
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene62): {other_tl1=FSTOrd50, range_facet_l_dv=PostingsFormat(name=Direct), rnd_s=PostingsFormat(name=Asserting), multiDefault=PostingsFormat(name=Asserting), intDefault=FSTOrd50, a_i1=FSTOrd50, range_facet_l=FSTOrd50, _version_=FSTOrd50, a_t=FSTOrd50, id=PostingsFormat(name=Direct), range_facet_i_dv=FSTOrd50, text=PostingsFormat(name=LuceneFixedGap), timestamp=FSTOrd50}, docValues:{range_facet_l_dv=DocValuesFormat(name=Lucene54), range_facet_i_dv=DocValuesFormat(name=Asserting), timestamp=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=355, maxMBSortInHeap=7.07226502789829, sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=cs, timezone=PRT
[junit4] 2> NOTE: Linux 4.4.0-47-generic i386/Oracle Corporation 1.8.0_102 (32-bit)/cpus=12,threads=1,free=44246224,total=100663296
[junit4] 2> NOTE: All tests run in this JVM: [TestPhraseSuggestions, CoreAdminHandlerTest, TestOrdValues, TestAddFieldRealTimeGet, BooleanFieldTest, SimpleCollectionCreateDeleteTest, CurrencyFieldXmlFileTest, BufferStoreTest, PeerSyncReplicationTest]
[junit4] Completed [39/655 (1!)] on J2 in 95.03s, 1 test, 1 failure <<< FAILURES!
[...truncated 55070 lines...]
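For local triage, the `NOTE: reproduce with:` line in the log above carries everything needed to re-run this failure deterministically. A small helper (an illustration only, not part of the Lucene/Solr build) can pull the `-D` parameters out of such a line:

```python
import re

def parse_reproduce_line(line: str) -> dict:
    """Extract -Dkey=value pairs from a junit4 'reproduce with' log line."""
    return dict(re.findall(r"-D([\w.]+)=(\S+)", line))

# Reproduce line copied from this failure's log output.
line = ("[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=PeerSyncReplicationTest "
        "-Dtests.method=test -Dtests.seed=BE6E9861DFC86AF9 -Dtests.multiplier=3 "
        "-Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=PRT "
        "-Dtests.asserts=true -Dtests.file.encoding=UTF-8")

params = parse_reproduce_line(line)
print(params["tests.seed"])  # master seed: fixes the randomized merge policy, locale, etc.
print(params["testcase"])    # PeerSyncReplicationTest
```

The key value is `tests.seed`: the randomizedtesting framework derives every random choice in the run (merge policy, codec, locale, timezone) from it, so re-running `ant test` with the same `-D` flags replays the same schedule-independent parts of this failure.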