Build: https://builds.apache.org/job/Lucene-Solr-repro/2684/
[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/432/consoleText
[repro] Revision: 35955b3891ed6621d5faa1c2c20ce0a333bc7b83
[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line: ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=CD8D4CEA280B801A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-MT -Dtests.timezone=Australia/Lindeman -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
[repro] Repro line: ant test -Dtestcase=LIROnShardRestartTest -Dtests.method=testAllReplicasInLIR -Dtests.seed=CD8D4CEA280B801A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=vi-VN -Dtests.timezone=Africa/El_Aaiun -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: d4e016afdf41baf0104f79e82d953a4650df42aa
[repro] git fetch
[repro] git checkout 35955b3891ed6621d5faa1c2c20ce0a333bc7b83
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro] solr/core
[repro] LIROnShardRestartTest
[repro] ForceLeaderTest
[repro] ant compile-test
[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.LIROnShardRestartTest|*.ForceLeaderTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=CD8D4CEA280B801A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=vi-VN -Dtests.timezone=Africa/El_Aaiun -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
[...truncated 8132 lines...]
[junit4] 2> 304317 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 305317 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 305317 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 305317 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server.
[junit4] 2> 305318 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 305318 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 305318 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 305318 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 305318 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 305319 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 305319 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 305319 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 305319 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 305319 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 305319 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 305319 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 305320 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 305320 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 305320 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ... [junit4] 2> 306321 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 306321 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 306321 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 306321 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 306322 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 306322 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 306322 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 306322 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 306322 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 306323 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 306323 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 306323 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 306323 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 306323 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 306323 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 306324 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 306324 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 306324 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ... [junit4] 2> 307325 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 307325 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 307325 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 307325 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 307325 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 307325 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 307326 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 307326 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 307326 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 307341 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 307341 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 307341 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 307342 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 307342 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 307342 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 307342 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 307342 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 307342 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ... [junit4] 2> 308348 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 308348 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 308348 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 308348 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 308348 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 308348 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 308349 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 308349 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 308349 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 308349 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 308349 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 308349 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 308350 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 308350 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 308350 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 308351 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 308351 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 308351 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ... [junit4] 2> 309351 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 309352 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 309352 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 309352 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 309352 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 309352 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 309352 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 309352 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 309352 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 309355 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 309355 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 309355 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. [junit4] 2> 309356 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 309373 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 309373 WARN (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_lower_terms_collection] after stale state error from server. 
[junit4] 2> 309373 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_lower_terms_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 309373 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 309373 ERROR (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractFullDistribZkTestBase No more retries available! Add batch failed due to: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. [junit4] 2> 309373 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ForceLeaderTest Document couldn't be sent, which is expected. [junit4] 2> 309373 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ForceLeaderTest Do force leader... [junit4] 2> 309421 INFO (zkConnectionManagerCallback-345-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 309423 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3) [junit4] 2> 309424 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39550/solr ready [junit4] 2> 309424 INFO (SocketProxy-Acceptor-34221) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=50940,localport=34221], receiveBufferSize:531000 [junit4] 2> 309441 INFO (SocketProxy-Acceptor-34221) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=43539,localport=39400], receiveBufferSize=530904 [junit4] 2> 309442 INFO (qtp1969362116-875) [n:127.0.0.1:34221_ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :forceleader with params action=FORCELEADER&collection=forceleader_lower_terms_collection&shard=shard1&wt=javabin&version=2 and sendToOCPQueue=true [junit4] 2> 309442 INFO (qtp1969362116-875) [n:127.0.0.1:34221_ c:forceleader_lower_terms_collection ] o.a.s.h.a.CollectionsHandler Force leader invoked, state: znodeVersion: 0 [junit4] 2> live nodes:[127.0.0.1:34221_, 127.0.0.1:36697_, 127.0.0.1:41620_] [junit4] 2> collections:{collection1=DocCollection(collection1//collections/collection1/state.json/10)={ [junit4] 2> "pullReplicas":"0", [junit4] 2> "replicationFactor":"1", [junit4] 2> "shards":{ [junit4] 2> "shard1":{ [junit4] 2> "range":"80000000-ffffffff", [junit4] 2> "state":"active", [junit4] 2> "replicas":{"core_node3":{ [junit4] 2> "core":"collection1_shard1_replica_n2", [junit4] 2> "base_url":"http://127.0.0.1:41173", [junit4] 2> "node_name":"127.0.0.1:41173_", [junit4] 2> "state":"down", [junit4] 2> "type":"NRT", [junit4] 2> "leader":"true"}}}, [junit4] 2> "shard2":{ [junit4] 2> "range":"0-7fffffff", [junit4] 2> "state":"active", [junit4] 2> "replicas":{ [junit4] 2> "core_node4":{ [junit4] 2> "core":"collection1_shard2_replica_n1", [junit4] 2> "base_url":"http://127.0.0.1:36697", [junit4] 2> "node_name":"127.0.0.1:36697_", [junit4] 2> "state":"active", [junit4] 2> "type":"NRT", [junit4] 2> "leader":"true"}, [junit4] 2> "core_node6":{ [junit4] 2> "core":"collection1_shard2_replica_n5", [junit4] 2> "base_url":"http://127.0.0.1:34221", [junit4] 2> "node_name":"127.0.0.1:34221_", 
[junit4] 2> "state":"active", [junit4] 2> "type":"NRT"}}}}, [junit4] 2> "router":{"name":"compositeId"}, [junit4] 2> "maxShardsPerNode":"1", [junit4] 2> "autoAddReplicas":"false", [junit4] 2> "nrtReplicas":"1", [junit4] 2> "tlogReplicas":"0"}, forceleader_lower_terms_collection=LazyCollectionRef(forceleader_lower_terms_collection), control_collection=LazyCollectionRef(control_collection)} [junit4] 2> 309523 INFO (qtp1969362116-875) [n:127.0.0.1:34221_ c:forceleader_lower_terms_collection ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/forceleader_lower_terms_collection/terms/shard1 to Terms{values={core_node6=1, core_node4=2, core_node5=2}, version=5} [junit4] 2> 309523 INFO (qtp1969362116-875) [n:127.0.0.1:34221_ c:forceleader_lower_terms_collection ] o.a.s.c.ZkShardTerms Successful update of terms at /collections/forceleader_lower_terms_collection/terms/shard1 to Terms{values={core_node6=2, core_node4=2, core_node5=2}, version=6} [junit4] 2> 309660 INFO (qtp1969362116-877) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 309662 INFO (qtp1969362116-879) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 309663 INFO (qtp1969362116-878) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 309689 INFO (qtp1969362116-876) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 309691 INFO (qtp1969362116-877) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 309693 INFO (qtp1969362116-879) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 309701 INFO (qtp1969362116-878) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 309703 INFO (qtp1969362116-876) [n:127.0.0.1:34221_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 309704 INFO (SocketProxy-Acceptor-36697) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=51486,localport=36697], receiveBufferSize:531000 [junit4] 2> 309704 INFO (SocketProxy-Acceptor-36697) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=42896,localport=51084], receiveBufferSize=530904 [junit4] 2> 309730 INFO (qtp210357670-941) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_lower_terms_collection.shard1.replica_n2:INDEX.sizeInBytes} status=0 QTime=21 [junit4] 2> 309752 INFO (qtp210357670-939) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_lower_terms_collection.shard1.replica_n2:INDEX.sizeInBytes} status=0 QTime=15 [junit4] 2> 309753 INFO (qtp210357670-943) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_lower_terms_collection.shard1.replica_n2:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 309755 INFO (qtp210357670-940) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1 [junit4] 2> 309771 INFO (qtp210357670-936) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1 [junit4] 2> 309787 INFO (qtp210357670-1061) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1 [junit4] 2> 309806 INFO (qtp210357670-941) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=17 [junit4] 2> 309809 INFO (qtp210357670-941) [n:127.0.0.1:36697_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1 [junit4] 2> 309822 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=46526,localport=41620], receiveBufferSize:531000 [junit4] 2> 309846 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=38290,localport=55580], receiveBufferSize=530904 [junit4] 2> 309860 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_lower_terms_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=18 [junit4] 2> 309875 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_lower_terms_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=13 [junit4] 2> 309879 INFO (qtp2020964948-808) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_lower_terms_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 309881 INFO (qtp2020964948-807) 
[n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 309915 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=17 [junit4] 2> 309933 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=14 [junit4] 2> 309968 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=33 [junit4] 2> 309970 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1 [junit4] 2> 310027 INFO (AutoscalingActionExecutor-221-thread-1) [ ] o.a.s.c.a.ExecutePlanAction No operations to execute for event: { [junit4] 2> "id":"6a3bea55e8a46fT715vak6m1aucywurtigw3e1pw", [junit4] 2> "source":".auto_add_replicas", [junit4] 2> "eventTime":29902225181025391, [junit4] 2> "eventType":"NODELOST", [junit4] 2> "properties":{ [junit4] 2> "eventTimes":[29902225181025391], [junit4] 2> "preferredOperation":"movereplica", [junit4] 2> "_enqueue_time_":29902235287465555, [junit4] 2> "nodeNames":["127.0.0.1:41173_"]}} [junit4] 2> 314314 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync [junit4] 2> 314315 WARN (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_lower_terms_collection_shard1_replica_n1] coreNodeName=[core_node4] [junit4] 2> 314524 WARN (qtp1969362116-875) [n:127.0.0.1:34221_ c:forceleader_lower_terms_collection ] o.a.s.h.a.CollectionsHandler Force leader attempt 1. Waiting 5 secs for an active leader. 
State of the slice: shard1:{ [junit4] 2> "range":"80000000-7fffffff", [junit4] 2> "state":"active", [junit4] 2> "replicas":{ [junit4] 2> "core_node4":{ [junit4] 2> "core":"forceleader_lower_terms_collection_shard1_replica_n1", [junit4] 2> "base_url":"http://127.0.0.1:41620", [junit4] 2> "node_name":"127.0.0.1:41620_", [junit4] 2> "state":"down", [junit4] 2> "type":"NRT"}, [junit4] 2> "core_node5":{ [junit4] 2> "core":"forceleader_lower_terms_collection_shard1_replica_n3", [junit4] 2> "base_url":"http://127.0.0.1:41173", [junit4] 2> "node_name":"127.0.0.1:41173_", [junit4] 2> "state":"down", [junit4] 2> "type":"NRT"}, [junit4] 2> "core_node6":{ [junit4] 2> "core":"forceleader_lower_terms_collection_shard1_replica_n2", [junit4] 2> "base_url":"http://127.0.0.1:36697", [junit4] 2> "node_name":"127.0.0.1:36697_", [junit4] 2> "state":"down", [junit4] 2> "type":"NRT"}}} [junit4] 2> 316815 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/ [junit4] 2> 316815 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.PeerSync PeerSync: core=forceleader_lower_terms_collection_shard1_replica_n1 url=http://127.0.0.1:41620 START replicas=[http://127.0.0.1:36697/forceleader_lower_terms_collection_shard1_replica_n2/] nUpdates=100 [junit4] 2> 316818 INFO (qtp210357670-939) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n2] webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 316819 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.PeerSync PeerSync: core=forceleader_lower_terms_collection_shard1_replica_n1 url=http://127.0.0.1:41620 Received 1 versions from http://127.0.0.1:36697/forceleader_lower_terms_collection_shard1_replica_n2/ fingerprint:null [junit4] 2> 316819 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.PeerSync PeerSync: core=forceleader_lower_terms_collection_shard1_replica_n1 url=http://127.0.0.1:41620 No additional versions requested. ourHighThreshold=1622739329278279680 otherLowThreshold=1622739329278279680 ourHighest=1622739329278279680 otherHighest=1622739329278279680 [junit4] 2> 316819 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.PeerSync PeerSync: core=forceleader_lower_terms_collection_shard1_replica_n1 url=http://127.0.0.1:41620 DONE. 
sync succeeded [junit4] 2> 316819 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me [junit4] 2> 316819 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/: try and ask http://127.0.0.1:36697/forceleader_lower_terms_collection_shard1_replica_n2/ to sync [junit4] 2> 316821 INFO (qtp210357670-943) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.PeerSync PeerSync: core=forceleader_lower_terms_collection_shard1_replica_n2 url=http://127.0.0.1:36697 START replicas=[http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/] nUpdates=100 [junit4] 2> 316822 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=46600,localport=41620], receiveBufferSize:531000 [junit4] 2> 316822 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=38290,localport=55654], receiveBufferSize=530904 [junit4] 2> 316917 INFO (qtp2020964948-808) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@6917e2d0[forceleader_lower_terms_collection_shard1_replica_n1] realtime] [junit4] 2> 316917 INFO (qtp2020964948-808) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.IndexFingerprint IndexFingerprint millis:94.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=1622739329278279680, maxInHash=1622739329278279680, versionsHash=-2795804146055088105, numVersions=1, numDocs=1, maxDoc=1} [junit4] 2> 316917 INFO (qtp2020964948-808) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=94 [junit4] 2> 316962 INFO (qtp210357670-943) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.s.SolrIndexSearcher Opening [Searcher@76e91609[forceleader_lower_terms_collection_shard1_replica_n2] realtime] [junit4] 2> 316963 INFO (qtp210357670-943) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.IndexFingerprint IndexFingerprint millis:44.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=1622739329278279680, maxInHash=1622739329278279680, versionsHash=-2795804146055088105, numVersions=1, numDocs=1, maxDoc=1} [junit4] 2> 316963 INFO (qtp210357670-943) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.PeerSync We are already in sync. 
No need to do a PeerSync [junit4] 2> 316963 INFO (qtp210357670-943) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n2] webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/&wt=javabin&version=2} status=0 QTime=142 [junit4] 2> 316963 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/: sync completed with http://127.0.0.1:36697/forceleader_lower_terms_collection_shard1_replica_n2/ [junit4] 2> 316964 ERROR (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext WARNING: Potential data loss -- Replica core_node4 became leader after timeout (leaderVoteWait) without being up-to-date with the previous leader [junit4] 2> 316964 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/forceleader_lower_terms_collection/leaders/shard1/leader after winning as /collections/forceleader_lower_terms_collection/leader_elect/shard1/election/74017263731146756-core_node4-n_0000000001 [junit4] 2> 316967 INFO (zkCallback-324-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 316967 INFO (zkCallback-265-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 316967 INFO (zkCallback-265-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 316967 INFO (zkCallback-265-thread-2) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/ shard1 [junit4] 2> 317094 INFO (zkCallback-324-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 317094 INFO (zkCallback-265-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 317094 INFO (zkCallback-265-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 317387 INFO (recoveryExecutor-262-thread-1-processing-n:127.0.0.1:41620_ x:forceleader_lower_terms_collection_shard1_replica_n1 c:forceleader_lower_terms_collection s:shard1 r:core_node4) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.RecoveryStrategy RecoveryStrategy has been closed [junit4] 2> 317387 INFO (recoveryExecutor-262-thread-1-processing-n:127.0.0.1:41620_ x:forceleader_lower_terms_collection_shard1_replica_n1 c:forceleader_lower_terms_collection s:shard1 r:core_node4) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false] [junit4] 2> 317404 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=46608,localport=41620], receiveBufferSize:531000 [junit4] 2> 317405 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=38290,localport=55662], receiveBufferSize=530904 [junit4] 2> 317407 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 QTime=1 [junit4] 2> 317407 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/admin/ping params={wt=javabin&version=2} status=0 QTime=1 [junit4] 2> 317407 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[forceleader_lower_terms_collection_shard1_replica_n2] [junit4] 2> 317408 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.UpdateLog Starting to buffer updates. 
FSUpdateLog{state=ACTIVE, tlog=tlog{file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.ForceLeaderTest_CD8D4CEA280B801A-002/shard-3-001/cores/forceleader_lower_terms_collection_shard1_replica_n2/data/tlog/tlog.0000000000000000000 refcount=1}} [junit4] 2> 317408 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Publishing state of core [forceleader_lower_terms_collection_shard1_replica_n2] as recovering, leader is [http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/] and I am [http://127.0.0.1:36697/forceleader_lower_terms_collection_shard1_replica_n2/] [junit4] 2> 317409 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.ZkShardTerms Successful update of terms at /collections/forceleader_lower_terms_collection/terms/shard1 to Terms{values={core_node6=2, core_node6_recovering=2, core_node4=2, core_node5=2}, version=7} [junit4] 2> 317411 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:41620]; [WaitForState: action=PREPRECOVERY&core=forceleader_lower_terms_collection_shard1_replica_n1&nodeName=127.0.0.1:36697_&coreNodeName=core_node6&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true] [junit4] 2> 317412 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node6, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true [junit4] 2> 317412 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_lower_terms_collection, shard=shard1, thisCore=forceleader_lower_terms_collection_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:36697_, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"core":"forceleader_lower_terms_collection_shard1_replica_n2","base_url":"http://127.0.0.1:36697","node_name":"127.0.0.1:36697_","state":"down","type":"NRT"} [junit4] 2> 317512 INFO (zkCallback-265-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 317512 INFO (zkCallback-324-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 317512 INFO (zkCallback-265-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 317513 INFO (watches-267-thread-1) [ ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_lower_terms_collection, shard=shard1, thisCore=forceleader_lower_terms_collection_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:36697_, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"core":"forceleader_lower_terms_collection_shard1_replica_n2","base_url":"http://127.0.0.1:36697","node_name":"127.0.0.1:36697_","state":"recovering","type":"NRT"} [junit4] 2> 317513 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:36697_&onlyIfLeaderActive=true&core=forceleader_lower_terms_collection_shard1_replica_n1&coreNodeName=core_node6&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=101 [junit4] 2> 318014 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/] - recoveringAfterStartup=[true] [junit4] 2> 318014 WARN (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.PeerSyncWithLeader no frame of reference to tell if we've missed updates [junit4] 2> 318014 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy PeerSync Recovery was not successful - trying replication. [junit4] 2> 318014 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Starting Replication Recovery. 
[junit4] 2> 318014 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Attempting to replicate from [http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/]. [junit4] 2> 318018 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1622739352668864512,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 318018 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@46149f8c commitCommandVersion:1622739352668864512 [junit4] 2> 318108 INFO (SocketProxy-Acceptor-36697) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=51576,localport=36697], receiveBufferSize:531000 [junit4] 2> 318177 INFO (qtp210357670-936) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: BUFFERING replay: false [junit4] 2> 318177 INFO (qtp210357670-936) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_lower_terms_collection_shard1_replica_n2] webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 0 [junit4] 2> 318230 INFO (SocketProxy-Acceptor-36697) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=42896,localport=51174], receiveBufferSize=530904 [junit4] 2> 318249 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@1d1bdbbb[forceleader_lower_terms_collection_shard1_replica_n1] realtime] [junit4] 2> 318249 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 318250 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 232 [junit4] 2> 318254 INFO (qtp2020964948-808) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/replication 
params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=0 [junit4] 2> 318265 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Master's generation: 2 [junit4] 2> 318265 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Master's version: 1547564842862 [junit4] 2> 318265 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Slave's generation: 1 [junit4] 2> 318265 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Slave's version: 0 [junit4] 2> 318265 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Starting replication process [junit4] 2> 318268 INFO (qtp2020964948-807) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/replication params={generation=2&qt=/replication&tlogFiles=false&wt=javabin&version=2&command=filelist} status=0 QTime=2 [junit4] 2> 318268 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Number of files in latest index in master: 4 [junit4] 2> 318270 WARN (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher File _0.cfe did not match. expected checksum is 217746480 and actual is checksum 2259273683. 
expected length is 405 and actual length is 405 [junit4] 2> 318270 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Starting download (fullCopy=true) to MockDirectoryWrapper(RAMDirectory@603ecf6f lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@61838960) [junit4] 2> 318270 WARN (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher File _0.cfe did not match. expected checksum is 217746480 and actual is checksum 2259273683. expected length is 405 and actual length is 405 [junit4] 2> 318271 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/replication params={generation=2&qt=/replication&file=_0.cfe&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0 [junit4] 2> 318312 WARN (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher File _0.si did not match. expected checksum is 4057686650 and actual is checksum 950829739. expected length is 384 and actual length is 384 [junit4] 2> 318313 INFO (qtp2020964948-810) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/replication params={generation=2&qt=/replication&file=_0.si&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0 [junit4] 2> 318379 WARN (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher File _0.cfs did not match. expected checksum is 4217339397 and actual is checksum 3702554546. 
expected length is 3122 and actual length is 3122 [junit4] 2> 318381 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/replication params={generation=2&qt=/replication&file=_0.cfs&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0 [junit4] 2> 318416 INFO (qtp2020964948-808) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/replication params={generation=2&qt=/replication&file=segments_2&checksum=true&wt=filestream&command=filecontent} status=0 QTime=0 [junit4] 2> 318455 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Bytes downloaded: 4114, Bytes skipped downloading: 0 [junit4] 2> 318455 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher Total time taken for download (fullCopy=true,bytesDownloaded=4114) : 0 secs (null bytes/sec) to MockDirectoryWrapper(RAMDirectory@603ecf6f lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@61838960) [junit4] 2> 318463 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.SolrCore Updating index properties... 
index=index.20190115150723112 [junit4] 2> 318464 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.h.IndexFetcher removing old index directory MockDirectoryWrapper(RAMDirectory@61aa019e lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@306ac01a) [junit4] 2> 318467 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.AlcoholicMergePolicy: [AlcoholicMergePolicy: minMergeSize=0, mergeFactor=10, maxMergeSize=2105231426, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.1] [junit4] 2> 318468 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.DefaultSolrCoreState New IndexWriter is ready to be used. [junit4] 2> 318469 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.s.SolrIndexSearcher Opening [Searcher@416c149b[forceleader_lower_terms_collection_shard1_replica_n2] main] [junit4] 2> 318486 INFO (searcherExecutor-280-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.SolrCore [forceleader_lower_terms_collection_shard1_replica_n2] Registered new searcher Searcher@416c149b[forceleader_lower_terms_collection_shard1_replica_n2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):c1:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1547564841670}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))} [junit4] 2> 318504 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy No replay needed. 
[junit4] 2> 318504 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Replication Recovery was successful. [junit4] 2> 318504 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Registering as Active after recovery. [junit4] 2> 318509 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.ZkShardTerms Successful update of terms at /collections/forceleader_lower_terms_collection/terms/shard1 to Terms{values={core_node6=2, core_node4=2, core_node5=2}, version=8} [junit4] 2> 318509 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Updating version bucket highest from index after successful recovery. [junit4] 2> 318640 INFO (recoveryExecutor-321-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[true] [junit4] 2> 318715 INFO (zkCallback-324-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 318715 INFO (zkCallback-265-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 318715 INFO (zkCallback-265-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_lower_terms_collection/state.json] for collection [forceleader_lower_terms_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 319525 INFO (qtp1969362116-875) [n:127.0.0.1:34221_ c:forceleader_lower_terms_collection ] o.a.s.h.a.CollectionsHandler Successfully issued FORCELEADER command for collection: forceleader_lower_terms_collection, shard: shard1 [junit4] 2> 319525 INFO (qtp1969362116-875) [n:127.0.0.1:34221_ c:forceleader_lower_terms_collection ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={action=FORCELEADER&collection=forceleader_lower_terms_collection&shard=shard1&wt=javabin&version=2} status=0 QTime=10083 [junit4] 2> 319528 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractDistribZkTestBase Wait for recoveries to finish - collection: forceleader_lower_terms_collection failOnTimeout:true timeout (sec):330 [junit4] 1> replica:core_node4 rstate:active live:true [junit4] 1> replica:core_node4 rstate:active live:true [junit4] 1> replica:core_node5 rstate:down live:false [junit4] 1> replica:core_node6 rstate:active live:true [junit4] 1> no one is recoverying [junit4] 1> replica:core_node5 rstate:down live:false [junit4] 1> replica:core_node6 rstate:active live:true [junit4] 1> no one is recoverying [junit4] 2> 319566 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.AbstractDistribZkTestBase Recoveries finished - collection: forceleader_lower_terms_collection [junit4] 2> 319569 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ForceLeaderTest After forcing leader: shard1:{ [junit4] 2> "range":"80000000-7fffffff", [junit4] 2> "state":"active", [junit4] 2> "replicas":{ [junit4] 2> "core_node4":{ [junit4] 2> "core":"forceleader_lower_terms_collection_shard1_replica_n1", [junit4] 2> "base_url":"http://127.0.0.1:41620", [junit4] 2> "node_name":"127.0.0.1:41620_", [junit4] 2> "state":"active", [junit4] 2> "type":"NRT", [junit4] 2> "leader":"true"}, [junit4] 2> "core_node5":{ [junit4] 2> "core":"forceleader_lower_terms_collection_shard1_replica_n3", [junit4] 2> "base_url":"http://127.0.0.1:41173", [junit4] 2> "node_name":"127.0.0.1:41173_", [junit4] 2> "state":"down", [junit4] 2> "type":"NRT"}, [junit4] 2> "core_node6":{ [junit4] 2> "core":"forceleader_lower_terms_collection_shard1_replica_n2", [junit4] 2> "base_url":"http://127.0.0.1:36697", [junit4] 2> "node_name":"127.0.0.1:36697_", [junit4] 2> "state":"active", [junit4] 2> "type":"NRT"}}} [junit4] 2> 319610 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ForceLeaderTest Sending doc 4... 
[junit4] 2> 319762 INFO (qtp2020964948-807) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/forceleader_lower_terms_collection/terms/shard1 to Terms{values={core_node6=3, core_node4=3, core_node5=2}, version=9} [junit4] 2> 319906 INFO (qtp210357670-1061) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_lower_terms_collection_shard1_replica_n2] webapp= path=/update params={update.distrib=FROMLEADER&distrib.from=http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/&wt=javabin&version=2}{add=[4 (1622739354364411904)]} 0 159 [junit4] 2> 319907 INFO (qtp2020964948-807) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/update params={wt=javabin&version=2}{add=[4 (1622739354364411904)]} 0 272 [junit4] 2> 319907 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ForceLeaderTest Committing... [junit4] 2> 319926 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1622739354669547520,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 319947 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@46149f8c commitCommandVersion:1622739354669547520 [junit4] 2> 319967 INFO (qtp210357670-941) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1622739354712539136,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 319967 INFO (qtp210357670-941) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@54f2cd95 commitCommandVersion:1622739354712539136 [junit4] 2> 319990 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@26774c84[forceleader_lower_terms_collection_shard1_replica_n1] main] [junit4] 2> 320050 INFO (qtp210357670-941) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.s.SolrIndexSearcher Opening [Searcher@1d5e863[forceleader_lower_terms_collection_shard1_replica_n2] main] [junit4] 2> 320050 INFO (qtp210357670-941) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 320052 INFO 
(searcherExecutor-275-thread-1-processing-n:127.0.0.1:41620_ x:forceleader_lower_terms_collection_shard1_replica_n1 c:forceleader_lower_terms_collection s:shard1 r:core_node4) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.SolrCore [forceleader_lower_terms_collection_shard1_replica_n1] Registered new searcher Searcher@26774c84[forceleader_lower_terms_collection_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):c1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1547564841670}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]) Uninverting(_1(7.7.0):c1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1547564844811}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))} [junit4] 2> 320052 INFO (searcherExecutor-280-thread-1-processing-n:127.0.0.1:36697_ x:forceleader_lower_terms_collection_shard1_replica_n2 c:forceleader_lower_terms_collection s:shard1 r:core_node6) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.c.SolrCore [forceleader_lower_terms_collection_shard1_replica_n2] Registered new searcher Searcher@1d5e863[forceleader_lower_terms_collection_shard1_replica_n2] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):c1:[diagnostics={java.runtime.version=1.8.0_191-b12, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os=Linux, os.arch=amd64, os.version=4.4.0-112-generic, source=flush, timestamp=1547564841670}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}]) Uninverting(_1(7.7.0):c1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1547564844834}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))} [junit4] 2> 320053 INFO (qtp210357670-941) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forceleader_lower_terms_collection_shard1_replica_n2] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_lower_terms_collection_shard1_replica_n2] webapp= path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=true&commit=true&softCommit=false&distrib.from=http://127.0.0.1:41620/forceleader_lower_terms_collection_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 85 [junit4] 2> 320088 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 320104 INFO (qtp2020964948-809) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/update 
params={_stateVer_=forceleader_lower_terms_collection:14&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 178 [junit4] 2> 320107 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ForceLeaderTest Doc 4 sent and commit issued [junit4] 2> 320110 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=46622,localport=41620], receiveBufferSize:531000 [junit4] 2> 320111 INFO (SocketProxy-Acceptor-41620) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=38290,localport=55676], receiveBufferSize=530904 [junit4] 2> 320157 INFO (qtp2020964948-811) [n:127.0.0.1:41620_ c:forceleader_lower_terms_collection s:shard1 r:core_node4 x:forceleader_lower_terms_collection_shard1_replica_n1] o.a.s.c.S.Request [forceleader_lower_terms_collection_shard1_replica_n1] webapp= path=/get params={distrib=false&qt=/get&id=1&wt=javabin&version=2} status=0 QTime=33 [junit4] 2> 320161 INFO (SocketProxy-Acceptor-36697) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=51590,localport=36697], receiveBufferSize:531000 [junit4] 2> 320177 INFO (SocketProxy-Acceptor-36697) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=42896,localport=51188], receiveBufferSize=530904 [junit4] 2> 320179 INFO (qtp210357670-943) [n:127.0.0.1:36697_ c:forceleader_lower_terms_collection s:shard1 r:core_node6 x:forcelead [...truncated too long message...] casInLIRNoLeader-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ZkTestServer Shutting down ZkTestServer. [junit4] 2> 513540 WARN (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer Watch limit violations: [junit4] 2> Maximum concurrent create/delete watches above limit: [junit4] 2> [junit4] 2> 14 /solr/collections/collection1/state.json [junit4] 2> 11 /solr/aliases.json [junit4] 2> 9 /solr/collections/collection1/terms/shard2 [junit4] 2> 5 /solr/collections/forceleader_test_collection/terms/shard1 [junit4] 2> 4 /solr/security.json [junit4] 2> 4 /solr/configs/conf1 [junit4] 2> 2 /solr/collections/collection1/terms/shard1 [junit4] 2> 2 /solr/collections/control_collection/terms/shard1 [junit4] 2> [junit4] 2> Maximum concurrent data watches above limit: [junit4] 2> [junit4] 2> 100 /solr/collections/forceleader_test_collection/state.json [junit4] 2> 11 /solr/clusterprops.json [junit4] 2> 11 /solr/clusterstate.json [junit4] 2> 9 /solr/collections/control_collection/state.json [junit4] 2> 2 /solr/overseer_elect/election/74017268443119626-127.0.0.1:34750_-n_0000000001 [junit4] 2> [junit4] 2> Maximum concurrent children watches above limit: [junit4] 2> [junit4] 2> 11 /solr/collections [junit4] 2> 10 /solr/live_nodes [junit4] 2> [junit4] 2> 513543 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:37583 [junit4] 2> 513543 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[CD8D4CEA280B801A]) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 37583 [junit4] 2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory. 
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=CD8D4CEA280B801A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=vi-VN -Dtests.timezone=Africa/El_Aaiun -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] ERROR 181s J0 | ForceLeaderTest.testReplicasInLIRNoLeader <<< [junit4] > Throwable #1: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:38908/forceleader_test_collection [junit4] > at __randomizedtesting.SeedInfo.seed([CD8D4CEA280B801A:2B1A782A1189797B]:0) [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) [junit4] > at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484) [junit4] > at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414) [junit4] > at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110) [junit4] > at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) [junit4] > at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) [junit4] > at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) [junit4] > at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504) [junit4] > at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:479) [junit4] > at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:294) [junit4] > at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1063) [junit4] > at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1035) [junit4] > at java.lang.Thread.run(Thread.java:748) [junit4] > Caused by: java.net.SocketTimeoutException: Read timed out [junit4] > at java.net.SocketInputStream.socketRead0(Native Method) [junit4] > at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) [junit4] > at java.net.SocketInputStream.read(SocketInputStream.java:171) [junit4] > at java.net.SocketInputStream.read(SocketInputStream.java:141) [junit4] > at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) [junit4] > at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) [junit4] > at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282) [junit4] > at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) [junit4] > at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) [junit4] > at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) [junit4] > at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) [junit4] > at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165) [junit4] > at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) [junit4] > at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) [junit4] > at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) [junit4] > at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) [junit4] > at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) [junit4] > at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) [junit4] > at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) [junit4] > at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) [junit4] > at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:542) [junit4] > ... 52 more [junit4] 2> NOTE: leaving temporary files on disk at: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.ForceLeaderTest_CD8D4CEA280B801A-002 [junit4] 2> Jan 15, 2019 3:10:38 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks [junit4] 2> WARNING: Will linger awaiting termination of 3 leaked thread(s). [junit4] 2> NOTE: test params are: codec=Lucene70, sim=RandomSimilarity(queryNorm=false): {}, locale=vi-VN, timezone=Africa/El_Aaiun [junit4] 2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=1,free=277475944,total=535822336 [junit4] 2> NOTE: All tests run in this JVM: [ForceLeaderTest, LIROnShardRestartTest, ForceLeaderTest] [junit4] Completed [10/10 (6!)] on J0 in 254.08s, 3 tests, 1 error, 1 skipped <<< FAILURES! [junit4] [junit4] [junit4] Tests with failures [seed: CD8D4CEA280B801A] (first 10 out of 11): [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLowerTerms [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLowerTerms [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLowerTerms [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader [junit4] - org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR [junit4] - org.apache.solr.cloud.LIROnShardRestartTest.testSeveralReplicasInLIR [junit4] - org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR [junit4] - org.apache.solr.cloud.LIROnShardRestartTest.testSeveralReplicasInLIR [junit4] [junit4] [junit4] JVM J0: 2.90 .. 660.43 = 657.53s [junit4] JVM J1: 2.92 .. 502.61 = 499.69s [junit4] JVM J2: 2.89 .. 
427.22 = 424.33s [junit4] Execution time total: 11 minutes [junit4] Tests summary: 10 suites, 25 tests, 11 errors, 5 ignored BUILD FAILED /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 10 suites, 25 tests, 11 errors, 5 ignored [seed: CD8D4CEA280B801A] Total time: 11 minutes 9 seconds [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.LIROnShardRestartTest [repro] 4/5 failed: org.apache.solr.cloud.ForceLeaderTest [repro] git checkout d4e016afdf41baf0104f79e82d953a4650df42aa Previous HEAD position was 35955b3... LUCENE-8623: Decrease I/O pressure when merging high dimensional points HEAD is now at d4e016a... LUCENE-8474: (partial) removal of accesses to RAMFile and RAMDirectory streams. Removal of GrowableByteArrayDataOutput. [repro] Exiting with code 256 Archiving artifacts [Fast Archiver] No artifacts from Lucene-Solr-repro Repro-Lucene-Solr-SmokeRelease-7.x#428 to compare, so performing full copy of artifacts Recording test results Build step 'Publish JUnit test result report' changed build result to UNSTABLE Email was triggered for: Unstable (Test Failures) Sending email for trigger: Unstable (Test Failures)
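For convenience, the two NOTE lines in the console output above can be combined into a local repro sequence. This is only a sketch: it assumes a checkout of the tested revision (35955b3891ed6621d5faa1c2c20ce0a333bc7b83) and that -Dtests.linedocsfile is pointed at wherever the line-docs file lands locally, rather than the Jenkins workspace path shown in the report.

    # fetch the large Jenkins line-docs file (run from the lucene/ directory, per the NOTE above)
    ant get-jenkins-line-docs

    # re-run the failing method with the same seed and flags as the reproduce-with line
    ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader \
      -Dtests.seed=CD8D4CEA280B801A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true \
      -Dtests.badapples=true -Dtests.linedocsfile=/path/to/enwiki.random.lines.txt \
      -Dtests.locale=vi-VN -Dtests.timezone=Africa/El_Aaiun -Dtests.asserts=true \
      -Dtests.file.encoding=US-ASCII

Note that the [repro] summary above shows the failures are intermittent (2/5 and 4/5 runs failed), so a single local run with this seed may or may not hit the timeout.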
