Build: https://builds.apache.org/job/Lucene-Solr-repro/2675/
[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/430/consoleText
[repro] Revision: 734f20b298c0846cc319cbb011c3f44398b54005
[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line: ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=D7C3616BE0F39CD1 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=sr-Latn-ME -Dtests.timezone=Navajo -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] Repro line: ant test -Dtestcase=StressHdfsTest -Dtests.method=test -Dtests.seed=D7C3616BE0F39CD1 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=es-UY -Dtests.timezone=Asia/Baku -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: dcc9ffe186eb1873fcebc56382e3be34245b0ecc
[repro] git fetch
[repro] git checkout 734f20b298c0846cc319cbb011c3f44398b54005
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ForceLeaderTest
[repro]       StressHdfsTest
[repro] ant compile-test
[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.ForceLeaderTest|*.StressHdfsTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=D7C3616BE0F39CD1 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=sr-Latn-ME -Dtests.timezone=Navajo -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[...truncated 36436 lines...]
[junit4] 2> 616607 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 616991 ERROR (indexFetcher-508-thread-1) [ ] o.a.s.h.ReplicationHandler Index fetch failed :org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: forceleader_test_collection slice: shard1 saw state=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/14)={
[junit4] 2>   "pullReplicas":"0",
[junit4] 2>   "replicationFactor":"0",
[junit4] 2>   "shards":{"shard1":{
[junit4] 2>       "range":"80000000-7fffffff",
[junit4] 2>       "state":"active",
[junit4] 2>       "replicas":{
[junit4] 2>         "core_node2":{
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t1",
[junit4] 2>           "base_url":"http://127.0.0.1:41484/kiilu",
[junit4] 2>           "node_name":"127.0.0.1:41484_kiilu",
[junit4] 2>           "state":"down",
[junit4] 2>           "type":"TLOG"},
[junit4] 2>         "core_node4":{
[junit4] 2>           "state":"down",
[junit4] 2>           "base_url":"http://127.0.0.1:36535/kiilu",
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t3",
[junit4] 2>           "node_name":"127.0.0.1:36535_kiilu",
[junit4] 2>           "force_set_state":"false",
[junit4] 2>           "type":"TLOG"},
[junit4] 2>         "core_node6":{
[junit4] 2>           "state":"down",
[junit4] 2>           "base_url":"http://127.0.0.1:46742/kiilu",
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t5",
[junit4] 2>           "node_name":"127.0.0.1:46742_kiilu",
[junit4] 2>           "force_set_state":"false",
[junit4] 2>           "type":"TLOG"}}}},
[junit4] 2>   "router":{"name":"compositeId"},
[junit4] 2>   "maxShardsPerNode":"1",
[junit4] 2>   "autoAddReplicas":"false",
[junit4] 2>   "nrtReplicas":"0",
[junit4] 2>   "tlogReplicas":"3"} with live_nodes=[127.0.0.1:38731_kiilu, 127.0.0.1:46742_kiilu, 127.0.0.1:36535_kiilu]
[junit4] 2> at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:902)
[junit4] 2> at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:879)
[junit4] 2> at org.apache.solr.handler.IndexFetcher.getLeaderReplica(IndexFetcher.java:688)
[junit4] 2> at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:381)
[junit4] 2> at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346)
[junit4] 2> at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425)
[junit4] 2> at org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
[junit4] 2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[junit4] 2> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[junit4] 2> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[junit4] 2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2> at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 616992 INFO (recoveryExecutor-495-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
[junit4] 2> 617607 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 617607 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 617607 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 617608 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 617608 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 617608 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 617608 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 617608 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 617608 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 617608 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 617608 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 617608 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 617609 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 617609 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 617609 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
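The "No registered leader was found after waiting for 4000ms" failure above comes from the TLOG replica's background index fetcher asking the ZooKeeper cluster-state view for a shard leader while all three replicas are down. A minimal sketch of that lookup, assuming a live ZkStateReader handle (ZkStateReader and getLeaderRetry are real Solr APIs per the stack trace; the wrapper class is hypothetical):

    import org.apache.solr.common.cloud.Replica;
    import org.apache.solr.common.cloud.ZkStateReader;

    public class LeaderLookupSketch {
        // Waits up to 4000 ms for a registered shard leader; if none appears
        // (here: every TLOG replica is in state "down"), getLeaderRetry throws
        // the SolrException logged by the ReplicationHandler above.
        static Replica findLeader(ZkStateReader reader) throws InterruptedException {
            return reader.getLeaderRetry("forceleader_test_collection", "shard1", 4000);
        }
    }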
[junit4] 2> 617609 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 617609 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 617609 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 618142 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy RecoveryStrategy has been closed
[junit4] 2> 618142 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
[junit4] 2> 618142 INFO (updateExecutor-470-thread-2-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 618142 INFO (updateExecutor-470-thread-2-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 3805ms
[junit4] 2> 618610 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 618610 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 618610 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 618610 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 618610 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 618610 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 618610 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 618610 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 618610 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 618611 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 618611 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 618611 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 618611 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 618611 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 618611 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 618611 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 618611 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 618611 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 618745 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/
[junit4] 2> 618746 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu START replicas=[http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
[junit4] 2> 618747 INFO (qtp322138945-12026) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp=/kiilu path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0
[junit4] 2> 618747 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu Received 1 versions from http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/ fingerprint:null
[junit4] 2> 618747 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu No additional versions requested. ourHighThreshold=1622480344758353920 otherLowThreshold=1622480344758353920 ourHighest=1622480344758353920 otherHighest=1622480344758353920
[junit4] 2> 618747 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu DONE. sync succeeded
[junit4] 2> 618747 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 618747 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/ to sync
[junit4] 2> 618748 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:36535/kiilu START replicas=[http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
[junit4] 2> 618749 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
[junit4] 2> 618749 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp=/kiilu path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0
[junit4] 2> 618750 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
[junit4] 2> 618750 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync
[junit4] 2> 618750 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp=/kiilu path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=1
[junit4] 2> 618750 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/: sync completed with http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/
[junit4] 2> 618750 WARN (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t5 as down and I haven't recovered yet, so I shouldn't be the leader.
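In the exchange above, the would-be leader (core_node6) runs PeerSync: it pulls its peer's last 100 update versions (getVersions=100), sees both sides hold the same single version (ourHighest == otherHighest), and declares the sync successful without transferring anything. A deliberately simplified, hypothetical sketch of that comparison; Solr's real PeerSync additionally checks index fingerprints and replays missing updates from the tlog:

    import java.util.Collections;
    import java.util.List;

    public class PeerSyncSketch {
        // Hypothetical simplification: replicas count as "already in sync" when
        // we have seen everything the peer reports, matching the log's
        // "No additional versions requested" / "We are already in sync" lines.
        static boolean alreadyInSync(List<Long> ours, List<Long> theirs) {
            if (theirs.isEmpty()) return true;   // nothing to pull from the peer
            if (ours.isEmpty()) return false;    // we have nothing; full recovery needed
            long ourHighest = Collections.max(ours);
            long otherHighest = Collections.max(theirs);
            return ourHighest >= otherHighest && ours.containsAll(theirs);
        }
    }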
[junit4] 2> 618751 ERROR (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
[junit4] 2> at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
[junit4] 2> at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
[junit4] 2> at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
[junit4] 2> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
[junit4] 2> at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
[junit4] 2> at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
[junit4] 2> at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
[junit4] 2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[junit4] 2> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
[junit4] 2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2> at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 618751 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
[junit4] 2> 618753 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 618753 WARN (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
[junit4] 2> 618753 INFO (zkCallback-498-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 618754 INFO (zkCallback-476-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 618755 INFO (zkCallback-476-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 618755 INFO (zkCallback-476-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 618756 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
[junit4] 2> 618756 WARN (updateExecutor-494-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
[junit4] 2> 618757 INFO (updateExecutor-494-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 618757 INFO (updateExecutor-494-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 6594ms
[junit4] 2> 618988 INFO (SocketProxy-Acceptor-36535) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=35440,localport=36535], receiveBufferSize:531000
[junit4] 2> 618991 INFO (SocketProxy-Acceptor-36535) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=33358,localport=56878], receiveBufferSize=530904
[junit4] 2> 618992 INFO (qtp322138945-12025) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n2:INDEX.sizeInBytes} status=0 QTime=1
[junit4] 2> 618994 INFO (qtp322138945-12028) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n2:INDEX.sizeInBytes} status=0 QTime=1
[junit4] 2> 618995 INFO (qtp322138945-12026) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n2:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 618997 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 618998 INFO (qtp322138945-12024) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619000 INFO (qtp322138945-12025) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619001 INFO (qtp322138945-12028) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619003 INFO (qtp322138945-12026) [n:127.0.0.1:36535_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619006 INFO (qtp965495214-11958) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 619006 INFO (qtp965495214-11957) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 619007 INFO (qtp965495214-11959) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 619008 INFO (qtp965495214-11960) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619009 INFO (qtp965495214-11961) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619010 INFO (qtp965495214-11958) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619011 INFO (qtp965495214-11957) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619014 INFO (qtp965495214-11959) [n:127.0.0.1:38731_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619015 INFO (SocketProxy-Acceptor-46742) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=44976,localport=46742], receiveBufferSize:531000
[junit4] 2> 619016 INFO (SocketProxy-Acceptor-46742) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=43194,localport=48402], receiveBufferSize=530904
[junit4] 2> 619017 INFO (qtp2146875188-12050) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 619019 INFO (qtp2146875188-12054) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 619019 INFO (qtp2146875188-12053) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 619021 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619023 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619024 INFO (qtp2146875188-12050) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619025 INFO (qtp2146875188-12054) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619027 INFO (qtp2146875188-12053) [n:127.0.0.1:46742_kiilu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 619035 INFO (AutoscalingActionExecutor-437-thread-1) [ ] o.a.s.c.a.ExecutePlanAction No operations to execute for event: {
[junit4] 2>   "id":"695b47f104d1ffT1c4n7gjs9zk5tdv2un6kqlifs",
[junit4] 2>   "source":".auto_add_replicas",
[junit4] 2>   "eventTime":29655237099049471,
[junit4] 2>   "eventType":"NODELOST",
[junit4] 2>   "properties":{
[junit4] 2>     "eventTimes":[29655237099049471],
[junit4] 2>     "preferredOperation":"movereplica",
[junit4] 2>     "_enqueue_time_":29655247107399151,
[junit4] 2>     "nodeNames":["127.0.0.1:41484_kiilu"]}}
[junit4] 2> 619613 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 619613 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 619613 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 619613 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 619613 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 619613 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 619614 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 619614 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 619614 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 619614 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 619614 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 619614 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 619615 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 619615 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 619615 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 619615 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 619615 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 619615 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 620616 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 620616 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 620616 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 620616 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 620616 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 620616 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 620617 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 620617 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 620617 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 620617 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 620617 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 620617 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 620617 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 620617 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 620617 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 620618 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 620618 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 620618 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
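The repeating ERROR/INFO/WARN triplets above are CloudSolrClient's bounded retry on a stale-state (510) response: up to five retries per request, since the failure is not a communication error, after which the test harness sleeps one second and submits the batch again. A minimal sketch of that shape with hypothetical names (not CloudSolrClient's actual code):

    import java.util.concurrent.TimeUnit;

    public class BoundedRetrySketch {
        interface Request<T> { T send() throws Exception; }

        // Retry a request a fixed number of times, mirroring retry=0..5 above.
        static <T> T withRetries(Request<T> req, int maxRetries) throws Exception {
            Exception last = null;
            for (int retry = 0; retry <= maxRetries; retry++) {
                try {
                    return req.send();
                } catch (Exception e) {
                    last = e; // e.g. 510 "Could not find a healthy node"; refresh state, retry
                }
            }
            throw last;       // surfaces as "No more retries available!" in the harness
        }

        public static void main(String[] args) throws Exception {
            try {
                withRetries(() -> { throw new IllegalStateException("510"); }, 5);
            } catch (Exception expected) {
                TimeUnit.SECONDS.sleep(1); // "Sleeping for 1 seconds before re-try ..."
            }
        }
    }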
[junit4] 2> 621253 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/
[junit4] 2> 621253 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:36535/kiilu START replicas=[http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
[junit4] 2> 621254 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp=/kiilu path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0
[junit4] 2> 621254 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:36535/kiilu Received 1 versions from http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/ fingerprint:null
[junit4] 2> 621255 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:36535/kiilu No additional versions requested. ourHighThreshold=1622480344758353920 otherLowThreshold=1622480344758353920 ourHighest=1622480344758353920 otherHighest=1622480344758353920
[junit4] 2> 621255 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:36535/kiilu DONE. sync succeeded
[junit4] 2> 621255 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 621255 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/: try and ask http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/ to sync
[junit4] 2> 621255 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu START replicas=[http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
[junit4] 2> 621256 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
[junit4] 2> 621256 INFO (qtp322138945-12027) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp=/kiilu path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0
[junit4] 2> 621257 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
[junit4] 2> 621257 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync
[junit4] 2> 621257 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp=/kiilu path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/&wt=javabin&version=2} status=0 QTime=2
[junit4] 2> 621257 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/: sync completed with http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/
[junit4] 2> 621258 WARN (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t3 as down and I haven't recovered yet, so I shouldn't be the leader.
[junit4] 2> 621258 ERROR (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
[junit4] 2> at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
[junit4] 2> at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
[junit4] 2> at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
[junit4] 2> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
[junit4] 2> at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
[junit4] 2> at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
[junit4] 2> at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
[junit4] 2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[junit4] 2> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
[junit4] 2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2> at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 621258 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
[junit4] 2> 621259 INFO (zkCallback-476-thread-2) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
[junit4] 2> 621259 WARN (updateExecutor-470-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
[junit4] 2> 621261 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 621261 WARN (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
[junit4] 2> 621262 INFO (zkCallback-476-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 621262 INFO (zkCallback-476-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 621262 INFO (zkCallback-498-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 621262 INFO (zkCallback-476-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 621619 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 621619 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 621619 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 621619 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 621619 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 621619 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 621620 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 621620 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 621620 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
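Each surviving TLOG replica wins the ZooKeeper election race in turn, syncs successfully with its peer, and then refuses leadership: the downed previous leader (127.0.0.1:41484) had written leader-initiated-recovery (LIR) "down" markers for both cores, and checkLIR vetoes any candidate that is still marked down and has not recovered. A simplified, hypothetical rendering of that guard (the real logic is in ShardLeaderElectionContext.checkLIR, ElectionContext.java:631 in this build):

    import org.apache.solr.common.SolrException;
    import org.apache.solr.common.SolrException.ErrorCode;

    public class LirGuardSketch {
        // If our LIR znode still says "down" and we have not completed recovery,
        // abort the leadership attempt, as the stack traces above show.
        static void checkLir(String lirState, boolean hasRecovered) {
            if ("down".equals(lirState) && !hasRecovered) {
                throw new SolrException(ErrorCode.SERVER_ERROR,
                        "Leader Initiated Recovery prevented leadership");
            }
        }
    }

With no replica able to pass this check, the shard stays leaderless until the LIR markers are cleared, which is exactly what the FORCELEADER call further below does.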
[junit4] 2> 621620 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 621620 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 621620 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 621621 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 621621 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 621621 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 621621 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 621621 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 621621 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.AbstractFullDistribZkTestBase No more retries available! Add batch failed due to: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request.
[junit4] 2> 621621 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.ForceLeaderTest Document couldn't be sent, which is expected.
[junit4] 2> 621627 INFO (zkConnectionManagerCallback-521-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 621628 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
[junit4] 2> 621630 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35962/solr ready
[junit4] 2> 621630 INFO (SocketProxy-Acceptor-38731) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=59844,localport=38731], receiveBufferSize:531000
[junit4] 2> 621631 INFO (SocketProxy-Acceptor-38731) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=42435,localport=33770], receiveBufferSize=530904
[junit4] 2> 621631 INFO (qtp965495214-11961) [n:127.0.0.1:38731_kiilu    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :forceleader with params action=FORCELEADER&collection=forceleader_test_collection&shard=shard1&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 621631 INFO (qtp965495214-11961) [n:127.0.0.1:38731_kiilu c:forceleader_test_collection   ] o.a.s.h.a.CollectionsHandler Force leader invoked, state: znodeVersion: 0
[junit4] 2> live nodes:[127.0.0.1:36535_kiilu, 127.0.0.1:38731_kiilu, 127.0.0.1:46742_kiilu]
[junit4] 2> collections:{control_collection=DocCollection(control_collection//collections/control_collection/state.json/3)={
[junit4] 2>   "pullReplicas":"0",
[junit4] 2>   "replicationFactor":"1",
[junit4] 2>   "shards":{"shard1":{
[junit4] 2>     "range":"80000000-7fffffff",
[junit4] 2>     "state":"active",
[junit4] 2>     "replicas":{"core_node2":{
[junit4] 2>       "core":"control_collection_shard1_replica_n1",
[junit4] 2>       "base_url":"http://127.0.0.1:38731/kiilu",
[junit4] 2>       "node_name":"127.0.0.1:38731_kiilu",
[junit4] 2>       "state":"active",
[junit4] 2>       "type":"NRT",
[junit4] 2>       "leader":"true"}}}},
[junit4] 2>   "router":{"name":"compositeId"},
[junit4] 2>   "maxShardsPerNode":"1",
[junit4] 2>   "autoAddReplicas":"false",
[junit4] 2>   "nrtReplicas":"1",
[junit4] 2>   "tlogReplicas":"0"}, collection1=LazyCollectionRef(collection1), forceleader_test_collection=LazyCollectionRef(forceleader_test_collection)}
[junit4] 2> 621671 INFO (qtp965495214-11961) [n:127.0.0.1:38731_kiilu c:forceleader_test_collection   ] o.a.s.h.a.CollectionsHandler Cleaning out LIR data, which was: /collections/forceleader_test_collection/leader_initiated_recovery/shard1 (2)
[junit4] 2> /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node6 (0)
[junit4] 2> DATA:
[junit4] 2>     {
[junit4] 2>       "state":"down",
[junit4] 2>       "createdByNodeName":"127.0.0.1:41484_kiilu",
[junit4] 2>       "createdByCoreNodeName":"core_node2"}
[junit4] 2> /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node4 (0)
[junit4] 2> DATA:
[junit4] 2>     {
[junit4] 2>       "state":"down",
[junit4] 2>       "createdByNodeName":"127.0.0.1:41484_kiilu",
[junit4] 2>       "createdByCoreNodeName":"core_node2"}
[junit4] 2>
[junit4] 2> 621947 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Starting recovery process.
recoveringAfterStartup=false [junit4] 2> 621948 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t3 stopping background replication from leader [junit4] 2> 623761 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/ [junit4] 2> 623761 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu START replicas=[http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/] nUpdates=100 [junit4] 2> 623762 INFO (qtp322138945-12024) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp=/kiilu path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 623763 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu Received 1 versions from http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/ fingerprint:null [junit4] 2> 623763 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu No additional versions requested. ourHighThreshold=1622480344758353920 otherLowThreshold=1622480344758353920 ourHighest=1622480344758353920 otherHighest=1622480344758353920 [junit4] 2> 623763 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:46742/kiilu DONE. 
sync succeeded [junit4] 2> 623763 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me [junit4] 2> 623763 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/ to sync [junit4] 2> 623764 INFO (qtp322138945-12025) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:36535/kiilu START replicas=[http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/] nUpdates=100 [junit4] 2> 623765 INFO (qtp2146875188-12050) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 623765 INFO (qtp2146875188-12050) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp=/kiilu path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 623776 INFO (qtp322138945-12025) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 623776 INFO (qtp322138945-12025) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. 
No need to do a PeerSync [junit4] 2> 623776 INFO (qtp322138945-12025) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp=/kiilu path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=11 [junit4] 2> 623776 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/: sync completed with http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/ [junit4] 2> 623777 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t5 stopping background replication from leader [junit4] 2> 623777 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext Replaying tlog before become new leader [junit4] 2> 623777 WARN (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Starting log replay tlog{file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.ForceLeaderTest_D7C3616BE0F39CD1-001/shard-3-001/cores/forceleader_test_collection_shard1_replica_t5/data/tlog/tlog.0000000000000000000 refcount=2} active=false starting pos=0 inSortedOrder=true [junit4] 2> 623785 INFO (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 start commit{flags=2,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 623785 INFO (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@350700ef commitCommandVersion:0 [junit4] 2> 623799 INFO (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.s.SolrIndexSearcher Opening [Searcher@1566126c[forceleader_test_collection_shard1_replica_t5] main] [junit4] 2> 623801 INFO (searcherExecutor-502-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) 
[n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SolrCore [forceleader_test_collection_shard1_replica_t5] Registered new searcher Searcher@1566126c[forceleader_test_collection_shard1_replica_t5] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):C1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1547317850997}]:[attributes={Lucene50StoredFieldsFormat.mode=BEST_SPEED}])))} [junit4] 2> 623802 INFO (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 623802 INFO (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_test_collection_shard1_replica_t5] {add=[1 (1622480344758353920)]} 0 25 [junit4] 2> 623802 WARN (recoveryExecutor-504-thread-1-processing-n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Log replay finished. recoveryInfo=RecoveryInfo{adds=1 deletes=0 deleteByQuery=0 errors=0 positionOfStart=0} [junit4] 2> 623802 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/forceleader_test_collection/leaders/shard1/leader after winning as /collections/forceleader_test_collection/leader_elect/shard1/election/74001078468345874-core_node6-n_0000000006 [junit4] 2> 623806 INFO (zkCallback-476-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 623806 INFO (zkCallback-476-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 623806 INFO (zkCallback-476-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 623806 INFO (zkCallback-498-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 623808 INFO (SocketProxy-Acceptor-46742) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=46930,localport=46742], receiveBufferSize:531000 [junit4] 2> 623809 INFO (SocketProxy-Acceptor-46742) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=43194,localport=50356], receiveBufferSize=530904 [junit4] 2> 623810 INFO (qtp2146875188-12053) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp=/kiilu path=/admin/ping params={wt=javabin&version=2} hits=1 status=0 QTime=1 [junit4] 2> 623811 INFO (qtp2146875188-12053) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp=/kiilu path=/admin/ping params={wt=javabin&version=2} status=0 QTime=1 [junit4] 2> 623811 INFO (zkCallback-498-thread-3) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/ shard1 [junit4] 2> 623811 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[forceleader_test_collection_shard1_replica_t3] [junit4] 2> 623812 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.UpdateLog Starting to buffer updates. 
FSUpdateLog{state=ACTIVE, tlog=tlog{file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.ForceLeaderTest_D7C3616BE0F39CD1-001/shard-1-001/cores/forceleader_test_collection_shard1_replica_t3/data/tlog/tlog.0000000000000000000 refcount=1}} [junit4] 2> 623812 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Publishing state of core [forceleader_test_collection_shard1_replica_t3] as recovering, leader is [http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/] and I am [http://127.0.0.1:36535/kiilu/forceleader_test_collection_shard1_replica_t3/] [junit4] 2> 623815 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:46742/kiilu]; [WaitForState: action=PREPRECOVERY&core=forceleader_test_collection_shard1_replica_t5&nodeName=127.0.0.1:36535_kiilu&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true] [junit4] 2> 623817 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true [junit4] 2> 623817 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t5, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:36535_kiilu, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"state":"down","base_url":"http://127.0.0.1:36535/kiilu","core":"forceleader_test_collection_shard1_replica_t3","node_name":"127.0.0.1:36535_kiilu","force_set_state":"false","type":"TLOG"} [junit4] 2> 623918 INFO (zkCallback-476-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 623918 INFO (zkCallback-476-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 623918 INFO (zkCallback-476-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 623918 INFO (zkCallback-498-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 623919 INFO (watches-500-thread-3) [ ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t5, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:36535_kiilu, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"core":"forceleader_test_collection_shard1_replica_t3","base_url":"http://127.0.0.1:36535/kiilu","node_name":"127.0.0.1:36535_kiilu","state":"recovering","type":"TLOG"} [junit4] 2> 623919 INFO (qtp2146875188-12051) [n:127.0.0.1:46742_kiilu x:forceleader_test_collection_shard1_replica_t5] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:36535_kiilu&onlyIfLeaderActive=true&core=forceleader_test_collection_shard1_replica_t5&coreNodeName=core_node4&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=102 [junit4] 2> 624420 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Starting Replication Recovery. [junit4] 2> 624420 INFO (recoveryExecutor-473-thread-1-processing-n:127.0.0.1:36535_kiilu x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Attempting to replicate from [http://127.0.0.1:46742/kiilu/forceleader_test_collection_shard1_replica_t5/]. [junit4] 2> 624421 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1622480363584487424,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 624421 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. 
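[editor's note] Everything from the "Force leader invoked" entry above through this recovery sequence — LIR cleanup, PeerSync between the two live TLOG replicas, tlog replay, leader election, PREPRECOVERY, and replication recovery — was kicked off by a single FORCELEADER collection-admin request. A sketch of issuing that request with SolrJ, mirroring the parameters logged above (action=FORCELEADER&collection=forceleader_test_collection&shard=shard1); a hypothetical standalone call, not the test's code:

    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.util.NamedList;

    public class ForceLeaderDemo {
      public static void main(String[] args) throws Exception {
        // Base URL taken from the log's CollectionsHandler entry; any live node will do.
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://127.0.0.1:38731/kiilu").build()) {
          ModifiableSolrParams params = new ModifiableSolrParams();
          params.set("action", "FORCELEADER");                      // same params as logged above
          params.set("collection", "forceleader_test_collection");
          params.set("shard", "shard1");
          NamedList<Object> rsp = client.request(
              new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params));
          System.out.println(rsp);
        }
      }
    }

SolrJ also ships a typed helper for this action (CollectionAdminRequest.forceLeaderElection(collection, shard) in 7.x, if memory serves); the raw-params form is shown because it maps one-to-one onto the logged request.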
[junit4] 2> 624435 INFO (qtp2146875188-12052) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 end_commit_flush
[junit4] 2> 624436 INFO (SocketProxy-Acceptor-36535) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=37908,localport=36535], receiveBufferSize:531000
[junit4] 2> 624439 INFO (SocketProxy-Acceptor-36535) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=33358,localport=59346], receiveBufferSize=530904
[junit4] 2> 624439 INFO (qtp322138945-12026) [n:127.0.0.1:36535_kiilu c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.TestInjection Start waiting for replica in sync with leader
[junit4] 2> 624443 INFO (SocketProxy-Acceptor-46742) [    ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=47414,localport=46742], receiveBufferSize:531000
[junit4] 2> 624445 INFO (SocketProxy-Acceptor-46742) [    ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=43194,localport=50842], receiveBufferSize=530904
[junit4] 2> 624447 WARN (qtp2146875188-12054) [n:127.0.0.1:46742_kiilu c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.h.ReplicationHandler Exception while invoking 'details' method for replication on master
[junit4] 2> org.apache.solr.client.solrj.SolrServerException: Server refused connection at: http://127.0.0.1:41484/kiilu/forceleader_test_collection_shard1_replica_t1
[junit4] 2>     at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:650) ~[java/:?]
[junit4] 2>     at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) ~[java/:?]
[junit4] 2>     at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) ~[java/:?]
[junit4] 2>     at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260) ~[java/:?]
[junit4] 2>     at org.apache.solr.handler.IndexFetcher.getDetails(IndexFetcher.java:1857) ~[java/:?]
[junit4] 2>     at org.apache.solr.handler.ReplicationHandler.getReplicationDetails(ReplicationHandler.java:940) [java/:?]
[junit4] 2>     at org.apache.solr.handler.ReplicationHandler.handleRequestBody(ReplicationHandler.java:304) [java/:?]
[junit4] 2>     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) [java/:?]
[junit4] 2>     at org.apache.solr.core.SolrCore.execute(SolrCore.java:2551) [java/:?]
[junit4] 2>     at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710) [java/:?]
[junit4] 2>     at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) [java/:?]
[junit4] 2>     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395) [java/:?]
[junit4] 2>     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341) [java/:?]
[junit4] 2>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) [jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2>     at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:158) [java/:?]
[junit4] 2>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) [jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) [jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2>     at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588) [jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2>     at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) [jetty-server-9.4.14
[...truncated too long message...]
2> 1067925 INFO (zkCallback-808-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (2)
[junit4] 2> 1067925 INFO (zkCallback-833-thread-4) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (2)
[junit4] 2> 1067925 INFO (zkCallback-801-thread-4) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (2)
[junit4] 2> 1067926 WARN (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [    ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x106e79d42e60018, likely client has closed socket
[junit4] 2> 1067985 INFO (closeThreadPool-873-thread-7) [    ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.node, tag=null
[junit4] 2> 1067986 INFO (closeThreadPool-873-thread-7) [    ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@6715019d: rootName = null, domain = solr.node, service url = null, agent id = null] for registry solr.node / com.codahale.metrics.MetricRegistry@3ddff7c8
[junit4] 2> 1067987 INFO (zkCallback-833-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (1)
[junit4] 2> 1067987 INFO (zkCallback-833-thread-5) [    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:39993_kiilu
[junit4] 2> 1067987 INFO (zkCallback-840-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper...
(2) -> (1) [junit4] 2> 1067987 INFO (closeThreadPool-873-thread-7) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jvm, tag=null [junit4] 2> 1067987 INFO (closeThreadPool-873-thread-7) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@55fc2ab6: rootName = null, domain = solr.jvm, service url = null, agent id = null] for registry solr.jvm / com.codahale.metrics.MetricRegistry@6720e0d7 [junit4] 2> 1067987 INFO (closeThreadPool-873-thread-7) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jetty, tag=null [junit4] 2> 1067987 INFO (closeThreadPool-873-thread-7) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@4f155062: rootName = null, domain = solr.jetty, service url = null, agent id = null] for registry solr.jetty / com.codahale.metrics.MetricRegistry@48abe429 [junit4] 2> 1067988 INFO (closeThreadPool-873-thread-7) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.cluster, tag=null [junit4] 2> 1067984 INFO (closeThreadPool-873-thread-1) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@6c3e7307{HTTP/1.1,[http/1.1]}{127.0.0.1:37935} [junit4] 2> 1067989 INFO (closeThreadPool-873-thread-1) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@72b85e4{/kiilu,null,UNAVAILABLE} [junit4] 2> 1067990 INFO (closeThreadPool-873-thread-1) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 1068013 WARN (closeThreadPool-873-thread-1) [ ] o.a.s.c.s.c.SocketProxy Closing 3 connections to: http://127.0.0.1:41717/kiilu, target: http://127.0.0.1:37935/kiilu [junit4] 2> 1068037 INFO (closeThreadPool-873-thread-6) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@13c0c7b6{HTTP/1.1,[http/1.1]}{127.0.0.1:0} [junit4] 2> 1068038 INFO (closeThreadPool-873-thread-6) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@1f4c73e5{/kiilu,null,UNAVAILABLE} [junit4] 2> 1068038 INFO (closeThreadPool-873-thread-6) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 1068039 WARN (closeThreadPool-873-thread-6) [ ] o.a.s.c.s.c.SocketProxy Closing 6 connections to: http://127.0.0.1:41983/kiilu, target: http://127.0.0.1:44015/kiilu [junit4] 2> 1068049 INFO (closeThreadPool-873-thread-7) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@53c3ce9{HTTP/1.1,[http/1.1]}{127.0.0.1:0} [junit4] 2> 1068049 INFO (closeThreadPool-873-thread-7) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@2f861a7b{/kiilu,null,UNAVAILABLE} [junit4] 2> 1068049 INFO (closeThreadPool-873-thread-7) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 1068050 WARN (closeThreadPool-873-thread-7) [ ] o.a.s.c.s.c.SocketProxy Closing 12 connections to: http://127.0.0.1:39993/kiilu, target: http://127.0.0.1:38833/kiilu [junit4] 2> 1068050 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[D7C3616BE0F39CD1]) [ ] o.a.s.c.ZkTestServer Shutting down ZkTestServer. 
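[editor's note] The SocketProxy entries here and throughout the log come from the test framework: AbstractFullDistribZkTestBase can place a TCP proxy in front of each Jetty node so the test can sever and restore connections to simulate partitions, which is how the leaderless state in this run was produced. A rough sketch of how such a proxy is typically driven, assuming the org.apache.solr.client.solrj.cloud.SocketProxy helper seen in this log; treat the exact constructor and method names as approximate and verify against the solr-test-framework source for your version:

    import java.net.URI;
    import org.apache.solr.client.solrj.cloud.SocketProxy;  // test-framework helper seen in this log

    public class PartitionSketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical usage: wrap a node's URL (taken from this log) in a proxy,
        // then cut and restore traffic to it.
        SocketProxy proxy = new SocketProxy(new URI("http://127.0.0.1:41484/kiilu"));
        proxy.close();   // drop existing connections and refuse new ones: the node appears down
        // ... exercise the cluster while the node is unreachable ...
        proxy.reopen();  // accept traffic again so the replica can recover and rejoin
      }
    }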
[junit4] 2> 1068076 WARN (ZkTestServer Run Thread) [    ] o.a.s.c.ZkTestServer Watch limit violations:
[junit4] 2> Maximum concurrent create/delete watches above limit:
[junit4] 2>
[junit4] 2> 	44	/solr/collections/forceleader_lower_terms_collection/terms/shard1
[junit4] 2> 	13	/solr/aliases.json
[junit4] 2> 	9	/solr/collections/collection1/terms/shard2
[junit4] 2> 	5	/solr/security.json
[junit4] 2> 	5	/solr/configs/conf1
[junit4] 2> 	3	/solr/collections/forceleader_lower_terms_collection/state.json
[junit4] 2> 	3	/solr/collections/collection1/terms/shard1
[junit4] 2> 	2	/solr/collections/control_collection/terms/shard1
[junit4] 2>
[junit4] 2> Maximum concurrent data watches above limit:
[junit4] 2>
[junit4] 2> 	64	/solr/collections/collection1/state.json
[junit4] 2> 	46	/solr/collections/forceleader_lower_terms_collection/state.json
[junit4] 2> 	13	/solr/clusterprops.json
[junit4] 2> 	13	/solr/clusterstate.json
[junit4] 2> 	9	/solr/collections/control_collection/state.json
[junit4] 2> 	2	/solr/overseer_elect/election/74001106516443146-127.0.0.1:41983_kiilu-n_0000000001
[junit4] 2> 	2	/solr/collections/forceleader_lower_terms_collection/leader_elect/shard1/election/74001106516443154-core_node5-n_0000000001
[junit4] 2>
[junit4] 2> Maximum concurrent children watches above limit:
[junit4] 2>
[junit4] 2> 	13	/solr/collections
[junit4] 2> 	12	/solr/live_nodes
[junit4] 2>
[junit4] 2> 1068079 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:44016
[junit4] 2> 1068079 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[D7C3616BE0F39CD1]) [    ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 44016
[junit4] OK 49.3s J1 | ForceLeaderTest.testReplicasInLowerTerms
[junit4] 2> NOTE: leaving temporary files on disk at: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.ForceLeaderTest_D7C3616BE0F39CD1-002
[junit4] 2> Jan 12, 2019 6:38:15 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
[junit4] 2> WARNING: Will linger awaiting termination of 1 leaked thread(s).
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {multiDefault=PostingsFormat(name=Direct), a_t=FSTOrd50, id=PostingsFormat(name=MockRandom), text=Lucene50(blocksize=128)}, docValues:{range_facet_l_dv=DocValuesFormat(name=Lucene70), _version_=DocValuesFormat(name=Asserting), multiDefault=DocValuesFormat(name=Direct), a_t=DocValuesFormat(name=Asserting), intDefault=DocValuesFormat(name=Asserting), id_i1=DocValuesFormat(name=Direct), range_facet_i_dv=DocValuesFormat(name=Asserting), id=DocValuesFormat(name=Lucene70), text=DocValuesFormat(name=Memory), intDvoDefault=DocValuesFormat(name=Memory), timestamp=DocValuesFormat(name=Asserting), range_facet_l=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=913, maxMBSortInHeap=6.3750138751725745, sim=RandomSimilarity(queryNorm=false): {}, locale=sr-Latn-ME, timezone=Navajo
[junit4] 2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=1,free=254026768,total=502267904
[junit4] 2> NOTE: All tests run in this JVM: [ForceLeaderTest, StressHdfsTest, ForceLeaderTest, ForceLeaderTest]
[junit4] Completed [10/10 (4!)] on J1 in 213.48s, 3 tests, 1 error, 1 skipped <<< FAILURES!
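[editor's note] The ThreadLeakControl warning above is the randomizedtesting framework waiting out a straggler thread before deciding whether to report a leak. Test classes can widen that grace period with the framework's annotation; a small sketch, with the linger value chosen arbitrarily for illustration:

    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakLingering;
    import org.apache.solr.SolrTestCaseJ4;

    // Allow background executors up to 10s to wind down before the framework reports a leak.
    @ThreadLeakLingering(linger = 10_000)
    public class MyCloudTest extends SolrTestCaseJ4 {
      // ... tests ...
    }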
[junit4]
[junit4]
[junit4] Tests with failures [seed: D7C3616BE0F39CD1]:
[junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
[junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
[junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
[junit4]   - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader
[junit4]
[junit4]
[junit4] JVM J0: 0.84 .. 920.01 = 919.17s
[junit4] JVM J1: 0.78 .. 1160.22 = 1159.45s
[junit4] JVM J2: 0.77 .. 1021.49 = 1020.72s
[junit4] Execution time total: 19 minutes 20 seconds
[junit4] Tests summary: 10 suites, 20 tests, 4 errors, 5 ignored

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 10 suites, 20 tests, 4 errors, 5 ignored [seed: D7C3616BE0F39CD1]

Total time: 19 minutes 22 seconds
[repro] Setting last failure code to 256
[repro] Failures:
[repro]   2/5 failed: org.apache.solr.cloud.ForceLeaderTest
[repro]   2/5 failed: org.apache.solr.cloud.hdfs.StressHdfsTest
[repro] git checkout dcc9ffe186eb1873fcebc56382e3be34245b0ecc
Previous HEAD position was 734f20b... Ref Guide: fix double footer in page layout for index.html
HEAD is now at dcc9ffe... SOLR-13051 improve TRA update processor test - remove some timeouts - better async mechanism linked to SolrCore lifecycle - add some additional tests to be a bit more thorough
[repro] Exiting with code 256
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)