Build: https://builds.apache.org/job/Lucene-Solr-repro/2649/
[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/427/consoleText
[repro] Revision: 91a07ce43555607d00814b08d34323efc0dadc84
[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line: ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=6D27CF39EEEFD016 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-QA -Dtests.timezone=Africa/Dar_es_Salaam -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] Repro line: ant test -Dtestcase=HdfsUnloadDistributedZkTest -Dtests.method=test -Dtests.seed=6D27CF39EEEFD016 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=de-AT -Dtests.timezone=America/Metlakatla -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: df119573dbc5781b2eed357821856b44bd7af5fd
[repro] git fetch
[repro] git checkout 91a07ce43555607d00814b08d34323efc0dadc84
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ForceLeaderTest
[repro]       HdfsUnloadDistributedZkTest
[repro] ant compile-test
[...truncated 3605 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.ForceLeaderTest|*.HdfsUnloadDistributedZkTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=6D27CF39EEEFD016 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-QA -Dtests.timezone=Africa/Dar_es_Salaam -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[...truncated 17137 lines...]
[junit4] 2> 66209 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 66867 ERROR (indexFetcher-73-thread-1) [ ] o.a.s.h.ReplicationHandler Index fetch failed :org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: forceleader_test_collection slice: shard1 saw state=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/15)={
[junit4] 2>   "pullReplicas":"0",
[junit4] 2>   "replicationFactor":"0",
[junit4] 2>   "shards":{"shard1":{
[junit4] 2>       "range":"80000000-7fffffff",
[junit4] 2>       "state":"active",
[junit4] 2>       "replicas":{
[junit4] 2>         "core_node2":{
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t1",
[junit4] 2>           "base_url":"http://127.0.0.1:37825",
[junit4] 2>           "node_name":"127.0.0.1:37825_",
[junit4] 2>           "state":"down",
[junit4] 2>           "type":"TLOG"},
[junit4] 2>         "core_node4":{
[junit4] 2>           "state":"down",
[junit4] 2>           "base_url":"http://127.0.0.1:39665",
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t3",
[junit4] 2>           "node_name":"127.0.0.1:39665_",
[junit4] 2>           "force_set_state":"false",
[junit4] 2>           "type":"TLOG"},
[junit4] 2>         "core_node6":{
[junit4] 2>           "state":"down",
[junit4] 2>           "base_url":"http://127.0.0.1:43074",
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t5",
[junit4] 2>           "node_name":"127.0.0.1:43074_",
[junit4] 2>           "force_set_state":"false",
[junit4] 2>           "type":"TLOG"}}}},
[junit4] 2>   "router":{"name":"compositeId"},
[junit4] 2>   "maxShardsPerNode":"1",
[junit4] 2>   "autoAddReplicas":"false",
[junit4] 2>   "nrtReplicas":"0",
[junit4] 2>   "tlogReplicas":"3"} with live_nodes=[127.0.0.1:46732_, 127.0.0.1:43074_, 127.0.0.1:39665_]
[junit4] 2>   at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:902)
[junit4] 2>   at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:879)
[junit4] 2>   at org.apache.solr.handler.IndexFetcher.getLeaderReplica(IndexFetcher.java:688)
[junit4] 2>   at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:381)
[junit4] 2>   at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346)
[junit4] 2>   at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425)
[junit4] 2>   at org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
[junit4] 2>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[junit4] 2>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[junit4] 2>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2>   at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 66868 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
[junit4] 2> 66868 INFO (updateExecutor-67-thread-2-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 66868 INFO (updateExecutor-67-thread-2-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 6189ms
[junit4] 2> 67210 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 67210 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 67210 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 67210 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 67211 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 67211 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 67213 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 67213 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 67213 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 67221 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 67221 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 67221 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 67221 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 67222 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 67222 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 67223 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 67223 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 67223 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 67279 ERROR (indexFetcher-80-thread-1) [ ] o.a.s.h.ReplicationHandler Index fetch failed :org.apache.solr.common.SolrException: No registered leader was found after waiting for 4000ms , collection: forceleader_test_collection slice: shard1 saw state=DocCollection(forceleader_test_collection//collections/forceleader_test_collection/state.json/15)={
[junit4] 2>   "pullReplicas":"0",
[junit4] 2>   "replicationFactor":"0",
[junit4] 2>   "shards":{"shard1":{
[junit4] 2>       "range":"80000000-7fffffff",
[junit4] 2>       "state":"active",
[junit4] 2>       "replicas":{
[junit4] 2>         "core_node2":{
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t1",
[junit4] 2>           "base_url":"http://127.0.0.1:37825",
[junit4] 2>           "node_name":"127.0.0.1:37825_",
[junit4] 2>           "state":"down",
[junit4] 2>           "type":"TLOG"},
[junit4] 2>         "core_node4":{
[junit4] 2>           "state":"down",
[junit4] 2>           "base_url":"http://127.0.0.1:39665",
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t3",
[junit4] 2>           "node_name":"127.0.0.1:39665_",
[junit4] 2>           "force_set_state":"false",
[junit4] 2>           "type":"TLOG"},
[junit4] 2>         "core_node6":{
[junit4] 2>           "state":"down",
[junit4] 2>           "base_url":"http://127.0.0.1:43074",
[junit4] 2>           "core":"forceleader_test_collection_shard1_replica_t5",
[junit4] 2>           "node_name":"127.0.0.1:43074_",
[junit4] 2>           "force_set_state":"false",
[junit4] 2>           "type":"TLOG"}}}},
[junit4] 2>   "router":{"name":"compositeId"},
[junit4] 2>   "maxShardsPerNode":"1",
[junit4] 2>   "autoAddReplicas":"false",
[junit4] 2>   "nrtReplicas":"0",
[junit4] 2>   "tlogReplicas":"3"} with live_nodes=[127.0.0.1:46732_, 127.0.0.1:43074_, 127.0.0.1:39665_]
[junit4] 2>   at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:902)
[junit4] 2>   at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:879)
[junit4] 2>   at org.apache.solr.handler.IndexFetcher.getLeaderReplica(IndexFetcher.java:688)
[junit4] 2>   at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:381)
[junit4] 2>   at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:346)
[junit4] 2>   at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:425)
[junit4] 2>   at org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1171)
[junit4] 2>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[junit4] 2>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[junit4] 2>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2>   at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 67280 INFO (recoveryExecutor-9-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false]
[junit4] 2> 68223 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510
[junit4] 2> 68224 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 68224 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 68224 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510
[junit4] 2> 68224 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 68224 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 68227 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510
[junit4] 2> 68227 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 68227 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 68227 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510
[junit4] 2> 68227 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 68227 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 68228 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510
[junit4] 2> 68228 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 68228 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server.
[junit4] 2> 68228 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510
[junit4] 2> 68228 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems
[junit4] 2> 68228 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ...
[junit4] 2> 68431 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/
[junit4] 2> 68432 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 START replicas=[http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/] nUpdates=100
[junit4] 2> 68438 INFO (qtp483656621-139) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=3
[junit4] 2> 68439 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 Received 1 versions from http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/ fingerprint:null
[junit4] 2> 68440 INFO (qtp483656621-141) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0
[junit4] 2> 68441 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 No additional versions requested. ourHighThreshold=1622204461853179904 otherLowThreshold=1622204461853179904 ourHighest=1622204461853179904 otherHighest=1622204461853179904
[junit4] 2> 68441 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 DONE. sync succeeded
[junit4] 2> 68441 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 68441 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/ to sync
[junit4] 2> 68442 INFO (qtp483656621-142) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:39665 START replicas=[http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/] nUpdates=100
[junit4] 2> 68443 INFO (qtp2033639374-30) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
[junit4] 2> 68443 INFO (qtp2033639374-30) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0
[junit4] 2> 68444 INFO (qtp483656621-142) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0}
[junit4] 2> 68444 INFO (qtp483656621-142) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync
[junit4] 2> 68444 INFO (qtp483656621-142) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=2
[junit4] 2> 68445 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/: sync completed with http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/
[junit4] 2> 68445 WARN (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t5 as down and I haven't recovered yet, so I shouldn't be the leader.
[junit4] 2> 68446 ERROR (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
[junit4] 2>   at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
[junit4] 2>   at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
[junit4] 2>   at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
[junit4] 2>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[junit4] 2>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2>   at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 68446 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
[junit4] 2> 68447 INFO (zkCallback-12-thread-1) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration.
[junit4] 2> 68447 WARN (updateExecutor-8-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6]
[junit4] 2> 68447 INFO (updateExecutor-8-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 68447 INFO (updateExecutor-8-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 6108ms
[junit4] 2> 68449 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 68449 WARN (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4]
[junit4] 2> 68450 INFO (zkCallback-71-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 68450 INFO (zkCallback-12-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 68450 INFO (zkCallback-71-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 68450 INFO (zkCallback-71-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 68450 INFO (zkCallback-12-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 68450 INFO (zkCallback-12-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3])
[junit4] 2> 68680 INFO (qtp483656621-140) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 68682 INFO (qtp483656621-143) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 68684 INFO (qtp483656621-139) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.forceleader_test_collection.shard1.replica_t3:INDEX.sizeInBytes&key=solr.core.collection1.shard2.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 68686 INFO (qtp483656621-141) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
[junit4] 2> 68691 INFO (qtp483656621-142) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68697 INFO (qtp483656621-140) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
[junit4] 2> 68699 INFO (qtp483656621-143) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68701 INFO (qtp483656621-139) [n:127.0.0.1:39665_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68709 INFO (SocketProxy-Acceptor-43074) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=47194,localport=43074], receiveBufferSize:531000
[junit4] 2> 68710 INFO (SocketProxy-Acceptor-43074) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=45969,localport=52426], receiveBufferSize=530904
[junit4] 2> 68725 INFO (qtp2033639374-31) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=14
[junit4] 2> 68731 INFO (qtp2033639374-32) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=4
[junit4] 2> 68732 INFO (qtp2033639374-33) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes&key=solr.core.forceleader_test_collection.shard1.replica_t5:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 68734 INFO (qtp2033639374-30) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1
[junit4] 2> 68736 INFO
(qtp2033639374-29) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 68738 INFO (qtp2033639374-31) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 68740 INFO (qtp2033639374-32) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0 [junit4] 2> 68753 INFO (qtp2033639374-33) [n:127.0.0.1:43074_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=1 [junit4] 2> 68759 INFO (qtp1581371949-98) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 68760 INFO (qtp1581371949-99) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 68762 INFO (qtp1581371949-96) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard2.replica_n5:INDEX.sizeInBytes} status=0 QTime=0 [junit4] 2> 68764 INFO (qtp1581371949-100) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68767 INFO (qtp1581371949-101) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68769 INFO (qtp1581371949-98) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68773 INFO (qtp1581371949-99) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68775 INFO (qtp1581371949-96) [n:127.0.0.1:46732_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 68789 INFO (AutoscalingActionExecutor-9-thread-1) [ ] o.a.s.c.a.ExecutePlanAction No operations to execute for event: {
[junit4] 2>   "id":"686bfdbdd85ccaT94sp0nm9v9njxjy321dohafou",
[junit4] 2>   "source":".auto_add_replicas",
[junit4] 2>   "eventTime":29392135133879498,
[junit4] 2>   "eventType":"NODELOST",
[junit4] 2>   "properties":{
[junit4] 2>     "eventTimes":[29392135133879498],
[junit4] 2>     "preferredOperation":"movereplica",
[junit4] 2>     "_enqueue_time_":29392145142731069,
[junit4] 2>     "nodeNames":["127.0.0.1:37825_"]}}
[junit4] 2> 69229 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not
find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 69230 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 69230 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 69230 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 69230 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 69230 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 69230 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 69230 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 69231 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 69231 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 69231 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 69231 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 69231 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 69231 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 69231 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 69232 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 69232 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 69232 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ... [junit4] 2> 70233 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 70233 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 70233 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 70233 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 70234 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 70234 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 70235 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 70235 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 70235 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 70236 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 70236 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 70236 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 70236 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 70236 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 70236 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 70237 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 70237 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 70237 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.AbstractFullDistribZkTestBase ERROR: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. ... Sleeping for 1 seconds before re-try ... [junit4] 2> 70949 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/ [junit4] 2> 70949 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:39665 START replicas=[http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/] nUpdates=100 [junit4] 2> 70951 INFO (qtp2033639374-30) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 70952 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: 
core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:39665 Received 1 versions from http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/ fingerprint:null [junit4] 2> 70957 INFO (qtp2033639374-29) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 70957 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:39665 No additional versions requested. ourHighThreshold=1622204461853179904 otherLowThreshold=1622204461853179904 ourHighest=1622204461853179904 otherHighest=1622204461853179904 [junit4] 2> 70957 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:39665 DONE. 
sync succeeded [junit4] 2> 70957 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me [junit4] 2> 70957 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/: try and ask http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/ to sync [junit4] 2> 70960 INFO (qtp2033639374-31) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 START replicas=[http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/] nUpdates=100 [junit4] 2> 70961 INFO (qtp483656621-141) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 70961 INFO (qtp483656621-141) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 70962 INFO (qtp2033639374-31) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, 
numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 70962 INFO (qtp2033639374-31) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync We are already in sync. No need to do a PeerSync [junit4] 2> 70962 INFO (qtp2033639374-31) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/&wt=javabin&version=2} status=0 QTime=2 [junit4] 2> 70963 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.SyncStrategy http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/: sync completed with http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/ [junit4] 2> 70963 WARN (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext The previous leader marked me forceleader_test_collection_shard1_replica_t3 as down and I haven't recovered yet, so I shouldn't be the leader. 
[junit4] 2> 70964 ERROR (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There was a problem trying to register as the leader:org.apache.solr.common.SolrException: Leader Initiated Recovery prevented leadership
[junit4] 2>   at org.apache.solr.cloud.ShardLeaderElectionContext.checkLIR(ElectionContext.java:631)
[junit4] 2>   at org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:460)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:136)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:57)
[junit4] 2>   at org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:349)
[junit4] 2>   at org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$1(SolrZkClient.java:287)
[junit4] 2>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[junit4] 2>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[junit4] 2>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[junit4] 2>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[junit4] 2>   at java.lang.Thread.run(Thread.java:748)
[junit4] 2>
[junit4] 2> 70964 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContext There may be a better leader candidate than us - going back into recovery
[junit4] 2> 70966 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6
x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync [junit4] 2> 70966 WARN (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t5] coreNodeName=[core_node6] [junit4] 2> 70967 INFO (zkCallback-71-thread-2) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ShardLeaderElectionContextBase No version found for ephemeral leader parent node, won't remove previous leader registration. [junit4] 2> 70967 WARN (updateExecutor-67-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Stopping recovery for core=[forceleader_test_collection_shard1_replica_t3] coreNodeName=[core_node4] [junit4] 2> 70967 INFO (zkCallback-12-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 70967 INFO (zkCallback-71-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 70967 INFO (zkCallback-12-thread-5) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 70967 INFO (zkCallback-71-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 70967 INFO (zkCallback-12-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 70967 INFO (zkCallback-71-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 71238 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=0 commError=false errorCode=510 [junit4] 2> 71238 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 71238 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 71239 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=1 commError=false errorCode=510 [junit4] 2> 71239 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 71239 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 71239 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=2 commError=false errorCode=510 [junit4] 2> 71239 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 71239 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 71240 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=3 commError=false errorCode=510 [junit4] 2> 71240 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 71240 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. [junit4] 2> 71240 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=4 commError=false errorCode=510 [junit4] 2> 71240 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 71240 WARN (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Re-trying request to collection(s) [forceleader_test_collection] after stale state error from server. 
[junit4] 2> 71241 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient Request to collection [forceleader_test_collection] failed due to (510) org.apache.solr.common.SolrException: Could not find a healthy node to handle the request., retry=5 commError=false errorCode=510 [junit4] 2> 71241 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.CloudSolrClient request was not communication error it seems [junit4] 2> 71241 ERROR (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.AbstractFullDistribZkTestBase No more retries available! Add batch failed due to: org.apache.solr.common.SolrException: Could not find a healthy node to handle the request. [junit4] 2> 71241 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.ForceLeaderTest Document couldn't be sent, which is expected. [junit4] 2> 71246 INFO (zkConnectionManagerCallback-94-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 71248 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... 
(0) -> (3)
[junit4] 2> 71249 INFO (TEST-ForceLeaderTest.testReplicasInLIRNoLeader-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:37822/solr ready
[junit4] 2> 71250 INFO (SocketProxy-Acceptor-46732) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=50998,localport=46732], receiveBufferSize:531000
[junit4] 2> 71254 INFO (SocketProxy-Acceptor-46732) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=36314,localport=34148], receiveBufferSize=530904
[junit4] 2> 71255 INFO (qtp1581371949-101) [n:127.0.0.1:46732_ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :forceleader with params action=FORCELEADER&collection=forceleader_test_collection&shard=shard1&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 71255 INFO (qtp1581371949-101) [n:127.0.0.1:46732_ c:forceleader_test_collection ] o.a.s.h.a.CollectionsHandler Force leader invoked, state: znodeVersion: 9
[junit4] 2> live nodes:[127.0.0.1:39665_, 127.0.0.1:43074_, 127.0.0.1:46732_]
[junit4] 2> collections:{collection1=DocCollection(collection1//clusterstate.json/9)={
[junit4] 2>   "pullReplicas":"0",
[junit4] 2>   "replicationFactor":"1",
[junit4] 2>   "shards":{
[junit4] 2>     "shard1":{
[junit4] 2>       "range":"80000000-ffffffff",
[junit4] 2>       "state":"active",
[junit4] 2>       "replicas":{"core_node3":{
[junit4] 2>           "core":"collection1_shard1_replica_n2",
[junit4] 2>           "base_url":"http://127.0.0.1:37825",
[junit4] 2>           "node_name":"127.0.0.1:37825_",
[junit4] 2>           "state":"down",
[junit4] 2>           "type":"NRT",
[junit4] 2>           "leader":"true"}}},
[junit4] 2>     "shard2":{
[junit4] 2>       "range":"0-7fffffff",
[junit4] 2>       "state":"active",
[junit4] 2>       "replicas":{
[junit4] 2>         "core_node4":{
[junit4] 2>           "core":"collection1_shard2_replica_n1",
[junit4] 2>           "base_url":"http://127.0.0.1:39665",
[junit4] 2>           "node_name":"127.0.0.1:39665_",
[junit4] 2>           "state":"active",
[junit4] 2>           "type":"NRT",
[junit4] 2>           "leader":"true"},
[junit4] 2>         "core_node6":{
[junit4] 2>           "core":"collection1_shard2_replica_n5",
[junit4] 2>           "base_url":"http://127.0.0.1:46732",
[junit4] 2>           "node_name":"127.0.0.1:46732_",
[junit4] 2>           "state":"active",
[junit4] 2>           "type":"NRT"}}}},
[junit4] 2>   "router":{"name":"compositeId"},
[junit4] 2>   "maxShardsPerNode":"1",
[junit4] 2>   "autoAddReplicas":"false",
[junit4] 2>   "nrtReplicas":"1",
[junit4] 2>   "tlogReplicas":"0"}, control_collection=LazyCollectionRef(control_collection), forceleader_test_collection=LazyCollectionRef(forceleader_test_collection)}
[junit4] 2> 71263 INFO (qtp1581371949-101) [n:127.0.0.1:46732_ c:forceleader_test_collection ] o.a.s.h.a.CollectionsHandler Cleaning out LIR data, which was: /collections/forceleader_test_collection/leader_initiated_recovery/shard1 (2)
[junit4] 2> /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node6 (0)
[junit4] 2> DATA:
[junit4] 2>     {
[junit4] 2>       "state":"down",
[junit4] 2>       "createdByNodeName":"127.0.0.1:37825_",
[junit4] 2>       "createdByCoreNodeName":"core_node2"}
[junit4] 2> /collections/forceleader_test_collection/leader_initiated_recovery/shard1/core_node4 (0)
[junit4] 2> DATA:
[junit4] 2>     {
[junit4] 2>       "state":"down",
[junit4] 2>       "createdByNodeName":"127.0.0.1:37825_",
[junit4] 2>       "createdByCoreNodeName":"core_node2"}
[junit4] 2>
[junit4] 2> 73058 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Starting recovery process.
recoveringAfterStartup=false [junit4] 2> 73058 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t3 stopping background replication from leader [junit4] 2> 73467 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/ [junit4] 2> 73467 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 START replicas=[http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/] nUpdates=100 [junit4] 2> 73469 INFO (qtp483656621-142) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 73470 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 Received 1 versions from http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/ fingerprint:null [junit4] 2> 73472 INFO (qtp483656621-140) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] 
o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&checkCanHandleVersionRanges=false&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 73476 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 No additional versions requested. ourHighThreshold=1622204461853179904 otherLowThreshold=1622204461853179904 ourHighest=1622204461853179904 otherHighest=1622204461853179904 [junit4] 2> 73476 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t5 url=http://127.0.0.1:43074 DONE. sync succeeded [junit4] 2> 73476 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me [junit4] 2> 73476 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/: try and ask http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/ to sync [junit4] 2> 73477 INFO (qtp483656621-143) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync PeerSync: core=forceleader_test_collection_shard1_replica_t3 url=http://127.0.0.1:39665 START replicas=[http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/] nUpdates=100 [junit4] 2> 73479 INFO (qtp2033639374-32) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 
x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 73479 INFO (qtp2033639374-32) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/get params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2} status=0 QTime=0 [junit4] 2> 73480 INFO (qtp483656621-143) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.IndexFingerprint IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, maxDoc=0} [junit4] 2> 73480 INFO (qtp483656621-143) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.PeerSync We are already in sync. 
No need to do a PeerSync [junit4] 2> 73480 INFO (qtp483656621-143) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t3] webapp= path=/get params={distrib=false&qt=/get&getVersions=100&sync=http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/&wt=javabin&version=2} status=0 QTime=3 [junit4] 2> 73481 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SyncStrategy http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/: sync completed with http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/ [junit4] 2> 73481 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ZkController forceleader_test_collection_shard1_replica_t5 stopping background replication from leader [junit4] 2> 73483 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext Replaying tlog before become new leader [junit4] 2> 73491 WARN (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Starting log replay tlog{file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.ForceLeaderTest_6D27CF39EEEFD016-001/control-001/cores/forceleader_test_collection_shard1_replica_t5/data/tlog/tlog.0000000000000000000 refcount=2} active=false starting pos=0 inSortedOrder=true [junit4] 2> 73498 INFO 
(recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 start commit{flags=2,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 73498 INFO (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.SolrIndexWriter Calling setCommitData with IW:org.apache.solr.update.SolrIndexWriter@55cd943d commitCommandVersion:0 [junit4] 2> 73705 INFO (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.s.SolrIndexSearcher Opening [Searcher@7c0c89e9[forceleader_test_collection_shard1_replica_t5] main] [junit4] 2> 73708 INFO (searcherExecutor-74-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.SolrCore [forceleader_test_collection_shard1_replica_t5] Registered new searcher Searcher@7c0c89e9[forceleader_test_collection_shard1_replica_t5] main{ExitableDirectoryReader(UninvertingDirectoryReader(Uninverting(_0(7.7.0):c1:[diagnostics={os=Linux, java.vendor=Oracle Corporation, java.version=1.8.0_191, java.vm.version=25.191-b12, lucene.version=7.7.0, os.arch=amd64, 
java.runtime.version=1.8.0_191-b12, source=flush, os.version=4.4.0-112-generic, timestamp=1547054749042}])))} [junit4] 2> 73711 INFO (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 73711 INFO (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.p.LogUpdateProcessorFactory [forceleader_test_collection_shard1_replica_t5] {add=[1 (1622204461853179904)]} 0 220 [junit4] 2> 73711 WARN (recoveryExecutor-76-thread-1-processing-n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5 c:forceleader_test_collection s:shard1 r:core_node6) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.u.UpdateLog Log replay finished. 
recoveryInfo=RecoveryInfo{adds=1 deletes=0 deleteByQuery=0 errors=0 positionOfStart=0} [junit4] 2> 73713 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/forceleader_test_collection/leaders/shard1/leader after winning as /collections/forceleader_test_collection/leader_elect/shard1/election/73983834500169732-core_node6-n_0000000006 [junit4] 2> 73718 INFO (zkCallback-71-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73718 INFO (zkCallback-71-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73718 INFO (zkCallback-71-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73720 INFO (SocketProxy-Acceptor-43074) [ ] o.a.s.c.s.c.SocketProxy accepted Socket[addr=/127.0.0.1,port=47372,localport=43074], receiveBufferSize:531000 [junit4] 2> 73721 INFO (zkCallback-12-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 73721 INFO (zkCallback-12-thread-5) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73721 INFO (zkCallback-12-thread-4) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/ shard1 [junit4] 2> 73721 INFO (zkCallback-12-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73727 INFO (SocketProxy-Acceptor-43074) [ ] o.a.s.c.s.c.SocketProxy proxy connection Socket[addr=/127.0.0.1,port=45969,localport=52604], receiveBufferSize=530904 [junit4] 2> 73739 INFO (qtp2033639374-30) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/admin/ping params={wt=javabin&version=2} hits=1 status=0 QTime=10 [junit4] 2> 73739 INFO (qtp2033639374-30) [n:127.0.0.1:43074_ c:forceleader_test_collection s:shard1 r:core_node6 x:forceleader_test_collection_shard1_replica_t5] o.a.s.c.S.Request [forceleader_test_collection_shard1_replica_t5] webapp= path=/admin/ping params={wt=javabin&version=2} status=0 QTime=11 [junit4] 2> 73745 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 
x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[forceleader_test_collection_shard1_replica_t3] [junit4] 2> 73746 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=tlog{file=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.ForceLeaderTest_6D27CF39EEEFD016-001/shard-3-001/cores/forceleader_test_collection_shard1_replica_t3/data/tlog/tlog.0000000000000000000 refcount=1}} [junit4] 2> 73746 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Publishing state of core [forceleader_test_collection_shard1_replica_t3] as recovering, leader is [http://127.0.0.1:43074/forceleader_test_collection_shard1_replica_t5/] and I am [http://127.0.0.1:39665/forceleader_test_collection_shard1_replica_t3/] [junit4] 2> 73761 INFO (recoveryExecutor-68-thread-1-processing-n:127.0.0.1:39665_ x:forceleader_test_collection_shard1_replica_t3 c:forceleader_test_collection s:shard1 r:core_node4) [n:127.0.0.1:39665_ c:forceleader_test_collection s:shard1 r:core_node4 x:forceleader_test_collection_shard1_replica_t3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:43074]; [WaitForState: action=PREPRECOVERY&core=forceleader_test_collection_shard1_replica_t5&nodeName=127.0.0.1:39665_&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true] [junit4] 2> 73765 INFO 
(qtp2033639374-29) [n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true [junit4] 2> 73765 INFO (qtp2033639374-29) [n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t5, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:39665_, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"state":"down","base_url":"http://127.0.0.1:39665","core":"forceleader_test_collection_shard1_replica_t3","node_name":"127.0.0.1:39665_","force_set_state":"false","type":"TLOG"} [junit4] 2> 73850 INFO (zkCallback-71-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73850 INFO (zkCallback-71-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73850 INFO (zkCallback-71-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... 
(live nodes size: [3]) [junit4] 2> 73850 INFO (zkCallback-12-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73850 INFO (zkCallback-12-thread-5) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73850 INFO (zkCallback-12-thread-4) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/forceleader_test_collection/state.json] for collection [forceleader_test_collection] has occurred - updating... (live nodes size: [3]) [junit4] 2> 73854 INFO (watches-14-thread-2) [ ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=forceleader_test_collection, shard=shard1, thisCore=forceleader_test_collection_shard1_replica_t5, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:39665_, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"core":"forceleader_test_collection_shard1_replica_t3","base_url":"http://127.0.0.1:39665","node_name":"127.0.0.1:39665_","state":"recovering","type":"TLOG"} [junit4] 2> 73854 INFO (qtp2033639374-29) [n:127.0.0.1:43074_ x:forceleader_test_collection_shard1_replica_t5] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:39665_&onlyIfLeaderActive=true&core=forceleader_test_collection_shard1_replica_t5&coreNodeName=core_node4&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=89 [...truncated too long message...] 
> 265383 INFO (closeThreadPool-218-thread-2) [ ] o.a.s.c.Overseer Overseer > (id=73983846069305348-127.0.0.1:33249_-n_0000000000) closing [junit4] 2> 265383 INFO (closeThreadPool-228-thread-1) [ ] o.a.s.c.Overseer Overseer (id=73983846069305348-127.0.0.1:33249_-n_0000000000) closing [junit4] 2> 265417 INFO (closeThreadPool-218-thread-3) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@7a72a234{HTTP/1.1,[http/1.1]}{127.0.0.1:0} [junit4] 2> 265418 INFO (closeThreadPool-218-thread-2) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@6f10ee4{HTTP/1.1,[http/1.1]}{127.0.0.1:0} [junit4] 2> 265449 INFO (closeThreadPool-218-thread-2) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@6c15dff9{/,null,UNAVAILABLE} [junit4] 2> 265449 INFO (closeThreadPool-218-thread-3) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@53fc293d{/,null,UNAVAILABLE} [junit4] 2> 265453 INFO (closeThreadPool-218-thread-3) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 265454 INFO (closeThreadPool-218-thread-2) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 265455 WARN (closeThreadPool-218-thread-3) [ ] o.a.s.c.s.c.SocketProxy Closing 12 connections to: http://127.0.0.1:37309/, target: http://127.0.0.1:41304/ [junit4] 2> 265510 INFO (zkCallback-212-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... 
(2) -> (1) [junit4] 2> 265540 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.node, tag=null [junit4] 2> 265540 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@71d4333c: rootName = null, domain = solr.node, service url = null, agent id = null] for registry solr.node / com.codahale.metrics.MetricRegistry@5fb96f2 [junit4] 2> 265556 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jvm, tag=null [junit4] 2> 265556 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@16cad931: rootName = null, domain = solr.jvm, service url = null, agent id = null] for registry solr.jvm / com.codahale.metrics.MetricRegistry@2bf629f [junit4] 2> 265575 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.jetty, tag=null [junit4] 2> 265575 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.r.SolrJmxReporter Closing reporter [org.apache.solr.metrics.reporters.SolrJmxReporter@77b0dfa3: rootName = null, domain = solr.jetty, service url = null, agent id = null] for registry solr.jetty / com.codahale.metrics.MetricRegistry@2f2dff9f [junit4] 2> 265575 INFO (closeThreadPool-218-thread-1) [ ] o.a.s.m.SolrMetricManager Closing metric reporters for registry=solr.cluster, tag=null [junit4] 2> 265635 WARN (closeThreadPool-218-thread-2) [ ] o.a.s.c.s.c.SocketProxy Closing 6 connections to: http://127.0.0.1:33249/, target: http://127.0.0.1:44441/ [junit4] 2> 265869 INFO (zkCallback-205-thread-3) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... 
(2) -> (1) [junit4] 2> 265869 INFO (zkCallback-205-thread-5) [ ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:35283_ [junit4] 2> 265884 INFO (closeThreadPool-218-thread-5) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@67bc74cb{HTTP/1.1,[http/1.1]}{127.0.0.1:0} [junit4] 2> 265909 INFO (closeThreadPool-218-thread-5) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@7e1ea32f{/,null,UNAVAILABLE} [junit4] 2> 265909 INFO (closeThreadPool-218-thread-5) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 265927 WARN (closeThreadPool-218-thread-5) [ ] o.a.s.c.s.c.SocketProxy Closing 6 connections to: http://127.0.0.1:43423/, target: http://127.0.0.1:38920/ [junit4] 2> 265956 INFO (closeThreadPool-218-thread-1) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@146c9138{HTTP/1.1,[http/1.1]}{127.0.0.1:39871} [junit4] 2> 265957 INFO (closeThreadPool-218-thread-1) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@6a956778{/,null,UNAVAILABLE} [junit4] 2> 265987 INFO (closeThreadPool-218-thread-1) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 266020 WARN (closeThreadPool-218-thread-1) [ ] o.a.s.c.s.c.SocketProxy Closing 3 connections to: http://127.0.0.1:35283/, target: http://127.0.0.1:39871/ [junit4] 2> 266020 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.ZkTestServer Shutting down ZkTestServer. 
[junit4] 2> 266027 WARN (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer Watch limit violations: [junit4] 2> Maximum concurrent create/delete watches above limit: [junit4] 2> [junit4] 2> 44 /solr/collections/forceleader_lower_terms_collection/terms/shard1 [junit4] 2> 14 /solr/collections/collection1/terms/shard2 [junit4] 2> 13 /solr/aliases.json [junit4] 2> 5 /solr/security.json [junit4] 2> 5 /solr/configs/conf1 [junit4] 2> 3 /solr/collections/forceleader_lower_terms_collection/state.json [junit4] 2> 2 /solr/collections/collection1/terms/shard1 [junit4] 2> 2 /solr/collections/control_collection/terms/shard1 [junit4] 2> [junit4] 2> Maximum concurrent data watches above limit: [junit4] 2> [junit4] 2> 66 /solr/collections/collection1/state.json [junit4] 2> 47 /solr/collections/forceleader_lower_terms_collection/state.json [junit4] 2> 13 /solr/clusterprops.json [junit4] 2> 13 /solr/clusterstate.json [junit4] 2> 9 /solr/collections/control_collection/state.json [junit4] 2> 2 /solr/overseer_elect/election/73983846069305348-127.0.0.1:33249_-n_0000000000 [junit4] 2> 2 /solr/overseer_elect/election/73983846069305359-127.0.0.1:43423_-n_0000000002 [junit4] 2> [junit4] 2> Maximum concurrent children watches above limit: [junit4] 2> [junit4] 2> 13 /solr/collections [junit4] 2> 11 /solr/live_nodes [junit4] 2> [junit4] 2> 266044 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:35702 [junit4] 2> 266044 INFO (TEST-ForceLeaderTest.testReplicasInLowerTerms-seed#[6D27CF39EEEFD016]) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 35702 [junit4] OK 69.2s J1 | ForceLeaderTest.testReplicasInLowerTerms [junit4] 2> NOTE: leaving temporary files on disk at: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.ForceLeaderTest_6D27CF39EEEFD016-001 [junit4] 2> Jan 09, 2019 5:29:01 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks 
[junit4] 2> WARNING: Will linger awaiting termination of 1 leaked thread(s). [junit4] 2> NOTE: test params are: codec=FastCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST, chunkSize=15232, maxDocsPerChunk=10, blockSize=1), termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST, chunkSize=15232, blockSize=1)), sim=RandomSimilarity(queryNorm=false): {}, locale=ar-QA, timezone=Africa/Dar_es_Salaam [junit4] 2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=1,free=125837544,total=428343296 [junit4] 2> NOTE: All tests run in this JVM: [ForceLeaderTest] [junit4] Completed [9/10 (3!)] on J1 in 260.46s, 3 tests, 1 error, 1 skipped <<< FAILURES! [junit4] [junit4] HEARTBEAT J1 PID([email protected]): 2019-01-09T17:30:35, stalled for 62.7s at: HdfsUnloadDistributedZkTest.test [junit4] Suite: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest [junit4] OK 91.4s J1 | HdfsUnloadDistributedZkTest.test [junit4] Completed [10/10 (3!)] on J1 in 123.94s, 1 test [junit4] [junit4] [junit4] Tests with failures [seed: 6D27CF39EEEFD016]: [junit4] - org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader [junit4] - org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test [junit4] - org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test [junit4] [junit4] [junit4] JVM J0: 2.69 .. 244.68 = 241.98s [junit4] JVM J1: 3.00 .. 392.47 = 389.47s [junit4] JVM J2: 3.29 .. 
230.81 = 227.52s [junit4] Execution time total: 6 minutes 32 seconds [junit4] Tests summary: 10 suites, 20 tests, 3 errors, 15 ignored BUILD FAILED /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 10 suites, 20 tests, 3 errors, 15 ignored [seed: 6D27CF39EEEFD016] Total time: 6 minutes 41 seconds [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.ForceLeaderTest [repro] 2/5 failed: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest [repro] git checkout df119573dbc5781b2eed357821856b44bd7af5fd Previous HEAD position was 91a07ce... SOLR-12983: Create DocValues fields directly from byte[] HEAD is now at df11957... SOLR-12888: Run URP now auto-registers NestedUpdateProcessor before it. [repro] Exiting with code 256 Archiving artifacts [Fast Archiver] No artifacts from Lucene-Solr-repro Repro-Lucene-Solr-NightlyTests-master#1746 to compare, so performing full copy of artifacts Recording test results Build step 'Publish JUnit test result report' changed build result to UNSTABLE Email was triggered for: Unstable (Test Failures) Sending email for trigger: Unstable (Test Failures)
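The `[repro]` lines in logs like this one exist so the exact failing seed can be re-run locally. As a minimal, hypothetical sketch (not part of the build tooling), the `Repro line:` entries can be pulled out of a saved consoleText like so; the embedded sample line is abbreviated from the log above:

```shell
# Hypothetical helper: extract "[repro] Repro line:" entries from a saved
# Jenkins consoleText so the failing seed can be re-run locally.
# The sample log below is abbreviated from the report above.
log=$(cat <<'EOF'
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/427/consoleText
[repro] Repro line: ant test -Dtestcase=ForceLeaderTest -Dtests.method=testReplicasInLIRNoLeader -Dtests.seed=6D27CF39EEEFD016 -Dtests.nightly=true
EOF
)
# Print each repro command with the "[repro] Repro line: " prefix stripped.
repro_cmd=$(printf '%s\n' "$log" | sed -n 's/^\[repro\] Repro line: //p')
printf '%s\n' "$repro_cmd"
```

The resulting command can then be run from a checkout of the revision named in the log (here `91a07ce43555607d00814b08d34323efc0dadc84`).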
