Build: https://builds.apache.org/job/Lucene-Solr-repro/3088/
[...truncated 29 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/11/consoleText
[repro] Revision: 03a3562f782a795afb3e5dbb474aca216273cc4d
[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line: ant test -Dtestcase=StressHdfsTest -Dtests.method=test -Dtests.seed=C6AA855DD07506D9 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=es-HN -Dtests.timezone=America/Martinique -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] Repro line: ant test -Dtestcase=HdfsCollectionsAPIDistributedZkTest -Dtests.method=testCollectionsAPI -Dtests.seed=C6AA855DD07506D9 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-JO -Dtests.timezone=BET -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 07b37ff26becf214bae000dff08b9091d31327a8
[repro] git fetch
[repro] git checkout 03a3562f782a795afb3e5dbb474aca216273cc4d
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro] solr/core
[repro] HdfsCollectionsAPIDistributedZkTest
[repro] StressHdfsTest
[repro] ant compile-test
[...truncated 3576 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.HdfsCollectionsAPIDistributedZkTest|*.StressHdfsTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.seed=C6AA855DD07506D9 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-JO -Dtests.timezone=BET -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[...truncated 84647 lines...]
[junit4] 2> 713920 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@705bd61a (delete_data_dir_shard3_replica_n6) has a reference count of -1
[junit4] 2> 713920 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@6ee50c15 (delete_data_dir_shard3_replica_n8) has a reference count of -1
[junit4] 2> 713920 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@2ad28ff4 (delete_data_dir_shard1_replica_n1) has a reference count of -1
[junit4] 2> 713924 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 714290 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 714305 INFO (TEST-StressHdfsTest.test-seed#[C6AA855DD07506D9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 5 in directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001 of type NRT
[junit4] 2> 714326 WARN (closeThreadPool-258-thread-2) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
[junit4] 2> 714326 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
[junit4] 2> 714326 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
[junit4] 2> 714326 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
[junit4] 2> 714358 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 714358 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 714358 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.session node0 Scavenging every 660000ms
[junit4] 2> 714358 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@316881f8{/_kxy/k,null,AVAILABLE}
[junit4] 2> 714446 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 714483 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 714485 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-4-001/cores
[junit4] 2> 714503 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.AbstractConnector Started ServerConnector@55a7342c{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:44932}
[junit4] 2> 714503 INFO (closeThreadPool-258-thread-2) [ ] o.e.j.s.Server Started @714615ms
[junit4] 2> 714503 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:37806/hdfs__lucene2-us-west.apache.org_37806__home_jenkins_jenkins-slave_workspace_Lucene-Solr-repro_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002_tempDir-002_jetty5, replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/_kxy/k, hostPort=44932, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001/cores}
[junit4] 2> 714504 ERROR (closeThreadPool-258-thread-2) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 714504 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 714504 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 8.1.0
[junit4] 2> 714504 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 714504 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 714504 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2019-03-30T05:42:58.342Z
[junit4] 2> 714635 INFO (zkConnectionManagerCallback-336-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 714706 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
[junit4] 2> 714706 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001/solr.xml
[junit4] 2> 714833 WARN (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [ ] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x108716157480018, likely client has closed socket
[junit4] 2> 714848 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
[junit4] 2> 714848 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
[junit4] 2> 714869 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc, but no JMX reporters were configured - adding default JMX reporter.
[junit4] 2> 716509 INFO (TEST-StressHdfsTest.test-seed#[C6AA855DD07506D9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 6 in directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001 of type NRT
[junit4] 2> 716510 WARN (closeThreadPool-258-thread-1) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
[junit4] 2> 716510 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
[junit4] 2> 716510 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
[junit4] 2> 716510 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
[junit4] 2> 716621 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 716621 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 716621 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.session node0 Scavenging every 600000ms
[junit4] 2> 716621 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1bbeb5e5{/_kxy/k,null,AVAILABLE}
[junit4] 2> 716922 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.AbstractConnector Started ServerConnector@1a36a32c{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:45307}
[junit4] 2> 716922 INFO (closeThreadPool-258-thread-1) [ ] o.e.j.s.Server Started @717034ms
[junit4] 2> 716922 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:37806/hdfs__lucene2-us-west.apache.org_37806__home_jenkins_jenkins-slave_workspace_Lucene-Solr-repro_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002_tempDir-002_jetty6, replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/_kxy/k, hostPort=45307, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/cores}
[junit4] 2> 716923 ERROR (closeThreadPool-258-thread-1) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 716923 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 716923 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 8.1.0
[junit4] 2> 716923 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 716923 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 716923 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2019-03-30T05:43:00.761Z
[junit4] 2> 717214 INFO (zkConnectionManagerCallback-339-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 717299 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
[junit4] 2> 717370 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/solr.xml
[junit4] 2> 717374 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
[junit4] 2> 717374 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
[junit4] 2> 717375 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc, but no JMX reporters were configured - adding default JMX reporter.
[junit4] 2> 718372 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
[junit4] 2> 718485 WARN (closeThreadPool-258-thread-1) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@6cabe3fd[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 718683 WARN (closeThreadPool-258-thread-1) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@62779b07[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 718684 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35719/solr
[junit4] 2> 718791 INFO (zkConnectionManagerCallback-346-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 718864 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
[junit4] 2> 718900 INFO (zkConnectionManagerCallback-348-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 718905 WARN (closeThreadPool-258-thread-2) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@1c5ef5c[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 718992 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (5)
[junit4] 2> 719028 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.ZkController Publish node=127.0.0.1:45307__kxy%2Fk as DOWN
[junit4] 2> 719046 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
[junit4] 2> 719046 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:45307__kxy%2Fk
[junit4] 2> 719158 INFO (zkCallback-244-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719158 INFO (zkCallback-315-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719158 INFO (zkCallback-275-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719158 INFO (zkCallback-306-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719158 INFO (zkCallback-290-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719158 INFO (zkCallback-268-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719158 INFO (zkCallback-237-thread-4) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719159 INFO (zkCallback-256-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719160 WARN (closeThreadPool-258-thread-2) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@615709ed[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 719161 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35719/solr
[junit4] 2> 719168 INFO (zkCallback-297-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719188 INFO (zkCallback-325-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719194 INFO (TEST-StressHdfsTest.test-seed#[C6AA855DD07506D9]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 7 in directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001 of type NRT
[junit4] 2> 719194 INFO (zkCallback-332-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719230 WARN (closeThreadPool-258-thread-3) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
[junit4] 2> 719230 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
[junit4] 2> 719230 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
[junit4] 2> 719230 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 1.8.0_191-b12
[junit4] 2> 719243 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 719243 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 719243 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.session node0 Scavenging every 600000ms
[junit4] 2> 719243 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@4371324d{/_kxy/k,null,AVAILABLE}
[junit4] 2> 719245 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (5) -> (6)
[junit4] 2> 719267 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.AbstractConnector Started ServerConnector@5bbb8a0a{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:37427}
[junit4] 2> 719267 INFO (closeThreadPool-258-thread-3) [ ] o.e.j.s.Server Started @719379ms
[junit4] 2> 719267 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:37806/hdfs__lucene2-us-west.apache.org_37806__home_jenkins_jenkins-slave_workspace_Lucene-Solr-repro_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002_tempDir-002_jetty7, replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/_kxy/k, hostPort=37427, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/cores}
[junit4] 2> 719268 ERROR (closeThreadPool-258-thread-3) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 719268 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 719268 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 8.1.0
[junit4] 2> 719268 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 719268 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 719268 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2019-03-30T05:43:03.106Z
[junit4] 2> 719358 INFO (zkConnectionManagerCallback-356-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 719427 INFO (zkConnectionManagerCallback-358-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 719631 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
[junit4] 2> 719631 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/solr.xml
[junit4] 2> 719632 INFO (zkConnectionManagerCallback-360-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 719632 INFO (zkConnectionManagerCallback-365-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 719720 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (6)
[junit4] 2> 719721 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35719/solr ready
[junit4] 2> 719735 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 719737 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (6)
[junit4] 2> 719773 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.ZkController Publish node=127.0.0.1:44932__kxy%2Fk as DOWN
[junit4] 2> 719775 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
[junit4] 2> 719775 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:44932__kxy%2Fk
[junit4] 2> 719795 INFO (zkCallback-275-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719795 INFO (zkCallback-237-thread-4) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719809 INFO (zkCallback-290-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719809 INFO (zkCallback-244-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719809 INFO (zkCallback-268-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719809 INFO (zkCallback-306-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719809 INFO (zkCallback-315-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719836 INFO (zkCallback-256-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719837 INFO (zkCallback-364-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719837 INFO (zkCallback-332-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719837 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719837 INFO (zkCallback-359-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719841 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
[junit4] 2> 719841 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
[junit4] 2> 719842 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc, but no JMX reporters were configured - adding default JMX reporter.
[junit4] 2> 719844 INFO (zkCallback-297-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719844 INFO (zkCallback-325-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (6) -> (7)
[junit4] 2> 719899 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 720117 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 720117 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 720119 INFO (closeThreadPool-258-thread-1) [ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/cores
[junit4] 2> 720476 INFO (zkConnectionManagerCallback-374-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 720550 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (7)
[junit4] 2> 720551 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35719/solr ready
[junit4] 2> 720552 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 720734 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 720881 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 720881 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 720915 INFO (closeThreadPool-258-thread-2) [ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-5-001/cores
[junit4] 2> 722344 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
[junit4] 2> 722361 WARN (closeThreadPool-258-thread-3) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@16d48b39[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 722384 WARN (closeThreadPool-258-thread-3) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@3071030b[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 722418 INFO (closeThreadPool-258-thread-3) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:35719/solr
[junit4] 2> 722508 INFO (zkConnectionManagerCallback-381-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 722581 INFO (zkConnectionManagerCallback-383-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 722617 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (7)
[junit4] 2> 722637 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.ZkController Publish node=127.0.0.1:37427__kxy%2Fk as DOWN
[junit4] 2> 722638 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
[junit4] 2> 722638 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:37427__kxy%2Fk
[junit4] 2> 722639 INFO (zkCallback-325-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722676 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722676 INFO (zkCallback-332-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722676 INFO (zkCallback-256-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722676 INFO (zkCallback-364-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722684 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722685 INFO (zkCallback-315-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722685 INFO (zkCallback-290-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722685 INFO (zkCallback-268-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722685 INFO (zkCallback-306-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722685 INFO (zkCallback-275-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722685 INFO (zkCallback-244-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722688 INFO (zkCallback-297-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722692 INFO (zkCallback-237-thread-3) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722693 INFO (zkCallback-359-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 722693 INFO (zkCallback-373-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (7) -> (8)
[junit4] 2> 723001 INFO (zkConnectionManagerCallback-390-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 723090 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (8)
[junit4] 2> 723091 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:35719/solr ready
[junit4] 2> 723126 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 723332 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 723477 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 723477 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 723514 INFO (closeThreadPool-258-thread-3) [n:127.0.0.1:37427__kxy%2Fk ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/cores
[junit4] 2> 724796 INFO (qtp171835094-18185) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:45307__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 724796 INFO (qtp171835094-18186) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:35682__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 724797 INFO (qtp171835094-18187) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:33860__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 724797 INFO (qtp171835094-18184) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:37427__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 724808 INFO (qtp171835094-18181) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:44932__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 724845 INFO (qtp171835094-18377) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:42427__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 724873 INFO (qtp171835094-18378) [n:127.0.0.1:33860__kxy%2Fk ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:37605__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 725075 INFO (OverseerThreadFactory-203-thread-3-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:37427__kxy%2Fk for creating new replica of shard shard1 for collection collection1
[junit4] 2> 725077 INFO (OverseerThreadFactory-203-thread-3-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
[junit4] 2> 725359 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk x:collection1_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 726503 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.1.0
[junit4] 2> 726633 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.IndexSchema [collection1_shard1_replica_n1] Schema name=test
[junit4] 2> 727815 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
[junit4] 2> 727961 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n1' using configuration from collection collection1, trusted=true
[junit4] 2> 727961 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n1' (registry 'solr.core.collection1.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 727962 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home
[junit4] 2> 727962 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 727962 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [[collection1_shard1_replica_n1] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-7-001/cores/collection1_shard1_replica_n1], dataDir=[hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data/]
[junit4] 2> 727963 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data/snapshot_metadata
[junit4] 2> 728015 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 728015 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 728015 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 728276 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 728301 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data
[junit4] 2> 728389 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node2/data/index
[junit4] 2> 728409 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 728409 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 728409 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 728483 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 728484 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@58f75db
[junit4] 2> 729016 WARN (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 729434 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 729434 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 729434 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 729490 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 729490 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 729561 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=40, maxMergeAtOnceExplicit=32, maxMergedSegmentMB=60.9423828125, floorSegmentMB=0.3017578125, forceMergeDeletesPctAllowed=3.9179815532163307, segmentsPerTier=46.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0, deletesPctAllowed=40.79942449338278
[junit4] 2> 729634 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@24545db3[collection1_shard1_replica_n1] main]
[junit4] 2> 729649 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 729650 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 729650 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 729651 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408034542321664
[junit4] 2> 729787 INFO (searcherExecutor-276-thread-1-processing-n:127.0.0.1:37427__kxy%2Fk x:collection1_shard1_replica_n1 c:collection1 s:shard1) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [collection1_shard1_replica_n1] Registered new searcher Searcher@24545db3[collection1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 729866 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0}, version=0}
[junit4] 2> 729867 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 730072 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
[junit4] 2> 730072 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 730072 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/
[junit4] 2> 730072 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 730072 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/ has no replicas
[junit4] 2> 730072 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/collection1/leaders/shard1/leader after winning as /collections/collection1/leader_elect/shard1/election/74434056741716002-core_node2-n_0000000000
[junit4] 2> 730074 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/ shard1
[junit4] 2> 730081 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730095 INFO (zkCallback-382-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730145 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
[junit4] 2> 730146 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=4787
[junit4] 2> 730218 INFO (qtp171835094-18184) [n:127.0.0.1:33860__kxy%2Fk c:collection1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:37427__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=5421
[junit4] 2> 730290 INFO (OverseerCollectionConfigSetProcessor-74434056741715972-127.0.0.1:40557__kxy%2Fk-n_0000000000) [n:127.0.0.1:40557__kxy%2Fk ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000007 doesn't exist. Requestor may have disconnected from ZooKeeper
[junit4] 2> 730307 INFO (OverseerThreadFactory-203-thread-4-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:45307__kxy%2Fk for creating new replica of shard shard1 for collection collection1
[junit4] 2> 730308 INFO (OverseerThreadFactory-203-thread-4-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
[junit4] 2> 730337 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730443 INFO (zkCallback-382-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730443 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730448 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 730642 INFO (zkCallback-382-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730642 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 730666 INFO (zkCallback-382-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 731634 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.1.0
[junit4] 2> 731744 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema [collection1_shard1_replica_n3] Schema name=test
[junit4] 2> 732650 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
[junit4] 2> 732812 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n3' using configuration from collection collection1, trusted=true
[junit4] 2> 732813 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n3' (registry 'solr.core.collection1.shard1.replica_n3') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc
[junit4] 2> 732813 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home
[junit4] 2> 732813 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 732813 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [[collection1_shard1_replica_n3] ] Opening new SolrCore at [/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-6-001/cores/collection1_shard1_replica_n3], dataDir=[hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/]
[junit4] 2> 732884 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/snapshot_metadata
[junit4] 2> 733050 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 733050 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 733050 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 733118 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 733154 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data
[junit4] 2> 733458 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/index
[junit4] 2> 733517 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 733517 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 733517 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 733587 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 733588 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@4c88fa37
[junit4] 2> 734374 WARN (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 734792 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 734792 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 734792 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 734974 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 734974 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 734976 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=40, maxMergeAtOnceExplicit=32, maxMergedSegmentMB=60.9423828125, floorSegmentMB=0.3017578125, forceMergeDeletesPctAllowed=3.9179815532163307, segmentsPerTier=46.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0, deletesPctAllowed=40.79942449338278
[junit4] 2> 735068 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@c0e15fe[collection1_shard1_replica_n3] main]
[junit4] 2> 735082 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 735083 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 735083 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 735083 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408040238186496
[junit4] 2> 735100 INFO (searcherExecutor-281-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@c0e15fe[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 735137 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0, core_node4=0}, version=1}
[junit4] 2> 735137 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 735195 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ZkController Core needs to recover:collection1_shard1_replica_n3
[junit4] 2> 735224 INFO (updateExecutor-342-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 735249 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
[junit4] 2> 735250 INFO (qtp760765234-18285) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=4801
[junit4] 2> 735250 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy startupVersions is empty
[junit4] 2> 735280 INFO (qtp1708575716-18319) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/_kxy/k path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 QTime=13
[junit4] 2> 735280 INFO (qtp1708575716-18319) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/_kxy/k path=/admin/ping params={wt=javabin&version=2} status=0 QTime=13
[junit4] 2> 735281 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1_shard1_replica_n3]
[junit4] 2> 735354 INFO (qtp171835094-18185) [n:127.0.0.1:33860__kxy%2Fk c:collection1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:45307__kxy%252Fk&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=10558
[junit4] 2> 735357 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Starting to buffer updates. HDFSUpdateLog{state=ACTIVE, tlog=null}
[junit4] 2> 735357 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Publishing state of core [collection1_shard1_replica_n3] as recovering, leader is [http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/] and I am [http://127.0.0.1:45307/_kxy/k/collection1_shard1_replica_n3/]
[junit4] 2> 735558 INFO (zkCallback-382-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 735558 INFO (zkCallback-382-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 735558 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 735571 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 735572 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://127.0.0.1:37427/_kxy/k]; [WaitForState: action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:45307__kxy%252Fk&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
[junit4] 2> 735577 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 735577 INFO (zkCallback-347-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8])
[junit4] 2> 735593 INFO (qtp1708575716-18315) [n:127.0.0.1:37427__kxy%2Fk x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
[junit4] 2> 735595 INFO (qtp1708575716-18315) [n:127.0.0.1:37427__kxy%2Fk x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:45307__kxy%2Fk, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"dataDir":"hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/","base_url":"http://127.0.0.1:45307/_kxy/k","node_name":"127.0.0.1:45307__kxy%2Fk","type":"NRT","ulogDir":"hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node4/data/tlog","core":"collection1_shard1_replica_n3","shared_storage":"true","state":"recovering"}
[junit4] 2> 735595 INFO (qtp1708575716-18315) [n:127.0.0.1:37427__kxy%2Fk x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:45307__kxy%252Fk&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node4&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=1
[junit4] 2> 736098 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/] - recoveringAfterStartup=[true]
[junit4] 2> 736098 WARN (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.PeerSyncWithLeader no frame of reference to tell if we've missed updates
[junit4] 2> 736098 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy
PeerSync Recovery was not successful - trying replication. [junit4] 2> 736098 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting Replication Recovery. [junit4] 2> 736098 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to replicate from [http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/]. [junit4] 2> 736118 INFO (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1629408041323462656,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false} [junit4] 2> 736152 INFO (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit. 
[junit4] 2> 736230 INFO (qtp760765234-18284) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: BUFFERING replay: false [junit4] 2> 736230 INFO (qtp760765234-18284) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/_kxy/k path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:37427/_kxy/k/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 0 [junit4] 2> 736285 INFO (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush [junit4] 2> 736286 INFO (qtp1708575716-18316) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/_kxy/k path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 187 [junit4] 2> 736341 INFO (qtp1708575716-18317) [n:127.0.0.1:37427__kxy%2Fk c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/_kxy/k path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=53 [junit4] 2> 736342 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's generation: 1 [junit4] 2> 736342 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk 
x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's version: 0 [junit4] 2> 736342 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's generation: 1 [junit4] 2> 736342 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's version: 0 [junit4] 2> 736342 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher New index in Master. Deleting mine... [junit4] 2> 736369 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@2e148c2c[collection1_shard1_replica_n3] main] [junit4] 2> 736410 INFO (OverseerCollectionConfigSetProcessor-74434056741715972-127.0.0.1:40557__kxy%2Fk-n_0000000000) [n:127.0.0.1:40557__kxy%2Fk ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000004 doesn't exist. 
Requestor may have disconnected from ZooKeeper [junit4] 2> 736412 INFO (searcherExecutor-281-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@2e148c2c[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())} [junit4] 2> 736412 INFO (OverseerThreadFactory-203-thread-5-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:35682__kxy%2Fk for creating new replica of shard shard1 for collection collection1 [junit4] 2> 736413 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy No replay needed. [junit4] 2> 736413 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Replication Recovery was successful. [junit4] 2> 736413 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Registering as Active after recovery. 
[junit4] 2> 736431 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Updating version bucket highest from index after successful recovery. [junit4] 2> 736431 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408041651666944 [junit4] 2> 736432 INFO (recoveryExecutor-344-thread-1-processing-n:127.0.0.1:45307__kxy%2Fk x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45307__kxy%2Fk c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[true] [junit4] 2> 736447 INFO (OverseerThreadFactory-203-thread-5-processing-n:127.0.0.1:40557__kxy%2Fk) [n:127.0.0.1:40557__kxy%2Fk c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command. [junit4] 2> 736539 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk x:collection1_shard1_replica_n5] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n5&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT [junit4] 2> 736556 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [8]) [junit4] 2> 736556 INFO (zkCallback-382-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736556 INFO (zkCallback-382-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736560 INFO (zkCallback-347-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736560 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736560 INFO (zkCallback-347-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736771 INFO (zkCallback-347-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736771 INFO (zkCallback-347-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [8]) [junit4] 2> 736791 INFO (zkCallback-347-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736791 INFO (zkCallback-382-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736791 INFO (zkCallback-382-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... (live nodes size: [8]) [junit4] 2> 736791 INFO (zkCallback-382-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/collection1/state.json] for collection [collection1] has occurred - updating... 
(live nodes size: [8]) [junit4] 2> 737954 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.1.0 [junit4] 2> 738068 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.IndexSchema [collection1_shard1_replica_n5] Schema name=test [junit4] 2> 739791 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id [junit4] 2> 739954 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n5' using configuration from collection collection1, trusted=true [junit4] 2> 739955 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n5' (registry 'solr.core.collection1.shard1.replica_n5') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@259bf6dc [junit4] 2> 739971 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home [junit4] 2> 739971 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled [junit4] 2> 739971 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [[collection1_shard1_replica_n5] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/shard-1-001/cores/collection1_shard1_replica_n5], dataDir=[hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data/] [junit4] 2> 739972 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data/snapshot_metadata [junit4] 2> 740026 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true] [junit4] 2> 740027 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes [junit4] 2> 740027 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache [junit4] 2> 740158 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.b.BlockDirectory Block cache on write is disabled [junit4] 2> 740159 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data [junit4] 2> 740316 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:37806/solr_hdfs_home/collection1/core_node6/data/index [junit4] 2> 740354 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true] [junit4] 2> 740354 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes [junit4] 2> 740354 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache [junit4] 2> 740462 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.b.BlockDirectory Block cache on write is disabled [junit4] 2> 740463 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.MockRandomMergePolicy: org.apache.lucene.index.MockRandomMergePolicy@20bc513c [junit4] 2> 741025 WARN (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}} [junit4] 2> 741301 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog [junit4] 2> 741301 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536 [junit4] 2> 741301 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2 [junit4] 2> 741351 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.CommitTracker Hard AutoCommit: disabled [junit4] 2> 741351 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.CommitTracker Soft AutoCommit: disabled [junit4] 2> 741365 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=40, maxMergeAtOnceExplicit=32, maxMergedSegmentMB=60.9423828125, floorSegmentMB=0.3017578125, forceMergeDeletesPctAllowed=3.9179815532163307, segmentsPerTier=46.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0, deletesPctAllowed=40.79942449338278 [junit4] 2> 741562 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.SolrIndexSearcher Opening [Searcher@385e2046[collection1_shard1_replica_n5] main] [junit4] 2> 741600 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1 [junit4] 2> 741600 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1 [junit4] 2> 741601 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 
x:collection1_shard1_replica_n5] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms. [junit4] 2> 741601 INFO (qtp1712054871-18139) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1629408047072804864 [junit4] 2> 741683 INFO (searcherExecutor-286-thread-1-processing-n:127.0.0.1:35682__kxy%2Fk x:collection1_shard1_replica_n5 c:collection1 s:shard1) [n:127.0.0.1:35682__kxy%2Fk c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [collection1_shard1_replica_n5] Registered new searcher Searcher@385e2046[colle [...truncated too long message...] pache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) [junit4] > at java.lang.Thread.run(Thread.java:748) [junit4] > Caused by: java.net.SocketTimeoutException: Read timed out [junit4] > at java.net.SocketInputStream.socketRead0(Native Method) [junit4] > at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) [junit4] > at java.net.SocketInputStream.read(SocketInputStream.java:171) [junit4] > at java.net.SocketInputStream.read(SocketInputStream.java:141) [junit4] > at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) [junit4] > at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) [junit4] > at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282) [junit4] > at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) [junit4] > at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) [junit4] > at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) [junit4] > at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) [junit4] > at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165) [junit4] > at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) [junit4] > at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) [junit4] > at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) [junit4] > at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) [junit4] > at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) [junit4] > at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) [junit4] > at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) [junit4] > at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) [junit4] > at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) [junit4] > at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:548) [junit4] > ... 
53 more [junit4] 2> 1012086 WARN (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called [junit4] 2> 1012091 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.w.WebAppContext@261d707e{datanode,/,null,UNAVAILABLE}{/datanode} [junit4] 2> 1012091 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@64bb06bb{HTTP/1.1,[http/1.1]}{localhost:0} [junit4] 2> 1012091 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 1012092 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@57fc3dca{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,UNAVAILABLE} [junit4] 2> 1012146 WARN (BP-1411784138-127.0.0.1-1553924534507 heartbeating to lucene2-us-west.apache.org/127.0.0.1:37806) [ ] o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager interrupted [junit4] 2> 1012146 WARN (BP-1411784138-127.0.0.1-1553924534507 heartbeating to lucene2-us-west.apache.org/127.0.0.1:37806) [ ] o.a.h.h.s.d.DataNode Ending block pool service for: Block pool BP-1411784138-127.0.0.1-1553924534507 (Datanode Uuid aea1f837-cb06-474c-b550-ecd13bc42b4b) service to lucene2-us-west.apache.org/127.0.0.1:37806 [junit4] 2> 1012165 WARN (refreshUsed-/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/tempDir-001/hdfsBaseDir/data/data1/current/BP-1411784138-127.0.0.1-1553924534507) [ ] o.a.h.f.CachingGetSpaceUsed Thread Interrupted waiting to refresh disk information: sleep interrupted [junit4] 2> 1012205 WARN 
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002/tempDir-001/hdfsBaseDir/data/data2/current/BP-1411784138-127.0.0.1-1553924534507) [ ] o.a.h.f.CachingGetSpaceUsed Thread Interrupted waiting to refresh disk information: sleep interrupted [junit4] 2> 1012244 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.w.WebAppContext@2bfecc87{hdfs,/,null,UNAVAILABLE}{/hdfs} [junit4] 2> 1012245 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.AbstractConnector Stopped ServerConnector@4d3f14a{HTTP/1.1,[http/1.1]}{lucene2-us-west.apache.org:0} [junit4] 2> 1012245 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.session node0 Stopped scavenging [junit4] 2> 1012245 INFO (SUITE-StressHdfsTest-seed#[C6AA855DD07506D9]-worker) [ ] o.e.j.s.h.ContextHandler Stopped o.e.j.s.ServletContextHandler@16e65e64{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,UNAVAILABLE} [junit4] 2> NOTE: leaving temporary files on disk at: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_C6AA855DD07506D9-002 [junit4] 2> Mar 30, 2019 5:47:57 AM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks [junit4] 2> WARNING: Will linger awaiting termination of 16 leaked thread(s). 
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene80): {multiDefault=PostingsFormat(name=LuceneFixedGap), _root_=PostingsFormat(name=Asserting), id=Lucene50(blocksize=128), text=PostingsFormat(name=LuceneVarGapDocFreqInterval), txt_t=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{range_facet_l_dv=DocValuesFormat(name=Asserting), _version_=DocValuesFormat(name=Lucene80), intDefault=DocValuesFormat(name=Lucene80), id_i1=DocValuesFormat(name=Lucene80), range_facet_i_dv=DocValuesFormat(name=Lucene80), intDvoDefault=DocValuesFormat(name=Direct), range_facet_l=DocValuesFormat(name=Lucene80), timestamp=DocValuesFormat(name=Lucene80)}, maxPointsInLeafNode=684, maxMBSortInHeap=5.830008782199224, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@11ed1428), locale=ar-JO, timezone=BET [junit4] 2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_191 (64-bit)/cpus=4,threads=4,free=152333400,total=502267904 [junit4] 2> NOTE: All tests run in this JVM: [StressHdfsTest, StressHdfsTest] [junit4] Completed [9/10 (6!)] on J0 in 346.01s, 1 test, 1 error <<< FAILURES! 
[junit4] [junit4] HEARTBEAT J1 PID([email protected]): 2019-03-30T05:48:40, stalled for 69.8s at: StressHdfsTest.test [junit4] HEARTBEAT J1 PID([email protected]): 2019-03-30T05:49:40, stalled for 130s at: StressHdfsTest.test [junit4] HEARTBEAT J1 PID([email protected]): 2019-03-30T05:50:40, stalled for 190s at: StressHdfsTest.test [junit4] HEARTBEAT J1 PID([email protected]): 2019-03-30T05:51:40, stalled for 250s at: StressHdfsTest.test [junit4] Suite: org.apache.solr.cloud.hdfs.StressHdfsTest [junit4] OK 256s J1 | StressHdfsTest.test [junit4] Completed [10/10 (6!)] on J1 in 263.71s, 1 test [junit4] [junit4] [junit4] Tests with failures [seed: C6AA855DD07506D9]: [junit4] - org.apache.solr.cloud.hdfs.StressHdfsTest.test [junit4] - org.apache.solr.cloud.hdfs.StressHdfsTest.test [junit4] - org.apache.solr.cloud.hdfs.StressHdfsTest.test [junit4] - org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI [junit4] - org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest.testCollectionsAPI [junit4] - org.apache.solr.cloud.hdfs.StressHdfsTest.test [junit4] [junit4] [junit4] JVM J0: 1.65 .. 1019.30 = 1017.66s [junit4] JVM J1: 1.71 .. 1247.12 = 1245.41s [junit4] JVM J2: 1.75 .. 
998.41 = 996.66s [junit4] Execution time total: 20 minutes 47 seconds [junit4] Tests summary: 10 suites, 100 tests, 5 errors, 1 failure BUILD FAILED /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1573: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1100: There were test failures: 10 suites, 100 tests, 5 errors, 1 failure [seed: C6AA855DD07506D9] Total time: 20 minutes 50 seconds [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.api.collections.HdfsCollectionsAPIDistributedZkTest [repro] 4/5 failed: org.apache.solr.cloud.hdfs.StressHdfsTest [repro] git checkout 07b37ff26becf214bae000dff08b9091d31327a8 Previous HEAD position was 03a3562... SOLR-13349:High CPU usage in Solr due to Java 8 bug HEAD is now at 07b37ff... SOLR-13355: Obey 'ALL' for handlers with other predefined perms [repro] Exiting with code 256 Archiving artifacts [Fast Archiver] No artifacts from Lucene-Solr-repro Repro-Lucene-Solr-BadApples-Tests-8.x#58 to compare, so performing full copy of artifacts Recording test results Build step 'Publish JUnit test result report' changed build result to UNSTABLE Email was triggered for: Unstable (Test Failures) Sending email for trigger: Unstable (Test Failures)
