Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1854/
2 tests failed.

FAILED:  org.apache.solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
	at __randomizedtesting.SeedInfo.seed([97127C10F1843A6]:0)

FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
	at __randomizedtesting.SeedInfo.seed([97127C10F1843A6]:0)

Build Log:
[...truncated 15468 lines...]
[junit4] Suite: org.apache.solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest
[junit4] 2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/init-core-data-001
[junit4] 2> 104742 WARN (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
[junit4] 2> 104743 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) w/NUMERIC_DOCVALUES_SYSPROP=false
[junit4] 2> 104745 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl="None")
[junit4] 2> 104745 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
[junit4] 2> 104746 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
[junit4] 2> 105950 WARN (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.h.u.NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[junit4] 1> Formatting using clusterid: testClusterID
[junit4] 2> 107593 WARN (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[junit4] 2> 107879 WARN (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 107919 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 11.0.1+13-LTS
[junit4] 2> 107922 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 107923 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 107923 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.session node0 Scavenging every 660000ms
[junit4] 2> 107925 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@6186339{static,/static,jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,AVAILABLE}
[junit4] 2> 108359 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@9f2733f{hdfs,/,file:///x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/jetty-localhost-41042-hdfs-_-any-11080779452450029231.dir/webapp/,AVAILABLE}{/hdfs}
[junit4] 2> 108362 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.AbstractConnector Started ServerConnector@1e9fa992{HTTP/1.1,[http/1.1]}{localhost:41042}
[junit4] 2> 108362 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.Server Started @108437ms
[junit4] 2> 109315 WARN (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 109320 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 11.0.1+13-LTS
[junit4] 2> 109322 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 109322 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 109322 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.session node0 Scavenging every 600000ms
[junit4] 2> 109322 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@62cb30e7{static,/static,jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,AVAILABLE}
[junit4] 2> 109494 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@6cf032b7{datanode,/,file:///x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/jetty-localhost-45003-datanode-_-any-13326213864534826221.dir/webapp/,AVAILABLE}{/datanode}
[junit4] 2> 109495 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.AbstractConnector Started ServerConnector@76308492{HTTP/1.1,[http/1.1]}{localhost:45003}
[junit4] 2> 109495 INFO (SUITE-HdfsTlogReplayBufferedWhileIndexingTest-seed#[97127C10F1843A6]-worker) [ ] o.e.j.s.Server Started @109570ms
[junit4] 2> 110914 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x20a675002646a432: Processing first storage report for DS-6b8187db-2db9-4b79-9322-f02cec5dca52 from datanode 149538cc-7e7c-4e20-9f38-30518e8aade0
[junit4] 2> 110929 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x20a675002646a432: from storage DS-6b8187db-2db9-4b79-9322-f02cec5dca52 node DatanodeRegistration(127.0.0.1:36929, datanodeUuid=149538cc-7e7c-4e20-9f38-30518e8aade0, infoPort=44743, infoSecurePort=0, ipcPort=40991, storageInfo=lv=-57;cid=testClusterID;nsid=1802227240;c=1558656235061), blocks: 0, hasStaleStorage: true, processing time: 10 msecs, invalidatedBlocks: 0
[junit4] 2> 110930 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x20a675002646a432: Processing first storage report for DS-8f1a5b79-19b1-4064-bbd6-7622567e6842 from datanode 149538cc-7e7c-4e20-9f38-30518e8aade0
[junit4] 2> 110930 INFO (Block report processor) [ ] BlockStateChange BLOCK* processReport 0x20a675002646a432: from storage DS-8f1a5b79-19b1-4064-bbd6-7622567e6842 node DatanodeRegistration(127.0.0.1:36929, datanodeUuid=149538cc-7e7c-4e20-9f38-30518e8aade0, infoPort=44743, infoSecurePort=0, ipcPort=40991, storageInfo=lv=-57;cid=testClusterID;nsid=1802227240;c=1558656235061), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
[junit4] 2> 111014 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4] 2> 111014 INFO (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
[junit4] 2> 111014 INFO (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer Starting server
[junit4] 2> 111114 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer start zk server on port:41112
[junit4] 2> 111114 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:41112
[junit4] 2> 111114 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 41112
[junit4] 2> 111126 INFO (zkConnectionManagerCallback-3331-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 111141 INFO (zkConnectionManagerCallback-3333-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 111149 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
[junit4] 2> 111152 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/schema15.xml to /configs/conf1/schema.xml
[junit4] 2> 111154 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
[junit4] 2> 111157 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
[junit4] 2> 111159 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
[junit4] 2> 111161 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
[junit4] 2> 111166 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
[junit4] 2> 111168 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
[junit4] 2> 111172 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
[junit4] 2> 111176 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
[junit4] 2> 111178 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkTestServer put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
[junit4] 2> 111194 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.AbstractFullDistribZkTestBase Will use NRT replicas unless explicitly asked otherwise
[junit4] 2> 111535 WARN (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
[junit4] 2> 111535 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
[junit4] 2> 111535 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
[junit4] 2> 111535 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 11.0.1+13-LTS
[junit4] 2> 111547 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 111547 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 111547 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.session node0 Scavenging every 660000ms
[junit4] 2> 111548 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@38c8d24a{/,null,AVAILABLE}
[junit4] 2> 111552 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.AbstractConnector Started ServerConnector@6f74d1ca{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:43064}
[junit4] 2> 111552 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.s.Server Started @111626ms
[junit4] 2> 111552 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/, solr.data.dir=hdfs://localhost:41099/hdfs__localhost_41099__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001_tempDir-002_control_data, hostPort=43064, coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/control-001/cores}
[junit4] 2> 111553 ERROR (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
[junit4] 2> 111553 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 111553 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 9.0.0
[junit4] 2> 111553 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 111553 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null
[junit4] 2> 111553 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2019-05-24T00:03:59.396453Z
[junit4] 2> 111573 INFO (zkConnectionManagerCallback-3335-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 111581 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
[junit4] 2> 111581 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/control-001/solr.xml
[junit4] 2> 111591 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
[junit4] 2> 111591 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
[junit4] 2> 111603 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27, but no JMX reporters were configured - adding default JMX reporter.
[junit4] 2> 111772 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
[junit4] 2> 111786 WARN (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@79a4396f[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 111824 WARN (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@7066a532[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 111830 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:41112/solr
[junit4] 2> 111851 INFO (zkConnectionManagerCallback-3342-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 111856 INFO (zkConnectionManagerCallback-3344-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 112064 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:43064_
[junit4] 2> 112065 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.c.Overseer Overseer (id=73354616241324036-127.0.0.1:43064_-n_0000000000) starting
[junit4] 2> 112085 INFO (zkConnectionManagerCallback-3351-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 112089 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:41112/solr ready
[junit4] 2> 112090 INFO (OverseerStateUpdate-73354616241324036-127.0.0.1:43064_-n_0000000000) [n:127.0.0.1:43064_ ] o.a.s.c.Overseer Starting to work on the main queue : 127.0.0.1:43064_
[junit4] 2> 112091 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:43064_
[junit4] 2> 112129 INFO (zkCallback-3343-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 112130 INFO (zkCallback-3350-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 112153 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory.
[junit4] 2> 112210 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27
[junit4] 2> 112260 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27
[junit4] 2> 112261 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27
[junit4] 2> 112264 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [n:127.0.0.1:43064_ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/control-001/cores
[junit4] 2> 112341 INFO (zkConnectionManagerCallback-3357-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 112343 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 112345 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:41112/solr ready
[junit4] 2> 112357 INFO (qtp163619927-3320) [n:127.0.0.1:43064_ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:43064_&wt=javabin&version=2 and sendToOCPQueue=true
[junit4] 2> 112370 INFO (OverseerThreadFactory-155-thread-1-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ ] o.a.s.c.a.c.CreateCollectionCmd Create collection control_collection
[junit4] 2> 112499 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ x:control_collection_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 112500 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ x:control_collection_shard1_replica_n1] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
[junit4] 2> 113542 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
[junit4] 2> 113729 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.IndexSchema [control_collection_shard1_replica_n1] Schema name=test
[junit4] 2> 113739 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis class [solr.TrieIntField]. Please consult documentation how to replace it accordingly.
[junit4] 2> 113744 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis class [solr.TrieFloatField]. Please consult documentation how to replace it accordingly.
[junit4] 2> 113746 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis class [solr.TrieLongField]. Please consult documentation how to replace it accordingly.
[junit4] 2> 113748 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis class [solr.TrieDoubleField]. Please consult documentation how to replace it accordingly.
[junit4] 2> 113783 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrResourceLoader Solr loaded a deprecated plugin/analysis class [solr.TrieDateField]. Please consult documentation how to replace it accordingly.
[junit4] 2> 114048 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
[junit4] 2> 114218 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' using configuration from collection control_collection, trusted=true
[junit4] 2> 114219 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.control_collection.shard1.replica_n1' (registry 'solr.core.control_collection.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27
[junit4] 2> 114227 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://localhost:41099/solr_hdfs_home
[junit4] 2> 114227 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 114228 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/control-001/cores/control_collection_shard1_replica_n1], dataDir=[hdfs://localhost:41099/solr_hdfs_home/control_collection/core_node2/data/]
[junit4] 2> 114230 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
[junit4] 2> 114246 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 114246 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 114246 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 114560 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.h.HdfsDirectory The NameNode is in SafeMode - Solr will wait 5 seconds and try again.
[junit4] 2> 119792 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 119804 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/control_collection/core_node2/data
[junit4] 2> 119879 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/control_collection/core_node2/data/index
[junit4] 2> 119887 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 119887 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 119887 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 119933 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 119934 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=8, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7193527785811858]
[junit4] 2> 121181 WARN (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 121345 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 121345 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 121345 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 121408 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: if uncommitted for 10000ms;
[junit4] 2> 121408 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: if uncommitted for 3000ms;
[junit4] 2> 121420 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=43, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
[junit4] 2> 121632 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@76eb5e4e[control_collection_shard1_replica_n1] main]
[junit4] 2> 121657 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 121658 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 121672 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 121682 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1634369535501926400
[junit4] 2> 121716 INFO (searcherExecutor-160-thread-1-processing-n:127.0.0.1:43064_ x:control_collection_shard1_replica_n1 c:control_collection s:shard1) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore [control_collection_shard1_replica_n1] Registered new searcher Searcher@76eb5e4e[control_collection_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 121742 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/control_collection/terms/shard1 to Terms{values={core_node2=0}, version=0}
[junit4] 2> 121742 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/control_collection/leaders/shard1
[junit4] 2> 121751 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
[junit4] 2> 121751 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 121751 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:43064/control_collection_shard1_replica_n1/
[junit4] 2> 121751 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 121752 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:43064/control_collection_shard1_replica_n1/ has no replicas
[junit4] 2> 121752 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/control_collection/leaders/shard1/leader after winning as /collections/control_collection/leader_elect/shard1/election/73354616241324036-core_node2-n_0000000000
[junit4] 2> 121755 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:43064/control_collection_shard1_replica_n1/ shard1
[junit4] 2> 121760 INFO (zkCallback-3343-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating...
(live nodes size: [1]) [junit4] 2> 121763 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary [junit4] 2> 121765 INFO (zkCallback-3343-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1]) [junit4] 2> 121767 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ c:control_collection s:shard1 x:control_collection_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=9269 [junit4] 2> 121793 INFO (qtp163619927-3320) [n:127.0.0.1:43064_ ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 45 seconds. Check all shard replicas [junit4] 2> 121892 INFO (zkCallback-3343-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1]) [junit4] 2> 121892 INFO (zkCallback-3343-thread-1) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... 
(live nodes size: [1]) [junit4] 2> 121894 INFO (qtp163619927-3320) [n:127.0.0.1:43064_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:43064_&wt=javabin&version=2} status=0 QTime=9537 [junit4] 2> 121901 INFO (zkCallback-3343-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1]) [junit4] 2> 121927 INFO (zkConnectionManagerCallback-3363-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 121938 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1) [junit4] 2> 121940 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:41112/solr ready [junit4] 2> 121943 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection loss:false [junit4] 2> 121946 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2 and sendToOCPQueue=true [junit4] 2> 121959 INFO (OverseerCollectionConfigSetProcessor-73354616241324036-127.0.0.1:43064_-n_0000000000) [n:127.0.0.1:43064_ ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist. 
Requestor may have disconnected from ZooKeeper [junit4] 2> 121963 INFO (OverseerThreadFactory-155-thread-2-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ ] o.a.s.c.a.c.CreateCollectionCmd Create collection collection1 [junit4] 2> 122184 WARN (OverseerThreadFactory-155-thread-2-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ ] o.a.s.c.a.c.CreateCollectionCmd It is unusual to create a collection (collection1) without cores. [junit4] 2> 122195 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ ] o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 45 seconds. Check all shard replicas [junit4] 2> 122196 INFO (qtp163619927-3322) [n:127.0.0.1:43064_ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2} status=0 QTime=250 [junit4] 2> 122213 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.AbstractFullDistribZkTestBase Creating jetty instances pullReplicaCount=0 numOtherReplicas=2 [junit4] 2> 122718 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-1-001 of type NRT [junit4] 2> 122740 WARN (closeThreadPool-3364-thread-1) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time [junit4] 2> 122741 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0) [junit4] 2> 122741 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ... 
[junit4] 2> 122741 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 11.0.1+13-LTS [junit4] 2> 122842 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0 [junit4] 2> 122842 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.session No SessionScavenger set, using defaults [junit4] 2> 122842 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.session node0 Scavenging every 600000ms [junit4] 2> 122876 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@55ae1464{/,null,AVAILABLE} [junit4] 2> 122877 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.AbstractConnector Started ServerConnector@7651cd84{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:34664} [junit4] 2> 122877 INFO (closeThreadPool-3364-thread-1) [ ] o.e.j.s.Server Started @122951ms [junit4] 2> 122877 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/, solrconfig=solrconfig.xml, solr.data.dir=hdfs://localhost:41099/hdfs__localhost_41099__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001_tempDir-002_jetty1, hostPort=34664, coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-1-001/cores} [junit4] 2> 122878 ERROR (closeThreadPool-3364-thread-1) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete. 
[junit4] 2> 122878 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory [junit4] 2> 122878 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 9.0.0 [junit4] 2> 122878 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null [junit4] 2> 122878 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null [junit4] 2> 122878 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2019-05-24T00:04:10.721343Z [junit4] 2> 123285 INFO (zkConnectionManagerCallback-3366-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 123287 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper) [junit4] 2> 123287 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-1-001/solr.xml [junit4] 2> 123294 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored [junit4] 2> 123294 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored [junit4] 2> 123326 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27, but no JMX reporters were configured - adding default JMX reporter.
[junit4] 2> 123615 INFO (TEST-HdfsTlogReplayBufferedWhileIndexingTest.test-seed#[97127C10F1843A6]) [ ] o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-2-001 of type NRT [junit4] 2> 123628 WARN (closeThreadPool-3364-thread-2) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time [junit4] 2> 123629 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0) [junit4] 2> 123629 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ... [junit4] 2> 123629 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.Server jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 11.0.1+13-LTS [junit4] 2> 123651 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.session DefaultSessionIdManager workerName=node0 [junit4] 2> 123651 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.session No SessionScavenger set, using defaults [junit4] 2> 123651 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.session node0 Scavenging every 660000ms [junit4] 2> 123652 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@18a03aa6{/,null,AVAILABLE} [junit4] 2> 123652 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.AbstractConnector Started ServerConnector@3073c1a3{HTTP/1.1,[http/1.1, h2c]}{127.0.0.1:39421} [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.e.j.s.Server Started @123727ms [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/, solrconfig=solrconfig.xml, 
solr.data.dir=hdfs://localhost:41099/hdfs__localhost_41099__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001_tempDir-002_jetty2, hostPort=39421, coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-2-001/cores} [junit4] 2> 123653 ERROR (closeThreadPool-3364-thread-2) [ ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete. [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™
version 9.0.0 [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir: null [junit4] 2> 123653 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time: 2019-05-24T00:04:11.496879Z [junit4] 2> 123696 INFO (zkConnectionManagerCallback-3369-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 123698 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper) [junit4] 2> 123698 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-2-001/solr.xml [junit4] 2> 123706 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored [junit4] 2> 123706 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored [junit4] 2> 123709 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27, but no JMX reporters were configured - adding default JMX reporter. [junit4] 2> 123968 INFO (OverseerCollectionConfigSetProcessor-73354616241324036-127.0.0.1:43064_-n_0000000000) [n:127.0.0.1:43064_ ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000002 doesn't exist. 
Requestor may have disconnected from ZooKeeper [junit4] 2> 123978 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false] [junit4] 2> 123980 WARN (closeThreadPool-3364-thread-2) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@22e2a2bc[provider=null,keyStore=null,trustStore=null] [junit4] 2> 123984 WARN (closeThreadPool-3364-thread-2) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@544a73c1[provider=null,keyStore=null,trustStore=null] [junit4] 2> 123985 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized: WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false] [junit4] 2> 123986 INFO (closeThreadPool-3364-thread-2) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:41112/solr [junit4] 2> 123988 WARN (closeThreadPool-3364-thread-1) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@14d481a9[provider=null,keyStore=null,trustStore=null] [junit4] 2> 124054 INFO (zkConnectionManagerCallback-3377-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 124063 WARN (closeThreadPool-3364-thread-1) [ ] o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for SslContextFactory@36b0a2c3[provider=null,keyStore=null,trustStore=null] [junit4] 2> 124065 INFO (closeThreadPool-3364-thread-1) [ ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:41112/solr [junit4] 2> 124121 INFO (zkConnectionManagerCallback-3381-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 124159 INFO (zkConnectionManagerCallback-3384-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 124171 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.c.ZkStateReader Updated 
live nodes from ZooKeeper... (0) -> (1) [junit4] 2> 124180 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.ZkController Publish node=127.0.0.1:39421_ as DOWN [junit4] 2> 124182 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores [junit4] 2> 124182 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:39421_ [junit4] 2> 124184 INFO (zkCallback-3362-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2) [junit4] 2> 124184 INFO (zkCallback-3350-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2) [junit4] 2> 124185 INFO (zkCallback-3343-thread-3) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2) [junit4] 2> 124190 INFO (zkConnectionManagerCallback-3388-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 124197 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2) [junit4] 2> 124203 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.ZkController Publish node=127.0.0.1:34664_ as DOWN [junit4] 2> 124205 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores [junit4] 2> 124205 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:34664_ [junit4] 2> 124207 INFO (zkCallback-3362-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 124207 INFO (zkCallback-3343-thread-3) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 124208 INFO (zkCallback-3350-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... 
(2) -> (3) [junit4] 2> 124214 INFO (zkCallback-3380-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (3) [junit4] 2> 124230 INFO (zkCallback-3387-thread-1) [ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3) [junit4] 2> 124233 INFO (zkConnectionManagerCallback-3393-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 124236 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3) [junit4] 2> 124238 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:41112/solr ready [junit4] 2> 124241 INFO (zkConnectionManagerCallback-3400-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 124243 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3) [junit4] 2> 124244 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:41112/solr ready [junit4] 2> 124293 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory. [junit4] 2> 124329 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history in memory. 
[junit4] 2> 124365 WARN (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.m.r.j.JmxMetricsReporter Unable to register meter [junit4] 2> => javax.management.InstanceNotFoundException: solr:dom1=node,category=UPDATE,scope=updateShardHandler,name=threadPool.updateOnlyExecutor.completed [junit4] 2> at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1083) [junit4] 2> javax.management.InstanceNotFoundException: solr:dom1=node,category=UPDATE,scope=updateShardHandler,name=threadPool.updateOnlyExecutor.completed [junit4] 2> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1083) ~[?:?] [junit4] 2> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:423) ~[?:?] [junit4] 2> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:411) ~[?:?] [junit4] 2> at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546) ~[?:?] [junit4] 2> at org.apache.solr.metrics.reporters.jmx.JmxMetricsReporter$JmxListener.registerMBean(JmxMetricsReporter.java:531) ~[java/:?] [junit4] 2> at org.apache.solr.metrics.reporters.jmx.JmxMetricsReporter$JmxListener.onMeterAdded(JmxMetricsReporter.java:648) ~[java/:?] [junit4] 2> at org.apache.solr.metrics.reporters.jmx.JmxMetricsReporter.lambda$start$0(JmxMetricsReporter.java:736) ~[java/:?] [junit4] 2> at java.util.HashMap.forEach(HashMap.java:1336) ~[?:?] [junit4] 2> at org.apache.solr.metrics.reporters.jmx.JmxMetricsReporter.start(JmxMetricsReporter.java:732) ~[java/:?] [junit4] 2> at org.apache.solr.metrics.reporters.SolrJmxReporter.doInit(SolrJmxReporter.java:109) ~[java/:?] [junit4] 2> at org.apache.solr.metrics.SolrMetricReporter.init(SolrMetricReporter.java:70) ~[java/:?] [junit4] 2> at org.apache.solr.metrics.SolrMetricManager.loadReporter(SolrMetricManager.java:916) ~[java/:?] 
[junit4] 2> at org.apache.solr.metrics.SolrMetricManager.loadReporters(SolrMetricManager.java:843) ~[java/:?] [junit4] 2> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:654) ~[java/:?] [junit4] 2> at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:255) ~[java/:?] [junit4] 2> at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:175) ~[java/:?] [junit4] 2> at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:136) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:750) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) ~[?:?] [junit4] 2> at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) ~[?:?] [junit4] 2> at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734) ~[?:?] [junit4] 2> at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) ~[?:?] 
[junit4] 2> at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1449) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1513) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1158) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:995) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:467) ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:384) ~[java/:?] [junit4] 2> at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69) ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114] [junit4] 2> at org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:558) ~[java/:?] [junit4] 2> at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:497) ~[java/:?] [junit4] 2> at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:465) ~[java/:?] [junit4] 2> at org.apache.solr.cloud.AbstractFullDistribZkTestBase.lambda$createJettys$2(AbstractFullDistribZkTestBase.java:464) ~[java/:?] [junit4] 2> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?] 
[junit4] 2> at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?] [junit4] 2> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[java/:?] [junit4] 2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?] [junit4] 2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?] [junit4] 2> at java.lang.Thread.run(Thread.java:834) [?:?] [junit4] 2> 124384 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27 [junit4] 2> 124475 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27 [junit4] 2> 124483 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27 [junit4] 2> 124483 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27 [junit4] 2> 124487 INFO (closeThreadPool-3364-thread-1) [n:127.0.0.1:34664_ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-1-001/cores [junit4] 2> 124540 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' 
(registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27 [junit4] 2> 124540 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27 [junit4] 2> 124546 INFO (closeThreadPool-3364-thread-2) [n:127.0.0.1:39421_ ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-2-001/cores [junit4] 2> 125188 INFO (qtp771633406-3392) [n:127.0.0.1:34664_ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:39421_&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true [junit4] 2> 125191 INFO (qtp771633406-3394) [n:127.0.0.1:34664_ ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params node=127.0.0.1:34664_&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2 and sendToOCPQueue=true [junit4] 2> 125220 INFO (OverseerThreadFactory-155-thread-3-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:39421_ for creating new replica of shard shard1 for collection collection1 [junit4] 2> 125227 INFO (OverseerThreadFactory-155-thread-3-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command. 
[junit4] 2> 125260 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ x:collection1_shard1_replica_n1] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 126467 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
[junit4] 2> 126571 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.IndexSchema [collection1_shard1_replica_n1] Schema name=test
[junit4] 2> 126842 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
[junit4] 2> 126985 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n1' using configuration from collection collection1, trusted=true
[junit4] 2> 126986 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n1' (registry 'solr.core.collection1.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27
[junit4] 2> 126988 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://localhost:41099/solr_hdfs_home
[junit4] 2> 126988 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 126988 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [[collection1_shard1_replica_n1] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-2-001/cores/collection1_shard1_replica_n1], dataDir=[hdfs://localhost:41099/solr_hdfs_home/collection1/core_node2/data/]
[junit4] 2> 126991 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/collection1/core_node2/data/snapshot_metadata
[junit4] 2> 127014 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 127015 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 127015 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 127054 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 127058 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/collection1/core_node2/data
[junit4] 2> 127123 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/collection1/core_node2/data/index
[junit4] 2> 127138 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 127139 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 127139 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 127174 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 127175 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=8, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7193527785811858]
[junit4] 2> 127324 WARN (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 127562 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 127562 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 127562 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 127584 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.CommitTracker Hard AutoCommit: if uncommitted for 10000ms;
[junit4] 2> 127584 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.CommitTracker Soft AutoCommit: if uncommitted for 3000ms;
[junit4] 2> 127597 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=43, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
[junit4] 2> 127655 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.SolrIndexSearcher Opening [Searcher@79a37a3c[collection1_shard1_replica_n1] main]
[junit4] 2> 127657 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 127658 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 127659 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 127659 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1634369541769265152
[junit4] 2> 127717 INFO (searcherExecutor-183-thread-1-processing-n:127.0.0.1:39421_ x:collection1_shard1_replica_n1 c:collection1 s:shard1) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SolrCore [collection1_shard1_replica_n1] Registered new searcher Searcher@79a37a3c[collection1_shard1_replica_n1] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 127720 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0}, version=0}
[junit4] 2> 127720 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 128109 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
[junit4] 2> 128109 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
[junit4] 2> 128109 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync replicas to http://127.0.0.1:39421/collection1_shard1_replica_n1/
[junit4] 2> 128109 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
[junit4] 2> 128109 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.SyncStrategy http://127.0.0.1:39421/collection1_shard1_replica_n1/ has no replicas
[junit4] 2> 128109 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node /collections/collection1/leaders/shard1/leader after winning as /collections/collection1/leader_elect/shard1/election/73354616241324043-core_node2-n_0000000000
[junit4] 2> 128111 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ShardLeaderElectionContext I am the new leader: http://127.0.0.1:39421/collection1_shard1_replica_n1/ shard1
[junit4] 2> 128116 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.c.ZkController I am the leader, no recovery necessary
[junit4] 2> 128119 INFO (qtp955831380-3405) [n:127.0.0.1:39421_ c:collection1 s:shard1 x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=2859
[junit4] 2> 128129 INFO (qtp771633406-3392) [n:127.0.0.1:34664_ c:collection1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:39421_&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=2941
[junit4] 2> 129215 INFO (OverseerThreadFactory-155-thread-4-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:34664_ for creating new replica of shard shard1 for collection collection1
[junit4] 2> 129215 INFO (OverseerCollectionConfigSetProcessor-73354616241324036-127.0.0.1:43064_-n_0000000000) [n:127.0.0.1:43064_ ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000004 doesn't exist. Requestor may have disconnected from ZooKeeper
[junit4] 2> 129218 INFO (OverseerThreadFactory-155-thread-4-processing-n:127.0.0.1:43064_) [n:127.0.0.1:43064_ c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
[junit4] 2> 129260 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ x:collection1_shard1_replica_n3] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 130287 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
[junit4] 2> 130380 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema [collection1_shard1_replica_n3] Schema name=test
[junit4] 2> 130600 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
[junit4] 2> 130676 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n3' using configuration from collection collection1, trusted=true
[junit4] 2> 130676 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n3' (registry 'solr.core.collection1.shard1.replica_n3') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@61b4ef27
[junit4] 2> 130677 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://localhost:41099/solr_hdfs_home
[junit4] 2> 130677 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
[junit4] 2> 130677 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [[collection1_shard1_replica_n3] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.HdfsTlogReplayBufferedWhileIndexingTest_97127C10F1843A6-001/shard-1-001/cores/collection1_shard1_replica_n3], dataDir=[hdfs://localhost:41099/solr_hdfs_home/collection1/core_node4/data/]
[junit4] 2> 130686 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/collection1/core_node4/data/snapshot_metadata
[junit4] 2> 130700 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 130700 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 130700 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 130719 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 130728 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/collection1/core_node4/data
[junit4] 2> 130779 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:41099/solr_hdfs_home/collection1/core_node4/data/index
[junit4] 2> 130791 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 130791 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [4194304] will allocate [1] slabs and use ~[4194304] bytes
[junit4] 2> 130791 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
[junit4] 2> 130810 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.b.BlockDirectory Block cache on write is disabled
[junit4] 2> 130811 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=8, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7193527785811858]
[junit4] 2> 130926 WARN (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 131100 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 131100 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 131100 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 131121 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Hard AutoCommit: if uncommitted for 10000ms;
[junit4] 2> 131121 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Soft AutoCommit: if uncommitted for 3000ms;
[junit4] 2> 131131 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy: minMergeSize=1000, mergeFactor=43, maxMergeSize=9223372036854775807, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
[junit4] 2> 131160 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@28eefedd[collection1_shard1_replica_n3] main]
[junit4] 2> 131164 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 131165 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 131166 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 131167 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1634369545447669760
[junit4] 2> 131172 INFO (searcherExecutor-188-thread-1-processing-n:127.0.0.1:34664_ x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@28eefedd[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 131191 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0, core_node4=0}, version=1}
[junit4] 2> 131191 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 131197 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ZkController Core needs to recover:collection1_shard1_replica_n3
[junit4] 2> 131210 INFO (updateExecutor-3378-thread-1-processing-n:127.0.0.1:34664_ x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:34664_ c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 131241 INFO (qtp771633406-3390) [n:127.0.0.1:34664_ c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=1980
[junit4] 2> 131255 INFO
(qtp771633406-3394) [n:127.0.0.1:34664_ c:collection1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:34664_&act [...truncated too long message...]

ail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/top-level-ivy-settings.xml

resolve:

jar-checksums:
    [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/null645439230
     [copy] Copying 240 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/null645439230
   [delete] Deleting directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/null645439230

check-working-copy:
[ivy:cachepath] :: resolving dependencies :: #;working@lucene1-us-west
[ivy:cachepath]         confs: [default]
[ivy:cachepath]         found org.eclipse.jgit#org.eclipse.jgit;5.3.0.201903130848-r in public
[ivy:cachepath]         found com.jcraft#jsch;0.1.54 in public
[ivy:cachepath]         found com.jcraft#jzlib;1.1.1 in public
[ivy:cachepath]         found com.googlecode.javaewah#JavaEWAH;1.1.6 in public
[ivy:cachepath]         found org.slf4j#slf4j-api;1.7.2 in public
[ivy:cachepath]         found org.bouncycastle#bcpg-jdk15on;1.60 in public
[ivy:cachepath]         found org.bouncycastle#bcprov-jdk15on;1.60 in public
[ivy:cachepath]         found org.bouncycastle#bcpkix-jdk15on;1.60 in public
[ivy:cachepath]         found org.slf4j#slf4j-nop;1.7.2 in public
[ivy:cachepath] :: resolution report :: resolve 57ms :: artifacts dl 6ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   9   |   0   |   0   |   0   ||   9   |   0   |
        ---------------------------------------------------------------------
[wc-checker] Initializing working copy...
[wc-checker] Checking working copy status...
-jenkins-base:

BUILD SUCCESSFUL
Total time: 388 minutes 26 seconds
Archiving artifacts
java.lang.InterruptedException: no matches found within 10000
	at hudson.FilePath$ValidateAntFileMask.hasMatch(FilePath.java:2847)
	at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2726)
	at hudson.FilePath$ValidateAntFileMask.invoke(FilePath.java:2707)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to lucene
		at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
		at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
		at hudson.remoting.Channel.call(Channel.java:955)
		at hudson.FilePath.act(FilePath.java:1072)
		at hudson.FilePath.act(FilePath.java:1061)
		at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
		at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
		at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
		at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
		at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
		at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
		at hudson.model.Build$BuildExecution.post2(Build.java:186)
		at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
		at hudson.model.Run.execute(Run.java:1835)
		at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
		at hudson.model.ResourceController.execute(ResourceController.java:97)
		at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.FilePath$TunneledInterruptedException
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3088)
	at hudson.remoting.UserRequest.perform(UserRequest.java:212)
	at hudson.remoting.UserRequest.perform(UserRequest.java:54)
	at hudson.remoting.Request$2.run(Request.java:369)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused: java.lang.InterruptedException: java.lang.InterruptedException: no matches found within 10000
	at hudson.FilePath.act(FilePath.java:1074)
	at hudson.FilePath.act(FilePath.java:1061)
	at hudson.FilePath.validateAntFileMask(FilePath.java:2705)
	at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
	at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
	at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
	at hudson.model.Build$BuildExecution.post2(Build.java:186)
	at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
	at hudson.model.Run.execute(Run.java:1835)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:97)
	at hudson.model.Executor.run(Executor.java:429)
No artifacts found that match the file pattern "**/*.events,heapdumps/**,**/hs_err_pid*". Configuration error?
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org