Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1517/

3 tests failed.
FAILED:  org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
        at __randomizedtesting.SeedInfo.seed([11AE52FE229C5CFA:96F92F71B3C5207A]:0)
        at org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:107)
        at org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:128)
        at org.apache.lucene.util.bkd.PointReader.split(PointReader.java:68)
        at org.apache.lucene.util.bkd.OfflinePointReader.split(OfflinePointReader.java:169)
        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1787)
        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
        at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1004)
        at org.apache.lucene.index.RandomCodec$1$1.writeField(RandomCodec.java:141)
        at org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
        at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:186)
        at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:144)
        at org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142)
        at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:200)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:160)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4477)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4138)
        at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2332)
        at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5144)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1776)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1465)
        at org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:190)
        at org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:160)
        at org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:75)
        at org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig(TestInetAddressRangeQueries.java:81)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)


FAILED:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
Could not find collection:delete_data_dir

Stack Trace:
java.lang.AssertionError: Could not find collection:delete_data_dir
        at __randomizedtesting.SeedInfo.seed([D09E26B97E1D25E3:58CA1963D0E1481B]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertNotNull(Assert.java:526)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
        at org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:114)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
        at __randomizedtesting.SeedInfo.seed([D09E26B97E1D25E3:E32C0E7D73AAFF54]:0)
        at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:78)
        at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:51)
        at org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:164)
        at org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:150)
        at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:278)
        at org.apache.lucene.store.MockIndexOutputWrapper.copyBytes(MockIndexOutputWrapper.java:165)
        at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:100)
        at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5051)
        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4541)
        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4138)
        at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2332)
        at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5144)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1776)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1465)
        at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
        at org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit(TestDocTermOrds.java:167)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)




Build Log:
[...truncated 2129 lines...]
   [junit4] JVM J0: stdout was not empty, see: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/temp/junit4-J0-20180331_140000_6494366924731613523611.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) ----
   [junit4] codec: CheapBastard, pf: FST50, dvf: Memory
   [junit4] <<< JVM J0: EOF ----

[...truncated 7231 lines...]
   [junit4] Suite: org.apache.lucene.search.TestInetAddressRangeQueries
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestInetAddressRangeQueries -Dtests.method=testRandomBig -Dtests.seed=11AE52FE229C5CFA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR    396s J1 | TestInetAddressRangeQueries.testRandomBig <<<
   [junit4]    > Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]    >        at __randomizedtesting.SeedInfo.seed([11AE52FE229C5CFA:96F92F71B3C5207A]:0)
   [junit4]    >        at org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:107)
   [junit4]    >        at org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:128)
   [junit4]    >        at org.apache.lucene.util.bkd.PointReader.split(PointReader.java:68)
   [junit4]    >        at org.apache.lucene.util.bkd.OfflinePointReader.split(OfflinePointReader.java:169)
   [junit4]    >        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1787)
   [junit4]    >        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
   [junit4]    >        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
   [junit4]    >        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
   [junit4]    >        at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1801)
   [junit4]    >        at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1004)
   [junit4]    >        at org.apache.lucene.index.RandomCodec$1$1.writeField(RandomCodec.java:141)
   [junit4]    >        at org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
   [junit4]    >        at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:186)
   [junit4]    >        at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:144)
   [junit4]    >        at org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142)
   [junit4]    >        at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:200)
   [junit4]    >        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:160)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4477)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4138)
   [junit4]    >        at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2332)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5144)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1776)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1465)
   [junit4]    >        at org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:190)
   [junit4]    >        at org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:160)
   [junit4]    >        at org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:75)
   [junit4]    >        at org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig(TestInetAddressRangeQueries.java:81)
   [junit4]   2> NOTE: leaving temporary files on disk at: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/misc/test/J1/temp/lucene.search.TestInetAddressRangeQueries_11AE52FE229C5CFA-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {id=PostingsFormat(name=Asserting)}, docValues:{ipRangeField=DocValuesFormat(name=Asserting), id=DocValuesFormat(name=Memory)}, maxPointsInLeafNode=1990, maxMBSortInHeap=5.574823827647666, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@6815aada), locale=ja-JP-u-ca-japanese-x-lvariant-JP, timezone=America/Virgin
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=247668256,total=300417024
   [junit4]   2> NOTE: All tests run in this JVM: [TestLazyDocument, TestIndexSplitter, TestInetAddressRangeQueries]
   [junit4] Completed [13/13 (1!)] on J1 in 439.70s, 4 tests, 1 error <<< FAILURES!

[...truncated 1 lines...]
   [junit4] JVM J1: stdout was not empty, see: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/misc/test/temp/junit4-J1-20180331_200432_8424219434939951538779.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) ----
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/heapdumps/java_pid19826.hprof ...
   [junit4] Heap dump file created [574735744 bytes in 3.537 secs]
   [junit4] <<< JVM J1: EOF ----

[...truncated 3939 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.StressHdfsTest
   [junit4]   2> 424868 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/init-core-data-001
   [junit4]   2> 424869 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 424882 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 424882 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 429589 WARN  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.h.u.NativeCodeLoader Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 431924 WARN  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 432513 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Logging to org.apache.logging.slf4j.Log4jLogger@78428620 via org.mortbay.log.Slf4jLog
   [junit4]   2> 432619 WARN  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 433956 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log jetty-6.1.26
   [junit4]   2> 434043 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Extract jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs to ./temp/Jetty_lucene2.us.west_apache_org_43995_hdfs____1wq84g/webapp
   [junit4]   2> 436073 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Started HttpServer2$selectchannelconnectorwithsafestar...@lucene2-us-west.apache.org:43995
   [junit4]   2> 439515 WARN  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 439528 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log jetty-6.1.26
   [junit4]   2> 439578 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Extract jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_46351_datanode____.9dpzd8/webapp
   [junit4]   2> 440442 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46351
   [junit4]   2> 442626 WARN  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 442628 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log jetty-6.1.26
   [junit4]   2> 442779 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Extract jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_45406_datanode____.thram8/webapp
   [junit4]   2> 444164 INFO  (SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45406
   [junit4]   2> 447179 ERROR (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data1/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data2/]] heartbeating to lucene2-us-west.apache.org/127.0.0.1:44838) [    ] o.a.h.h.s.d.DirectoryScanner dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
   [junit4]   2> 447188 ERROR (DataNode: [[[DISK]file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data3/, [DISK]file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data4/]] heartbeating to lucene2-us-west.apache.org/127.0.0.1:44838) [    ] o.a.h.h.s.d.DirectoryScanner dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
   [junit4]   2> 448645 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x114b725814536a: from storage DS-ad2598fe-de93-416d-989c-ae7c1b3b3737 node DatanodeRegistration(127.0.0.1:39359, datanodeUuid=c389dc97-6b42-4180-b435-b09f8c9225d8, infoPort=35234, infoSecurePort=0, ipcPort=45879, storageInfo=lv=-56;cid=testClusterID;nsid=498589;c=0), blocks: 0, hasStaleStorage: true, processing time: 96 msecs
   [junit4]   2> 448718 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x114b725758c3fa: from storage DS-ee7264bf-937b-4e6e-93e6-4f63bbd65b69 node DatanodeRegistration(127.0.0.1:42140, datanodeUuid=8df7c683-4f4a-4783-8354-530639098557, infoPort=40686, infoSecurePort=0, ipcPort=37135, storageInfo=lv=-56;cid=testClusterID;nsid=498589;c=0), blocks: 0, hasStaleStorage: true, processing time: 73 msecs
   [junit4]   2> 448718 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x114b725814536a: from storage DS-92002ffe-1009-4380-a0c1-9d12af6376a6 node DatanodeRegistration(127.0.0.1:39359, datanodeUuid=c389dc97-6b42-4180-b435-b09f8c9225d8, infoPort=35234, infoSecurePort=0, ipcPort=45879, storageInfo=lv=-56;cid=testClusterID;nsid=498589;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 448737 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x114b725758c3fa: from storage DS-ebcdd22b-f5de-4221-8390-8d7bcf03ff21 node DatanodeRegistration(127.0.0.1:42140, datanodeUuid=8df7c683-4f4a-4783-8354-530639098557, infoPort=40686, infoSecurePort=0, ipcPort=37135, storageInfo=lv=-56;cid=testClusterID;nsid=498589;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 450767 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 450768 INFO  (Thread-482) [    ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 450768 INFO  (Thread-482) [    ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 450812 ERROR (Thread-482) [    ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 450868 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.ZkTestServer start zk server on port:36994
   [junit4]   2> 450917 INFO  (zkConnectionManagerCallback-76-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 450941 INFO  (zkConnectionManagerCallback-78-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 450982 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 451003 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
   [junit4]   2> 451005 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 451019 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
   [junit4]   2> 451022 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
   [junit4]   2> 451037 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
   [junit4]   2> 451047 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
   [junit4]   2> 451092 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 451094 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 451095 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
   [junit4]   2> 451097 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractZkTestCase put /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
   [junit4]   2> 451099 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.AbstractFullDistribZkTestBase Will use NRT replicas unless explicitly asked otherwise
   [junit4]   2> 451544 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 2017-11-22T03:27:37+06:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 451545 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 451545 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 451545 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session Scavenging every 600000ms
   [junit4]   2> 451547 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@1b4a0f76{/,null,AVAILABLE}
   [junit4]   2> 451547 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.AbstractConnector Started ServerConnector@237916c6{HTTP/1.1,[http/1.1]}{127.0.0.1:38253}
   [junit4]   2> 451547 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.Server Started @451695ms
   [junit4]   2> 451547 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://lucene2-us-west.apache.org:44838/hdfs__lucene2-us-west.apache.org_44838__home_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001_tempDir-002_control_data, replicaType=NRT, hostContext=/, hostPort=38253, coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/control-001/cores}
   [junit4]   2> 451548 ERROR (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 451548 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 451548 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.0.0
   [junit4]   2> 451548 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 451548 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 451548 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2018-03-31T21:10:31.266Z
   [junit4]   2> 451563 INFO  (zkConnectionManagerCallback-80-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 451565 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 451565 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/control-001/solr.xml
   [junit4]   2> 451569 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 451569 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 451570 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 451594 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:36994/solr
   [junit4]   2> 451608 INFO  (zkConnectionManagerCallback-84-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 451649 INFO  (zkConnectionManagerCallback-86-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 452230 INFO  (TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 452230 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:38253_
   [junit4]   2> 452241 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.c.Overseer Overseer (id=72376625345331204-127.0.0.1:38253_-n_0000000000) 
starting
   [junit4]   2> 452283 INFO  (zkConnectionManagerCallback-91-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 452310 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:36994/solr ready
   [junit4]   2> 452391 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:38253_
   [junit4]   2> 452529 INFO  (zkCallback-90-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 452610 INFO  (zkCallback-85-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 454063 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 454114 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 454114 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 454115 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:38253_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/control-001/cores
   [junit4]   2> 454269 INFO  (zkConnectionManagerCallback-95-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 454272 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 454273 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:36994/solr ready
   [junit4]   2> 454296 INFO  (qtp364414420-970) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:38253_&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 454371 INFO  (OverseerThreadFactory-444-thread-1) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Create collection control_collection
   [junit4]   2> 454665 INFO  (qtp364414420-974) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 454666 INFO  (qtp364414420-974) [n:127.0.0.1:38253_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 454851 INFO  (zkCallback-85-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 455803 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 455876 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.IndexSchema [control_collection_shard1_replica_n1] Schema name=test
   [junit4]   2> 456444 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 456945 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' 
using configuration from collection control_collection, trusted=true
   [junit4]   2> 456946 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.control_collection.shard1.replica_n1' (registry 
'solr.core.control_collection.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 457019 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home
   [junit4]   2> 457019 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 457019 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 457037 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore 
at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/control-001/cores/control_collection_shard1_replica_n1],
 
dataDir=[hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/control_collection/core_node2/data/]
   [junit4]   2> 457039 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
   [junit4]   2> 457129 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 457129 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 457129 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 461378 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 461418 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/control_collection/core_node2/data
   [junit4]   2> 461634 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/control_collection/core_node2/data/index
   [junit4]   2> 461740 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 461740 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 461740 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 461848 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 461849 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=16, maxMergeAtOnceExplicit=43, maxMergedSegmentMB=36.146484375, 
floorSegmentMB=1.333984375, forceMergeDeletesPctAllowed=11.672759429152403, 
segmentsPerTier=28.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.8725608275807664
   [junit4]   2> 463191 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39359 is added to 
blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-ee7264bf-937b-4e6e-93e6-4f63bbd65b69:NORMAL:127.0.0.1:42140|RBW],
 
ReplicaUC[[DISK]DS-92002ffe-1009-4380-a0c1-9d12af6376a6:NORMAL:127.0.0.1:39359|RBW]]}
 size 69
   [junit4]   2> 463203 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:42140 is added to 
blk_1073741825_1001 size 69
   [junit4]   2> 463669 WARN  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 463970 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 463970 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 463971 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 464087 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 464088 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 464112 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=11, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=2.1845703125, 
noCFSRatio=0.14266876503527906]
   [junit4]   2> 464530 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@427ef202[control_collection_shard1_replica_n1] main]
   [junit4]   2> 464575 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 464576 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 464592 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 464612 INFO  
(searcherExecutor-447-thread-1-processing-n:127.0.0.1:38253_ 
x:control_collection_shard1_replica_n1 c:control_collection s:shard1) 
[n:127.0.0.1:38253_ c:control_collection s:shard1  
x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1] Registered new searcher 
Searcher@427ef202[control_collection_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 464612 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1596489092908974080
   [junit4]   2> 464618 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/control_collection/terms/shard1 to Terms{values={core_node2=0}, 
version=0}
   [junit4]   2> 464670 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 464670 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 464670 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:38253/control_collection_shard1_replica_n1/
   [junit4]   2> 464670 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 464670 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy 
http://127.0.0.1:38253/control_collection_shard1_replica_n1/ has no replicas
   [junit4]   2> 464670 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 464703 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:38253/control_collection_shard1_replica_n1/ shard1
   [junit4]   2> 464705 INFO  (zkCallback-85-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 464756 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 464759 INFO  (qtp364414420-974) [n:127.0.0.1:38253_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=10093
   [junit4]   2> 464870 INFO  (qtp364414420-970) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 464882 INFO  (zkCallback-85-thread-1) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 465873 INFO  (qtp364414420-970) [n:127.0.0.1:38253_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:38253_&wt=javabin&version=2}
 status=0 QTime=11576
   [junit4]   2> 465926 INFO  (zkConnectionManagerCallback-99-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 465949 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 465962 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:36994/solr ready
   [junit4]   2> 465998 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.ChaosMonkey 
monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 466017 INFO  (qtp364414420-974) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 466033 INFO  (OverseerThreadFactory-444-thread-2) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Create collection collection1
   [junit4]   2> 466033 INFO  
(OverseerCollectionConfigSetProcessor-72376625345331204-127.0.0.1:38253_-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 466034 WARN  (OverseerThreadFactory-444-thread-2) [    ] 
o.a.s.c.a.c.CreateCollectionCmd It is unusual to create a collection 
(collection1) without cores.
   [junit4]   2> 466263 INFO  (qtp364414420-974) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 466263 INFO  (qtp364414420-974) [n:127.0.0.1:38253_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2}
 status=0 QTime=246
   [junit4]   2> 467230 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-1-001
 of type NRT
   [junit4]   2> 467232 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.Server 
jetty-9.4.8.v20171121, build timestamp: 2017-11-22T03:27:37+06:00, git hash: 
82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 467253 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
   [junit4]   2> 467253 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session No 
SessionScavenger set, using defaults
   [junit4]   2> 467253 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session 
Scavenging every 600000ms
   [junit4]   2> 467255 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@225638d5{/,null,AVAILABLE}
   [junit4]   2> 467255 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@5c01a9c3{HTTP/1.1,[http/1.1]}{127.0.0.1:46254}
   [junit4]   2> 467255 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.Server 
Started @467402ms
   [junit4]   2> 467255 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://lucene2-us-west.apache.org:44838/hdfs__lucene2-us-west.apache.org_44838__home_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001_tempDir-002_jetty1,
 solrconfig=solrconfig.xml, hostContext=/, hostPort=46254, 
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-1-001/cores}
   [junit4]   2> 467255 ERROR 
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 467255 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 467256 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 467256 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 467256 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 467256 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-03-31T21:10:46.974Z
   [junit4]   2> 467323 INFO  (zkConnectionManagerCallback-101-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 467329 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 467329 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-1-001/solr.xml
   [junit4]   2> 467346 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 467346 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 467381 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984, but no JMX 
reporters were configured - adding default JMX reporter.
   [junit4]   2> 467418 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:36994/solr
   [junit4]   2> 467542 INFO  (zkConnectionManagerCallback-105-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 467701 INFO  (zkConnectionManagerCallback-107-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 467719 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 467731 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 467741 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 467741 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:46254_
   [junit4]   2> 467743 INFO  (zkCallback-90-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 467778 INFO  (zkCallback-106-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 467778 INFO  (zkCallback-85-thread-2) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 467778 INFO  (zkCallback-98-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 468035 INFO  
(OverseerCollectionConfigSetProcessor-72376625345331204-127.0.0.1:38253_-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 468203 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 468253 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 468253 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 468255 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-1-001/cores
   [junit4]   2> 468260 INFO  (zkConnectionManagerCallback-112-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 468307 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 468310 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:46254_    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:36994/solr ready
   [junit4]   2> 468436 INFO  (qtp364414420-972) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:46254_&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 468486 INFO  (OverseerThreadFactory-444-thread-3) [    ] 
o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:46254_ for creating new 
replica
   [junit4]   2> 468517 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n21&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 468618 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 468662 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.s.IndexSchema 
[collection1_shard1_replica_n21] Schema name=test
   [junit4]   2> 469171 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 469269 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_n21' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 469270 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_n21' (registry 
'solr.core.collection1.shard1.replica_n21') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 469270 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home
   [junit4]   2> 469270 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 469270 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 469270 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrCore 
[[collection1_shard1_replica_n21] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-1-001/cores/collection1_shard1_replica_n21],
 
dataDir=[hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node22/data/]
   [junit4]   2> 469284 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node22/data/snapshot_metadata
   [junit4]   2> 469321 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 469321 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 469321 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 469367 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 469368 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node22/data
   [junit4]   2> 469430 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node22/data/index
   [junit4]   2> 469449 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 469449 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 469449 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 469486 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 469486 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=16, maxMergeAtOnceExplicit=43, maxMergedSegmentMB=36.146484375, 
floorSegmentMB=1.333984375, forceMergeDeletesPctAllowed=11.672759429152403, 
segmentsPerTier=28.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.8725608275807664
   [junit4]   2> 469703 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39359 is added to 
blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-ee7264bf-937b-4e6e-93e6-4f63bbd65b69:NORMAL:127.0.0.1:42140|RBW],
 
ReplicaUC[[DISK]DS-ad2598fe-de93-416d-989c-ae7c1b3b3737:NORMAL:127.0.0.1:39359|FINALIZED]]}
 size 0
   [junit4]   2> 469725 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:42140 is added to 
blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-ad2598fe-de93-416d-989c-ae7c1b3b3737:NORMAL:127.0.0.1:39359|FINALIZED],
 
ReplicaUC[[DISK]DS-ebcdd22b-f5de-4221-8390-8d7bcf03ff21:NORMAL:127.0.0.1:42140|FINALIZED]]}
 size 0
   [junit4]   2> 469795 WARN  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 470441 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 470441 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 470441 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 470608 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 470608 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 470638 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=11, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=2.1845703125, 
noCFSRatio=0.14266876503527906]
   [junit4]   2> 470794 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@2a9baf67[collection1_shard1_replica_n21] main]
   [junit4]   2> 470815 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 470816 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 470816 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 470817 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1596489099415388160
   [junit4]   2> 470905 INFO  
(searcherExecutor-458-thread-1-processing-n:127.0.0.1:46254_ 
x:collection1_shard1_replica_n21 c:collection1 s:shard1) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrCore 
[collection1_shard1_replica_n21] Registered new searcher 
Searcher@2a9baf67[collection1_shard1_replica_n21] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 470908 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.ZkShardTerms 
Successful update of terms at /collections/collection1/terms/shard1 to 
Terms{values={core_node22=0}, version=0}
   [junit4]   2> 470927 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 470927 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 470927 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:46254/collection1_shard1_replica_n21/
   [junit4]   2> 470927 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 470927 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SyncStrategy 
http://127.0.0.1:46254/collection1_shard1_replica_n21/ has no replicas
   [junit4]   2> 470927 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 470947 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:46254/collection1_shard1_replica_n21/ shard1
   [junit4]   2> 470998 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 471000 INFO  (qtp508768157-1033) [n:127.0.0.1:46254_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n21&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2483
   [junit4]   2> 471021 INFO  (qtp364414420-972) [n:127.0.0.1:38253_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:46254_&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2}
 status=0 QTime=2585
   [junit4]   2> 472388 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-2-001
 of type NRT
   [junit4]   2> 472389 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.Server 
jetty-9.4.8.v20171121, build timestamp: 2017-11-22T03:27:37+06:00, git hash: 
82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 472390 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
   [junit4]   2> 472390 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session No 
SessionScavenger set, using defaults
   [junit4]   2> 472402 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.session 
Scavenging every 660000ms
   [junit4]   2> 472402 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@75f812b4{/,null,AVAILABLE}
   [junit4]   2> 472403 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@6853344b{HTTP/1.1,[http/1.1]}{127.0.0.1:43625}
   [junit4]   2> 472403 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.e.j.s.Server 
Started @472550ms
   [junit4]   2> 472403 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://lucene2-us-west.apache.org:44838/hdfs__lucene2-us-west.apache.org_44838__home_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001_tempDir-002_jetty2,
 replicaType=NRT, solrconfig=solrconfig.xml, hostContext=/, hostPort=43625, 
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-2-001/cores}
   [junit4]   2> 472403 ERROR 
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 472423 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 472423 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 472423 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 472423 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 472423 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-03-31T21:10:52.141Z
   [junit4]   2> 472460 INFO  (zkConnectionManagerCallback-114-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 472475 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 472475 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-2-001/solr.xml
   [junit4]   2> 472512 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 472512 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 472513 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.SolrXmlConfig 
MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984, but no JMX 
reporters were configured - adding default JMX reporter.
   [junit4]   2> 472514 INFO  
(OverseerCollectionConfigSetProcessor-72376625345331204-127.0.0.1:38253_-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000004 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 472550 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:36994/solr
   [junit4]   2> 472658 INFO  (zkConnectionManagerCallback-118-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 472885 INFO  (zkConnectionManagerCallback-120-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 473163 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 473175 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 473210 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 473210 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:43625_
   [junit4]   2> 473226 INFO  (zkCallback-106-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 473227 INFO  (zkCallback-111-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 473227 INFO  (zkCallback-90-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 473247 INFO  (zkCallback-98-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 473396 INFO  (zkCallback-119-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 473413 INFO  (zkCallback-85-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 475534 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 475572 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 475572 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 475607 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-2-001/cores
   [junit4]   2> 475691 INFO  (zkConnectionManagerCallback-125-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 475694 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
   [junit4]   2> 475707 INFO  
(TEST-StressHdfsTest.test-seed#[D09E26B97E1D25E3]) [n:127.0.0.1:43625_    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:36994/solr ready
   [junit4]   2> 476127 INFO  (qtp364414420-974) [n:127.0.0.1:38253_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:43625_&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 476148 INFO  (OverseerThreadFactory-444-thread-4) [    ] 
o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:43625_ for creating new 
replica
   [junit4]   2> 476200 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n23&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 477475 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 477696 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.s.IndexSchema 
[collection1_shard1_replica_n23] Schema name=test
   [junit4]   2> 478443 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 478566 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_n23' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 478566 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_n23' (registry 
'solr.core.collection1.shard1.replica_n23') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@11c11984
   [junit4]   2> 478567 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home
   [junit4]   2> 478567 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 478567 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 478567 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.c.SolrCore 
[[collection1_shard1_replica_n23] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/shard-2-001/cores/collection1_shard1_replica_n23],
 
dataDir=[hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node24/data/]
   [junit4]   2> 478568 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node24/data/snapshot_metadata
   [junit4]   2> 478606 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 478606 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 478606 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 479218 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 479237 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node24/data
   [junit4]   2> 479442 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:44838/solr_hdfs_home/collection1/core_node24/data/index
   [junit4]   2> 479465 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 479465 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 479465 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 481044 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 481044 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=16, maxMergeAtOnceExplicit=43, maxMergedSegmentMB=36.146484375, 
floorSegmentMB=1.333984375, forceMergeDeletesPctAllowed=11.672759429152403, 
segmentsPerTier=28.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.8725608275807664
   [junit4]   2> 481798 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:42140 is added to 
blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-92002ffe-1009-4380-a0c1-9d12af6376a6:NORMAL:127.0.0.1:39359|RBW],
 
ReplicaUC[[DISK]DS-ee7264bf-937b-4e6e-93e6-4f63bbd65b69:NORMAL:127.0.0.1:42140|FINALIZED]]}
 size 0
   [junit4]   2> 481818 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:39359 is added to 
blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-92002ffe-1009-4380-a0c1-9d12af6376a6:NORMAL:127.0.0.1:39359|RBW],
 
ReplicaUC[[DISK]DS-ee7264bf-937b-4e6e-93e6-4f63bbd65b69:NORMAL:127.0.0.1:42140|FINALIZED]]}
 size 0
   [junit4]   2> 481979 WARN  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 482520 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 482520 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 482520 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 482601 INFO  (qtp1803150723-1078) [n:127.0.0.1:43625_ 
c:collection1 s:shard1  x:collection1_shard1_replica_n23] o.a.s.u.Commit

[...truncated too long message...]

s/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data2/]]
  heartbeating to lucene2-us-west.apache.org/127.0.0.1:44838) [    ] 
o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager 
interrupted
   [junit4]   2> 1079702 WARN  (DataNode: 
[[[DISK]file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data1/,
 
[DISK]file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001/tempDir-001/hdfsBaseDir/data/data2/]]
  heartbeating to lucene2-us-west.apache.org/127.0.0.1:44838) [    ] 
o.a.h.h.s.d.DataNode Ending block pool service for: Block pool 
BP-1134351673-127.0.0.1-1522530610393 (Datanode Uuid 
c389dc97-6b42-4180-b435-b09f8c9225d8) service to 
lucene2-us-west.apache.org/127.0.0.1:44838
   [junit4]   2> 1079904 INFO  
(SUITE-StressHdfsTest-seed#[D09E26B97E1D25E3]-worker) [    ] o.m.log Stopped 
HttpServer2$selectchannelconnectorwithsafestar...@lucene2-us-west.apache.org:0
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_D09E26B97E1D25E3-001
   [junit4]   2> Mar 31, 2018 9:21:00 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 34 leaked 
thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{multiDefault=FSTOrd50, id=FST50, text=PostingsFormat(name=Direct), 
txt_t=PostingsFormat(name=Direct)}, 
docValues:{range_facet_l_dv=DocValuesFormat(name=Direct), 
_version_=DocValuesFormat(name=Memory), 
intDefault=DocValuesFormat(name=Memory), id_i1=DocValuesFormat(name=Memory), 
range_facet_i_dv=DocValuesFormat(name=Lucene70), 
intDvoDefault=DocValuesFormat(name=Direct), 
range_facet_l=DocValuesFormat(name=Lucene70), 
timestamp=DocValuesFormat(name=Lucene70)}, maxPointsInLeafNode=1390, 
maxMBSortInHeap=6.367370283873199, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@46736205),
 locale=ar, timezone=America/Goose_Bay
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 
1.8.0_152 (64-bit)/cpus=4,threads=7,free=216929408,total=512229376
   [junit4]   2> NOTE: All tests run in this JVM: [TestCollationFieldDocValues, 
TestReloadDeadlock, TestSolrQueryParser, TestDownShardTolerantSearch, 
TestPKIAuthenticationPlugin, SimpleMLTQParserTest, TestRestoreCore, 
TestGroupingSearch, IndexBasedSpellCheckerTest, HighlighterMaxOffsetTest, 
CoreMergeIndexesAdminHandlerTest, TestPseudoReturnFields, 
DistributedQueryElevationComponentTest, TestShardHandlerFactory, 
TestJsonFacetsWithNestedObjects, NumberUtilsTest, StressHdfsTest]
   [junit4] Completed [95/795 (1!)] on J0 in 665.20s, 1 test, 1 failure <<< 
FAILURES!

[...truncated 904 lines...]
   [junit4] Suite: org.apache.solr.uninverting.TestDocTermOrds
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestDocTermOrds 
-Dtests.method=testTriggerUnInvertLimit -Dtests.seed=D09E26B97E1D25E3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-EC -Dtests.timezone=Africa/Banjul -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR    116s J0 | TestDocTermOrds.testTriggerUnInvertLimit <<<
   [junit4]    > Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]    >        at __randomizedtesting.SeedInfo.seed([D09E26B97E1D25E3:E32C0E7D73AAFF54]:0)
   [junit4]    >        at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:78)
   [junit4]    >        at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:51)
   [junit4]    >        at org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:164)
   [junit4]    >        at org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:150)
   [junit4]    >        at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:278)
   [junit4]    >        at org.apache.lucene.store.MockIndexOutputWrapper.copyBytes(MockIndexOutputWrapper.java:165)
   [junit4]    >        at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:100)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5051)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4541)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4138)
   [junit4]    >        at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2332)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5144)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1776)
   [junit4]    >        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1465)
   [junit4]    >        at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
   [junit4]    >        at org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit(TestDocTermOrds.java:167)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {field=PostingsFormat(name=LuceneVarGapDocFreqInterval), foo=PostingsFormat(name=LuceneVarGapDocFreqInterval), id=BlockTreeOrds(blocksize=128)}, docValues:{}, maxPointsInLeafNode=1301, maxMBSortInHeap=5.485367135353044, sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@40ced376), locale=es-EC, timezone=Africa/Banjul
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=97054136,total=477626368
   [junit4]   2> NOTE: All tests run in this JVM: [TestCollationFieldDocValues, 
TestReloadDeadlock, TestSolrQueryParser, TestDownShardTolerantSearch, 
TestPKIAuthenticationPlugin, SimpleMLTQParserTest, TestRestoreCore, 
TestGroupingSearch, IndexBasedSpellCheckerTest, HighlighterMaxOffsetTest, 
CoreMergeIndexesAdminHandlerTest, TestPseudoReturnFields, 
DistributedQueryElevationComponentTest, TestShardHandlerFactory, 
TestJsonFacetsWithNestedObjects, NumberUtilsTest, StressHdfsTest, 
BaseCdcrDistributedZkTest, OutputWriterTest, TestSort, TestChildDocTransformer, 
TestDocumentBuilder, TestDelegationWithHadoopAuth, 
HdfsTlogReplayBufferedWhileIndexingTest, SolrXmlInZkTest, 
SolrIndexSplitterTest, TestIBSimilarityFactory, UtilsToolTest, 
OpenCloseCoreStressTest, TestDefaultStatsCache, BadIndexSchemaTest, TestJoin, 
TestCorePropertiesReload, TestFastOutputStream, ConfigSetsAPITest, 
HdfsCollectionsAPIDistributedZkTest, TestConfigSetsAPIExclusivity, 
TestSolrFieldCacheBean, ConnectionReuseTest, LIRRollingUpdatesTest, 
DirectUpdateHandlerTest, QueryResultKeyTest, BasicDistributedZkTest, 
TestMiniSolrCloudClusterSSL, TestDynamicLoading, TestStressLucene, 
PeerSyncWithIndexFingerprintCachingTest, TestComplexPhraseLeadingWildcard, 
TestFieldCacheSort, SmileWriterTest, TestCloudDeleteByQuery, 
NodeAddedTriggerIntegrationTest, RollingRestartTest, DebugComponentTest, 
TestRandomRequestDistribution, TestSolrConfigHandlerCloud, UUIDFieldTest, 
MoveReplicaHDFSTest, TestRestManager, TestLegacyTerms, AnalyticsQueryTest, 
TestWordDelimiterFilterFactory, UpdateParamsTest, EchoParamsTest, 
RequestHandlersTest, DistributedDebugComponentTest, CursorPagingTest, 
XmlUpdateRequestHandlerTest, BigEndianAscendingWordSerializerTest, 
TestFieldResource, TestDocSet, TestNumericTerms64, TestCrossCoreJoin, 
TestLargeCluster, TestSchemaVersionResource, SchemaApiFailureTest, 
TestDeleteCollectionOnDownNodes, HighlighterConfigTest, 
AddSchemaFieldsUpdateProcessorFactoryTest, TestTolerantUpdateProcessorCloud, 
ComputePlanActionTest, SynonymTokenizerTest, TestConfig, BufferStoreTest, 
TermVectorComponentDistributedTest, SparseHLLTest, SpellCheckCollatorTest, 
CollectionsAPIDistributedZkTest, TestCloudNestedDocsSort, TestExactStatsCache, 
PluginInfoTest, TestJavabinTupleStreamParser, CdcrVersionReplicationTest, 
DistributedFacetPivotLargeTest, TestPullReplica, TestScoreJoinQPScore, 
TestSchemaManager, TestBulkSchemaConcurrent, HttpPartitionOnCommitTest, 
NodeMutatorTest, TestDocTermOrds]
   [junit4] Completed [350/795 (2!)] on J0 in 135.10s, 10 tests, 1 error <<< FAILURES!

[...truncated 1650 lines...]
   [junit4] JVM J0: stdout was not empty, see: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/temp/junit4-J0-20180331_210259_4441441195043301222483.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) ----
   [junit4] java.lang.OutOfMemoryError: Java heap space
   [junit4] Dumping heap to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/heapdumps/java_pid3574.hprof ...
   [junit4] Heap dump file created [465518051 bytes in 2.398 secs]
   [junit4] <<< JVM J0: EOF ----

[...truncated 9303 lines...]
BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/build.xml:651: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/build.xml:585: Some of the tests produced a heap dump, but did not fail. Maybe a suppressed OutOfMemoryError? Dumps created:
* java_pid19826.hprof
* java_pid3574.hprof
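The .hprof files listed above can be examined once copied from the build machine. The sketch below assembles a jhat invocation (jhat ships with JDK 8, matching the 1.8.0_152 JVM in the log); the tool choice and the 4 GB jhat heap are suggestions sized for the ~465 MB dump reported earlier, not part of the build output — for dumps this large, Eclipse MAT is often more practical.

```shell
# Sketch: view one of the reported heap dumps with JDK 8's jhat.
# The dump filename comes from the build log; -J-Xmx4g is a guess sized
# for a ~465 MB .hprof file (jhat needs several times the dump size).
DUMP="java_pid3574.hprof"
CMD="jhat -J-Xmx4g $DUMP"
echo "$CMD"   # run this by hand once the dump is copied locally
```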

Total time: 598 minutes 59 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
