Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1520/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
Could not find collection:delete_data_dir

Stack Trace:
java.lang.AssertionError: Could not find collection:delete_data_dir
        at __randomizedtesting.SeedInfo.seed([378D2E79D9079884:BFD911A377FBF57C]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertNotNull(Assert.java:526)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
        at org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:114)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 1903 lines...]
   [junit4] JVM J0: stdout was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/temp/junit4-J0-20180403_151108_1835239698415570049497.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) ----
   [junit4] codec: FastDecompressionCompressingStoredFields, pf: Direct, dvf: Direct
   [junit4] <<< JVM J0: EOF ----

[...truncated 11798 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.StressHdfsTest
   [junit4]   2> 1361412 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/init-core-data-001
   [junit4]   2> 1361412 WARN  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=13 numCloses=13
   [junit4]   2> 1361412 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 1361413 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 1361413 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 1361461 WARN  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 1361472 WARN  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 1361474 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log jetty-6.1.26
   [junit4]   2> 1361490 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Extract jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs to ./temp/Jetty_localhost_38684_hdfs____qqbl5j/webapp
   [junit4]   2> 1361913 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38684
   [junit4]   2> 1361989 WARN  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 1361990 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log jetty-6.1.26
   [junit4]   2> 1362000 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Extract jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_60957_datanode____.6km8zc/webapp
   [junit4]   2> 1362416 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:60957
   [junit4]   2> 1362453 WARN  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 1362456 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log jetty-6.1.26
   [junit4]   2> 1362467 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Extract jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode to ./temp/Jetty_localhost_55830_datanode____pm7a70/webapp
   [junit4]   2> 1362536 ERROR (DataNode: [[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data1/, [DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data2/]] heartbeating to localhost/127.0.0.1:43392) [    ] o.a.h.h.s.d.DirectoryScanner dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
   [junit4]   2> 1362543 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x51d25ef27ba083: from storage DS-ec3757b4-fa33-4fe0-921e-e2926c90461f node DatanodeRegistration(127.0.0.1:51531, datanodeUuid=1c7aca6a-1a8f-4a60-bd6e-e76829eb5caf, infoPort=40000, infoSecurePort=0, ipcPort=56443, storageInfo=lv=-56;cid=testClusterID;nsid=1644791276;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
   [junit4]   2> 1362543 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x51d25ef27ba083: from storage DS-03d54e3c-f2e2-41f9-91c1-8cf0c9d5e286 node DatanodeRegistration(127.0.0.1:51531, datanodeUuid=1c7aca6a-1a8f-4a60-bd6e-e76829eb5caf, infoPort=40000, infoSecurePort=0, ipcPort=56443, storageInfo=lv=-56;cid=testClusterID;nsid=1644791276;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 1362916 INFO  (SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:55830
   [junit4]   2> 1363145 ERROR (DataNode: [[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data3/, [DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data4/]] heartbeating to localhost/127.0.0.1:43392) [    ] o.a.h.h.s.d.DirectoryScanner dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
   [junit4]   2> 1363168 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x51d25f17c99741: from storage DS-b4d4ad83-2f61-4ecd-abf0-4006c9149c58 node DatanodeRegistration(127.0.0.1:48036, datanodeUuid=9834a91f-1391-41ff-81ff-b7e27abc6ba2, infoPort=45745, infoSecurePort=0, ipcPort=36108, storageInfo=lv=-56;cid=testClusterID;nsid=1644791276;c=0), blocks: 0, hasStaleStorage: true, processing time: 1 msecs
   [junit4]   2> 1363168 INFO  (Block report processor) [    ] BlockStateChange BLOCK* processReport 0x51d25f17c99741: from storage DS-b73b7621-8762-4439-816a-ce3e3796b547 node DatanodeRegistration(127.0.0.1:48036, datanodeUuid=9834a91f-1391-41ff-81ff-b7e27abc6ba2, infoPort=45745, infoSecurePort=0, ipcPort=36108, storageInfo=lv=-56;cid=testClusterID;nsid=1644791276;c=0), blocks: 0, hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 1363587 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1363587 INFO  (Thread-4518) [    ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1363587 INFO  (Thread-4518) [    ] o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 1363590 ERROR (Thread-4518) [    ] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1363687 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.ZkTestServer start zk server on port:39717
   [junit4]   2> 1363690 INFO  (zkConnectionManagerCallback-2008-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1363693 INFO  (zkConnectionManagerCallback-2010-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1363696 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 1363697 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/schema.xml to /configs/conf1/schema.xml
   [junit4]   2> 1363698 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1363699 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
   [junit4]   2> 1363700 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/protwords.txt to /configs/conf1/protwords.txt
   [junit4]   2> 1363700 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/currency.xml to /configs/conf1/currency.xml
   [junit4]   2> 1363701 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml to /configs/conf1/enumsConfig.xml
   [junit4]   2> 1363702 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 1363703 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 1363704 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt to /configs/conf1/old_synonyms.txt
   [junit4]   2> 1363704 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractZkTestCase put /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/test-files/solr/collection1/conf/synonyms.txt to /configs/conf1/synonyms.txt
   [junit4]   2> 1363705 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.AbstractFullDistribZkTestBase Will use TLOG replicas unless explicitly asked otherwise
   [junit4]   2> 1363788 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 2017-11-22T00:27:37+03:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session Scavenging every 660000ms
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@3c58a52d{/,null,AVAILABLE}
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.AbstractConnector Started ServerConnector@1657648f{HTTP/1.1,[http/1.1]}{127.0.0.1:59758}
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server Started @1363834ms
   [junit4]   2> 1363789 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.s.e.JettySolrRunner Jetty properties: {solr.data.dir=hdfs://localhost:43392/hdfs__localhost_43392__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001_tempDir-002_control_data, hostContext=/, hostPort=59758, coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/control-001/cores}
   [junit4]   2> 1363790 ERROR (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be missing or incomplete.
   [junit4]   2> 1363790 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.s.SolrDispatchFilter Using logger factory org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1363790 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 8.0.0
   [junit4]   2> 1363790 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1363790 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1363790 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 2018-04-03T18:07:57.933Z
   [junit4]   2> 1363792 INFO  (zkConnectionManagerCallback-2012-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1363792 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in ZooKeeper)
   [junit4]   2> 1363792 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig Loading container configuration from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/control-001/solr.xml
   [junit4]   2> 1363795 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 1363795 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 1363796 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e, but no JMX reporters were configured - adding default JMX reporter.
   [junit4]   2> 1363799 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:39717/solr
   [junit4]   2> 1363803 INFO  (zkConnectionManagerCallback-2016-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1363805 INFO  (zkConnectionManagerCallback-2018-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1363868 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1363868 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:59758_
   [junit4]   2> 1363869 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.c.Overseer Overseer (id=73566939196555268-127.0.0.1:59758_-n_0000000000) starting
   [junit4]   2> 1363875 INFO  (zkConnectionManagerCallback-2025-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1363876 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39717/solr ready
   [junit4]   2> 1363884 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.c.ZkController Register node as live in ZooKeeper:/live_nodes/127.0.0.1:59758_
   [junit4]   2> 1363892 INFO  (zkCallback-2024-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1363893 INFO  (zkCallback-2017-thread-1) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1364075 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1364082 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1364083 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1364084 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59758_    ] o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/control-001/cores
   [junit4]   2> 1364099 INFO  (zkConnectionManagerCallback-2030-thread-1) [    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1364100 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1364101 INFO  (TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39717/solr ready
   [junit4]   2> 1364102 INFO  (qtp338804798-10096) [n:127.0.0.1:59758_    ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:59758_&wt=javabin&version=2 and sendToOCPQueue=true
   [junit4]   2> 1364103 INFO  (OverseerThreadFactory-2708-thread-1) [    ] o.a.s.c.a.c.CreateCollectionCmd Create collection control_collection
   [junit4]   2> 1364213 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_    ] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 1364213 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_    ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 transient cores
   [junit4]   2> 1364316 INFO  (zkCallback-2017-thread-1) [    ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/control_collection/state.json] for collection [control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 1365225 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1365237 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.IndexSchema [control_collection_shard1_replica_n1] Schema name=test
   [junit4]   2> 1365351 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1365363 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' using configuration from collection control_collection, trusted=true
   [junit4]   2> 1365364 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.control_collection.shard1.replica_n1' (registry 'solr.core.control_collection.shard1.replica_n1') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1365364 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory solr.hdfs.home=hdfs://localhost:43392/solr_hdfs_home
   [junit4]   2> 1365364 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 1365364 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 1365364 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/control-001/cores/control_collection_shard1_replica_n1], dataDir=[hdfs://localhost:43392/solr_hdfs_home/control_collection/core_node2/data/]
   [junit4]   2> 1365365 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:43392/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
   [junit4]   2> 1365376 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 1365376 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 1365376 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 1365807 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 1365810 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:43392/solr_hdfs_home/control_collection/core_node2/data
   [junit4]   2> 1365826 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory creating directory factory for path hdfs://localhost:43392/solr_hdfs_home/control_collection/core_node2/data/index
   [junit4]   2> 1365831 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 1365831 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of [8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 1365831 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 1365837 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 1365837 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ c:control_collection s:shard1  x:control_collection_shard1_replica_n1] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=6, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.0]
   [junit4]   2> 1365855 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48036 is added to 
blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-03d54e3c-f2e2-41f9-91c1-8cf0c9d5e286:NORMAL:127.0.0.1:51531|RBW], ReplicaUC[[DISK]DS-b73b7621-8762-4439-816a-ce3e3796b547:NORMAL:127.0.0.1:48036|FINALIZED]]} size 0
   [junit4]   2> 1365856 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51531 is added to 
blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-03d54e3c-f2e2-41f9-91c1-8cf0c9d5e286:NORMAL:127.0.0.1:51531|RBW], ReplicaUC[[DISK]DS-b73b7621-8762-4439-816a-ce3e3796b547:NORMAL:127.0.0.1:48036|FINALIZED]]} size 0
   [junit4]   2> 1365862 WARN  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 1365903 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 1365903 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1365903 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 1365913 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 1365913 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 1365914 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=6, maxMergeAtOnceExplicit=4, maxMergedSegmentMB=61.3681640625, 
floorSegmentMB=0.7890625, forceMergeDeletesPctAllowed=27.19596942832348, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.19340933567791485]
   [junit4]   2> 1365934 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@30dad8ba[control_collection_shard1_replica_n1] main]
   [junit4]   2> 1365938 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1365938 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1365939 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1365940 INFO  
(searcherExecutor-2711-thread-1-processing-n:127.0.0.1:59758_ 
x:control_collection_shard1_replica_n1 c:control_collection s:shard1) 
[n:127.0.0.1:59758_ c:control_collection s:shard1  
x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1] Registered new searcher 
Searcher@30dad8ba[control_collection_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1365941 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1596749386962960384
   [junit4]   2> 1365950 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/control_collection/terms/shard1 to Terms{values={core_node2=0}, 
version=0}
   [junit4]   2> 1365952 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1365952 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1365952 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:59758/control_collection_shard1_replica_n1/
   [junit4]   2> 1365952 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 1365952 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy 
http://127.0.0.1:59758/control_collection_shard1_replica_n1/ has no replicas
   [junit4]   2> 1365952 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1365955 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:59758/control_collection_shard1_replica_n1/ shard1
   [junit4]   2> 1366056 INFO  (zkCallback-2017-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 1366059 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 1366061 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_ 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=1848
   [junit4]   2> 1366064 INFO  (qtp338804798-10096) [n:127.0.0.1:59758_    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 1366105 INFO  
(OverseerCollectionConfigSetProcessor-73566939196555268-127.0.0.1:59758_-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1366161 INFO  (zkCallback-2017-thread-2) [    ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 1367064 INFO  (qtp338804798-10096) [n:127.0.0.1:59758_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:59758_&wt=javabin&version=2}
 status=0 QTime=2962
   [junit4]   2> 1367068 INFO  (zkConnectionManagerCallback-2035-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1367069 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1367069 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39717/solr ready
   [junit4]   2> 1367069 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.ChaosMonkey 
monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 1367070 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1367071 INFO  (OverseerThreadFactory-2708-thread-2) [    ] 
o.a.s.c.a.c.CreateCollectionCmd Create collection collection1
   [junit4]   2> 1367072 WARN  (OverseerThreadFactory-2708-thread-2) [    ] 
o.a.s.c.a.c.CreateCollectionCmd It is unusual to create a collection 
(collection1) without cores.
   [junit4]   2> 1367276 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 1367276 INFO  (qtp338804798-10100) [n:127.0.0.1:59758_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2}
 status=0 QTime=206
   [junit4]   2> 1367380 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-1-001
 of type TLOG
   [junit4]   2> 1367380 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server 
jetty-9.4.8.v20171121, build timestamp: 2017-11-22T00:27:37+03:00, git hash: 
82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1367381 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
   [junit4]   2> 1367381 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session No 
SessionScavenger set, using defaults
   [junit4]   2> 1367381 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session 
Scavenging every 660000ms
   [junit4]   2> 1367381 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@49d3ae0b{/,null,AVAILABLE}
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@14087f0{HTTP/1.1,[http/1.1]}{127.0.0.1:59302}
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server 
Started @1367427ms
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:43392/hdfs__localhost_43392__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001_tempDir-002_jetty1,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/, hostPort=59302, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-1-001/cores}
   [junit4]   2> 1367382 ERROR 
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1367382 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-04-03T18:08:01.525Z
   [junit4]   2> 1367384 INFO  (zkConnectionManagerCallback-2037-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1367385 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 1367385 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-1-001/solr.xml
   [junit4]   2> 1367388 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 1367388 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 1367389 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e, but no JMX 
reporters were configured - adding default JMX reporter.
   [junit4]   2> 1367392 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:39717/solr
   [junit4]   2> 1367395 INFO  (zkConnectionManagerCallback-2041-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1367399 INFO  (zkConnectionManagerCallback-2043-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1367404 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 1367405 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1367407 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 1367407 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:59302_
   [junit4]   2> 1367410 INFO  (zkCallback-2024-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1367410 INFO  (zkCallback-2017-thread-2) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1367410 INFO  (zkCallback-2034-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1367420 INFO  (zkCallback-2042-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 1367533 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1367541 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1367541 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1367542 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-1-001/cores
   [junit4]   2> 1367545 INFO  (zkConnectionManagerCallback-2050-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1367546 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 1367546 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:59302_    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39717/solr ready
   [junit4]   2> 1367568 INFO  (qtp564771768-10157) [n:127.0.0.1:59302_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:59302_&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1367571 INFO  
(OverseerCollectionConfigSetProcessor-73566939196555268-127.0.0.1:59758_-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1367571 INFO  (OverseerThreadFactory-2708-thread-3) [    ] 
o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:59302_ for creating new 
replica
   [junit4]   2> 1367573 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t21&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 1368587 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1368599 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.s.IndexSchema 
[collection1_shard1_replica_t21] Schema name=test
   [junit4]   2> 1368705 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1368718 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_t21' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 1368719 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_t21' (registry 
'solr.core.collection1.shard1.replica_t21') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1368719 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:43392/solr_hdfs_home
   [junit4]   2> 1368719 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 1368719 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 1368719 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SolrCore 
[[collection1_shard1_replica_t21] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-1-001/cores/collection1_shard1_replica_t21],
 dataDir=[hdfs://localhost:43392/solr_hdfs_home/collection1/core_node22/data/]
   [junit4]   2> 1368720 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:43392/solr_hdfs_home/collection1/core_node22/data/snapshot_metadata
   [junit4]   2> 1368727 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 1368727 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 1368727 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 1368733 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 1368733 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:43392/solr_hdfs_home/collection1/core_node22/data
   [junit4]   2> 1368747 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:43392/solr_hdfs_home/collection1/core_node22/data/index
   [junit4]   2> 1368752 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 1368752 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 1368752 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 1368756 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 1368757 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=6, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.0]
   [junit4]   2> 1368773 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51531 is added to 
blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-b4d4ad83-2f61-4ecd-abf0-4006c9149c58:NORMAL:127.0.0.1:48036|RBW], ReplicaUC[[DISK]DS-ec3757b4-fa33-4fe0-921e-e2926c90461f:NORMAL:127.0.0.1:51531|RBW]]} size 0
   [junit4]   2> 1368774 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48036 is added to 
blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-b4d4ad83-2f61-4ecd-abf0-4006c9149c58:NORMAL:127.0.0.1:48036|RBW], ReplicaUC[[DISK]DS-ec3757b4-fa33-4fe0-921e-e2926c90461f:NORMAL:127.0.0.1:51531|RBW]]} size 0
   [junit4]   2> 1368779 WARN  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 1368825 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 1368825 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1368825 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 1368834 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 1368834 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 1368840 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=6, maxMergeAtOnceExplicit=4, maxMergedSegmentMB=61.3681640625, 
floorSegmentMB=0.7890625, forceMergeDeletesPctAllowed=27.19596942832348, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.19340933567791485]
   [junit4]   2> 1368846 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@6b5077a3[collection1_shard1_replica_t21] main]
   [junit4]   2> 1368847 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1368847 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1368848 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1368849 INFO  
(searcherExecutor-2722-thread-1-processing-n:127.0.0.1:59302_ 
x:collection1_shard1_replica_t21 c:collection1 s:shard1) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SolrCore 
[collection1_shard1_replica_t21] Registered new searcher 
Searcher@6b5077a3[collection1_shard1_replica_t21] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1368849 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1596749390012219392
   [junit4]   2> 1368853 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.ZkShardTerms 
Successful update of terms at /collections/collection1/terms/shard1 to 
Terms{values={core_node22=0}, version=0}
   [junit4]   2> 1368857 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 1368857 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 1368857 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SyncStrategy 
Sync replicas to http://127.0.0.1:59302/collection1_shard1_replica_t21/
   [junit4]   2> 1368857 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 1368857 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.SyncStrategy 
http://127.0.0.1:59302/collection1_shard1_replica_t21/ has no replicas
   [junit4]   2> 1368857 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 1368858 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.ZkController 
collection1_shard1_replica_t21 stopping background replication from leader
   [junit4]   2> 1368860 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:59302/collection1_shard1_replica_t21/ shard1
   [junit4]   2> 1369011 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 1369012 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t21] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t21&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1439
   [junit4]   2> 1369014 INFO  (qtp564771768-10157) [n:127.0.0.1:59302_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:59302_&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1446
   [junit4]   2> 1369100 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-2-001
 of type TLOG
   [junit4]   2> 1369100 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server 
jetty-9.4.8.v20171121, build timestamp: 2017-11-22T00:27:37+03:00, git hash: 
82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1369101 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
   [junit4]   2> 1369101 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session No 
SessionScavenger set, using defaults
   [junit4]   2> 1369101 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session 
Scavenging every 660000ms
   [junit4]   2> 1369102 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6f88eb6d{/,null,AVAILABLE}
   [junit4]   2> 1369102 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@5a7a8185{HTTP/1.1,[http/1.1]}{127.0.0.1:34588}
   [junit4]   2> 1369102 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server 
Started @1369147ms
   [junit4]   2> 1369102 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:43392/hdfs__localhost_43392__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001_tempDir-002_jetty2,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/, hostPort=34588, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-2-001/cores}
   [junit4]   2> 1369102 ERROR 
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1369103 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1369103 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 1369103 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1369103 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1369103 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-04-03T18:08:03.246Z
   [junit4]   2> 1369108 INFO  (zkConnectionManagerCallback-2052-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1369109 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 1369109 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-2-001/solr.xml
   [junit4]   2> 1369112 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 1369112 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
Configuration parameter autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 1369113 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.SolrXmlConfig 
MBean server found: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e, but no JMX 
reporters were configured - adding default JMX reporter.
   [junit4]   2> 1369116 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:39717/solr
   [junit4]   2> 1369117 INFO  (zkConnectionManagerCallback-2056-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1369119 INFO  (zkConnectionManagerCallback-2058-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1369122 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 1369124 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 1369125 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 1369125 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:34588_
   [junit4]   2> 1369126 INFO  (zkCallback-2034-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1369126 INFO  (zkCallback-2042-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1369126 INFO  (zkCallback-2024-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1369127 INFO  (zkCallback-2049-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1369127 INFO  (zkCallback-2017-thread-2) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1369128 INFO  (zkCallback-2057-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 1369280 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1369288 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1369288 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1369289 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.CorePropertiesLocator Found 0 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-2-001/cores
   [junit4]   2> 1369292 INFO  (zkConnectionManagerCallback-2065-thread-1) [    
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1369293 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
   [junit4]   2> 1369293 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [n:127.0.0.1:34588_    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:39717/solr ready
   [junit4]   2> 1369320 INFO  (qtp178214126-10200) [n:127.0.0.1:34588_    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:34588_&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 1369322 INFO  
(OverseerCollectionConfigSetProcessor-73566939196555268-127.0.0.1:59758_-n_0000000000)
 [    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000004 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 1369322 INFO  (OverseerThreadFactory-2708-thread-4) [    ] 
o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:34588_ for creating new 
replica
   [junit4]   2> 1369324 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t23&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG
   [junit4]   2> 1370337 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 1370348 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.s.IndexSchema 
[collection1_shard1_replica_t23] Schema name=test
   [junit4]   2> 1370451 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 1370462 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_t23' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 1370463 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_t23' (registry 
'solr.core.collection1.shard1.replica_t23') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@509d187e
   [junit4]   2> 1370463 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:43392/solr_hdfs_home
   [junit4]   2> 1370463 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 1370463 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 1370463 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.SolrCore 
[[collection1_shard1_replica_t23] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-2-001/cores/collection1_shard1_replica_t23],
 dataDir=[hdfs://localhost:43392/solr_hdfs_home/collection1/core_node24/data/]
   [junit4]   2> 1370464 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:43392/solr_hdfs_home/collection1/core_node24/data/snapshot_metadata
   [junit4]   2> 1370470 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 1370470 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 1370470 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 1370476 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 1370476 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:43392/solr_hdfs_home/collection1/core_node24/data
   [junit4]   2> 1370490 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:43392/solr_hdfs_home/collection1/core_node24/data/index
   [junit4]   2> 1370496 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 1370496 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 1370496 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 1370502 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 1370502 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=6, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.0]
   [junit4]   2> 1370511 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:48036 is added to 
blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-ec3757b4-fa33-4fe0-921e-e2926c90461f:NORMAL:127.0.0.1:51531|RBW],
 
ReplicaUC[[DISK]DS-b73b7621-8762-4439-816a-ce3e3796b547:NORMAL:127.0.0.1:48036|FINALIZED]]}
 size 0
   [junit4]   2> 1370512 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51531 is added to 
blk_1073741827_1003 size 69
   [junit4]   2> 1370516 WARN  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 1370549 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 1370549 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 1370549 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 1370558 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 1370558 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 1370566 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=6, maxMergeAtOnceExplicit=4, maxMergedSegmentMB=61.3681640625, 
floorSegmentMB=0.7890625, forceMergeDeletesPctAllowed=27.19596942832348, 
segmentsPerTier=20.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.19340933567791485
   [junit4]   2> 1370578 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@45fb23a2[collection1_shard1_replica_t23] main]
   [junit4]   2> 1370579 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 1370580 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 1370580 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 1370581 INFO  
(searcherExecutor-2733-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.SolrCore 
[collection1_shard1_replica_t23] Registered new searcher 
Searcher@45fb23a2[collection1_shard1_replica_t23] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 1370581 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1596749391828353024
   [junit4]   2> 1370585 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.ZkShardTerms 
Successful update of terms at /collections/collection1/terms/shard1 to 
Terms{values={core_node24=0, core_node22=0}, version=1}
   [junit4]   2> 1370586 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.c.ZkController 
Core needs to recover:collection1_shard1_replica_t23
   [junit4]   2> 1370586 INFO  
(updateExecutor-2053-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1) [n:127.0.0.1:34588_ 
c:collection1 s:shard1 r:core_node24 x:collection1_shard1_replica_t23] 
o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 1370587 INFO  (qtp178214126-10204) [n:127.0.0.1:34588_ 
c:collection1 s:shard1  x:collection1_shard1_replica_t23] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_t23&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=TLOG}
 status=0 QTime=1262
   [junit4]   2> 1370587 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.RecoveryStrategy Starting recovery 
process. recoveringAfterStartup=true
   [junit4]   2> 1370588 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.RecoveryStrategy ###### 
startupVersions=[[]]
   [junit4]   2> 1370588 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.ZkController 
collection1_shard1_replica_t23 stopping background replication from leader
   [junit4]   2> 1370590 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1 r:core_node22 x:collection1_shard1_replica_t21] 
o.a.s.c.S.Request [collection1_shard1_replica_t21]  webapp= path=/admin/ping 
params={wt=javabin&version=2} hits=0 status=0 QTime=0
   [junit4]   2> 1370590 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_ 
c:collection1 s:shard1 r:core_node22 x:collection1_shard1_replica_t21] 
o.a.s.c.S.Request [collection1_shard1_replica_t21]  webapp= path=/admin/ping 
params={wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 1370606 INFO  (qtp178214126-10200) [n:127.0.0.1:34588_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:34588_&action=ADDREPLICA&collection=collection1&shard=shard1&type=TLOG&wt=javabin&version=2}
 status=0 QTime=1286
   [junit4]   2> 1370607 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.RecoveryStrategy Begin buffering 
updates. core=[collection1_shard1_replica_t23]
   [junit4]   2> 1370607 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.u.UpdateLog Starting to buffer updates. 
HDFSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 1370607 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.RecoveryStrategy Publishing state of 
core [collection1_shard1_replica_t23] as recovering, leader is 
[http://127.0.0.1:59302/collection1_shard1_replica_t21/] and I am 
[http://127.0.0.1:34588/collection1_shard1_replica_t23/]
   [junit4]   2> 1370621 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.ZkShardTerms Successful update of 
terms at /collections/collection1/terms/shard1 to 
Terms{values={core_node24_recovering=0, core_node24=0, core_node22=0}, 
version=2}
   [junit4]   2> 1370629 INFO  
(recoveryExecutor-2054-thread-1-processing-n:127.0.0.1:34588_ 
x:collection1_shard1_replica_t23 c:collection1 s:shard1 r:core_node24) 
[n:127.0.0.1:34588_ c:collection1 s:shard1 r:core_node24 
x:collection1_shard1_replica_t23] o.a.s.c.RecoveryStrategy Sending prep 
recovery command to [http://127.0.0.1:59302]; [WaitForState: 
action=PREPRECOVERY&core=collection1_shard1_replica_t21&nodeName=127.0.0.1:34588_&coreNodeName=core_node24&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 1370630 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_    ] 
o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node24, state: 
recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true, 
maxTime: 183 s
   [junit4]   2> 1370631 INFO  (qtp564771768-10161) [n:127.0.0.1:59302_    ] 
o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, 
shard=shard1, thisCore=collection1_shard1_replica_t21, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=down, localState=active, nodeName=127.0.0.1:34588_, 
coreNodeName=core_node24, onlyIfActiveCheckResult=false, nodeProps: 
core_node24:{"core":"collection1_shard1_replica_t23","base_url":"http://127.0.0.1:34588","node_name":"127.0.0.1:34588_","state":"down","type":"TLOG"}
   [junit4]   2> 1370708 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 3 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-3-001
 of type TLOG
   [junit4]   2> 1370709 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server 
jetty-9.4.8.v20171121, build timestamp: 2017-11-22T00:27:37+03:00, git hash: 
82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 1370710 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session 
DefaultSessionIdManager workerName=node0
   [junit4]   2> 1370710 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session No 
SessionScavenger set, using defaults
   [junit4]   2> 1370710 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.session 
Scavenging every 600000ms
   [junit4]   2> 1370710 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1c9cab23{/,null,AVAILABLE}
   [junit4]   2> 1370710 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@51ab9a53{HTTP/1.1,[http/1.1]}{127.0.0.1:34835}
   [junit4]   2> 1370711 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.e.j.s.Server 
Started @1370755ms
   [junit4]   2> 1370711 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:43392/hdfs__localhost_43392__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-master_checkout_solr_build_solr-core_test_J0_temp_solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001_tempDir-002_jetty3,
 replicaType=TLOG, solrconfig=solrconfig.xml, hostContext=/, hostPort=34835, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/shard-3-001/cores}
   [junit4]   2> 1370711 ERROR 
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1370712 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1370712 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 1370712 INFO  
(TEST-StressHdfsTest.test-seed#[378D2E79D9079884]) [    ] o.

[...truncated too long message...]

48)
   [junit4]   2> 1461468 WARN  
(SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] 
o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called
   [junit4]   2> 1461487 INFO  
(SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
   [junit4]   2> 1461589 WARN  (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data3/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data4/]]
  heartbeating to localhost/127.0.0.1:43392) [    ] 
o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager 
interrupted
   [junit4]   2> 1461589 WARN  (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data3/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data4/]]
  heartbeating to localhost/127.0.0.1:43392) [    ] o.a.h.h.s.d.DataNode Ending 
block pool service for: Block pool BP-1256470555-10.41.0.5-1522778875586 
(Datanode Uuid 9834a91f-1391-41ff-81ff-b7e27abc6ba2) service to 
localhost/127.0.0.1:43392
   [junit4]   2> 1461591 WARN  
(SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] 
o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called
   [junit4]   2> 1461609 INFO  
(SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
   [junit4]   2> 1461711 WARN  (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data1/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data2/]]
  heartbeating to localhost/127.0.0.1:43392) [    ] 
o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager 
interrupted
   [junit4]   2> 1461711 WARN  (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data1/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001/tempDir-001/hdfsBaseDir/data/data2/]]
  heartbeating to localhost/127.0.0.1:43392) [    ] o.a.h.h.s.d.DataNode Ending 
block pool service for: Block pool BP-1256470555-10.41.0.5-1522778875586 
(Datanode Uuid 1c7aca6a-1a8f-4a60-bd6e-e76829eb5caf) service to 
localhost/127.0.0.1:43392
   [junit4]   2> 1461717 INFO  
(SUITE-StressHdfsTest-seed#[378D2E79D9079884]-worker) [    ] o.m.log Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
   [junit4]   2> 1461720 WARN  (548583748@qtp-660170267-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38684) [    ] 
o.a.h.h.HttpServer2 HttpServer Acceptor: isRunning is false. Rechecking.
   [junit4]   2> 1461720 WARN  (548583748@qtp-660170267-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38684) [    ] 
o.a.h.h.HttpServer2 HttpServer Acceptor: isRunning is false
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.hdfs.StressHdfsTest_378D2E79D9079884-001
   [junit4]   2> Apr 03, 2018 6:09:35 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 35 leaked 
thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{multiDefault=PostingsFormat(name=LuceneFixedGap), 
id=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
 text=FSTOrd50, txt_t=FSTOrd50}, 
docValues:{range_facet_l_dv=DocValuesFormat(name=Asserting), 
_version_=DocValuesFormat(name=Lucene70), 
multiDefault=DocValuesFormat(name=Memory), 
intDefault=DocValuesFormat(name=Lucene70), id_i1=DocValuesFormat(name=Memory), 
range_facet_i_dv=DocValuesFormat(name=Lucene70), 
id=DocValuesFormat(name=Asserting), text=DocValuesFormat(name=Direct), 
intDvoDefault=DocValuesFormat(name=Direct), 
range_facet_l=DocValuesFormat(name=Lucene70), 
timestamp=DocValuesFormat(name=Lucene70), txt_t=DocValuesFormat(name=Direct)}, 
maxPointsInLeafNode=960, maxMBSortInHeap=5.819471104401383, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@50a06921),
 locale=ro-RO, timezone=Asia/Calcutta
   [junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
1.8.0_152 (64-bit)/cpus=4,threads=2,free=303832872,total=524812288
   [junit4]   2> NOTE: All tests run in this JVM: 
[TestHighFrequencyDictionaryFactory, SpatialHeatmapFacetsTest, 
TestSubQueryTransformerDistrib, BlockJoinFacetRandomTest, TestDocTermOrds, 
IndexBasedSpellCheckerTest, TestExceedMaxTermLength, TestBinaryField, 
TlogReplayBufferedWhileIndexingTest, HdfsTlogReplayBufferedWhileIndexingTest, 
ChaosMonkeyNothingIsSafeWithPullReplicasTest, TestSizeLimitedDistributedMap, 
TestIndexSearcher, OpenExchangeRatesOrgProviderTest, MigrateRouteKeyTest, 
ConnectionReuseTest, TestTolerantUpdateProcessorCloud, ComputePlanActionTest, 
FullHLLTest, TestQuerySenderNoQuery, SolrIndexSplitterTest, TestCrossCoreJoin, 
TestCSVResponseWriter, TestRestoreCore, TestDynamicLoading, TestLegacyField, 
TestBlobHandler, CloneFieldUpdateProcessorFactoryTest, TestExactStatsCache, 
SpellingQueryConverterTest, TestExtendedDismaxParser, 
TestSolrCloudWithKerberosAlt, RAMDirectoryFactoryTest, NodeLostTriggerTest, 
CursorPagingTest, TestDefaultStatsCache, LeaderFailureAfterFreshStartTest, 
TestManagedSchemaAPI, BadCopyFieldTest, BinaryUpdateRequestHandlerTest, 
SimpleFacetsTest, SuggesterWFSTTest, TestPivotHelperCode, 
SolrRequestParserTest, TestDistribStateManager, TestDelegationWithHadoopAuth, 
TestRemoteStreaming, StatsReloadRaceTest, MoveReplicaHDFSTest, 
TestManagedStopFilterFactory, TestShardHandlerFactory, 
HighlighterMaxOffsetTest, TestFieldCacheWithThreads, TestSchemaNameResource, 
ShardRoutingTest, ConjunctionSolrSpellCheckerTest, TestRandomFaceting, 
DebugComponentTest, ZkFailoverTest, CreateRoutedAliasTest, 
TestStressInPlaceUpdates, CurrencyFieldTypeTest, TestUseDocValuesAsStored, 
TestCloudSearcherWarming, StressHdfsTest]
   [junit4] Completed [339/795 (1!)] on J0 in 110.32s, 1 test, 1 failure <<< 
FAILURES!
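For reference, failures like this can usually be re-run locally using the randomized seed, locale, and timezone reported in the test params above. A sketch of the usual reproduce invocation (exact property names and required flags may vary by branch; this is an assumption based on the standard randomizedtesting properties, not a line copied from this build):

```
ant test -Dtestcase=StressHdfsTest -Dtests.method=test \
    -Dtests.seed=378D2E79D9079884 -Dtests.nightly=true \
    -Dtests.locale=ro-RO -Dtests.timezone=Asia/Calcutta
```

Note that HDFS-based tests like StressHdfsTest are often environment-sensitive, so the seed may not reproduce the failure on a different machine.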

[...truncated 47558 lines...]
-ecj-javadoc-lint-src:
    [mkdir] Created dir: /tmp/ecj956616456
 [ecj-lint] Compiling 1169 source files to /tmp/ecj956616456
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/x1/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/java/org/apache/solr/cloud/autoscaling/NodeLostTrigger.java
 (at line 32)
 [ecj-lint]     import org.apache.solr.client.solrj.cloud.SolrCloudManager;
 [ecj-lint]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import org.apache.solr.client.solrj.cloud.SolrCloudManager is 
never used
 [ecj-lint] ----------
 [ecj-lint] 2. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/java/org/apache/solr/cloud/autoscaling/NodeLostTrigger.java
 (at line 36)
 [ecj-lint]     import org.apache.solr.core.SolrResourceLoader;
 [ecj-lint]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import org.apache.solr.core.SolrResourceLoader is never used
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 3. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/core/src/java/org/apache/solr/cloud/autoscaling/TriggerUtils.java
 (at line 20)
 [ecj-lint]     import java.util.Collection;
 [ecj-lint]            ^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] The import java.util.Collection is never used
 [ecj-lint] ----------
 [ecj-lint] 3 problems (3 errors)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/build.xml:651:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/build.xml:101:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build.xml:685:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/common-build.xml:2089:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/common-build.xml:2128:
 Compile failed; see the compiler error output for details.

Total time: 247 minutes 56 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
