Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.6/5/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.StressHdfsTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.NRTCachingDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:353)  at 
org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3012)  
at org.apache.solr.core.SolrCore.close(SolrCore.java:1546)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:965)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:917)  at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:555)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.NRTCachingDirectory
        at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
        at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
        at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:353)
        at 
org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3012)
        at org.apache.solr.core.SolrCore.close(SolrCore.java:1546)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:965)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:917)
        at 
org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:555)
        at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


        at __randomizedtesting.SeedInfo.seed([FD6F0C19DBF83B16]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.junit.Assert.assertNull(Assert.java:551)
        at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:302)
        at sun.reflect.GeneratedMethodAccessor91.invoke(Unknown Source)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:745)
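For context on the first failure: the trace shows CachingDirectoryFactory registering each directory it hands out with ObjectReleaseTracker, and SolrTestCaseJ4.teardownTestCases asserting at suite teardown that everything tracked was also released; here one NRTCachingDirectory never was. A minimal sketch of that tracking pattern (hypothetical class and method names, not Solr's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the resource-leak-tracking pattern behind
// ObjectReleaseTracker: record each resource when it is handed out,
// drop it on release, and assert at teardown that nothing remains.
public class LeakTracker {
    // Maps each live resource to the stack captured at acquisition,
    // which a fuller report could print to show where the leak began.
    private final Map<Object, Exception> tracked = new ConcurrentHashMap<>();

    public void track(Object resource) {
        tracked.put(resource, new Exception("acquired here"));
    }

    public void release(Object resource) {
        tracked.remove(resource);
    }

    // Returns null when clean, or a message naming the leaked objects --
    // the analogue of the assertNull(...) in the teardown stack above.
    public String checkEmpty() {
        if (tracked.isEmpty()) return null;
        return "found " + tracked.size() + " object(s) that were not released";
    }

    public static void main(String[] args) {
        LeakTracker tracker = new LeakTracker();
        Object dir = new Object(); // stands in for an NRTCachingDirectory
        tracker.track(dir);
        System.out.println(tracker.checkEmpty() == null ? "clean" : "leak");
        tracker.release(dir);
        System.out.println(tracker.checkEmpty() == null ? "clean" : "leak");
        // prints "leak" then "clean"
    }
}
```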


FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:38609/_/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:38609/_/collection1
        at 
__randomizedtesting.SeedInfo.seed([FD6F0C19DBF83B16:753B33C3750456EE]:0)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:621)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
        at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
        at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
        at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
        at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.buildRandomIndex(TestInPlaceUpdatesDistrib.java:1127)
        at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:327)
        at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:154)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
        at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
        at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
        at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
        at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
        at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
        at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
        at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
        at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
        at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
        at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
        at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
        at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
        at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
        at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:515)
        ... 49 more
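For context on the second failure: the root cause is a plain socket read timeout. SolrJ's blocking read on the HTTP response header exceeded the client's configured socket timeout, the JDK threw SocketTimeoutException, and HttpSolrClient wrapped it in the SolrServerException seen above. A self-contained sketch of how that exception arises (illustrative only; it uses a local server that never replies, not an actual Solr instance):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Illustrative sketch of the "Read timed out" mechanism: a client sets a
// read (SO_TIMEOUT) timeout, and when no response arrives within that
// window, the blocking read throws SocketTimeoutException.
public class ReadTimeoutDemo {
    // Connects to a local server socket that is never accepted/answered
    // and returns true if the read hits the configured timeout.
    static boolean readTimesOut(int timeoutMillis) throws IOException {
        try (ServerSocket server = new ServerSocket(0); // ephemeral port
             Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            client.setSoTimeout(timeoutMillis); // read timeout, in milliseconds
            try {
                client.getInputStream().read(); // blocks, then times out
                return false;
            } catch (SocketTimeoutException e) {
                return true; // same condition the trace above reports
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readTimesOut(200) ? "Read timed out" : "unexpected");
        // prints "Read timed out"
    }
}
```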




Build Log:
[...truncated 13366 lines...]
   [junit4] Suite: org.apache.solr.cloud.hdfs.StressHdfsTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/init-core-data-001
   [junit4]   2> 4343636 WARN  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=36 numCloses=36
   [junit4]   2> 4343636 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using TrieFields
   [junit4]   2> 4343646 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 4343647 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 4343682 WARN  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 4343688 WARN  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 4343690 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 4343703 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.2-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_36593_hdfs____7sh3zq/webapp
   [junit4]   2> 4344113 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36593
   [junit4]   2> 4344188 WARN  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 4344190 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 4344204 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.2-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_40419_datanode____.a70ibd/webapp
   [junit4]   2> 4344589 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40419
   [junit4]   2> 4344638 WARN  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 4344639 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 4344654 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.2-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_55593_datanode____s5q02e/webapp
   [junit4]   2> 4344703 INFO  (IPC Server handler 3 on 48626) [    ] 
BlockStateChange BLOCK* processReport: from storage 
DS-0c13f431-6c7e-4b5a-b9a1-f71d68b9a42e node 
DatanodeRegistration(127.0.0.1:51940, 
datanodeUuid=3e3f1ff8-925e-4fe2-9c5f-5fedfd50395f, infoPort=58987, 
infoSecurePort=0, ipcPort=43785, 
storageInfo=lv=-56;cid=testClusterID;nsid=2013320046;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 0 msecs
   [junit4]   2> 4344703 INFO  (IPC Server handler 3 on 48626) [    ] 
BlockStateChange BLOCK* processReport: from storage 
DS-b9a862ce-08e3-44d5-a1bf-2ce8e0683052 node 
DatanodeRegistration(127.0.0.1:51940, 
datanodeUuid=3e3f1ff8-925e-4fe2-9c5f-5fedfd50395f, infoPort=58987, 
infoSecurePort=0, ipcPort=43785, 
storageInfo=lv=-56;cid=testClusterID;nsid=2013320046;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 4345049 INFO  
(SUITE-StressHdfsTest-seed#[FD6F0C19DBF83B16]-worker) [    ] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:55593
   [junit4]   2> 4345167 INFO  (IPC Server handler 1 on 48626) [    ] 
BlockStateChange BLOCK* processReport: from storage 
DS-eb776af6-ce15-45a1-8553-8428b2407452 node 
DatanodeRegistration(127.0.0.1:36543, 
datanodeUuid=0c45b18e-f8ec-4c3b-9edd-49a80f90a6ad, infoPort=39075, 
infoSecurePort=0, ipcPort=45466, 
storageInfo=lv=-56;cid=testClusterID;nsid=2013320046;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 0 msecs
   [junit4]   2> 4345167 INFO  (IPC Server handler 1 on 48626) [    ] 
BlockStateChange BLOCK* processReport: from storage 
DS-58c3ef74-f5d4-483b-b603-da71c367fdb5 node 
DatanodeRegistration(127.0.0.1:36543, 
datanodeUuid=0c45b18e-f8ec-4c3b-9edd-49a80f90a6ad, infoPort=39075, 
infoSecurePort=0, ipcPort=45466, 
storageInfo=lv=-56;cid=testClusterID;nsid=2013320046;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 4345244 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ZkTestServer 
STARTING ZK TEST SERVER
   [junit4]   2> 4345245 INFO  (Thread-67660) [    ] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 4345245 INFO  (Thread-67660) [    ] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 4345251 ERROR (Thread-67660) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 4345345 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ZkTestServer 
start zk server on port:48499
   [junit4]   2> 4345361 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 4345364 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 4345366 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 4345368 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 4345370 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 4345372 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 4345374 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 4345375 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 4345379 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 4345381 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
 to /configs/conf1/old_synonyms.txt
   [junit4]   2> 4345383 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
 to /configs/conf1/synonyms.txt
   [junit4]   2> 4345513 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.SolrTestCaseJ4 
Writing core.properties file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/control-001/cores/collection1
   [junit4]   2> 4345515 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 4345516 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@50dfe5bb{/,null,AVAILABLE}
   [junit4]   2> 4345517 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@39abf055{HTTP/1.1,[http/1.1]}{127.0.0.1:44148}
   [junit4]   2> 4345518 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
Started @4348667ms
   [junit4]   2> 4345518 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:48626/hdfs__localhost_48626__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-6.6_checkout_solr_build_solr-core_test_J2_temp_solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001_tempDir-002_control_data,
 hostContext=/, hostPort=44148, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/control-001/cores}
   [junit4]   2> 4345518 ERROR 
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 4345519 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
6.6.0
   [junit4]   2> 4345519 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 4345519 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 4345519 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-05-16T20:23:18.630Z
   [junit4]   2> 4345522 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 4345522 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/control-001/solr.xml
   [junit4]   2> 4345530 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 4345531 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:48499/solr
   [junit4]   2> 4345554 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:44148_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 4345555 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:44148_    ] 
o.a.s.c.OverseerElectionContext I am going to be the leader 127.0.0.1:44148_
   [junit4]   2> 4345556 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:44148_    ] 
o.a.s.c.Overseer Overseer (id=97974104775524356-127.0.0.1:44148_-n_0000000000) 
starting
   [junit4]   2> 4345564 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:44148_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:44148_
   [junit4]   2> 4345566 INFO  
(zkCallback-15594-thread-1-processing-n:127.0.0.1:44148_) [n:127.0.0.1:44148_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 4345677 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:44148_    ] 
o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/control-001/cores
   [junit4]   2> 4345677 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:44148_    ] 
o.a.s.c.CorePropertiesLocator Cores are: [collection1]
   [junit4]   2> 4345678 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 4345681 INFO  
(OverseerStateUpdate-97974104775524356-127.0.0.1:44148_-n_0000000000) 
[n:127.0.0.1:44148_    ] o.a.s.c.o.ReplicaMutator Assigning new node to shard 
shard=shard1
   [junit4]   2> 4346699 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 6.6.0
   [junit4]   2> 4346717 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.s.IndexSchema 
[collection1] Schema name=test
   [junit4]   2> 4346861 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 4346879 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1' using configuration from collection 
control_collection, trusted=true
   [junit4]   2> 4346879 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:48626/solr_hdfs_home
   [junit4]   2> 4346880 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 4346880 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 4346880 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.c.SolrCore 
[[collection1] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/control-001/cores/collection1],
 
dataDir=[hdfs://localhost:48626/solr_hdfs_home/control_collection/core_node1/data/]
   [junit4]   2> 4346880 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.JmxMonitoredMap JMX monitoring is enabled. Adding Solr mbeans to JMX 
Server: com.sun.jmx.mbeanserver.JmxMBeanServer@750c8bf
   [junit4]   2> 4346880 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/control_collection/core_node1/data/snapshot_metadata
   [junit4]   2> 4346890 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 4346890 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 4346890 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 4346900 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 4346900 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/control_collection/core_node1/data
   [junit4]   2> 4346926 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/control_collection/core_node1/data/index
   [junit4]   2> 4346934 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 4346934 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 4346934 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 4346943 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 4346943 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=49, maxMergeAtOnceExplicit=23, maxMergedSegmentMB=85.140625, 
floorSegmentMB=1.2900390625, forceMergeDeletesPctAllowed=21.2454442398125, 
segmentsPerTier=45.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.7283743607865888]
   [junit4]   2> 4346955 INFO  (IPC Server handler 4 on 48626) [    ] 
BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:36543 is 
added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0c13f431-6c7e-4b5a-b9a1-f71d68b9a42e:NORMAL:127.0.0.1:51940|RBW],
 
ReplicaUC[[DISK]DS-eb776af6-ce15-45a1-8553-8428b2407452:NORMAL:127.0.0.1:36543|FINALIZED]]}
 size 0
   [junit4]   2> 4346956 INFO  (IPC Server handler 5 on 48626) [    ] 
BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51940 is 
added to blk_1073741825_1001{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-eb776af6-ce15-45a1-8553-8428b2407452:NORMAL:127.0.0.1:36543|FINALIZED],
 
ReplicaUC[[DISK]DS-b9a862ce-08e3-44d5-a1bf-2ce8e0683052:NORMAL:127.0.0.1:51940|FINALIZED]]}
 size 0
   [junit4]   2> 4346965 WARN  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 4347015 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 4347015 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 4347015 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 4347029 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 4347029 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 4347031 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=20, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.11624885577571611]
   [junit4]   2> 4347036 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.s.SolrIndexSearcher Opening [Searcher@36ba057c[collection1] main]
   [junit4]   2> 4347038 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 4347039 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 4347039 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.h.ReplicationHandler Commits will be reserved for  10000
   [junit4]   2> 4347041 INFO  
(searcherExecutor-12021-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection   x:collection1] 
o.a.s.c.SolrCore [collection1] Registered new searcher 
Searcher@36ba057c[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 4347043 INFO  
(coreLoadExecutor-12020-thread-1-processing-n:127.0.0.1:44148_) 
[n:127.0.0.1:44148_ c:control_collection   x:collection1] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1567585678292680704
   [junit4]   2> 4347051 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext Enough replicas 
found to continue.
   [junit4]   2> 4347052 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext I may be the new 
leader - try and sync
   [junit4]   2> 4347052 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:44148/collection1/
   [junit4]   2> 4347053 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.SyncStrategy Sync Success - now sync 
replicas to me
   [junit4]   2> 4347053 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.SyncStrategy 
http://127.0.0.1:44148/collection1/ has no replicas
   [junit4]   2> 4347053 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext Found all 
replicas participating in election, clear LIR
   [junit4]   2> 4347058 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.ShardLeaderElectionContext I am the new 
leader: http://127.0.0.1:44148/collection1/ shard1
   [junit4]   2> 4347189 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 4347191 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:48499/solr ready
   [junit4]   2> 4347191 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ChaosMonkey 
monkey: init - expire sessions:false cause connection loss:false
   [junit4]   2> 4347191 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase Creating collection1 with stateFormat=2
   [junit4]   2> 4347209 INFO  
(coreZkRegister-12013-thread-1-processing-n:127.0.0.1:44148_ x:collection1 
c:control_collection) [n:127.0.0.1:44148_ c:control_collection s:shard1 
r:core_node1 x:collection1] o.a.s.c.ZkController I am the leader, no recovery 
necessary
   [junit4]   2> 4347324 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.SolrTestCaseJ4 
Writing core.properties file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-1-001/cores/collection1
   [junit4]   2> 4347325 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-1-001
   [junit4]   2> 4347326 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 4347328 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6c1c42d8{/,null,AVAILABLE}
   [junit4]   2> 4347328 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@5bf353a6{HTTP/1.1,[http/1.1]}{127.0.0.1:54229}
   [junit4]   2> 4347329 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
Started @4350478ms
   [junit4]   2> 4347329 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:48626/hdfs__localhost_48626__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-6.6_checkout_solr_build_solr-core_test_J2_temp_solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001_tempDir-002_jetty1,
 solrconfig=solrconfig.xml, hostContext=/, hostPort=54229, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-1-001/cores}
   [junit4]   2> 4347329 ERROR 
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 4347330 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
6.6.0
   [junit4]   2> 4347330 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 4347330 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 4347330 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-05-16T20:23:20.441Z
   [junit4]   2> 4347336 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 4347336 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-1-001/solr.xml
   [junit4]   2> 4347343 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 4347344 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:48499/solr
   [junit4]   2> 4347357 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:54229_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 4347360 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:54229_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 4347363 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:54229_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:54229_
   [junit4]   2> 4347365 INFO  (zkCallback-15598-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 4347365 INFO  
(zkCallback-15594-thread-1-processing-n:127.0.0.1:44148_) [n:127.0.0.1:44148_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 4347373 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 4347438 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:54229_    ] 
o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-1-001/cores
   [junit4]   2> 4347438 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:54229_    ] 
o.a.s.c.CorePropertiesLocator Cores are: [collection1]
   [junit4]   2> 4347440 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 4347441 INFO  
(OverseerStateUpdate-97974104775524356-127.0.0.1:44148_-n_0000000000) 
[n:127.0.0.1:44148_    ] o.a.s.c.o.ReplicaMutator Assigning new node to shard 
shard=shard1
   [junit4]   2> 4347544 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 4348455 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 6.6.0
   [junit4]   2> 4348473 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.s.IndexSchema 
[collection1] Schema name=test
   [junit4]   2> 4348614 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.s.IndexSchema Loaded 
schema test/1.0 with uniqueid field id
   [junit4]   2> 4348629 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1' using configuration from collection 
collection1, trusted=true
   [junit4]   2> 4348629 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:48626/solr_hdfs_home
   [junit4]   2> 4348629 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Solr Kerberos Authentication disabled
   [junit4]   2> 4348629 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 4348630 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.SolrCore 
[[collection1] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-1-001/cores/collection1],
 dataDir=[hdfs://localhost:48626/solr_hdfs_home/collection1/core_node1/data/]
   [junit4]   2> 4348630 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.JmxMonitoredMap JMX 
monitoring is enabled. Adding Solr mbeans to JMX Server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@750c8bf
   [junit4]   2> 4348630 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/collection1/core_node1/data/snapshot_metadata
   [junit4]   2> 4348640 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 4348640 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Block cache target memory usage, slab size of [8388608] will allocate [1] slabs 
and use ~[8388608] bytes
   [junit4]   2> 4348640 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Creating new single instance HDFS BlockCache
   [junit4]   2> 4348647 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.s.b.BlockDirectory 
Block cache on write is disabled
   [junit4]   2> 4348648 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/collection1/core_node1/data
   [junit4]   2> 4348668 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/collection1/core_node1/data/index
   [junit4]   2> 4348676 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 4348676 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Block cache target memory usage, slab size of [8388608] will allocate [1] slabs 
and use ~[8388608] bytes
   [junit4]   2> 4348676 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Creating new single instance HDFS BlockCache
   [junit4]   2> 4348684 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.s.b.BlockDirectory 
Block cache on write is disabled
   [junit4]   2> 4348684 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.RandomMergePolicy 
RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: 
[TieredMergePolicy: maxMergeAtOnce=49, maxMergeAtOnceExplicit=23, 
maxMergedSegmentMB=85.140625, floorSegmentMB=1.2900390625, 
forceMergeDeletesPctAllowed=21.2454442398125, segmentsPerTier=45.0, 
maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7283743607865888]
   [junit4]   2> 4348697 INFO  (IPC Server handler 2 on 48626) [    ] 
BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:36543 is 
added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0c13f431-6c7e-4b5a-b9a1-f71d68b9a42e:NORMAL:127.0.0.1:51940|RBW],
 
ReplicaUC[[DISK]DS-58c3ef74-f5d4-483b-b603-da71c367fdb5:NORMAL:127.0.0.1:36543|RBW]]}
 size 0
   [junit4]   2> 4348698 INFO  (IPC Server handler 3 on 48626) [    ] 
BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51940 is 
added to blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0c13f431-6c7e-4b5a-b9a1-f71d68b9a42e:NORMAL:127.0.0.1:51940|RBW],
 
ReplicaUC[[DISK]DS-58c3ef74-f5d4-483b-b603-da71c367fdb5:NORMAL:127.0.0.1:36543|RBW]]}
 size 0
   [junit4]   2> 4348706 WARN  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.c.RequestHandlers 
INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class 
= DumpRequestHandler,attributes = {initParams=a, name=/dump, 
class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
   [junit4]   2> 4348781 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.UpdateHandler Using 
UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 4348781 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 4348781 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 4348794 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.CommitTracker Hard 
AutoCommit: disabled
   [junit4]   2> 4348794 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.CommitTracker Soft 
AutoCommit: disabled
   [junit4]   2> 4348797 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.RandomMergePolicy 
RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=20, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.11624885577571611]
   [junit4]   2> 4348801 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.s.SolrIndexSearcher 
Opening [Searcher@40fbf44a[collection1] main]
   [junit4]   2> 4348802 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 4348803 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 4348803 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.h.ReplicationHandler 
Commits will be reserved for  10000
   [junit4]   2> 4348804 INFO  
(searcherExecutor-12032-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1   x:collection1] 
o.a.s.c.SolrCore [collection1] Registered new searcher 
Searcher@40fbf44a[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 4348806 INFO  
(coreLoadExecutor-12031-thread-1-processing-n:127.0.0.1:54229_) 
[n:127.0.0.1:54229_ c:collection1   x:collection1] o.a.s.u.UpdateLog Could not 
find max version in index or recent updates, using new clock 1567585680141320192
   [junit4]   2> 4348814 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.ShardLeaderElectionContext Enough replicas found to 
continue.
   [junit4]   2> 4348814 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.ShardLeaderElectionContext I may be the new leader - try 
and sync
   [junit4]   2> 4348814 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:54229/collection1/
   [junit4]   2> 4348814 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 4348814 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.SyncStrategy http://127.0.0.1:54229/collection1/ has no 
replicas
   [junit4]   2> 4348814 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.ShardLeaderElectionContext Found all replicas 
participating in election, clear LIR
   [junit4]   2> 4348819 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:54229/collection1/ shard1
   [junit4]   2> 4348922 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 4348970 INFO  
(coreZkRegister-12026-thread-1-processing-n:127.0.0.1:54229_ x:collection1 
c:collection1) [n:127.0.0.1:54229_ c:collection1 s:shard1 r:core_node1 
x:collection1] o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 4349074 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [2])
   [junit4]   2> 4349078 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.SolrTestCaseJ4 
Writing core.properties file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-2-001/cores/collection1
   [junit4]   2> 4349079 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 2 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-2-001
   [junit4]   2> 4349079 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 4349081 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6fb98c49{/,null,AVAILABLE}
   [junit4]   2> 4349081 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@3c55f077{HTTP/1.1,[http/1.1]}{127.0.0.1:40744}
   [junit4]   2> 4349082 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
Started @4352231ms
   [junit4]   2> 4349083 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:48626/hdfs__localhost_48626__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-6.6_checkout_solr_build_solr-core_test_J2_temp_solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001_tempDir-002_jetty2,
 solrconfig=solrconfig.xml, hostContext=/, hostPort=40744, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-2-001/cores}
   [junit4]   2> 4349083 ERROR 
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 4349083 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
6.6.0
   [junit4]   2> 4349083 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 4349083 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 4349083 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-05-16T20:23:22.194Z
   [junit4]   2> 4349086 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 4349086 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-2-001/solr.xml
   [junit4]   2> 4349094 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 4349095 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:48499/solr
   [junit4]   2> 4349106 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:40744_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (2)
   [junit4]   2> 4349110 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:40744_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 4349114 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:40744_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:40744_
   [junit4]   2> 4349117 INFO  
(zkCallback-15594-thread-1-processing-n:127.0.0.1:44148_) [n:127.0.0.1:44148_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 4349117 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 4349117 INFO  (zkCallback-15598-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 4349125 INFO  
(zkCallback-15610-thread-1-processing-n:127.0.0.1:40744_) [n:127.0.0.1:40744_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
   [junit4]   2> 4349254 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:40744_    ] 
o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-2-001/cores
   [junit4]   2> 4349254 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:40744_    ] 
o.a.s.c.CorePropertiesLocator Cores are: [collection1]
   [junit4]   2> 4349255 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 4349257 INFO  
(OverseerStateUpdate-97974104775524356-127.0.0.1:44148_-n_0000000000) 
[n:127.0.0.1:44148_    ] o.a.s.c.o.ReplicaMutator Assigning new node to shard 
shard=shard1
   [junit4]   2> 4349359 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 4349359 INFO  
(zkCallback-15610-thread-1-processing-n:127.0.0.1:40744_) [n:127.0.0.1:40744_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 4350272 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 6.6.0
   [junit4]   2> 4350289 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.s.IndexSchema 
[collection1] Schema name=test
   [junit4]   2> 4350427 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.s.IndexSchema Loaded 
schema test/1.0 with uniqueid field id
   [junit4]   2> 4350442 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1' using configuration from collection 
collection1, trusted=true
   [junit4]   2> 4350442 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:48626/solr_hdfs_home
   [junit4]   2> 4350442 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Solr Kerberos Authentication disabled
   [junit4]   2> 4350442 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 4350442 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.SolrCore 
[[collection1] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-2-001/cores/collection1],
 dataDir=[hdfs://localhost:48626/solr_hdfs_home/collection1/core_node2/data/]
   [junit4]   2> 4350443 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.JmxMonitoredMap JMX 
monitoring is enabled. Adding Solr mbeans to JMX Server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@750c8bf
   [junit4]   2> 4350443 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/collection1/core_node2/data/snapshot_metadata
   [junit4]   2> 4350453 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 4350453 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Block cache target memory usage, slab size of [8388608] will allocate [1] slabs 
and use ~[8388608] bytes
   [junit4]   2> 4350453 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Creating new single instance HDFS BlockCache
   [junit4]   2> 4350460 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.s.b.BlockDirectory 
Block cache on write is disabled
   [junit4]   2> 4350461 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/collection1/core_node2/data
   [junit4]   2> 4350481 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
creating directory factory for path 
hdfs://localhost:48626/solr_hdfs_home/collection1/core_node2/data/index
   [junit4]   2> 4350489 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Number of slabs of block cache [1] with direct memory allocation set to [true]
   [junit4]   2> 4350490 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Block cache target memory usage, slab size of [8388608] will allocate [1] slabs 
and use ~[8388608] bytes
   [junit4]   2> 4350490 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.HdfsDirectoryFactory 
Creating new single instance HDFS BlockCache
   [junit4]   2> 4350497 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.s.b.BlockDirectory 
Block cache on write is disabled
   [junit4]   2> 4350497 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.RandomMergePolicy 
RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: 
[TieredMergePolicy: maxMergeAtOnce=49, maxMergeAtOnceExplicit=23, 
maxMergedSegmentMB=85.140625, floorSegmentMB=1.2900390625, 
forceMergeDeletesPctAllowed=21.2454442398125, segmentsPerTier=45.0, 
maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.7283743607865888]
   [junit4]   2> 4350507 INFO  (IPC Server handler 1 on 48626) [    ] 
BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51940 is 
added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-58c3ef74-f5d4-483b-b603-da71c367fdb5:NORMAL:127.0.0.1:36543|RBW],
 
ReplicaUC[[DISK]DS-b9a862ce-08e3-44d5-a1bf-2ce8e0683052:NORMAL:127.0.0.1:51940|RBW]]}
 size 0
   [junit4]   2> 4350508 INFO  (IPC Server handler 2 on 48626) [    ] 
BlockStateChange BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:36543 is 
added to blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-b9a862ce-08e3-44d5-a1bf-2ce8e0683052:NORMAL:127.0.0.1:51940|RBW],
 
ReplicaUC[[DISK]DS-eb776af6-ce15-45a1-8553-8428b2407452:NORMAL:127.0.0.1:36543|FINALIZED]]}
 size 0
   [junit4]   2> 4350517 WARN  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.c.RequestHandlers 
INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class 
= DumpRequestHandler,attributes = {initParams=a, name=/dump, 
class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
   [junit4]   2> 4350571 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.UpdateHandler Using 
UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 4350571 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 4350571 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 4350585 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.CommitTracker Hard 
AutoCommit: disabled
   [junit4]   2> 4350585 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.CommitTracker Soft 
AutoCommit: disabled
   [junit4]   2> 4350587 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.RandomMergePolicy 
RandomMergePolicy wrapping class 
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: 
minMergeSize=1677721, mergeFactor=20, maxMergeSize=2147483648, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.11624885577571611]
   [junit4]   2> 4350591 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.s.SolrIndexSearcher 
Opening [Searcher@52d24adc[collection1] main]
   [junit4]   2> 4350593 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 4350593 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 4350593 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.h.ReplicationHandler 
Commits will be reserved for  10000
   [junit4]   2> 4350595 INFO  
(searcherExecutor-12043-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
c:collection1) [n:127.0.0.1:40744_ c:collection1   x:collection1] 
o.a.s.c.SolrCore [collection1] Registered new searcher 
Searcher@52d24adc[collection1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 4350597 INFO  
(coreLoadExecutor-12042-thread-1-processing-n:127.0.0.1:40744_) 
[n:127.0.0.1:40744_ c:collection1   x:collection1] o.a.s.u.UpdateLog Could not 
find max version in index or recent updates, using new clock 1567585682019319808
   [junit4]   2> 4350601 INFO  
(coreZkRegister-12037-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
c:collection1) [n:127.0.0.1:40744_ c:collection1 s:shard1 r:core_node2 
x:collection1] o.a.s.c.ZkController Core needs to recover:collection1
   [junit4]   2> 4350601 INFO  
(updateExecutor-15607-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 4350602 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Starting recovery process. 
recoveringAfterStartup=true
   [junit4]   2> 4350603 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy ###### startupVersions=[[]]
   [junit4]   2> 4350603 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Begin buffering updates. 
core=[collection1]
   [junit4]   2> 4350603 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.u.UpdateLog Starting to buffer updates. 
HDFSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 4350603 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Publishing state of core 
[collection1] as recovering, leader is [http://127.0.0.1:54229/collection1/] 
and I am [http://127.0.0.1:40744/collection1/]
   [junit4]   2> 4350607 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Sending prep recovery 
command to [http://127.0.0.1:54229]; [WaitForState: 
action=PREPRECOVERY&core=collection1&nodeName=127.0.0.1:40744_&coreNodeName=core_node2&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 4350608 INFO  (qtp916797385-99874) [n:127.0.0.1:54229_    ] 
o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node2, state: 
recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
   [junit4]   2> 4350608 INFO  (qtp916797385-99874) [n:127.0.0.1:54229_    ] 
o.a.s.h.a.PrepRecoveryOp Will wait a max of 183 seconds to see collection1 
(shard1 of collection1) have state: recovering
   [junit4]   2> 4350608 INFO  (qtp916797385-99874) [n:127.0.0.1:54229_    ] 
o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, 
shard=shard1, thisCore=collection1, leaderDoesNotNeedRecovery=false, isLeader? 
true, live=true, checkLive=true, currentState=down, localState=active, 
nodeName=127.0.0.1:40744_, coreNodeName=core_node2, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node2:{"core":"collection1","base_url":"http://127.0.0.1:40744","node_name":"127.0.0.1:40744_","state":"down"}
   [junit4]   2> 4350707 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 4350707 INFO  
(zkCallback-15610-thread-1-processing-n:127.0.0.1:40744_) [n:127.0.0.1:40744_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [3])
   [junit4]   2> 4350890 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.SolrTestCaseJ4 
Writing core.properties file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-3-001/cores/collection1
   [junit4]   2> 4350891 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.AbstractFullDistribZkTestBase create jetty 3 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-3-001
   [junit4]   2> 4350892 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
jetty-9.3.14.v20161028
   [junit4]   2> 4350894 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@3ac360f7{/,null,AVAILABLE}
   [junit4]   2> 4350894 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.e.j.s.AbstractConnector Started 
ServerConnector@12d4ded{HTTP/1.1,[http/1.1]}{127.0.0.1:38700}
   [junit4]   2> 4350895 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.e.j.s.Server 
Started @4354044ms
   [junit4]   2> 4350895 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:48626/hdfs__localhost_48626__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-NightlyTests-6.6_checkout_solr_build_solr-core_test_J2_temp_solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001_tempDir-002_jetty3,
 solrconfig=solrconfig.xml, hostContext=/, hostPort=38700, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-3-001/cores}
   [junit4]   2> 4350896 ERROR 
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 4350896 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
6.6.0
   [junit4]   2> 4350896 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 4350896 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 4350896 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2017-05-16T20:23:24.007Z
   [junit4]   2> 4350899 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 4350899 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.SolrXmlConfig 
Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-3-001/solr.xml
   [junit4]   2> 4350906 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] 
o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: 
socketTimeout=340000&connTimeout=45000&retry=true
   [junit4]   2> 4350907 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [    ] o.a.s.c.ZkContainer 
Zookeeper client=127.0.0.1:48499/solr
   [junit4]   2> 4350917 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:38700_    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (3)
   [junit4]   2> 4350920 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:38700_    ] 
o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 4350922 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:38700_    ] 
o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:38700_
   [junit4]   2> 4350924 INFO  (zkCallback-15598-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 4350925 INFO  
(zkCallback-15610-thread-1-processing-n:127.0.0.1:40744_) [n:127.0.0.1:40744_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 4350925 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 4350925 INFO  
(zkCallback-15594-thread-2-processing-n:127.0.0.1:44148_) [n:127.0.0.1:44148_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 4350926 INFO  
(zkCallback-15617-thread-1-processing-n:127.0.0.1:38700_) [n:127.0.0.1:38700_   
 ] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
   [junit4]   2> 4351035 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:38700_    ] 
o.a.s.c.CorePropertiesLocator Found 1 core definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/../../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_FD6F0C19DBF83B16-001/shard-3-001/cores
   [junit4]   2> 4351035 INFO  
(TEST-StressHdfsTest.test-seed#[FD6F0C19DBF83B16]) [n:127.0.0.1:38700_    ] 
o.a.s.c.CorePropertiesLocator Cores are: [collection1]
   [junit4]   2> 4351036 INFO  
(coreLoadExecutor-12053-thread-1-processing-n:127.0.0.1:38700_) 
[n:127.0.0.1:38700_ c:collection1   x:collection1] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 4351038 INFO  
(OverseerStateUpdate-97974104775524356-127.0.0.1:44148_-n_0000000000) 
[n:127.0.0.1:44148_    ] o.a.s.c.o.ReplicaMutator Assigning new node to shard 
shard=shard1
   [junit4]   2> 4351141 INFO  
(zkCallback-15617-thread-1-processing-n:127.0.0.1:38700_) [n:127.0.0.1:38700_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 4351141 INFO  
(zkCallback-15610-thread-1-processing-n:127.0.0.1:40744_) [n:127.0.0.1:40744_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 4351141 INFO  
(zkCallback-15604-thread-1-processing-n:127.0.0.1:54229_) [n:127.0.0.1:54229_   
 ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/collection1/state.json] for collection [collection1] has 
occurred - updating... (live nodes size: [4])
   [junit4]   2> 4351609 INFO  (qtp916797385-99874) [n:127.0.0.1:54229_    ] 
o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, 
shard=shard1, thisCore=collection1, leaderDoesNotNeedRecovery=false, isLeader? 
true, live=true, checkLive=true, currentState=recovering, localState=active, 
nodeName=127.0.0.1:40744_, coreNodeName=core_node2, 
onlyIfActiveCheckResult=false, nodeProps: 
core_node2:{"core":"collection1","dataDir":"hdfs://localhost:48626/solr_hdfs_home/collection1/core_node2/data/","base_url":"http://127.0.0.1:40744","node_name":"127.0.0.1:40744_","state":"recovering","ulogDir":"hdfs://localhost:48626/solr_hdfs_home/collection1/core_node2/data/tlog"}
   [junit4]   2> 4351609 INFO  (qtp916797385-99874) [n:127.0.0.1:54229_    ] 
o.a.s.h.a.PrepRecoveryOp Waited coreNodeName: core_node2, state: recovering, 
checkLive: true, onlyIfLeader: true for: 1 seconds.
   [junit4]   2> 4351609 INFO  (qtp916797385-99874) [n:127.0.0.1:54229_    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={nodeName=127.0.0.1:40744_&onlyIfLeaderActive=true&core=collection1&coreNodeName=core_node2&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=1001
   [junit4]   2> 4352061 INFO  
(coreLoadExecutor-12053-thread-1-processing-n:127.0.0.1:38700_) 
[n:127.0.0.1:38700_ c:collection1   x:collection1] o.a.s.c.SolrConfig Using 
Lucene MatchVersion: 6.6.0
   [junit4]   2> 4352094 INFO  
(coreLoadExecutor-12053-thread-1-processing-n:127.0.0.1:38700_) 
[n:127.0.0.1:38700_ c:collection1   x:collection1] o.a.s.s.IndexSchema 
[collection1] Schema name=test
   [junit4]   2> 4352110 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.c.RecoveryStrategy Attempting to PeerSync 
from [http://127.0.0.1:54229/collection1/] - recoveringAfterStartup=[true]
   [junit4]   2> 4352110 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.u.PeerSync PeerSync: core=collection1 
url=http://127.0.0.1:40744 START replicas=[http://127.0.0.1:54229/collection1/] 
nUpdates=100
   [junit4]   2> 4352112 INFO  (qtp916797385-99878) [n:127.0.0.1:54229_ 
c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.IndexFingerprint 
IndexFingerprint millis:0.0 result:{maxVersionSpecified=9223372036854775807, 
maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, 
maxDoc=0}
   [junit4]   2> 4352112 INFO  (qtp916797385-99878) [n:127.0.0.1:54229_ 
c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.c.S.Request 
[collection1]  webapp= path=/get 
params={distrib=false&qt=/get&getFingerprint=9223372036854775807&wt=javabin&version=2}
 status=0 QTime=0
   [junit4]   2> 4352115 INFO  
(recoveryExecutor-15608-thread-1-processing-n:127.0.0.1:40744_ x:collection1 
s:shard1 c:collection1 r:core_node2) [n:127.0.0.1:40744_ c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.u.IndexFingerprint IndexFingerprint 
millis:2.0 result:{maxVersionSpecified=9223372036854775807, 
maxVersionEncountered=0, maxInHash=0, versionsHash=0, numVersions=0, numDocs=0, 
maxDoc=0}
   [junit4]   2> 4352116 INFO  (recoveryExecut

[...truncated too long message...]

junit4]   2>    at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
   [junit4]   2>        at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
   [junit4]   2>        at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
   [junit4]   2>        at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
   [junit4]   2>        at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
   [junit4]   2>        at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
   [junit4]   2>        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   [junit4]   2>        at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
   [junit4]   2>        at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
   [junit4]   2>        at 
org.eclipse.jetty.server.Server.handle(Server.java:534)
   [junit4]   2>        at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
   [junit4]   2>        at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
   [junit4]   2>        at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
   [junit4]   2>        at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
   [junit4]   2>        at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
   [junit4]   2>        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
   [junit4]   2>        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
   [junit4]   2>        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
   [junit4]   2>        at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
   [junit4]   2>        at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
   [junit4]   2>        at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> Caused by: java.lang.InterruptedException: sleep interrupted
   [junit4]   2>        at java.lang.Thread.sleep(Native Method)
   [junit4]   2>        at 
org.apache.lucene.store.SlowClosingMockIndexInputWrapper.close(SlowClosingMockIndexInputWrapper.java:39)
   [junit4]   2>        ... 47 more
   [junit4]   2> 
   [junit4]   2> 5493509 INFO  (qtp1291488397-88436) [n:127.0.0.1:44069__ 
c:collection1 s:shard1 r:core_node3 x:collection1] o.a.s.m.SolrMetricManager 
Closing metric reporters for: solr.core.collection1
   [junit4]   2> NOTE: test params are: codec=Lucene62, 
sim=RandomSimilarity(queryNorm=false,coord=crazy): {}, locale=sl-SI, 
timezone=SystemV/YST9YDT
   [junit4]   2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 
1.8.0_121 (64-bit)/cpus=4,threads=1,free=98571024,total=534773760
   [junit4]   2> NOTE: All tests run in this JVM: [ShardRoutingCustomTest, 
WordBreakSolrSpellCheckerTest, TestSolrCLIRunExample, HLLUtilTest, 
TestFiltering, CopyFieldTest, TestClusterProperties, TestFieldCacheVsDocValues, 
TestComplexPhraseLeadingWildcard, TestSubQueryTransformer, 
DirectSolrConnectionTest, TriLevelCompositeIdRoutingTest, 
UpdateRequestProcessorFactoryTest, SampleTest, 
LeaderInitiatedRecoveryOnShardRestartTest, DataDrivenBlockJoinTest, 
TestConfigOverlay, TestAuthorizationFramework, TestLeaderElectionZkExpiry, 
TestSQLHandlerNonCloud, DistributedFacetPivotLargeTest, ConnectionReuseTest, 
TestSystemCollAutoCreate, TestSearchPerf, TestFieldResource, 
SimplePostToolTest, TestSolr4Spatial2, TestSolrIndexConfig, HdfsThreadLeakTest, 
TestSuggestSpellingConverter, TestOverriddenPrefixQueryForCustomFieldType, 
ChangedSchemaMergeTest, SortByFunctionTest, SortSpecParsingTest, 
TestFuzzyAnalyzedSuggestions, SimpleMLTQParserTest, 
TestDeleteCollectionOnDownNodes, SuggesterFSTTest, TestRequestForwarding, 
TestExactSharedStatsCache, ZkControllerTest, ZkStateReaderTest, 
DistributedQueryComponentOptimizationTest, CoreAdminRequestStatusTest, 
TestElisionMultitermQuery, BufferStoreTest, TestCoreContainer, 
DistributedMLTComponentTest, CSVRequestHandlerTest, 
TestSolrQueryParserResource, TestSchemalessBufferedUpdates, 
URLClassifyProcessorTest, OutOfBoxZkACLAndCredentialsProvidersTest, 
TestCSVResponseWriter, TestStressVersions, IndexSchemaRuntimeFieldTest, 
RemoteQueryErrorTest, SharedFSAutoReplicaFailoverUtilsTest, 
TestTolerantUpdateProcessorCloud, SegmentsInfoRequestHandlerTest, 
TestWordDelimiterFilterFactory, TestStressCloudBlindAtomicUpdates, 
SchemaVersionSpecificBehaviorTest, CleanupOldIndexTest, TestDynamicLoading, 
BigEndianAscendingWordDeserializerTest, TestSolrDynamicMBean, 
TestTrackingShardHandlerFactory, TestManagedSynonymFilterFactory, 
DistributedFacetPivotSmallTest, CheckHdfsIndexTest, 
TestLMDirichletSimilarityFactory, TestExportWriter, TestScoreJoinQPScore, 
TestDocTermOrds, TolerantUpdateProcessorTest, CollectionsAPIDistributedZkTest, 
SearchHandlerTest, OverseerModifyCollectionTest, 
TestHighFrequencyDictionaryFactory, TestFastOutputStream, 
OverseerTaskQueueTest, ShardRoutingTest, TestNumericTerms32, 
TestHashQParserPlugin, BasicFunctionalityTest, ConvertedLegacyTest, 
SolrCloudExampleTest, RuleEngineTest, TestSolrCloudWithDelegationTokens, 
TermVectorComponentTest, TestDistributedStatsComponentCardinality, 
SoftAutoCommitTest, TestSimpleTrackingShardHandler, ConnectionManagerTest, 
SpellCheckCollatorTest, TestConfigReload, TestManagedSchemaThreadSafety, 
TestRandomRequestDistribution, SolrMetricsIntegrationTest, 
MoreLikeThisHandlerTest, DeleteReplicaTest, TestXmlQParser, 
SystemInfoHandlerTest, DistributedQueueTest, DocValuesTest, 
AnalyticsMergeStrategyTest, TestStandardQParsers, TestReqParamsAPI, 
FileUtilsTest, MultiTermTest, JsonLoaderTest, ZkCLITest, TestFieldSortValues, 
TestSolrCloudWithHadoopAuthPlugin, LoggingHandlerTest, BasicDistributedZkTest, 
ChaosMonkeySafeLeaderTest, BasicDistributedZk2Test, SyncSliceTest, 
OpenCloseCoreStressTest, OverseerTest, BasicZkTest, RecoveryZkTest, 
FullSolrCloudDistribCmdsTest, ClusterStateUpdateTest, LeaderElectionTest, 
ZkSolrClientTest, TestDistributedSearch, HardAutoCommitTest, TestRangeQuery, 
UpdateParamsTest, TestStressRecovery, PreAnalyzedFieldTest, 
TestSystemIdResolver, SpellingQueryConverterTest, DOMUtilTest, 
RAMDirectoryFactoryTest, TestSolrJ, TestLRUCache, TestUtils, ZkNodePropsTest, 
SliceStateTest, TestRTGBase, DistributedIntervalFacetingTest, 
CdcrReplicationDistributedZkTest, CdcrRequestHandlerTest, 
CollectionsAPISolrJTest, DeleteLastCustomShardedReplicaTest, DeleteShardTest, 
DistribCursorPagingTest, HttpPartitionTest, LeaderElectionContextKeyTest, 
LeaderFailureAfterFreshStartTest, MissingSegmentRecoveryTest, 
MultiThreadedOCPTest, SaslZkACLProviderTest, ShardSplitTest, 
TestOnReconnectListenerSupport, TestRebalanceLeaders, TestReplicaProperties, 
TestRequestStatusCollectionAPI, TestSSLRandomization, TestSegmentSorting, 
TestSizeLimitedDistributedMap, TestSolrCloudWithKerberosAlt, 
TestSolrCloudWithSecureImpersonation, TestStressInPlaceUpdates, 
HdfsChaosMonkeyNothingIsSafeTest, HdfsCollectionsAPIDistributedZkTest, 
HdfsNNFailoverTest, HdfsRecoveryZkTest, HdfsRestartWhileUpdatingTest, 
TestClusterStateMutator, RulesTest, CoreSorterTest, 
ExitableDirectoryReaderTest, HdfsDirectoryFactoryTest, TestInfoStreamLogging, 
TestInitParams, TestShardHandlerFactory, TestSolrConfigHandler, 
TestSolrCoreSnapshots, RequestLoggingTest, InfoHandlerTest, MetricsHandlerTest, 
SecurityConfHandlerTest, StatsReloadRaceTest, TestCollectionAPIs, 
TestConfigsApi, DistributedDebugComponentTest, DistributedFacetExistsSmallTest, 
FacetPivotSmallTest, InfixSuggestersTest, ShufflingReplicaListTransformerTest, 
SpatialHeatmapFacetsTest, SubstringBytesRefFilterTest, TestMacroExpander, 
TestMacros, JSONWriterTest, SmileWriterTest, TestCustomDocTransformer, 
TestRawTransformer, TestChildDocTransformer, TestClassNameShortening, 
TestCopyFieldCollectionResource, TestDynamicFieldCollectionResource, 
TestDynamicFieldResource, TestFieldTypeResource, TestSchemaNameResource, 
TestSchemaResource, TestSchemaSimilarityResource, TestSchemaVersionResource, 
TestSerializedLuceneMatchVersion, BooleanFieldTest, DocValuesMissingTest, 
PreAnalyzedFieldManagedSchemaCloudTest, SpatialRPTFieldTypeTest, 
AnalyticsQueryTest, LargeFieldTest, MergeStrategyTest, TestNoOpRegenerator, 
TestPayloadCheckQParserPlugin, TestQueryWrapperFilter, 
TestRandomCollapseQParserPlugin, TestSearcherReuse, TestSolrCoreParser, 
TestStressUserVersions, TestTrieFacet, TestJsonFacets, 
BasicAuthIntegrationTest, HttpSolrCallGetCoreTest, ResponseHeaderTest, 
TestBlendedInfixSuggestions, HdfsDirectoryTest, SolrIndexMetricsTest, 
TestInPlaceUpdatesDistrib]
   [junit4] Completed [711/711 (2!)] on J1 in 737.99s, 1 test, 1 error <<< 
FAILURES!

[...truncated 1 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/solr/build/solr-core/test/temp/junit4-J1-20170516_191049_9002658684027530429065.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) ----
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/heapdumps/java_pid19708.hprof
 ...
   [junit4] Heap dump file created [687754645 bytes in 4.843 secs]
   [junit4] <<< JVM J1: EOF ----

[...truncated 7484 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/build.xml:783:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.6/checkout/build.xml:727:
 Some of the tests produced a heap dump, but did not fail. Maybe a suppressed 
OutOfMemoryError? Dumps created:
* java_pid19708.hprof

Total time: 329 minutes 36 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
