Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/406/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.index.hdfs.CheckHdfsIndexTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.index.hdfs.CheckHdfsIndexTest:
   1) Thread[id=585, name=qtp868643065-585, state=TIMED_WAITING, group=TGRP-CheckHdfsIndexTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.index.hdfs.CheckHdfsIndexTest:
   1) Thread[id=585, name=qtp868643065-585, state=TIMED_WAITING, group=TGRP-CheckHdfsIndexTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([E280E39C30BA0997]:0)
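
The leaked thread is a Jetty QueuedThreadPool worker (qtp868643065-585) parked in ReservedThreadExecutor, i.e. an embedded Jetty instance outlived the suite's teardown. For context, a minimal sketch of the randomizedtesting annotations behind this check; the class name and linger value below are illustrative assumptions, not taken from this build:

    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakLingering;
    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;
    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope.Scope;

    // Hypothetical suite: leaks are checked at SUITE scope (the scope named in
    // the error above), and the runner lingers before failing so that slowly
    // exiting qtp* threads get a chance to terminate on their own.
    @ThreadLeakScope(Scope.SUITE)
    @ThreadLeakLingering(linger = 10_000) // wait up to 10s for stragglers
    public class CheckHdfsIndexTestSketch {
      // test methods omitted; only the leak-control annotations matter here
    }

A longer linger window only masks a shutdown race; the underlying question is why the Jetty server was still running after suite teardown.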


FAILED:  junit.framework.TestSuite.org.apache.solr.index.hdfs.CheckHdfsIndexTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=585, name=qtp868643065-585, state=TIMED_WAITING, group=TGRP-CheckHdfsIndexTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=585, name=qtp868643065-585, state=TIMED_WAITING, group=TGRP-CheckHdfsIndexTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
        at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([E280E39C30BA0997]:0)
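
This second failure is the follow-up to the first: having detected the leak, the runner interrupted the thread, the thread survived the interrupt, and it was reported as a zombie. A sketch of the annotations that govern that escalation, assuming defaults in the style of LuceneTestCase-based suites (the class name is hypothetical):

    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakAction;
    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakAction.Action;
    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakZombies;
    import com.carrotsearch.randomizedtesting.annotations.ThreadLeakZombies.Consequence;

    // Hypothetical suite: on a leak, first WARN, then INTERRUPT the thread;
    // a thread that survives interruption becomes a "zombie", and remaining
    // tests in the suite are ignored rather than run alongside it.
    @ThreadLeakAction({Action.WARN, Action.INTERRUPT})
    @ThreadLeakZombies(Consequence.IGNORE_REMAINING_TESTS)
    public class ZombieEscalationSketch {
    }

To chase the failure locally, the usual Lucene/Solr reproduce line applies the master seed from the trace, along the lines of ant test -Dtestcase=CheckHdfsIndexTest -Dtests.seed=E280E39C30BA0997 (exact flags vary by branch).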




Build Log:
[...truncated 12181 lines...]
   [junit4] Suite: org.apache.solr.index.hdfs.CheckHdfsIndexTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/init-core-data-001
   [junit4]   2> 7192 WARN  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=7 numCloses=7
   [junit4]   2> 7192 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 7193 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 7633 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 7633 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /q/
   [junit4]   2> 8577 WARN  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.h.u.NativeCodeLoader Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 9467 WARN  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 9656 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog
   [junit4]   2> 9677 WARN  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 10201 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 10393 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_52123_hdfs____.aw3o4h/webapp
   [junit4]   2> 12773 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:52123
   [junit4]   2> 13462 WARN  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 13465 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 13474 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_59047_datanode____y6khyu/webapp
   [junit4]   2> 13944 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:59047
   [junit4]   2> 14198 WARN  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] 
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 14201 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
jetty-6.1.26
   [junit4]   2> 14221 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.7.4-tests.jar!/webapps/datanode
 to ./temp/Jetty_localhost_45335_datanode____.3miag5/webapp
   [junit4]   2> 14647 INFO  
(SUITE-CheckHdfsIndexTest-seed#[E280E39C30BA0997]-worker) [    ] o.m.log 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45335
   [junit4]   2> 14867 ERROR (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/tempDir-001/hdfsBaseDir/data/data1/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/tempDir-001/hdfsBaseDir/data/data2/]]
  heartbeating to localhost/127.0.0.1:36182) [    ] 
o.a.h.h.s.d.DirectoryScanner 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
   [junit4]   2> 15248 ERROR (DataNode: 
[[[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/tempDir-001/hdfsBaseDir/data/data3/,
 
[DISK]file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/tempDir-001/hdfsBaseDir/data/data4/]]
  heartbeating to localhost/127.0.0.1:36182) [    ] 
o.a.h.h.s.d.DirectoryScanner 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
   [junit4]   2> 15368 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x43a28c030b296e: from storage 
DS-0b669953-9e03-481b-a865-0df43aea675c node 
DatanodeRegistration(127.0.0.1:51036, 
datanodeUuid=5f2e4db4-0594-46e1-a76b-cf2f8fcefea5, infoPort=45163, 
infoSecurePort=0, ipcPort=60851, 
storageInfo=lv=-56;cid=testClusterID;nsid=1005183451;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 13 msecs
   [junit4]   2> 15371 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x43a28c0434f2ce: from storage 
DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50 node 
DatanodeRegistration(127.0.0.1:59512, 
datanodeUuid=0533e912-ace6-4f79-9d19-743a90291b4a, infoPort=47456, 
infoSecurePort=0, ipcPort=43345, 
storageInfo=lv=-56;cid=testClusterID;nsid=1005183451;c=0), blocks: 0, 
hasStaleStorage: true, processing time: 4 msecs
   [junit4]   2> 15372 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x43a28c030b296e: from storage 
DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4 node 
DatanodeRegistration(127.0.0.1:51036, 
datanodeUuid=5f2e4db4-0594-46e1-a76b-cf2f8fcefea5, infoPort=45163, 
infoSecurePort=0, ipcPort=60851, 
storageInfo=lv=-56;cid=testClusterID;nsid=1005183451;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 15372 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* processReport 0x43a28c0434f2ce: from storage 
DS-24f4402d-6762-4664-b7dd-64190cae7e67 node 
DatanodeRegistration(127.0.0.1:59512, 
datanodeUuid=0533e912-ace6-4f79-9d19-743a90291b4a, infoPort=47456, 
infoSecurePort=0, ipcPort=43345, 
storageInfo=lv=-56;cid=testClusterID;nsid=1005183451;c=0), blocks: 0, 
hasStaleStorage: false, processing time: 0 msecs
   [junit4]   2> 16811 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 16813 INFO  (Thread-115) [    ] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 16813 INFO  (Thread-115) [    ] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 16913 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.ZkTestServer start zk server on port:57913
   [junit4]   2> 16935 ERROR (Thread-115) [    ] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 17043 INFO  (zkConnectionManagerCallback-6-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 17071 INFO  (zkConnectionManagerCallback-8-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 17095 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 17111 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 17118 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 17124 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 17130 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 17140 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 17150 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 17153 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 17157 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to /configs/conf1/mapping-ISOLatin1Accent.txt
   [junit4]   2> 17160 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
 to /configs/conf1/old_synonyms.txt
   [junit4]   2> 17162 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractZkTestCase put 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
 to /configs/conf1/synonyms.txt
   [junit4]   2> 17209 INFO  (zkConnectionManagerCallback-11-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 17227 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractFullDistribZkTestBase Will use NRT replicas unless explicitly 
asked otherwise
   [junit4]   2> 17579 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-22T04:27:37+07:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 17627 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 17627 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 17630 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.session Scavenging every 660000ms
   [junit4]   2> 17711 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@6774333b{/q,null,AVAILABLE}
   [junit4]   2> 17766 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.AbstractConnector Started ServerConnector@d00def{SSL,[ssl, 
http/1.1]}{127.0.0.1:53641}
   [junit4]   2> 17766 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.Server Started @19847ms
   [junit4]   2> 17766 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:36182/hdfs__localhost_36182__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-Tests-7.x_solr_build_solr-core_test_J2_temp_solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001_tempDir-002_control_data,
 replicaType=NRT, hostContext=/q, hostPort=53641, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/control-001/cores}
   [junit4]   2> 17800 ERROR 
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 17800 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.3.0
   [junit4]   2> 17800 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port 
null
   [junit4]   2> 17802 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 17803 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-02-16T12:54:00.005Z
   [junit4]   2> 17804 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrResourceLoader solr home defaulted to 'solr/' (could not find 
system property or JNDI)
   [junit4]   2> 17844 INFO  (zkConnectionManagerCallback-13-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 17849 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 17851 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/control-001/solr.xml
   [junit4]   2> 17858 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 17858 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 17861 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 17877 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:57913/solr
   [junit4]   2> 17898 INFO  (zkConnectionManagerCallback-17-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 17947 INFO  
(zkConnectionManagerCallback-19-thread-1-processing-n:127.0.0.1:53641_q) 
[n:127.0.0.1:53641_q    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 18192 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 18194 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.c.OverseerElectionContext I am going to be the 
leader 127.0.0.1:53641_q
   [junit4]   2> 18197 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.c.Overseer Overseer 
(id=73305238729457669-127.0.0.1:53641_q-n_0000000000) starting
   [junit4]   2> 18293 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:53641_q
   [junit4]   2> 18317 INFO  
(zkCallback-18-thread-1-processing-n:127.0.0.1:53641_q) [n:127.0.0.1:53641_q    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 18501 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 18502 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 18502 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 18508 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:53641_q    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/control-001/cores
   [junit4]   2> 18566 INFO  (zkConnectionManagerCallback-25-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 18568 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 18569 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:57913/solr ready
   [junit4]   2> 18826 INFO  (qtp1074252388-217) [n:127.0.0.1:53641_q    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:53641_q&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 18838 INFO  
(OverseerThreadFactory-34-thread-1-processing-n:127.0.0.1:53641_q) 
[n:127.0.0.1:53641_q    ] o.a.s.c.a.c.CreateCollectionCmd Create collection 
control_collection
   [junit4]   2> 19015 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 19016 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 19128 INFO  
(zkCallback-18-thread-1-processing-n:127.0.0.1:53641_q) [n:127.0.0.1:53641_q    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 20062 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.3.0
   [junit4]   2> 20100 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.IndexSchema [control_collection_shard1_replica_n1] Schema name=test
   [junit4]   2> 20206 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 20229 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.CoreContainer Creating SolrCore 'control_collection_shard1_replica_n1' 
using configuration from collection control_collection, trusted=true
   [junit4]   2> 20231 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.control_collection.shard1.replica_n1' (registry 
'solr.core.control_collection.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 20236 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:36182/solr_hdfs_home
   [junit4]   2> 20236 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 20236 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrCore solr.RecoveryStrategy.Builder
   [junit4]   2> 20236 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SolrCore [[control_collection_shard1_replica_n1] ] Opening new SolrCore 
at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/control-001/cores/control_collection_shard1_replica_n1],
 
dataDir=[hdfs://localhost:36182/solr_hdfs_home/control_collection/core_node2/data/]
   [junit4]   2> 20240 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:36182/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
   [junit4]   2> 20253 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 20253 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 20253 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Creating new global HDFS BlockCache
   [junit4]   2> 20655 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 20661 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:36182/solr_hdfs_home/control_collection/core_node2/data
   [junit4]   2> 20692 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:36182/solr_hdfs_home/control_collection/core_node2/data/index
   [junit4]   2> 20699 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 20699 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 20704 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 20704 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=49, maxMergeAtOnceExplicit=36, maxMergedSegmentMB=61.865234375, 
floorSegmentMB=2.162109375, forceMergeDeletesPctAllowed=10.625284389970572, 
segmentsPerTier=23.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.6636055112926569
   [junit4]   2> 20914 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741825_1001{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 69
   [junit4]   2> 20914 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741825_1001 size 69
   [junit4]   2> 21330 WARN  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 21433 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 21433 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 21434 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 21455 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 21455 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 21466 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.AlcoholicMergePolicy: [AlcoholicMergePolicy: 
minMergeSize=0, mergeFactor=10, maxMergeSize=1457030358, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.1]
   [junit4]   2> 21606 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@55e02791[control_collection_shard1_replica_n1] main]
   [junit4]   2> 21610 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 21621 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 21623 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 21627 INFO  
(searcherExecutor-37-thread-1-processing-n:127.0.0.1:53641_q 
x:control_collection_shard1_replica_n1 s:shard1 c:control_collection) 
[n:127.0.0.1:53641_q c:control_collection s:shard1  
x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore 
[control_collection_shard1_replica_n1] Registered new searcher 
Searcher@55e02791[control_collection_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 21639 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1592562175276220416
   [junit4]   2> 21691 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 21691 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 21691 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync replicas to 
https://127.0.0.1:53641/q/control_collection_shard1_replica_n1/
   [junit4]   2> 21692 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy Sync Success - now sync replicas to me
   [junit4]   2> 21692 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.SyncStrategy 
https://127.0.0.1:53641/q/control_collection_shard1_replica_n1/ has no replicas
   [junit4]   2> 21692 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 21725 INFO  
(zkCallback-18-thread-1-processing-n:127.0.0.1:53641_q) [n:127.0.0.1:53641_q    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 21727 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:53641/q/control_collection_shard1_replica_n1/ shard1
   [junit4]   2> 21728 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.c.ZkController I am the leader, no recovery necessary
   [junit4]   2> 21740 INFO  (qtp1074252388-220) [n:127.0.0.1:53641_q 
c:control_collection s:shard1  x:control_collection_shard1_replica_n1] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2727
   [junit4]   2> 21790 INFO  (qtp1074252388-217) [n:127.0.0.1:53641_q    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 21791 INFO  
(OverseerCollectionConfigSetProcessor-73305238729457669-127.0.0.1:53641_q-n_0000000000)
 [n:127.0.0.1:53641_q    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000000 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 21837 INFO  
(zkCallback-18-thread-1-processing-n:127.0.0.1:53641_q) [n:127.0.0.1:53641_q    
] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged 
path:/collections/control_collection/state.json] for collection 
[control_collection] has occurred - updating... (live nodes size: [1])
   [junit4]   2> 22790 INFO  (qtp1074252388-217) [n:127.0.0.1:53641_q    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:53641_q&wt=javabin&version=2}
 status=0 QTime=3966
   [junit4]   2> 22811 INFO  (zkConnectionManagerCallback-29-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 22816 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
   [junit4]   2> 22817 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:57913/solr ready
   [junit4]   2> 22819 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection 
loss:false
   [junit4]   2> 22826 INFO  (qtp1074252388-217) [n:127.0.0.1:53641_q    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 22829 INFO  
(OverseerThreadFactory-34-thread-2-processing-n:127.0.0.1:53641_q) 
[n:127.0.0.1:53641_q    ] o.a.s.c.a.c.CreateCollectionCmd Create collection 
collection1
   [junit4]   2> 22834 WARN  
(OverseerThreadFactory-34-thread-2-processing-n:127.0.0.1:53641_q) 
[n:127.0.0.1:53641_q    ] o.a.s.c.a.c.CreateCollectionCmd It is unusual to 
create a collection (collection1) without cores.
   [junit4]   2> 23066 INFO  (qtp1074252388-217) [n:127.0.0.1:53641_q    ] 
o.a.s.h.a.CollectionsHandler Wait for new collection to be active for at most 
30 seconds. Check all shard replicas
   [junit4]   2> 23066 INFO  (qtp1074252388-217) [n:127.0.0.1:53641_q    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={replicationFactor=1&collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2}
 status=0 QTime=240
   [junit4]   2> 23204 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/shard-1-001
 of type NRT
   [junit4]   2> 23207 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-22T04:27:37+07:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 23222 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 23222 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 23222 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.session Scavenging every 660000ms
   [junit4]   2> 23223 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@259a7a62{/q,null,AVAILABLE}
   [junit4]   2> 23223 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.AbstractConnector Started ServerConnector@798ef98e{SSL,[ssl, 
http/1.1]}{127.0.0.1:52038}
   [junit4]   2> 23223 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.e.j.s.Server Started @25304ms
   [junit4]   2> 23224 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=hdfs://localhost:36182/hdfs__localhost_36182__x1_jenkins_jenkins-slave_workspace_Lucene-Solr-Tests-7.x_solr_build_solr-core_test_J2_temp_solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001_tempDir-002_jetty1,
 solrconfig=solrconfig.xml, hostContext=/q, hostPort=52038, 
coreRootDirectory=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/shard-1-001/cores}
   [junit4]   2> 23224 ERROR 
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 23224 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter  ___      _       Welcome to Apache Solr™ version 
7.3.0
   [junit4]   2> 23224 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port 
null
   [junit4]   2> 23224 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 23224 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter |___/\___/_|_|    Start time: 
2018-02-16T12:54:05.426Z
   [junit4]   2> 23244 INFO  (zkConnectionManagerCallback-31-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 23251 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in 
ZooKeeper)
   [junit4]   2> 23251 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig Loading container configuration from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/shard-1-001/solr.xml
   [junit4]   2> 23256 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverWorkLoopDelay is ignored
   [junit4]   2> 23256 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverBadNodeExpiration is ignored
   [junit4]   2> 23257 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.SolrXmlConfig MBean server found: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8, but no JMX reporters were 
configured - adding default JMX reporter.
   [junit4]   2> 23276 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:57913/solr
   [junit4]   2> 23304 INFO  (zkConnectionManagerCallback-35-thread-1) [    ] 
o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 23344 INFO  
(zkConnectionManagerCallback-37-thread-1-processing-n:127.0.0.1:52038_q) 
[n:127.0.0.1:52038_q    ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 23352 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.c.c.ZkStateReader Updated live nodes from 
ZooKeeper... (0) -> (1)
   [junit4]   2> 23356 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.c.Overseer Overseer (id=null) closing
   [junit4]   2> 23361 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.c.ZkController Register node as live in 
ZooKeeper:/live_nodes/127.0.0.1:52038_q
   [junit4]   2> 23371 INFO  (zkCallback-28-thread-1) [    ] 
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 23377 INFO  
(zkCallback-36-thread-1-processing-n:127.0.0.1:52038_q) [n:127.0.0.1:52038_q    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 23377 INFO  
(zkCallback-18-thread-2-processing-n:127.0.0.1:53641_q) [n:127.0.0.1:53641_q    
] o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
   [junit4]   2> 23535 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.node' (registry 'solr.node') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 23566 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jvm' (registry 'solr.jvm') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 23567 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.jetty' (registry 'solr.jetty') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 23570 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) 
[n:127.0.0.1:52038_q    ] o.a.s.c.CorePropertiesLocator Found 0 core 
definitions underneath 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/../../../../../../../../../../x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/shard-1-001/cores
   [junit4]   2> 23723 INFO  (qtp893459442-278) [n:127.0.0.1:52038_q    ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params 
node=127.0.0.1:52038_q&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 23727 INFO  
(OverseerCollectionConfigSetProcessor-73305238729457669-127.0.0.1:53641_q-n_0000000000)
 [n:127.0.0.1:53641_q    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000002 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 23737 INFO  
(OverseerThreadFactory-34-thread-3-processing-n:127.0.0.1:53641_q) 
[n:127.0.0.1:53641_q    ] o.a.s.c.a.c.AddReplicaCmd Node Identified 
127.0.0.1:52038_q for creating new replica
   [junit4]   2> 23757 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q    ] 
o.a.s.h.a.CoreAdminOperation core create command 
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n21&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 23758 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q    ] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 4 
transient cores
   [junit4]   2> 24791 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrConfig 
Using Lucene MatchVersion: 7.3.0
   [junit4]   2> 24951 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.s.IndexSchema 
[collection1_shard1_replica_n21] Schema name=test
   [junit4]   2> 25112 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 25162 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.CoreContainer 
Creating SolrCore 'collection1_shard1_replica_n21' using configuration from 
collection collection1, trusted=true
   [junit4]   2> 25163 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_n21' (registry 
'solr.core.collection1.shard1.replica_n21') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@f8d32d8
   [junit4]   2> 25163 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://localhost:36182/solr_hdfs_home
   [junit4]   2> 25163 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 25163 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrCore 
solr.RecoveryStrategy.Builder
   [junit4]   2> 25163 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrCore 
[[collection1_shard1_replica_n21] ] Opening new SolrCore at 
[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J2/temp/solr.index.hdfs.CheckHdfsIndexTest_E280E39C30BA0997-001/shard-1-001/cores/collection1_shard1_replica_n21],
 dataDir=[hdfs://localhost:36182/solr_hdfs_home/collection1/core_node22/data/]
   [junit4]   2> 25165 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:36182/solr_hdfs_home/collection1/core_node22/data/snapshot_metadata
   [junit4]   2> 25216 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 25216 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 25229 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 25231 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:36182/solr_hdfs_home/collection1/core_node22/data
   [junit4]   2> 25660 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://localhost:36182/solr_hdfs_home/collection1/core_node22/data/index
   [junit4]   2> 25677 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 25677 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[8388608] will allocate [1] slabs and use ~[8388608] bytes
   [junit4]   2> 25687 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 25688 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=49, maxMergeAtOnceExplicit=36, maxMergedSegmentMB=61.865234375, 
floorSegmentMB=2.162109375, forceMergeDeletesPctAllowed=10.625284389970572, 
segmentsPerTier=23.0, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.6636055112926569
   [junit4]   2> 25785 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741826_1002{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 25796 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741826_1002 size 69
   [junit4]   2> 25917 WARN  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 25994 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.UpdateHandler 
Using UpdateLog implementation: org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 25994 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 25994 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.HdfsUpdateLog 
Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 26011 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.CommitTracker 
Hard AutoCommit: disabled
   [junit4]   2> 26011 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.CommitTracker 
Soft AutoCommit: disabled
   [junit4]   2> 26045 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.AlcoholicMergePolicy: [AlcoholicMergePolicy: 
minMergeSize=0, mergeFactor=10, maxMergeSize=1457030358, 
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, 
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, 
noCFSRatio=0.1]
   [junit4]   2> 26081 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@7bb45349[collection1_shard1_replica_n21] main]
   [junit4]   2> 26082 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 26082 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 26083 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 26094 INFO  
(searcherExecutor-48-thread-1-processing-n:127.0.0.1:52038_q 
x:collection1_shard1_replica_n21 s:shard1 c:collection1) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SolrCore 
[collection1_shard1_replica_n21] Registered new searcher 
Searcher@7bb45349[collection1_shard1_replica_n21] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 26095 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1592562179948675072
   [junit4]   2> 26109 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 26109 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 26109 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SyncStrategy 
Sync replicas to https://127.0.0.1:52038/q/collection1_shard1_replica_n21/
   [junit4]   2> 26109 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SyncStrategy 
Sync Success - now sync replicas to me
   [junit4]   2> 26109 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.SyncStrategy 
https://127.0.0.1:52038/q/collection1_shard1_replica_n21/ has no replicas
   [junit4]   2> 26109 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext Found all replicas participating in 
election, clear LIR
   [junit4]   2> 26118 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
https://127.0.0.1:52038/q/collection1_shard1_replica_n21/ shard1
   [junit4]   2> 26119 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.c.ZkController 
I am the leader, no recovery necessary
   [junit4]   2> 26122 INFO  (qtp893459442-273) [n:127.0.0.1:52038_q 
c:collection1 s:shard1  x:collection1_shard1_replica_n21] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n21&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=2364
   [junit4]   2> 26128 INFO  (qtp893459442-278) [n:127.0.0.1:52038_q    ] 
o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections 
params={node=127.0.0.1:52038_q&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2}
 status=0 QTime=2405
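
The two HttpSolrCall entries above are the core-level CREATE and the
collection-level ADDREPLICA request that triggered it. A rough SolrJ
equivalent of the ADDREPLICA call (the ZooKeeper address is a placeholder;
the collection, shard, and node name are taken from the log):

    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class AddReplicaExample {
      public static void main(String[] args) throws Exception {
        // Assumes a reachable ZooKeeper ensemble; "127.0.0.1:2181" is illustrative.
        try (CloudSolrClient client = new CloudSolrClient.Builder()
            .withZkHost("127.0.0.1:2181").build()) {
          CollectionAdminRequest.addReplicaToShard("collection1", "shard1")
              .setNode("127.0.0.1:52038_q")   // target node from the log above
              .process(client);
        }
      }
    }
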
   [junit4]   2> 26152 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.SolrTestCaseJ4 ###Starting testChecksumsOnlyVerbose
   [junit4]   2> 27629 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 27630 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741829_1005{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 27632 WARN  (DataStreamer for file /solr/_0.nvd) [    ] 
o.a.h.h.DFSClient Caught exception 
   [junit4]   2> java.lang.InterruptedException
   [junit4]   2>        at java.lang.Object.wait(Native Method)
   [junit4]   2>        at java.lang.Thread.join(Thread.java:1252)
   [junit4]   2>        at java.lang.Thread.join(Thread.java:1326)
   [junit4]   2>        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:716)
   [junit4]   2>        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:476)
   [junit4]   2>        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:652)
   [junit4]   2> 27703 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 27705 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741830_1006 size 133
   [junit4]   2> 27733 INFO  
(OverseerCollectionConfigSetProcessor-73305238729457669-127.0.0.1:53641_q-n_0000000000)
 [n:127.0.0.1:53641_q    ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000004 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 27877 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 27913 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 28066 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741832_1008{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 125
   [junit4]   2> 28068 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741832_1008 size 125
   [junit4]   2> 28529 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 28533 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741833_1009{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 28571 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 28573 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741834_1010{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 28640 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 28664 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741835_1011 size 70
   [junit4]   2> 28754 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741836_1012{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW]]}
 size 401
   [junit4]   2> 28754 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741836_1012 size 401
   [junit4]   2> 29161 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 29161 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741828_1004{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 29221 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 29224 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741837_1013{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|FINALIZED],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 29247 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 29248 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741827_1003{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 29364 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741838_1014{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 29366 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741838_1014 size 111
   [junit4]   2> 30231 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30232 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741841_1017{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 30290 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741842_1018{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW]]}
 size 133
   [junit4]   2> 30291 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741842_1018 size 133
   [junit4]   2> 30723 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741843_1019{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30727 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741843_1019 size 472
   [junit4]   2> 30740 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30740 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741844_1020{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30745 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741839_1015{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30745 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741839_1015 size 185427
   [junit4]   2> 30764 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741845_1021{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30765 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741845_1021 size 3718
   [junit4]   2> 30786 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741846_1022{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 30787 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741846_1022 size 5247
   [junit4]   2> 30790 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741840_1016{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30793 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741840_1016 size 70220
   [junit4]   2> 30809 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 30810 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741847_1023{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 30824 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 30824 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741848_1024{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 30862 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 30863 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741849_1025{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 30928 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 30930 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741850_1026{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|FINALIZED],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 30974 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741851_1027{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 30976 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741851_1027 size 75
   [junit4]   2> 31020 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741852_1028{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 31022 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741852_1028 size 742
   [junit4]   2> 31290 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 31295 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741854_1030{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 31306 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741853_1029{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|RBW],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|RBW]]}
 size 0
   [junit4]   2> 31309 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741853_1029 size 755044
   [junit4]   2> 31475 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|FINALIZED]]}
 size 0
   [junit4]   2> 31475 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741855_1031{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6be8dbf6-429a-4606-98d7-fbf66e69eba4:NORMAL:127.0.0.1:51036|FINALIZED],
 
ReplicaUC[[DISK]DS-54ed3cd1-e2d9-4fc0-bc70-a6ee56888c50:NORMAL:127.0.0.1:59512|FINALIZED]]}
 size 0
   [junit4]   2> 31483 INFO  (IPC Server handler 6 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741846_1022 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31487 INFO  (IPC Server handler 8 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741850_1026 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31488 INFO  (IPC Server handler 0 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741829_1005 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31489 INFO  (IPC Server handler 1 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741843_1019 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31491 INFO  (IPC Server handler 7 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741842_1018 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31492 INFO  (IPC Server handler 3 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741835_1011 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31493 INFO  (IPC Server handler 2 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741833_1009 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31493 INFO  (IPC Server handler 9 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741836_1012 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31494 INFO  (IPC Server handler 4 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741834_1010 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31495 INFO  (IPC Server handler 5 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741832_1008 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31496 INFO  (IPC Server handler 6 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741844_1020 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31500 INFO  (IPC Server handler 8 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741839_1015 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31501 INFO  (IPC Server handler 0 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741848_1024 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31501 INFO  (IPC Server handler 1 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741849_1025 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31502 INFO  (IPC Server handler 7 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741831_1007 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31503 INFO  (IPC Server handler 3 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741840_1016 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31504 INFO  (IPC Server handler 2 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741837_1013 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31505 INFO  (IPC Server handler 9 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741841_1017 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31507 INFO  (IPC Server handler 4 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741845_1021 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31511 INFO  (IPC Server handler 5 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741828_1004 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31512 INFO  (IPC Server handler 6 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741838_1014 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31512 INFO  (IPC Server handler 8 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741827_1003 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31513 INFO  (IPC Server handler 0 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741847_1023 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31514 INFO  (IPC Server handler 1 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741830_1006 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31515 INFO  (IPC Server handler 7 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741851_1027 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 31516 INFO  (IPC Server handler 3 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741852_1028 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 31539 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51036 is added to 
blk_1073741856_1032{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-24f4402d-6762-4664-b7dd-64190cae7e67:NORMAL:127.0.0.1:59512|RBW],
 
ReplicaUC[[DISK]DS-0b669953-9e03-481b-a865-0df43aea675c:NORMAL:127.0.0.1:51036|RBW]]}
 size 0
   [junit4]   2> 31548 INFO  (Block report processor) [    ] BlockStateChange 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:59512 is added to 
blk_1073741856_1032 size 134
   [junit4]   2> 34465 INFO  
(org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@43b4d353)
 [    ] BlockStateChange BLOCK* BlockManager: ask 127.0.0.1:59512 to delete 
[blk_1073741827_1003, blk_1073741828_1004, blk_1073741829_1005, 
blk_1073741830_1006, blk_1073741831_1007, blk_1073741832_1008, 
blk_1073741833_1009, blk_1073741834_1010, blk_1073741835_1011, 
blk_1073741836_1012, blk_1073741837_1013, blk_1073741838_1014, 
blk_1073741839_1015, blk_1073741840_1016, blk_1073741841_1017, 
blk_1073741842_1018, blk_1073741843_1019, blk_1073741844_1020, 
blk_1073741845_1021, blk_1073741846_1022, blk_1073741847_1023, 
blk_1073741848_1024, blk_1073741849_1025, blk_1073741850_1026, 
blk_1073741851_1027, blk_1073741852_1028]
   [junit4]   2> 37491 INFO  
(org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@43b4d353)
 [    ] BlockStateChange BLOCK* BlockManager: ask 127.0.0.1:51036 to delete 
[blk_1073741827_1003, blk_1073741828_1004, blk_1073741829_1005, 
blk_1073741830_1006, blk_1073741831_1007, blk_1073741832_1008, 
blk_1073741833_1009, blk_1073741834_1010, blk_1073741835_1011, 
blk_1073741836_1012, blk_1073741837_1013, blk_1073741838_1014, 
blk_1073741839_1015, blk_1073741840_1016, blk_1073741841_1017, 
blk_1073741842_1018, blk_1073741843_1019, blk_1073741844_1020, 
blk_1073741845_1021, blk_1073741846_1022, blk_1073741847_1023, 
blk_1073741848_1024, blk_1073741849_1025, blk_1073741850_1026, 
blk_1073741851_1027, blk_1073741852_1028]
   [junit4]   2> 38710 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.s.h.HdfsDirectory Closing hdfs directory hdfs://localhost:36182/solr
   [junit4]   2> 38712 INFO  (IPC Server handler 4 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741854_1030 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 38712 INFO  (IPC Server handler 4 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741853_1029 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 38712 INFO  (IPC Server handler 4 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741855_1031 127.0.0.1:59512 
127.0.0.1:51036 
   [junit4]   2> 38712 INFO  (IPC Server handler 4 on 36182) [    ] 
BlockStateChange BLOCK* addToInvalidates: blk_1073741856_1032 127.0.0.1:51036 
127.0.0.1:59512 
   [junit4]   2> 38712 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.SolrTestCaseJ4 ###Ending testChecksumsOnlyVerbose
   [junit4]   2> 38713 INFO  
(TEST-CheckHdfsIndexTest.testChecksumsOnlyVerbose-seed#[E280E39C30BA0997]) [    
] o.a.s.c.ChaosMonkey monkey: stop j

[...truncated too long message...]

rc/test/org/apache/solr/highlight/HighlighterTest.java (at line 204)
 [ecj-lint]     Analyzer a1 = new WhitespaceAnalyzer();
 [ecj-lint]              ^^
 [ecj-lint] Resource leak: 'a1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 15. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 207)
 [ecj-lint]     OffsetWindowTokenFilter tots = new 
OffsetWindowTokenFilter(tokenStream);
 [ecj-lint]                             ^^^^
 [ecj-lint] Resource leak: 'tots' is never closed
 [ecj-lint] ----------
 [ecj-lint] 16. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/highlight/HighlighterTest.java
 (at line 211)
 [ecj-lint]     Analyzer a2 = new WhitespaceAnalyzer();
 [ecj-lint]              ^^
 [ecj-lint] Resource leak: 'a2' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 17. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/response/TestJavabinTupleStreamParser.java
 (at line 72)
 [ecj-lint]     JavabinTupleStreamParser parser = new 
JavabinTupleStreamParser(new ByteArrayInputStream(bytes), true);
 [ecj-lint]                              ^^^^^^
 [ecj-lint] Resource leak: 'parser' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 18. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/schema/TestSortableTextField.java
 (at line 491)
 [ecj-lint]     final SolrClient client = new EmbeddedSolrServer(h.getCore());
 [ecj-lint]                      ^^^^^^
 [ecj-lint] Resource leak: 'client' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 19. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 243)
 [ecj-lint]     return loadfactor!=0 ? new HashDocSet(a,0,n,1/loadfactor) : new 
HashDocSet(a,0,n);
 [ecj-lint]                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 20. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/search/TestDocSet.java
 (at line 528)
 [ecj-lint]     DocSet a = new BitDocSet(bs);
 [ecj-lint]            ^
 [ecj-lint] Resource leak: 'a' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 21. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java
 (at line 78)
 [ecj-lint]     final MockPKIAuthenticationPlugin mock = new 
MockPKIAuthenticationPlugin(null, nodeName);
 [ecj-lint]                                       ^^^^
 [ecj-lint] Resource leak: 'mock' is never closed
 [ecj-lint] ----------
 [ecj-lint] 22. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/security/TestPKIAuthenticationPlugin.java
 (at line 133)
 [ecj-lint]     MockPKIAuthenticationPlugin mock1 = new 
MockPKIAuthenticationPlugin(null, nodeName) {
 [ecj-lint]                                 ^^^^^
 [ecj-lint] Resource leak: 'mock1' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 23. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/security/TestRuleBasedAuthorizationPlugin.java
 (at line 380)
 [ecj-lint]     RuleBasedAuthorizationPlugin plugin = new 
RuleBasedAuthorizationPlugin();
 [ecj-lint]                                  ^^^^^^
 [ecj-lint] Resource leak: 'plugin' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 24. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/security/TestSha256AuthenticationProvider.java
 (at line 49)
 [ecj-lint]     BasicAuthPlugin basicAuthPlugin = new BasicAuthPlugin();
 [ecj-lint]                     ^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: 'basicAuthPlugin' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 25. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/spelling/SimpleQueryConverter.java
 (at line 42)
 [ecj-lint]     WhitespaceAnalyzer analyzer = new WhitespaceAnalyzer();
 [ecj-lint]                        ^^^^^^^^
 [ecj-lint] Resource leak: 'analyzer' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 26. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java
 (at line 139)
 [ecj-lint]     IndexWriter w = new IndexWriter(d, 
newIndexWriterConfig(analyzer));
 [ecj-lint]                 ^
 [ecj-lint] Resource leak: 'w' is never closed
 [ecj-lint] ----------
 [ecj-lint] 27. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java
 (at line 172)
 [ecj-lint]     throw iae;
 [ecj-lint]     ^^^^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] 28. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/uninverting/TestFieldCacheVsDocValues.java
 (at line 178)
 [ecj-lint]     return;
 [ecj-lint]     ^^^^^^^
 [ecj-lint] Resource leak: 'w' is not closed at this location
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 29. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java
 (at line 134)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new 
SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 30. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java
 (at line 333)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new 
SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 31. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java
 (at line 367)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new 
SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 32. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java
 (at line 413)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new 
SolrCmdDistributor(streamingClients, 5, 0);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 33. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java
 (at line 458)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new 
SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] 34. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrCmdDistributorTest.java
 (at line 516)
 [ecj-lint]     SolrCmdDistributor cmdDistrib = new 
SolrCmdDistributor(updateShardHandler);
 [ecj-lint]                        ^^^^^^^^^^
 [ecj-lint] Resource leak: 'cmdDistrib' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 35. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrIndexSplitterTest.java
 (at line 181)
 [ecj-lint]     EmbeddedSolrServer server1 = new 
EmbeddedSolrServer(h.getCoreContainer(), "split1");
 [ecj-lint]                        ^^^^^^^
 [ecj-lint] Resource leak: 'server1' is never closed
 [ecj-lint] ----------
 [ecj-lint] 36. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/SolrIndexSplitterTest.java
 (at line 182)
 [ecj-lint]     EmbeddedSolrServer server2 = new 
EmbeddedSolrServer(h.getCoreContainer(), "split2");
 [ecj-lint]                        ^^^^^^^
 [ecj-lint] Resource leak: 'server2' is never closed
 [ecj-lint] ----------
 [ecj-lint] ----------
 [ecj-lint] 37. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/core/src/test/org/apache/solr/update/processor/RecordingUpdateProcessorFactory.java
 (at line 67)
 [ecj-lint]     return recording ? new 
RecordingUpdateRequestProcessor(commandQueue, next) : next;
 [ecj-lint]                        
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] ----------
 [ecj-lint] 37 problems (4 errors, 33 warnings)
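
Nearly all of the ecj-lint warnings above follow one pattern: a Closeable
(an Analyzer, SolrClient, SolrCmdDistributor, and so on) is constructed in a
test and never closed. The usual fix is try-with-resources; a minimal sketch
for the WhitespaceAnalyzer cases (illustrative, not the actual test code):

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.core.WhitespaceAnalyzer;

    public class CloseAnalyzerExample {
      public static void main(String[] args) {
        // a1 is closed automatically when the block exits, which is exactly
        // what the "Resource leak: 'a1' is never closed" warning asks for.
        try (Analyzer a1 = new WhitespaceAnalyzer()) {
          System.out.println(a1.getClass().getSimpleName());
        }
      }
    }
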

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:618: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/build.xml:101: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build.xml:682: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2087:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/common-build.xml:2120:
 Compile failed; see the compiler error output for details.

Total time: 54 minutes 14 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
