Build: https://builds.apache.org/job/Lucene-Solr-repro/3779/

[...truncated 29 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/45/consoleText

[repro] Revision: 325e72c45f6420da61907523d4b7361c2ab5c41b

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=LegacyCloudClusterPropTest -Dtests.method=testCreateCollectionSwitchLegacyCloud -Dtests.seed=A2E34C95B48671C0 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=es-US -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=StressHdfsTest -Dtests.method=test -Dtests.seed=A2E34C95B48671C0 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=bg -Dtests.timezone=Asia/Magadan -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TimeRoutedAliasUpdateProcessorTest -Dtests.method=testPreemptiveCreation -Dtests.seed=A2E34C95B48671C0 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=it-IT -Dtests.timezone=IET -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 5a697344ed1be537ef2acdd18aab653283593370
[repro] JUnit test result XML files will be moved to: ./repro-reports
[repro] git fetch

[...truncated 7 lines...]
[repro] git checkout 325e72c45f6420da61907523d4b7361c2ab5c41b

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 line...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       StressHdfsTest
[repro]       TimeRoutedAliasUpdateProcessorTest
[repro]       LegacyCloudClusterPropTest
[repro] ant compile-test

[...truncated 3599 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 -Dtests.class="*.StressHdfsTest|*.TimeRoutedAliasUpdateProcessorTest|*.LegacyCloudClusterPropTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.seed=A2E34C95B48671C0 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=bg -Dtests.timezone=Asia/Magadan -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 22859 lines...]
   [junit4]   2> 921908 ERROR (Finalizer) [     ] o.a.s.c.SolrCore REFCOUNT 
ERROR: unreferenced org.apache.solr.core.SolrCore@9c9247e 
(delete_data_dir_shard2_replica_n4) has a reference count of -1
   [junit4]   2> 921973 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.s.IndexSchema 
Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 921993 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n13' using 
configuration from collection collection1, trusted=true
   [junit4]   2> 921993 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.collection1.shard1.replica_n13' (registry 
'solr.core.collection1.shard1.replica_n13') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 921993 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 921993 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 921993 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.c.SolrCore 
[[collection1_shard1_replica_n13] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/shard-3-001/cores/collection1_shard1_replica_n13],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/collection1/core_node14/data/]
   [junit4]   2> 921994 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/collection1/core_node14/data/snapshot_metadata
   [junit4]   2> 922002 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 922002 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 922002 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 922012 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 922013 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/collection1/core_node14/data
   [junit4]   2> 922028 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/collection1/core_node14/data/index
   [junit4]   2> 922033 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 922033 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 922033 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 922039 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 922040 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.MockRandomMergePolicy: 
org.apache.lucene.index.MockRandomMergePolicy@fbbbc91
   [junit4]   2> 922051 WARN  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 922095 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 922095 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.u.UpdateLog 
Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 922095 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 922104 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 922104 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 922106 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=35, maxMergeAtOnceExplicit=13, maxMergedSegmentMB=10.888671875, 
floorSegmentMB=0.76171875, forceMergeDeletesPctAllowed=26.448736341408868, 
segmentsPerTier=17.0, maxCFSSegmentSizeMB=1.5166015625, 
noCFSRatio=0.1173367555155477, deletesPctAllowed=25.551844838455374
   [junit4]   2> 922110 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@565eecd7[collection1_shard1_replica_n13] main]
   [junit4]   2> 922113 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 922113 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 922114 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 922114 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.u.UpdateLog 
Could not find max version in index or recent updates, using new clock 
1651636259005661184
   [junit4]   2> 922117 INFO  
(searcherExecutor-1742-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.c.SolrCore 
[collection1_shard1_replica_n13] Registered new searcher 
Searcher@565eecd7[collection1_shard1_replica_n13] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 922120 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.c.ZkShardTerms 
Successful update of terms at /collections/collection1/terms/shard1 to 
Terms{values={core_node6=0, core_node10=0, core_node8=0, core_node12=0, 
core_node2=0, core_node14=0, core_node4=0}, version=6}
   [junit4]   2> 922120 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/collection1/leaders/shard1
   [junit4]   2> 922123 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.c.ZkController 
Core needs to recover:collection1_shard1_replica_n13
   [junit4]   2> 922123 INFO  
(updateExecutor-1773-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1) [n:127.0.0.1:35733__ 
c:collection1 s:shard1 r:core_node14 x:collection1_shard1_replica_n13 ] 
o.a.s.u.DefaultSolrCoreState Running recovery
   [junit4]   2> 922124 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Starting recovery 
process. recoveringAfterStartup=true
   [junit4]   2> 922125 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy startupVersions is 
empty
   [junit4]   2> 922126 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:collection1 s:shard1  x:collection1_shard1_replica_n13 ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n13&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT}
 status=0 QTime=1843
   [junit4]   2> 922127 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/_ path=/admin/ping 
params={wt=javabin&version=2} hits=0 status=0 QTime=0
   [junit4]   2> 922127 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/_ path=/admin/ping 
params={wt=javabin&version=2} status=0 QTime=0
   [junit4]   2> 922128 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Begin buffering 
updates. core=[collection1_shard1_replica_n13]
   [junit4]   2> 922129 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.u.UpdateLog Starting to buffer 
updates. HDFSUpdateLog{state=ACTIVE, tlog=null}
   [junit4]   2> 922129 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Publishing state of 
core [collection1_shard1_replica_n13] as recovering, leader is 
[http://127.0.0.1:44427/_/collection1_shard1_replica_n1/] and I am 
[http://127.0.0.1:35733/_/collection1_shard1_replica_n13/]
   [junit4]   2> 922130 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Sending prep 
recovery command to [http://127.0.0.1:44427/_]; [WaitForState: 
action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:35733__&coreNodeName=core_node14&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
   [junit4]   2> 922131 INFO  (qtp1305744872-32335) [n:127.0.0.1:44427__    
x:collection1_shard1_replica_n1 ] o.a.s.h.a.PrepRecoveryOp Going to wait for 
coreNodeName: core_node14, state: recovering, checkLive: true, onlyIfLeader: 
true, onlyIfLeaderActive: true
   [junit4]   2> 922131 INFO  (qtp1305744872-32335) [n:127.0.0.1:44427__    
x:collection1_shard1_replica_n1 ] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(recovering): collection=collection1, shard=shard1, 
thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=down, 
localState=active, nodeName=127.0.0.1:35733__, coreNodeName=core_node14, 
onlyIfActiveCheckResult=false, nodeProps: core_node14:{
   [junit4]   2>   "core":"collection1_shard1_replica_n13",
   [junit4]   2>   "base_url":"http://127.0.0.1:35733/_";,
   [junit4]   2>   "node_name":"127.0.0.1:35733__",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "type":"NRT"}
   [junit4]   2> 922131 INFO  (qtp1305744872-32335) [n:127.0.0.1:44427__    
x:collection1_shard1_replica_n1 ] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(recovering): collection=collection1, shard=shard1, 
thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=down, 
localState=active, nodeName=127.0.0.1:35733__, coreNodeName=core_node14, 
onlyIfActiveCheckResult=false, nodeProps: core_node14:{
   [junit4]   2>   "core":"collection1_shard1_replica_n13",
   [junit4]   2>   "base_url":"http://127.0.0.1:35733/_";,
   [junit4]   2>   "node_name":"127.0.0.1:35733__",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "type":"NRT"}
   [junit4]   2> 922131 INFO  (qtp1305744872-32335) [n:127.0.0.1:44427__    
x:collection1_shard1_replica_n1 ] o.a.s.h.a.PrepRecoveryOp In 
WaitForState(recovering): collection=collection1, shard=shard1, 
thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, 
isLeader? true, live=true, checkLive=true, currentState=down, 
localState=active, nodeName=127.0.0.1:35733__, coreNodeName=core_node14, 
onlyIfActiveCheckResult=false, nodeProps: core_node14:{
   [junit4]   2>   "core":"collection1_shard1_replica_n13",
   [junit4]   2>   "base_url":"http://127.0.0.1:35733/_";,
   [junit4]   2>   "node_name":"127.0.0.1:35733__",
   [junit4]   2>   "state":"down",
   [junit4]   2>   "type":"NRT"}
   [junit4]   2> 922131 INFO  (qtp1058315564-32473) [n:127.0.0.1:34088__ 
c:collection1    ] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/collections 
params={node=127.0.0.1:35733__&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2}
 status=0 QTime=13880
   [junit4]   2> 922131 INFO  
(TEST-StressHdfsTest.test-seed#[A2E34C95B48671C0]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase Waiting to see 7 active replicas in 
collection: collection1
   [junit4]   2> 922233 INFO  (watches-1800-thread-2) [     ] 
o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, 
shard=shard1, thisCore=collection1_shard1_replica_n1, 
leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, 
currentState=recovering, localState=active, nodeName=127.0.0.1:35733__, 
coreNodeName=core_node14, onlyIfActiveCheckResult=false, nodeProps: 
core_node14:{
   [junit4]   2>   
"dataDir":"hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/collection1/core_node14/data/",
   [junit4]   2>   "base_url":"http://127.0.0.1:35733/_";,
   [junit4]   2>   "node_name":"127.0.0.1:35733__",
   [junit4]   2>   "type":"NRT",
   [junit4]   2>   
"ulogDir":"hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/collection1/core_node14/data/tlog",
   [junit4]   2>   "core":"collection1_shard1_replica_n13",
   [junit4]   2>   "shared_storage":"true",
   [junit4]   2>   "state":"recovering"}
   [junit4]   2> 922233 INFO  (qtp1305744872-32335) [n:127.0.0.1:44427__    
x:collection1_shard1_replica_n1 ] o.a.s.s.HttpSolrCall [admin] webapp=null 
path=/admin/cores 
params={nodeName=127.0.0.1:35733__&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node14&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2}
 status=0 QTime=102
   [junit4]   2> 922278 INFO  
(OverseerCollectionConfigSetProcessor-75823322245890052-127.0.0.1:39905__-n_0000000000)
 [n:127.0.0.1:39905__     ] o.a.s.c.OverseerTaskQueue Response ZK path: 
/overseer/collection-queue-work/qnr-0000000016 doesn't exist.  Requestor may 
have disconnected from ZooKeeper
   [junit4]   2> 922733 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Attempting to 
PeerSync from [http://127.0.0.1:44427/_/collection1_shard1_replica_n1/] - 
recoveringAfterStartup=[true]
   [junit4]   2> 922734 WARN  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.u.PeerSyncWithLeader no frame of 
reference to tell if we've missed updates
   [junit4]   2> 922734 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy PeerSync Recovery 
was not successful - trying replication.
   [junit4]   2> 922734 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Starting 
Replication Recovery.
   [junit4]   2> 922734 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Attempting to 
replicate from [http://127.0.0.1:44427/_/collection1_shard1_replica_n1/].
   [junit4]   2> 922739 INFO  (qtp1305744872-32336) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1651636259656826880,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 922739 INFO  (qtp1305744872-32336) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 922739 INFO  (qtp1938999741-32277) [n:127.0.0.1:40773__ 
c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5 ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1651636259661021184,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 922739 INFO  (qtp1938999741-32277) [n:127.0.0.1:40773__ 
c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5 ] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 922740 INFO  (qtp1058315564-32261) [n:127.0.0.1:34088__ 
c:collection1 s:shard1 r:core_node8 x:collection1_shard1_replica_n7 ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1651636259662069760,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 922740 INFO  (qtp1058315564-32261) [n:127.0.0.1:34088__ 
c:collection1 s:shard1 r:core_node8 x:collection1_shard1_replica_n7 ] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 922740 INFO  (qtp1797443466-32315) [n:127.0.0.1:39203__ 
c:collection1 s:shard1 r:core_node10 x:collection1_shard1_replica_n9 ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1651636259662069760,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 922740 INFO  (qtp1938999741-32277) [n:127.0.0.1:40773__ 
c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5 ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 922740 INFO  (qtp1938999741-32277) [n:127.0.0.1:40773__ 
c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n5]  webapp=/_ 
path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:44427/_/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 0
   [junit4]   2> 922740 INFO  (qtp1305744872-32336) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 922741 INFO  (qtp1797443466-32315) [n:127.0.0.1:39203__ 
c:collection1 s:shard1 r:core_node10 x:collection1_shard1_replica_n9 ] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 922741 INFO  (qtp1058315564-32261) [n:127.0.0.1:34088__ 
c:collection1 s:shard1 r:core_node8 x:collection1_shard1_replica_n7 ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 922741 INFO  (qtp1058315564-32261) [n:127.0.0.1:34088__ 
c:collection1 s:shard1 r:core_node8 x:collection1_shard1_replica_n7 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n7]  webapp=/_ 
path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:44427/_/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 1
   [junit4]   2> 922741 INFO  (qtp1536722153-32360) [n:127.0.0.1:40245__ 
c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3 ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1651636259663118336,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 922741 INFO  (qtp1536722153-32360) [n:127.0.0.1:40245__ 
c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3 ] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
   [junit4]   2> 922742 INFO  (qtp184284234-32382) [n:127.0.0.1:43576__ 
c:collection1 s:shard1 r:core_node12 x:collection1_shard1_replica_n11 ] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1651636259664166912,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
   [junit4]   2> 922744 INFO  (qtp184284234-32382) [n:127.0.0.1:43576__ 
c:collection1 s:shard1 r:core_node12 x:collection1_shard1_replica_n11 ] 
o.a.s.u.SolrIndexWriter Calling setCommitData with 
IW:org.apache.solr.update.SolrIndexWriter@7d9a7e8b 
commitCommandVersion:1651636259664166912
   [junit4]   2> 922744 INFO  (qtp1536722153-32360) [n:127.0.0.1:40245__ 
c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3 ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 922744 INFO  (qtp1536722153-32360) [n:127.0.0.1:40245__ 
c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3]  webapp=/_ 
path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:44427/_/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 3
   [junit4]   2> 922744 INFO  (qtp1797443466-32315) [n:127.0.0.1:39203__ 
c:collection1 s:shard1 r:core_node10 x:collection1_shard1_replica_n9 ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 922744 INFO  (qtp1797443466-32315) [n:127.0.0.1:39203__ 
c:collection1 s:shard1 r:core_node10 x:collection1_shard1_replica_n9 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n9]  webapp=/_ 
path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:44427/_/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 4
   [junit4]   2> 922751 INFO  (qtp540668260-32293) [n:127.0.0.1:35733__ 
c:collection1 s:shard1 r:core_node14 x:collection1_shard1_replica_n13 ] 
o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: 
BUFFERING replay: false
   [junit4]   2> 922751 INFO  (qtp540668260-32293) [n:127.0.0.1:35733__ 
c:collection1 s:shard1 r:core_node14 x:collection1_shard1_replica_n13 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n13]  webapp=/_ 
path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:44427/_/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 0
   [junit4]   2> 923160 INFO  (qtp184284234-32382) [n:127.0.0.1:43576__ 
c:collection1 s:shard1 r:core_node12 x:collection1_shard1_replica_n11 ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@bd8ca40[collection1_shard1_replica_n11] realtime]
   [junit4]   2> 923160 INFO  (qtp184284234-32382) [n:127.0.0.1:43576__ 
c:collection1 s:shard1 r:core_node12 x:collection1_shard1_replica_n11 ] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
   [junit4]   2> 923160 INFO  (qtp184284234-32382) [n:127.0.0.1:43576__ 
c:collection1 s:shard1 r:core_node12 x:collection1_shard1_replica_n11 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n11]  webapp=/_ 
path=/update 
params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=http://127.0.0.1:44427/_/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=}
 0 418
   [junit4]   2> 923161 INFO  (qtp1305744872-32336) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1]  webapp=/_ 
path=/update 
params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=}
 0 426
   [junit4]   2> 923163 INFO  (qtp1305744872-32337) [n:127.0.0.1:44427__ 
c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1 ] 
o.a.s.c.S.Request [collection1_shard1_replica_n1]  webapp=/_ path=/replication 
params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 
QTime=0
   [junit4]   2> 923163 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.h.IndexFetcher Master's generation: 1
   [junit4]   2> 923163 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.h.IndexFetcher Master's version: 0
   [junit4]   2> 923163 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.h.IndexFetcher Slave's generation: 1
   [junit4]   2> 923163 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.h.IndexFetcher Slave's version: 0
   [junit4]   2> 923163 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.h.IndexFetcher New index in Master. 
Deleting mine...
   [junit4]   2> 923165 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.s.SolrIndexSearcher Opening 
[Searcher@539ea3f3[collection1_shard1_replica_n13] main]
   [junit4]   2> 923170 INFO  
(searcherExecutor-1742-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.SolrCore 
[collection1_shard1_replica_n13] Registered new searcher 
Searcher@539ea3f3[collection1_shard1_replica_n13] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 923171 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy No replay needed.
   [junit4]   2> 923171 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Replication 
Recovery was successful.
   [junit4]   2> 923171 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Registering as 
Active after recovery.
   [junit4]   2> 923172 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Updating version 
bucket highest from index after successful recovery.
   [junit4]   2> 923172 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.u.UpdateLog Could not find max version 
in index or recent updates, using new clock 1651636260115054592
   [junit4]   2> 923174 INFO  
(recoveryExecutor-1780-thread-1-processing-n:127.0.0.1:35733__ 
x:collection1_shard1_replica_n13 c:collection1 s:shard1 r:core_node14) 
[n:127.0.0.1:35733__ c:collection1 s:shard1 r:core_node14 
x:collection1_shard1_replica_n13 ] o.a.s.c.RecoveryStrategy Finished recovery 
process, successful=[true]
   [junit4]   2> 923276 INFO  
(TEST-StressHdfsTest.test-seed#[A2E34C95B48671C0]) [     ] o.a.s.SolrTestCaseJ4 
###Starting test
   [junit4]   2> 923277 INFO  
(TEST-StressHdfsTest.test-seed#[A2E34C95B48671C0]) [     ] 
o.a.s.c.AbstractFullDistribZkTestBase Not turning on auto soft commit
   [junit4]   2> 923279 INFO  (qtp1058315564-32262) [n:127.0.0.1:34088__     ] 
o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
replicationFactor=2&maxShardsPerNode=1&collection.configName=conf1&name=delete_data_dir&action=CREATE&numShards=3&wt=javabin&version=2
 and sendToOCPQueue=true
   [junit4]   2> 923282 INFO  
(OverseerThreadFactory-1639-thread-5-processing-n:127.0.0.1:39905__) 
[n:127.0.0.1:39905__     ] o.a.s.c.a.c.CreateCollectionCmd Create collection 
delete_data_dir
   [junit4]   2> 923490 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__    
x:delete_data_dir_shard1_replica_n1 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=delete_data_dir_shard1_replica_n1&action=CREATE&numShards=3&collection=delete_data_dir&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 923491 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__    
x:delete_data_dir_shard1_replica_n2 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=delete_data_dir_shard1_replica_n2&action=CREATE&numShards=3&collection=delete_data_dir&shard=shard1&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 923493 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__    
x:delete_data_dir_shard2_replica_n3 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=delete_data_dir_shard2_replica_n3&action=CREATE&numShards=3&collection=delete_data_dir&shard=shard2&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 923496 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__    
x:delete_data_dir_shard2_replica_n4 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=delete_data_dir_shard2_replica_n4&action=CREATE&numShards=3&collection=delete_data_dir&shard=shard2&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 923498 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__    
x:delete_data_dir_shard3_replica_n5 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=delete_data_dir_shard3_replica_n5&action=CREATE&numShards=3&collection=delete_data_dir&shard=shard3&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 923502 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__    
x:delete_data_dir_shard3_replica_n6 ] o.a.s.h.a.CoreAdminOperation core create 
command 
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=delete_data_dir_shard3_replica_n6&action=CREATE&numShards=3&collection=delete_data_dir&shard=shard3&wt=javabin&version=2&replicaType=NRT
   [junit4]   2> 924507 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.4.0
   [junit4]   2> 924514 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.4.0
   [junit4]   2> 924516 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.4.0
   [junit4]   2> 924521 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.4.0
   [junit4]   2> 924523 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.4.0
   [junit4]   2> 924553 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.4.0
   [junit4]   2> 924561 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.s.IndexSchema [delete_data_dir_shard1_replica_n1] Schema name=test
   [junit4]   2> 924563 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.s.IndexSchema [delete_data_dir_shard2_replica_n3] Schema name=test
   [junit4]   2> 924589 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.s.IndexSchema [delete_data_dir_shard1_replica_n2] Schema name=test
   [junit4]   2> 924618 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.s.IndexSchema [delete_data_dir_shard3_replica_n5] Schema name=test
   [junit4]   2> 924637 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.s.IndexSchema [delete_data_dir_shard3_replica_n6] Schema name=test
   [junit4]   2> 924650 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.s.IndexSchema [delete_data_dir_shard2_replica_n4] Schema name=test
   [junit4]   2> 924781 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 924785 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 924811 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.CoreContainer Creating SolrCore 'delete_data_dir_shard1_replica_n1' 
using configuration from collection delete_data_dir, trusted=true
   [junit4]   2> 924811 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.delete_data_dir.shard1.replica_n1' (registry 
'solr.core.delete_data_dir.shard1.replica_n1') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 924811 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 924811 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 924812 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.SolrCore [[delete_data_dir_shard1_replica_n1] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/control-001/cores/delete_data_dir_shard1_replica_n1],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node7/data/]
   [junit4]   2> 924812 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node7/data/snapshot_metadata
   [junit4]   2> 924813 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 924826 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 924829 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.CoreContainer Creating SolrCore 'delete_data_dir_shard2_replica_n3' 
using configuration from collection delete_data_dir, trusted=true
   [junit4]   2> 924830 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.delete_data_dir.shard2.replica_n3' (registry 
'solr.core.delete_data_dir.shard2.replica_n3') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 924830 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 924830 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 924830 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.SolrCore [[delete_data_dir_shard2_replica_n3] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/shard-7-001/cores/delete_data_dir_shard2_replica_n3],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node9/data/]
   [junit4]   2> 924831 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node9/data/snapshot_metadata
   [junit4]   2> 924835 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924837 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924837 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924837 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924839 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924839 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924843 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924846 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924847 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node9/data
   [junit4]   2> 924847 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node7/data
   [junit4]   2> 924848 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 924851 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.CoreContainer Creating SolrCore 'delete_data_dir_shard1_replica_n2' 
using configuration from collection delete_data_dir, trusted=true
   [junit4]   2> 924851 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.delete_data_dir.shard1.replica_n2' (registry 
'solr.core.delete_data_dir.shard1.replica_n2') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 924851 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 924851 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 924852 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.SolrCore [[delete_data_dir_shard1_replica_n2] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/shard-5-001/cores/delete_data_dir_shard1_replica_n2],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node8/data/]
   [junit4]   2> 924857 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node8/data/snapshot_metadata
   [junit4]   2> 924861 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.CoreContainer Creating SolrCore 'delete_data_dir_shard3_replica_n5' 
using configuration from collection delete_data_dir, trusted=true
   [junit4]   2> 924862 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.delete_data_dir.shard3.replica_n5' (registry 
'solr.core.delete_data_dir.shard3.replica_n5') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 924862 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 924862 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 924862 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.SolrCore [[delete_data_dir_shard3_replica_n5] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/shard-3-001/cores/delete_data_dir_shard3_replica_n5],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node11/data/]
   [junit4]   2> 924863 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node11/data/snapshot_metadata
   [junit4]   2> 924869 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924869 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924869 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924869 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.CoreContainer Creating SolrCore 'delete_data_dir_shard3_replica_n6' 
using configuration from collection delete_data_dir, trusted=true
   [junit4]   2> 924869 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.delete_data_dir.shard3.replica_n6' (registry 
'solr.core.delete_data_dir.shard3.replica_n6') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 924869 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 924869 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 924869 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.SolrCore [[delete_data_dir_shard3_replica_n6] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/shard-1-001/cores/delete_data_dir_shard3_replica_n6],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node12/data/]
   [junit4]   2> 924871 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node12/data/snapshot_metadata
   [junit4]   2> 924876 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924876 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924876 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924879 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 924879 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node9/data/index
   [junit4]   2> 924880 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924880 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924880 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924883 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924883 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924883 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node8/data
   [junit4]   2> 924887 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node11/data
   [junit4]   2> 924892 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node7/data/index
   [junit4]   2> 924893 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924893 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924893 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924894 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924895 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node12/data
   [junit4]   2> 924905 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.CoreContainer Creating SolrCore 'delete_data_dir_shard2_replica_n4' 
using configuration from collection delete_data_dir, trusted=true
   [junit4]   2> 924905 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node8/data/index
   [junit4]   2> 924905 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 
'solr.core.delete_data_dir.shard2.replica_n4' (registry 
'solr.core.delete_data_dir.shard2.replica_n4') enabled at server: 
com.sun.jmx.mbeanserver.JmxMBeanServer@76baa2dc
   [junit4]   2> 924905 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory 
solr.hdfs.home=hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home
   [junit4]   2> 924906 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Solr Kerberos Authentication disabled
   [junit4]   2> 924906 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.SolrCore [[delete_data_dir_shard2_replica_n4] ] Opening new SolrCore at 
[/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/shard-6-001/cores/delete_data_dir_shard2_replica_n4],
 
dataDir=[hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node10/data/]
   [junit4]   2> 924906 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node10/data/snapshot_metadata
   [junit4]   2> 924910 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node11/data/index
   [junit4]   2> 924910 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924910 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924910 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924911 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924911 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924911 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924913 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924913 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924913 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924918 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924918 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924918 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924919 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node12/data/index
   [junit4]   2> 924919 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924920 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.MockRandomMergePolicy: 
org.apache.lucene.index.MockRandomMergePolicy@134f508e
   [junit4]   2> 924933 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924933 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924933 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924934 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924934 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.MockRandomMergePolicy: 
org.apache.lucene.index.MockRandomMergePolicy@197e0585
   [junit4]   2> 924935 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924935 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=37, maxMergeAtOnceExplicit=11, maxMergedSegmentMB=0.482421875, 
floorSegmentMB=2.0009765625, forceMergeDeletesPctAllowed=28.63807189861556, 
segmentsPerTier=15.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0, 
deletesPctAllowed=49.87996357847712
   [junit4]   2> 924935 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924936 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924936 INFO  (qtp1305744872-32339) [n:127.0.0.1:44427__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n2 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.MockRandomMergePolicy: 
org.apache.lucene.index.MockRandomMergePolicy@511f1ce2
   [junit4]   2> 924937 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node10/data
   [junit4]   2> 924943 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924943 INFO  (qtp1058315564-32472) [n:127.0.0.1:34088__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n6 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.MockRandomMergePolicy: 
org.apache.lucene.index.MockRandomMergePolicy@349c3c7e
   [junit4]   2> 924977 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory creating directory factory for path 
hdfs://lucene2-us-west.apache.org:34965/solr_hdfs_home/delete_data_dir/core_node10/data/index
   [junit4]   2> 924984 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Number of slabs of block cache [1] with direct 
memory allocation set to [true]
   [junit4]   2> 924984 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Block cache target memory usage, slab size of 
[4194304] will allocate [1] slabs and use ~[4194304] bytes
   [junit4]   2> 924984 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.HdfsDirectoryFactory Creating new single instance HDFS BlockCache
   [junit4]   2> 924994 WARN  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 924999 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.s.b.BlockDirectory Block cache on write is disabled
   [junit4]   2> 924999 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=37, maxMergeAtOnceExplicit=11, maxMergedSegmentMB=0.482421875, 
floorSegmentMB=2.0009765625, forceMergeDeletesPctAllowed=28.63807189861556, 
segmentsPerTier=15.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0, 
deletesPctAllowed=49.87996357847712
   [junit4]   2> 925009 WARN  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 925034 WARN  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 925055 WARN  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = 
requestHandler,name = /dump,class = DumpRequestHandler,attributes = 
{initParams=a, name=/dump, class=DumpRequestHandler},args = 
{defaults={a=A,b=B}}}
   [junit4]   2> 925098 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 925098 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 925098 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 925111 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 925111 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 925111 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 925113 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 925113 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 925113 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 925114 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 925114 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 925117 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=35, maxMergeAtOnceExplicit=13, maxMergedSegmentMB=10.888671875, 
floorSegmentMB=0.76171875, forceMergeDeletesPctAllowed=26.448736341408868, 
segmentsPerTier=17.0, maxCFSSegmentSizeMB=1.5166015625, 
noCFSRatio=0.1173367555155477, deletesPctAllowed=25.551844838455374
   [junit4]   2> 925124 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@3f74de1e[delete_data_dir_shard1_replica_n1] main]
   [junit4]   2> 925125 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 925125 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 925126 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 925126 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 925126 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 925126 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1651636262163972096
   [junit4]   2> 925128 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.UpdateHandler Using UpdateLog implementation: 
org.apache.solr.update.HdfsUpdateLog
   [junit4]   2> 925128 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=21, maxMergeAtOnceExplicit=26, maxMergedSegmentMB=0.7490234375, 
floorSegmentMB=0.3134765625, forceMergeDeletesPctAllowed=3.186622553159739, 
segmentsPerTier=10.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0, 
deletesPctAllowed=49.81053054092058
   [junit4]   2> 925128 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH 
numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
   [junit4]   2> 925128 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.HdfsUpdateLog Initializing HdfsUpdateLog: tlogDfsReplication=2
   [junit4]   2> 925129 INFO  
(searcherExecutor-1747-thread-1-processing-n:127.0.0.1:39905__ 
x:delete_data_dir_shard1_replica_n1 c:delete_data_dir s:shard1) 
[n:127.0.0.1:39905__ c:delete_data_dir s:shard1  
x:delete_data_dir_shard1_replica_n1 ] o.a.s.c.SolrCore 
[delete_data_dir_shard1_replica_n1] Registered new searcher 
Searcher@3f74de1e[delete_data_dir_shard1_replica_n1] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 925132 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 925132 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 925132 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/delete_data_dir/terms/shard1 to Terms{values={core_node7=0}, 
version=0}
   [junit4]   2> 925133 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/delete_data_dir/leaders/shard1
   [junit4]   2> 925139 INFO  (qtp899933926-32195) [n:127.0.0.1:39905__ 
c:delete_data_dir s:shard1  x:delete_data_dir_shard1_replica_n1 ] 
o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
shard shard1: total=2 found=1 timeoutin=14999ms
   [junit4]   2> 925139 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=35, maxMergeAtOnceExplicit=13, maxMergedSegmentMB=10.888671875, 
floorSegmentMB=0.76171875, forceMergeDeletesPctAllowed=26.448736341408868, 
segmentsPerTier=17.0, maxCFSSegmentSizeMB=1.5166015625, 
noCFSRatio=0.1173367555155477, deletesPctAllowed=25.551844838455374
   [junit4]   2> 925140 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.CommitTracker Hard AutoCommit: disabled
   [junit4]   2> 925140 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.CommitTracker Soft AutoCommit: disabled
   [junit4]   2> 925140 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@3912f427[delete_data_dir_shard3_replica_n5] main]
   [junit4]   2> 925142 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class 
org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: 
maxMergeAtOnce=21, maxMergeAtOnceExplicit=26, maxMergedSegmentMB=0.7490234375, 
floorSegmentMB=0.3134765625, forceMergeDeletesPctAllowed=3.186622553159739, 
segmentsPerTier=10.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=1.0, 
deletesPctAllowed=49.81053054092058
   [junit4]   2> 925143 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 925143 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@6bc68e93[delete_data_dir_shard2_replica_n3] main]
   [junit4]   2> 925143 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 925144 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 925144 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1651636262182846464
   [junit4]   2> 925146 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.s.SolrIndexSearcher Opening 
[Searcher@20707f2[delete_data_dir_shard2_replica_n4] main]
   [junit4]   2> 925155 INFO  
(searcherExecutor-1750-thread-1-processing-n:127.0.0.1:35733__ 
x:delete_data_dir_shard3_replica_n5 c:delete_data_dir s:shard3) 
[n:127.0.0.1:35733__ c:delete_data_dir s:shard3  
x:delete_data_dir_shard3_replica_n5 ] o.a.s.c.SolrCore 
[delete_data_dir_shard3_replica_n5] Registered new searcher 
Searcher@3912f427[delete_data_dir_shard3_replica_n5] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 925158 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 925159 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 925159 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: 
/configs/conf1
   [junit4]   2> 925159 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 925159 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using 
ZooKeeperStorageIO:path=/configs/conf1
   [junit4]   2> 925160 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1651636262199623680
   [junit4]   2> 925160 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
   [junit4]   2> 925160 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.u.UpdateLog Could not find max version in index or recent updates, using 
new clock 1651636262199623680
   [junit4]   2> 925162 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/delete_data_dir/terms/shard3 to Terms{values={core_node11=0}, 
version=0}
   [junit4]   2> 925162 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/delete_data_dir/leaders/shard3
   [junit4]   2> 925163 INFO  
(searcherExecutor-1748-thread-1-processing-n:127.0.0.1:43576__ 
x:delete_data_dir_shard2_replica_n3 c:delete_data_dir s:shard2) 
[n:127.0.0.1:43576__ c:delete_data_dir s:shard2  
x:delete_data_dir_shard2_replica_n3 ] o.a.s.c.SolrCore 
[delete_data_dir_shard2_replica_n3] Registered new searcher 
Searcher@6bc68e93[delete_data_dir_shard2_replica_n3] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 925163 INFO  
(searcherExecutor-1752-thread-1-processing-n:127.0.0.1:40245__ 
x:delete_data_dir_shard2_replica_n4 c:delete_data_dir s:shard2) 
[n:127.0.0.1:40245__ c:delete_data_dir s:shard2  
x:delete_data_dir_shard2_replica_n4 ] o.a.s.c.SolrCore 
[delete_data_dir_shard2_replica_n4] Registered new searcher 
Searcher@20707f2[delete_data_dir_shard2_replica_n4] 
main{ExitableDirectoryReader(UninvertingDirectoryReader())}
   [junit4]   2> 925166 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/delete_data_dir/terms/shard2 to Terms{values={core_node9=0}, 
version=0}
   [junit4]   2> 925168 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/delete_data_dir/leaders/shard2
   [junit4]   2> 925168 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
   [junit4]   2> 925169 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/delete_data_dir/terms/shard2 to Terms{values={core_node10=0, 
core_node9=0}, version=1}
   [junit4]   2> 925169 INFO  (qtp1536722153-32363) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.ShardLeaderElectionContextBase make sure parent is created 
/collections/delete_data_dir/leaders/shard2
   [junit4]   2> 925169 INFO  (qtp540668260-32295) [n:127.0.0.1:35733__ 
c:delete_data_dir s:shard3  x:delete_data_dir_shard3_replica_n5 ] 
o.a.s.c.ShardLeaderElectionContext Waiting until we see more replicas up for 
shard shard3: total=2 found=1 timeoutin=14999ms
   [junit4]   2> 925172 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ShardLeaderElectionContext Enough replicas found to continue.
   [junit4]   2> 925172 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ShardLeaderElectionContext I may be the new leader - try and sync
   [junit4]   2> 925172 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.SyncStrategy Sync replicas to 
http://127.0.0.1:43576/_/delete_data_dir_shard2_replica_n3/
   [junit4]   2> 925173 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.PeerSync PeerSync: core=delete_data_dir_shard2_replica_n3 
url=http://127.0.0.1:43576/_ START 
replicas=[http://127.0.0.1:40245/_/delete_data_dir_shard2_replica_n4/] 
nUpdates=100
   [junit4]   2> 925173 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.u.PeerSync PeerSync: core=delete_data_dir_shard2_replica_n3 
url=http://127.0.0.1:43576/_ DONE.  We have no versions.  sync failed.
   [junit4]   2> 925175 INFO  (qtp1536722153-32365) [n:127.0.0.1:40245__ 
c:delete_data_dir s:shard2 r:core_node10 x:delete_data_dir_shard2_replica_n4 ] 
o.a.s.c.S.Request [delete_data_dir_shard2_replica_n4]  webapp=/_ path=/get 
params={distrib=false&qt=/get&fingerprint=false&getVersions=100&wt=javabin&version=2}
 status=0 QTime=1
   [junit4]   2> 925175 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.SyncStrategy Leader's attempt to sync with shard failed, moving to the 
next candidate
   [junit4]   2> 925175 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ShardLeaderElectionContext We failed sync, but we have no versions - we 
can't sync in that case - we were active before, so become leader anyway
   [junit4]   2> 925176 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ShardLeaderElectionContextBase Creating leader registration node 
/collections/delete_data_dir/leaders/shard2/leader after winning as 
/collections/delete_data_dir/leader_elect/shard2/election/75823322245890075-core_node9-n_0000000000
   [junit4]   2> 925177 INFO  (qtp184284234-32384) [n:127.0.0.1:43576__ 
c:delete_data_dir s:shard2  x:delete_data_dir_shard2_replica_n3 ] 
o.a.s.c.ShardLeaderElectionContext I am the new leader: 
http://127.0.0.1:43576/_/delete_data_dir_shard2_replica_n3/ shard2
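(The shard2 election above shows the "no versions" escape hatch: PeerSync
fails because the freshly created candidate has nothing in its update log,
and since an empty core cannot be behind anyone, it takes leadership anyway.
A condensed paraphrase of that decision; the field names are hypothetical
stand-ins, not the real ShardLeaderElectionContext state:

    public class ElectionSketch {
        boolean peerSyncSucceeded;  // false here: "We have no versions. sync failed."
        boolean hasLocalVersions;   // false: empty, freshly created core
        boolean wasActiveBefore;    // true, per the log message

        boolean shouldBecomeLeader() {
            // An empty core that fails sync cannot be behind anyone, so it
            // wins the election anyway (see the log lines above).
            return peerSyncSucceeded || (!hasLocalVersions && wasActiveBefore);
        }
    }
)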
   [junit4]   2> 925279 INFO  (zkCallback-1822-thread-1) [     ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnect

[...truncated too long message...]

parse host and port list: 127.0.0.1:46811
   [junit4]   2> 1082205 INFO  
(TEST-StressHdfsTest.test-seed#[A2E34C95B48671C0]) [     ] o.a.s.c.ZkTestServer 
connecting to 127.0.0.1 46811
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=StressHdfsTest 
-Dtests.method=test -Dtests.seed=A2E34C95B48671C0 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=bg -Dtests.timezone=Asia/Magadan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR    448s J2 | StressHdfsTest.test <<<
   [junit4]    > Throwable #1: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:34966/_/delete_data_dir: Async exception during 
distributed update: java.util.concurrent.TimeoutException: Idle timeout 
expired: 30000/30000 ms
   [junit4]    >        at 
__randomizedtesting.SeedInfo.seed([A2E34C95B48671C0:2AB7734F1A7A1C38]:0)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:665)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:265)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1127)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:896)
   [junit4]    >        at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:828)
   [junit4]    >        at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
   [junit4]    >        at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:940)
   [junit4]    >        at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:903)
   [junit4]    >        at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:918)
   [junit4]    >        at 
org.apache.solr.cloud.hdfs.StressHdfsTest.createAndDeleteCollection(StressHdfsTest.java:203)
   [junit4]    >        at 
org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:103)
   [junit4]    >        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
   [junit4]    >        at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
   [junit4]    >        at java.lang.Thread.run(Thread.java:748)
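(For anyone reading the trace: the failure is not in HDFS itself but in a
client-side deleteByQuery whose distributed update could not complete within
the 30-second idle timeout. A minimal SolrJ 8.x sketch of the failing call
path; the ZooKeeper address and the query string are illustrative, only the
collection name and the deleteByQuery entry point are taken from the log:

    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;

    public class DeleteByQueryRepro {
        public static void main(String[] args) throws Exception {
            // 8.x builder: ZK host list plus optional chroot
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("127.0.0.1:46811"),
                    Optional.empty()).build()) {
                client.setDefaultCollection("delete_data_dir");
                // This is the call that times out in
                // StressHdfsTest.createAndDeleteCollection; the test's actual
                // query string is not shown in the log.
                client.deleteByQuery("*:*");
                client.commit();
            }
        }
    }
)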
   [junit4]   2> 1082207 WARN  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.a.h.h.s.d.DirectoryScanner DirectoryScanner: shutdown has been called
   [junit4]   2> 1082211 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.w.WebAppContext@14f7b726{datanode,/,null,UNAVAILABLE}{/datanode}
   [junit4]   2> 1082211 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.e.j.s.AbstractConnector Stopped 
ServerConnector@4a2fdf19{HTTP/1.1,[http/1.1]}{localhost:0}
   [junit4]   2> 1082211 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] o.e.j.s.session 
node0 Stopped scavenging
   [junit4]   2> 1082211 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@5b5a6a79{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,UNAVAILABLE}
   [junit4]   2> 1082224 WARN  (BP-1693348131-127.0.0.1-1575122790771 
heartbeating to lucene2-us-west.apache.org/127.0.0.1:35322) [     ] 
o.a.h.h.s.d.IncrementalBlockReportManager IncrementalBlockReportManager 
interrupted
   [junit4]   2> 1082224 WARN  (BP-1693348131-127.0.0.1-1575122790771 
heartbeating to lucene2-us-west.apache.org/127.0.0.1:35322) [     ] 
o.a.h.h.s.d.DataNode Ending block pool service for: Block pool 
BP-1693348131-127.0.0.1-1575122790771 (Datanode Uuid 
07e5d9b3-8e54-4a02-bde1-f345152f264e) service to 
lucene2-us-west.apache.org/127.0.0.1:35322
   [junit4]   2> 1082253 WARN  
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/tempDir-001/hdfsBaseDir/data/data1/current/BP-1693348131-127.0.0.1-1575122790771)
 [     ] o.a.h.f.CachingGetSpaceUsed Thread Interrupted waiting to refresh disk 
information: sleep interrupted
   [junit4]   2> 1082277 WARN  
(refreshUsed-/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002/tempDir-001/hdfsBaseDir/data/data2/current/BP-1693348131-127.0.0.1-1575122790771)
 [     ] o.a.h.f.CachingGetSpaceUsed Thread Interrupted waiting to refresh disk 
information: sleep interrupted
   [junit4]   2> 1082324 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.w.WebAppContext@1354999b{hdfs,/,null,UNAVAILABLE}{/hdfs}
   [junit4]   2> 1082325 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.e.j.s.AbstractConnector Stopped 
ServerConnector@39d7e162{HTTP/1.1,[http/1.1]}{lucene2-us-west.apache.org:0}
   [junit4]   2> 1082325 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] o.e.j.s.session 
node0 Stopped scavenging
   [junit4]   2> 1082325 INFO  
(SUITE-StressHdfsTest-seed#[A2E34C95B48671C0]-worker) [     ] 
o.e.j.s.h.ContextHandler Stopped 
o.e.j.s.ServletContextHandler@2b2806b2{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.0-tests.jar!/webapps/static,UNAVAILABLE}
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.StressHdfsTest_A2E34C95B48671C0-002
   [junit4]   2> Nov 30, 2019 2:14:01 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 16 leaked 
thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene84): 
{multiDefault=PostingsFormat(name=Asserting), 
_root_=PostingsFormat(name=LuceneFixedGap), id=FSTOrd50, 
text=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
txt_t=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, 
docValues:{range_facet_l_dv=DocValuesFormat(name=Lucene80), 
multiDefault=DocValuesFormat(name=Direct), 
_root_=DocValuesFormat(name=Asserting), 
intDefault=DocValuesFormat(name=Asserting), 
range_facet_l=DocValuesFormat(name=Asserting), 
_version_=DocValuesFormat(name=Asserting), id_i1=DocValuesFormat(name=Direct), 
range_facet_i_dv=DocValuesFormat(name=Asserting), 
id=DocValuesFormat(name=Lucene80), text=DocValuesFormat(name=Lucene80), 
intDvoDefault=DocValuesFormat(name=Lucene80), 
timestamp=DocValuesFormat(name=Asserting), 
txt_t=DocValuesFormat(name=Lucene80)}, maxPointsInLeafNode=712, 
maxMBSortInHeap=5.028785400416921, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@6b46222a),
 locale=bg, timezone=Asia/Magadan
   [junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle Corporation 
1.8.0_191 (64-bit)/cpus=4,threads=4,free=170255064,total=518520832
   [junit4]   2> NOTE: All tests run in this JVM: 
[TimeRoutedAliasUpdateProcessorTest, StressHdfsTest, 
TimeRoutedAliasUpdateProcessorTest, StressHdfsTest]
   [junit4] Completed [15/15 (4!)] on J2 in 452.90s, 1 test, 1 error <<< 
FAILURES!
   [junit4] 
   [junit4] 
   [junit4] Tests with failures [seed: A2E34C95B48671C0]:
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4]   - org.apache.solr.cloud.hdfs.StressHdfsTest.test
   [junit4] 
   [junit4] 
   [junit4] JVM J0:     0.72 ..  1059.83 =  1059.12s
   [junit4] JVM J1:     0.71 ..  1053.71 =  1053.00s
   [junit4] JVM J2:     0.74 ..  1088.11 =  1087.37s
   [junit4] Execution time total: 18 minutes 8 seconds
   [junit4] Tests summary: 15 suites, 40 tests, 4 errors, 5 ignored (5 
assumptions)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1590:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1117:
 There were test failures: 15 suites, 40 tests, 4 errors, 5 ignored (5 
assumptions) [seed: A2E34C95B48671C0]

Total time: 18 minutes 10 seconds

[repro] Setting last failure code to 256

[repro] Failures w/original seeds at 325e72c45f6420da61907523d4b7361c2ab5c41b:
[repro]   0/5 failed: org.apache.solr.cloud.LegacyCloudClusterPropTest
[repro]   0/5 failed: 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest
[repro]   4/5 failed: org.apache.solr.cloud.hdfs.StressHdfsTest
[repro] git checkout 5a697344ed1be537ef2acdd18aab653283593370
Previous HEAD position was 325e72c... SOLR-13977: solr create -c not working 
under Windows 10
HEAD is now at 5a69734... SOLR-13805: NPE when calling /solr/admin/info/health 
on standalone solr
[repro] Exiting with code 256
Archiving artifacts
[Fast Archiver] No artifacts from Lucene-Solr-repro 
Repro-Lucene-Solr-NightlyTests-8.x#283 to compare, so performing full copy of 
artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
