Build: https://builds.apache.org/job/Lucene-Solr-repro/2700/
[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/46/consoleText
[repro] Revision: f140971bdffa0aa25d04bd43d74431938294d432
[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line: ant test -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.method=test -Dtests.seed=1B990FFF714B6410 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=US/Mountain -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] Repro line: ant test -Dtestcase=RestartWhileUpdatingTest -Dtests.method=test -Dtests.seed=1B990FFF714B6410 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=mt-MT -Dtests.timezone=Australia/Broken_Hill -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: c51838479a83cbc4956ad660712f6e06dd732ff8
[repro] git fetch
[repro] git checkout f140971bdffa0aa25d04bd43d74431938294d432
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro] solr/core
[repro] HdfsRestartWhileUpdatingTest
[repro] RestartWhileUpdatingTest
[repro] ant compile-test
[...truncated 3568 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.HdfsRestartWhileUpdatingTest|*.RestartWhileUpdatingTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=1B990FFF714B6410 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=US/Mountain -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[...truncated 21612 lines...]
[junit4] 2> 246511 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@60c563cf (control_collection_shard1_replica_n1) has a reference count of -1
[junit4] 2> 246516 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@6d8d04b9 (collection1_shard1_replica_n3) has a reference count of -1
[junit4] 2> 246516 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@35c35606 (collection1_shard1_replica_n1) has a reference count of -1
[junit4] 2> 246518 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@57cd7607 (collection1_shard1_replica_n5) has a reference count of -1
[junit4] 2> 246519 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@e77bbc8 (collection1_shard1_replica_n5) has a reference count of -1
[junit4] 2> 246519 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@329d8b0b (collection1_shard1_replica_n3) has a reference count of -1
[junit4] 2> 246519 ERROR (Finalizer) [ ] o.a.s.c.SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@204213bd (collection1_shard1_replica_n1) has a reference count of -1
[junit4] 2> 246533 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
[junit4] 2> 246566 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n3' using configuration from collection collection1, trusted=true
[junit4] 2> 246566 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n3' (registry 'solr.core.collection1.shard1.replica_n3') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@40c91cf4
[junit4] 2> 246567 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [[collection1_shard1_replica_n3] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_1B990FFF714B6410-002/shard-2-001/cores/collection1_shard1_replica_n3], dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_1B990FFF714B6410-002/shard-2-001/cores/collection1_shard1_replica_n3/data/]
[junit4] 2> 246571 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=19, maxMergeAtOnceExplicit=41, maxMergedSegmentMB=97.541015625, floorSegmentMB=1.357421875, forceMergeDeletesPctAllowed=11.115188937090268, segmentsPerTier=12.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.5987367564478258, deletesPctAllowed=33.419722422116344
[junit4] 2> 246582 WARN (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 246655 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
[junit4] 2> 246655 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 246658 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Hard AutoCommit: if uncommitted for 30000ms;
[junit4] 2> 246658 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.CommitTracker Soft AutoCommit: if uncommitted for 3000ms;
[junit4] 2> 246660 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=8, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.17040485074340178]
[junit4] 2> 246661 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@39a3231d[collection1_shard1_replica_n3] main]
[junit4] 2> 246663 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 246663 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 246664 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 246664 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1623129027539632128
[junit4] 2> 246669 INFO (searcherExecutor-170-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@39a3231d[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 246672 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0, core_node4=0}, version=1}
[junit4] 2> 246673 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 246678 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.c.ZkController Core needs to recover:collection1_shard1_replica_n3
[junit4] 2> 246679 INFO (updateExecutor-247-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 246679 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
[junit4] 2> 246680 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy startupVersions is empty
[junit4] 2> 246682 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n3] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n3&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=1837
[junit4] 2> 246688 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/c/u path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 QTime=2
[junit4] 2> 246688 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/c/u path=/admin/ping params={wt=javabin&version=2} status=0 QTime=2
[junit4] 2> 246689 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1_shard1_replica_n3]
[junit4] 2> 246689 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
[junit4] 2> 246689 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Publishing state of core [collection1_shard1_replica_n3] as recovering, leader is [https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/] and I am [https://127.0.0.1:45583/c/u/collection1_shard1_replica_n3/]
[junit4] 2> 246694 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:45583_c%252Fu&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=3979
[junit4] 2> 246694 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Sending prep recovery command to [https://127.0.0.1:34807/c/u]; [WaitForState: action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:45583_c%252Fu&coreNodeName=core_node4&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
[junit4] 2> 246697 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node4, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
[junit4] 2> 246700 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:45583_c%2Fu, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"core":"collection1_shard1_replica_n3","base_url":"https://127.0.0.1:45583/c/u","node_name":"127.0.0.1:45583_c%2Fu","state":"down","type":"NRT"}
[junit4] 2> 246734 INFO (OverseerCollectionConfigSetProcessor-72652085889662981-127.0.0.1:39617_c%2Fu-n_0000000000) [n:127.0.0.1:39617_c%2Fu ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000005 doesn't exist. Requestor may have disconnected from ZooKeeper
[junit4] 2> 246738 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n3:INDEX.sizeInBytes} status=0 QTime=1
[junit4] 2> 246739 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n3:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246741 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n3:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246742 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246744 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246745 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246747 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246748 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246749 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246750 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246752 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246755 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246757 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246758 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246760 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246761 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246763 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246764 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246765 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.control_collection.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246767 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246769 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246770 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246771 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246773 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246774 INFO (qtp2120659010-1078) [n:127.0.0.1:46025_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246775 INFO (qtp2120659010-1074) [n:127.0.0.1:46025_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246776 INFO (qtp2120659010-1075) [n:127.0.0.1:46025_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246778 INFO (qtp2120659010-1076) [n:127.0.0.1:46025_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246779 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={prefix=CONTAINER.fs.usableSpace,CONTAINER.fs.totalSpace,CORE.coreName&wt=javabin&version=2&group=solr.node,solr.core} status=0 QTime=0
[junit4] 2> 246781 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246782 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246783 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.collection1.shard1.replica_n1:INDEX.sizeInBytes} status=0 QTime=0
[junit4] 2> 246784 INFO (OverseerThreadFactory-128-thread-5-processing-n:127.0.0.1:39617_c%2Fu) [n:127.0.0.1:39617_c%2Fu c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:46025_c%2Fu for creating new replica of shard shard1 for collection collection1
[junit4] 2> 246785 INFO (OverseerThreadFactory-128-thread-5-processing-n:127.0.0.1:39617_c%2Fu) [n:127.0.0.1:39617_c%2Fu c:collection1 s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
[junit4] 2> 246805 INFO (watches-238-thread-1) [ ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:45583_c%2Fu, coreNodeName=core_node4, onlyIfActiveCheckResult=false, nodeProps: core_node4:{"core":"collection1_shard1_replica_n3","base_url":"https://127.0.0.1:45583/c/u","node_name":"127.0.0.1:45583_c%2Fu","state":"recovering","type":"NRT"}
[junit4] 2> 246805 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:45583_c%252Fu&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node4&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=107
[junit4] 2> 246822 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5] o.a.s.h.a.CoreAdminOperation core create command qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n5&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 247306 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/] - recoveringAfterStartup=[true]
[junit4] 2> 247306 WARN (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.PeerSyncWithLeader no frame of reference to tell if we've missed updates
[junit4] 2> 247306 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy PeerSync Recovery was not successful - trying replication.
[junit4] 2> 247307 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Starting Replication Recovery.
[junit4] 2> 247307 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Attempting to replicate from [https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/].
[junit4] 2> 247309 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 start commit{_version_=1623129028215963648,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
[junit4] 2> 247310 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
[junit4] 2> 247312 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.DirectUpdateHandler2 end_commit_flush
[junit4] 2> 247367 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.DistributedUpdateProcessor Ignoring commit while not ACTIVE - state: BUFFERING replay: false
[junit4] 2> 247367 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&waitSearcher=true&openSearcher=false&commit=true&softCommit=false&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&commit_end_point=replicas&wt=javabin&version=2&expungeDeletes=false}{commit=} 0 0
[junit4] 2> 247368 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={waitSearcher=true&openSearcher=false&commit=true&softCommit=false&wt=javabin&version=2}{commit=} 0 60
[junit4] 2> 247371 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/c/u path=/replication params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 QTime=0
[junit4] 2> 247371 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's generation: 1
[junit4] 2> 247371 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Master's version: 0
[junit4] 2> 247371 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's generation: 1
[junit4] 2> 247371 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher Slave's version: 0
[junit4] 2> 247371 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.h.IndexFetcher New index in Master. Deleting mine...
[junit4] 2> 247375 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.s.SolrIndexSearcher Opening [Searcher@785aa8d6[collection1_shard1_replica_n3] main]
[junit4] 2> 247378 INFO (searcherExecutor-170-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.SolrCore [collection1_shard1_replica_n3] Registered new searcher Searcher@785aa8d6[collection1_shard1_replica_n3] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 247378 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy No replay needed.
[junit4] 2> 247378 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Replication Recovery was successful.
[junit4] 2> 247378 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Registering as Active after recovery.
[junit4] 2> 247379 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Updating version bucket highest from index after successful recovery.
[junit4] 2> 247379 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1623129028289363968
[junit4] 2> 247381 INFO (recoveryExecutor-249-thread-1-processing-n:127.0.0.1:45583_c%2Fu x:collection1_shard1_replica_n3 c:collection1 s:shard1 r:core_node4) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[true]
[junit4] 2> 247845 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.SolrConfig Using Lucene MatchVersion: 9.0.0
[junit4] 2> 247862 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.IndexSchema [collection1_shard1_replica_n5] Schema name=test
[junit4] 2> 247959 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
[junit4] 2> 247985 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.CoreContainer Creating SolrCore 'collection1_shard1_replica_n5' using configuration from collection collection1, trusted=true
[junit4] 2> 247986 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.core.collection1.shard1.replica_n5' (registry 'solr.core.collection1.shard1.replica_n5') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@40c91cf4
[junit4] 2> 247986 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [[collection1_shard1_replica_n5] ] Opening new SolrCore at [/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_1B990FFF714B6410-002/shard-3-001/cores/collection1_shard1_replica_n5], dataDir=[/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsRestartWhileUpdatingTest_1B990FFF714B6410-002/shard-3-001/cores/collection1_shard1_replica_n5/data/]
[junit4] 2> 247990 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.TieredMergePolicy: [TieredMergePolicy: maxMergeAtOnce=19, maxMergeAtOnceExplicit=41, maxMergedSegmentMB=97.541015625, floorSegmentMB=1.357421875, forceMergeDeletesPctAllowed=11.115188937090268, segmentsPerTier=12.0, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.5987367564478258, deletesPctAllowed=33.419722422116344
[junit4] 2> 248001 WARN (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.RequestHandlers INVALID paramSet a in requestHandler {type = requestHandler,name = /dump,class = DumpRequestHandler,attributes = {initParams=a, name=/dump, class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 248068 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.UpdateHandler Using UpdateLog implementation: org.apache.solr.update.UpdateLog
[junit4] 2> 248068 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 248071 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.CommitTracker Hard AutoCommit: if uncommitted for 30000ms;
[junit4] 2> 248071 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.CommitTracker Soft AutoCommit: if uncommitted for 3000ms;
[junit4] 2> 248073 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.RandomMergePolicy RandomMergePolicy wrapping class org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=8, maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=true, maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.17040485074340178]
[junit4] 2> 248079 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.SolrIndexSearcher Opening [Searcher@4baac4a8[collection1_shard1_replica_n5] main]
[junit4] 2> 248081 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.r.ManagedResourceStorage Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 248082 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.r.ManagedResourceStorage Loaded null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 248082 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.h.ReplicationHandler Commits will be reserved for 10000ms.
[junit4] 2> 248083 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Could not find max version in index or recent updates, using new clock 1623129029027561472
[junit4] 2> 248087 INFO (searcherExecutor-175-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.SolrCore [collection1_shard1_replica_n5] Registered new searcher Searcher@4baac4a8[collection1_shard1_replica_n5] main{ExitableDirectoryReader(UninvertingDirectoryReader())}
[junit4] 2> 248090 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0, core_node6=0, core_node4=0}, version=2}
[junit4] 2> 248090 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.ShardLeaderElectionContextBase make sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 248094 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.c.ZkController Core needs to recover:collection1_shard1_replica_n5
[junit4] 2> 248096 INFO (updateExecutor-265-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DefaultSolrCoreState Running recovery
[junit4] 2> 248096 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
[junit4] 2> 248097 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy startupVersions is empty
[junit4] 2> 248100 INFO (qtp2120659010-1077) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 x:collection1_shard1_replica_n5] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n5&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT} status=0 QTime=1278
[junit4] 2> 248102 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/c/u path=/admin/ping params={wt=javabin&version=2} hits=0 status=0 QTime=1
[junit4] 2> 248102 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.S.Request [collection1_shard1_replica_n1] webapp=/c/u path=/admin/ping params={wt=javabin&version=2} status=0 QTime=1
[junit4] 2> 248102 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[collection1_shard1_replica_n5]
[junit4] 2> 248102 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.UpdateLog Starting to buffer updates. FSUpdateLog{state=ACTIVE, tlog=null}
[junit4] 2> 248102 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Publishing state of core [collection1_shard1_replica_n5] as recovering, leader is [https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/] and I am [https://127.0.0.1:46025/c/u/collection1_shard1_replica_n5/]
[junit4] 2> 248106 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Sending prep recovery command to [https://127.0.0.1:34807/c/u]; [WaitForState: action=PREPRECOVERY&core=collection1_shard1_replica_n1&nodeName=127.0.0.1:46025_c%252Fu&coreNodeName=core_node6&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
[junit4] 2> 248110 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp Going to wait for coreNodeName: core_node6, state: recovering, checkLive: true, onlyIfLeader: true, onlyIfLeaderActive: true
[junit4] 2> 248110 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu x:collection1_shard1_replica_n1] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=down, localState=active, nodeName=127.0.0.1:46025_c%2Fu, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"core":"collection1_shard1_replica_n5","base_url":"https://127.0.0.1:46025/c/u","node_name":"127.0.0.1:46025_c%2Fu","state":"down","type":"NRT"}
[junit4] 2> 248111 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={node=127.0.0.1:46025_c%252Fu&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2} status=0 QTime=5387
[junit4] 2> 248117 INFO (TEST-HdfsRestartWhileUpdatingTest.test-seed#[1B990FFF714B6410]) [ ] o.a.s.SolrTestCaseJ4 ###Starting test
[junit4] 2> 248154 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/control_collection/terms/shard1 to Terms{values={core_node2=1}, version=1}
[junit4] 2> 248154 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[3-0 (1623129029088378881)]} 0 13
[junit4] 2> 248155 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
[junit4] 2> 248155 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-0 (1623129029078941696)]} 0 25
[junit4] 2> 248201 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[0-0 (1623129029117739008)]} 0 45
[junit4] 2> 248202 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[1-0 (1623129029088378880)]} 0 62
[junit4] 2> 248212 INFO (watches-238-thread-1) [ ] o.a.s.h.a.PrepRecoveryOp In WaitForState(recovering): collection=collection1, shard=shard1, thisCore=collection1_shard1_replica_n1, leaderDoesNotNeedRecovery=false, isLeader? true, live=true, checkLive=true, currentState=recovering, localState=active, nodeName=127.0.0.1:46025_c%2Fu, coreNodeName=core_node6, onlyIfActiveCheckResult=false, nodeProps: core_node6:{"core":"collection1_shard1_replica_n5","base_url":"https://127.0.0.1:46025/c/u","node_name":"127.0.0.1:46025_c%2Fu","state":"recovering","type":"NRT"}
[junit4] 2> 248212 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu x:collection1_shard1_replica_n1] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={nodeName=127.0.0.1:46025_c%252Fu&onlyIfLeaderActive=true&core=collection1_shard1_replica_n1&coreNodeName=core_node6&action=PREPRECOVERY&checkLive=true&state=recovering&onlyIfLeader=true&wt=javabin&version=2} status=0 QTime=102
[junit4] 2> 248235 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.ZkShardTerms Successful update of terms at /collections/collection1/terms/shard1 to Terms{values={core_node6=0, core_node2=1, core_node4=1}, version=3}
[junit4] 2> 248247 INFO (zkCallback-270-thread-1) [ ] o.a.s.c.RecoveringCoreTermWatcher Start recovery on core_node6 because core's term is less than leader's term
[junit4] 2> 248248 WARN (updateExecutor-265-thread-2) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Stopping recovery for core=[collection1_shard1_replica_n5] coreNodeName=[core_node6]
[junit4] 2> 248287 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-0 (1623129029150244864)]} 0 20
[junit4] 2> 248291 ERROR (updateExecutor-229-thread-1-processing-x:collection1_shard1_replica_n1 r:core_node2 null n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.s.i.ConcurrentUpdateHttp2SolrClient Error consuming and closing http response stream.
[junit4] 2> java.nio.channels.AsynchronousCloseException: null
[junit4] 2> 	at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2> 	at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
[junit4] 2> 	at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2> 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283) ~[java/:?]
[junit4] 2> 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176) ~[java/:?]
[junit4] 2> 	at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) ~[metrics-core-3.2.6.jar:3.2.6]
[junit4] 2> 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[java/:?]
[junit4] 2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
[junit4] 2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
[junit4] 2> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
[junit4] 2> 248292 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-0 (1623129029150244864)]} 0 91
[junit4] 2> 248308 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-1 (1623129029262442496)]} 0 0
[junit4] 2> 248335 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[1-0 (1623129029191139328)]} 0 19
[junit4] 2> 248345 ERROR (updateExecutor-229-thread-1-processing-x:collection1_shard1_replica_n1 r:core_node2 null n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.s.i.ConcurrentUpdateHttp2SolrClient Error consuming and closing http response stream.
[junit4] 2> java.nio.channels.AsynchronousCloseException: null
[junit4] 2> 	at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2> 	at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
[junit4] 2> 	at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2> 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283) ~[java/:?]
[junit4] 2> 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176) ~[java/:?]
[junit4] 2> 	at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) ~[metrics-core-3.2.6.jar:3.2.6]
[junit4] 2> 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[java/:?]
[junit4] 2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
[junit4] 2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
[junit4] 2> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
[junit4] 2> 248346 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[1-0 (1623129029191139328)]} 0 106
[junit4] 2> 248351 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[0-0 (1623129029220499456)]} 0 47
[junit4] 2> 248395 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[1-1 (1623129029342134272)]} 0 12
[junit4] 2> 248463 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-1 (1623129029295996928)]} 0 68
[junit4] 2> 248494 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-1 (1623129029295996928)]} 0 155
[junit4] 2> 248529 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[0-0 (1623129029220499456)]} 0 262
[junit4] 2> 248548 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[0-1 (1623129029514100736)]} 0 1
[junit4] 2> 248551 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[2-0 (-1623129029515149312)]} 0 3
[junit4] 2> 248561 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[1-1 (1623129029356814336)]} 0 1
[junit4] 2> 248552 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[3-0 (1623129029148147712)]} 0 4
[junit4] 2> 248590 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[0-1 (1623129029553946624)]} 0 0
[junit4] 2> 248592 ERROR (updateExecutor-229-thread-3-processing-x:collection1_shard1_replica_n1 r:core_node2 null n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.c.s.i.ConcurrentUpdateHttp2SolrClient Error consuming and closing http response stream.
[junit4] 2> java.nio.channels.AsynchronousCloseException: null
[junit4] 2> 	at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:316) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2> 	at java.io.InputStream.read(InputStream.java:101) ~[?:1.8.0_191]
[junit4] 2> 	at org.eclipse.jetty.client.util.InputStreamResponseListener$Input.read(InputStreamResponseListener.java:287) ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
[junit4] 2> 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:283) ~[java/:?]
[junit4] 2> 	at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176) ~[java/:?]
[junit4] 2> 	at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) ~[metrics-core-3.2.6.jar:3.2.6]
[junit4] 2> 	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[java/:?]
[junit4] 2> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
[junit4] 2> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
[junit4] 2> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
[junit4] 2> 248592 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[0-1 (1623129029553946624)]} 0 7
[junit4] 2> 248594 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[0-0 (-1623129029563383808)]} 0 0
[junit4] 2> 248635 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[2-0 (-1623129029552898048)]} 0 0
[junit4] 2> 248655 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[1-1 (1623129029356814336)]} 0 258
[junit4] 2> 248657 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[1-0 (-1623129029629444096)]} 0 0
[junit4] 2> 248663 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[3-0 (1623129029148147712)]} 0 465
[junit4] 2> 248663 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[0-0 (-1623129029564432384)]} 0 1
[junit4] 2> 248681 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[1-0 (-1623129029631541248)]} 0 1
[junit4] 2> 248681 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[1-0 (-1623129029631541248)]} 0 22
[junit4] 2> 248683 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[1-2 (1623129029656707072)]} 0 0
[junit4] 2> 248691 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[0-0 (-1623129029564432384)]} 0 95
[junit4] 2> 248691 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[2-0 (-1623129029552898048)]} 0 107
[junit4] 2> 248693 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[3-1 (1623129029666144256)]} 0 1
[junit4] 2> 248694 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-2 (1623129029668241408)]} 0 0
[junit4] 2> 248695 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[0-2 (1623129029667192832)]} 0 2
[junit4] 2> 248697 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-2 (1623129029669289984)]} 0 0
[junit4] 2> 248699 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-2 (1623129029669289984)]} 0 3
[junit4] 2> 248699 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[1-2 (1623129029667192832)]} 0 4
[junit4] 2> 248708 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[3-1 (1623129029668241408)]} 0 11
[junit4] 2> 248708 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[1-2 (1623129029667192832)]} 0 15
[junit4] 2> 248708 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[3-1 (1623129029668241408)]} 0 14
[junit4] 2> 248711 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[0-2 (1623129029672435712)]} 0 12
[junit4] 2> 248712 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-3 (1623129029686067200)]} 0 0
[junit4] 2> 248726 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[0-2 (1623129029672435712)]} 0 28
[junit4] 2> 248727 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Attempting to PeerSync from [https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/] - recoveringAfterStartup=[true]
[junit4] 2> 248727 WARN (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.PeerSyncWithLeader no frame of reference to tell if we've missed updates
[junit4] 2> 248727 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy PeerSync Recovery was not successful - trying replication.
[junit4] 2> 248727 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy RecoveryStrategy has been closed [junit4] 2> 248727 INFO (recoveryExecutor-267-thread-1-processing-n:127.0.0.1:46025_c%2Fu x:collection1_shard1_replica_n5 c:collection1 s:shard1 r:core_node6) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.RecoveryStrategy Finished recovery process, successful=[false] [junit4] 2> 248728 INFO (updateExecutor-265-thread-2) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.u.DefaultSolrCoreState Running recovery [junit4] 2> 248728 INFO (updateExecutor-265-thread-2) [n:127.0.0.1:46025_c%2Fu c:collection1 s:shard1 r:core_node6 x:collection1_shard1_replica_n5] o.a.s.c.ActionThrottle Throttling recovery attempts - waiting for 9367ms [junit4] 2> 248730 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[3-0 (-1623129029705990144)]} 0 0 [junit4] 2> 248730 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-3 (1623129029702844416)]} 0 0 [junit4] 2> 248731 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory 
[control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[0-3 (1623129029705990145)]} 0 0 [junit4] 2> 248733 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[3-0 (-1623129029707038720)]} 0 0 [junit4] 2> 248734 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[3-0 (-1623129029707038720)]} 0 3 [junit4] 2> 248734 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-3 (1623129029702844416)]} 0 7 [junit4] 2> 248747 INFO (OverseerCollectionConfigSetProcessor-72652085889662981-127.0.0.1:39617_c%2Fu-n_0000000000) [n:127.0.0.1:39617_c%2Fu ] o.a.s.c.OverseerTaskQueue Response ZK path: /overseer/collection-queue-work/qnr-0000000008 doesn't exist. 
Requestor may have disconnected from ZooKeeper [junit4] 2> 248774 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[3-2 (1623129029751078912)]} 0 38 [junit4] 2> 248774 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[1-3 (1623129029752127488)]} 0 0 [junit4] 2> 248779 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[3-2 (1623129029753176064)]} 0 0 [junit4] 2> 248779 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[1-3 (1623129029753176065)]} 0 2 [junit4] 2> 248779 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[3-2 (1623129029753176064)]} 0 4 [junit4] 2> 248780 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory 
[control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[2-1 (-1623129029758418944)]} 0 5 [junit4] 2> 248782 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[3-3 (1623129029759467520)]} 0 0 [junit4] 2> 248785 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[1-3 (1623129029753176065)]} 0 9 [junit4] 2> 248787 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[1-1 (-1623129029765758976)]} 0 0 [junit4] 2> 248788 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[3-3 (1623129029761564672)]} 0 3 [junit4] 2> 248788 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[2-1 (-1623129029759467520)]} 0 2 [junit4] 2> 248791 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 
s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[1-1 (-1623129029766807552)]} 0 0 [junit4] 2> 248804 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[2-1 (-1623129029759467520)]} 0 23 [junit4] 2> 248806 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-4 (1623129029784633344)]} 0 0 [junit4] 2> 248809 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[3-3 (1623129029761564672)]} 0 26 [junit4] 2> 248809 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-4 (1623129029786730496)]} 0 0 [junit4] 2> 248814 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update 
params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[0-3 (1623129029790924800)]} 0 0 [junit4] 2> 248827 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[0-3 (1623129029790924800)]} 0 16 [junit4] 2> 248827 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[3-1 (-1623129029807702016)]} 0 0 [junit4] 2> 248829 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[0-1 (-1623129029809799168)]} 0 0 [junit4] 2> 248829 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[1-1 (-1623129029766807552)]} 0 41 [junit4] 2> 248830 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-4 (1623129029786730496)]} 0 22 [junit4] 2> 248831 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update 
params={wt=javabin&version=2&CONTROL=TRUE}{add=[1-4 (1623129029811896320)]} 0 0 [junit4] 2> 248836 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-5 (1623129029816090624)]} 0 0 [junit4] 2> 248837 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[0-1 (-1623129029813993472)]} 0 0 [junit4] 2> 248839 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[0-1 (-1623129029813993472)]} 0 6 [junit4] 2> 248839 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-5 (1623129029818187776)]} 0 1 [junit4] 2> 248841 INFO (qtp1479506840-1034) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[1-4 (1623129029818187777)]} 0 1 [junit4] 2> 248841 INFO (qtp968959859-957) [n:127.0.0.1:39617_c%2Fu c:control_collection 
s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[0-4 (1623129029822382080)]} 0 0 [junit4] 2> 248841 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[3-1 (-1623129029809799168)]} 0 6 [junit4] 2> 248842 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-5 (1623129029818187776)]} 0 4 [junit4] 2> 248843 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-6 (1623129029824479232)]} 0 0 [junit4] 2> 248845 INFO (qtp695705683-1018) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[3-1 (-1623129029809799168)]} 0 16 [junit4] 2> 248848 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[3-4 (1623129029829722112)]} 0 0 [junit4] 2> 248848 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[1-4 (1623129029818187777)]} 0 11 [junit4] 2> 248849 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-6 (1623129029826576384)]} 0 0 [junit4] 2> 248850 INFO (qtp968959859-958) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[1-5 (1623129029831819264)]} 0 0 [junit4] 2> 248850 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-6 (1623129029826576384)]} 0 5 [junit4] 2> 248852 INFO (qtp968959859-959) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{delete=[2-2 (-1623129029833916416)]} 0 0 [junit4] 2> 248854 INFO (qtp1479506840-1033) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[3-4 (1623129029832867840)]} 0 1 [junit4] 2> 248854 INFO 
(qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[0-4 (1623129029826576385)]} 0 8 [junit4] 2> 248855 INFO (qtp695705683-1021) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[3-4 (1623129029832867840)]} 0 4 [junit4] 2> 248856 INFO (qtp695705683-1020) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[0-4 (1623129029826576385)]} 0 11 [junit4] 2> 248858 INFO (qtp1479506840-1031) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{delete=[2-2 (-1623129029837062144)]} 0 0 [junit4] 2> 248859 INFO (qtp695705683-1017) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{delete=[2-2 (-1623129029837062144)]} 0 3 [junit4] 2> 248861 INFO (qtp968959859-955) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[2-7 (1623129029842305024)]} 
0 1 [junit4] 2> 248863 INFO (qtp968959859-956) [n:127.0.0.1:39617_c%2Fu c:control_collection s:shard1 r:core_node2 x:control_collection_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [control_collection_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2&CONTROL=TRUE}{add=[0-5 (1623129029839159296)]} 0 5 [junit4] 2> 248867 INFO (qtp1479506840-1032) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[2-7 (1623129029844402176)]} 0 3 [junit4] 2> 248867 INFO (qtp695705683-1019) [n:127.0.0.1:34807_c%2Fu c:collection1 s:shard1 r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n1] webapp=/c/u path=/update params={wt=javabin&version=2}{add=[2-7 (1623129029844402176)]} 0 5 [junit4] 2> 248869 INFO (qtp1479506840-1030) [n:127.0.0.1:45583_c%2Fu c:collection1 s:shard1 r:core_node4 x:collection1_shard1_replica_n3] o.a.s.u.p.LogUpdateProcessorFactory [collection1_shard1_replica_n3] webapp=/c/u path=/update params={update.distrib=FROMLEADER&distrib.from=https://127.0.0.1:34807/c/u/collection1_shard1_replica_n1/&wt=javabin&version=2}{add=[0-5 (1623129029846499328)]} 0 0 [junit4] 2> [...truncated too long message...] 
nt.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    5) Thread[id=1143, name=searcherExecutor-209-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    6) Thread[id=971, name=searcherExecutor-175-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >         at __randomizedtesting.SeedInfo.seed([1B990FFF714B6410]:0)
   [junit4]    > Throwable #3: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   [junit4]    >    1) Thread[id=797, name=searcherExecutor-133-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    2) Thread[id=1105, name=searcherExecutor-195-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    3) Thread[id=946, name=searcherExecutor-170-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    4) Thread[id=1181, name=searcherExecutor-223-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    5) Thread[id=1143, name=searcherExecutor-209-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >    6) Thread[id=971, name=searcherExecutor-175-thread-1, state=WAITING, group=TGRP-RestartWhileUpdatingTest]
   [junit4]    >         at sun.misc.Unsafe.park(Native Method)
   [junit4]    >         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
   [junit4]    >         at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
   [junit4]    >         at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
   [junit4]    >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]    >         at java.lang.Thread.run(Thread.java:748)
   [junit4]    >         at __randomizedtesting.SeedInfo.seed([1B990FFF714B6410]:0)
   [junit4] Completed [10/10 (5!)] on J0 in 154.68s, 1 test, 1 failure, 2 errors <<< FAILURES!
   [junit4]
   [junit4]
   [junit4] Tests with failures [seed: 1B990FFF714B6410]:
   [junit4]   - org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest (suite)
   [junit4]   - org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest.test
   [junit4]   - org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest (suite)
   [junit4]   - org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest.test
   [junit4]   - org.apache.solr.cloud.RestartWhileUpdatingTest (suite)
   [junit4]
   [junit4]
   [junit4] JVM J0:     1.06 ..   591.77 =   590.70s
   [junit4] JVM J1:     0.82 ..   193.57 =   192.75s
   [junit4] JVM J2:     0.83 ..   399.71 =   398.88s
   [junit4] Execution time total: 9 minutes 51 seconds
   [junit4] Tests summary: 10 suites, 10 tests, 9 suite-level errors, 2 failures, 5 ignored

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1572: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-repro/lucene/common-build.xml:1099: There were test failures: 10 suites, 10 tests, 9 suite-level errors, 2 failures, 5 ignored [seed: 1B990FFF714B6410]

Total time: 9 minutes 54 seconds
[repro] Setting last failure code to 256
[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.RestartWhileUpdatingTest
[repro]   4/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro] git checkout c51838479a83cbc4956ad660712f6e06dd732ff8
Previous HEAD position was f140971... Ref Guide: fix typos and clarify some language for autoscaling docs
HEAD is now at c518384... SOLR-13139: removing illegal date range syntax from the ref guide.
[repro] Exiting with code 256
Archiving artifacts
[Fast Archiver] No artifacts from Lucene-Solr-repro Repro-Lucene-Solr-Tests-8.x#8 to compare, so performing full copy of artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
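[Editor's note: the failing seed can be re-run locally using the [repro] line reported near the top of this log. The sketch below is a hypothetical local invocation, not part of the Jenkins output: it assumes a Lucene-Solr checkout at revision f140971 and substitutes a placeholder path for -Dtests.linedocsfile, since the Jenkins workspace path will not exist locally; all other flags are copied verbatim from the repro line.]

```shell
# Hypothetical local repro of the failing seed. Run from the solr/core module
# of a Lucene-Solr checkout at revision f140971bdffa0aa25d04bd43d74431938294d432.
# /path/to/enwiki.random.lines.txt is a placeholder for a local copy of the
# nightly line-docs test data file.
cd solr/core
ant test -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.method=test \
  -Dtests.seed=1B990FFF714B6410 -Dtests.multiplier=2 -Dtests.nightly=true \
  -Dtests.slow=true -Dtests.badapples=true \
  -Dtests.linedocsfile=/path/to/enwiki.random.lines.txt \
  -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=US/Mountain \
  -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
```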
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org