Build: https://ci-builds.apache.org/job/Lucene/job/Lucene-Solr-Tests-8.11/624/
2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsNNFailoverTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.hdfs.HdfsNNFailoverTest:
   1) Thread[id=40104, name=Command processor, state=WAITING, group=TGRP-HdfsNNFailoverTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.hdfs.HdfsNNFailoverTest:
   1) Thread[id=40104, name=Command processor, state=WAITING, group=TGRP-HdfsNNFailoverTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
        at __randomizedtesting.SeedInfo.seed([559316B2CFD74F2C]:0)
FAILED:  junit.framework.TestSuite.org.apache.solr.core.backup.repository.HdfsBackupRepositoryIntegrationTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.backup.repository.HdfsBackupRepositoryIntegrationTest:
   1) Thread[id=29470, name=Command processor, state=WAITING, group=TGRP-HdfsBackupRepositoryIntegrationTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.backup.repository.HdfsBackupRepositoryIntegrationTest:
   1) Thread[id=29470, name=Command processor, state=WAITING, group=TGRP-HdfsBackupRepositoryIntegrationTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
        at __randomizedtesting.SeedInfo.seed([559316B2CFD74F2C]:0)
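Both failures report the same leak pattern: a non-daemon HDFS "Command processor" worker parked in LinkedBlockingQueue.take() is still alive when the suite tears down, so randomizedtesting's thread-leak check trips. The sketch below is a minimal stdlib illustration of that pattern (the class, thread name, and shutdown protocol here are illustrative stand-ins, not Hadoop's actual BPServiceActor implementation): a worker blocked in take() sits in WAITING forever, and only an explicit interrupt lets it exit.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CommandProcessorSketch {
    // Illustrative stand-in for a command-processing loop: the worker
    // parks in LinkedBlockingQueue.take() whenever the queue is empty.
    static Thread startWorker(LinkedBlockingQueue<Runnable> queue) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run(); // thread is WAITING while queue is empty
                }
            } catch (InterruptedException e) {
                // Interruption is the only way out of take(); exit cleanly.
            }
        }, "Command processor");
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        Thread worker = startWorker(queue);

        queue.put(() -> System.out.println("processed one command"));
        Thread.sleep(200); // let the worker drain the queue and park again

        // Typically Thread.State.WAITING here, matching the leaked trace above.
        System.out.println("worker state: " + worker.getState());

        // Without this interrupt the non-daemon worker outlives the "suite",
        // which is exactly what the ThreadLeakError is flagging.
        worker.interrupt();
        worker.join(TimeUnit.SECONDS.toMillis(5));
        System.out.println("worker alive after interrupt: " + worker.isAlive());
    }
}
```

In the real failures the fix belongs on the shutdown side (making sure the MiniDFSCluster's datanode threads are stopped or whitelisted before suite teardown), not in the test bodies themselves.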
Build Log:
[...truncated 15524 lines...]
[junit4] Suite: org.apache.solr.cloud.hdfs.HdfsNNFailoverTest
[junit4] 2> 2059040 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.SolrTestCase Setting 'solr.default.confdir' system property to
test-framework derived value of
'/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/server/solr/configsets/_default/conf'
[junit4] 2> 2059040 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks:
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
[junit4] 2> 2059042 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.u.ErrorLogMuter Closing ErrorLogMuter-regex-20357 after mutting 0 log
messages
[junit4] 2> 2059042 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.u.ErrorLogMuter Creating ErrorLogMuter-regex-20358 for ERROR logs
matching regex: ignore_exception
[junit4] 2> 2059043 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.SolrTestCaseJ4 Created dataDir:
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/data-dir-172-001
[junit4] 2> 2059043 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true)
w/NUMERIC_DOCVALUES_SYSPROP=false
[junit4] 2> 2059044 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via:
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
[junit4] 2> 2059044 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property:
/mlmfo/vt
[junit4] 1> Formatting using clusterid: testClusterID
[junit4] 2> 2059154 WARN
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.h.s.a.s.AuthenticationFilter Unable to initialize FileSignerSecretProvider,
falling back to use random secrets. Reason: access denied
("java.io.FilePermission" "/home/jenkins/hadoop-http-auth-signature-secret"
"read")
[junit4] 2> 2059154 WARN
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 2059156 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git:
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
[junit4] 2> 2059157 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 2059157 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 2059157 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.session node0 Scavenging every 660000ms
[junit4] 2> 2059159 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.h.ContextHandler Started
o.e.j.s.ServletContextHandler@66932d7c{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,AVAILABLE}
[junit4] 2> 2059288 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.h.ContextHandler Started
o.e.j.w.WebAppContext@2ce13d39{hdfs,/,file:///home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/jetty-localhost_localdomain-44457-hadoop-hdfs-3_2_4-tests_jar-_-any-4885732508701714943/webapp/,AVAILABLE}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/hdfs}
[junit4] 2> 2059289 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.AbstractConnector Started ServerConnector@627bb613{HTTP/1.1,
(http/1.1)}{localhost.localdomain:44457}
[junit4] 2> 2059289 INFO
(SUITE-HdfsNNFailoverTest-seed#[559316B2CFD74F2C]-worker) [ ]
o.e.j.s.Server Started @2059320ms
[junit4] 2> 2059375 WARN (Listener at localhost.localdomain/38059) [
] o.a.h.s.a.s.AuthenticationFilter Unable to initialize
FileSignerSecretProvider, falling back to use random secrets. Reason: access
denied ("java.io.FilePermission"
"/home/jenkins/hadoop-http-auth-signature-secret" "read")
[junit4] 2> 2059378 WARN (Listener at localhost.localdomain/38059) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 2059380 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git:
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
[junit4] 2> 2059382 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 2059383 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 2059383 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.session node0 Scavenging every 660000ms
[junit4] 2> 2059385 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.h.ContextHandler Started
o.e.j.s.ServletContextHandler@39811499{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,AVAILABLE}
[junit4] 2> 2059513 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.h.ContextHandler Started
o.e.j.w.WebAppContext@5a36ac5d{datanode,/,file:///home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/jetty-localhost-34207-hadoop-hdfs-3_2_4-tests_jar-_-any-7418535199758633013/webapp/,AVAILABLE}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/datanode}
[junit4] 2> 2059513 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.AbstractConnector Started ServerConnector@5c1c9eea{HTTP/1.1,
(http/1.1)}{localhost:34207}
[junit4] 2> 2059513 INFO (Listener at localhost.localdomain/38059) [
] o.e.j.s.Server Started @2059545ms
[junit4] 2> 2059547 WARN (Listener at localhost.localdomain/35077) [
] o.a.h.s.a.s.AuthenticationFilter Unable to initialize
FileSignerSecretProvider, falling back to use random secrets. Reason: access
denied ("java.io.FilePermission"
"/home/jenkins/hadoop-http-auth-signature-secret" "read")
[junit4] 2> 2059550 WARN (Listener at localhost.localdomain/35077) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
[junit4] 2> 2059552 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git:
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
[junit4] 2> 2059554 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 2059555 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 2059555 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.session node0 Scavenging every 660000ms
[junit4] 2> 2059555 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.h.ContextHandler Started
o.e.j.s.ServletContextHandler@7b3826a4{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,AVAILABLE}
[junit4] 2> 2059679 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.h.ContextHandler Started
o.e.j.w.WebAppContext@18a4c7a7{datanode,/,file:///home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/jetty-localhost-35741-hadoop-hdfs-3_2_4-tests_jar-_-any-4553913370640826775/webapp/,AVAILABLE}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/datanode}
[junit4] 2> 2059684 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.AbstractConnector Started ServerConnector@7956ab1b{HTTP/1.1,
(http/1.1)}{localhost:35741}
[junit4] 2> 2059684 INFO (Listener at localhost.localdomain/35077) [
] o.e.j.s.Server Started @2059716ms
[junit4] 2> 2059754 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x340c1bef63fa51e2: Processing first
storage report for DS-d6d1a053-c9dc-42a1-9b6c-670b17564898 from datanode
DatanodeRegistration(127.0.0.1:44183,
datanodeUuid=92008d84-5a34-431d-a247-b0b57079f6ff, infoPort=38225,
infoSecurePort=0, ipcPort=35077,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049)
[junit4] 2> 2059754 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x340c1bef63fa51e2: from storage
DS-d6d1a053-c9dc-42a1-9b6c-670b17564898 node
DatanodeRegistration(127.0.0.1:44183,
datanodeUuid=92008d84-5a34-431d-a247-b0b57079f6ff, infoPort=38225,
infoSecurePort=0, ipcPort=35077,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049), blocks:
0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
[junit4] 2> 2059754 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x340c1bef63fa51e2: Processing first
storage report for DS-738c0029-a89b-436d-b3e9-6c9df883fe33 from datanode
DatanodeRegistration(127.0.0.1:44183,
datanodeUuid=92008d84-5a34-431d-a247-b0b57079f6ff, infoPort=38225,
infoSecurePort=0, ipcPort=35077,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049)
[junit4] 2> 2059754 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x340c1bef63fa51e2: from storage
DS-738c0029-a89b-436d-b3e9-6c9df883fe33 node
DatanodeRegistration(127.0.0.1:44183,
datanodeUuid=92008d84-5a34-431d-a247-b0b57079f6ff, infoPort=38225,
infoSecurePort=0, ipcPort=35077,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049), blocks:
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
[junit4] 2> 2059935 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x466540ee8e5480e7: Processing first
storage report for DS-33340cf5-ee24-42b2-bd5b-87851e48458b from datanode
DatanodeRegistration(127.0.0.1:43567,
datanodeUuid=27d58439-c6a6-4cf0-9625-1e45a1d711c2, infoPort=33249,
infoSecurePort=0, ipcPort=35287,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049)
[junit4] 2> 2059936 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x466540ee8e5480e7: from storage
DS-33340cf5-ee24-42b2-bd5b-87851e48458b node
DatanodeRegistration(127.0.0.1:43567,
datanodeUuid=27d58439-c6a6-4cf0-9625-1e45a1d711c2, infoPort=33249,
infoSecurePort=0, ipcPort=35287,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049), blocks:
0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
[junit4] 2> 2059936 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x466540ee8e5480e7: Processing first
storage report for DS-cc7673cf-fe14-419b-8f86-4694e3679410 from datanode
DatanodeRegistration(127.0.0.1:43567,
datanodeUuid=27d58439-c6a6-4cf0-9625-1e45a1d711c2, infoPort=33249,
infoSecurePort=0, ipcPort=35287,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049)
[junit4] 2> 2059936 INFO (Block report processor) [ ]
BlockStateChange BLOCK* processReport 0x466540ee8e5480e7: from storage
DS-cc7673cf-fe14-419b-8f86-4694e3679410 node
DatanodeRegistration(127.0.0.1:43567,
datanodeUuid=27d58439-c6a6-4cf0-9625-1e45a1d711c2, infoPort=33249,
infoSecurePort=0, ipcPort=35287,
storageInfo=lv=-57;cid=testClusterID;nsid=2092394424;c=1709823896049), blocks:
0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
[junit4] 2> 2059995 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.u.ErrorLogMuter Closing ErrorLogMuter-regex-20358 after mutting 0 log
messages
[junit4] 2> 2059995 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.u.ErrorLogMuter Creating ErrorLogMuter-regex-20359 for ERROR logs
matching regex: ignore_exception
[junit4] 2> 2059997 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4] 2> 2059997 INFO (ZkTestServer Run Thread) [ ]
o.a.s.c.ZkTestServer client port: 0.0.0.0/0.0.0.0:0
[junit4] 2> 2059997 INFO (ZkTestServer Run Thread) [ ]
o.a.s.c.ZkTestServer Starting server
[junit4] 2> 2059998 WARN (ZkTestServer Run Thread) [ ]
o.a.z.s.ServerCnxnFactory maxCnxns is not configured, using default value 0.
[junit4] 2> 2060097 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer start zk server on port: 33469
[junit4] 2> 2060097 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer waitForServerUp: 127.0.0.1:33469
[junit4] 2> 2060097 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:33469
[junit4] 2> 2060097 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer connecting to 127.0.0.1 33469
[junit4] 2> 2060099 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2060102 INFO (zkConnectionManagerCallback-23575-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2060102 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2060109 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2060110 INFO (zkConnectionManagerCallback-23577-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2060110 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2060111 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
to /configs/conf1/solrconfig.xml
[junit4] 2> 2060112 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/schema.xml
to /configs/conf1/schema.xml
[junit4] 2> 2060114 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
[junit4] 2> 2060115 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
to /configs/conf1/stopwords.txt
[junit4] 2> 2060116 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/protwords.txt
to /configs/conf1/protwords.txt
[junit4] 2> 2060117 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/currency.xml
to /configs/conf1/currency.xml
[junit4] 2> 2060118 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
to /configs/conf1/enumsConfig.xml
[junit4] 2> 2060119 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
to /configs/conf1/open-exchange-rates.json
[junit4] 2> 2060120 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
to /configs/conf1/mapping-ISOLatin1Accent.txt
[junit4] 2> 2060121 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/old_synonyms.txt
to /configs/conf1/old_synonyms.txt
[junit4] 2> 2060122 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkTestServer put
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/core/src/test-files/solr/collection1/conf/synonyms.txt
to /configs/conf1/synonyms.txt
[junit4] 2> 2060123 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.AbstractFullDistribZkTestBase Will use NRT replicas unless explicitly
asked otherwise
[junit4] 2> 2060306 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.s.e.JettySolrRunner Start Jetty (configured port=0, binding port=0)
[junit4] 2> 2060306 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 2 ...
[junit4] 2> 2060306 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ] o.e.j.s.Server
jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git:
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
[junit4] 2> 2060309 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ] o.e.j.s.session
DefaultSessionIdManager workerName=node0
[junit4] 2> 2060309 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ] o.e.j.s.session
No SessionScavenger set, using defaults
[junit4] 2> 2060309 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ] o.e.j.s.session
node0 Scavenging every 660000ms
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.e.j.s.h.ContextHandler Started
o.e.j.s.ServletContextHandler@4b88cb35{/mlmfo/vt,null,AVAILABLE}
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.e.j.s.AbstractConnector Started ServerConnector@6633ecf8{HTTP/1.1, (http/1.1,
h2c)}{127.0.0.1:34369}
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ] o.e.j.s.Server
Started @2060342ms
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.s.e.JettySolrRunner Jetty properties:
{solr.data.dir=hdfs://localhost.localdomain:38059/hdfs__localhost.localdomain_38059__home_jenkins_jenkins-slave_workspace_Lucene_Lucene-Solr-Tests-8.11_solr_build_solr-core_test_J2_temp_solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001_tempDir-002_control_data,
hostContext=/mlmfo/vt, hostPort=34369,
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/control-001/cores}
[junit4] 2> 2060311 ERROR
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be
missing or incomplete.
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.s.SolrDispatchFilter Using logger factory
org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version 8.11.4
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir:
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr
[junit4] 2> 2060311 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time:
2024-03-07T15:04:57.284Z
[junit4] 2> 2060313 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2060314 INFO (zkConnectionManagerCallback-23579-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2060314 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2060416 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in
ZooKeeper)
[junit4] 2> 2060416 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.SolrXmlConfig Loading container configuration from
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/control-001/solr.xml
[junit4] 2> 2060419 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay
is ignored
[junit4] 2> 2060419 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.SolrXmlConfig Configuration parameter
autoReplicaFailoverBadNodeExpiration is ignored
[junit4] 2> 2060421 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.SolrXmlConfig MBean server found:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef, but no JMX reporters were
configured - adding default JMX reporter.
[junit4] 2> 2061260 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized:
WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
[junit4] 2> 2061261 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.e.j.u.s.S.config Trusting all certificates configured for
Client@463505d3[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2061261 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for
Client@463505d3[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2061265 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.e.j.u.s.S.config Trusting all certificates configured for
Client@4da4f44f[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2061265 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for
Client@4da4f44f[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2061266 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:33469/solr
[junit4] 2> 2061268 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2061268 INFO (zkConnectionManagerCallback-23590-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2061268 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2061269 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]-SendThread(127.0.0.1:33469))
[ ] o.a.z.ClientCnxn An exception was thrown while closing send thread for
session 0x100ec7018e90003.
[junit4] 2> => EndOfStreamException: Unable to read additional
data from server sessionid 0x100ec7018e90003, likely server has closed socket
[junit4] 2> at
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
[junit4] 2> org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable
to read additional data from server sessionid 0x100ec7018e90003, likely server
has closed socket
[junit4] 2> at
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
~[zookeeper-3.6.2.jar:3.6.2]
[junit4] 2> at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
~[zookeeper-3.6.2.jar:3.6.2]
[junit4] 2> at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275)
[zookeeper-3.6.2.jar:3.6.2]
[junit4] 2> 2061371 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.c.ConnectionManager Waiting for
client to connect to ZooKeeper
[junit4] 2> 2061372 INFO (zkConnectionManagerCallback-23592-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2061372 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.c.ConnectionManager Client is
connected to ZooKeeper
[junit4] 2> 2061461 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.ZkController Contents of zookeeper
/security.json are world-readable; consider setting up ACLs as described in
https://solr.apache.org/guide/zookeeper-access-control.html
[junit4] 2> 2061466 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.OverseerElectionContext I am going
to be the leader 127.0.0.1:34369_mlmfo%2Fvt
[junit4] 2> 2061466 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.Overseer Overseer
(id=72317560236343300-127.0.0.1:34369_mlmfo%2Fvt-n_0000000000) starting
[junit4] 2> 2061473 INFO
(OverseerStateUpdate-72317560236343300-127.0.0.1:34369_mlmfo%2Fvt-n_0000000000)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.Overseer Starting to work on the
main queue : 127.0.0.1:34369_mlmfo%2Fvt
[junit4] 2> 2061473 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.ZkController Register node as live
in ZooKeeper:/live_nodes/127.0.0.1:34369_mlmfo%2Fvt
[junit4] 2> 2061474 INFO
(OverseerStateUpdate-72317560236343300-127.0.0.1:34369_mlmfo%2Fvt-n_0000000000)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.c.ZkStateReader Updated live nodes
from ZooKeeper... (0) -> (1)
[junit4] 2> 2061476 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.ZkController non-data nodes now []
[junit4] 2> 2061478 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.p.PackageLoader /packages.json
updated to version -1
[junit4] 2> 2061479 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.CoreContainer Not all security
plugins configured! authentication=disabled authorization=disabled. Solr is
only as secure as you make it. Consider configuring
authentication/authorization before exposing Solr to users internal or
external. See https://s.apache.org/solrsecurity for more info
[junit4] 2> 2061510 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.h.a.MetricsHistoryHandler No .system
collection, keeping metrics history in memory.
[junit4] 2> 2061533 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.m.r.SolrJmxReporter JMX monitoring
for 'solr.node' (registry 'solr.node') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2061545 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.m.r.SolrJmxReporter JMX monitoring
for 'solr.jvm' (registry 'solr.jvm') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2061545 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.m.r.SolrJmxReporter JMX monitoring
for 'solr.jetty' (registry 'solr.jetty') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2061547 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C])
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.CorePropertiesLocator Found 0 core
definitions underneath
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/control-001/cores
[junit4] 2> 2061571 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2061572 INFO (zkConnectionManagerCallback-23609-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2061572 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2061574 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 2061575 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:33469/solr ready
[junit4] 2> 2061579 INFO (qtp959887688-40205)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:34369/mlmfo/vt/admin/collections?action=CREATE&name=control_collection&collection.configName=conf1&createNodeSet=127.0.0.1%3A34369_mlmfo%252Fvt&numShards=1&nrtReplicas=1&wt=javabin&version=2)
[junit4] 2> 2061582 INFO
(OverseerThreadFactory-23599-thread-1-processing-n:127.0.0.1:34369_mlmfo%2Fvt)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.a.c.CreateCollectionCmd Create
collection control_collection
[junit4] 2> 2061695 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:34369/mlmfo/vt/admin/cores?null)
[junit4] 2> 2061696 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt x:control_collection_shard1_replica_n1 ]
o.a.s.h.a.CoreAdminOperation core create command
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 2061696 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt x:control_collection_shard1_replica_n1 ]
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient core cache for max 4
cores with initial capacity of 4
[junit4] 2> 2062727 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SolrConfig Using Lucene
MatchVersion: 8.11.4
[junit4] 2> 2062727 WARN (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SolrConfig solrconfig.xml:
<jmx> is no longer supported, use solr.xml:/metrics/reporter section instead
[junit4] 2> 2062731 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.IndexSchema Schema name=test
[junit4] 2> 2062741 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.IndexSchema Loaded schema
test/1.0 with uniqueid field id
[junit4] 2> 2062761 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.CoreContainer Creating
SolrCore 'control_collection_shard1_replica_n1' using configuration from
configset conf1, trusted=true
[junit4] 2> 2062762 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.m.r.SolrJmxReporter JMX
monitoring for 'solr.core.control_collection.shard1.replica_n1' (registry
'solr.core.control_collection.shard1.replica_n1') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2062762 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory
solr.hdfs.home=hdfs://localhost.localdomain:38059/solr_hdfs_home
[junit4] 2> 2062762 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Solr
Kerberos Authentication disabled
[junit4] 2> 2062763 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SolrCore
[[control_collection_shard1_replica_n1] ] Opening new SolrCore at
[/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/control-001/cores/control_collection_shard1_replica_n1],
dataDir=[hdfs://localhost.localdomain:38059/solr_hdfs_home/control_collection/core_node2/data/]
[junit4] 2> 2062764 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/control_collection/core_node2/data/snapshot_metadata
[junit4] 2> 2062774 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Number of
slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 2062775 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Block
cache target memory usage, slab size of [33554432] will allocate [1] slabs and
use ~[33554432] bytes
[junit4] 2> 2062775 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Creating
new global HDFS BlockCache
[junit4] 2> 2062801 WARN (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support
in Solr has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2062801 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.b.BlockDirectory Block cache
on write is disabled
[junit4] 2> 2062802 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/control_collection/core_node2/data
[junit4] 2> 2062814 WARN (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support
in Solr has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2062824 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/control_collection/core_node2/data/index
[junit4] 2> 2062831 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Number of
slabs of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 2062831 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Block
cache target memory usage, slab size of [33554432] will allocate [1] slabs and
use ~[33554432] bytes
[junit4] 2> 2062835 WARN (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support
in Solr has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2062835 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.s.b.BlockDirectory Block cache
on write is disabled
[junit4] 2> 2062835 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.RandomMergePolicy
RandomMergePolicy wrapping class
org.apache.lucene.index.LogByteSizeMergePolicy: [LogByteSizeMergePolicy:
minMergeSize=1677721, mergeFactor=21, maxMergeSize=2147483648,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.6506841493502946]
[junit4] 2> 2062895 WARN (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.RequestHandlers INVALID
paramSet a in requestHandler {type = requestHandler,name = /dump,class =
DumpRequestHandler,attributes = {initParams=a, name=/dump,
class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 2062964 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.UpdateHandler Using UpdateLog
implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 2062964 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.UpdateLog Initializing
UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100
maxNumLogsToKeep=10 numVersionBuckets=65536
[junit4] 2> 2062964 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.HdfsUpdateLog Initializing
HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 2062980 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.CommitTracker Hard AutoCommit:
disabled
[junit4] 2> 2062980 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.CommitTracker Soft AutoCommit:
disabled
[junit4] 2> 2062982 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.RandomMergePolicy
RandomMergePolicy wrapping class org.apache.lucene.index.LogDocMergePolicy:
[LogDocMergePolicy: minMergeSize=1000, mergeFactor=13,
maxMergeSize=9223372036854775807,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.0]
[junit4] 2> 2062993 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage
Configured ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 2062993 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Loaded
null at path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 2062997 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.h.ReplicationHandler Commits
will be reserved for 10000 ms
[junit4] 2> 2062998 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.u.UpdateLog Could not find max
version in index or recent updates, using new clock 1792880305735991296
[junit4] 2> 2063002 INFO
(searcherExecutor-23611-thread-1-processing-n:127.0.0.1:34369_mlmfo%2Fvt
x:control_collection_shard1_replica_n1 c:control_collection s:shard1)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SolrCore
[control_collection_shard1_replica_n1] Registered new searcher autowarm time:
0 ms
[junit4] 2> 2063005 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ZkShardTerms Successful update
of terms at /collections/control_collection/terms/shard1 to
Terms{values={core_node2=0}, version=0}
[junit4] 2> 2063005 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContextBase
make sure parent is created /collections/control_collection/leaders/shard1
[junit4] 2> 2063008 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContext
Enough replicas found to continue.
[junit4] 2> 2063008 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContext I
may be the new leader - try and sync
[junit4] 2> 2063008 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SyncStrategy Sync replicas to
http://127.0.0.1:34369/mlmfo/vt/control_collection_shard1_replica_n1/
[junit4] 2> 2063009 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SyncStrategy Sync Success -
now sync replicas to me
[junit4] 2> 2063009 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.SyncStrategy
http://127.0.0.1:34369/mlmfo/vt/control_collection_shard1_replica_n1/ has no
replicas
[junit4] 2> 2063009 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContextBase
Creating leader registration node
/collections/control_collection/leaders/shard1/leader after winning as
/collections/control_collection/leader_elect/shard1/election/72317560236343300-core_node2-n_0000000000
[junit4] 2> 2063011 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContext I
am the new leader:
http://127.0.0.1:34369/mlmfo/vt/control_collection_shard1_replica_n1/ shard1
[junit4] 2> 2063113 INFO (zkCallback-23591-thread-1) [ ]
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent
state:SyncConnected type:NodeDataChanged
path:/collections/control_collection/state.json] for collection
[control_collection] has occurred - updating... (live nodes size: [1])
[junit4] 2> 2063115 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt c:control_collection s:shard1
x:control_collection_shard1_replica_n1 ] o.a.s.c.ZkController I am the leader,
no recovery necessary
[junit4] 2> 2063118 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall [admin] webapp=null
path=/admin/cores
params={qt=/admin/cores&collection.configName=conf1&newCollection=true&name=control_collection_shard1_replica_n1&action=CREATE&numShards=1&collection=control_collection&shard=shard1&wt=javabin&version=2&replicaType=NRT}
status=0 QTime=1422
[junit4] 2> 2063120 INFO (qtp959887688-40205)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.h.a.CollectionsHandler Wait for new
collection to be active for at most 45 seconds. Check all shard replicas
[junit4] 2> 2063218 INFO (zkCallback-23591-thread-1) [ ]
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent
state:SyncConnected type:NodeDataChanged
path:/collections/control_collection/state.json] for collection
[control_collection] has occurred - updating... (live nodes size: [1])
[junit4] 2> 2063218 INFO (zkCallback-23591-thread-2) [ ]
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent
state:SyncConnected type:NodeDataChanged
path:/collections/control_collection/state.json] for collection
[control_collection] has occurred - updating... (live nodes size: [1])
[junit4] 2> 2063219 INFO (qtp959887688-40205)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall [admin] webapp=null
path=/admin/collections
params={collection.configName=conf1&name=control_collection&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=127.0.0.1:34369_mlmfo%252Fvt&wt=javabin&version=2}
status=0 QTime=1640
[junit4] 2> 2063219 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.AbstractFullDistribZkTestBase Waiting to see 1 active replicas in
collection: control_collection
[junit4] 2> 2063221 WARN
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]-SendThread(127.0.0.1:33469))
[ ] o.a.z.ClientCnxn An exception was thrown while closing send thread for
session 0x100ec7018e90005.
[junit4] 2> => EndOfStreamException: Unable to read additional
data from server sessionid 0x100ec7018e90005, likely server has closed socket
[junit4] 2> at
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
[junit4] 2> org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable
to read additional data from server sessionid 0x100ec7018e90005, likely server
has closed socket
[junit4] 2> at
org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
~[zookeeper-3.6.2.jar:3.6.2]
[junit4] 2> at
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
~[zookeeper-3.6.2.jar:3.6.2]
[junit4] 2> at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275)
[zookeeper-3.6.2.jar:3.6.2]
[junit4] 2> 2063324 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2063325 INFO (zkConnectionManagerCallback-23620-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2063325 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2063327 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (0) -> (1)
[junit4] 2> 2063328 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.s.i.ZkClientClusterStateProvider Cluster at 127.0.0.1:33469/solr ready
[junit4] 2> 2063328 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.ChaosMonkey monkey: init - expire sessions:false cause connection
loss:false
[junit4] 2> 2063328 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:34369/mlmfo/vt/admin/collections?action=CREATE&name=collection1&collection.configName=conf1&createNodeSet=&numShards=1&nrtReplicas=1&stateFormat=1&wt=javabin&version=2)
[junit4] 2> 2063332 INFO
(OverseerThreadFactory-23599-thread-2-processing-n:127.0.0.1:34369_mlmfo%2Fvt)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.a.c.CreateCollectionCmd Create
collection collection1
[junit4] 2> 2063332 INFO
(OverseerCollectionConfigSetProcessor-72317560236343300-127.0.0.1:34369_mlmfo%2Fvt-n_0000000000)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.OverseerTaskQueue Response ZK
path: /overseer/collection-queue-work/qnr-0000000000 doesn't exist. Requestor
may have disconnected from ZooKeeper
[junit4] 2> 2063535 WARN
(OverseerThreadFactory-23599-thread-2-processing-n:127.0.0.1:34369_mlmfo%2Fvt)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.a.c.CreateCollectionCmd It is
unusual to create a collection (collection1) without cores.
[junit4] 2> 2063537 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.h.a.CollectionsHandler Wait for new
collection to be active for at most 45 seconds. Check all shard replicas
[junit4] 2> 2063537 INFO (qtp959887688-40207)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall [admin] webapp=null
path=/admin/collections
params={collection.configName=conf1&name=collection1&nrtReplicas=1&action=CREATE&numShards=1&createNodeSet=&stateFormat=1&wt=javabin&version=2}
status=0 QTime=208
[junit4] 2> 2063538 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.SolrCloudTestCase active slice count: 1 expected: 1
[junit4] 2> 2063538 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.SolrCloudTestCase active replica count: 0 expected replica count: 0
[junit4] 2> 2063538 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.AbstractFullDistribZkTestBase Creating jetty instances
pullReplicaCount=0 numOtherReplicas=1
[junit4] 2> 2063698 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.AbstractFullDistribZkTestBase create jetty 1 in directory
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/shard-1-001
of type NRT for shard1
[junit4] 2> 2063699 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.s.e.JettySolrRunner Start Jetty (configured port=0, binding port=0)
[junit4] 2> 2063699 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 2 ...
[junit4] 2> 2063699 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.Server jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git:
27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 1.8.0_362-b09
[junit4] 2> 2063700 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.session DefaultSessionIdManager workerName=node0
[junit4] 2> 2063700 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.session No SessionScavenger set, using defaults
[junit4] 2> 2063700 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.session node0 Scavenging every 600000ms
[junit4] 2> 2063700 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.h.ContextHandler Started
o.e.j.s.ServletContextHandler@37b0567d{/mlmfo/vt,null,AVAILABLE}
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.AbstractConnector Started ServerConnector@2c8194f2{HTTP/1.1, (http/1.1,
h2c)}{127.0.0.1:41181}
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.e.j.s.Server Started @2063732ms
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.s.e.JettySolrRunner Jetty properties:
{solr.data.dir=hdfs://localhost.localdomain:38059/hdfs__localhost.localdomain_38059__home_jenkins_jenkins-slave_workspace_Lucene_Lucene-Solr-Tests-8.11_solr_build_solr-core_test_J2_temp_solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001_tempDir-002_jetty1,
solrconfig=solrconfig.xml, hostContext=/mlmfo/vt, hostPort=41181,
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/shard-1-001/cores}
[junit4] 2> 2063701 ERROR (closeThreadPool-23621-thread-1) [ ]
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be
missing or incomplete.
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.s.SolrDispatchFilter Using logger factory
org.apache.logging.slf4j.Log4jLoggerFactory
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.s.SolrDispatchFilter ___ _ Welcome to Apache Solr™ version
8.11.4
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.s.SolrDispatchFilter / __| ___| |_ _ Starting in cloud mode on port null
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_| Install dir:
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.s.SolrDispatchFilter |___/\___/_|_| Start time:
2024-03-07T15:05:00.674Z
[junit4] 2> 2063701 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2063706 INFO (zkConnectionManagerCallback-23623-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2063707 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2063808 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.s.SolrDispatchFilter Loading solr.xml from SolrHome (not found in
ZooKeeper)
[junit4] 2> 2063808 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.SolrXmlConfig Loading container configuration from
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/../../../../../../../../../../../home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/shard-1-001/solr.xml
[junit4] 2> 2063811 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay
is ignored
[junit4] 2> 2063811 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.SolrXmlConfig Configuration parameter
autoReplicaFailoverBadNodeExpiration is ignored
[junit4] 2> 2063813 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.SolrXmlConfig MBean server found:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef, but no JMX reporters were
configured - adding default JMX reporter.
[junit4] 2> 2064700 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.h.c.HttpShardHandlerFactory Host whitelist initialized:
WhitelistHostChecker [whitelistHosts=null, whitelistHostCheckingEnabled=false]
[junit4] 2> 2064701 WARN (closeThreadPool-23621-thread-1) [ ]
o.e.j.u.s.S.config Trusting all certificates configured for
Client@577f26f5[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2064701 WARN (closeThreadPool-23621-thread-1) [ ]
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for
Client@577f26f5[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2064704 WARN (closeThreadPool-23621-thread-1) [ ]
o.e.j.u.s.S.config Trusting all certificates configured for
Client@353fd25[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2064704 WARN (closeThreadPool-23621-thread-1) [ ]
o.e.j.u.s.S.config No Client EndPointIdentificationAlgorithm configured for
Client@353fd25[provider=null,keyStore=null,trustStore=null]
[junit4] 2> 2064705 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:33469/solr
[junit4] 2> 2064706 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4] 2> 2064707 INFO (zkConnectionManagerCallback-23634-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2064707 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4] 2> 2064809 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.c.ConnectionManager Waiting for
client to connect to ZooKeeper
[junit4] 2> 2064810 INFO (zkConnectionManagerCallback-23636-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
[junit4] 2> 2064810 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.c.ConnectionManager Client is
connected to ZooKeeper
[junit4] 2> 2064817 WARN (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.ZkController Contents of zookeeper
/security.json are world-readable; consider setting up ACLs as described in
https://solr.apache.org/guide/zookeeper-access-control.html
[junit4] 2> 2064818 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.c.ZkStateReader Updated live nodes
from ZooKeeper... (0) -> (1)
[junit4] 2> 2064823 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.ZkController Publish
node=127.0.0.1:41181_mlmfo%2Fvt as DOWN
[junit4] 2> 2064823 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.TransientSolrCoreCacheDefault
Allocating transient core cache for max 4 cores with initial capacity of 4
[junit4] 2> 2064823 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.ZkController Register node as live
in ZooKeeper:/live_nodes/127.0.0.1:41181_mlmfo%2Fvt
[junit4] 2> 2064824 INFO (zkCallback-23619-thread-1) [ ]
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 2064824 INFO (zkCallback-23591-thread-1) [ ]
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 2064824 INFO (zkCallback-23635-thread-1) [ ]
o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
[junit4] 2> 2064825 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.ZkController non-data nodes now []
[junit4] 2> 2064827 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.p.PackageLoader /packages.json
updated to version -1
[junit4] 2> 2064827 WARN (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.CoreContainer Not all security
plugins configured! authentication=disabled authorization=disabled. Solr is
only as secure as you make it. Consider configuring
authentication/authorization before exposing Solr to users internal or
external. See https://s.apache.org/solrsecurity for more info
[junit4] 2> 2064856 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.h.a.MetricsHistoryHandler No .system
collection, keeping metrics history in memory.
[junit4] 2> 2064880 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.m.r.SolrJmxReporter JMX monitoring
for 'solr.node' (registry 'solr.node') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2064891 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.m.r.SolrJmxReporter JMX monitoring
for 'solr.jvm' (registry 'solr.jvm') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2064891 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.m.r.SolrJmxReporter JMX monitoring
for 'solr.jetty' (registry 'solr.jetty') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2064892 INFO (closeThreadPool-23621-thread-1)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.c.CorePropertiesLocator Found 0 core
definitions underneath
/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/shard-1-001/cores
[junit4] 2> 2064912 INFO (closeThreadPool-23621-thread-1) [ ]
o.a.s.c.AbstractFullDistribZkTestBase waitForLiveNode:
127.0.0.1:41181_mlmfo%2Fvt
[junit4] 2> 2064915 INFO (qtp959887688-40205)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:34369/mlmfo/vt/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=127.0.0.1%3A41181_mlmfo%252Fvt&type=NRT&wt=javabin&version=2)
[junit4] 2> 2064918 INFO
(OverseerCollectionConfigSetProcessor-72317560236343300-127.0.0.1:34369_mlmfo%2Fvt-n_0000000000)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.OverseerTaskQueue Response ZK
path: /overseer/collection-queue-work/qnr-0000000002 doesn't exist. Requestor
may have disconnected from ZooKeeper
[junit4] 2> 2064918 INFO
(OverseerThreadFactory-23599-thread-3-processing-n:127.0.0.1:34369_mlmfo%2Fvt)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection1 s:shard1 ]
o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:41181_mlmfo%2Fvt for
creating new replica of shard shard1 for collection collection1
[junit4] 2> 2064920 INFO
(OverseerThreadFactory-23599-thread-3-processing-n:127.0.0.1:34369_mlmfo%2Fvt)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection1 s:shard1 ]
o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command.
[junit4] 2> 2064928 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:41181/mlmfo/vt/admin/cores?null)
[junit4] 2> 2064928 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt x:collection1_shard1_replica_n1 ]
o.a.s.h.a.CoreAdminOperation core create command
qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 2065940 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SolrConfig Using Lucene MatchVersion:
8.11.4
[junit4] 2> 2065941 WARN (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SolrConfig solrconfig.xml: <jmx> is
no longer supported, use solr.xml:/metrics/reporter section instead
[junit4] 2> 2065945 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.IndexSchema Schema name=test
[junit4] 2> 2065955 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.IndexSchema Loaded schema test/1.0
with uniqueid field id
[junit4] 2> 2065972 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.CoreContainer Creating SolrCore
'collection1_shard1_replica_n1' using configuration from configset conf1,
trusted=true
[junit4] 2> 2065973 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.m.r.SolrJmxReporter JMX monitoring for
'solr.core.collection1.shard1.replica_n1' (registry
'solr.core.collection1.shard1.replica_n1') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2065973 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory
solr.hdfs.home=hdfs://localhost.localdomain:38059/solr_hdfs_home
[junit4] 2> 2065973 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Solr Kerberos
Authentication disabled
[junit4] 2> 2065973 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SolrCore
[[collection1_shard1_replica_n1] ] Opening new SolrCore at
[/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/shard-1-001/cores/collection1_shard1_replica_n1],
dataDir=[hdfs://localhost.localdomain:38059/solr_hdfs_home/collection1/core_node2/data/]
[junit4] 2> 2065974 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/collection1/core_node2/data/snapshot_metadata
[junit4] 2> 2065985 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Number of slabs
of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 2065985 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Block cache
target memory usage, slab size of [33554432] will allocate [1] slabs and use
~[33554432] bytes
[junit4] 2> 2065989 WARN (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support in Solr
has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2065989 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.b.BlockDirectory Block cache on write
is disabled
[junit4] 2> 2065990 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/collection1/core_node2/data
[junit4] 2> 2066000 WARN (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support in Solr
has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2066012 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/collection1/core_node2/data/index
[junit4] 2> 2066019 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Number of slabs
of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 2066019 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Block cache
target memory usage, slab size of [33554432] will allocate [1] slabs and use
~[33554432] bytes
[junit4] 2> 2066022 WARN (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support in Solr
has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2066022 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.s.b.BlockDirectory Block cache on write
is disabled
[junit4] 2> 2066023 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.RandomMergePolicy RandomMergePolicy
wrapping class org.apache.lucene.index.LogByteSizeMergePolicy:
[LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=21,
maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807,
calibrateSizeByDeletes=false, maxMergeDocs=2147483647,
maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.6506841493502946]
[junit4] 2> 2066063 WARN (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.RequestHandlers INVALID paramSet a in
requestHandler {type = requestHandler,name = /dump,class =
DumpRequestHandler,attributes = {initParams=a, name=/dump,
class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 2066119 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.UpdateHandler Using UpdateLog
implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 2066119 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.UpdateLog Initializing UpdateLog:
dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10
numVersionBuckets=65536
[junit4] 2> 2066119 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.HdfsUpdateLog Initializing
HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 2066156 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.CommitTracker Hard AutoCommit:
disabled
[junit4] 2> 2066156 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.CommitTracker Soft AutoCommit:
disabled
[junit4] 2> 2066158 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.RandomMergePolicy RandomMergePolicy
wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy:
minMergeSize=1000, mergeFactor=13, maxMergeSize=9223372036854775807,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.0]
[junit4] 2> 2066168 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Configured
ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 2066169 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Loaded null at
path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 2066169 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.h.ReplicationHandler Commits will be
reserved for 10000 ms
[junit4] 2> 2066169 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.u.UpdateLog Could not find max version
in index or recent updates, using new clock 1792880309061025792
[junit4] 2> 2066172 INFO
(searcherExecutor-23647-thread-1-processing-n:127.0.0.1:41181_mlmfo%2Fvt
x:collection1_shard1_replica_n1 c:collection1 s:shard1)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SolrCore
[collection1_shard1_replica_n1] Registered new searcher autowarm time: 0 ms
[junit4] 2> 2066176 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ZkShardTerms Successful update of
terms at /collections/collection1/terms/shard1 to Terms{values={core_node2=0},
version=0}
[junit4] 2> 2066176 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContextBase make
sure parent is created /collections/collection1/leaders/shard1
[junit4] 2> 2066179 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContext Enough
replicas found to continue.
[junit4] 2> 2066179 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContext I may be
the new leader - try and sync
[junit4] 2> 2066179 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SyncStrategy Sync replicas to
http://127.0.0.1:41181/mlmfo/vt/collection1_shard1_replica_n1/
[junit4] 2> 2066179 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SyncStrategy Sync Success - now sync
replicas to me
[junit4] 2> 2066179 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.SyncStrategy
http://127.0.0.1:41181/mlmfo/vt/collection1_shard1_replica_n1/ has no replicas
[junit4] 2> 2066179 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContextBase
Creating leader registration node
/collections/collection1/leaders/shard1/leader after winning as
/collections/collection1/leader_elect/shard1/election/72317560236343305-core_node2-n_0000000000
[junit4] 2> 2066180 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ShardLeaderElectionContext I am the
new leader: http://127.0.0.1:41181/mlmfo/vt/collection1_shard1_replica_n1/
shard1
[junit4] 2> 2066181 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.c.ZkStateReader
/collections/collection1/state.json is deleted, stop watching children
[junit4] 2> 2066283 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.ZkController I am the leader, no
recovery necessary
[junit4] 2> 2066284 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt c:collection1 s:shard1
x:collection1_shard1_replica_n1 ] o.a.s.c.c.ZkStateReader
/collections/collection1/state.json is deleted, stop watching children
[junit4] 2> 2066286 INFO (qtp683541821-40269)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall [admin] webapp=null
path=/admin/cores
params={qt=/admin/cores&collection.configName=conf1&name=collection1_shard1_replica_n1&action=CREATE&collection=collection1&shard=shard1&wt=javabin&version=2&replicaType=NRT}
status=0 QTime=1357
[junit4] 2> 2066287 INFO (qtp959887688-40205)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection1 ] o.a.s.s.HttpSolrCall [admin]
webapp=null path=/admin/collections
params={node=127.0.0.1:41181_mlmfo%252Fvt&action=ADDREPLICA&collection=collection1&shard=shard1&type=NRT&wt=javabin&version=2}
status=0 QTime=1372
[junit4] 2> 2066288 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.AbstractFullDistribZkTestBase Waiting to see 1 active replicas in
collection: collection1
[junit4] 2> 2066288 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.c.c.ZkStateReader /collections/collection1/state.json is deleted, stop
watching children
[junit4] 2> 2066387 INFO
(TEST-HdfsNNFailoverTest.test-seed#[559316B2CFD74F2C]) [ ]
o.a.s.SolrTestCaseJ4 ###Starting test
[junit4] 2> 2066389 INFO (qtp683541821-40271)
[n:127.0.0.1:41181_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:41181/mlmfo/vt/admin/collections?action=CREATE&numShards=1&replicationFactor=1&maxShardsPerNode=1&name=collection&collection.configName=conf1&wt=javabin&version=2)
[junit4] 2> 2066393 INFO
(OverseerThreadFactory-23599-thread-4-processing-n:127.0.0.1:34369_mlmfo%2Fvt)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.a.c.CreateCollectionCmd Create
collection collection
[junit4] 2> 2066393 INFO
(OverseerCollectionConfigSetProcessor-72317560236343300-127.0.0.1:34369_mlmfo%2Fvt-n_0000000000)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.c.OverseerTaskQueue Response ZK
path: /overseer/collection-queue-work/qnr-0000000004 doesn't exist. Requestor
may have disconnected from ZooKeeper
[junit4] 2> 2066602 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt ] o.a.s.s.HttpSolrCall
HttpSolrCall.init(http://127.0.0.1:34369/mlmfo/vt/admin/cores?null)
[junit4] 2> 2066602 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt x:collection_shard1_replica_n1 ]
o.a.s.h.a.CoreAdminOperation core create command
qt=/admin/cores&collection.configName=conf1&newCollection=true&name=collection_shard1_replica_n1&action=CREATE&numShards=1&collection=collection&shard=shard1&wt=javabin&version=2&replicaType=NRT
[junit4] 2> 2067613 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.SolrConfig Using Lucene MatchVersion:
8.11.4
[junit4] 2> 2067613 WARN (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.SolrConfig solrconfig.xml: <jmx> is no
longer supported, use solr.xml:/metrics/reporter section instead
[junit4] 2> 2067614 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.IndexSchema Schema name=test
[junit4] 2> 2067625 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.IndexSchema Loaded schema test/1.0
with uniqueid field id
[junit4] 2> 2067643 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.CoreContainer Creating SolrCore
'collection_shard1_replica_n1' using configuration from configset conf1,
trusted=true
[junit4] 2> 2067644 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.m.r.SolrJmxReporter JMX monitoring for
'solr.core.collection.shard1.replica_n1' (registry
'solr.core.collection.shard1.replica_n1') enabled at server:
com.sun.jmx.mbeanserver.JmxMBeanServer@b2350ef
[junit4] 2> 2067644 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory
solr.hdfs.home=hdfs://localhost.localdomain:38059/solr_hdfs_home
[junit4] 2> 2067644 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Solr Kerberos
Authentication disabled
[junit4] 2> 2067644 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.SolrCore
[[collection_shard1_replica_n1] ] Opening new SolrCore at
[/home/jenkins/jenkins-slave/workspace/Lucene/Lucene-Solr-Tests-8.11/solr/build/solr-core/test/J2/temp/solr.cloud.hdfs.HdfsNNFailoverTest_559316B2CFD74F2C-001/control-001/cores/collection_shard1_replica_n1],
dataDir=[hdfs://localhost.localdomain:38059/solr_hdfs_home/collection/core_node2/data/]
[junit4] 2> 2067645 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/collection/core_node2/data/snapshot_metadata
[junit4] 2> 2067661 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Number of slabs
of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 2067661 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Block cache
target memory usage, slab size of [33554432] will allocate [1] slabs and use
~[33554432] bytes
[junit4] 2> 2067666 WARN (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support in Solr
has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2067666 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.b.BlockDirectory Block cache on write
is disabled
[junit4] 2> 2067668 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/collection/core_node2/data
[junit4] 2> 2067678 WARN (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support in Solr
has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2067687 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory creating
directory factory for path
hdfs://localhost.localdomain:38059/solr_hdfs_home/collection/core_node2/data/index
[junit4] 2> 2067694 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Number of slabs
of block cache [1] with direct memory allocation set to [true]
[junit4] 2> 2067694 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.HdfsDirectoryFactory Block cache
target memory usage, slab size of [33554432] will allocate [1] slabs and use
~[33554432] bytes
[junit4] 2> 2067698 WARN (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.h.HdfsDirectory HDFS support in Solr
has been deprecated as of 8.6. See SOLR-14021 for details.
[junit4] 2> 2067699 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.s.b.BlockDirectory Block cache on write
is disabled
[junit4] 2> 2067699 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.RandomMergePolicy RandomMergePolicy
wrapping class org.apache.lucene.index.LogByteSizeMergePolicy:
[LogByteSizeMergePolicy: minMergeSize=1677721, mergeFactor=21,
maxMergeSize=2147483648, maxMergeSizeForForcedMerge=9223372036854775807,
calibrateSizeByDeletes=false, maxMergeDocs=2147483647,
maxCFSSegmentSizeMB=8.796093022207999E12, noCFSRatio=0.6506841493502946]
[junit4] 2> 2067727 WARN (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.c.RequestHandlers INVALID paramSet a in
requestHandler {type = requestHandler,name = /dump,class =
DumpRequestHandler,attributes = {initParams=a, name=/dump,
class=DumpRequestHandler},args = {defaults={a=A,b=B}}}
[junit4] 2> 2067783 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.UpdateHandler Using UpdateLog
implementation: org.apache.solr.update.HdfsUpdateLog
[junit4] 2> 2067783 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.UpdateLog Initializing UpdateLog:
dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 maxNumLogsToKeep=10
numVersionBuckets=65536
[junit4] 2> 2067783 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.HdfsUpdateLog Initializing
HdfsUpdateLog: tlogDfsReplication=2
[junit4] 2> 2067796 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.CommitTracker Hard AutoCommit: disabled
[junit4] 2> 2067796 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.CommitTracker Soft AutoCommit: disabled
[junit4] 2> 2067798 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.u.RandomMergePolicy RandomMergePolicy
wrapping class org.apache.lucene.index.LogDocMergePolicy: [LogDocMergePolicy:
minMergeSize=1000, mergeFactor=13, maxMergeSize=9223372036854775807,
maxMergeSizeForForcedMerge=9223372036854775807, calibrateSizeByDeletes=false,
maxMergeDocs=2147483647, maxCFSSegmentSizeMB=8.796093022207999E12,
noCFSRatio=0.0]
[junit4] 2> 2067813 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Configured
ZooKeeperStorageIO with znodeBase: /configs/conf1
[junit4] 2> 2067813 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.r.ManagedResourceStorage Loaded null at
path _rest_managed.json using ZooKeeperStorageIO:path=/configs/conf1
[junit4] 2> 2067813 INFO (qtp959887688-40208)
[n:127.0.0.1:34369_mlmfo%2Fvt c:collection s:shard1
x:collection_shard1_replica_n1 ] o.a.s.h.ReplicationHandler Commit
[...truncated too long message...]
ERROR (Command processor) [ ] o.a.h.h.s.d.DataNode Command processor
encountered interrupt and exit.
[junit4] 2> 2156283 WARN (BP-711263047-127.0.0.1-1709823988826
heartbeating to localhost.localdomain/127.0.0.1:38317) [ ]
o.a.h.h.s.d.DataNode Ending block pool service for: Block pool
BP-711263047-127.0.0.1-1709823988826 (Datanode Uuid
e7a3991e-0ec8-4bd8-977d-66ed98807164) service to
localhost.localdomain/127.0.0.1:38317
[junit4] 2> 2156283 WARN (Command processor) [ ] o.a.h.h.s.d.DataNode
Ending command processor service for: Thread[Command
processor,5,TGRP-HdfsBackupRepositoryIntegrationTest]
[junit4] 2> 2156293 INFO (Listener at localhost.localdomain/34765) [
] o.e.j.s.h.ContextHandler Stopped
o.e.j.w.WebAppContext@26dcdf28{hdfs,/,null,STOPPED}{jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/hdfs}
[junit4] 2> 2156293 INFO (Listener at localhost.localdomain/34765) [
] o.e.j.s.AbstractConnector Stopped ServerConnector@10905211{HTTP/1.1,
(http/1.1)}{localhost.localdomain:0}
[junit4] 2> 2156293 INFO (Listener at localhost.localdomain/34765) [
] o.e.j.s.session node0 Stopped scavenging
[junit4] 2> 2156293 INFO (Listener at localhost.localdomain/34765) [
] o.e.j.s.h.ContextHandler Stopped
o.e.j.s.ServletContextHandler@4ca1ac40{static,/static,jar:file:/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-3.2.4-tests.jar!/webapps/static,STOPPED}
[junit4] 2> 2156312 INFO (Listener at localhost.localdomain/34765) [
] o.a.s.u.ErrorLogMuter Closing ErrorLogMuter-regex-353 after mutting 0 log
messages
[junit4] 2> 2156312 INFO (Listener at localhost.localdomain/34765) [
] o.a.s.u.ErrorLogMuter Creating ErrorLogMuter-regex-354 for ERROR logs
matching regex: ignore_exception
[junit4] 2> 2156313 INFO (Listener at localhost.localdomain/34765) [
] o.a.s.SolrTestCaseJ4 -------------------------------------------------------
Done waiting for tracked resources to be released
[junit4] 2> Mar 07, 2024 3:06:33 PM
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
[junit4] 2> WARNING: Will linger awaiting termination of 33 leaked
thread(s).
[junit4] 2> Mar 07, 2024 3:06:43 PM
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
[junit4] 2> SEVERE: 1 thread leaked from SUITE scope at
org.apache.solr.core.backup.repository.HdfsBackupRepositoryIntegrationTest:
[junit4] 2> 1) Thread[id=29470, name=Command processor, state=WAITING,
group=TGRP-HdfsBackupRepositoryIntegrationTest]
[junit4] 2> at sun.misc.Unsafe.park(Native Method)
[junit4] 2> at
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
[junit4] 2> at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
[junit4] 2> at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
[junit4] 2> at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
[junit4] 2> at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
[junit4] 2> Mar 07, 2024 3:06:43 PM
com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
[junit4] 2> INFO: Starting to interrupt leaked threads:
[junit4] 2> 1) Thread[id=29470, name=Command processor, state=WAITING,
group=TGRP-HdfsBackupRepositoryIntegrationTest]
[junit4] 2> 2166371 ERROR (Command processor) [ ] o.a.h.h.s.d.DataNode
Command processor encountered interrupt and exit.
[junit4] 2> 2166371 WARN (Command processor) [ ] o.a.h.h.s.d.DataNode
Ending command processor service for: Thread[Command
processor,5,TGRP-HdfsBackupRepositoryIntegrationTest]
[junit4] 2> Mar 07, 2024 3:06:43 PM
com.carrotsearch.randomizedtesting.ThreadLeakControl tryToInterruptAll
[junit4] 2> INFO: All leaked threads terminated.
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene87),
sim=Asserting(RandomSimilarity(queryNorm=true): {}), locale=et-EE,
timezone=Europe/Rome
[junit4] 2> NOTE: Linux 4.15.0-213-generic amd64/Temurin 1.8.0_362
(64-bit)/cpus=4,threads=3,free=209558056,total=516423680
[junit4] 2> NOTE: All tests run in this JVM: [CollectionPropsTest,
CheckHdfsIndexTest, RecoveryAfterSoftCommitTest,
ClassificationUpdateProcessorFactoryTest, CoreAdminRequestStatusTest,
CdcrOpsAndBoundariesTest, TestBulkSchemaConcurrent, TestQuerySenderNoQuery,
SpellCheckCollatorTest, NodeLostTriggerTest, TestTlogReplayVsRecovery,
DateMathParserTest, TestManagedSchemaThreadSafety, TokenizerChainTest,
SolrXmlInZkTest, TestSimGenericDistributedQueue,
TestSubQueryTransformerDistrib, TestTermsQParserPlugin, RankFieldTest,
TestNumericRangeQuery32, BadComponentTest, TestMinMaxOnMultiValuedField,
TestSizeLimitedDistributedMap, TestFieldCacheSortRandom,
TestSerializedLuceneMatchVersion, BufferStoreTest, OverseerTaskQueueTest,
SplitHandlerTest, ShardBackupIdTest, MBeansHandlerTest,
TestIBSimilarityFactory, TestSolrCoreSnapshots, DeleteStatusTest,
TestSolrConfigHandler, DistributedQueueTest, NoCacheHeaderTest,
ChaosMonkeySafeLeaderTest, TestConfigOverlay, TestFaceting,
DeleteLastCustomShardedReplicaTest, TestJavabinTupleStreamParser,
LegacyCloudClusterPropTest, AddSchemaFieldsUpdateProcessorFactoryTest,
DirectUpdateHandlerTest, TestHighFrequencyDictionaryFactory,
TestBlendedInfixSuggestions, JWTAuthPluginIntegrationTest,
TestSolrCloudWithSecureImpersonation, CdcrWithNodesRestartsTest,
TermVectorComponentTest, TestUseDocValuesAsStored, MetricTriggerTest,
TestMaxTokenLenTokenizer, TestUtilizeNode, TestCloudJSONFacetJoinDomain,
MinimalSchemaTest, SparseHLLTest, TestQueryingOnDownCollection,
TestJsonRangeFacets, TestCustomSort, AuthWithShardHandlerFactoryOverrideTest,
ImplicitSnitchTest, TestClusterStateMutator, SaslZkACLProviderTest,
AnalysisErrorHandlingTest, TestEmbeddedSolrServerSchemaAPI,
TestRandomCollapseQParserPlugin, TestHttpShardHandlerFactory,
SolrMetricManagerTest, TestRecovery, IndexSizeTriggerMixedBoundsTest,
TestFoldingMultitermQuery, HdfsChaosMonkeyNothingIsSafeTest,
SignificantTermsQParserPluginTest, ComputePlanActionTest,
TestMergePolicyConfig, BigEndianAscendingWordDeserializerTest,
SystemLogListenerTest, TestAuthorizationFramework, RegexBytesRefFilterTest,
TestSnapshotCloudManager, DistributedQueryElevationComponentTest,
V2StandaloneTest, MultiAuthPluginTest, TestCSVResponseWriter,
CrossCollectionJoinQueryTest, DateRangeFieldTest, TestRequestForwarding,
TestZkAclsWithHadoopAuth, TestMacros, TestInPlaceUpdatesStandalone,
SolrJmxReporterCloudTest, TestPrepRecovery, SecurityConfHandlerTest,
FileUtilsTest, RollingRestartTest, MultiSolrCloudTestCaseTest, TestCSVLoader,
AutoscalingHistoryHandlerTest, ShardSplitTest, SolrTestCaseJ4DeleteCoreTest,
CloudExitableDirectoryReaderTest, ConjunctionSolrSpellCheckerTest,
QueryParsingTest, FacetPivot2CollectionsTest, XMLAtomicUpdateMultivalueTest,
RootFieldTest, SharedFSAutoReplicaFailoverTest, PingRequestHandlerTest,
TestDynamicLoading, BJQParserTest, TestDocTermOrdsUninvertLimit,
BackupRestoreApiErrorConditionsTest, ZkShardTermsTest, TestSolrFieldCacheBean,
BasicDistributedZkTest, TestSearchPerf, TestCloudConsistency,
TestConfigSetsAPIShareSchema, TestDeleteCollectionOnDownNodes,
TestDistribDocBasedVersion, TestLeaderElectionWithEmptyReplica,
TestPullReplicaErrorHandling, TestSolrCloudWithDelegationTokens,
TestStressCloudBlindAtomicUpdates, TlogReplayBufferedWhileIndexingTest,
ZkCLITest, ConcurrentCreateCollectionTest,
ConcurrentDeleteAndCreateCollectionTest, CustomCollectionTest,
HdfsCollectionsAPIDistributedZkTest, LocalFSCloudIncrementalBackupTest,
TestHdfsCloudBackupRestore, AutoAddReplicasPlanActionTest,
IndexSizeTriggerTest, TestSimUtils, BaseCdcrDistributedZkTest,
CdcrBidirectionalTest, CdcrBootstrapTest, HdfsRecoveryZkTest,
HdfsTlogReplayBufferedWhileIndexingTest, HdfsUnloadDistributedZkTest,
HdfsWriteToMultipleCollectionsTest, StressHdfsTest, RuleEngineTest, RulesTest,
RAMDirectoryFactoryTest, RequestHandlersTest, ResourceLoaderTest, SOLR749Test,
SolrCoreCheckLockOnStartupTest, SolrCoreTest, TestCoreDiscovery,
TestInfoStreamLogging, TestJmxIntegration, TestReloadAndDeleteDocs,
TestSolrDeletionPolicy2, HdfsBackupRepositoryIntegrationTest]
[junit4] 2> NOTE: reproduce with: ant test
-Dtestcase=HdfsBackupRepositoryIntegrationTest -Dtests.seed=559316B2CFD74F2C
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=et-EE
-Dtests.timezone=Europe/Rome -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
[junit4] ERROR 0.00s J3 | HdfsBackupRepositoryIntegrationTest (suite) <<<
[junit4] > Throwable #1:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE
scope at
org.apache.solr.core.backup.repository.HdfsBackupRepositoryIntegrationTest:
[junit4] > 1) Thread[id=29470, name=Command processor, state=WAITING,
group=TGRP-HdfsBackupRepositoryIntegrationTest]
[junit4] > at sun.misc.Unsafe.park(Native Method)
[junit4] > at
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
[junit4] > at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
[junit4] > at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
[junit4] > at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.processQueue(BPServiceActor.java:1291)
[junit4] > at
org.apache.hadoop.hdfs.server.datanode.BPServiceActor$CommandProcessingThread.run(BPServiceActor.java:1275)
[junit4] > at
__randomizedtesting.SeedInfo.seed([559316B2CFD74F2C]:0)
[junit4] Completed [682/959 (2!)] on J3 in 14.59s, 10 tests, 1 error <<<
FAILURES!
[...truncated 55315 lines...]