[ https://issues.apache.org/jira/browse/HDFS-9590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070032#comment-15070032 ]
Xiao Chen commented on HDFS-9590:
---------------------------------
This is the full log for the run failing at (7, 1):
{noformat}
-------------------------------------------
Beginning test, failing at (7, 1)
-------------------------------------------
2015-12-20 18:51:46,213 INFO qjournal.MiniJournalCluster
(MiniJournalCluster.java:<init>(95)) - Starting MiniJournalCluster with 3
journal nodes
2015-12-20 18:51:46,215 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-12-20 18:51:46,216 INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1712)) - Starting Web-server for
journal at: http://localhost:0
2015-12-20 18:51:46,217 INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2015-12-20 18:51:46,217 INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.journal is not defined
2015-12-20 18:51:46,217 INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(759)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-20 18:51:46,218 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context journal
2015-12-20 18:51:46,218 INFO http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2015-12-20 18:51:46,218 INFO http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2015-12-20 18:51:46,219 INFO http.HttpServer2
(HttpServer2.java:openListeners(947)) - Jetty bound to port 46849
2015-12-20 18:51:46,219 INFO mortbay.log (Slf4jLog.java:info(67)) -
jetty-6.1.26.cloudera.4
2015-12-20 18:51:46,226 INFO mortbay.log (Slf4jLog.java:info(67)) - Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46849
2015-12-20 18:51:46,226 INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(56)) - Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-12-20 18:51:46,227 INFO ipc.Server (Server.java:run(616)) - Starting
Socket Reader #1 for port 36209
2015-12-20 18:51:46,229 INFO ipc.Server (Server.java:run(839)) - IPC Server
Responder: starting
2015-12-20 18:51:46,229 INFO ipc.Server (Server.java:run(686)) - IPC Server
listener on 36209: starting
2015-12-20 18:51:46,231 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-12-20 18:51:46,232 INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1712)) - Starting Web-server for
journal at: http://localhost:0
2015-12-20 18:51:46,232 INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2015-12-20 18:51:46,232 INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.journal is not defined
2015-12-20 18:51:46,233 INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(759)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-20 18:51:46,233 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context journal
2015-12-20 18:51:46,233 INFO http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2015-12-20 18:51:46,233 INFO http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2015-12-20 18:51:46,234 INFO http.HttpServer2
(HttpServer2.java:openListeners(947)) - Jetty bound to port 50567
2015-12-20 18:51:46,234 INFO mortbay.log (Slf4jLog.java:info(67)) -
jetty-6.1.26.cloudera.4
2015-12-20 18:51:46,241 INFO mortbay.log (Slf4jLog.java:info(67)) - Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:50567
2015-12-20 18:51:46,241 INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(56)) - Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-12-20 18:51:46,242 INFO ipc.Server (Server.java:run(616)) - Starting
Socket Reader #1 for port 36031
2015-12-20 18:51:46,243 INFO ipc.Server (Server.java:run(839)) - IPC Server
Responder: starting
2015-12-20 18:51:46,243 INFO ipc.Server (Server.java:run(686)) - IPC Server
listener on 36031: starting
2015-12-20 18:51:46,245 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-12-20 18:51:46,246 INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1712)) - Starting Web-server for
journal at: http://localhost:0
2015-12-20 18:51:46,246 INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2015-12-20 18:51:46,247 INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.journal is not defined
2015-12-20 18:51:46,247 INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(759)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-20 18:51:46,247 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context journal
2015-12-20 18:51:46,248 INFO http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2015-12-20 18:51:46,248 INFO http.HttpServer2
(HttpServer2.java:addFilter(744)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2015-12-20 18:51:46,248 INFO http.HttpServer2
(HttpServer2.java:openListeners(947)) - Jetty bound to port 43198
2015-12-20 18:51:46,248 INFO mortbay.log (Slf4jLog.java:info(67)) -
jetty-6.1.26.cloudera.4
2015-12-20 18:51:46,255 INFO mortbay.log (Slf4jLog.java:info(67)) - Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43198
2015-12-20 18:51:46,256 INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(56)) - Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-12-20 18:51:46,256 INFO ipc.Server (Server.java:run(616)) - Starting
Socket Reader #1 for port 44247
2015-12-20 18:51:46,258 INFO ipc.Server (Server.java:run(839)) - IPC Server
Responder: starting
2015-12-20 18:51:46,258 INFO ipc.Server (Server.java:run(686)) - IPC Server
listener on 44247: starting
2015-12-20 18:51:46,280 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/waitactive
2015-12-20 18:51:46,280 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/waitactive
2015-12-20 18:51:46,280 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/waitactive
2015-12-20 18:51:46,280 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/waitactive
does not exist
2015-12-20 18:51:46,280 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/waitactive
does not exist
2015-12-20 18:51:46,280 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/waitactive
does not exist
2015-12-20 18:51:46,596 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #1: [/127.0.0.1:36209]
format(test-journal, lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp)
2015-12-20 18:51:46,597 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #1: [/127.0.0.1:44247]
format(test-journal, lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp)
2015-12-20 18:51:46,597 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #1: [/127.0.0.1:36031]
format(test-journal, lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp)
2015-12-20 18:51:46,598 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal
2015-12-20 18:51:46,598 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal
2015-12-20 18:51:46,599 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal
does not exist
2015-12-20 18:51:46,599 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal
does not exist
2015-12-20 18:51:46,599 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal
2015-12-20 18:51:46,599 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal
does not exist
2015-12-20 18:51:46,600 INFO server.Journal (Journal.java:format(216)) -
Formatting org.apache.hadoop.hdfs.qjournal.server.Journal@7fe9d315 with
namespace info: lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp
2015-12-20 18:51:46,600 INFO common.Storage (JNStorage.java:format(184)) -
Formatting journal Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal
with nsid: 12345
2015-12-20 18:51:46,601 INFO server.Journal (Journal.java:format(216)) -
Formatting org.apache.hadoop.hdfs.qjournal.server.Journal@5021494e with
namespace info: lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp
2015-12-20 18:51:46,601 INFO common.Storage (JNStorage.java:format(184)) -
Formatting journal Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal
with nsid: 12345
2015-12-20 18:51:46,602 INFO server.Journal (Journal.java:format(216)) -
Formatting org.apache.hadoop.hdfs.qjournal.server.Journal@3953c8fb with
namespace info: lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp
2015-12-20 18:51:46,602 INFO common.Storage (JNStorage.java:format(184)) -
Formatting journal Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal
with nsid: 12345
2015-12-20 18:51:46,603 INFO common.Storage (Storage.java:tryLock(716)) - Lock
on
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/in_use.lock
acquired by nodename [email protected]
2015-12-20 18:51:46,603 INFO common.Storage (Storage.java:tryLock(716)) - Lock
on
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/in_use.lock
acquired by nodename [email protected]
2015-12-20 18:51:46,604 INFO common.Storage (Storage.java:tryLock(716)) - Lock
on
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/in_use.lock
acquired by nodename [email protected]
2015-12-20 18:51:46,604 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnfinalizedSegments(435)) - Starting recovery
process for unclosed journal segments...
2015-12-20 18:51:46,604 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #2: [/127.0.0.1:36209]
getJournalState(test-journal)
2015-12-20 18:51:46,604 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #2: [/127.0.0.1:44247]
getJournalState(test-journal)
2015-12-20 18:51:46,604 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #2: [/127.0.0.1:36031]
getJournalState(test-journal)
2015-12-20 18:51:46,605 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #3:
[localhost/127.0.0.1:36209] newEpoch(test-journal,
lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp, 1)
2015-12-20 18:51:46,605 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #3:
[localhost/127.0.0.1:44247] newEpoch(test-journal,
lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp, 1)
2015-12-20 18:51:46,606 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #3:
[localhost/127.0.0.1:36031] newEpoch(test-journal,
lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp, 1)
2015-12-20 18:51:46,606 INFO server.Journal
(Journal.java:updateLastPromisedEpoch(325)) - Updating lastPromisedEpoch from 0
to 1 for client /127.0.0.1
2015-12-20 18:51:46,606 INFO server.Journal
(Journal.java:updateLastPromisedEpoch(325)) - Updating lastPromisedEpoch from 0
to 1 for client /127.0.0.1
2015-12-20 18:51:46,606 INFO server.Journal
(Journal.java:updateLastPromisedEpoch(325)) - Updating lastPromisedEpoch from 0
to 1 for client /127.0.0.1
2015-12-20 18:51:46,607 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(188)) - Scanning storage
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal)
2015-12-20 18:51:46,607 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(205)) - No files in
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal)
2015-12-20 18:51:46,607 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(188)) - Scanning storage
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal)
2015-12-20 18:51:46,607 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(205)) - No files in
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal)
2015-12-20 18:51:46,607 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnfinalizedSegments(437)) - Successfully
started new epoch 1
2015-12-20 18:51:46,608 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #4:
[localhost/127.0.0.1:36209]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@29c63873,
1, -60)
2015-12-20 18:51:46,608 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #4:
[localhost/127.0.0.1:44247]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@14a41b72,
1, -60)
2015-12-20 18:51:46,608 INFO server.Journal
(Journal.java:startLogSegment(547)) - Updating lastWriterEpoch from 0 to 1 for
client /127.0.0.1
2015-12-20 18:51:46,608 INFO server.Journal
(Journal.java:startLogSegment(547)) - Updating lastWriterEpoch from 0 to 1 for
client /127.0.0.1
2015-12-20 18:51:46,609 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(188)) - Scanning storage
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal)
2015-12-20 18:51:46,609 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(205)) - No files in
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal)
2015-12-20 18:51:46,609 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #4:
[localhost/127.0.0.1:36031]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@2fa28eb6,
1, -60)
2015-12-20 18:51:46,610 INFO server.Journal
(Journal.java:startLogSegment(547)) - Updating lastWriterEpoch from 0 to 1 for
client /127.0.0.1
2015-12-20 18:51:46,612 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #5:
[localhost/127.0.0.1:36209]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@5ec5a0e2, 1, 1, 3,
[B@27e2faeb)
2015-12-20 18:51:46,612 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #5:
[localhost/127.0.0.1:44247]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@26c76ec2, 1, 1, 3,
[B@27e2faeb)
2015-12-20 18:51:46,612 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #5:
[localhost/127.0.0.1:36031]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@c5860a, 1, 1, 3,
[B@27e2faeb)
2015-12-20 18:51:46,613 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #6:
[localhost/127.0.0.1:36031]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@adc7bb6,
1, 3)
2015-12-20 18:51:46,613 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #6:
[localhost/127.0.0.1:44247]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@79296f6b,
1, 3)
2015-12-20 18:51:46,613 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #6:
[localhost/127.0.0.1:36209]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@113def4c,
1, 3)
2015-12-20 18:51:46,619 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_inprogress_0000000000000000001
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_0000000000000000001-0000000000000000003
2015-12-20 18:51:46,624 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_inprogress_0000000000000000001
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_0000000000000000001-0000000000000000003
2015-12-20 18:51:46,649 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_inprogress_0000000000000000001
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_0000000000000000001-0000000000000000003
2015-12-20 18:51:46,650 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #7:
[localhost/127.0.0.1:44247]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@e3e7226,
4, -60)
2015-12-20 18:51:46,650 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #7:
[localhost/127.0.0.1:36031]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@25b82418,
4, -60)
2015-12-20 18:51:46,650 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(434)) - Injecting code before IPC #7:
[localhost/127.0.0.1:36209]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@5c49b700,
4, -60)
2015-12-20 18:51:46,653 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #8:
[localhost/127.0.0.1:36209]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@582bd643, 4, 4, 3,
[B@25fff5ee)
2015-12-20 18:51:46,653 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #8:
[localhost/127.0.0.1:36031]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@476aefbf, 4, 4, 3,
[B@25fff5ee)
2015-12-20 18:51:46,653 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #8:
[localhost/127.0.0.1:44247]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@3da2093a, 4, 4, 3,
[B@25fff5ee)
2015-12-20 18:51:46,653 WARN security.UserGroupInformation
(UserGroupInformation.java:doAs(1674)) - PriviledgedActionException as:jenkins
(auth:SIMPLE)
cause:org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't
write, no segment open
2015-12-20 18:51:46,654 INFO ipc.Server (Server.java:run(2107)) - IPC Server
handler 4 on 36209, call
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from
127.0.0.1:49480 Call#12141 Retry#0
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't
write, no segment open
at
org.apache.hadoop.hdfs.qjournal.server.Journal.checkSync(Journal.java:485)
at
org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:354)
at
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:149)
at
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
at
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
2015-12-20 18:51:46,654 WARN client.QuorumJournalManager
(IPCLoggerChannel.java:call(388)) - Remote journal 127.0.0.1:36209 failed to
write txns 4-6. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException):
Can't write, no segment open
at
org.apache.hadoop.hdfs.qjournal.server.Journal.checkSync(Journal.java:485)
at
org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:354)
at
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:149)
at
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
at
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy11.journal(Unknown Source)
at
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults$WrapEveryCall.answer(TestQJMWithFaults.java:473)
at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
at
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol$$EnhancerByMockitoWithCGLIB$$ea839bd4.journal(<generated>)
at
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
at
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-12-20 18:51:46,654 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #9:
[localhost/127.0.0.1:36031]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@6dd12abe,
4, 6)
2015-12-20 18:51:46,654 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #9:
[localhost/127.0.0.1:44247]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@6dd12abe,
4, 6)
2015-12-20 18:51:46,654 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #9:
[localhost/127.0.0.1:36209]
heartbeat(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@6ede3a7a)
2015-12-20 18:51:46,663 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_inprogress_0000000000000000004
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_0000000000000000004-0000000000000000006
2015-12-20 18:51:46,667 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_inprogress_0000000000000000004
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_0000000000000000004-0000000000000000006
2015-12-20 18:51:46,770 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnfinalizedSegments(435)) - Starting recovery
process for unclosed journal segments...
2015-12-20 18:51:46,772 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #1: [/127.0.0.1:36209]
getJournalState(test-journal)
2015-12-20 18:51:46,772 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #1: [/127.0.0.1:36031]
getJournalState(test-journal)
2015-12-20 18:51:46,773 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #1: [/127.0.0.1:44247]
getJournalState(test-journal)
2015-12-20 18:51:46,774 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #2:
[localhost/127.0.0.1:36209] newEpoch(test-journal,
lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp, 2)
2015-12-20 18:51:46,774 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #2:
[localhost/127.0.0.1:44247] newEpoch(test-journal,
lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp, 2)
2015-12-20 18:51:46,774 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #2:
[localhost/127.0.0.1:36031] newEpoch(test-journal,
lv=-60;cid=mycluster;nsid=12345;c=0;bpid=my-bp, 2)
2015-12-20 18:51:46,775 INFO server.Journal
(Journal.java:updateLastPromisedEpoch(325)) - Updating lastPromisedEpoch from 1
to 2 for client /127.0.0.1
2015-12-20 18:51:46,775 INFO server.Journal
(Journal.java:updateLastPromisedEpoch(325)) - Updating lastPromisedEpoch from 1
to 2 for client /127.0.0.1
2015-12-20 18:51:46,775 INFO server.Journal
(Journal.java:updateLastPromisedEpoch(325)) - Updating lastPromisedEpoch from 1
to 2 for client /127.0.0.1
2015-12-20 18:51:46,776 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(188)) - Scanning storage
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal)
2015-12-20 18:51:46,776 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(188)) - Scanning storage
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal)
2015-12-20 18:51:46,776 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(194)) - Latest log is
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_0000000000000000004-0000000000000000006,first=0000000000000000004,last=0000000000000000006,inProgress=false,hasCorruptHeader=false)
2015-12-20 18:51:46,776 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(194)) - Latest log is
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_0000000000000000004-0000000000000000006,first=0000000000000000004,last=0000000000000000006,inProgress=false,hasCorruptHeader=false)
2015-12-20 18:51:46,776 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(188)) - Scanning storage
FileJournalManager(root=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal)
2015-12-20 18:51:46,776 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnfinalizedSegments(437)) - Successfully
started new epoch 2
2015-12-20 18:51:46,776 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnclosedSegment(263)) - Beginning recovery of
unclosed segment starting at txid 4
2015-12-20 18:51:46,777 INFO server.Journal
(Journal.java:scanStorageForLatestEdits(194)) - Latest log is
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_0000000000000000001-0000000000000000003,first=0000000000000000001,last=0000000000000000003,inProgress=false,hasCorruptHeader=false)
2015-12-20 18:51:46,777 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #3:
[localhost/127.0.0.1:44247]
prepareRecovery(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@606bf823,
4)
2015-12-20 18:51:46,777 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #3:
[localhost/127.0.0.1:36031]
prepareRecovery(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@506ff349,
4)
2015-12-20 18:51:46,777 INFO server.Journal (Journal.java:getSegmentInfo(702))
- getSegmentInfo(4):
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_0000000000000000004-0000000000000000006,first=0000000000000000004,last=0000000000000000006,inProgress=false,hasCorruptHeader=false)
-> startTxId: 4 endTxId: 6 isInProgress: false
2015-12-20 18:51:46,777 INFO server.Journal
(Journal.java:prepareRecovery(746)) - Prepared recovery for segment 4:
segmentState { startTxId: 4 endTxId: 6 isInProgress: false } lastWriterEpoch: 1
lastCommittedTxId: 6
2015-12-20 18:51:46,777 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #3:
[localhost/127.0.0.1:36209]
prepareRecovery(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@101d3057,
4)
2015-12-20 18:51:46,778 INFO server.Journal (Journal.java:getSegmentInfo(702))
- getSegmentInfo(4):
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_0000000000000000004-0000000000000000006,first=0000000000000000004,last=0000000000000000006,inProgress=false,hasCorruptHeader=false)
-> startTxId: 4 endTxId: 6 isInProgress: false
2015-12-20 18:51:46,778 INFO server.Journal
(Journal.java:prepareRecovery(746)) - Prepared recovery for segment 4:
segmentState { startTxId: 4 endTxId: 6 isInProgress: false } lastWriterEpoch: 1
lastCommittedTxId: 6
2015-12-20 18:51:46,778 INFO server.Journal
(Journal.java:prepareRecovery(746)) - Prepared recovery for segment 4:
lastWriterEpoch: 1 lastCommittedTxId: 6
2015-12-20 18:51:46,778 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnclosedSegment(272)) - Recovery prepare
phase complete. Responses:
127.0.0.1:36031: segmentState { startTxId: 4 endTxId: 6 isInProgress: false }
lastWriterEpoch: 1 lastCommittedTxId: 6
127.0.0.1:44247: segmentState { startTxId: 4 endTxId: 6 isInProgress: false }
lastWriterEpoch: 1 lastCommittedTxId: 6
2015-12-20 18:51:46,778 INFO client.QuorumJournalManager
(QuorumJournalManager.java:recoverUnclosedSegment(296)) - Using longest log:
127.0.0.1:36031=segmentState {
startTxId: 4
endTxId: 6
isInProgress: false
}
lastWriterEpoch: 1
lastCommittedTxId: 6
2015-12-20 18:51:46,779 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #4:
[localhost/127.0.0.1:36209]
acceptRecovery(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@7f96717d,
startTxId: 4
endTxId: 6
isInProgress: false
,
http://localhost:50567/getJournal?jid=test-journal&segmentTxId=4&storageInfo=-60%3A12345%3A0%3Amycluster)
2015-12-20 18:51:46,779 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #4:
[localhost/127.0.0.1:44247]
acceptRecovery(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@6df506fb,
startTxId: 4
endTxId: 6
isInProgress: false
,
http://localhost:50567/getJournal?jid=test-journal&segmentTxId=4&storageInfo=-60%3A12345%3A0%3Amycluster)
2015-12-20 18:51:46,779 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #4:
[localhost/127.0.0.1:36031]
acceptRecovery(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@1b891974,
startTxId: 4
endTxId: 6
isInProgress: false
,
http://localhost:50567/getJournal?jid=test-journal&segmentTxId=4&storageInfo=-60%3A12345%3A0%3Amycluster)
2015-12-20 18:51:46,779 INFO server.Journal (Journal.java:acceptRecovery(792))
- Synchronizing log startTxId: 4 endTxId: 6 isInProgress: false: no current
segment in place
2015-12-20 18:51:46,779 INFO server.Journal (Journal.java:syncLog(888)) -
Synchronizing log startTxId: 4 endTxId: 6 isInProgress: false from
http://localhost:50567/getJournal?jid=test-journal&segmentTxId=4&storageInfo=-60%3A12345%3A0%3Amycluster
2015-12-20 18:51:46,780 INFO server.Journal (Journal.java:getSegmentInfo(702))
- getSegmentInfo(4):
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_0000000000000000004-0000000000000000006,first=0000000000000000004,last=0000000000000000006,inProgress=false,hasCorruptHeader=false)
-> startTxId: 4 endTxId: 6 isInProgress: false
2015-12-20 18:51:46,780 INFO server.Journal (Journal.java:acceptRecovery(832))
- Skipping download of log startTxId: 4 endTxId: 6 isInProgress: false: already
have up-to-date logs
2015-12-20 18:51:46,780 INFO server.Journal (Journal.java:getSegmentInfo(702))
- getSegmentInfo(4):
EditLogFile(file=/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_0000000000000000004-0000000000000000006,first=0000000000000000004,last=0000000000000000006,inProgress=false,hasCorruptHeader=false)
-> startTxId: 4 endTxId: 6 isInProgress: false
2015-12-20 18:51:46,780 INFO server.Journal (Journal.java:acceptRecovery(832))
- Skipping download of log startTxId: 4 endTxId: 6 isInProgress: false: already
have up-to-date logs
2015-12-20 18:51:46,782 INFO server.Journal (Journal.java:acceptRecovery(865))
- Accepted recovery for segment 4: segmentState { startTxId: 4 endTxId: 6
isInProgress: false } acceptedInEpoch: 2
2015-12-20 18:51:46,782 INFO server.Journal (Journal.java:acceptRecovery(865))
- Accepted recovery for segment 4: segmentState { startTxId: 4 endTxId: 6
isInProgress: false } acceptedInEpoch: 2
2015-12-20 18:51:46,783 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #5:
[localhost/127.0.0.1:36031]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@5b7127,
4, 6)
2015-12-20 18:51:46,783 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #5:
[localhost/127.0.0.1:44247]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@7361b79f,
4, 6)
2015-12-20 18:51:46,784 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #5:
[localhost/127.0.0.1:36209] getEditLogManifest(test-journal, 0, false)
2015-12-20 18:51:46,785 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #6:
[localhost/127.0.0.1:36031] getEditLogManifest(test-journal, 0, false)
2015-12-20 18:51:46,785 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #6:
[localhost/127.0.0.1:44247] getEditLogManifest(test-journal, 0, false)
2015-12-20 18:51:46,786 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #7:
[localhost/127.0.0.1:36031]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@62fd6abb,
7, -60)
2015-12-20 18:51:46,786 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #7:
[localhost/127.0.0.1:44247]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@6b6a4dc2,
7, -60)
2015-12-20 18:51:46,787 INFO server.Journal
(Journal.java:startLogSegment(547)) - Updating lastWriterEpoch from 1 to 2 for
client /127.0.0.1
2015-12-20 18:51:46,787 INFO server.Journal
(Journal.java:startLogSegment(547)) - Updating lastWriterEpoch from 1 to 2 for
client /127.0.0.1
2015-12-20 18:51:46,787 INFO namenode.TransferFsImage
(TransferFsImage.java:receiveFile(546)) - Transfer took 0.01s at 0.00 KB/s
2015-12-20 18:51:46,790 INFO server.Journal (Journal.java:acceptRecovery(865))
- Accepted recovery for segment 4: segmentState { startTxId: 4 endTxId: 6
isInProgress: false } acceptedInEpoch: 2
2015-12-20 18:51:46,790 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #6:
[localhost/127.0.0.1:36209]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@3a83f397,
4, 6)
2015-12-20 18:51:46,791 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #8:
[localhost/127.0.0.1:36031]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@1354f527, 7, 7, 3,
[B@2eaadd5b)
2015-12-20 18:51:46,791 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #8:
[localhost/127.0.0.1:44247]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@53aa994c, 7, 7, 3,
[B@2eaadd5b)
2015-12-20 18:51:46,791 INFO server.Journal
(Journal.java:finalizeLogSegment(599)) - Validating log segment
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_inprogress_0000000000000000004
about to be finalized
2015-12-20 18:51:46,791 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_inprogress_0000000000000000004
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_0000000000000000004-0000000000000000006
2015-12-20 18:51:46,792 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #7:
[localhost/127.0.0.1:36209]
startLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@5f270347,
7, -60)
2015-12-20 18:51:46,792 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #9:
[localhost/127.0.0.1:44247]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@7a485b1e,
7, 9)
2015-12-20 18:51:46,792 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #9:
[localhost/127.0.0.1:36031]
finalizeLogSegment(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@3652158,
7, 9)
2015-12-20 18:51:46,792 INFO server.Journal
(Journal.java:startLogSegment(547)) - Updating lastWriterEpoch from 1 to 2 for
client /127.0.0.1
2015-12-20 18:51:46,809 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_inprogress_0000000000000000007
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal/current/edits_0000000000000000007-0000000000000000009
2015-12-20 18:51:46,819 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(142)) - Finalizing edits file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_inprogress_0000000000000000007
->
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal/current/edits_0000000000000000007-0000000000000000009
2015-12-20 18:51:46,820 INFO ipc.Server (Server.java:stop(2485)) - Stopping
server on 36209
2015-12-20 18:51:46,820 WARN namenode.FileJournalManager
(FileJournalManager.java:startLogSegment(127)) - Unable to start log segment 7
at
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_inprogress_0000000000000000007:
null
2015-12-20 18:51:46,821 FATAL server.JournalNode
(JournalNode.java:reportErrorOnFile(299)) - Error reported on file
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal/current/edits_inprogress_0000000000000000007...
exiting
java.lang.Exception
at
org.apache.hadoop.hdfs.qjournal.server.JournalNode$ErrorReporter.reportErrorOnFile(JournalNode.java:299)
at
org.apache.hadoop.hdfs.server.namenode.FileJournalManager.startLogSegment(FileJournalManager.java:130)
at
org.apache.hadoop.hdfs.qjournal.server.Journal.startLogSegment(Journal.java:559)
at
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.startLogSegment(JournalNodeRpcServer.java:162)
at
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.startLogSegment(QJournalProtocolServerSideTranslatorPB.java:198)
at
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25425)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
2015-12-20 18:51:46,821 INFO ipc.Server (Server.java:run(844)) - Stopping IPC
Server Responder
2015-12-20 18:51:46,821 INFO ipc.Server (Server.java:run(718)) - Stopping IPC
Server listener on 36209
2015-12-20 18:51:46,821 INFO ipc.Server (Server.java:stop(2485)) - Stopping
server on 36209
2015-12-20 18:51:46,822 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #8:
[localhost/127.0.0.1:36209]
journal(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@2b991bc5, 7, 7, 3,
[B@2eaadd5b)
2015-12-20 18:51:46,822 WARN client.QuorumJournalManager
(IPCLoggerChannel.java:call(388)) - Remote journal 127.0.0.1:36209 failed to
write txns 7-9. Will try to write to this JN again after the next log roll.
java.net.ConnectException: Call From
ec2-beefy-slave-0594.vpc.cloudera.com/172.26.1.242 to localhost:36209 failed on
connection exception: java.net.ConnectException: Connection refused; For more
details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown
Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1470)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy11.journal(Unknown Source)
at
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults$WrapEveryCall.answer(TestQJMWithFaults.java:473)
at org.mockito.internal.MockHandler.handle(MockHandler.java:99)
at
org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
at
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol$$EnhancerByMockitoWithCGLIB$$ea839bd4.journal(<generated>)
at
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
at
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:708)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
at org.apache.hadoop.ipc.Client.call(Client.java:1442)
... 17 more
2015-12-20 18:51:46,824 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2015-12-20 18:51:46,824 INFO client.TestQJMWithFaults
(TestQJMWithFaults.java:beforeCall(438)) - IPC call #9:
[localhost/127.0.0.1:36209]
heartbeat(org.apache.hadoop.hdfs.qjournal.protocol.RequestInfo@3e467d8f)
2015-12-20 18:51:46,825 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal
2015-12-20 18:51:46,825 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/test-journal
2015-12-20 18:51:46,825 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/waitactive
2015-12-20 18:51:46,825 WARN qjournal.MiniJournalCluster
(MiniJournalCluster.java:shutdown(157)) - Unable to stop journal node
org.apache.hadoop.hdfs.qjournal.server.JournalNode@fcb345b
java.lang.NullPointerException
at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.unlock(Storage.java:747)
at
org.apache.hadoop.hdfs.server.common.Storage.unlockAll(Storage.java:1125)
at
org.apache.hadoop.hdfs.qjournal.server.JNStorage.close(JNStorage.java:249)
at
org.apache.hadoop.hdfs.qjournal.server.Journal.close(Journal.java:227)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at
org.apache.hadoop.hdfs.qjournal.server.JournalNode.stop(JournalNode.java:207)
at
org.apache.hadoop.hdfs.qjournal.server.JournalNode.stopAndJoin(JournalNode.java:232)
at
org.apache.hadoop.hdfs.qjournal.MiniJournalCluster.shutdown(MiniJournalCluster.java:154)
at
org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults.__CLR4_0_3uct6ut6408(TestQJMWithFaults.java:181)
at
org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults.testRecoverAfterDoubleFailures(TestQJMWithFaults.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
at
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
at
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
at
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
2015-12-20 18:51:46,825 INFO ipc.Server (Server.java:stop(2485)) - Stopping
server on 36031
2015-12-20 18:51:46,825 WARN security.UserGroupInformation
(UserGroupInformation.java:doAs(1674)) - PriviledgedActionException as:jenkins
(auth:SIMPLE) cause:java.nio.channels.ClosedByInterruptException
2015-12-20 18:51:46,826 INFO ipc.Server (Server.java:run(2107)) - IPC Server
handler 3 on 36209, call
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from
127.0.0.1:49483 Call#12169 Retry#0
java.nio.channels.ClosedByInterruptException
at
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:743)
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:723)
at org.apache.hadoop.io.IOUtils.writeFully(IOUtils.java:317)
at
org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream.preallocate(EditLogFileOutputStream.java:231)
at
org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream.flushAndSync(EditLogFileOutputStream.java:203)
at
org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
at
org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
at
org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream.create(EditLogFileOutputStream.java:122)
at
org.apache.hadoop.hdfs.server.namenode.FileJournalManager.startLogSegment(FileJournalManager.java:124)
at
org.apache.hadoop.hdfs.qjournal.server.Journal.startLogSegment(Journal.java:559)
at
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.startLogSegment(JournalNodeRpcServer.java:162)
at
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.startLogSegment(QJournalProtocolServerSideTranslatorPB.java:198)
at
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25425)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
2015-12-20 18:51:46,826 WARN ipc.Server (Server.java:processResponse(1039)) -
IPC Server handler 3 on 36209, call
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.startLogSegment from
127.0.0.1:49483 Call#12169 Retry#0: output error
2015-12-20 18:51:46,826 INFO ipc.Server (Server.java:run(2152)) - IPC Server
handler 3 on 36209 caught an exception
java.nio.channels.ClosedChannelException
at
sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2621)
at org.apache.hadoop.ipc.Server.access$1900(Server.java:134)
at
org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:989)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1054)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2141)
2015-12-20 18:51:46,826 INFO ipc.Server (Server.java:run(718)) - Stopping IPC
Server listener on 36031
2015-12-20 18:51:46,827 INFO ipc.Server (Server.java:run(844)) - Stopping IPC
Server Responder
2015-12-20 18:51:46,827 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2015-12-20 18:51:46,928 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/test-journal
2015-12-20 18:51:46,928 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/waitactive
2015-12-20 18:51:46,928 INFO ipc.Server (Server.java:stop(2485)) - Stopping
server on 44247
2015-12-20 18:51:46,929 INFO ipc.Server (Server.java:run(718)) - Stopping IPC
Server listener on 44247
2015-12-20 18:51:46,929 INFO ipc.Server (Server.java:run(844)) - Stopping IPC
Server Responder
2015-12-20 18:51:46,930 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2015-12-20 18:51:46,931 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/test-journal
2015-12-20 18:51:46,931 INFO common.Storage (JNStorage.java:close(248)) -
Closing journal storage for Storage Directory
/data/jenkins/workspace/CDH5-Hadoop-HDFS-2.6.0-Clover/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/waitactive
{noformat}
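The NullPointerException in the shutdown path above ({{Storage$StorageDirectory.unlock()}} via {{JNStorage.close()}}) is consistent with two threads interleaving inside {{unlock()}}: both pass the {{lock == null}} check before either sets {{lock = null}}, so the second release dereferences a null field. A minimal sketch of one possible fix, synchronizing the method so the check/release/null-out is atomic. The class and lock types here are simplified stand-ins (not the actual Hadoop {{Storage}} or {{java.nio.channels.FileLock}}), and this is only an illustration, not the attached patch:

```java
import java.io.IOException;

// Simplified stand-in for Storage$StorageDirectory; FakeLock replaces
// java.nio.channels.FileLock so the sketch is self-contained.
public class StorageDirectorySketch {

    static class FakeLock {
        private boolean released = false;
        void release() throws IOException {
            if (released) {
                // Mirrors the double-release hazard the unsynchronized
                // version allows when two threads interleave.
                throw new IOException("lock already released");
            }
            released = true;
        }
    }

    private FakeLock lock = new FakeLock();

    // synchronized makes the null check, the release, and the null-out
    // a single atomic step; a concurrent second caller then observes
    // lock == null and returns instead of racing.
    public synchronized void unlock() throws IOException {
        if (this.lock == null) {
            return;
        }
        this.lock.release();
        this.lock = null;
    }

    public static void main(String[] args) throws Exception {
        StorageDirectorySketch dir = new StorageDirectorySketch();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try {
                    dir.unlock();  // every call after the first is a no-op
                } catch (IOException e) {
                    System.out.println("race: " + e.getMessage());
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println("done");
    }
}
```

With the {{synchronized}} keyword removed, the four threads can double-release and print "race: ..."; with it, repeated calls are harmless no-ops. An alternative with the same effect would be a dedicated lock object guarding the field.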
> NPE in Storage$StorageDirectory#unlock()
> ----------------------------------------
>
> Key: HDFS-9590
> URL: https://issues.apache.org/jira/browse/HDFS-9590
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Xiao Chen
> Assignee: Xiao Chen
> Attachments: HDFS-9590.01.patch
>
>
> This code looks susceptible to race conditions when called from multiple
> threads.
> {code}
> public void unlock() throws IOException {
>   if (this.lock == null)
>     return;
>   this.lock.release();
>   lock.channel().close();
>   lock = null;
> }
> {code}
> This is called from a handful of places, and I don't see any protection
> against concurrent callers. Shall we add a synchronization mechanism? I'm not
> sure whether I missed any design assumptions here.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)