[
https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291438#comment-14291438
]
Carrey Zhan commented on HDFS-7609:
-----------------------------------
The same call stack was found in the original problem. Sorry for not attaching it at
the first moment:
{noformat}
"main" prio=10 tid=0x00007f03f800b000 nid=0x47ec runnable [0x00007f03ff10a000]
java.lang.Thread.State: RUNNABLE
at java.util.PriorityQueue.remove(PriorityQueue.java:305)
at
org.apache.hadoop.util.LightWeightCache.put(LightWeightCache.java:217)
at org.apache.hadoop.ipc.RetryCache.addCacheEntry(RetryCache.java:270)
- locked <0x00007ef83c305940> (a org.apache.hadoop.ipc.RetryCache)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntry(FSNamesystem.java:717)
at
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:406)
at
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:199)
at
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:112)
at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:733)
at
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:647)
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:264)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1177)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1249)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
Locked ownable synchronizers:
- <0x00007ef83d350788> (a
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
- <0x00007ef83d41f620> (a
java.util.concurrent.locks.ReentrantReadWriteLock$FairSync)
{noformat}
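For context on the cost: the trace bottoms out in java.util.PriorityQueue.remove called
from LightWeightCache.put, and PriorityQueue.remove(Object) is a linear scan over the
heap's backing array, so each such call during edit replay can be O(n) in the number of
cached entries. Below is a minimal, self-contained sketch (not HDFS code; the class name
is made up for illustration) that only demonstrates that linear cost:
{noformat}
import java.util.PriorityQueue;

/**
 * Minimal sketch (not HDFS code): shows that java.util.PriorityQueue.remove(Object)
 * scans the queue linearly, so removing arbitrary entries from a large queue on
 * every cache put adds up quickly.
 */
public class PriorityQueueRemoveCost {
  public static void main(String[] args) {
    final int size = 2_000_000;
    PriorityQueue<Integer> queue = new PriorityQueue<>(size);
    for (int i = 0; i < size; i++) {
      queue.add(i);
    }

    long start = System.nanoTime();
    // remove(Object) must scan the backing array to locate the element before it
    // can take it out; this single call is O(size).
    queue.remove(size - 1);
    long elapsedMicros = (System.nanoTime() - start) / 1_000;

    System.out.println("remove(Object) over " + size + " entries took "
        + elapsedMicros + " microseconds");
  }
}
{noformat}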
> startup used too much time to load edits
> ----------------------------------------
>
> Key: HDFS-7609
> URL: https://issues.apache.org/jira/browse/HDFS-7609
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 2.2.0
> Reporter: Carrey Zhan
> Attachments: HDFS-7609-CreateEditsLogWithRPCIDs.patch,
> recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same
> time under very high load, leaving behind about 100 million transactions in
> the edits log. (I still have no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be
> needed to finish, and it was loading fsedits most of the time. I also tried to
> restart the namenode in recovery mode, but the loading speed was no different.
> I looked into the stack trace and judged that the slowness is caused by the
> retry cache. So I set dfs.namenode.enable.retrycache to false, and the restart
> process finished in half an hour.
> I think the retry cache is useless during startup, at least during the
> recovery process.
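For reference, the workaround described above is an ordinary HDFS configuration switch.
A minimal sketch, using only the stock org.apache.hadoop.conf.Configuration API, of
setting the same key programmatically (in practice it would normally be set in
hdfs-site.xml before restarting the NameNode):
{noformat}
import org.apache.hadoop.conf.Configuration;

/**
 * Minimal sketch of the workaround from the description: disable the NameNode
 * retry cache so edit-log replay does not go through RetryCache.addCacheEntry.
 * Normally set in hdfs-site.xml; shown via the Configuration API only for
 * illustration.
 */
public class DisableRetryCache {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.namenode.enable.retrycache", false);
    System.out.println("dfs.namenode.enable.retrycache = "
        + conf.getBoolean("dfs.namenode.enable.retrycache", true));
  }
}
{noformat}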
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)