[ https://issues.apache.org/jira/browse/HDFS-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567176#comment-14567176 ]

Brahma Reddy Battula commented on HDFS-8507:
--------------------------------------------

Discussed this issue with [~ajisakaa] offline. Going to do the following:

initialize() is called before countUncheckpointedTxns(), and
initialize() does two things:

1) create a connection to the NN
2) launch the HTTP server

Therefore the SNN starts. For countUncheckpointedTxns(),
2) is not needed. I'm thinking we can split initialize()
into two methods to avoid starting the SNN, as follows:

initialize() ... create a connection to the NN
startServer() ... launch the HTTP server

When -geteditsize is specified, only initialize() is called,
and the SNN does not start.
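
A rough sketch of the split I have in mind is below. This is illustrative only; the method bodies, the startServer() name, and the simplified main() flow are placeholders, not the actual SecondaryNameNode code.

{code:java}
// Illustrative sketch only -- not the real SecondaryNameNode implementation.
// Idea: initialize() creates the connection to the NN, startServer() launches
// the HTTP server. -geteditsize then only needs initialize(), so the full
// daemon startup (and its HTTP server) is skipped.
public class SecondaryNameNodeSketch {

  private boolean initialized = false;

  /** Create the connection to the NN and do other non-server setup. */
  void initialize() {
    // connect to the NameNode, prepare checkpoint storage, etc.
    initialized = true;
  }

  /** Launch the HTTP server; only needed when running as a daemon. */
  void startServer() {
    if (!initialized) {
      throw new IllegalStateException("initialize() must be called first");
    }
    // start the checkpoint HTTP server here
  }

  /** Ask the NN how many transactions have not been checkpointed yet. */
  long countUncheckpointedTxns() {
    return 0L; // placeholder for the RPC to the NN
  }

  public static void main(String[] args) {
    SecondaryNameNodeSketch snn = new SecondaryNameNodeSketch();
    snn.initialize();
    if (args.length > 0 && "-geteditsize".equals(args[0])) {
      // -geteditsize: no HTTP server, no daemon startup
      System.out.println("NameNode has " + snn.countUncheckpointedTxns()
          + " uncheckpointed transactions");
    } else {
      // normal daemon path keeps the current behaviour
      snn.startServer();
    }
  }
}
{code}

With this split, the daemon path calls initialize() and then startServer(), while the -geteditsize path stops after initialize().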

> hdfs secondarynamenode -geteditsize will fail when SNN is running as daemon
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-8507
>                 URL: https://issues.apache.org/jira/browse/HDFS-8507
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>
> hdfs secondarynamenode -geteditsize will try to start the SNN and fail to get the
> edit size. I feel this might not be required...
> Directly calling the following should be enough, right? Why do we need to start the
> SNN as part of this command?
> {code}
> case GETEDITSIZE:
>         long uncheckpointed = countUncheckpointedTxns();
>         System.out.println("NameNode has " + uncheckpointed +
>             " uncheckpointed transactions");
> {code}
>  *Trace* 
> {noformat}
> 15/06/01 20:25:31 ERROR common.Storage: It appears that another node  
> 12290@host189 has already locked the storage directory: 
> /home/hdfs/OpenSource/hadoop-2.7.0/hadoop-hdfs/dfs/namesecondary
> java.nio.channels.OverlappingFileLockException
>       at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:712)
>       at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:678)
>       at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:499)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:962)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:243)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
> 15/06/01 20:25:31 INFO common.Storage: Cannot lock storage 
> /home/hdfs/OpenSource/hadoop-2.7.0/hadoop-hdfs/dfs/namesecondary. The 
> directory is already locked
> 15/06/01 20:25:31 FATAL namenode.SecondaryNameNode: Failed to start secondary 
> namenode
> java.io.IOException: Cannot lock storage 
> /home/hdfs/OpenSource/hadoop-2.7.0/hadoop-hdfs/dfs/namesecondary. The 
> directory is already locked
>       at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:683)
>       at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:499)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.recoverCreate(SecondaryNameNode.java:962)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:243)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
>       at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
