[ https://issues.apache.org/jira/browse/HADOOP-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13107560#comment-13107560 ]

Ravi Prakash commented on HADOOP-7652:
--------------------------------------

Although not the solution to this JIRA, 
https://issues.apache.org/jira/browse/HDFS-2305 takes care of the corruption 
caused by multiple secondary namenodes running.

> Provide a mechanism for a client Hadoop configuration to 'poison' daemon 
> startup; i.e., disallow daemon startup on a client config.
> ------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-7652
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7652
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Philip Zeyliger
>
> We've seen folks who have been given a Hadoop configuration to act as a 
> client accidentally type "hadoop namenode" and get things into a confused 
> or incorrect state.  Most recently, we've seen data corruption when users 
> accidentally run extra secondary namenodes 
> (https://issues.apache.org/jira/browse/HDFS-2305).
> I'd like to propose that we introduce a configuration property, say, 
> "client.poison.servers", which, if set, disables the Hadoop daemons (nn, 
> snn, jt, tt, etc.) with a reasonable error message.  Hadoop administrators 
> could then hand out/install configs on machines intended to be clients 
> only, with a little less worry that daemons will accidentally get run there.
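
For illustration only, a minimal sketch of the kind of startup guard the
proposal describes. This is not a patch from this JIRA: the class and method
names below are hypothetical, and only the property name
"client.poison.servers" comes from the proposal. Configuration and its
getBoolean(name, default) accessor are real Hadoop Common API.

    import org.apache.hadoop.conf.Configuration;

    /** Hypothetical guard; names are illustrative, not from an actual patch. */
    public class ClientPoisonCheck {
      /** Property name proposed in this JIRA. */
      public static final String CLIENT_POISON_SERVERS = "client.poison.servers";

      /**
       * Refuses daemon startup if the loaded configuration is marked
       * client-only. A daemon's main() would call this before starting.
       */
      public static void checkNotPoisoned(Configuration conf, String daemonName) {
        if (conf.getBoolean(CLIENT_POISON_SERVERS, false)) {
          throw new IllegalStateException(daemonName
              + " refusing to start: " + CLIENT_POISON_SERVERS
              + " is set to true in this (client-only) configuration.");
        }
      }
    }

With such a check in place, an admin-distributed client config would just set
client.poison.servers to true in core-site.xml, and any daemon started against
that config would fail fast with the message above instead of joining (or
corrupting) the cluster.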

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
