[ https://issues.apache.org/jira/browse/HADOOP-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13107760#comment-13107760 ]
Steve Loughran commented on HADOOP-7652:
----------------------------------------
# I don't think "poison" is the right term; ".disabled" would be a more
appropriate suffix.
# You may want to do it per service: "hdfs.namenode.disabled",
"hdfs.datanode.disabled", etc. (see the sketch after this list).
# Scripts must exit with a non-zero error code when daemon startup is disabled.
# Testing: try to bring up a MiniDFSCluster against configs with the namenode
and datanodes disabled, and see what happens (see the test sketch below).
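
A minimal sketch of what the per-service check might look like at daemon
startup. ServiceStartupGuard and checkNotDisabled are hypothetical names
for this ticket; Configuration.getBoolean is existing Hadoop API:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ServiceStartupGuard {
  /**
   * Refuse daemon startup when the loaded configuration marks this
   * service as disabled, e.g.
   * checkNotDisabled(conf, "hdfs.namenode.disabled") in the NameNode.
   */
  public static void checkNotDisabled(Configuration conf, String key) {
    if (conf.getBoolean(key, false)) {
      throw new IllegalStateException("Startup disabled by configuration: "
          + key + " is set; this looks like a client-only config.");
    }
  }
}
{code}

Each daemon's main() would catch this and System.exit() with a non-zero
status so the wrapper scripts can detect the refusal (point 3).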
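
And a sketch of the test in point 4, assuming the guard above throws
in-process rather than calling System.exit() directly (which would kill
the test JVM):

{code:java}
import static org.junit.Assert.fail;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestDisabledDaemonStartup {
  @Test
  public void testClusterRefusesDisabledConfig() throws Exception {
    Configuration conf = new Configuration();
    // A client-only config would carry these flags.
    conf.setBoolean("hdfs.namenode.disabled", true);
    conf.setBoolean("hdfs.datanode.disabled", true);
    try {
      new MiniDFSCluster.Builder(conf).build().shutdown();
      fail("cluster came up despite the *.disabled flags being set");
    } catch (IllegalStateException expected) {
      // Expected: startup refused with an explanatory message.
    }
  }
}
{code}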
> Provide a mechanism for a client Hadoop configuration to 'poison' daemon
> startup; i.e., disallow daemon startup on a client config.
> ------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-7652
> URL: https://issues.apache.org/jira/browse/HADOOP-7652
> Project: Hadoop Common
> Issue Type: Improvement
> Components: conf
> Reporter: Philip Zeyliger
>
> We've seen folks who were given a Hadoop configuration intended only for
> client use accidentally type "hadoop namenode" and get things into a
> confused or incorrect state. Most recently, we've seen data corruption when
> users accidentally ran extra secondary namenodes
> (https://issues.apache.org/jira/browse/HDFS-2305).
> I'd like to propose that we introduce a configuration property, say
> "client.poison.servers", which, if set, disables the Hadoop daemons (nn,
> snn, jt, tt, etc.) with a reasonable error message. Hadoop administrators
> could then hand out/install configs on machines intended to be clients
> only, with a little less worry that daemons will accidentally get run there.
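
For illustration, the client-side config an administrator hands out might
carry nothing more than the disable flags. The property names below follow
the per-service scheme suggested in the comment above; none of them exist
in Hadoop today:

{code:xml}
<!-- hdfs-site.xml shipped to client-only machines (hypothetical keys) -->
<property>
  <name>hdfs.namenode.disabled</name>
  <value>true</value>
</property>
<property>
  <name>hdfs.datanode.disabled</name>
  <value>true</value>
</property>
{code}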