[ https://issues.apache.org/jira/browse/HADOOP-7652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13107496#comment-13107496 ]
Arun C Murthy commented on HADOOP-7652:
---------------------------------------
Have you tried service-level-authorization?
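(For context, service-level authorization is enabled through the standard
hadoop.security.authorization property in core-site.xml, with per-protocol
ACLs in hadoop-policy.xml. A minimal sketch of how it could keep stray
daemons from registering is below; the property names are the standard ones,
but the ACL value "hdfs" is a placeholder for a site's actual daemon account.)

    <!-- core-site.xml: turn on service-level authorization -->
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>

    <!-- hadoop-policy.xml: only the (placeholder) 'hdfs' user may speak
         DatanodeProtocol to the namenode, so a daemon started from a
         client account is rejected at the RPC layer -->
    <property>
      <name>security.datanode.protocol.acl</name>
      <value>hdfs</value>
    </property>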
> Provide a mechanism for a client Hadoop configuration to 'poison' daemon
> startup; i.e., disallow daemon start up on a client config.
> ------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-7652
> URL: https://issues.apache.org/jira/browse/HADOOP-7652
> Project: Hadoop Common
> Issue Type: Improvement
> Components: conf
> Reporter: Philip Zeyliger
>
> We've seen folks who have been given a Hadoop configuration intended for
> client use accidentally type "hadoop namenode" and get things into a confused
> or incorrect state. Most recently, we've seen data corruption when users
> accidentally run extra secondary namenodes
> (https://issues.apache.org/jira/browse/HDFS-2305).
> I'd like to propose that we introduce a configuration property, say,
> "client.poison.servers", which, if set, disables the Hadoop daemons (nn, snn,
> jt, tt, etc.) with a reasonable error message. Hadoop administrators can
> hand out/install such configs on machines intended to be clients only, with a
> little less worry that daemons will accidentally get run on them.
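As an illustration of the proposal, a client-only configuration might carry
something like the snippet below. Only the property name comes from the
description above; the boolean value, the placement in core-site.xml, and the
description text are assumptions, since the issue does not specify the exact
semantics.

    <!-- core-site.xml handed out to client-only machines (sketch) -->
    <property>
      <name>client.poison.servers</name>
      <value>true</value>
      <description>When set, Hadoop daemons (nn, snn, jt, tt, dn) refuse to
        start with this configuration and exit with an error message.
      </description>
    </property>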