Jeff Hubbs resolved HDFS-13397.
      Resolution: Invalid
    Release Note: This fix apparently does not work in all cases; I will withdraw 
and re-post it after further investigation.

> start-dfs.sh and hdfs --daemon start datanode say "ERROR: Cannot set priority 
> of datanode process XXXX"
> -------------------------------------------------------------------------------------------------------
>                 Key: HDFS-13397
>                 URL: https://issues.apache.org/jira/browse/HDFS-13397
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs
>    Affects Versions: 3.0.1
>            Reporter: Jeff Hubbs
>            Priority: Major
> When executing
> {code:java}
> $HADOOP_HOME/bin/hdfs --daemon start datanode
> {code}
> as a regular user (e.g. "hdfs"), it fails with
> {code:java}
> ERROR: Cannot set priority of datanode process XXXX
> {code}
> where XXXX is some PID.
> It turned out that this is because, at least on Gentoo Linux (and I believe 
> this is nearly universal), a regular user process cannot by default raise 
> the priority of itself or of any of the user's other processes. To fix this, 
> I added these lines to /etc/security/limits.conf [NOTE: the users hdfs, yarn, 
> and mapred are in the group called hadoop on this system]:
> {code:java}
> @hadoop        hard    nice            -15
> @hadoop        hard    priority        -15
> {code}
> This change will need to be made on all datanodes.
> The documentation needs to note that [at minimum] the hdfs user must be 
> allowed to raise its processes' priority. I did not observe this problem 
> under 3.0.0.

This message was sent by Atlassian JIRA
