[
https://issues.apache.org/jira/browse/HADOOP-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16464859#comment-16464859
]
Allen Wittenauer commented on HADOOP-15443:
-------------------------------------------
I thought more about this and have a few more notes to pass on.
Just to refresh everyone's memory,
[here|https://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/SecureMode.html#Secure_DataNode]
is the official documentation. It (poorly) documents four states:
|| HDFS_DATANODE_SECURE_USER || dfs.data.transfer.protection || Mode ||
| unset | unset | non-root start, non-reserved port, insecure |
| set | unset | root start, reserved port, secure |
| unset | set | non-root start, non-reserved port, secure(-ish) |
| set | set | root start, reserved port, secure |
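As a quick illustration of the two secure rows above, here is a minimal sketch (the property and variable names are the documented ones; the port, user name, and protection level are just example values):
{code:bash}
# Privileged (root) start: set this in hadoop-env.sh only when reserved
# ports are wanted.  The daemon starts as root, binds the reserved
# ports, then drops to this user.
export HDFS_DATANODE_SECURE_USER=hdfs    # example user

# SASL-based start (row 3): leave HDFS_DATANODE_SECURE_USER unset and
# instead configure hdfs-site.xml along these lines:
#   dfs.data.transfer.protection = authentication   (or integrity/privacy)
#   dfs.http.policy              = HTTPS_ONLY
#   dfs.datanode.address         = 0.0.0.0:9866     (non-reserved port)
# then start the datanode as an ordinary user:
#   hdfs --daemon start datanode
{code}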
HDFS_DATANODE_SECURE_USER (and its deprecated form HADOOP_SECURE_DN_USER)
should only be set if privileged mode is needed. This requirement is pretty
much unchanged from branch-2; only the names have been cleaned up. The biggest
difference is that JSVC_HOME was also required in branch-2. As explained in
HDFS-13501, that is not a particularly useful check (even in branch-2), given
that other portions of HDFS also require JSVC_HOME to be defined. [It's a
supported configuration to have portmap running privileged and the datanode
unprivileged in a secure cluster.]
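(For context, the branch-2 era gate being discussed looked roughly like the following; this is an illustrative approximation, not the actual hadoop-functions.sh contents.)
{code:bash}
# Illustrative approximation of the old gate: both variables had to be
# present before a privileged ("secure") start was attempted.
if [[ -n "${HADOOP_SECURE_DN_USER}" ]]; then
  if [[ -z "${JSVC_HOME}" ]]; then
    echo "ERROR: JSVC_HOME must be set when HADOOP_SECURE_DN_USER is set" >&2
    exit 1
  fi
  # hand off to jsvc, which binds the reserved ports as root and then
  # drops privileges to ${HADOOP_SECURE_DN_USER}
fi
{code}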
Also in reference to HDFS-13501, I could easily see how JSVC_HOME was being
used as the pivot point for privileged vs. non-privileged startup. That won't
work anymore in 3.x, given that JSVC_HOME is only referenced when jsvc is
actually being used. Instead, it's now solely HDFS_DATANODE_SECURE_USER.
Again, this essentially matches real-world conditions under branch-2, but now
much more formally.
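In other words, management tooling that keys its startup decision off the environment has to change its pivot. A rough sketch (start_privileged_datanode and start_unprivileged_datanode are hypothetical stand-ins for whatever the tooling actually runs):
{code:bash}
# branch-2 tooling could get away with pivoting on JSVC_HOME:
#   [[ -n "${JSVC_HOME}" ]] && start_privileged_datanode
#
# In 3.x, JSVC_HOME is only referenced when jsvc is actually used, so
# the pivot has to be HDFS_DATANODE_SECURE_USER instead:
if [[ -n "${HDFS_DATANODE_SECURE_USER}" ]]; then
  start_privileged_datanode      # hypothetical helper
else
  start_unprivileged_datanode    # hypothetical helper
fi
{code}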
I wrote
[this|https://effectivemachines.com/2017/07/08/powerful-_users-in-apache-hadoop-3-0-0-alpha4/]
up last year to talk about how some of the _USER vars work in 2.x vs. 3.x. In
hindsight, using PRIVILEGED instead of SECURE would have been better, but c'est
la vie. I think I went with SECURE because it was likely to be used in more
configurations. This was probably a mistake.
> hadoop shell should allow non-privileged user to start secure daemons.
> ----------------------------------------------------------------------
>
> Key: HADOOP-15443
> URL: https://issues.apache.org/jira/browse/HADOOP-15443
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ajay Kumar
> Assignee: Ajay Kumar
> Priority: Major
> Attachments: HADOOP-15443.poc.patch
>
>
> With [HDFS-13081], a secure Datanode can now be started without root
> privileges if the RPC port is protected via SASL and SSL is enabled for HTTP.
> However, the hadoop shell still checks for a privileged user in
> hadoop-functions.sh. This Jira intends to amend that, at least for HDFS.