[ https://issues.apache.org/jira/browse/HDFS-6859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chris Nauroth updated HDFS-6859:
--------------------------------
Resolution: Won't Fix
Status: Resolved (was: Patch Available)
Hi, [~benoyantony]. Unfortunately, I don't think we can implement this change
for reasons of compatibility.
At runtime, a DataNode must use either a privileged port or SASL on
DataTransferProtocol, but not both. The client checks the port number of the
DataNode it's connecting to and only tries SASL if the port is non-privileged.
This matters in scenarios like switching an existing cluster from using root to
using SASL: while that transition happens, the cluster can temporarily have a
mix of some DataNodes using root and other DataNodes using SASL, so the client
needs a clear way to determine whether to try SASL. This is discussed more
fully in this comment on HDFS-2856:
https://issues.apache.org/jira/browse/HDFS-2856?focusedCommentId=13988389&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13988389
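To make the client-side decision concrete, here is a rough sketch of the check described above. The method name, class name, and constant are purely illustrative (this is not the actual client code), but the rule is the one stated: SASL is only attempted against a non-privileged port.
{code:java}
import java.net.InetSocketAddress;

public class SaslDecisionSketch {
  // Ports below 1024 require root to bind, so a DataNode listening on one is
  // assumed to be protected by the privileged port rather than by SASL.
  private static final int PRIVILEGED_PORT_LIMIT = 1024;

  /**
   * Illustrative only: try SASL on DataTransferProtocol only when the target
   * DataNode is on a non-privileged port and the client has a
   * dfs.data.transfer.protection value configured.
   */
  static boolean shouldTrySasl(InetSocketAddress dnAddr, String dataTransferProtection) {
    boolean saslConfigured =
        dataTransferProtection != null && !dataTransferProtection.isEmpty();
    return dnAddr.getPort() >= PRIVILEGED_PORT_LIMIT && saslConfigured;
  }
}
{code}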
The problem with the change proposed here is that existing clusters would
already have hadoop.rpc.protection configured, so they would automatically pick
up dfs.data.transfer.protection when they upgrade the HDFS software. That would
make the DataNode think it needs to use SASL, even though it's still configured
to run on a privileged port.
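As an illustration of why the defaulting is risky, consider a sketch along these lines (this is a simplified rendering of the idea, not the attached patch):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ProtectionDefaultingSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // An existing secure cluster: the RPC QOP is set, DataNodes bind privileged
    // ports, and dfs.data.transfer.protection is intentionally left unset.
    conf.set("hadoop.rpc.protection", "privacy");

    // The proposed defaulting, roughly: fall back to hadoop.rpc.protection
    // whenever dfs.data.transfer.protection is absent.
    String dataTransferProtection =
        conf.get("dfs.data.transfer.protection", conf.get("hadoop.rpc.protection"));

    // After a software upgrade this prints "privacy" even though the operator
    // never opted into SASL on DataTransferProtocol, so the DataNode would
    // believe it must use SASL while still configured for a privileged port.
    System.out.println(dataTransferProtection);
  }
}
{code}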
We actually reject attempts to configure both a privileged port and SASL
simultaneously for this very reason. See {{DataNode#checkSecureConfig}}, which
aborts during startup if it finds this kind of configuration.
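For reference, here is a heavily simplified sketch of the kind of validation that check performs. The real {{DataNode#checkSecureConfig}} inspects the configuration and secure resources; this condenses it to the point at hand, with illustrative parameter names.
{code:java}
import java.io.IOException;

public class SecureConfigCheckSketch {
  /**
   * Simplified illustration of the startup validation described above: a secure
   * DataNode must rely on either privileged ports or SASL on
   * DataTransferProtocol, and configuring both (or neither) is rejected so that
   * clients can use the port number to decide whether to attempt SASL.
   */
  static void checkSecureConfig(boolean resourcesArePrivileged, boolean saslConfigured)
      throws IOException {
    if (resourcesArePrivileged && saslConfigured) {
      throw new IOException(
          "Cannot use privileged ports and SASL on DataTransferProtocol at the same time.");
    }
    if (!resourcesArePrivileged && !saslConfigured) {
      throw new IOException(
          "Secure DataNode requires either privileged resources or SASL on DataTransferProtocol.");
    }
  }
}
{code}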
I'm going to resolve this as won't fix, but please feel free to reopen if you
have another take on this. Thanks!
> Allow dfs.data.transfer.protection default to hadoop.rpc.protection
> -------------------------------------------------------------------
>
> Key: HDFS-6859
> URL: https://issues.apache.org/jira/browse/HDFS-6859
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: security
> Affects Versions: 2.5.0
> Reporter: Benoy Antony
> Assignee: Benoy Antony
> Priority: Minor
> Attachments: HDFS-6859.patch
>
>
> Currently, an administrator needs to configure both
> _dfs.data.transfer.protection_ and _hadoop.rpc.protection_ to specify the _QOP_
> for the RPC and data transfer protocols. In some cases, the values for these two
> properties will be the same. In those cases, it may be easier to allow
> dfs.data.transfer.protection to default to hadoop.rpc.protection.
> This also ensures that an admin will get a QOP of _Authentication_ if the admin
> does not specify either of those values.
> Separate jiras (HDFS-6858 and HDFS-6859) have been created for
> dfs.data.transfer.saslproperties.resolver.class and
> dfs.data.transfer.protection, respectively.
--
This message was sent by Atlassian JIRA
(v6.2#6252)