[
https://issues.apache.org/jira/browse/HDFS-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14139272#comment-14139272
]
Chris Nauroth commented on HDFS-7073:
-------------------------------------
bq. In the patch, fallback for writeBlock is handled, but fallback for
readBlock is not handled.
Yes, I spotted the same thing in my testing yesterday and chose to cancel the
patch to make it clear that it's not ready. I'm working on a new patch. Thank
you for your testing too.
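To make the readBlock gap concrete, here is a rough sketch of the shape I'm aiming for in the new patch. The names below are purely illustrative, not the actual {{SaslDataTransferClient}} code: the point is that both the write path and the read path should route through a single negotiation helper, so the fallback decision can't diverge between them.
{code:java}
import java.io.IOException;
import java.net.Socket;

/**
 * Illustrative sketch only: route block reads and block writes through one
 * SASL negotiation entry point so fallback behaves identically for both.
 */
public class DataTransferSaslSketch {
  private final boolean fallbackAllowed;

  public DataTransferSaslSketch(boolean fallbackAllowed) {
    this.fallbackAllowed = fallbackAllowed;
  }

  public Socket setupForWriteBlock(Socket s) throws IOException {
    return negotiate(s); // writeBlock path
  }

  public Socket setupForReadBlock(Socket s) throws IOException {
    return negotiate(s); // readBlock path: same logic, no divergence
  }

  private Socket negotiate(Socket s) throws IOException {
    if (saslRequired() && !peerSupportsSasl(s)) {
      if (fallbackAllowed) {
        return s; // fall back to an unwrapped connection
      }
      throw new IOException("SASL required, but peer does not support it");
    }
    return wrapWithSasl(s);
  }

  // Placeholders standing in for the real negotiation details.
  private boolean saslRequired() { return true; }
  private boolean peerSupportsSasl(Socket s) { return false; }
  private Socket wrapWithSasl(Socket s) { return s; }
}
{code}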
bq. The test case for this scenario is hard to write because
UserGroupInformation#isSecurityEnabled() is static...
Yes, agreed. Unfortunately, until we refactor some of the static state inside
{{UserGroupInformation}}, it's going to be impossible to put tests covering
these kinds of cross-cluster scenarios directly into the source tree. We're
having to rely on external system tests to cover this. Last time I looked at
refactoring {{UserGroupInformation}}, it looked like it was going to be a big
effort, and possibly backwards-incompatible.
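For what it's worth, the rough shape of the refactoring I have in mind would be to hide the static call behind something injectable, so a test could present a different answer per simulated cluster. This is just a sketch of the idea, not code that exists in the tree:
{code:java}
import org.apache.hadoop.security.UserGroupInformation;

/**
 * Sketch only: an injectable seam over the static security check. Production
 * code would use UGI_BACKED; a cross-cluster test could supply one instance
 * answering true (secure cluster) and another answering false (simple auth).
 */
public interface SecurityCheck {
  boolean isSecurityEnabled();

  SecurityCheck UGI_BACKED = new SecurityCheck() {
    @Override
    public boolean isSecurityEnabled() {
      return UserGroupInformation.isSecurityEnabled();
    }
  };
}
{code}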
bq. If we allow this type of fallback, as discussed in HDFS-2856 about the
attack vector, a malicious task can easily listen on the DN's port after it
dies and steal the block access token. So we'd better not allow the fallback?
Thanks, great catch. The difficulty here is that
{{ipc.client.fallback-to-simple-auth-allowed}} controls fallback globally
regardless of which cluster the client is connecting to. One of the big use
cases motivating fallback is distcp between a secure cluster and a non-secure
cluster. In that scenario, setting
{{ipc.client.fallback-to-simple-auth-allowed}} could accidentally trigger
fallback during communication with the secure cluster, when we really only
want it for the non-secure cluster.
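To spell out why the flag is too coarse: it's a single boolean read from the client's {{Configuration}}, with no notion of which cluster a given connection targets. Roughly:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class FallbackFlagExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // One global switch covering every connection this client makes:
    boolean fallbackAllowed =
        conf.getBoolean("ipc.client.fallback-to-simple-auth-allowed", false);
    // A distcp job talking to both a secure and a non-secure cluster sees
    // the same value for both destinations, which is exactly the attack
    // surface described above.
    System.out.println("fallback allowed everywhere: " + fallbackAllowed);
  }
}
{code}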
I'm going to explore an alternative implementation that detects whether
fallback actually occurred during the corresponding NameNode interaction
before the DataTransferProtocol call. This would tell us unambiguously whether
the remote DataNode is unsecured. Doing this would require some additional
plumbing at the RPC layer.
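I haven't prototyped this yet, but one possible shape for that plumbing, with hypothetical names, would be a per-cluster signal that the RPC layer sets after negotiating with the NameNode and that the DataTransferProtocol client consults instead of the global flag:
{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Hypothetical sketch: record whether RPC auth to the NameNode actually fell
 * back to SIMPLE, then key DataTransferProtocol fallback off that record.
 */
public class FallbackSignalSketch {
  // Set by the RPC connection code if SASL negotiation downgraded to SIMPLE.
  private final AtomicBoolean fellBackToSimpleAuth = new AtomicBoolean(false);

  /** Called from the RPC layer after negotiating with the NameNode. */
  public void recordNegotiatedAuth(boolean simpleAuthUsed) {
    fellBackToSimpleAuth.set(simpleAuthUsed);
  }

  /** Called by the DataTransferProtocol client before connecting to a DN. */
  public boolean mayFallBackForDataTransfer() {
    // Fall back only when the NameNode of this same cluster already proved
    // to be unsecured; DataNodes of a secure cluster never see a fallback.
    return fellBackToSimpleAuth.get();
  }
}
{code}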
> Allow falling back to a non-SASL connection on DataTransferProtocol in
> several edge cases.
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-7073
> URL: https://issues.apache.org/jira/browse/HDFS-7073
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, hdfs-client, security
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-7073.1.patch
>
>
> HDFS-2856 implemented general SASL support on DataTransferProtocol. Part of
> that work also included a fallback mode in case the remote cluster is running
> under a different configuration without SASL. I've discovered a few edge-case
> configurations that this did not support:
> * Cluster is unsecured, but has block access tokens enabled. This is not
> something I've seen done in practice, but I've heard historically it has been
> allowed. The HDFS-2856 code relied on seeing an empty block access token to
> trigger fallback, and this doesn't work if the unsecured cluster actually is
> using block access tokens (see the sketch at the end of this description).
> * The DataNode has an unpublicized testing configuration property that could
> be used to skip the privileged port check. However, the HDFS-2856 code still
> enforces the SASL requirement when the ports are not privileged, so this
> would force existing configurations to make changes to activate SASL.
> This patch will restore the old behavior so that these edge-case
> configurations will continue to work the same way.
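> To illustrate the first case above: the HDFS-2856 heuristic keys fallback on
> an empty block access token, and an unsecured cluster with
> {{dfs.block.access.token.enable}} set to true never produces an empty token,
> so the fallback branch is never taken. A simplified sketch, not the actual
> code:
> {code:java}
> public class EmptyTokenHeuristicExample {
>   // HDFS-2856-style heuristic: an empty token was taken to mean the remote
>   // cluster is unsecured, so skip SASL.
>   static boolean shouldFallBack(byte[] blockAccessTokenIdentifier) {
>     return blockAccessTokenIdentifier.length == 0;
>   }
>
>   public static void main(String[] args) {
>     System.out.println(shouldFallBack(new byte[0]));    // true: falls back
>     System.out.println(shouldFallBack(new byte[] {1})); // false: no fallback,
>     // even though the cluster that issued this token may be unsecured.
>   }
> }
> {code}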
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)