[ 
https://issues.apache.org/jira/browse/HDFS-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14138634#comment-14138634
 ] 

Yi Liu commented on HDFS-7073:
------------------------------

Hi [~cnauroth], nice work.

{quote}
DataNode: There had been some mishandling in checkSecureConfig around checking 
the dfs.data.transfer.protection property. It's defined in hdfs-default.xml, so 
it always comes in with empty string as the default (not null). I changed some 
of this logic to check for empty string instead of null.
{quote}
That's great as part of this fix too; otherwise, on a security-enabled cluster 
we could still start a DataNode listening on an unprivileged port (> 1024) even 
when {{dfs.data.transfer.protection}} is empty.
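To illustrate the empty-string pitfall, here is a minimal Java sketch (hypothetical names, not the actual DataNode code): because the property has a default in hdfs-default.xml, the configuration lookup returns an empty string rather than null, so a null check alone never detects the unset case.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigCheck {
    // Simulates a Configuration lookup: a key defined in hdfs-default.xml
    // comes back as the empty string, never as null.
    static final Map<String, String> DEFAULTS = new HashMap<>();
    static {
        DEFAULTS.put("dfs.data.transfer.protection", "");
    }

    static boolean isSaslConfigured(String value) {
        // A bare null check is always true for a property with a default;
        // the empty string must be rejected as well.
        return value != null && !value.isEmpty();
    }

    public static void main(String[] args) {
        String v = DEFAULTS.get("dfs.data.transfer.protection");
        System.out.println(isSaslConfigured(v)); // prints "false"
    }
}
```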

{quote}
Cluster is unsecured, but has block access tokens enabled. This is not 
something I've seen done in practice, but I've heard historically it has been 
allowed. The HDFS-2856 code relied on seeing an empty block access token to 
trigger fallback, and this doesn't work if the unsecured cluster actually is 
using block access tokens.
{quote}

In the patch, fallback for writeBlock is handled, but fallback for readBlock is 
not. 
A test case for this scenario is hard to write because 
{{UserGroupInformation#isSecurityEnabled()}} is static, so we can't configure 
the client as secured and the server as unsecured in the same JVM.
However, I happen to have such an environment and tested this scenario. I 
configured: server (unsecured, block access tokens enabled), client (security 
enabled, block access tokens enabled, fallback enabled). Writing a file 
succeeds, but *reading a file fails*.
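The fallback heuristic at issue can be sketched like this (a hypothetical simplification, not the actual HDFS-2856 code): the client treats an empty block access token as the signal that the peer is unsecured, which misfires when an unsecured cluster enables block access tokens and therefore sends non-empty tokens.

```java
public class FallbackSketch {
    // HDFS-2856-style heuristic: an empty block access token implies an
    // unsecured peer, so fall back to a plain (non-SASL) connection.
    static boolean shouldUseSasl(boolean tokenIsEmpty) {
        return !tokenIsEmpty;
    }

    public static void main(String[] args) {
        // Unsecured cluster WITH block access tokens: the token is non-empty,
        // so the client wrongly attempts SASL and the transfer fails.
        System.out.println(shouldUseSasl(false)); // prints "true"
    }
}
```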

> Allow falling back to a non-SASL connection on DataTransferProtocol in 
> several edge cases.
> ------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7073
>                 URL: https://issues.apache.org/jira/browse/HDFS-7073
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs-client, security
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-7073.1.patch
>
>
> HDFS-2856 implemented general SASL support on DataTransferProtocol.  Part of 
> that work also included a fallback mode in case the remote cluster is running 
> under a different configuration without SASL.  I've discovered a few edge 
> case configurations that this did not support:
> * Cluster is unsecured, but has block access tokens enabled.  This is not 
> something I've seen done in practice, but I've heard historically it has been 
> allowed.  The HDFS-2856 code relied on seeing an empty block access token to 
> trigger fallback, and this doesn't work if the unsecured cluster actually is 
> using block access tokens.
> * The DataNode has an unpublicized testing configuration property that could 
> be used to skip the privileged port check.  However, the HDFS-2856 code is 
> still enforcing requirement of SASL when the ports are not privileged, so 
> this would force existing configurations to make changes to activate SASL.
> This patch will restore the old behavior so that these edge case 
> configurations will continue to work the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)