[
https://issues.apache.org/jira/browse/HDFS-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14137711#comment-14137711
]
Chris Nauroth commented on HDFS-7073:
-------------------------------------
[~jnp], thank you for taking a look.
Regarding the instanceof check, I don't see a way to avoid it. There is a lot
of existing logic in the catch clause checking for specific exception types and
doing special handling. When fallback fails, we need to be able to drive the
original exception into this handling logic to preserve the existing error
handling behavior. Some of this logic controls the outer loop too ({{break}}
vs. {{continue}}), so it's not logic that's trivial to refactor behind a
reusable method. Maybe this whole code path would benefit from a larger
clean-up refactoring, but that would be too much to fold into this patch now.
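To make that concrete, here is a rough sketch of the pattern I'm describing; the
loop structure, helper methods, and even the exception types are just
illustrative placeholders, not the actual client code:
{code:java}
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
import org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException;

// Illustrative sketch only: the helper methods are placeholders, and the real
// code bounds these retries.
public class FallbackHandlingSketch {

  interface Connector {
    // May wrap the original failure when the no-SASL fallback also fails.
    void connect() throws IOException;
  }

  void transferLoop(Connector connector) throws IOException {
    while (true) {
      try {
        connector.connect();
        return;                      // success: leave the outer loop
      } catch (IOException e) {
        // Drive the (possibly wrapped) original exception through the same
        // instanceof checks the existing catch clause performs, because those
        // checks also decide whether the outer loop continues or breaks.
        IOException original = (e.getCause() instanceof IOException)
            ? (IOException) e.getCause() : e;
        if (original instanceof InvalidBlockTokenException) {
          refetchBlockToken();       // placeholder for existing special handling
          continue;                  // retry the outer loop
        } else if (original instanceof InvalidEncryptionKeyException) {
          clearDataEncryptionKey();  // placeholder for existing special handling
          continue;
        } else {
          break;                     // give up: fall out of the outer loop
        }
      }
    }
    throw new IOException("could not connect");  // placeholder failure path
  }

  private void refetchBlockToken() { /* placeholder */ }
  private void clearDataEncryptionKey() { /* placeholder */ }
}
{code}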
Regarding checking {{ignore.secure.ports.for.testing}}, we can't make that
change in the client. {{ignore.secure.ports.for.testing}} has always been a
server-side property, used only by the DataNode. The client may well be running
with different config files than the DataNode. If an existing
deployment defined {{ignore.secure.ports.for.testing}} in the DataNode configs,
but not the client configs, then this wouldn't work.
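Just to illustrate the problem: a client-side lookup would read the client's own
config files, so a deployment that sets the property only on the DataNode would
get the default. A minimal sketch of what that lookup would look like:
{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch only.
public class ClientLookupSketch {
  static boolean clientSideLookup() {
    // Reads the *client's* config files, not the DataNode's.
    Configuration conf = new Configuration();
    // If ignore.secure.ports.for.testing is defined only in the DataNode's
    // hdfs-site.xml, this returns the default (false), so the client would
    // still refuse to fall back even though the DataNode allows it.
    return conf.getBoolean("ignore.secure.ports.for.testing", false);
  }
}
{code}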
On the server side, I'm unclear on exactly what we'd do with a check of
{{ignore.secure.ports.for.testing}}. If we do {{} else if
(ignoreSecurePortsForTesting) {}}, then we'd still need a catch-all {{else}}
block for when both {{dfs.data.transfer.protection}} and
{{ignore.secure.ports.for.testing}} are off. I suppose the only thing we could
do there is try a no-SASL connection, which is identical to what the current
patch already does. Maybe it's actually clearer to leave the patch as is?
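For reference, the server-side branching I have in mind would look roughly like
this (method names are placeholders, not the actual SaslDataTransferServer
code), which is why the extra else-if doesn't buy us anything:
{code:java}
// Illustrative sketch only: the handshake methods are placeholders.
public class ServerNegotiationSketch {
  void negotiate(boolean saslConfigured, boolean ignoreSecurePortsForTesting) {
    if (saslConfigured) {
      doSaslHandshake();        // dfs.data.transfer.protection is set
    } else if (ignoreSecurePortsForTesting) {
      acceptWithoutSasl();      // ignore.secure.ports.for.testing is set
    } else {
      // Neither property is set.  The only remaining option is still a plain
      // no-SASL connection, identical to the branch above and to what the
      // current patch already does without the extra else-if.
      acceptWithoutSasl();
    }
  }

  private void doSaslHandshake() { /* placeholder */ }
  private void acceptWithoutSasl() { /* placeholder */ }
}
{code}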
Let me know your thoughts. Thanks again!
> Allow falling back to a non-SASL connection on DataTransferProtocol in
> several edge cases.
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-7073
> URL: https://issues.apache.org/jira/browse/HDFS-7073
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, hdfs-client, security
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Attachments: HDFS-7073.1.patch
>
>
> HDFS-2856 implemented general SASL support on DataTransferProtocol. Part of
> that work also included a fallback mode in case the remote cluster is running
> under a different configuration without SASL. I've discovered a few edge
> case configurations that this did not support:
> * Cluster is unsecured, but has block access tokens enabled. This is not
> something I've seen done in practice, but I've heard it has historically been
> allowed. The HDFS-2856 code relied on seeing an empty block access token to
> trigger fallback, and this doesn't work if the unsecured cluster actually is
> using block access tokens (see the sketch below this description).
> * The DataNode has an unpublicized testing configuration property that could
> be used to skip the privileged port check. However, the HDFS-2856 code
> still enforces the SASL requirement when the ports are not privileged, so
> existing deployments would be forced to change their configuration to
> activate SASL.
> This patch will restore the old behavior so that these edge case
> configurations will continue to work the same way.
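A rough sketch of the empty-token fallback trigger described in the first bullet
above (illustrative only; not the actual SaslDataTransferClient code):
{code:java}
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
import org.apache.hadoop.security.token.Token;

// Illustrative sketch only: HDFS-2856 keyed fallback off an empty block access
// token as the signal that the remote cluster is unsecured.
public class FallbackTriggerSketch {
  static boolean looksUnsecured(Token<BlockTokenIdentifier> accessToken) {
    // An unsecured cluster that nevertheless enables block access tokens sends
    // a non-empty token, so this check never fires and fallback never happens.
    return accessToken.getIdentifier().length == 0
        && accessToken.getPassword().length == 0;
  }
}
{code}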
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)