[ https://issues.apache.org/jira/browse/HDFS-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12894240#action_12894240 ]

Todd Lipcon commented on HDFS-1150:
-----------------------------------

bq. If your "support" for security comes with quotation marks, you've got a 
problem.

The quotes were to say that we don't need to explicitly add support in order to 
use an external mechanism. Securing a high port with SELinux is as good as the 
jsvc solution that uses a low port. On Solaris you can use user-based 
privileges to grant the hdfs user the right to bind to a low port. This is 
also at least as good as the jsvc solution and doesn't require any code 
changes to Hadoop. I would in fact argue that both of these solutions are 
*more* secure, since the Hadoop administrators don't need root on the system 
except for the initial host configuration.
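For concreteness, the Solaris approach above can be sketched roughly as 
follows (the user name hdfs is illustrative, and the exact privilege syntax 
may vary by Solaris release):

```shell
# Solaris: extend the hdfs user's default privilege set with
# net_privaddr, which permits binding to privileged (<1024) ports.
usermod -K defaultpriv=basic,net_privaddr hdfs

# A datanode started as the hdfs user can then bind a low port
# directly, with no root helper (jsvc) involved.
```

This is a one-time host-configuration step done by root; day-to-day Hadoop 
administration afterwards needs no elevated privileges.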

bq. Administrators would need to affirmatively decline this type of protection, 
perhaps with a value to the key of "No, thanks."

Hence my point above that this should be configurable, with the default set as 
you suggested -- something that advanced users (and developers) can override.

I also disagree with your general point that Hadoop should make it impossible 
to misconfigure in a way that opens security holes. Even with your solution 
you're relying on ops to provide some of the security - for example, if users 
have root on any machine in the same subnet, they can take over the IP of one 
of the datanodes by spoofing ARP replies. So long as we're taking the shortcut 
instead of actually putting SASL on the xceiver protocol, we need external 
security. I agree completely with the decision to take the workaround in the 
short term, but arguing about "security" versus real security seems strange 
given the context.

> Verify datanodes' identities to clients in secure clusters
> ----------------------------------------------------------
>
>                 Key: HDFS-1150
>                 URL: https://issues.apache.org/jira/browse/HDFS-1150
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>    Affects Versions: 0.22.0
>            Reporter: Jakob Homan
>            Assignee: Jakob Homan
>         Attachments: commons-daemon-1.0.2-src.tar.gz, 
> HDFS-1150-BF-Y20-LOG-DIRS-2.patch, HDFS-1150-BF-Y20-LOG-DIRS.patch, 
> HDFS-1150-BF1-Y20.patch, hdfs-1150-bugfix-1.1.patch, 
> hdfs-1150-bugfix-1.2.patch, hdfs-1150-bugfix-1.patch, HDFS-1150-trunk.patch, 
> HDFS-1150-Y20-BetterJsvcHandling.patch, HDFS-1150-y20.build-script.patch, 
> HDFS-1150-Y20S-ready-5.patch, HDFS-1150-Y20S-ready-6.patch, 
> HDFS-1150-Y20S-ready-7.patch, HDFS-1150-Y20S-ready-8.patch, 
> HDFS-1150-Y20S-Rough-2.patch, HDFS-1150-Y20S-Rough-3.patch, 
> HDFS-1150-Y20S-Rough-4.patch, HDFS-1150-Y20S-Rough.txt
>
>
> Currently we use block access tokens to allow datanodes to verify clients' 
> identities, however we don't have a way for clients to verify the 
> authenticity of the datanodes themselves.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
