[
https://issues.apache.org/jira/browse/HDFS-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth updated HDFS-2856:
--------------------------------
Attachment: HDFS-2856.2.patch
I'm attaching v2 of the patch. This has the following changes since last time:
# Added new tests for the balancer with SASL on DataTransferProtocol. This
helped me find a bug where the balancer passed around the incorrect datanode ID
(the source instead of the destination), so I fixed that.
# Removed TODO for inclusion of block pool ID in the SASL handshake. I already
include the token identifier, which contains the block pool ID as a component,
so it's not necessary to add block pool ID again.
# Removed the client-generated timestamp from the SASL handshake. The original
intention of the timestamp was to make it harder for a man in the middle to
replay the message. The server side would have checked elapsed time since the
timestamp and rejected the request if it was beyond a threshold. However, the
SASL DIGEST-MD5 handshake already protects against this, because the server
initiates a random challenge at the start of each new connection. Because the
challenge is random, it is overwhelmingly likely to be unique across connection
attempts, so a replayed message would be rejected. The timestamp wouldn't
provide any additional benefit.
# Removed datanode ID from the SASL handshake. This had been intended to
protect against a man in the middle rerouting a message to a different
datanode. As described above, SASL DIGEST-MD5 already protects against this,
because the server issues a different challenge on each connection attempt.
The datanode ID wouldn't provide any additional benefit.
# Added code in {{DataNode#checkSecureConfig}} to check that when SASL is used
on DataTransferProtocol, SSL must also be used on HTTP. Plain HTTP wouldn't be
safe, because the client could write a delegation token query parameter onto
the socket without any authentication of the server. By requiring SSL, we
enforce that the server is authenticated before sending the delegation token.
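As an illustration of the replay argument above, here is a minimal standalone sketch using the plain JDK SASL API (not HDFS code): each {{SaslServer}} instance for DIGEST-MD5 emits an initial challenge containing a freshly generated random nonce, so challenges differ across connections.

```java
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslServer;

public class DigestMd5ChallengeDemo {

    // Simulates one new connection: the server side of a DIGEST-MD5 handshake
    // emits its initial challenge, which embeds a freshly generated random nonce.
    public static byte[] freshChallenge() throws Exception {
        CallbackHandler noOp = callbacks -> { /* no callbacks fire for the initial challenge */ };
        SaslServer server = Sasl.createSaslServer(
            "DIGEST-MD5", "hdfs", "localhost", null, noOp);
        // An empty client response asks the server for its initial challenge.
        return server.evaluateResponse(new byte[0]);
    }

    public static void main(String[] args) throws Exception {
        byte[] c1 = freshChallenge();
        byte[] c2 = freshChallenge();
        // Different connections get different nonces, so a handshake message
        // captured on one connection fails verification if replayed on another.
        System.out.println(java.util.Arrays.equals(c1, c2)); // prints "false"
    }
}
```

The protocol name "hdfs" and server name "localhost" are illustrative; the point is only that two server instances never share a nonce.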
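The HTTP policy check could look roughly like the following; the class, method, and exception choices here are illustrative assumptions, not the actual {{DataNode#checkSecureConfig}} code:

```java
// Illustrative sketch of the startup check described above: if SASL is
// enabled on DataTransferProtocol, HTTPS-only must be enabled too.
public class SecureConfigCheck {

    public static void checkSecureDataTransferConfig(
            boolean saslOnDataTransfer, boolean httpsOnly) {
        if (saslOnDataTransfer && !httpsOnly) {
            // Plain HTTP would let a client send its delegation token as a
            // query parameter to a server it has never authenticated.
            throw new IllegalStateException(
                "SASL on DataTransferProtocol requires HTTPS-only HTTP policy");
        }
    }

    public static void main(String[] args) {
        checkSecureDataTransferConfig(true, true);   // accepted
        checkSecureDataTransferConfig(false, false); // accepted: SASL not in use
        System.out.println("config ok");
    }
}
```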
> Fix block protocol so that Datanodes don't require root or jsvc
> ---------------------------------------------------------------
>
> Key: HDFS-2856
> URL: https://issues.apache.org/jira/browse/HDFS-2856
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode, security
> Affects Versions: 3.0.0, 2.4.0
> Reporter: Owen O'Malley
> Assignee: Chris Nauroth
> Attachments: Datanode-Security-Design.pdf,
> Datanode-Security-Design.pdf, Datanode-Security-Design.pdf,
> HDFS-2856.1.patch, HDFS-2856.2.patch, HDFS-2856.prototype.patch
>
>
> Since we send the block tokens unencrypted to the datanode, we currently
> start the datanode as root using jsvc and get a secure (< 1024) port.
> If we have the datanode generate a nonce and send it on the connection, and
> the client sends an HMAC of the nonce back instead of the block token, it
> won't reveal any secrets. Thus, we wouldn't require a secure port and would
> not require root or jsvc.
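The nonce/HMAC exchange proposed in the description can be sketched with the JDK's {{javax.crypto.Mac}}; the algorithm, key material, and nonce size here are illustrative assumptions, not the HDFS wire format:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class NonceHmacSketch {

    // Datanode side: generate a fresh random nonce for each connection.
    public static byte[] newNonce() {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);
        return nonce;
    }

    // Client side: prove knowledge of the shared secret by returning an HMAC
    // of the nonce, without ever putting the secret itself on the wire.
    public static byte[] hmac(byte[] secret, byte[] nonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        return mac.doFinal(nonce);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "shared-block-token-secret".getBytes(StandardCharsets.UTF_8);
        byte[] nonce = newNonce();
        byte[] fromClient = hmac(secret, nonce);
        // Datanode recomputes the HMAC and compares; an eavesdropper sees only
        // the nonce and the HMAC, neither of which reveals the secret.
        System.out.println(Arrays.equals(fromClient, hmac(secret, nonce))); // prints "true"
    }
}
```

Because the nonce is fresh per connection, a captured HMAC is useless for a later connection, which is the same property that removes the need for a privileged port.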
--
This message was sent by Atlassian JIRA
(v6.2#6252)