[
https://issues.apache.org/jira/browse/HADOOP-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Douglas updated HADOOP-2239:
----------------------------------
Attachment: 2239-0.patch
This roughly follows what Doug outlined above. I added a SunJsseListener to the
namenode and datanode StatusHttpServer, initialized iff the keystore location
is specified. The keystore properties, including passwords, live in a separate
resource whose name is given in the config. I added HsftpFileSystem to handle
the client side, and the NameNode servlet now redirects to an SSL-capable
datanode port, which is assumed to be static (avoiding a protocol version bump).
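A minimal sketch of the conditional listener setup described above, assuming
Jetty's SunJsseListener API; the port number, the system-property names, and
the standalone main() are placeholders for illustration, not the actual patch:

{code}
import org.mortbay.http.HttpServer;
import org.mortbay.http.SunJsseListener;

public class SslListenerSketch {
  public static void main(String[] args) throws Exception {
    HttpServer webServer = new HttpServer();

    // Only add the SSL listener when a keystore location has been configured
    // (read here from a system property as a stand-in for the config lookup).
    String keystore = System.getProperty("hsftp.keystore.location");
    if (keystore != null) {
      SunJsseListener sslListener = new SunJsseListener();
      sslListener.setPort(50475);  // placeholder for the static SSL-capable port
      sslListener.setKeystore(keystore);
      sslListener.setPassword(System.getProperty("hsftp.keystore.password"));
      sslListener.setKeyPassword(System.getProperty("hsftp.key.password"));
      webServer.addListener(sslListener);
    }
    webServer.start();
  }
}
{code}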
> Security: Need to be able to encrypt Hadoop socket connections
> ---------------------------------------------------------------
>
> Key: HADOOP-2239
> URL: https://issues.apache.org/jira/browse/HADOOP-2239
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: Allen Wittenauer
> Attachments: 2239-0.patch
>
>
> We need to be able to use hadoop over hostile networks, both internally and
> externally to the enterprise. While authentication prevents unauthorized
> access, encryption should be used to prevent such things as packet snooping
> across the wire. This means that hadoop client connections, distcp, etc.,
> would use something such as SSL to protect the TCP/IP packets.
> Post-Kerberos, it would be useful to use something similar to NFS's krb5p
> option.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.