[ https://issues.apache.org/jira/browse/HADOOP-2239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12569042#action_12569042 ]

Doug Cutting commented on HADOOP-2239:
--------------------------------------

> it would be good if in the future we could encrypt dfs put/get [ ... ]

Hftp: URIs can be used in most places that hdfs: URIs can be used, including 
put/get.  The big limitations at present are that hftp does not support:

# random access -- hftp mapreduce inputs cannot be split; and
# locality hints -- hftp mapreduce inputs are not localized.
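For example, a get and a distcp over hftp might look like the following (hostnames and paths are hypothetical; hftp goes through the namenode's HTTP port, 50070 by default):

```shell
# Read a file out of a cluster over HTTP via the hftp filesystem
# (namenode.example.com and the path are placeholders):
hadoop fs -get hftp://namenode.example.com:50070/user/alice/data.txt ./data.txt

# distcp can likewise read its source over hftp while writing to hdfs:
hadoop distcp hftp://src-nn.example.com:50070/user/alice/dir \
              hdfs://dst-nn.example.com:8020/user/alice/dir
```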

Currently, to use an hftp filesystem as a mapreduce input, one would have to 
set mapred.min.split.size=0xffffffffffffffff; other than that, it should 
work.
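For reference, the equivalent setting as a job configuration fragment might look like this (a sketch; the decimal value below is Long.MAX_VALUE, expressing the same "never split" intent as the hex value above):

```xml
<!-- Force splits at least this large, so hftp inputs are never split. -->
<property>
  <name>mapred.min.split.size</name>
  <value>9223372036854775807</value> <!-- Long.MAX_VALUE -->
</property>
```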

So I think making the hftp protocol secure will permit the uses you have in 
mind, no?

> Security:  Need to be able to encrypt Hadoop socket connections
> ---------------------------------------------------------------
>
>                 Key: HADOOP-2239
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2239
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Allen Wittenauer
>
> We need to be able to use hadoop over hostile networks, both internally and 
> externally to the enterprise.  While authentication prevents unauthorized 
> access, encryption should be used to prevent such things as packet snooping 
> across the wire.  This means that hadoop client connections, distcp, etc, 
> would use something such as SSL to protect the TCP/IP packets.  
> Post-Kerberos, it would be useful to use something similar to NFS's krb5p 
> option.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.