[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10150:
----------------------------------------

    Attachment: HDFSDataatRestEncryptionAttackVectors.pdf
                HDFSDataatRestEncryptionProposal.pdf
                HDFSDataAtRestEncryptionAlternatives.pdf

[Cross-posting with HDFS-6134; HDFS-6134 has been closed as a duplicate, and 
the discussion will continue here]

Larry, Steve, Nicholas, thanks for your comments.

Todd Lipcon and I had an offline discussion with Andrew Purtell, Yi Liu and 
Avik Dey to see if we could combine HADOOP-10150 and HDFS-6134 into one 
proposal that supports both encryption for multiple filesystems and 
transparent encryption for HDFS.

Also, following Steve’s suggestion, I’ve put together an Attack Vectors Matrix 
covering all approaches.

I think both documents, the proposal and the attack vectors, address most, if 
not all, of the questions/concerns raised in this JIRA.

Attaching 3 documents:

  * Alternatives: the original doc posted in HDFS-6134
  * Proposal: the combined proposal
  * Attack Vectors: a matrix with the different attacks for the alternatives 
and the proposal


> Hadoop cryptographic file system
> --------------------------------
>
>                 Key: HADOOP-10150
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10150
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: security
>    Affects Versions: 3.0.0
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>              Labels: rhino
>             Fix For: 3.0.0
>
>         Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system-V2.docx, HADOOP cryptographic file system.pdf, 
> HDFSDataAtRestEncryptionAlternatives.pdf, 
> HDFSDataatRestEncryptionAttackVectors.pdf, 
> HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based 
> on INode feature.patch
>
>
> There is an increasing need to secure data when Hadoop customers use various 
> upper-layer applications, such as MapReduce, Hive, Pig, HBase and so on.
> HADOOP CFS (Hadoop Cryptographic File System) secures data by using Hadoop’s 
> “FilterFileSystem” to decorate DFS or other file systems, and is transparent 
> to upper-layer applications. It is configurable, scalable and fast.
> High level requirements:
> 1.    Transparent to upper-layer applications; no modification to them is 
> required.
> 2.    “Seek” and “PositionedReadable” are supported on CFS input streams if 
> the wrapped file system supports them.
> 3.    Very high encryption and decryption performance, so that they do not 
> become a bottleneck.
> 4.    Can decorate HDFS and all other file systems in Hadoop without 
> modifying the existing file system structure (e.g., the namenode and 
> datanode structures when the wrapped file system is HDFS).
> 5.    Admins can configure encryption policies, such as which directories 
> will be encrypted.
> 6.    A robust key management framework.
> 7.    Pread and append operations are supported if the wrapped file system 
> supports them.
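A note on requirement 2: seek and pread over an encrypted stream are practical when the cipher allows random access to the keystream, as AES-CTR does (the counter for any byte offset can be computed directly). The sketch below is not from the attached proposal; it is a self-contained, hypothetical demo using only the JDK (`javax.crypto`) of how a CFS-style input stream could decrypt starting at an arbitrary offset without reading the preceding bytes. The class and method names are illustrative, not Hadoop APIs.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CtrSeekDemo {
    static final int BLOCK = 16; // AES block size in bytes

    // Compute the CTR counter block corresponding to a byte offset:
    // base IV plus the number of whole blocks before the offset.
    static byte[] ivAtOffset(byte[] baseIv, long offset) {
        BigInteger ctr = new BigInteger(1, baseIv)
                .add(BigInteger.valueOf(offset / BLOCK));
        byte[] raw = ctr.toByteArray();
        byte[] iv = new byte[BLOCK];
        int copy = Math.min(raw.length, BLOCK);
        System.arraycopy(raw, raw.length - copy, iv, BLOCK - copy, copy);
        return iv;
    }

    // Decrypt ciphertext that begins at `offset` in the stream, without
    // touching any earlier ciphertext (this is what makes seek/pread cheap).
    static byte[] decryptAt(byte[] key, byte[] baseIv, byte[] ct, long offset)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
               new IvParameterSpec(ivAtOffset(baseIv, offset)));
        // Discard keystream up to the intra-block position of the offset.
        c.update(new byte[(int) (offset % BLOCK)]);
        return c.doFinal(ct);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
        byte[] iv = new byte[BLOCK]; // all-zero base IV, demo only
        byte[] plain = "the quick brown fox jumps over the lazy dog again"
                .getBytes(StandardCharsets.UTF_8);

        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                 new IvParameterSpec(iv));
        byte[] ct = enc.doFinal(plain);

        // "Seek" to byte 20 and decrypt only the tail of the file.
        byte[] tail = decryptAt(key, iv,
                Arrays.copyOfRange(ct, 20, ct.length), 20);
        System.out.println(new String(tail, StandardCharsets.UTF_8));
    }
}
```

Because each block's keystream depends only on the counter, a wrapping stream can honor seek() and read(position, ...) with one cipher re-init per call, which is why a block cipher mode like CBC (where decryption of block N needs block N-1) would be a poor fit here.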



--
This message was sent by Atlassian JIRA
(v6.2#6252)
