[ https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15843591#comment-15843591 ]

Steve Moist commented on HADOOP-13075:
--------------------------------------

I ran some tests using the minicluster against the default S3 endpoint, with 
the buckets and the encryption keys both in Oregon (us-west-2).  One thing to 
note when using SSE-C: nothing in the AWS S3 GUI shows that a file is 
encrypted.
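
For context, a minimal sketch of how the encryption settings get wired up for 
these tests. {{fs.s3a.server-side-encryption-algorithm}} is the existing 
property from HADOOP-10568; the key property name here is an assumption based 
on this patch and may still change, and the bucket name and key ARN are made up:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SseConfigDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Existing property from HADOOP-10568; today only AES256 (SSE-S3) is accepted.
    conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
    // Proposed companion property: a KMS key id/ARN for SSE-KMS, or the
    // base64-encoded AES-256 key for SSE-C. (Name is an assumption.)
    conf.set("fs.s3a.server-side-encryption.key",
        "arn:aws:kms:us-west-2:111122223333:key/example");

    FileSystem fs = FileSystem.get(URI.create("s3a://test-bucket/"), conf);
    fs.copyFromLocalFile(new Path("/tmp/FileA"), new Path("/FileA"));
  }
}
{code}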

> we should be able to verify that data written with one key cannot be 
> parsed if a different fs + key is used to read it.
I uploaded FileA using the minicluster under aws:kms key1, then restarted the 
minicluster with aws:kms key2 and did a fs -mv.  The move succeeded, producing 
FileA' encrypted under aws:kms key2.  With SSE-C enabled, the same operation 
throws an error instead, since the read half of the rename needs the original 
key.
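
The SDK-level behaviour behind that SSE-C error, for anyone reproducing it: 
every read must present the exact key used at write time, and a rename's 
underlying COPY has to read the source.  A sketch against the v1 aws-java-sdk 
per [7]; the bucket and file names are made up:

{code:java}
import java.io.File;
import javax.crypto.KeyGenerator;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSECustomerKey;

public class SseCKeyMismatchDemo {
  public static void main(String[] args) throws Exception {
    KeyGenerator gen = KeyGenerator.getInstance("AES");
    gen.init(256);
    SSECustomerKey key1 = new SSECustomerKey(gen.generateKey());
    SSECustomerKey key2 = new SSECustomerKey(gen.generateKey());

    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // Write FileA under key1.
    s3.putObject(new PutObjectRequest("test-bucket", "FileA", new File("/tmp/FileA"))
        .withSSECustomerKey(key1));
    // Reading with key2 -- which is what the COPY half of a rename does after
    // the FS is restarted with a different key -- fails with a 403 from S3.
    s3.getObject(new GetObjectRequest("test-bucket", "FileA")
        .withSSECustomerKey(key2));  // throws AmazonS3Exception
  }
}
{code}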

> we should see what happens if you try to read unencrypted data with an FS 
> with encryption enabled
I uploaded an unencrypted FileA through the AWS S3 GUI, then moved it to FileA' 
with SSE-KMS enabled on the filesystem.  FileA' came out encrypted under the 
configured AWS KMS key, since the rename rewrites the object with the 
filesystem's encryption settings.  With SSE-C enabled, the move throws an 
error.
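
That matches how the COPY underneath a rename behaves: the destination object 
is written with whatever encryption parameters the client supplies, regardless 
of how the source was stored.  A v1-SDK sketch per [6]; the names and key ARN 
are made up:

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;

public class ReencryptOnCopyDemo {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // "FileA" was uploaded unencrypted through the console; the copy rewrites
    // it encrypted under whatever KMS key the client supplies.
    s3.copyObject(new CopyObjectRequest("test-bucket", "FileA",
                                        "test-bucket", "FileA-prime")
        .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(
            "arn:aws:kms:us-west-2:111122223333:key/example")));
  }
}
{code}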

> maybe: if a bucket is set up to require encryption, then unencrypted data 
> cannot be written, encrypted can. This implies that the tester will need a 
> special bucket for this test & declare it in the configs.
The user can still upload data through the GUI or the AWS CLI and not have it 
be encrypted.  Based on the second test above, any new file written through 
s3a will be encrypted with SSE-S3/SSE-KMS, and any copied or moved file 
becomes encrypted as a side effect of the copy.  Enforcing this from the 
filesystem side seems hard; a bucket policy can deny unencrypted PUTs, as 
sketched below.
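
If a test really does need a bucket that rejects unencrypted writes, the usual 
approach is a bucket policy that denies PUTs lacking the encryption header.  A 
v1-SDK sketch; the policy pattern comes from the AWS docs, not from this patch, 
and the bucket name is made up:

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class RequireSsePolicyDemo {
  public static void main(String[] args) {
    // Deny any PutObject that does not carry the x-amz-server-side-encryption
    // header. Note SSE-C uploads use different headers, so this pattern only
    // enforces SSE-S3/SSE-KMS.
    String policy =
        "{\n"
      + "  \"Version\": \"2012-10-17\",\n"
      + "  \"Statement\": [{\n"
      + "    \"Effect\": \"Deny\",\n"
      + "    \"Principal\": \"*\",\n"
      + "    \"Action\": \"s3:PutObject\",\n"
      + "    \"Resource\": \"arn:aws:s3:::test-bucket/*\",\n"
      + "    \"Condition\": {\"Null\": {\"s3:x-amz-server-side-encryption\": \"true\"}}\n"
      + "  }]\n"
      + "}";
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    s3.setBucketPolicy("test-bucket", policy);
  }
}
{code}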

I ran the full s3a suite with aws:kms enabled and it passed against the 
default S3 endpoint in the Oregon region.
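
For comparison, the SSE-S3 opt-in we already have (HADOOP-10568; [5] in the 
description below) really is a one-liner on the request metadata, which is why 
extending s3a to the other two modes looks tractable.  A v1-SDK sketch, with 
made-up names:

{code:java}
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class SseS3Demo {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    ObjectMetadata md = new ObjectMetadata();
    md.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);  // SSE-S3
    s3.putObject(new PutObjectRequest("test-bucket", "FileA", new File("/tmp/FileA"))
        .withMetadata(md));
  }
}
{code}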

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---------------------------------------------------
>
>                 Key: HADOOP-13075
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13075
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Andrew Olson
>            Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1]:
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html


