[ https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832317#comment-15832317 ]

Aaron Fabbri commented on HADOOP-13876:
---------------------------------------

Done. Feel free to tweak the descriptions, etc.

> S3Guard: better support for multi-bucket access
> -----------------------------------------------
>
>                 Key: HADOOP-13876
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13876
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: HADOOP-13345
>            Reporter: Aaron Fabbri
>            Assignee: Aaron Fabbri
>         Attachments: HADOOP-13876-HADOOP-13345.000.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use the name of each S3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> However, if a user sets {{fs.s3a.s3guard.ddb.table}} and accesses multiple 
> buckets, DynamoDBMetadataStore does not properly differentiate between paths 
> belonging to different buckets.  For example, it treats 
> s3a://bucket-a/path1 and s3a://bucket-b/path1 as the same path.
> Goals for this JIRA:
> - Allow for a "one DynamoDB table per cluster" configuration.  If a user 
> accesses multiple buckets with that single table, it should work correctly.
> - Explain which credentials are used for DynamoDB.  Currently each 
> S3AFileSystem has its own DynamoDBMetadataStore, which uses the credentials 
> from the S3A filesystem.  We at least need to document this behavior.
> - Document any other limitations etc. in the s3guard.md site doc.
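
A minimal configuration sketch for the "one DynamoDB table per cluster" setup described above. Only {{fs.s3a.s3guard.ddb.table}} comes from the description; the {{fs.s3a.metadatastore.impl}} property, the store class name, and the table name "cluster-s3guard" are illustrative assumptions, not confirmed against the HADOOP-13345 branch:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class S3GuardSharedTableExample {
  // Sketch only: share one explicitly named DynamoDB table across every
  // bucket the cluster touches, instead of one auto-created table per bucket.
  public static Configuration sharedTableConf() {
    Configuration conf = new Configuration();
    // Enable the DynamoDB metadata store (property and class name assumed).
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
    // One table name for all s3a:// buckets (illustrative name).
    conf.set("fs.s3a.s3guard.ddb.table", "cluster-s3guard");
    return conf;
  }
}
{code}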
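A hypothetical illustration of the kind of disambiguation the description asks for: qualifying each item key with its bucket so that identical paths in different buckets map to distinct rows in a shared table. The helper and key layout are made up for illustration and are not the actual DynamoDBMetadataStore schema:

{code:java}
import java.net.URI;

public class BucketQualifiedKey {
  // Illustration only: bucket-qualified keys keep s3a://bucket-a/path1 and
  // s3a://bucket-b/path1 distinct when both land in one shared DynamoDB table.
  static String pathToKey(URI s3aUri) {
    // s3a://bucket-a/path1 -> "/bucket-a/path1"
    // s3a://bucket-b/path1 -> "/bucket-b/path1"
    return "/" + s3aUri.getHost() + s3aUri.getPath();
  }

  public static void main(String[] args) {
    System.out.println(pathToKey(URI.create("s3a://bucket-a/path1")));
    System.out.println(pathToKey(URI.create("s3a://bucket-b/path1")));
  }
}
{code}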


