[ https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aaron Fabbri updated HADOOP-13876:
----------------------------------
Attachment: HADOOP-13876-HADOOP-13345.001.patch
Attaching the v001 patch, which fixes DynamoDBMetadataStore so that a single
DynamoDB table ({{fs.s3a.s3guard.ddb.table}}) can be used when accessing multiple
buckets.
- All paths given to DynamoDBMetadataStore must be absolute and contain a host
(bucket) component. S3AFileSystem already does this, but some DDB tests had
to be fixed.
- In the DynamoDB table, the parent key now includes the bucket name as the
first component of the path (see the sketch after this list).
- Remove assumptions in DynamoDBMetadataStore about only being used for one
bucket (e.g. the uses of the s3uri member are deleted).
- Fix a bug where use of the new initialize(Configuration) method would cause
errors due to the missing s3uri member. May also fix an issue around accidental
discard of the return value from removeSchemeAndAuthority(Path).
- Update the S3Guard site docs to include information on setting the DDB endpoint.
- Add a new test case, MetadataStoreTestBase#testMultiBucketPaths(). Also remove
some dead code I ran into there.
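To illustrate the new key layout, here is a minimal sketch (not code from the
patch; the class and helper names are hypothetical, and the exact key format
may differ) of how a bucket-qualified key can be derived from an s3a URI:
{code:java}
import java.net.URI;

public class BucketQualifiedKeySketch {
  // Hypothetical helper: build a DynamoDB key for an s3a path with the
  // bucket name as the first path component, so that identical paths in
  // different buckets can no longer collide.
  static String toKey(URI s3aUri) {
    String bucket = s3aUri.getHost(); // e.g. "bucket-a"
    String path = s3aUri.getPath();   // absolute, e.g. "/path1"
    return "/" + bucket + path;       // e.g. "/bucket-a/path1"
  }

  public static void main(String[] args) {
    // Previously both of these mapped to the same key ("/path1");
    // with the bucket prefix they are distinct.
    System.out.println(toKey(URI.create("s3a://bucket-a/path1"))); // /bucket-a/path1
    System.out.println(toKey(URI.create("s3a://bucket-b/path1"))); // /bucket-b/path1
  }
}
{code}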
Testing: Ran all s3a/s3guard integration and scale tests against US West 2
(Oregon).
{quote}
Tests in error:
ITestS3AAWSCredentialsProvider.testAnonymousProvider:132 » AWSServiceIO initTa...
{quote}
AmazonDynamoDBException: Request is missing Authentication Token
{quote}
ITestS3ACredentialsInURL.testInstantiateFromURL:86 » InterruptedIO initTable: ...
{quote}
No AWS Credentials provided
{quote}
ITestS3AFileSystemContract>FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed:669->FileSystemContractBaseTest.rename:525 » AWSServiceIO
{quote}
AmazonDynamoDBException: Provided list of item keys contains duplicates
{quote}
Tests run: 361, Failures: 0, Errors: 3, Skipped: 70
{quote}
Related: we need some follow-up work on the DDB region and endpoint. It seems
we should mirror what the AWS SDK does here: give users an easy option to set a
region, and also expose the ability to set an endpoint, but document the latter
as being for advanced users (the usual way to set up a DDB client is just to
specify the region). Also, I think [~mackrorysd] mentioned his DDB tables were
getting set up in the wrong region?
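For reference, a minimal sketch of the two client-setup styles being mirrored
(AWS SDK for Java v1; the region and endpoint values are illustrative):
{code:java}
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DdbClientSketch {
  public static void main(String[] args) {
    // Usual case: specify a region and let the SDK resolve the endpoint.
    AmazonDynamoDB byRegion = AmazonDynamoDBClientBuilder.standard()
        .withRegion("us-west-2")
        .build();

    // Advanced case: set an explicit endpoint plus its signing region.
    AmazonDynamoDB byEndpoint = AmazonDynamoDBClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "https://dynamodb.us-west-2.amazonaws.com", "us-west-2"))
        .build();
  }
}
{code}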
> S3Guard: better support for multi-bucket access
> -----------------------------------------------
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Aaron Fabbri
> Assignee: Aaron Fabbri
> Attachments: HADOOP-13876-HADOOP-13345.000.patch,
> HADOOP-13876-HADOOP-13345.001.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use the name of each S3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
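> A minimal sketch of option 2 (the table name is illustrative; {{Configuration}}
> is the standard Hadoop class):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class SharedTableConfigSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     // One shared S3Guard table for every bucket this cluster touches.
>     conf.set("fs.s3a.s3guard.ddb.table", "my-shared-s3guard-table");
>   }
> }
> {code}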
> However, if a user sets {{fs.s3a.s3guard.ddb.table}} and accesses multiple
> buckets, DynamoDBMetadataStore does not properly differentiate between paths
> belonging to different buckets. For example, it would treat
> s3a://bucket-a/path1 and s3a://bucket-b/path1 as the same path.
> Goals for this JIRA:
> - Allow for a "one DynamoDB table per cluster" configuration. If a user
> accesses multiple buckets with that single table, it should work correctly.
> - Explain which credentials are used for DynamoDB. Currently each
> S3AFileSystem has its own DynamoDBMetadataStore, which uses the credentials
> from the S3A fs. We at least need to document this behavior.
> - Document any other limitations etc. in the s3guard.md site doc.