[
https://issues.apache.org/jira/browse/HADOOP-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Mackrory updated HADOOP-14027:
-----------------------------------
Attachment: HADOOP-14027-HADOOP-13345.001.patch
Attaching a patch that allows a configured region to override the S3 region,
even when the metadata store is initialized with an S3 bucket.
Several tests started failing after this change. Some have been failing for a
while whenever the configured table name differs from the bucket name (so only
loosely related to this change, in that it matters more when testing this
specific kind of setup, but a pre-existing issue). For others I can't explain
why they suddenly fail after my change (though I've confirmed they reliably
passed before), but the fixes required to make them pass aren't surprising at
all: they all look like what the tests should have been doing all along.
Ran all tests with and without S3Guard, split between a bucket in us-west-1 and
tables in us-west-2.
> Implicitly creating DynamoDB table ignores endpoint config
> ----------------------------------------------------------
>
> Key: HADOOP-14027
> URL: https://issues.apache.org/jira/browse/HADOOP-14027
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
> Attachments: HADOOP-14027-HADOOP-13345.001.patch
>
>
> When you run the 'bin/hadoop s3a init' command, it correctly uses the
> endpoint provided on the command line (if provided), otherwise uses the
> endpoint in the config (if provided), and failing that defaults to the
> same region as the bucket.
> However, if you just set fs.s3a.s3guard.ddb.table.create to true and create a
> directory for a new bucket / table, it will always use the same region as the
> bucket, even if another endpoint is configured.
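For reference, a minimal sketch of the kind of setup the description is talking about: table auto-creation enabled alongside an explicitly configured DynamoDB endpoint. The endpoint value shown is an illustrative assumption matching the us-west-2 tables mentioned in the comment above; adjust property names to whatever the S3Guard (HADOOP-13345) branch actually uses.

```xml
<!-- Hypothetical core-site.xml fragment (property names assumed from the
     S3Guard feature branch; verify against your Hadoop version). -->
<configuration>
  <!-- Implicitly create the DynamoDB table on first use. -->
  <property>
    <name>fs.s3a.s3guard.ddb.table.create</name>
    <value>true</value>
  </property>
  <!-- Endpoint that, per this issue, should override the bucket's region
       when the table is created implicitly. -->
  <property>
    <name>fs.s3a.s3guard.ddb.endpoint</name>
    <value>dynamodb.us-west-2.amazonaws.com</value>
  </property>
</configuration>
```

With this config and a bucket in a different region (e.g. us-west-1), the bug described above is that the implicitly created table lands in the bucket's region rather than the configured endpoint's.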
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)