[
https://issues.apache.org/jira/browse/HADOOP-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15351412#comment-15351412
]
Steve Loughran commented on HADOOP-13324:
-----------------------------------------
Note that tests against data stored in US-east fail once you set the endpoint
to Frankfurt. That's because the tests try to set up multipart purging; the
Frankfurt endpoint returns 301 responses, which then trigger failures.
{code}
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
{code}
{code}
testLazySeekEnabled(org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance)  Time elapsed: 0.6 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSS3IOException: purging multipart uploads on
landsat-pds: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you
are attempting to access must be addressed using the specified endpoint. Please
send all future requests to this endpoint. (Service: Amazon S3; Status Code:
301; Error Code: PermanentRedirect; Request ID: 5B7A5D18BE596E4B), S3 Extended
Request ID:
uE4pbbmpxi8Nh7rycS6GfIEi9UH/SWmJfGtM9IeKvRyBPZp/hN7DbPyz272eynz3PEMM2azlhjE=:
The bucket you are attempting to access must be addressed using the specified
endpoint. Please send all future requests to this endpoint. (Service: Amazon
S3; Status Code: 301; Error Code: PermanentRedirect; Request ID:
5B7A5D18BE596E4B)
	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
	at com.amazonaws.services.s3.AmazonS3Client.listMultipartUploads(AmazonS3Client.java:2796)
	at com.amazonaws.services.s3.transfer.TransferManager.abortMultipartUploads(TransferManager.java:1217)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initMultipartUploads(S3AFileSystem.java:454)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:289)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2715)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:96)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2749)
	at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2737)
	at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:430)
	at org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.bindS3aFS(TestS3AInputStreamPerformance.java:93)
	at org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.openFS(TestS3AInputStreamPerformance.java:81)
{code}
Even if multipart setup were skipped, no doubt the next list/get/head call would
trigger the same reaction. And if the client did process the 301, it would kill
performance.
Best to make the tests which work with the public datasets configurable as to
which endpoint to use, and to have them default to us-east.
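A sketch of what such an override could look like (the property name below is hypothetical, chosen for illustration only; the landsat-pds bucket lives in us-east, so that endpoint is the natural default):

{code}
<!-- hypothetical property for pointing the scale tests at the
     endpoint of the public dataset; defaults to us-east -->
<property>
  <name>fs.s3a.scale.test.endpoint</name>
  <value>s3.amazonaws.com</value>
</property>
{code}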
> s3a doesn't authenticate with S3 frankfurt (or other V4 auth only endpoints)
> ----------------------------------------------------------------------------
>
> Key: HADOOP-13324
> URL: https://issues.apache.org/jira/browse/HADOOP-13324
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
>
> S3A doesn't authenticate with S3 Frankfurt, an installation which only
> supports the V4 signing API. There are some JVM options which should set
> this, but even they don't appear to be enough. It appears that we have to
> allow the s3a client to change the endpoint with which it authenticates from
> a generic "AWS S3" to a Frankfurt-specific one.
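For reference, a sketch of the configuration that would plausibly be involved: S3A exposes a signing-algorithm override alongside the endpoint setting, though whether this combination is sufficient on 2.8.0 is exactly what this issue is probing.

{code}
<!-- sketch only: force V4 signing together with the region-specific
     endpoint; not confirmed to be a complete fix for this issue -->
<property>
  <name>fs.s3a.signing-algorithm</name>
  <value>AWSS3V4SignerType</value>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
{code}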
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)