[
https://issues.apache.org/jira/browse/HADOOP-13139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15327482#comment-15327482
]
Steve Loughran commented on HADOOP-13139:
-----------------------------------------
-1 to patch 003
the problem I'm seeing appears as soon as I run the tests in parallel:
{code}
mvn test -Dtest=TestS3A\* -Pparallel-tests -DtestsThreadCount=2
...
{code}
I don't know what's happening, but if it's happening here, it's presumably
happening in trunk too; it's just that nobody is running the parallel tests there.
While it's easy to dismiss this as one of the "not so good to run in parallel"
tests, we need to understand why this happens. I suspect the parallel test runs
are initializing the same object store mid-test, and as multipart purge is
enabled with a purge age of 0, that's enough to break things: each
initialization aborts every outstanding multipart upload in the shared bucket,
including those of tests still in flight.
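The suspected interference can be sketched with a tiny in-memory model. This
is purely illustrative: {{MockUploads}}, {{initiate}}, {{purgeAll}} and
{{completeUpload}} are hypothetical names, not Hadoop or AWS SDK APIs; the
point is only that a second JVM purging with age 0 invalidates the first JVM's
outstanding upload ID.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical in-memory model of a bucket's outstanding multipart uploads,
// to illustrate the suspected cross-JVM interference. Not real Hadoop code.
class MockUploads {
    private final Map<String, String> active = new HashMap<>();

    // First test JVM starts a multipart upload and gets an upload ID back.
    String initiate(String key) {
        String uploadId = UUID.randomUUID().toString();
        active.put(uploadId, key);
        return uploadId;
    }

    // What a second test JVM effectively does on filesystem init when
    // multipart purge is enabled with purge age 0: abort every outstanding
    // upload in the shared bucket, including the other JVM's.
    void purgeAll() {
        active.clear();
    }

    // Completing against a purged ID fails the way S3 does:
    // 404 NoSuchUpload ("the upload may have been aborted or completed").
    boolean completeUpload(String uploadId) {
        return active.remove(uploadId) != null;
    }
}
```

Two test processes sharing one bucket are enough to reproduce the shape of the
failure, even though each process is individually correct.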
It's also notable that the failure we get here is a 404, which our code remaps
to an FNFE. Is that the right reaction?
{code}
Running org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 18.818 sec <<< FAILURE! - in org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool
testRegularMultiPartUpload(org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool)  Time elapsed: 7.491 sec <<< ERROR!
org.apache.hadoop.fs.s3a.AWSClientIOException: saving output on tests3a/1a868efc-3a49-4407-9b36-9265743b5db6: com.amazonaws.AmazonClientException: Unable to complete multi-part upload. Individual part upload failed : The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4): Unable to complete multi-part upload. Individual part upload failed : The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:84)
	at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:123)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
	at org.apache.hadoop.fs.contract.ContractTestUtils.generateTestFile(ContractTestUtils.java:864)
	at org.apache.hadoop.fs.contract.ContractTestUtils.createAndVerifyFile(ContractTestUtils.java:892)
	at org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool.testRegularMultiPartUpload(TestS3ABlockingThreadPool.java:68)
Caused by: com.amazonaws.AmazonClientException: Unable to complete multi-part upload. Individual part upload failed : The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4)
	at com.amazonaws.services.s3.transfer.internal.CompleteMultipartUpload.collectPartETags(CompleteMultipartUpload.java:122)
	at com.amazonaws.services.s3.transfer.internal.CompleteMultipartUpload.call(CompleteMultipartUpload.java:85)
	at com.amazonaws.services.s3.transfer.internal.CompleteMultipartUpload.call(CompleteMultipartUpload.java:38)
	at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ED572BAB993A2DC4)
	at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
	at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
	at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
	at com.amazonaws.services.s3.transfer.internal.UploadPartCallable.call(UploadPartCallable.java:33)
	at com.amazonaws.services.s3.transfer.internal.UploadPartCallable.call(UploadPartCallable.java:23)
	at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
testFastMultiPartUpload(org.apache.hadoop.fs.s3a.TestS3ABlockingThreadPool)  Time elapsed: 11.249 sec <<< ERROR!
java.io.FileNotFoundException: Multi-part upload with id 'jFO63Jn9nnLWYp17xOMOXlZE6A3kBHLNfRydOFYkd1TJKESP7ZgLCE4OPWhV2rluUdKysiC4XsnxFxYfMmXIqg--' on tests3a/5f2bd5c5-5482-4e57-8836-8ec228e87a61: com.amazonaws.services.s3.model.AmazonS3Exception: The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchUpload; Request ID: ADFB4ECB8AA92906), S3 Extended Request ID: IPOATzKHoEoWXlgogqfM3PB9x8m8TNwlqywNjE1f8JvPNn6RdQqxoxzhFTa5fTAbk4M3ef7XEGw=
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:106)
	at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:141)
	at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.waitForAllPartUploads(S3AFastOutputStream.java:365)
	at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload.access$100(S3AFastOutputStream.java:319)
	at org.apache.hadoop.fs.s3a.S3AFastOutputStream.close(S3AFastOutputStream.java:254)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
{code}
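The remapping questioned above, where a service-side 404 surfaces as an FNFE,
can be sketched as below. This is an illustrative helper, not the actual
S3AUtils.translateException code; the assumption is only that status 404 is
the trigger for the FileNotFoundException conversion, which is why an aborted
multipart upload (404 NoSuchUpload) shows up in the test output as an FNFE.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Illustrative sketch of a 404 -> FNFE remapping. Hypothetical class and
// method names; real S3A translation logic lives in S3AUtils.
final class StatusTranslator {
    static IOException translate(String operation, String path,
                                 int statusCode, String message) {
        if (statusCode == 404) {
            // A 404 NoSuchUpload on complete-multipart lands here too,
            // even though "file not found" is a questionable description
            // of an aborted upload.
            return new FileNotFoundException(
                operation + " on " + path + ": " + message);
        }
        return new IOException(operation + " on " + path + ": " + message);
    }
}
```

Whether an aborted upload should really be reported as "file not found" is
exactly the question: the path exists as an intent, and the 404 refers to the
upload ID, not the object.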
> Branch-2: S3a to use thread pool that blocks clients
> ----------------------------------------------------
>
> Key: HADOOP-13139
> URL: https://issues.apache.org/jira/browse/HADOOP-13139
> Project: Hadoop Common
> Issue Type: Task
> Components: fs/s3
> Affects Versions: 2.8.0
> Reporter: Pieter Reuse
> Assignee: Pieter Reuse
> Attachments: HADOOP-13139-001.patch, HADOOP-13139-branch-2-003.patch,
> HADOOP-13139-branch-2.001.patch, HADOOP-13139-branch-2.002.patch
>
>
> HADOOP-11684 is accepted into trunk, but was not applied to branch-2. I will
> attach a patch applicable to branch-2.
> It should be noted in CHANGES-2.8.0.txt that the config parameter
> 'fs.s3a.threads.core' has been removed and the behavior of the
> ThreadPool for s3a has been changed.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)