[ https://issues.apache.org/jira/browse/HADOOP-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16417219#comment-16417219 ]

Steve Loughran commented on HADOOP-15349:
-----------------------------------------

Stack trace:
{code}
2018-03-28 04:22:17,375 [s3-committer-pool-2] ERROR s3a.S3AFileSystem (S3AFileSystem.java:finishedWrite(2730)) - S3Guard: Error updating MetadataStore for write to cloud-integration/DELAY_LISTING_ME/S3ACommitBulkDataSuite/bulkdata/output/landsat/parquet/parted-1/year=2016/month=6/part-00000-24152aa2-c86d-49d2-98d4-820dc37a6df1-local-1522235507089.c000.snappy.parquet:
java.io.IOException: Max retries exceeded (9) for DynamoDB
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoff(DynamoDBMetadataStore.java:657)
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:636)
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.put(DynamoDBMetadataStore.java:695)
        at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.put(DynamoDBMetadataStore.java:685)
        at org.apache.hadoop.fs.s3a.s3guard.S3Guard.putAndReturn(S3Guard.java:149)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:2727)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$finalizeMultipartUpload$1(WriteOperationHelper.java:234)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.finalizeMultipartUpload(WriteOperationHelper.java:222)
        at org.apache.hadoop.fs.s3a.WriteOperationHelper.completeMPUwithRetries(WriteOperationHelper.java:267)
        at org.apache.hadoop.fs.s3a.commit.CommitOperations.innerCommit(CommitOperations.java:179)
        at org.apache.hadoop.fs.s3a.commit.CommitOperations.commit(CommitOperations.java:151)
        at org.apache.hadoop.fs.s3a.commit.CommitOperations.commitOrFail(CommitOperations.java:134)
        at org.apache.hadoop.fs.s3a.commit.AbstractS3ACommitter.lambda$commitPendingUploads$3(AbstractS3ACommitter.java:451)
        at org.apache.hadoop.fs.s3a.commit.Tasks$Builder$1.run(Tasks.java:254)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
{code}
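
For context, the failure path runs S3AFileSystem.finishedWrite -> S3Guard.putAndReturn -> DynamoDBMetadataStore.put -> processBatchWriteRequest -> retryBackoff, with retryBackoff giving up after 9 attempts. A minimal sketch of the capped exponential-backoff pattern involved (the constants and sleep strategy here are assumptions for illustration, not the actual Hadoop source):

{code}
import java.io.IOException;

public class RetryBackoffSketch {

  // Hypothetical limits; the real values live in DynamoDBMetadataStore.
  private static final int MAX_RETRIES = 9;
  private static final long BASE_DELAY_MS = 100;

  /**
   * Sleep with exponential backoff, throwing once the retry budget
   * is exhausted. Mirrors the shape of
   * DynamoDBMetadataStore.retryBackoff, not its exact code.
   */
  void retryBackoff(int retryCount) throws IOException {
    if (retryCount >= MAX_RETRIES) {
      // This is the uninformative message the JIRA complains about:
      // no table name, no region, no hint about throttling or capacity.
      throw new IOException(
          "Max retries exceeded (" + retryCount + ") for DynamoDB");
    }
    long delay = BASE_DELAY_MS << retryCount;  // exponential growth
    try {
      Thread.sleep(delay);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted during backoff", e);
    }
  }
}
{code}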

> S3Guard DDB retryBackoff to be more informative on limits exceeded
> ------------------------------------------------------------------
>
>                 Key: HADOOP-15349
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15349
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> When S3Guard can't update the DB and so throws an IOE after the retry limit 
> is exceeded, the error is not at all informative. Improve the logging & the
> exception text.
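
One direction the improvement could take: include the table name, region, and a throttling hint in the exception text. A minimal sketch, assuming the metadata store can supply the table name and region (all identifiers and wording below are hypothetical, not the committed patch):

{code}
import java.io.IOException;

public class InformativeRetryFailure {

  // All names here are illustrative, not the actual HADOOP-15349 fix.
  static IOException retriesExhausted(int retries, String tableName,
      String region) {
    return new IOException(String.format(
        "Max retries exceeded (%d) updating DynamoDB table '%s' in %s;"
            + " the table may be throttled. Check its provisioned"
            + " read/write capacity and CloudWatch throttle metrics.",
        retries, tableName, region));
  }

  public static void main(String[] args) {
    // Example of the message a caller such as retryBackoff could throw.
    System.out.println(
        retriesExhausted(9, "hadoop-s3guard-table", "us-west-2")
            .getMessage());
  }
}
{code}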


