[ https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713512#comment-16713512 ]
lqjacklee commented on HADOOP-15847:
------------------------------------
When the capacity is set to 1, the test output renders as below:
2018-12-08 11:55:57,912 [pool-4-thread-22] INFO s3guard.ITestDynamoDBMetadataStoreScale (ITestDynamoDBMetadataStoreScale.java:lambda$execute$8(432)) - Operation [0] raised a throttled exception
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: Max retries during batch write exceeded (2) for DynamoDB. This may be because the write threshold of DynamoDB is set too low.: Throttling (Service: S3Guard; Status Code: 503; Error Code: Throttling; Request ID: n/a)
org.apache.hadoop.fs.s3a.AWSServiceThrottledException: Max retries during batch write exceeded (2) for DynamoDB. This may be because the write threshold of DynamoDB is set too low.: Throttling (Service: S3Guard; Status Code: 503; Error Code: Throttling; Request ID: n/a)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:800)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:759)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.innerPut(DynamoDBMetadataStore.java:845)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.put(DynamoDBMetadataStore.java:837)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.put(DynamoDBMetadataStore.java:831)
    at org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale.lambda$test_030_BatchedWrite$0(ITestDynamoDBMetadataStoreScale.java:237)
    at org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStoreScale.lambda$execute$8(ITestDynamoDBMetadataStoreScale.java:428)
    at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
    at java.util.concurrent.FutureTask.run(FutureTask.java)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.AmazonServiceException: Throttling (Service: S3Guard; Status Code: 503; Error Code: Throttling; Request ID: n/a)
    at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:791)
    ... 11 more
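For context, here is a minimal sketch of how a test configuration can pin the capacity to 1, assuming it is done through the standard s3a S3Guard table-capacity keys; the class and method names are illustrative only, not the actual patch or test code:
{code:java}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch: pin the S3Guard DynamoDB table to a single read/write
// capacity unit, the setup that produces the throttling shown above.
// The class/method names here are hypothetical; only the two property
// keys are the standard s3a S3Guard capacity settings.
public class LowCapacityS3GuardConf {
  public static Configuration lowCapacityConf() {
    Configuration conf = new Configuration();
    // 1 read + 1 write capacity unit: cheap, but batched writes get
    // throttled almost immediately, surfacing AWSServiceThrottledException.
    conf.setInt("fs.s3a.s3guard.ddb.table.capacity.read", 1);
    conf.setInt("fs.s3a.s3guard.ddb.table.capacity.write", 1);
    return conf;
  }
}
{code}
With the capacity that low, the batch-write retry limit is exhausted quickly, which is the throttled exception reported above.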
> S3Guard testConcurrentTableCreations to set r & w capacity == 1
> ---------------------------------------------------------------
>
> Key: HADOOP-15847
> URL: https://issues.apache.org/jira/browse/HADOOP-15847
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Priority: Major
> Attachments: HADOOP-15847-001.patch
>
>
> I just found a {{testConcurrentTableCreations}} DDB table lurking in a
> region, presumably from an interrupted test. Luckily
> test/resources/core-site.xml forces the r/w capacity to be 10, but it could
> still run up bills.
> Recommend
> * explicitly set capacity = 1 for the test
> * and add comments in the testing docs about keeping cost down.
> I think we may also want to make this a scale-only test, so it's run less
> often