[
https://issues.apache.org/jira/browse/BEAM-10706?focusedWorklogId=476661&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-476661
]
ASF GitHub Bot logged work on BEAM-10706:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 31/Aug/20 16:43
Start Date: 31/Aug/20 16:43
Worklog Time Spent: 10m
Work Description: dennisylyung commented on a change in pull request
#12583:
URL: https://github.com/apache/beam/pull/12583#discussion_r480253111
##########
File path: sdks/java/io/amazon-web-services/src/test/java/org/apache/beam/sdk/io/aws/dynamodb/DynamoDBIOTest.java
##########
@@ -199,7 +209,8 @@ public void testRetries() throws Throwable {
                 writeRequest -> KV.of(tableName, writeRequest))
             .withRetryConfiguration(
                 DynamoDBIO.RetryConfiguration.create(4, Duration.standardSeconds(10)))
-            .withAwsClientsProvider(AwsClientsProviderMock.of(amazonDynamoDBMock)));
+            .withAwsClientsProvider(AwsClientsProviderMock.of(amazonDynamoDBMock))
+            .withOverwriteByPKeys(overwriteByPKeys));
Review comment:
Spent some time working on this but couldn't come up with anything. Any tips
on how to test whether the batches are correct? I can't think of any way to
test it in a pipeline, since the end result written to the DB is the same (the
issue is at the API level). Should I do it outside a pipeline, call
processElement manually, and check the batch size?
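
One possible approach (a minimal sketch, not the test in the PR): since
amazonDynamoDBMock in the diff above is a Mockito mock, the batchWriteItem
calls it receives could be captured with an ArgumentCaptor and each captured
batch checked for duplicate keys. The helper class, the keyAttr parameter, and
the assumption that every request is a put keyed by a single string attribute
are all illustrative, not taken from the PR.

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.atLeastOnce;
import static org.mockito.Mockito.verify;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.BatchWriteItemRequest;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.mockito.ArgumentCaptor;

final class BatchAssertions {

  // Captures every BatchWriteItemRequest issued through the Mockito mock and
  // asserts that no single batch contains two put requests with the same value
  // of the given (hypothetical) partition-key attribute.
  static void assertNoDuplicateKeysPerBatch(AmazonDynamoDB amazonDynamoDBMock, String keyAttr) {
    ArgumentCaptor<BatchWriteItemRequest> captor =
        ArgumentCaptor.forClass(BatchWriteItemRequest.class);
    verify(amazonDynamoDBMock, atLeastOnce()).batchWriteItem(captor.capture());

    for (BatchWriteItemRequest batch : captor.getAllValues()) {
      for (List<WriteRequest> requests : batch.getRequestItems().values()) {
        Set<String> seenKeys = new HashSet<>();
        for (WriteRequest request : requests) {
          String key = request.getPutRequest().getItem().get(keyAttr).getS();
          assertTrue("duplicate key '" + key + "' within one batch", seenKeys.add(key));
        }
      }
    }
  }
}

Such a check could run after the pipeline that writes through the mocked
client has finished, e.g.
BatchAssertions.assertNoDuplicateKeysPerBatch(amazonDynamoDBMock, "hashKey"),
where "hashKey" stands in for whatever partition-key attribute the test data
uses.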
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 476661)
Time Spent: 1h 20m (was: 1h 10m)
> DynamoDBIO fails to write to the same key in quick succession
> -------------------------------------------------------------
>
> Key: BEAM-10706
> URL: https://issues.apache.org/jira/browse/BEAM-10706
> Project: Beam
> Issue Type: Bug
> Components: io-java-aws
> Affects Versions: 2.23.0
> Reporter: Dennis Yung
> Assignee: Dennis Yung
> Priority: P2
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> Internally, DynamoDBIO.Write uses the batchWriteItem method from the AWS SDK
> to sink items. However, the AWS SDK has a limitation that a single call to
> batchWriteItem cannot contain duplicate keys.
> Currently DynamoDBIO.Write performs no key deduplication before flushing a
> batch, which can cause "ValidationException: Provided list of item keys
> contains duplicates" if consecutive updates to a single key fall within one
> batch (the batch size is currently hardcoded to 25).
> To fix this bug, the batch of write requests needs to be deduplicated before
> being sent to batchRequest.addRequestItemsEntry.
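
As an illustration of the deduplication described above (a minimal sketch
under assumptions, not the actual change in PR #12583): keep only the last
WriteRequest seen for each distinct primary key before the batch is handed to
batchRequest.addRequestItemsEntry. The class and method names are
hypothetical, and pkeyAttrs stands in for the primary-key attribute names that
the PR's withOverwriteByPKeys option supplies.

import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

final class WriteRequestDeduplicator {

  // Keeps only the last WriteRequest seen for each distinct primary key, so a
  // single BatchWriteItem call never carries duplicate keys; earlier writes to
  // the same key are replaced by the later one.
  static List<WriteRequest> deduplicateByKeys(List<WriteRequest> requests, List<String> pkeyAttrs) {
    Map<List<AttributeValue>, WriteRequest> lastPerKey = new LinkedHashMap<>();
    for (WriteRequest request : requests) {
      Map<String, AttributeValue> item =
          request.getPutRequest() != null
              ? request.getPutRequest().getItem()
              : request.getDeleteRequest().getKey();
      // The primary key is the tuple of the configured key attribute values.
      List<AttributeValue> key =
          pkeyAttrs.stream().map(item::get).collect(Collectors.toList());
      lastPerKey.put(key, request);
    }
    return new ArrayList<>(lastPerKey.values());
  }
}

Comparing the full tuple of key attribute values keeps composite
(partition + sort) keys distinct as well as simple partition keys.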
--
This message was sent by Atlassian Jira
(v8.3.4#803005)