[ 
https://issues.apache.org/jira/browse/BEAM-10706?focusedWorklogId=521761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521761
 ]

ASF GitHub Bot logged work on BEAM-10706:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Dec/20 16:46
            Start Date: 08/Dec/20 16:46
    Worklog Time Spent: 10m 
      Work Description: iemejia commented on pull request #12583:
URL: https://github.com/apache/beam/pull/12583#issuecomment-740753739


   Thanks a lot for the info on localstack. If you feel motivated, please open 
a PR to update localstack and enable the tests that work there; otherwise I 
will do it.
   
   I am trying to understand this again, because it seems I may have 
misinterpreted it before. If we need deduplication to guarantee that operations 
do not fail in the presence of duplicate attributes, why do we need to pass the 
overwrite keys explicitly? With the current implementation I have the 
impression we could also end up filtering out non-repeated keys. Couldn't we 
simply compare the current element against the current 
`private List<KV<String, WriteRequest>> batch;` and overwrite the duplicate 
element if there is one? I think this would make the implementation much 
simpler (and it would also preserve order).
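   To make the suggestion concrete, here is a minimal sketch of that in-batch 
overwrite idea: keep the batch keyed by the item's deduplication key so a later 
write to the same key replaces the earlier one, while preserving insertion 
order. The `WriteRequest` class below is a simplified stand-in for the AWS SDK 
type, and `BatchDedup` is a hypothetical name, not the actual DynamoDBIO code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchDedup {
  // Stand-in for the AWS SDK's WriteRequest; not the real class.
  static final class WriteRequest {
    final String payload;
    WriteRequest(String payload) { this.payload = payload; }
  }

  // LinkedHashMap preserves first-insertion order while letting put()
  // overwrite an existing entry for the same key (last write wins).
  private final Map<String, WriteRequest> batch = new LinkedHashMap<>();

  void addToBatch(String key, WriteRequest request) {
    batch.put(key, request); // duplicate key: the later request replaces it
  }

  List<WriteRequest> flush() {
    List<WriteRequest> out = new ArrayList<>(batch.values());
    batch.clear();
    return out;
  }

  public static void main(String[] args) {
    BatchDedup dedup = new BatchDedup();
    dedup.addToBatch("id-1", new WriteRequest("v1"));
    dedup.addToBatch("id-2", new WriteRequest("v2"));
    dedup.addToBatch("id-1", new WriteRequest("v3")); // overwrites "v1"
    List<WriteRequest> flushed = dedup.flush();
    if (flushed.size() != 2) throw new AssertionError();
    if (!"v3".equals(flushed.get(0).payload)) throw new AssertionError();
  }
}
```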
   
   Also, if deduplication is mandatory to avoid `ValidationException`, 
shouldn't it be the default behaviour? Or in which cases would I benefit from 
skipping deduplication?
   
   I apologize, because I don't want to delay this any longer, but I still 
don't think I understand the fix.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 521761)
    Time Spent: 3.5h  (was: 3h 20m)

> DynamoDBIO fail to write to the same key in short consecution
> -------------------------------------------------------------
>
>                 Key: BEAM-10706
>                 URL: https://issues.apache.org/jira/browse/BEAM-10706
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-aws
>    Affects Versions: 2.23.0
>            Reporter: Dennis Yung
>            Assignee: Dennis Yung
>            Priority: P2
>             Fix For: 2.27.0
>
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Internally, DynamoDBIO.Write uses the batchWriteItem method from the AWS SDK 
> to sink items. However, the AWS SDK has a limitation: a single call to 
> batchWriteItem cannot contain duplicate keys.
> Currently DynamoDBIO.Write performs no key deduplication before flushing a 
> batch, which can cause "ValidationException: Provided list of item keys 
> contains duplicates" if consecutive updates to a single key fall within the 
> same batch (the batch size is currently hardcoded to 25).
> To fix this bug, the batch of write requests needs to be deduplicated before 
> being sent to batchRequest.addRequestItemsEntry.
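The failure mode described above can be illustrated with a small, self-contained 
sketch of the check DynamoDB effectively performs: a batch containing two 
requests for the same item key is rejected outright. The class and method names 
here are hypothetical; this only mirrors the service-side validation, not the 
SDK itself.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateKeyCheck {
  // Mirrors the service-side rule: a batch may not contain the same item
  // key twice, otherwise the whole call fails with a validation error.
  static void validateBatch(List<String> itemKeys) {
    Set<String> seen = new HashSet<>();
    for (String key : itemKeys) {
      if (!seen.add(key)) {
        throw new IllegalArgumentException(
            "Provided list of item keys contains duplicates");
      }
    }
  }

  public static void main(String[] args) {
    validateBatch(Arrays.asList("a", "b", "c")); // distinct keys: accepted
    try {
      validateBatch(Arrays.asList("a", "b", "a")); // duplicate "a": rejected
      throw new AssertionError("expected the duplicate batch to be rejected");
    } catch (IllegalArgumentException expected) {
      // Rejected as expected; deduplicating before flush avoids this.
    }
  }
}
```

This is why deduplicating the batch before it reaches batchWriteItem fixes the 
bug: the flushed batch then contains at most one request per key.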



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
