[ https://issues.apache.org/jira/browse/BEAM-1330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17434506#comment-17434506 ]

Chamikara Madhusanka Jayalath commented on BEAM-1330:
-----------------------------------------------------

I think a simple fix would be to keep track of keys in the current batch using a
map, and to flush the existing batch as soon as a duplicate key is detected.
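A minimal sketch of that idea, outside the actual DatastoreIO writer: a batcher
that flushes early on a duplicate key as well as at the 500-entity RPC limit.
The class and its members (`DedupingBatcher`, `add`, `flush`) are hypothetical
illustrations, not Beam or Datastore client APIs; entities are stood in for by
their key strings.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the proposed fix: flush the pending batch as soon as
// a key already present in the batch arrives, instead of only flushing at the
// 500-entity per-RPC limit. Not actual Beam DatastoreIO code.
public class DedupingBatcher {
  static final int MAX_BATCH_SIZE = 500; // Cloud Datastore per-commit entity limit

  final List<String> batch = new ArrayList<>();   // pending writes (keys stand in for entities)
  final Set<String> keysInBatch = new HashSet<>(); // keys seen in the current batch
  int flushCount = 0;                              // number of flushes performed

  void add(String key) {
    // Duplicate key within the current non-transactional batch: flush first,
    // so the batch sent to Datastore never contains the same key twice.
    if (!keysInBatch.add(key)) {
      flush();
      keysInBatch.add(key);
    }
    batch.add(key);
    if (batch.size() >= MAX_BATCH_SIZE) {
      flush();
    }
  }

  void flush() {
    if (batch.isEmpty()) {
      return;
    }
    // In the real writer this is where the Datastore commit RPC would go.
    flushCount++;
    batch.clear();
    keysInBatch.clear();
  }
}
```

For example, adding keys `k1`, `k2`, `k1` triggers one early flush on the second
`k1`, leaving a fresh batch containing only that key.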

> DatastoreIO Writes should flush early when duplicate keys arrive.
> -----------------------------------------------------------------
>
>                 Key: BEAM-1330
>                 URL: https://issues.apache.org/jira/browse/BEAM-1330
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-gcp
>            Reporter: Vikas Kedigehalli
>            Priority: P3
>
> DatastoreIO batches up to 500 entities (the per-RPC limit for Cloud Datastore) 
> before flushing them. The writes are non-transactional and therefore do not 
> allow duplicate keys within a single batch. This can be a problem, especially 
> with non-global windowing, where multiple windows for the same key can end up 
> in the same batch, preventing the write from succeeding. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
