[ 
https://issues.apache.org/jira/browse/FLINK-24229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17545434#comment-17545434
 ] 

Yuri Gusev edited comment on FLINK-24229 at 6/2/22 11:51 AM:
-------------------------------------------------------------

Thanks [~dannycranmer]. 

We addressed most of the review comments. There is also a separate PR that
hides the ElementConverter from users; it is up for review on its own, but we
can merge it into this one.

The one remaining piece is client creation, which I'll try to fix soon. The
rest is ready for re-review. It would be nice to get this merged soon, before
it needs major changes yet again; we have already rewritten it a couple of
times (an earlier implementation did not have the shared async connector base
class). :)
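
For reference, the ElementConverter in question is the glue that turns a user
record into a DynamoDB {{WriteRequest}}. A minimal sketch, assuming the
{{ElementConverter}} interface from the async sink base and the AWS SDK v2
model classes; the input type, attribute name, and class name below are made
up for illustration, not the actual connector code:

{code:java}
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.connector.base.sink.writer.ElementConverter;

import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutRequest;
import software.amazon.awssdk.services.dynamodb.model.WriteRequest;

import java.util.Map;

/** Illustrative converter: maps a plain String element to a single-attribute put. */
public class StringElementConverter implements ElementConverter<String, WriteRequest> {

    @Override
    public WriteRequest apply(String element, SinkWriter.Context context) {
        // Wrap the element in a PutRequest; a real converter would map a POJO's
        // fields to the table's attributes instead of one hard-coded partition key.
        return WriteRequest.builder()
                .putRequest(PutRequest.builder()
                        .item(Map.of("pk", AttributeValue.builder().s(element).build()))
                        .build())
                .build();
    }
}
{code}

Hiding a converter like this behind the sink's builder is what lets users
describe only their records while the connector owns the mapping to
{{WriteRequest}}.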



> [FLIP-171] DynamoDB implementation of Async Sink
> ------------------------------------------------
>
>                 Key: FLINK-24229
>                 URL: https://issues.apache.org/jira/browse/FLINK-24229
>             Project: Flink
>          Issue Type: New Feature
>          Components: Connectors / Common
>            Reporter: Zichen Liu
>            Assignee: Yuri Gusev
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.16.0
>
>
> h2. Motivation
> *User stories:*
>  As a Flink user, I’d like to use DynamoDB as a sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for DynamoDB by inheriting the 
> AsyncSinkBase class. The implementation can for now reside in its own module 
> in flink-connectors.
>  * Implement an asynchronous sink writer for DynamoDB by extending the 
> AsyncSinkWriter. The implementation must handle failed requests and retry 
> them using the {{requeueFailedRequestEntry}} method. If possible, the 
> implementation should batch multiple requests ({{WriteRequest}} objects) into 
> a single BatchWriteItem call to DynamoDB for increased throughput. The sink 
> writer implemented here will be used by the Sink class created as part of 
> this story (see the sketch after this description).
>  * Java / code-level docs.
>  * End-to-end testing: add tests that hit a real AWS instance. (How can we 
> best donate resources to the Flink project to allow this to happen?)
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
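
A rough sketch of the sink writer described in the scope above, assuming the
Flink 1.15-era {{AsyncSinkWriter}} API (failed entries are handed back through
the {{submitRequestEntries}} callback, which is what triggers the re-queueing
mentioned above) and the AWS SDK v2 {{DynamoDbAsyncClient}}. The class name,
constructor parameters, and batching limits are placeholders, not the actual
connector code:

{code:java}
import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.connector.base.sink.writer.AsyncSinkWriter;
import org.apache.flink.connector.base.sink.writer.ElementConverter;

import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.BatchWriteItemRequest;
import software.amazon.awssdk.services.dynamodb.model.WriteRequest;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Illustrative writer: batches WriteRequests into BatchWriteItem calls. */
class DynamoDbSinkWriter<InputT> extends AsyncSinkWriter<InputT, WriteRequest> {

    private final DynamoDbAsyncClient client;
    private final String tableName;

    DynamoDbSinkWriter(
            ElementConverter<InputT, WriteRequest> elementConverter,
            Sink.InitContext context,
            DynamoDbAsyncClient client,
            String tableName) {
        // Limits are placeholders; BatchWriteItem accepts at most 25 items per call.
        // Note: the base class requires a Serializable request entry type, so a real
        // implementation may wrap WriteRequest in its own serializable class.
        super(elementConverter, context, 25, 50, 10_000, 16 * 1024 * 1024, 5_000, 400 * 1024);
        this.client = client;
        this.tableName = tableName;
    }

    @Override
    protected void submitRequestEntries(
            List<WriteRequest> requestEntries, Consumer<List<WriteRequest>> requestResult) {
        BatchWriteItemRequest batchRequest =
                BatchWriteItemRequest.builder()
                        .requestItems(Map.of(tableName, requestEntries))
                        .build();
        client.batchWriteItem(batchRequest)
                .whenComplete(
                        (response, error) -> {
                            if (error != null) {
                                // Whole batch failed: hand every entry back to be re-queued.
                                requestResult.accept(requestEntries);
                            } else {
                                // Partial failure: re-queue only the items DynamoDB reports
                                // as unprocessed; an empty list marks the batch as complete.
                                requestResult.accept(
                                        response.unprocessedItems()
                                                .getOrDefault(tableName, List.of()));
                            }
                        });
    }

    @Override
    protected long getSizeInBytes(WriteRequest requestEntry) {
        // Crude estimate; a real implementation would size the serialized item.
        return requestEntry.toString().getBytes(StandardCharsets.UTF_8).length;
    }
}
{code}

Handing back only DynamoDB's unprocessed items keeps successful writes out of
the retry path, while a failed call re-queues the whole batch.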


