[
https://issues.apache.org/jira/browse/FLINK-24905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Danny Cranmer updated FLINK-24905:
----------------------------------
Description:
h2. Motivation
*User stories:*
As a Flink user, I’d like to use the Table API for the new Kinesis Data Streams
sink.
*Scope:*
* Introduce {{AsyncDynamicTableSink}}, a base class that enables sinking tables into Async Sink
implementations (see the class sketch after this list).
* Implement a new {{KinesisDynamicTableSink}} that uses the
{{KinesisDataStreamSink}} async implementation and implements
{{AsyncDynamicTableSink}}.
* Expose the Async Sink configuration as optional options in the table definition,
with default values derived from the {{KinesisDataStream}} defaults (see the
table-definition example after this list).
* Unit/Integration testing: modify the KinesisTableAPI tests for the new
implementation; add unit tests for {{AsyncDynamicTableSink}},
{{KinesisDynamicTableSink}}, and {{KinesisDynamicTableSinkFactory}}.
* Java / code-level docs.
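A minimal sketch of how {{AsyncDynamicTableSink}} could carry the shared Async Sink options that a connector-specific sink such as {{KinesisDynamicTableSink}} would then build on. The field names below are assumptions based on FLIP-171, not the final API:
{code:java}
import java.io.Serializable;

import org.apache.flink.table.connector.sink.DynamicTableSink;

// Illustrative sketch only: field names are assumptions taken from FLIP-171,
// not the final API. Holds the batching/flushing options common to every
// Async Sink based table sink; KinesisDynamicTableSink would extend it and
// pass the values through to the underlying async sink builder.
public abstract class AsyncDynamicTableSink<RequestEntryT extends Serializable>
        implements DynamicTableSink {

    protected final int maxBatchSize;          // max records per batch request
    protected final int maxInFlightRequests;   // max concurrent in-flight requests
    protected final int maxBufferedRequests;   // max records buffered before back-pressure
    protected final long maxBufferSizeInBytes; // flush once buffered bytes exceed this
    protected final long maxTimeInBufferMS;    // flush once a record has waited this long

    protected AsyncDynamicTableSink(
            int maxBatchSize,
            int maxInFlightRequests,
            int maxBufferedRequests,
            long maxBufferSizeInBytes,
            long maxTimeInBufferMS) {
        this.maxBatchSize = maxBatchSize;
        this.maxInFlightRequests = maxInFlightRequests;
        this.maxBufferedRequests = maxBufferedRequests;
        this.maxBufferSizeInBytes = maxBufferSizeInBytes;
        this.maxTimeInBufferMS = maxTimeInBufferMS;
    }
}
{code}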
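For the optional table options, a hedged example of what a Kinesis table definition with explicit Async Sink settings might look like. The option keys are illustrative assumptions; any option that is omitted would fall back to the {{KinesisDataStream}} defaults:
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KinesisTableExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // The async options below are optional; option keys are illustrative
        // assumptions, not the final documented names.
        tEnv.executeSql(
                "CREATE TABLE orders (\n"
                        + "  order_id STRING,\n"
                        + "  price DOUBLE\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kinesis',\n"
                        + "  'stream' = 'orders-stream',\n"
                        + "  'aws.region' = 'us-east-1',\n"
                        + "  'format' = 'json',\n"
                        + "  'sink.batch.max-size' = '500',\n"
                        + "  'sink.requests.max-inflight' = '16',\n"
                        + "  'sink.flush-buffer.timeout' = '5000'\n"
                        + ")");

        tEnv.executeSql("INSERT INTO orders VALUES ('order-1', 9.99)");
    }
}
{code}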
h2. References
More details can be found at
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
was:
h2. Motivation
*User stories:*
As a Flink user, I’d like to use the Table API for the new Kinesis Data Streams
sink.
*Scope:*
* Implement an asynchronous sink for Kinesis Data Streams (KDS) by inheriting
the AsyncSinkBase class. The implementation can for now reside in its own
module in flink-connectors. The module and package name can be anything
reasonable e.g. {{flink-connector-aws-kinesis}} for the module name and
{{org.apache.flink.connector.aws.kinesis}} for the package name. Side-note:
There will be additional work later to move these implementations somewhere
else.
* The implementation must allow users to configure the Kinesis Client, with
reasonable default settings.
* Implement an asynchronous sink writer for KDS by extending the
AsyncSinkWriter. The implementation must deal with failed requests and retry
them using the {{requeueFailedRequestEntry}} method (see the sketch after this
list). If possible, the implementation should batch multiple requests
({{PutRecordsRequestEntry}} objects) to KDS for increased throughput. The
implemented sink writer will be used by the sink class created as part of this story.
* Unit/Integration testing: use Kinesalite (an in-memory Kinesis simulation),
which we already use in {{KinesisTableApiITCase}}.
* Java / code-level docs.
* End-to-end testing: add tests that hit a real AWS instance. (How can we best
donate resources to the Flink project to allow this to happen?)
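As a rough illustration of the retry behaviour described above, the sketch below submits one batch via the AWS SDK v2 async client and hands any failed {{PutRecordsRequestEntry}} objects back to a requeue callback. The hook name mirrors {{requeueFailedRequestEntry}} from this description, but the real {{AsyncSinkWriter}} contract may differ, so this is a standalone sketch rather than the connector code:
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry;
import software.amazon.awssdk.services.kinesis.model.PutRecordsResponse;

// Sketch only: the hook name and shape are assumptions based on this ticket,
// not the actual AsyncSinkWriter API.
class KinesisDataStreamsSinkWriterSketch {

    private final KinesisAsyncClient client;
    private final String streamName;

    KinesisDataStreamsSinkWriterSketch(KinesisAsyncClient client, String streamName) {
        this.client = client;
        this.streamName = streamName;
    }

    // Submits one batch; entries that KDS reports as failed are handed to the
    // requeue callback so they can be retried instead of being dropped.
    void submitRequestEntries(
            List<PutRecordsRequestEntry> entries,
            Consumer<List<PutRecordsRequestEntry>> requeueFailedRequestEntries) {
        PutRecordsRequest request =
                PutRecordsRequest.builder().streamName(streamName).records(entries).build();

        CompletableFuture<PutRecordsResponse> future = client.putRecords(request);
        future.whenComplete(
                (response, error) -> {
                    List<PutRecordsRequestEntry> failed = new ArrayList<>();
                    if (error != null) {
                        // Whole batch failed: retry every entry.
                        failed.addAll(entries);
                    } else if (response.failedRecordCount() != null
                            && response.failedRecordCount() > 0) {
                        // Result entries are positional with respect to the request entries.
                        for (int i = 0; i < response.records().size(); i++) {
                            if (response.records().get(i).errorCode() != null) {
                                failed.add(entries.get(i));
                            }
                        }
                    }
                    requeueFailedRequestEntries.accept(failed);
                });
    }
}
{code}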
h2. References
More details can be found at
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
> KDS implementation of Async Sink Table API
> ------------------------------------------
>
> Key: FLINK-24905
> URL: https://issues.apache.org/jira/browse/FLINK-24905
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Kinesis
> Reporter: Zichen Liu
> Assignee: Ahmed Hamdy
> Priority: Major
> Fix For: 1.15.0
>
>
> h2. Motivation
> *User stories:*
> As a Flink user, I’d like to use the Table API for the new Kinesis Data
> Streams sink.
> *Scope:*
> * Introduce {{AsyncDynamicTableSink}}, a base class that enables sinking tables into Async
> Sink implementations.
> * Implement a new {{KinesisDynamicTableSink}} that uses the
> {{KinesisDataStreamSink}} async implementation and implements
> {{AsyncDynamicTableSink}}.
> * Expose the Async Sink configuration as optional options in the table
> definition, with default values derived from the {{KinesisDataStream}}
> defaults.
> * Unit/Integration testing: modify the KinesisTableAPI tests for the new
> implementation; add unit tests for {{AsyncDynamicTableSink}},
> {{KinesisDynamicTableSink}}, and {{KinesisDynamicTableSinkFactory}}.
> * Java / code-level docs.
> h2. References
> More details can be found at
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]
--
This message was sent by Atlassian Jira
(v8.20.1#820001)