[ https://issues.apache.org/jira/browse/FLINK-31946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17716811#comment-17716811 ]
Curtis Jensen commented on FLINK-31946:
---------------------------------------
I attempted to fork the flink-connector-aws repo and implement a solution that
allows multiple items to be generated per element. I modified the
DynamoDbWriteRequest object to support multiple items, but this caused
maxBatchSize to be exceeded, because the AsyncSinkWriter assumes one request
entry per element. So this may require a larger change to connector-base,
unless a better approach is available.
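For reference, a minimal sketch of the direction I tried. The name
MultiItemElementConverter is made up; the real ElementConverter contract
returns exactly one request entry per element, which is the blocker:
{code:java}
import java.util.List;

import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequest;

/**
 * Hypothetical converter that fans one element out to several write requests.
 * This does not compile against the released connector-base: the existing
 * ElementConverter#apply returns a single RequestEntryT, and AsyncSinkWriter
 * sizes its batches assuming one request entry per element, so a fan-out
 * here can push a batch past maxBatchSize.
 */
public interface MultiItemElementConverter<InputT> {
    List<DynamoDbWriteRequest> apply(InputT element, SinkWriter.Context context);
}
{code}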
> DynamoDB Sink Allow Multiple Item Writes
> ----------------------------------------
>
> Key: FLINK-31946
> URL: https://issues.apache.org/jira/browse/FLINK-31946
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / DynamoDB
> Reporter: Curtis Jensen
> Priority: Minor
>
> In some cases, it is desirable to write aggregation data to
> multiple partition keys. This supports denormalizing data to
> facilitate more efficient read operations.
> However, the DynamoDbSink allows only a single DynamoDB item to be
> generated for each Flink element. This appears to be a limitation of the
> ElementConverter more than of DynamoDbSink.
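> A sketch of a possible workaround under the current API: fan the element
> out with a flatMap upstream of the sink, so the ElementConverter stays
> one-to-one. The types Aggregate and DenormalizedRow and their fields are
> illustrative only:
> {code:java}
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> import org.apache.flink.connector.dynamodb.sink.DynamoDbSink;
> import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequest;
> import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequestType;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.util.Collector;
>
> import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
>
> public class DenormalizingPipeline {
>
>     /** Illustrative aggregate to be written under several partition keys. */
>     public static class Aggregate {
>         public List<String> partitionKeys;
>         public long total;
>     }
>
>     /** One denormalized row per partition key; maps 1:1 to a DynamoDB item. */
>     public static class DenormalizedRow {
>         public String pk;
>         public long total;
>
>         Map<String, AttributeValue> toItem() {
>             Map<String, AttributeValue> item = new HashMap<>();
>             item.put("pk", AttributeValue.builder().s(pk).build());
>             item.put("total", AttributeValue.builder().n(Long.toString(total)).build());
>             return item;
>         }
>     }
>
>     public static void wire(DataStream<Aggregate> aggregates) {
>         // Fan out: one aggregate becomes one row per partition key.
>         DataStream<DenormalizedRow> rows = aggregates
>             .flatMap((Aggregate agg, Collector<DenormalizedRow> out) -> {
>                 for (String pk : agg.partitionKeys) {
>                     DenormalizedRow row = new DenormalizedRow();
>                     row.pk = pk;
>                     row.total = agg.total;
>                     out.collect(row);
>                 }
>             })
>             .returns(DenormalizedRow.class);
>
>         // The converter stays one element -> one write request, as the sink expects.
>         DynamoDbSink<DenormalizedRow> sink = DynamoDbSink.<DenormalizedRow>builder()
>             .setTableName("my-table")
>             .setElementConverter((row, ctx) ->
>                 DynamoDbWriteRequest.builder()
>                     .setType(DynamoDbWriteRequestType.PUT)
>                     .setItem(row.toItem())
>                     .build())
>             .build();
>
>         rows.sinkTo(sink);
>     }
> }
> {code}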
--
This message was sent by Atlassian Jira
(v8.20.10#820010)