[
https://issues.apache.org/jira/browse/FLINK-8380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Flink Jira Bot updated FLINK-8380:
----------------------------------
Labels: auto-deprioritized-major stale-minor (was:
auto-deprioritized-major)
I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help
the community manage its development. I see this issue has been marked as
Minor but is unassigned and neither it nor its Sub-Tasks have been updated
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is
still Minor, please either assign yourself or give an update. Afterwards,
please remove the label, or the issue will be deprioritized in 7 days.
> Dynamic BucketingSink paths based on ingested Kafka topics
> ----------------------------------------------------------
>
> Key: FLINK-8380
> URL: https://issues.apache.org/jira/browse/FLINK-8380
> Project: Flink
> Issue Type: Improvement
> Components: API / DataStream
> Affects Versions: 1.4.0
> Reporter: Kyle Hamlin
> Priority: Minor
> Labels: auto-deprioritized-major, stale-minor
>
> Flink 1.4 introduced a feature that allows Kafka consumers to dynamically
> ingest topics based on a regex pattern. If a user wanted to use Flink as a
> simple (no transformations) but dynamic (auto topic discovery & auto output
> path generation) data persister, they would currently only have half the
> tools to do so. I believe it would be beneficial to let users not only
> configure automatic topic discovery but also dynamically incorporate the
> discovered topics into the BucketingSink output path. For example:
> If I had three Kafka topics
> {code:java}
> select-topic-1
> ignore-topic-1
> select-topic-2
> {code}
> And my Kafka consumer's regex selected only two topics
> {code:java}
> val consumer = new FlinkKafkaConsumer010[GenericRecord](
>   Pattern.compile("select-.*?"),
>   new MyDeserializer(),
>   props)
> {code}
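> For the sink to route on topic, the topic name has to travel with each
> record. As far as I can tell, the pattern-based consumer constructors also
> accept a KeyedDeserializationSchema, which is handed the topic name for every
> record, so MyDeserializer could capture it along these lines. This is only a
> rough sketch: TopicRecord and TopicAwareDeserializer are illustrative names
> (wrapping the raw bytes rather than a GenericRecord), not part of any
> existing API.
> {code:java}
> import org.apache.flink.api.common.typeinfo.TypeInformation
> import org.apache.flink.api.scala._
> import org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema
>
> // Illustrative wrapper that carries the source topic alongside the raw payload.
> case class TopicRecord(topic: String, payload: Array[Byte])
>
> class TopicAwareDeserializer extends KeyedDeserializationSchema[TopicRecord] {
>
>   // Kafka hands over the topic name for every record, so simply keep it.
>   override def deserialize(messageKey: Array[Byte], message: Array[Byte],
>                            topic: String, partition: Int, offset: Long): TopicRecord =
>     TopicRecord(topic, message)
>
>   override def isEndOfStream(nextElement: TopicRecord): Boolean = false
>
>   override def getProducedType: TypeInformation[TopicRecord] =
>     createTypeInformation[TopicRecord]
> }
> {code}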
> Then each selected topic would be appended to the BucketingSink base path,
> and any Bucketer partitions would follow it
> {code:java}
> val sink = new BucketingSink[GenericRecord]("s3://my-bucket/")
> sink.setBucketer(new DateTimeBucketer[GenericRecord]("yyyyMMdd"))
> {code}
> The resulting output paths would be
> {code:java}
> s3://my-bucket/select-topic-1/yyyyMMdd/
> s3://my-bucket/select-topic-2/yyyyMMdd/
> {code}
> As new topics are discovered via the regex pattern (while the app is running),
> the set of BucketingSink output paths would grow.
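> Until something like this is supported out of the box, a custom Bucketer
> could approximate the behaviour by reading the topic off the wrapper record
> sketched above and inserting it before the date partition. Again only a
> sketch (TopicDateTimeBucketer is an illustrative name):
> {code:java}
> import java.text.SimpleDateFormat
> import java.util.Date
>
> import org.apache.flink.streaming.connectors.fs.Clock
> import org.apache.flink.streaming.connectors.fs.bucketing.{Bucketer, BucketingSink}
> import org.apache.hadoop.fs.Path
>
> // Buckets records under <base>/<topic>/<yyyyMMdd>/ instead of <base>/<yyyyMMdd>/.
> class TopicDateTimeBucketer(formatString: String = "yyyyMMdd")
>     extends Bucketer[TopicRecord] {
>
>   override def getBucketPath(clock: Clock, basePath: Path, element: TopicRecord): Path = {
>     val datePart =
>       new SimpleDateFormat(formatString).format(new Date(clock.currentTimeMillis()))
>     new Path(new Path(basePath, element.topic), datePart)
>   }
> }
>
> val sink = new BucketingSink[TopicRecord]("s3://my-bucket/")
> sink.setBucketer(new TopicDateTimeBucketer("yyyyMMdd"))
> // A Writer for the wrapped payload bytes would also be needed; omitted here.
> {code}
> The workaround only holds together because the topic travels on every record;
> having the consumer and BucketingSink cooperate on this natively, as
> requested here, would remove the need for the wrapper type.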
--
This message was sent by Atlassian Jira
(v8.20.1#820001)