Hi all,

I wonder whether it’s possible to use Flink for the following requirement. We 
need to process a Kinesis stream and, based on values in each record, route 
those records to different S3 buckets and keyspaces, with support for batching 
records into files and control over the partitioning scheme (so preferably 
through Firehose).

I know it’s straightforward to have a Kinesis source and a Kinesis sink, and 
then hook up Firehose to the sink on the AWS side, but I need a “fan-out” to 
potentially thousands of different buckets, based on the content of each event.
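To make the routing concrete, here is a rough sketch of the kind of per-record logic I have in mind, as plain Java. The field names (tenantId, eventType) and the bucket naming scheme are just placeholders for illustration, not our actual schema:

```java
import java.util.Map;

public class BucketRouter {
    // Hypothetical routing rule: derive the target S3 bucket and key prefix
    // from fields of each event. In the real pipeline this decision would
    // have to happen per record inside Flink (or via Firehose), since the
    // destination is not known until the event content is inspected.
    static String targetPath(Map<String, String> event) {
        // "tenantId" and "eventType" are assumed field names for this sketch.
        String tenant = event.getOrDefault("tenantId", "unknown");
        String type = event.getOrDefault("eventType", "misc");
        return "s3://events-tenant-" + tenant + "/" + type + "/";
    }

    public static void main(String[] args) {
        Map<String, String> e = Map.of("tenantId", "42", "eventType", "clicks");
        // One tenant/type combination maps to one bucket + prefix.
        System.out.println(targetPath(e)); // prints s3://events-tenant-42/clicks/
    }
}
```

The hard part is not computing this path but getting a sink that can write to thousands of dynamically chosen buckets with batching and partitioning control, which is why I’m asking.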

Thanks!
Turar
