Hi Marco,

Not sure if I understood your problem correctly, but you can apply those
different windows to data "split" from the same input within the same Flink job.
Something along these lines:

DataStream<SomePojo> stream = ...;

DataStream<SomePojo> a = stream.filter( /* time series name == "a" */ );
DataStream<SomePojo> aResult = a.keyBy(...)
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    .aggregate( /* compute the one-minute average */ );

DataStream<SomePojo> b = stream.filter( /* time series name == "b" */ );
DataStream<SomePojo> bResult = b.keyBy(...)
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .aggregate( /* compute the five-minute average */ );

If needed, you can then union all of the separate result streams together:

aResult.union(bResult, cResult ...);

There is no need for separate Flink deployments to create such a pipeline.
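
In case a fuller picture helps, below is a minimal self-contained sketch of
the same idea. It assumes the Flink 1.14-era DataStream API; the SomePojo
fields, the Avg aggregate, the class and job names, and the inline sample
elements are illustrative placeholders, not anything from your actual job:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class PerSeriesWindowsJob {

    // (timestamp, name, value) record, as described in your question.
    public static class SomePojo {
        public long timestamp;
        public String name;
        public double value;
        public SomePojo() {} // no-arg constructor for Flink's POJO serializer
        public SomePojo(long timestamp, String name, double value) {
            this.timestamp = timestamp;
            this.name = name;
            this.value = value;
        }
    }

    // Incrementally computes the average value within a window
    // using a (sum, count) accumulator.
    public static class Avg
            implements AggregateFunction<SomePojo, Tuple2<Double, Long>, Double> {
        @Override public Tuple2<Double, Long> createAccumulator() {
            return Tuple2.of(0.0, 0L);
        }
        @Override public Tuple2<Double, Long> add(SomePojo e, Tuple2<Double, Long> acc) {
            return Tuple2.of(acc.f0 + e.value, acc.f1 + 1);
        }
        @Override public Double getResult(Tuple2<Double, Long> acc) {
            return acc.f0 / acc.f1;
        }
        @Override public Tuple2<Double, Long> merge(Tuple2<Double, Long> x, Tuple2<Double, Long> y) {
            return Tuple2.of(x.f0 + y.f0, x.f1 + y.f1);
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // In your case this would be the enriched Kafka stream; a few inline
        // elements stand in here so the example runs on its own.
        DataStream<SomePojo> stream = env
                .fromElements(
                        new SomePojo(0L, "a", 1.0),
                        new SomePojo(30_000L, "a", 3.0),
                        new SomePojo(60_000L, "c", 5.0))
                .assignTimestampsAndWatermarks(
                        WatermarkStrategy.<SomePojo>forMonotonousTimestamps()
                                .withTimestampAssigner((e, ts) -> e.timestamp));

        // One filtered branch per required window size, all in the same job.
        DataStream<Double> aAvg = stream
                .filter(e -> e.name.equals("a"))
                .keyBy(e -> e.name)
                .window(TumblingEventTimeWindows.of(Time.minutes(1)))
                .aggregate(new Avg());

        DataStream<Double> cAvg = stream
                .filter(e -> e.name.equals("c"))
                .keyBy(e -> e.name)
                .window(TumblingEventTimeWindows.of(Time.minutes(2)))
                .aggregate(new Avg());

        aAvg.union(cAvg).print();

        env.execute("per-series windows in one job");
    }
}

Each filter/keyBy/window branch is just another part of the same job graph,
so a single deployment covers all of the window sizes.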

Best,
Alexander Fedulov

On Wed, Jan 26, 2022 at 6:47 PM Marco Villalobos <mvillalo...@kineteque.com>
wrote:

> Hi,
>
> I am working with time series data in the form of (timestamp, name,
> value), where the event time is the timestamp at which the data was
> published onto Kafka. I have a business requirement in which each stream
> element becomes enriched, and then different time series names must be
> processed in different windows with different time averages.
>
> For example, time series with name "a"
>
> might require a one minute window, and five minute window.
>
> time series with name "b" requires no windowing.
>
> time series with name "c" requires a two minute window and 10 minute
> window.
>
> Does Flink support this style of windowing?  I think it doesn't.  Also,
> does any streaming platform support that type of windowing?
>
> I was thinking that this type of windowing support might require a
> different Flink deployment per window.  Would that scale, though, if
> there are tens of thousands of time series names / windows?
>
> Any help or advice would be appreciated. Thank you.
>
> Marco A. Villalobos
>
