> s arrived (can cause high latency if the arrival
> rate of elements is low or varies).
> The flatmap function can be executed in parallel and does not require a
> keyed stream.
>
> Best, Fabian
>
>
> 2016-04-25 18:58 GMT+02:00 Konstantin Kulagin <kkula...@gmail.com>:
>
Hi guys,
I have a somewhat general question, in order to get a better understanding of
stream vs. finite data transformation. More specifically, I am trying to
understand the lifecycle of entities during processing.
1) For example, in the case of streams: suppose we start with some key-value
source and parallelize it... (a minimal sketch of such a setup follows below)
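
Just to make that setup concrete, here is a minimal sketch; the Tuple2<Long, String>
element type and the sample values are assumptions for illustration only, not the
actual source from the post:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyValueSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy key-value elements; keyBy(0) partitions the stream by the key field,
        // so downstream operators run in parallel, one key group per sub-task.
        DataStream<Tuple2<Long, String>> keyed = env
                .fromElements(
                        Tuple2.of(1L, "a"),
                        Tuple2.of(2L, "b"),
                        Tuple2.of(1L, "c"))
                .keyBy(0);

        keyed.print();
        env.execute("key-value sketch");
    }
}
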
I was trying to implement this (forcing Flink to handle all values from the
input) but had no success...
Probably I am not getting something about Flink's windowing mechanism.
I've created my 'finishing' trigger, which is basically a copy of the purging
trigger, but was not able to make it work:
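
(The original trigger code is not included here. For reference, a rough sketch of
this kind of "finishing" trigger, written against a recent Flink Trigger API, could
look like the class below. It wraps another trigger PurgingTrigger-style and
additionally fires-and-purges on the final Long.MAX_VALUE watermark that a finite
source emits when it ends. The class name FinishingTrigger and the end-of-input
timer trick are illustrative assumptions, and the sketch assumes event time is
enabled so the final watermark actually reaches the window operator.)

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.Window;

// Wraps a nested trigger, turns its FIRE into FIRE_AND_PURGE (like PurgingTrigger),
// and also flushes the window when the final watermark of a finite input arrives.
public class FinishingTrigger<T, W extends Window> extends Trigger<T, W> {

    private final Trigger<T, W> nested;

    public FinishingTrigger(Trigger<T, W> nested) {
        this.nested = nested;
    }

    @Override
    public TriggerResult onElement(T element, long timestamp, W window, TriggerContext ctx) throws Exception {
        // Timer for the "end of time" watermark that finite sources emit on shutdown.
        ctx.registerEventTimeTimer(Long.MAX_VALUE);
        return purgeIfFired(nested.onElement(element, timestamp, window, ctx));
    }

    @Override
    public TriggerResult onEventTime(long time, W window, TriggerContext ctx) throws Exception {
        if (time == Long.MAX_VALUE) {
            // Final watermark: emit whatever is still buffered in the window.
            return TriggerResult.FIRE_AND_PURGE;
        }
        return purgeIfFired(nested.onEventTime(time, window, ctx));
    }

    @Override
    public TriggerResult onProcessingTime(long time, W window, TriggerContext ctx) throws Exception {
        return purgeIfFired(nested.onProcessingTime(time, window, ctx));
    }

    @Override
    public void clear(W window, TriggerContext ctx) throws Exception {
        ctx.deleteEventTimeTimer(Long.MAX_VALUE);
        nested.clear(window, ctx);
    }

    private static TriggerResult purgeIfFired(TriggerResult result) {
        return result.isFire() ? TriggerResult.FIRE_AND_PURGE : result;
    }
}

Wrapping Flink's CountTrigger with it on a GlobalWindows window, for example, would
give count-based firing that still flushes the last, possibly incomplete, batch when
a finite input ends.
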
No problem at all - there are not many Flink people and a lot of people asking
questions, so it must be hard to keep track of each person's issues :)
Yes, it is not as easy as a 'contains' operator: I need to collect some amount
of tuples in order to build an in-memory Lucene index. After that I will
filter...
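
(The actual code is not shown in the thread, but a rough sketch of that pattern,
following the flatMap suggestion quoted above, could look like the class below.
Everything in it is an assumption for illustration: String elements, a batch size
of 1000, a hard-coded TermQuery on a "text" field, and Lucene's in-memory
RAMDirectory, which is deprecated in recent Lucene releases.)

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;

// Buffers elements, builds an in-memory Lucene index over each full batch,
// and only emits the elements of the batch that match a query.
public class LuceneBatchFilter implements FlatMapFunction<String, String> {

    private static final int BATCH_SIZE = 1000;   // assumption, tune as needed
    private final List<String> buffer = new ArrayList<>();

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        buffer.add(value);
        if (buffer.size() >= BATCH_SIZE) {
            for (String match : filterWithLucene(buffer)) {
                out.collect(match);
            }
            buffer.clear();
        }
    }

    // Index the batch in memory and return only the documents matching the query.
    private static List<String> filterWithLucene(List<String> batch) throws IOException {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (String s : batch) {
                Document doc = new Document();
                doc.add(new TextField("text", s, Field.Store.YES));
                writer.addDocument(doc);
            }
        }
        List<String> matches = new ArrayList<>();
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            ScoreDoc[] hits = searcher
                    .search(new TermQuery(new Term("text", "flink")), batch.size())
                    .scoreDocs;
            for (ScoreDoc hit : hits) {
                matches.add(searcher.doc(hit.doc).get("text"));
            }
        }
        return matches;
    }
}

A plain stream.flatMap(new LuceneBatchFilter()) can run with parallelism greater
than one and does not need a keyed stream, which is the point of the flatMap
suggestion quoted at the top. One caveat, and it is exactly the issue discussed
above: whatever is still sitting in the buffer when a finite input ends is never
flushed, so a real implementation would also need some end-of-input flush (or a
windowed variant with a trigger like the one sketched earlier).
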
Hi guys,
I am trying to run this example:
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

// Note: the generic type parameters were stripped by the mail archive;
// Tuple2<Long, String> is only a guess based on the key-value source described earlier.
DataStreamSource<Tuple2<Long, String>> source = env.addSource(
        new SourceFunction<Tuple2<Long, String>>() {
            @Override
            public void