Thanks for the explanation!
Very nice set of features. Looking forward to checking it out myself :-)
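For anyone reading along: the two reduce flavours described in the thread below (a grouped rolling reduce, and a count-based sliding-window "batchReduce") can be sketched in plain Java. This is an illustrative sketch of the semantics only; the class and method names here are made up and are not the actual flink-streaming API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

// Illustrative sketch (NOT the flink-streaming API) of the two reduce
// flavours discussed in this thread.
public class ReduceSketch {

    // groupBy(key).reduce(...): a per-key running aggregate, updated and
    // emitted on every incoming record.
    static class GroupedRollingReduce {
        private final Map<String, Integer> state = new HashMap<>();
        private final BinaryOperator<Integer> reducer;

        GroupedRollingReduce(BinaryOperator<Integer> reducer) {
            this.reducer = reducer;
        }

        // Feed one record; returns the updated aggregate for that key.
        int apply(String key, int value) {
            int updated = state.containsKey(key)
                    ? reducer.apply(state.get(key), value)
                    : value;
            state.put(key, updated);
            return updated;
        }
    }

    // batchReduce-style: a sliding window over the last N records; emits
    // the aggregate of the current window contents on every new record.
    static class CountWindowReduce {
        private final Deque<Integer> window = new ArrayDeque<>();
        private final int size;
        private final BinaryOperator<Integer> reducer;

        CountWindowReduce(int size, BinaryOperator<Integer> reducer) {
            this.size = size;
            this.reducer = reducer;
        }

        int apply(int value) {
            window.addLast(value);
            if (window.size() > size) {
                window.removeFirst();  // evict the oldest record
            }
            return window.stream().reduce(reducer).get();
        }
    }

    public static void main(String[] args) {
        GroupedRollingReduce sums = new GroupedRollingReduce(Integer::sum);
        System.out.println(sums.apply("a", 1)); // running sum for "a": 1
        System.out.println(sums.apply("a", 2)); // running sum for "a": 3
        System.out.println(sums.apply("b", 5)); // key "b" is independent: 5

        CountWindowReduce lastThree = new CountWindowReduce(3, Integer::sum);
        System.out.println(lastThree.apply(1)); // window [1]       -> 1
        System.out.println(lastThree.apply(2)); // window [1,2]     -> 3
        System.out.println(lastThree.apply(3)); // window [1,2,3]   -> 6
        System.out.println(lastThree.apply(4)); // window [2,3,4]   -> 9
    }
}
```

A time-based windowReduce would work the same way, except eviction would be driven by record timestamps rather than a fixed count.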


2014-08-18 21:38 GMT+02:00 Gyula Fóra <gyula.f...@gmail.com>:

> Hey,
>
> The simple reduce works like you said, yes. But there is also a grouped
> reduce, which you can use by calling .groupBy(keyposition) and then reduce.
>
> There are also reduces for windows: batchReduce and windowReduce.
> batchReduce gives you a sliding window over a predefined number of records,
> and windowReduce gives you the same but by time. (There are also grouped
> versions of these.)
>
> Cheers,
> Gyula
>
>
> On Mon, Aug 18, 2014 at 9:19 PM, Fabian Hueske <fhue...@apache.org> wrote:
>
> > Hi folks,
> >
> > great work!
> >
> > Looking at the example I have a quick question. What's the semantics of
> > the Reduce operator? I guess it's not a window reduce.
> > Is it backed by a hash table and every input tuple updates the hash table
> > and returns the updated value?
> >
> > Cheers, Fabian
> >
> >
> > 2014-08-18 20:53 GMT+02:00 Stephan Ewen <se...@apache.org>:
> >
> > > The streaming code is in "flink-addons", for new/experimental code.
> > >
> > > Documents should come over the next days/weeks, definitely before we
> > > make this part of the core.
> > >
> > > Right now, I would suggest having a look at some of the examples to
> > > get a feeling for the addon; check, for example, this one:
> > >
> > >
> > > https://github.com/apache/incubator-flink/tree/master/flink-addons/flink-streaming/flink-streaming-examples/src/main/java/org/apache/flink/streaming/examples/wordcount
> > >
> > > (The example reads a file for simplicity, but the project also provides
> > > connectors for Kafka, RabbitMQ, ...)
> > >
> >
>
