Ara, may I ask why you need to use micro-batching in the first place?
The reason I am asking: typically, when people talk about micro-batching, they are referring to the way some originally batch-based stream processing tools "bolt on" real-time processing by making their batch sizes really small. In that sense, micro-batching belongs to the realm of the inner workings of the stream processing tool. Orthogonal to that, you have features/operations such as windowing, triggers, etc. that -- unlike micro-batching -- allow you, as the user of the stream processing tool, to define exactly which computation logic you need. Whether or not, say, windowing is computed via micro-batching behind the scenes should (at least in an ideal world) be of no concern to the user.

-Michael

On Mon, Sep 5, 2016 at 9:10 PM, Ara Ebrahimi <ara.ebrah...@argyledata.com> wrote:
> Hi,
>
> What’s the best way to do micro-batching in Kafka Streams? Any plans for a
> built-in mechanism? Perhaps StateStore could act as the buffer? What
> exactly are ProcessorContext.schedule()/punctuate() for? They don’t seem
> to be used anywhere?
>
> http://hortonworks.com/blog/apache-storm-design-pattern-micro-batching/
>
> Ara.
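P.S. For illustration, the buffering pattern that ProcessorContext.schedule()/punctuate() would enable can be sketched without any Kafka dependencies. This is only a minimal sketch of the general micro-batching idea -- buffer records as they arrive, flush when the buffer is full or when a periodic "punctuation" fires; the MicroBatcher class and its method names are hypothetical and not part of the Kafka Streams API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: buffer incoming records and flush them in
// small batches, either when the buffer reaches maxBatchSize or
// when a periodic timer callback fires (the role that
// ProcessorContext.schedule()/punctuate() play in Kafka Streams).
final class MicroBatcher<T> {
    private final int maxBatchSize;
    private final Consumer<List<T>> flushAction;
    private final List<T> buffer = new ArrayList<>();

    MicroBatcher(int maxBatchSize, Consumer<List<T>> flushAction) {
        this.maxBatchSize = maxBatchSize;
        this.flushAction = flushAction;
    }

    // Called once per incoming record (akin to Processor.process()).
    void process(T record) {
        buffer.add(record);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // Called periodically by a timer (akin to punctuate(), which the
    // framework invokes on the schedule the processor registered).
    void punctuate() {
        flush();
    }

    private void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        flushAction.accept(new ArrayList<>(buffer));
        buffer.clear();
    }
}
```

In a real processor the buffer would live in a StateStore rather than an in-memory list, so that a partially filled batch survives a restart.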