Hi Michael,

Thanks for the response. I was thinking more in terms of the regular
streaming model, so in this case I am a little confused: what should my
window interval and slide interval be for the following case?

I need to hold a state (say a count) for 24 hours while capturing all of
its updates and producing results every second. I also need to reset the
state (the count) back to zero every 24 hours.
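To make it concrete, here is roughly what I have in mind (just a sketch,
not working code from my job: the socket source on localhost:9999 is a
placeholder for my real receiver, and I'm assuming a Spark version that
has Trigger.ProcessingTime and update output mode, i.e. 2.2 or later):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder.appName("TumblingCountSketch").getOrCreate()
import spark.implicits._

// Placeholder source standing in for my long-running receiver.
// includeTimestamp adds a "timestamp" column to each row.
val events = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .option("includeTimestamp", true)
  .load()

// A 24-hour tumbling window (window duration only, no slide interval).
val counts = events
  .groupBy(window($"timestamp", "24 hours"))
  .count()

// The trigger only controls how often results are produced;
// the window function controls how rows are grouped.
val query = counts.writeStream
  .outputMode("update")
  .format("console")
  .trigger(Trigger.ProcessingTime("1 second"))
  .start()

query.awaitTermination()

If I understand the tumbling window correctly, each event falls into
exactly one 24-hour window, so the count would effectively reset to zero
at every window boundary without me doing anything explicit. Is that
right?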

On Mon, Apr 10, 2017 at 11:49 AM, Michael Armbrust <mich...@databricks.com>
wrote:

> Nope, structured streaming eliminates the limitation that micro-batching
> imposes on the results of your streaming query. The trigger is just an
> indication of how often you want to produce results (and if you leave it
> blank, we just run as quickly as possible).
>
> To control how tuples are grouped into a window, take a look at the window
> <http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#window-operations-on-event-time>
> function.
>
> On Thu, Apr 6, 2017 at 10:26 AM, kant kodali <kanth...@gmail.com> wrote:
>
>> Hi All,
>>
>> Is the trigger interval mentioned in this doc
>> <http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html>
>> the same as the batch interval in structured streaming? For example, I
>> have a long-running receiver (not Kafka) that sends me a real-time
>> stream. I want to use a window interval and slide interval of 24 hours
>> to create a tumbling-window effect, but I want to process updates every
>> second.
>>
>> Thanks!
>>
>
>