> worked.
>
> Though there are some more complications.
> On Oct 30, 2015 8:27 AM, "skaarthik oss" wrote:
>
>> Did you consider UpdateStateByKey operation?
>>
>>
>>
>> *From:* Sandeep Giri [mailto:sand...@knowbigdata.com]
>> *Sent:* Thursday, October 29, 2015 3:09 PM
>> *To:* user ; dev
>> *Subject:* Maintaining overall cumulative data in Spark Streaming
>>
>> Dear All,
>>
>> If a continuous stream of text is coming in and you have to keep
>> publishing the overall word count so far since 0:00 today, what would
>> you do? Publishing the results for a window is easy but if we have to
>> keep aggregating the results, how to go about it?
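
A minimal sketch of what the updateStateByKey route could look like in
Scala, assuming a hypothetical socket text source on localhost:9999 and a
local master (the source, checkpoint path, and names are illustrative, not
from the thread):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CumulativeWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("CumulativeWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    // updateStateByKey needs a checkpoint directory to persist state across batches
    ssc.checkpoint("/tmp/wordcount-checkpoint")

    val lines = ssc.socketTextStream("localhost", 9999)
    val pairs = lines.flatMap(_.split("\\s+")).map(word => (word, 1L))

    // Fold each batch's counts for a word into its running total
    val updateTotals = (batchCounts: Seq[Long], state: Option[Long]) =>
      Some(state.getOrElse(0L) + batchCounts.sum)

    val runningTotals = pairs.updateStateByKey(updateTotals)
    runningTotals.print() // cumulative (word, count) pairs since the job started

    ssc.start()
    ssc.awaitTermination()
  }
}

Note the state here accumulates from job start; "since 0:00 today" would
additionally need the state cleared (or the job restarted) at midnight.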
-dev +user
Hi Sandeep,
Perhaps (flat)mapping values and using an accumulator?
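
A minimal sketch of that accumulator idea, with the caveat that an
accumulator is a single scalar readable only on the driver, so it yields a
total word count rather than per-word counts (the source and names are
illustrative; longAccumulator is the Spark 2.0+ API):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object AccumulatorTotal {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("AccumulatorTotal").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))
    val totalWords = ssc.sparkContext.longAccumulator("totalWords")

    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split("\\s+")).foreachRDD { rdd =>
      // rdd.count() runs on the cluster; add() here executes on the driver
      totalWords.add(rdd.count())
      println(s"Words since start: ${totalWords.value}")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}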
> On 29/10/2015, at 23:08, Sandeep Giri wrote:
>
> Dear All,
>
> If a continuous stream of text is coming in and you have to keep publishing
> the overall word count so far since 0:00 today, what would you do?
> Publishing the results for a window is easy but if we have to keep
> aggregating the results, how to go about it?
Dear All,
If a continuous stream of text is coming in and you have to keep publishing
the overall word count so far since 0:00 today, what would you do?
Publishing the results for a window is easy but if we have to keep
aggregating the results, how to go about it?
I have tried to keep an StreamR
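
For completeness: later Spark releases (1.6 and up) added mapWithState,
which keeps per-key state like updateStateByKey but only visits keys that
appear in the current batch. A sketch reusing the hypothetical pairs
DStream from the earlier example (mapWithState also requires
ssc.checkpoint(...) to be set):

import org.apache.spark.streaming.{State, StateSpec}

// pairs: DStream[(String, Long)] of (word, 1L) tuples, as in the earlier sketch
val spec = StateSpec.function(
  (word: String, one: Option[Long], state: State[Long]) => {
    val total = state.getOption.getOrElse(0L) + one.getOrElse(0L)
    state.update(total) // carry the running total into the next batch
    (word, total)       // emit the cumulative count for this word
  })
val runningTotals = pairs.mapWithState(spec)
runningTotals.print()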