I don't think this solves the problem. Here are the issues (rough sketches
for each follow below):
1) How do we push the aggregated data to Vertica in bulk? Opening a
connection per record would be too costly.
2) If a key never shows up again, how do we push its state to Vertica?
3) How do we schedule the dump so we don't accumulate too much data in
state?
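
For (2), a minimal sketch of the direction I'd try, assuming the input is a
DStream[(String, Long)] of (key, count) pairs (the source, checkpoint
directory, and timeout value here are placeholders, not from this thread):
StateSpec.timeout() evicts a key once it has been idle for the given
duration, and on that final call the mapping function sees
state.isTimingOut() == true, so the key's last total can be emitted
downstream instead of being stranded in state.

import org.apache.spark.streaming.{Minutes, State, StateSpec}
import org.apache.spark.streaming.dstream.DStream

// Hypothetical input of (key, count) pairs. Note mapWithState requires
// ssc.checkpoint(...) to be set on the StreamingContext.
val pairs: DStream[(String, Long)] = ???

// Running sum per key. Emits nothing on normal updates; emits the final
// (key, sum) when a key times out, so idle keys still reach Vertica.
def trackSum(key: String, value: Option[Long],
             state: State[Long]): Option[(String, Long)] = {
  if (state.isTimingOut()) {
    Some((key, state.get()))   // last call for this key: hand back its total
  } else {
    val sum = state.getOption().getOrElse(0L) + value.getOrElse(0L)
    state.update(sum)
    None                       // keep accumulating, emit nothing yet
  }
}

val spec = StateSpec.function(trackSum _).timeout(Minutes(10))

// Carries only the keys that went idle; these are the ones to flush.
val timedOut = pairs.mapWithState(spec).flatMap(_.toList)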
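
For (1), the usual pattern is foreachRDD + foreachPartition, so each
partition opens a single connection and sends its rows as one JDBC batch
instead of one round trip per record. A sketch along those lines, reusing
the timedOut stream from above; the Vertica JDBC URL, credentials, and
table name are made-up placeholders:

import java.sql.DriverManager

timedOut.foreachRDD { rdd =>
  rdd.foreachPartition { records =>
    // One connection and one batched INSERT per partition, not per record.
    val conn = DriverManager.getConnection(
      "jdbc:vertica://vertica-host:5433/mydb", "dbuser", "dbpass")
    try {
      conn.setAutoCommit(false)
      val stmt = conn.prepareStatement(
        "INSERT INTO aggregates (key, total) VALUES (?, ?)")
      records.foreach { case (k, v) =>
        stmt.setString(1, k)
        stmt.setLong(2, v)
        stmt.addBatch()
      }
      stmt.executeBatch()
      conn.commit()
      stmt.close()
    } finally {
      conn.close()
    }
  }
}

For (3), the mapWithState timeout already bounds how long data sits in
state; another option is to drop state entirely and use a tumbling
reduceByKeyAndWindow with the window length set to the flush interval
(5-10 min), so each window's result is written out whole and nothing is
carried over between flushes.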



On Mon, May 23, 2016 at 1:33 PM, Ofir Kerker <ofir.ker...@gmail.com> wrote:

> Yes, check out mapWithState:
>
> https://databricks.com/blog/2016/02/01/faster-stateful-stream-processing-in-apache-spark-streaming.html
>
> _____________________________
> From: Nikhil Goyal <nownik...@gmail.com>
> Sent: Monday, May 23, 2016 23:28
> Subject: Timed aggregation in Spark
> To: <user@spark.apache.org>
>
>
>
> Hi all,
>
> I want to aggregate my data for 5-10 minutes and then flush the aggregated
> data to a database like Vertica. updateStateByKey is not exactly helpful
> in this scenario, as I can't flush all the records at once, nor can I
> clear the state. I wanted to know if anyone else has faced a similar issue
> and how they handled it.
>
> Thanks
> Nikhil
>
>
>
