Hi All,

I'm quite new to this topic and to Spark in general.

I have a sensor that pushes data in real time, and I need to calculate
some KPIs based on the data I have received. Given that some of the KPIs
cover a very long history (e.g. the average number of events over the last
3 months), I was wondering what the best approach to this is with Spark.

The approach I'm currently following is to compute partial KPIs in real
time, and then derive the remaining KPIs with a second Spark job scheduled
on a daily / weekly / monthly basis.
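
To make this concrete, here is a rough sketch of what I have in mind
(all class names, hosts and paths below are placeholders I made up; I'm
using Spark Streaming with DStreams):

  // Tier 1: long-running streaming job that pre-aggregates the sensor
  // events per micro-batch and appends the partial counts to HDFS.
  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object PartialKpiStream {
    def main(args: Array[String]): Unit = {
      val ssc = new StreamingContext(
        new SparkConf().setAppName("partial-kpi-stream"), Seconds(60))
      // Placeholder source; in my case the sensor pushes over the network.
      val events = ssc.socketTextStream("sensor-host", 9999)
      // One partial count per minute, written out as a new batch directory.
      events.count().saveAsTextFiles("hdfs:///kpi/partial/events", "txt")
      ssc.start()
      ssc.awaitTermination()
    }
  }

  // Tier 2: short-lived batch job, run once a day / week / month, that
  // reads all the partial counts and derives the long-window KPI.
  import org.apache.spark.{SparkConf, SparkContext}

  object LongWindowKpi {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("long-window-kpi"))
      val partials = sc.textFile("hdfs:///kpi/partial/events-*")
                       .map(_.trim.toLong)
      println(s"avg events per minute over the window: ${partials.mean()}")
      sc.stop()
    }
  }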

Does that make sense? If so, how can I schedule Spark to run only once a
day / week / month?
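
For example, would a plain cron entry that calls spark-submit be the usual
way to do this (paths below are placeholders again), or is there something
more Spark-native that people recommend?

  # run the long-window KPI batch job every night at 02:00
  0 2 * * * /opt/spark/bin/spark-submit --class LongWindowKpi \
      /opt/jobs/kpi-batch.jar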

Thx
D


