We have some very large datasets where the calculations converge on a result.
Our current implementation lets us track how quickly the calculations are
converging and end the processing early, which can significantly speed up some
of our processing.

Is there a way to do the same thing in Spark?

A trivial example might be a column average over a dataset. As rows are
'aggregated' into columnar averages, I can track how fast those averages are
moving and decide to stop after only a small percentage of the rows has been
processed, producing an estimate rather than an exact value.
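
To make that concrete, here is a rough sketch of the kind of convergence
check I mean (plain Scala, outside Spark; the tolerance and check interval
are made-up values):

// Early-stopping average: quit once the running mean stops moving.
def approxMean(values: Iterator[Double],
               tolerance: Double = 1e-6,
               checkEvery: Int = 10000): Double = {
  var count = 0L
  var mean = 0.0
  var lastMean = Double.NaN
  var done = false
  while (values.hasNext && !done) {
    count += 1
    mean += (values.next() - mean) / count  // incremental running mean
    if (count % checkEvery == 0) {
      // Stop once successive checks agree to within the tolerance.
      if (!lastMean.isNaN && math.abs(mean - lastMean) < tolerance) done = true
      lastMean = mean
    }
  }
  mean
}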

Within a partition, or better yet, within a worker across 'reduce' steps, is
there a way to stop all of the aggregations and just continue on with reduces
of the already-processed data?
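
For the within-a-partition case I can imagine doing it by hand with
mapPartitions, simply not draining each partition's iterator once its local
estimate settles and then combining the partials with an ordinary reduce.
A rough sketch of what I mean (the function name, tolerance, and check
interval are made up):

import org.apache.spark.rdd.RDD

// Hypothetical per-partition early stop: stop reading each partition once
// its local mean settles, emit a partial (sum, count), then combine the
// partials on the driver.
def partitionApproxMean(data: RDD[Double],
                        tolerance: Double = 1e-6,
                        checkEvery: Int = 10000): Double = {
  val partials: RDD[(Double, Long)] = data.mapPartitions { iter =>
    var sum = 0.0
    var count = 0L
    var lastMean = Double.NaN
    var done = false
    while (iter.hasNext && !done) {
      sum += iter.next()
      count += 1
      if (count % checkEvery == 0) {
        val mean = sum / count
        if (!lastMean.isNaN && math.abs(mean - lastMean) < tolerance) done = true
        lastMean = mean
      }
    }
    Iterator.single((sum, count))  // one partial result per partition
  }
  val (sum, count) = partials.reduce((a, b) => (a._1 + b._1, a._2 + b._2))
  sum / count
}

What I don't see is how to do the same thing across reduce steps on a worker,
or how to tell the remaining partitions to stop once a global estimate has
converged.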

Thanks
Jim



