Re: Ending a job early

2014-10-28 Thread Patrick Wendell
Hey Jim,

There are some experimental (unstable) APIs that support running jobs
which can short-circuit:

https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/SparkContext.scala#L1126
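
For what it's worth, here is a rough sketch of how a job can be cut
short with that machinery. I'm assuming the experimental
SparkContext.submitJob / FutureAction interface here (partition results
are handed back to the driver as they complete, and the returned future
can be cancelled); the exact method behind that line number may have
moved around, so treat this as illustrative rather than authoritative:

  // Assumes a spark-shell session, so `sc` is already defined.
  val data = sc.parallelize(1 to 1000000, 100).map(_.toDouble)

  var sum = 0.0
  var count = 0L

  // Submit a job over all partitions; each partition's (sum, count)
  // is handed back to the driver as soon as that partition finishes.
  val future = sc.submitJob[Double, (Double, Long), Unit](
    data,
    it => { var s = 0.0; var n = 0L; it.foreach { x => s += x; n += 1 }; (s, n) },
    0 until data.partitions.length,
    (index, partial) => {
      sum += partial._1
      count += partial._2
      println(s"partition $index done, running mean so far: ${sum / count}")
    },
    ())

  // From another thread (or whatever convergence monitor you have),
  // the job can be stopped early once the running mean looks stable:
  // future.cancel()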

This can be used for doing online aggregations like you are
describing. And in one or two cases we've exposed functions that rely
on this:

https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala#L334
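
In user code those look roughly like the following. This is a sketch
assuming a spark-shell session (so `sc` exists) and the standard
approximate actions (meanApprox / countByKeyApprox); I'm not certain
that's the exact function behind the line linked above:

  // Approximate actions take a timeout in milliseconds and a confidence
  // level, and return a PartialResult holding whatever estimate was
  // reached when the timeout fired, with error bounds.
  val nums = sc.parallelize(1 to 10000000, 200).map(_.toDouble)

  val approxMean = nums.meanApprox(timeout = 2000, confidence = 0.95)
  println(approxMean.initialValue)   // BoundedDouble with mean and [low, high] bounds

  val pairs = nums.map(x => ((x % 10).toInt, x))
  val approxCounts = pairs.countByKeyApprox(timeout = 2000, confidence = 0.95)
  println(approxCounts.initialValue) // Map[Int, BoundedDouble]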

I would expect more robust support for online aggregation to show up
in a future version of Spark.
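
Until then, one workaround that sticks to stable APIs is to process a
growing subset of partitions yourself and stop submitting work once your
estimate has settled down. A rough sketch for the column-average case
(the tolerance and the 10-partitions-per-round step are made up for
illustration):

  // Process a growing subset of partitions and stop once the estimated
  // mean moves by less than `tol` between rounds.
  val rdd = sc.parallelize(1 to 10000000, 200).map(_.toDouble)
  val numPartitions = rdd.partitions.length
  val tol = 1e-4

  var processed = 0
  var sum = 0.0
  var count = 0L
  var prevMean = Double.NaN
  var converged = false

  while (!converged && processed < numPartitions) {
    val batch = processed until math.min(processed + 10, numPartitions)
    // Tasks are still scheduled for every partition, but the ones
    // outside `batch` return immediately without consuming their input.
    val (s, n) = rdd
      .mapPartitionsWithIndex((i, it) => if (batch.contains(i)) it else Iterator.empty)
      .aggregate((0.0, 0L))(
        (acc, x) => (acc._1 + x, acc._2 + 1),
        (a, b) => (a._1 + b._1, a._2 + b._2))
    sum += s; count += n; processed = batch.end
    val mean = sum / count
    converged = !prevMean.isNaN && math.abs(mean - prevMean) < tol
    prevMean = mean
  }
  println(s"estimated mean $prevMean after $processed of $numPartitions partitions")

SparkContext.runJob also has overloads that take an explicit list of
partition ids, which avoids even scheduling tasks for the partitions
you skip.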

- Patrick

On Tue, Oct 28, 2014 at 7:27 AM, Jim Carroll  wrote:
>
> We have some very large datasets where the calculations converge on a result.
> Our current implementation allows us to track how quickly the calculations
> are converging and end the processing early. This can significantly speed up
> some of our processing.
>
> Is there a way to do the same thing in Spark?
>
> A trivial example might be a column average on a dataset. As we're
> 'aggregating' rows into columnar averages, I can track how fast these
> averages are moving and decide to stop after only a low percentage of
> the rows have been processed, producing an estimate rather than an
> exact value.
>
> Within a partition, or better yet, within a worker across 'reduce' steps, is
> there a way to stop all of the aggregations and just continue on with
> reducing the data that has already been processed?
>
> Thanks
> Jim




Ending a job early

2014-10-28 Thread Jim Carroll

We have some very large datasets where the calculations converge on a result.
Our current implementation allows us to track how quickly the calculations
are converging and end the processing early. This can significantly speed up
some of our processing.

Is there a way to do the same thing in Spark?

A trivial example might be a column average on a dataset. As we're
'aggregating' rows into columnar averages, I can track how fast these
averages are moving and decide to stop after only a low percentage of
the rows have been processed, producing an estimate rather than an
exact value.
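
(For concreteness, here is a toy single-machine version of the kind of
convergence check I mean; the tolerance and warm-up count are made up:)

  // Update a running mean row by row and bail out once it stops moving.
  def approximateMean(rows: Iterator[Double], tol: Double = 1e-6): Double = {
    var mean = 0.0
    var n = 0L
    while (rows.hasNext) {
      n += 1
      val delta = (rows.next() - mean) / n                // incremental mean update
      mean += delta
      if (n > 1000 && math.abs(delta) < tol) return mean  // early exit: estimate
    }
    mean                                                  // saw every row: exact
  }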

Within a partition, or better yet, within a worker across 'reduce' steps, is
there a way to stop all of the aggregations and just continue on with
reducing the data that has already been processed?

Thanks
Jim




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Ending-a-job-early-tp17505.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
