If you’re looking for executor-side setup and cleanup functions, there are none yet, but you can achieve the same semantics via RDD.mapPartitions.

Please see the “setup() and cleanup()” section of this Cloudera blog post for details: http://blog.cloudera.com/blog/2014/09/how-to-translate-from-mapreduce-to-apache-spark/
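For illustration, here is a minimal Python sketch of that pattern. The Connection class and its transform method are hypothetical stand-ins for whatever per-partition resource you need; in Spark you would pass the function to rdd.mapPartitions, but below it is driven with a plain iterator to show the semantics:

```python
class Connection:
    """Hypothetical expensive per-partition resource (e.g. a DB connection)."""
    def __init__(self):
        self.open = True

    def transform(self, record):
        return record * 2

    def close(self):
        self.open = False


def process_partition(records):
    conn = Connection()          # setup: runs once per partition
    try:
        for record in records:   # process each record with the shared resource
            yield conn.transform(record)
    finally:
        conn.close()             # cleanup: runs after the partition is exhausted


# In Spark: rdd.mapPartitions(process_partition)
# Standalone, a partition is just an iterator:
result = list(process_partition(iter([1, 2, 3])))
```

Because process_partition is a generator, setup happens lazily when the first record is pulled, and the finally block guarantees cleanup even if processing raises.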

On 11/14/14 10:44 AM, Dai, Kevin wrote:

Hi all,

Is there a setup and cleanup function in Spark, as in Hadoop MapReduce, for doing initialization and cleanup work?

Best Regards,

Kevin.
