I want to map over a Cassandra table in Spark, but the code my mappers run
needs a shutdown() call to terminate its threads, release file handles, etc.

Will Spark always execute my mappers in a forked process? And if so, how do
I handle threads preventing the JVM from terminating?
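To make the JVM-side of the question concrete: any non-daemon thread keeps the
process alive after main() returns, which is exactly what an un-shutdown thread
pool does. A minimal sketch (plain Java, no Spark involved; the pool and task
are just illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws Exception {
        // A fixed pool creates non-daemon worker threads by default,
        // so the JVM will not exit while the pool is still alive.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> result = pool.submit(() -> 21 * 2);
        System.out.println("result = " + result.get());

        // Without this call, the non-daemon workers keep the process
        // running even after main() returns.
        pool.shutdown();
        boolean done = pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("terminated = " + done);
    }
}
```

The alternative is to create the threads as daemons (via a custom
ThreadFactory), in which case the JVM can exit without an explicit shutdown,
though the threads are then killed without any cleanup.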

It would be nice if there were a way to clean up after yourself gracefully
in map jobs, but I don't think that exists right now.
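One common pattern (not a Spark feature per se) is to use mapPartitions
instead of map, so you get one iterator per partition and can wrap the whole
partition's work in try/finally. A Spark-free sketch of that shape, with a
hypothetical Connection resource standing in for whatever needs shutdown():

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class PartitionCleanupSketch {
    // Hypothetical per-partition resource that must be released explicitly.
    static class Connection implements AutoCloseable {
        String query(String row) { return row.toUpperCase(); }
        @Override public void close() { System.out.println("connection closed"); }
    }

    // The shape of a mapPartitions-style function: open one resource per
    // partition, consume the whole partition eagerly, and release the
    // resource in finally so cleanup happens even if a row fails.
    static List<String> mapPartition(Iterator<String> partition) {
        Connection conn = new Connection();
        try {
            List<String> out = new ArrayList<>();
            while (partition.hasNext()) {
                out.add(conn.query(partition.next()));
            }
            return out;
        } finally {
            conn.close(); // runs even if query() throws
        }
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("a", "b", "c");
        System.out.println(mapPartition(rows.iterator()));
    }
}
```

The eager consumption matters: if you return a lazy iterator and close the
resource before the caller drains it, the cleanup fires too early.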

-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
<http://spinn3r.com>
