[ https://issues.apache.org/jira/browse/CRUNCH-566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nithin Asokan updated CRUNCH-566:
---------------------------------

    Summary: SparkPipeline done() should wait for all jobs to complete before closing SparkContext  (was: SparkPipeline done() should wait for all jobs to complete)

> SparkPipeline done() should wait for all jobs to complete before closing SparkContext
> --------------------------------------------------------------------------------------
>
>                 Key: CRUNCH-566
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-566
>             Project: Crunch
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 0.12.0
>            Reporter: Nithin Asokan
>
> {{SparkPipeline#done()}} should consider waiting for all Spark jobs to complete before shutting down a {{SparkContext}}.
> Here is an example that fails:
> {code}
> Pipeline pipeline = new SparkPipeline(sparkConnect, "ParallelAction", getClass(), getConf());
> IdentityFn<Student> fn = IdentityFn.getInstance();
> pipeline.read(From.avroFile(input1, Student.class)).parallelDo(fn, Avros.records(Student.class)).write(To.avroFile(args[3]));
> pipeline.runAsync();
> pipeline.read(From.avroFile(input2, Student.class)).parallelDo(fn, Avros.records(Student.class)).write(To.avroFile(args[4]));
> pipeline.done();
> {code}
> Error:
> {code}
> [2015-09-30 15:18:55,835] [ERROR] [Thread-37] [org.apache.crunch.impl.spark.SparkRuntime] - Spark Exception
> org.apache.spark.SparkException: Job cancelled because SparkContext was shut down
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:699)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:698)
>     at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>     at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:698)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1411)
>     at org.apache.spark.util.EventLoop.stop(EventLoop.scala:81)
>     at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1346)
>     at org.apache.spark.SparkContext.stop(SparkContext.scala:1386)
>     at org.apache.spark.api.java.JavaSparkContext.stop(JavaSparkContext.scala:652)
>     at org.apache.crunch.impl.spark.SparkPipeline.done(SparkPipeline.java:178)
>     at com.test.ParallelAction.run(ParallelAction.java:47)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>     at com.test.ParallelAction.main(ParallelAction.java:53)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:497)
>     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
>     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
>     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
>     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
>     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
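
A possible caller-side workaround until {{done()}} waits for in-flight jobs: keep the {{PipelineExecution}} returned by {{runAsync()}} and block on it before calling {{done()}}. This is only a sketch, assuming Crunch's {{PipelineExecution#waitUntilDone()}} (which can throw {{InterruptedException}}); identifiers such as {{sparkConnect}}, {{input1}}, {{input2}}, and {{Student}} are reused from the example in the report.

{code}
// Sketch only: same pipeline as in the report, but the first run is awaited
// explicitly so done() does not stop the SparkContext under a running job.
Pipeline pipeline = new SparkPipeline(sparkConnect, "ParallelAction", getClass(), getConf());
IdentityFn<Student> fn = IdentityFn.getInstance();

pipeline.read(From.avroFile(input1, Student.class))
        .parallelDo(fn, Avros.records(Student.class))
        .write(To.avroFile(args[3]));

// Keep a handle on the asynchronous execution instead of discarding it.
PipelineExecution firstRun = pipeline.runAsync();

pipeline.read(From.avroFile(input2, Student.class))
        .parallelDo(fn, Avros.records(Student.class))
        .write(To.avroFile(args[4]));

// Wait for the first job to finish (may throw InterruptedException) before
// done() runs the remaining work and shuts the SparkContext down.
firstRun.waitUntilDone();
pipeline.done();
{code}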