[ https://issues.apache.org/jira/browse/SPARK-18977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitaly Gerasimov updated SPARK-18977:
-------------------------------------
    Summary: Heavy udf is not stopped by cancelJobGroup  (was: Heavy udf in not 
stopped by cancelJobGroup)

> Heavy udf is not stopped by cancelJobGroup
> ------------------------------------------
>
>                 Key: SPARK-18977
>                 URL: https://issues.apache.org/jira/browse/SPARK-18977
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.6.2
>            Reporter: Vitaly Gerasimov
>
> Let's say we have a heavy UDF that takes a long time to execute. When I run
> a job in a job group that executes this UDF and then call cancelJobGroup(),
> the job continues processing.
> {code}
> # ./spark-shell
> > import scala.concurrent.Future
> > import scala.concurrent.ExecutionContext.Implicits.global
> > sc.setJobGroup("test-group", "udf-test")
> > sqlContext.udf.register("sleep", (times: Int) => { (1 to times).toList.foreach { _ => print("sleep..."); Thread.sleep(10000) }; 1L })
> > Future { Thread.sleep(50000); sc.cancelJobGroup("test-group") }
> > sqlContext.sql("SELECT sleep(10)").collect()
> {code}
> It returns:
> {code}
> sleep...sleep...sleep...sleep...sleep...org.apache.spark.SparkException: Job 0 cancelled part of cancelled job group test-group
> sleep...sleep...sleep...sleep...sleep...16/12/22 14:36:44 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): TaskKilled (killed intentionally)
> {code}
> This seems unexpected to me, but if I'm missing something and it works as
> intended, feel free to close the issue.
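> For what it's worth, a possible workaround (my assumption, not a confirmed
> fix): SparkContext.setJobGroup takes a third parameter, interruptOnCancel
> (default false). When it is true, cancelJobGroup should interrupt the
> running task threads, which would wake a UDF blocked in Thread.sleep with
> an InterruptedException. A sketch of the same repro with that flag set:
> {code}
> # ./spark-shell
> > import scala.concurrent.Future
> > import scala.concurrent.ExecutionContext.Implicits.global
> > // interruptOnCancel = true asks Spark to interrupt task threads on cancel
> > sc.setJobGroup("test-group", "udf-test", interruptOnCancel = true)
> > sqlContext.udf.register("sleep", (times: Int) => { (1 to times).toList.foreach { _ => print("sleep..."); Thread.sleep(10000) }; 1L })
> > Future { Thread.sleep(50000); sc.cancelJobGroup("test-group") }
> > sqlContext.sql("SELECT sleep(10)").collect()
> {code}
> With the flag set, I would expect the collect() to fail promptly at the
> cancel rather than the UDF sleeping through all ten iterations.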



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
