[ 
https://issues.apache.org/jira/browse/SPARK-10812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcelo Vanzin updated SPARK-10812:
-----------------------------------
    Fix Version/s: 1.5.2

Backported to branch-1.5 (clean merge) to fix SPARK-11201.

> Spark Hadoop Util does not support stopping a non-yarn Spark Context & starting a Yarn spark context.
> -----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-10812
>                 URL: https://issues.apache.org/jira/browse/SPARK-10812
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>            Reporter: holdenk
>            Assignee: Holden Karau
>            Priority: Minor
>             Fix For: 1.5.2, 1.6.0
>
>
> While this is likely not a huge issue for real production systems, it can be 
> a problem for test suites that set up a Spark Context, tear it down, and then 
> stand up a Spark Context with a different master (e.g. some tests in local 
> mode and some in yarn mode). Discovered during work on spark-testing-base on 
> Spark 1.4.1, but the logic that triggers it appears to be present in master 
> as well (see the SparkHadoopUtil object). A valid workaround for users 
> encountering this issue is to fork a separate JVM, but this can be 
> heavyweight.
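>
> For illustration, a minimal sketch of the sequence that triggers the failure 
> (the app names and the surrounding mini-cluster / HADOOP_CONF_DIR setup are 
> assumed for this example, not taken from spark-testing-base):
> {code:scala}
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Phase 1: a local-mode context. Creating it initializes the
> // SparkHadoopUtil object with the non-YARN implementation.
> val localSc = new SparkContext(
>   new SparkConf().setMaster("local[2]").setAppName("local-phase"))
> localSc.stop()
>
> // Phase 2: a yarn-client context in the same JVM (assumes a running YARN
> // cluster with HADOOP_CONF_DIR / YARN_CONF_DIR pointing at it). Submission
> // then fails with the ClassCastException in YarnSparkHadoopUtil.get shown
> // in the trace below.
> val yarnSc = new SparkContext(
>   new SparkConf().setMaster("yarn-client").setAppName("yarn-phase"))
> {code}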
> {quote}
> [info] SampleMiniClusterTest:
> [info] Exception encountered when attempting to run a suite with class name: com.holdenkarau.spark.testing.SampleMiniClusterTest *** ABORTED ***
> [info]   java.lang.ClassCastException: org.apache.spark.deploy.SparkHadoopUtil cannot be cast to org.apache.spark.deploy.yarn.YarnSparkHadoopUtil
> [info]   at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$.get(YarnSparkHadoopUtil.scala:163)
> [info]   at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:257)
> [info]   at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:561)
> [info]   at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:115)
> [info]   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
> [info]   at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
> [info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
> [info]   at com.holdenkarau.spark.testing.SharedMiniCluster$class.setup(SharedMiniCluster.scala:186)
> [info]   at com.holdenkarau.spark.testing.SampleMiniClusterTest.setup(SampleMiniClusterTest.scala:26)
> [info]   at com.holdenkarau.spark.testing.SharedMiniCluster$class.beforeAll(SharedMiniCluster.scala:103)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
