[ 
https://issues.apache.org/jira/browse/SPARK-22778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anirudh Ramanathan updated SPARK-22778:
---------------------------------------
    Description: 
Building images based on master and deploying SparkPi results in the following error.

2017-12-13 19:57:19 INFO  SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Could not parse Master URL: 'k8s:https://xx.yy.zz.ww'
        at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:496)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2490)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:927)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:918)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:918)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
2017-12-13 19:57:19 INFO  ShutdownHookManager:54 - Shutdown hook called
2017-12-13 19:57:19 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-b47515c2-6750-4a37-aa68-6ee12da5d2bd

This is likely an artifact of recent changes on master, or of our submission code still under review. We haven't seen this on our fork. Once the integration tests are ported to run against upstream/master, we should catch such issues earlier.
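The error suggests the driver received a master URL of the form "k8s:https://..." (one slash pair missing) while scheduler creation only recognizes a strict "k8s://" prefix, so the URL falls through to the "Could not parse Master URL" branch. A minimal sketch of that failure mode, with an assumed regex and object name (not Spark's actual implementation):

```scala
// Sketch of a strict prefix match rejecting the malformed URL from the log.
// K8sRegex and MasterUrlSketch are illustrative names, not Spark internals.
object MasterUrlSketch {
  // Assumed pattern: accept only URLs that literally start with "k8s://".
  private val K8sRegex = """k8s://(.+)""".r

  def parse(master: String): Either[String, String] = master match {
    case K8sRegex(apiServer) => Right(apiServer)
    case _ => Left(s"Could not parse Master URL: '$master'")
  }

  def main(args: Array[String]): Unit = {
    println(parse("k8s://https://xx.yy.zz.ww")) // Right(https://xx.yy.zz.ww)
    println(parse("k8s:https://xx.yy.zz.ww"))   // Left(...), the reported failure
  }
}
```

Under this assumption, the bug would live in whatever submission-side code rewrites the user-supplied master URL before the driver parses it, rather than in the parser itself.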


> Kubernetes scheduler at master failing to run applications successfully
> -----------------------------------------------------------------------
>
>                 Key: SPARK-22778
>                 URL: https://issues.apache.org/jira/browse/SPARK-22778
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 2.3.0
>            Reporter: Anirudh Ramanathan
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
