On 1 Oct 2016, at 02:49, Kevin Grealish <kevin...@microsoft.com> wrote:

I’m seeing a regression when submitting a batch PySpark program with additional 
files via Livy, in YARN cluster mode. The program files are placed into the 
mounted Azure Storage before the call to Livy is made. The call comes from an 
application that has credentials for the storage and the Livy endpoint, but not 
for the local file systems on the cluster. This previously worked, but now I’m 
getting the error below.
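For context, a minimal sketch of the kind of Livy batch submission described above. The host name, container, account, and file paths here are hypothetical placeholders, not taken from the original message; the payload fields (`file`, `pyFiles`) are the standard Livy POST /batches parameters.

```python
import json
from urllib import request

# Hypothetical Livy endpooint -- substitute your own host and port.
LIVY_URL = "http://livy-host:8998/batches"

# Standard Livy batch payload: "file" is the main PySpark script,
# "pyFiles" carries the additional program files. Paths are illustrative
# wasb:// locations in the mounted Azure Storage.
payload = {
    "file": "wasb://container@account.blob.core.windows.net/app/main.py",
    "pyFiles": [
        "wasb://container@account.blob.core.windows.net/app/helpers.py"
    ],
}

req = request.Request(
    LIVY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req) would actually submit the batch; it is omitted
# here because it needs a live Livy endpoint to talk to.
```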

It seems this restriction was introduced by 
https://github.com/apache/spark/commit/5081a0a9d47ca31900ea4de570de2cbb0e063105 
(new in 1.6.2 and 2.0.0).

How should the scenario above be achieved now? Am I missing something?

This has been fixed in https://issues.apache.org/jira/browse/SPARK-17512; I 
don't know whether it's in 2.0.1, though.


Exception in thread "main" java.lang.IllegalArgumentException: Launching Python applications through spark-submit is currently only supported for local files: wasb://kevingreclust...@xxxxxxxx.blob.core.windows.net/xxxxxxxxx/xxxxxxx.py
        at org.apache.spark.deploy.PythonRunner$.formatPath(PythonRunner.scala:104)
        at org.apache.spark.deploy.PythonRunner$$anonfun$formatPaths$3.apply(PythonRunner.scala:136)
        at org.apache.spark.deploy.PythonRunner$$anonfun$formatPaths$3.apply(PythonRunner.scala:136)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.deploy.PythonRunner$.formatPaths(PythonRunner.scala:136)
        at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$11.apply(SparkSubmit.scala:639)
        at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$11.apply(SparkSubmit.scala:637)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:637)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:154)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
java.lang.Exception: spark-submit exited with code 1
