[ https://issues.apache.org/jira/browse/SPARK-6358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363498#comment-14363498 ]
dustin davidson commented on SPARK-6358:
----------------------------------------

'spark-submit' is running as 'hadmin'. The python file executed is owned by 'hadmin' and sits in the '/home/hadmin' directory. The 'testenv' is also owned by 'hadmin'. I also recursively applied chmod 777 to the 'testenv' directory.

Python binary in '/home/hadmin/testenv/bin':

-rwxrwxrwx. 1 hadmin hadmin 4.8K Mar 11 19:45 python

Pyspark CLI (no errors):

PYSPARK_PYTHON=/home/hadmin/testenv/bin/python pyspark

Spark-submit (error):

PYSPARK_PYTHON=/home/hadmin/testenv/bin/python spark-submit collaborative_filtering.py

> Spark-submit error when using PYSPARK_PYTHON environment variable
> -----------------------------------------------------------------
>
>                 Key: SPARK-6358
>                 URL: https://issues.apache.org/jira/browse/SPARK-6358
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.2.0
>        Environment: Using CDH5.3 with Spark 1.2.0
>            Reporter: dustin davidson
>
> When using spark-submit the PYSPARK_PYTHON setting throws an error. I can
> run the pyspark repl while setting PYSPARK_PYTHON, but spark-submit does
> not work.
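One thing chmod 777 on 'testenv' alone does not cover is traversal of the parent directories: on a CDH/YARN cluster the executor process typically runs as a different user (commonly 'yarn', though that depends on the deployment), and exec() requires execute (search) permission on every directory leading to the interpreter, including '/home/hadmin' itself, which is often mode 700. That would also explain why the pyspark REPL works: the driver-side fork runs locally as 'hadmin'. A minimal sketch of this failure mode, using hypothetical temp paths rather than the reporter's environment:

```python
import os
import subprocess
import tempfile

# Sketch of the suspected failure mode: error=13 can come from a parent
# directory that is not traversable by the task's user, even when the
# interpreter binary itself is mode 777.

parent = tempfile.mkdtemp()                 # stands in for /home/hadmin
interp = os.path.join(parent, "python")     # stands in for testenv/bin/python
with open(interp, "w") as f:
    f.write("#!/bin/sh\necho ok\n")
os.chmod(interp, 0o777)                     # binary is world-executable

denied = False
if os.geteuid() != 0:                       # root bypasses permission checks
    os.chmod(parent, 0o600)                 # drop the directory's x (search) bit
    try:
        subprocess.check_output([interp])
    except PermissionError as e:            # errno 13: same "Permission denied"
        denied = (e.errno == 13)
else:
    denied = True                           # cannot demonstrate the denial as root

os.chmod(parent, 0o700)                     # restore traversal permission
out = subprocess.check_output([interp])     # now the exec succeeds
print(denied, out)
```

On a real cluster the equivalent check would be attempting to run the interpreter as the executor's user (e.g. `sudo -u yarn /home/hadmin/testenv/bin/python -V`, assuming 'yarn' is the right user) or inspecting the mode of each path component; installing the virtualenv under a world-traversable prefix such as /opt avoids the issue entirely.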
> Error received when running spark-submit:
>
> 15/03/12 15:25:22 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0,
> hadoop-manager): java.io.IOException: Cannot run program
> "/home/hadmin/testenv/bin/python": error=13, Permission denied
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
>         at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:160)
>         at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:86)
>         at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:62)
>         at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:102)
>         at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>         at org.apache.spark.scheduler.Task.run(Task.scala:56)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: error=13, Permission denied
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:186)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
>         ... 13 more

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)