Could you try setting zeppelin.pyspark.python in the interpreter settings to the
matching Python 3 binary, i.e. "python3" in your example below?
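For example, a minimal sketch assuming a default install (adjust paths and
binary names to your machine):

# conf/zeppelin-env.sh -- makes the Spark workers use Python 3
export PYSPARK_PYTHON=python3

# Zeppelin web UI -> Interpreter -> spark -- makes the driver side match
# zeppelin.pyspark.python = python3

Then restart the spark interpreter (or Zeppelin itself, e.g.
./bin/zeppelin-daemon.sh restart) so the new values are picked up. Both sides
have to resolve to the same minor version, which is exactly what the exception
below complains about.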
_____________________________
From: Paulo Cheadi Haddad Filho <[email protected]>
Sent: Wednesday, September 16, 2015 9:21 AM
Subject: Fwd: Zeppelin error when trying to run pyspark using python3
To: <[email protected]>
Hello,
Yesterday I installed a Spark server with Zeppelin and, while testing in a new
notebook, I realized that pyspark is using Python 2.7.9. I have Python 3.4.3
installed as well, and I can share more details about the setup if you need them.
Looking for how to use python3, I found this post [1]. I tried setting these
environment variables in .bashrc and zeppelin-env.sh:
export PYSPARK_PYTHON="python3"
export PYSPARK_DRIVER_PYTHON="ipython3"
When I run $SPARK_HOME/bin/pyspark, I get:
paulo_filho@spark:~$ $SPARK_HOME/bin/pyspark
Python 3.4.3 (default, Mar 26 2015, 22:03:40)
Type "copyright", "credits" or "license" for more information.
IPython 4.0.0 -- An enhanced Interactive Python.
...
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.0
      /_/
Using Python version 3.4.3 (default, Mar 26 2015 22:03:40)
but Zeppelin still didn't pick it up. Instead, I got the error below:
%pyspark
import sys
print(sys.version_info)

sys.version_info(major=2, minor=7, micro=9, releaselevel='final', serial=0)
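Just to make the mismatch visible from inside a notebook, a paragraph along
these lines should print both versions (a sketch; sc is the SparkContext
Zeppelin injects, and the second print will fail with the same error shown
below for as long as the versions differ):

%pyspark
import sys
# Python used by the driver (the interpreter process Zeppelin starts)
print("driver: " + sys.version)
# Python used by the workers: run a trivial task and fetch the result back
print("worker: " + sc.parallelize([0]).map(lambda _: __import__("sys").version).first())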
Running an actual job fails with the version mismatch:

%pyspark
bankText = sc.textFile("/home/paulo_filho/data/bank.csv")
print(bankText.take(2))
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 64, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.4 than that in driver 2.7, PySpark cannot run with different minor versions
    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
    at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:361)
    at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/local/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 64, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.4 than that in driver 2.7, PySpark cannot run with different minor versions
    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more

(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.\n', JavaObject id=o53), <traceback object at 0x7f814b4596c8>)
I noticed that in Zeppelin's "Interpreter" section there is a config property:

zeppelin.pyspark.python    python

I've already tried changing it, but I always got the same error.
So, I'm here asking for your help. =)
Thanks!
[1] http://stackoverflow.com/questions/30518362/how-do-i-set-the-drivers-python-version-in-spark