Please try to install numpy.
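A minimal sketch of what that could look like (the interpreter path here is an assumption; use whichever python your Spark workers actually run):

# on every node that runs Spark executors, install numpy for that interpreter
pip install numpy

# in conf/zeppelin-env.sh, point PySpark at the same interpreter (optional if it is the default python)
export PYSPARK_PYTHON=/usr/bin/python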

Best Regards,
Jeff Zhang


From: mingda li <limingda1...@gmail.com>
Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Date: Friday, February 3, 2017 at 6:03 AM
To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Subject: Re: Error about PySpark

I also tried running the same program, using the mllib package, with ./bin/pyspark, and it works fine in plain Spark.

So do I need to set something for Zeppelin, such as PYSPARK_PYTHON or PYTHONPATH?

Bests,
Mingda

On Thu, Feb 2, 2017 at 12:07 PM, mingda li <limingda1...@gmail.com> wrote:
Thanks. But after I changed the Zeppelin environment as follows:

export JAVA_HOME=/home/clash/asterixdb/jdk1.8.0_101

export ZEPPELIN_PORT=19037

export SPARK_HOME=/home/clash/sparks/spark-1.6.1-bin-hadoop12

every time I try to use mllib in Zeppelin, I still hit this problem:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 13, SCAI05.CS.UCLA.EDU): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
ImportError: No module named numpy
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
ImportError: No module named numpy
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.\n', JavaObject id=o108), <traceback object at 0x7f3054e56f80>)

Do you know why? Do I need to set the Python path somewhere?

On Wed, Feb 1, 2017 at 1:33 AM, Hyung Sung Shim <hss...@nflabs.com> wrote:
Hello.
You don't need to remove /tmp/zeppelin_pyspark-4018989172273347075.py because it is generated automatically when you run a pyspark command, and I think you don't need to set PYTHONPATH if Python is installed on your system.

I recommend setting SPARK_HOME like the following:
export SPARK_HOME=/home/clash/sparks/spark-1.6.1-bin-hadoop12

Now restart Zeppelin and run your Python code again.
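For example, something along these lines (a sketch assuming Zeppelin's standard layout and that the commands are run from the Zeppelin installation directory):

# conf/zeppelin-env.sh
export SPARK_HOME=/home/clash/sparks/spark-1.6.1-bin-hadoop12

# restart the Zeppelin daemon so the new environment is picked up
bin/zeppelin-daemon.sh restart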

*. Could you also give an absolute path for logFile, like the following?
logFile = "/Users/user/hiv.data"


2017-02-01 11:48 GMT+09:00 mingda li <limingda1...@gmail.com>:
Dear all,

We are using Zeppelin, and I have added export PYTHONPATH=/home/clash/sparks/spark-1.6.1-bin-hadoop12/python to zeppelin-env.sh.
But whenever I want to use PySpark, for example with this program:

%pyspark
from pyspark import SparkContext
logFile = "hiv.data"
logData = sc.textFile(logFile).cache()
numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()
print "Lines with a: %i, lines with b: %i" % (numAs, numBs)

It runs fine the first time. But when I run it a second time, I get this error:
Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-4018989172273347075.py", line 238, in <module>
    sc.setJobGroup(jobGroup, "Zeppelin")
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/pyspark/context.py", 
line 876, in setJobGroup
    self._jsc.setJobGroup(groupId, description, interruptOnCancel)
AttributeError: 'NoneType' object has no attribute 'setJobGroup'

I have to rm /tmp/zeppelin_pyspark-4018989172273347075.py and restart Zeppelin to make it work again.
Does anyone have an idea why?

Thanks


