Matei, thanks. Including the PYTHONPATH in spark-env.sh seems to have worked. I am now faced with a new issue: I am doing a large GroupBy in PySpark, and the job fails (at the driver, it seems). The stack trace below doesn't say much about where the failure actually occurs. The same process works fine locally.
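For context, the relevant part of the job looks roughly like this. This is a minimal sketch, not the actual code: parse_line and the shape of the (symbol, row) pairs are hypothetical stand-ins, and groupByKey stands in for whatever GroupBy the job really performs; only generateATMImplVols, foreach, and the input path come from the real script.

    from pyspark import SparkContext

    sc = SparkContext(appName="load_etl")

    def parse_line(line):
        # hypothetical parser standing in for the real one; assume it
        # yields (symbol, row) pairs
        fields = line.split(",")
        return (fields[0], fields[1:])

    def generateATMImplVols(group):
        # per-symbol computation; body omitted
        symbol, rows = group

    rdd = sc.textFile("vodimo/data/month/").map(parse_line)
    grouped = rdd.groupByKey()                # the large GroupBy
    grouped.foreach(generateATMImplVols)      # load_etl.py:150 -- the failing action
    # Note: foreach() returns None, so the "rdd = rdd.foreach(...)" seen in the
    # traceback leaves rdd bound to None after the action runs.

Here is the output when it fails: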
----
14/04/11 12:59:11 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/04/11 12:59:11 INFO scheduler.DAGScheduler: Failed to run foreach at load/load_etl.py:150
Traceback (most recent call last):
  File "load/load_etl.py", line 164, in <module>
    generateImplVolSeries(dirName="vodimo/data/month/", symbols=symbols, outputFilePath="vodimo/data/series/output")
  File "load/load_etl.py", line 150, in generateImplVolSeries
    rdd = rdd.foreach(generateATMImplVols)
  File "/root/spark/python/pyspark/rdd.py", line 462, in foreach
    self.mapPartitions(processPartition).collect()  # Force evaluation
  File "/root/spark/python/pyspark/rdd.py", line 469, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/root/spark/python/lib/py4j-0.8.1-src.zip/py4j/java_gateway.py", line 537, in __call__
  File "/root/spark/python/lib/py4j-0.8.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o55.collect.
: org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 4 times (most recent failure: unknown)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/04/11 12:59:11 INFO scheduler.DAGScheduler: Executor lost: 3 (epoch 4)
14/04/11 12:59:11 INFO storage.BlockManagerMasterActor: Trying to remove executor 3 from BlockManagerMaster.
14/04/11 12:59:11 INFO storage.BlockManagerMaster: Removed 3 successfully in removeExecutor
-----

CEO / Velos (velos.io)