Josh Rosen created SPARK-3114:
---------------------------------
Summary: Python UDFs broken in Spark SQL
Key: SPARK-3114
URL: https://issues.apache.org/jira/browse/SPARK-3114
Project: Spark
Issue Type: Bug
Components: PySpark, SQL
Affects Versions: 1.1.0
Reporter: Josh Rosen
Assignee: Josh Rosen
Priority: Blocker
Python UDFs were inadvertently broken in Spark SQL by the PySpark
broadcast-optimization commit:
{code}
**********************************************************************
File "/Users/joshrosen/Documents/Spark/python/pyspark/sql.py", line 975, in pyspark.sql.SQLContext.registerFunction
Failed example:
    sqlCtx.sql("SELECT twoArgs('test', 1)").collect()
Exception raised:
    Traceback (most recent call last):
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/doctest.py", line 1253, in __run
        compileflags, 1) in test.globs
      File "<doctest pyspark.sql.SQLContext.registerFunction[5]>", line 1, in <module>
        sqlCtx.sql("SELECT twoArgs('test', 1)").collect()
      File "/Users/joshrosen/Documents/Spark/python/pyspark/sql.py", line 1615, in collect
        rows = RDD.collect(self)
      File "/Users/joshrosen/Documents/Spark/python/pyspark/rdd.py", line 725, in collect
        bytesInJava = self._jrdd.collect().iterator()
      File "/Users/joshrosen/Documents/Spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
        self.target_id, self.name)
      File "/Users/joshrosen/Documents/Spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
        format(target_id, '.', name), value)
    Py4JJavaError: An error occurred while calling o607.collect.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 60.0 failed 1 times, most recent failure: Lost task 0.0 in stage 60.0 (TID 141, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
      File "pyspark/worker.py", line 75, in main
        command = ser._read_with_length(infile)
      File "pyspark/serializers.py", line 150, in _read_with_length
        return self.loads(obj)
      File "pyspark/serializers.py", line 420, in loads
        return self.serializer.loads(zlib.decompress(obj))
    error: Error -3 while decompressing data: incorrect header check
        org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:124)
        org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
        org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:87)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:115)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1153)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1142)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1141)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1141)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:682)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:682)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:682)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1359)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{code}
The zlib compression was introduced in a recent commit that improves PySpark's
broadcast variable performance (https://github.com/apache/spark/pull/1912). It
looks like the worker expects to receive a zlib-compressed command but is
actually receiving something else.
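To make the mismatch concrete, here is a minimal, self-contained sketch that
reproduces the same zlib error ({{worker_loads}} is a hypothetical stand-in for
the worker-side deserializer, not Spark's actual class):
{code}
import pickle
import zlib

def worker_loads(obj):
    # Stand-in for the worker's deserializer, which (per the traceback above)
    # assumes every serialized command was zlib-compressed before sending.
    return pickle.loads(zlib.decompress(obj))

command = pickle.dumps(("twoArgs", "test", 1))

# Broadcast path: compresses before sending, so the round trip succeeds.
print(worker_loads(zlib.compress(command)))

# UDF-registration path: ships the raw pickle, so decompression fails with
# the same "Error -3 ... incorrect header check" seen above.
try:
    worker_loads(command)
except zlib.error as e:
    print(e)
{code}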
The code that registers Python UDFs doesn't appear to perform this
compression, which leads to this error:
{code}
self._ssql_ctx.registerPython(name,
    bytearray(CloudPickleSerializer().dumps(command)),
    env,
    includes,
    self._sc.pythonExec,
    self._sc._javaAccumulator,
    str(returnType))
{code}
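One possible fix, sketched under the assumption that compressing the pickled
command the same way the broadcast path does is sufficient
({{serialize_udf_command}} is a hypothetical helper, not the actual patch):
{code}
import zlib

from pyspark.serializers import CloudPickleSerializer

def serialize_udf_command(command):
    # Hypothetical helper: pickle the UDF command, then zlib-compress it so
    # it matches what the post-#1912 worker deserializer expects to read.
    return bytearray(zlib.compress(CloudPickleSerializer().dumps(command)))

# registerFunction would then pass serialize_udf_command(command) to
# self._ssql_ctx.registerPython(...) in place of the raw pickled bytes.
{code}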
The root problem here is that the Spark SQL Python tests weren't run by Jenkins.
I think this is because PySpark's Spark SQL tests are skipped unless
_RUN_SQL_TESTS is true, and this variable is only set when we detect changes
to Spark SQL. Instead, it should always be set when running the PySpark tests,
as sketched below.
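A minimal sketch of the proposed change (everything here except the
_RUN_SQL_TESTS name is an assumption about the test runner's structure):
{code}
import os

# Assumed shape of the current gating: the SQL doctests only run when the
# change-detection logic in the test scripts exported _RUN_SQL_TESTS.
def should_run_sql_tests():
    return os.environ.get("_RUN_SQL_TESTS") == "true"

# Proposed behavior: a PySpark test run always sets the flag, so the SQL
# doctests (including the registerFunction example above) always execute.
os.environ["_RUN_SQL_TESTS"] = "true"
assert should_run_sql_tests()
{code}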