[
https://issues.apache.org/jira/browse/SPARK-12157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15832907#comment-15832907
]
Maciej Szymkiewicz commented on SPARK-12157:
--------------------------------------------
I've been looking at this in the context of SPARK-19159 and it is not hard to
fix, especially for scalar types. This would also address another problem,
where {{udf}} is way too strict about standard types. For example
{code}
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

identity = udf(lambda x: x, DoubleType())
spark.range(0, 10).toDF("x").select(identity("x"))
{code}
will return {{NULL}} for every row. This behavior is confusing, especially for
a Python programmer.
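The idea behind the fix can be sketched outside Spark (a minimal sketch; {{coerce_to_double}} is a hypothetical helper illustrating the preemptive cast, not Spark code):

```python
# Sketch of the "preemptive cast" idea: before a UDF result for a DoubleType
# column is pickled, coerce it to float so a Python int survives instead of
# silently becoming NULL.
def coerce_to_double(value):
    return None if value is None else float(value)

coerce_to_double(3)   # a Python int becomes 3.0, a valid DoubleType value
```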
So it is easy to fix, but of course there is a performance penalty. The bad
news is that it is pretty severe in trivial cases (with the identity function
above, a preemptive cast increases execution time roughly threefold). The good
news is that it shouldn't have much impact on overall execution time,
considering the overhead of Pyrolite.
[~josephkb] Do we have any standard performance suite to test Python UDFs?
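Until such an automatic cast lands in Spark, users can unwrap NumPy scalars themselves before the values reach Pyrolite. A minimal sketch ({{to_python_scalar}} is a hypothetical helper name, not a Spark or NumPy API):

```python
import numpy as np

def to_python_scalar(value):
    # NumPy scalars (np.int64, np.float64, ...) all subclass np.generic and
    # expose .item(), which returns the equivalent builtin Python value that
    # Pyrolite can unpickle on the JVM side; other values pass through as-is.
    if isinstance(value, np.generic):
        return value.item()
    return value

# Usage in the reported case would be to wrap the UDF body, e.g.:
#   argmax = F.udf(lambda x: to_python_scalar(np.argmax(x)), T.IntegerType())
```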
> Support numpy types as return values of Python UDFs
> ---------------------------------------------------
>
> Key: SPARK-12157
> URL: https://issues.apache.org/jira/browse/SPARK-12157
> Project: Spark
> Issue Type: Improvement
> Components: PySpark, SQL
> Affects Versions: 1.5.2
> Reporter: Justin Uang
>
> Currently, if I have a python UDF
> {code}
> import pyspark.sql.types as T
> import pyspark.sql.functions as F
> from pyspark.sql import Row
> import numpy as np
> argmax = F.udf(lambda x: np.argmax(x), T.IntegerType())
> df = sqlContext.createDataFrame([Row(array=[1,2,3])])
> df.select(argmax("array")).count()
> {code}
> I get an exception that is fairly opaque:
> {code}
> Caused by: net.razorvine.pickle.PickleException: expected zero arguments for
> construction of ClassDict (for numpy.dtype)
> at
> net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
> at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:701)
> at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:171)
> at net.razorvine.pickle.Unpickler.load(Unpickler.java:85)
> at net.razorvine.pickle.Unpickler.loads(Unpickler.java:98)
> at
> org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1$$anonfun$apply$3.apply(python.scala:404)
> at
> org.apache.spark.sql.execution.BatchPythonEvaluation$$anonfun$doExecute$1$$anonfun$apply$3.apply(python.scala:403)
> {code}
> Numpy types like np.int and np.float64 should automatically be cast to the
> proper dtypes.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)