Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20531#discussion_r166660050
  
    --- Diff: python/pyspark/sql/tests.py ---
    @@ -4509,23 +4523,32 @@ def weighted_mean(v, w):
             return weighted_mean
     
         def test_manual(self):
    +        from pyspark.sql.functions import pandas_udf, array
    +
             df = self.data
             sum_udf = self.pandas_agg_sum_udf
             mean_udf = self.pandas_agg_mean_udf
    -
    -        result1 = df.groupby('id').agg(sum_udf(df.v), mean_udf(df.v)).sort('id')
    +        mean_arr_udf = pandas_udf(
    +            self.pandas_agg_mean_udf.func,
    --- End diff --
    
    I think with Pandas UDFs, certain type coercion is supported: e.g., when the
user specifies `double` as the return type and the UDF returns a `pd.Series`
of `int`, it is automatically cast to a `pd.Series` of double. This behavior
differs from a regular Python UDF, which would return null in this case. Most
of the type coercion is done by pyarrow. (Btw, I think type coercion in Pandas
UDFs is a huge improvement over Python UDFs, because that's one of the biggest
frustrations our PySpark users have...)
    
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]