Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20211#discussion_r160864143
  
    --- Diff: python/pyspark/sql/group.py ---
    @@ -233,6 +233,27 @@ def apply(self, udf):
             |  2| 1.1094003924504583|
             +---+-------------------+
     
    +        Notes on grouping column:
    --- End diff --
    
    Yeah. To be honest, I don't think there is a behavior that is both simple and 
works well for all use cases. It's probably a matter of leaning either towards a 
simpler behavior that doesn't work well in some cases or towards a somewhat 
"magic" behavior. I don't think there is an obvious answer here.
    
    Another option is to always prepend the grouping columns; if users don't want 
the grouping columns in the output, they can do a `drop` after `groupby` + 
`apply`:
    
    ```
    @pandas_udf('id int, v double', GROUP_MAP)
    def foo(pdf):
        return pdf.assign(v=pdf.v + 1)

    df.groupby('id').apply(foo).drop(df.id)
    ```
    
    I don't think it's too annoying to add a `drop` after `apply`, and it works 
well with the linear regression case. It is also a pretty straightforward, 
non-magical behavior.
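    To make the proposed semantics concrete, here is a minimal pandas-only sketch 
(no Spark involved; the column names and data are made up for illustration) of 
"always prepend the grouping columns, let the user `drop` them afterwards":

    ```python
    import pandas as pd

    df = pd.DataFrame({"id": [1, 1, 2], "v": [1.0, 2.0, 3.0]})

    def foo(pdf):
        # The UDF returns only the value column(s); under the proposal,
        # the framework prepends the grouping column(s) to the result.
        return pdf.assign(v=pdf.v + 1)[["v"]]

    # Emulate "always prepend grouping columns": group_keys=True keeps
    # 'id' in the result index, and reset_index turns it back into a column.
    out = df.groupby("id", group_keys=True).apply(foo).reset_index(level="id")

    # If the user doesn't want the grouping column, one explicit drop suffices
    # (the analogue of .drop(df.id) after groupby().apply() in PySpark).
    out_no_id = out.drop(columns=["id"])
    ```

    The point is that the framework's behavior stays uniform, and the cost to 
users who don't want the grouping columns is a single, explicit `drop`.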
    
    What do you all think?



---
