Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20211#discussion_r161659200
  
    --- Diff: python/pyspark/sql/group.py ---
    @@ -233,6 +233,27 @@ def apply(self, udf):
             |  2| 1.1094003924504583|
             +---+-------------------+
     
    +        Notes on grouping column:
    --- End diff ---
    
    Sorry for the late reply. I agree with @HyukjinKwon; I think we can support both `foo(pdf)` and `foo(key, pdf)` through inspection.
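    For illustration, here is a minimal sketch of how such inspection could work. The helper name `_wrap_grouped_map_udf` and the dispatch details are my own assumptions, not the actual implementation that would go into the PR:
    
    ```python
    import inspect
    
    def _wrap_grouped_map_udf(func):
        # Hypothetical helper: inspect the user's function to decide whether
        # it expects just the pandas DataFrame or a (key, pdf) pair.
        num_params = len(inspect.signature(func).parameters)
        if num_params == 1:
            return lambda key, pdf: func(pdf)       # foo(pdf)
        if num_params == 2:
            return lambda key, pdf: func(key, pdf)  # foo(key, pdf)
        raise ValueError(
            "grouped map UDF must take either (pdf) or (key, pdf), "
            "got %d parameters" % num_params)
    ```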
    
    I will try to put up a PR soon.
    
    As for how to represent the key, I think a tuple might be enough, but a `Row` would also work. What do you guys think?
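    
    Just to make the comparison concrete, here is a hypothetical illustration (the grouping columns `id` and `dept` are made up) of what the UDF could receive under each option:
    
    ```python
    from pyspark.sql import Row
    
    # If the DataFrame were grouped by ("id", "dept"), the key could be passed as:
    key_as_tuple = (2, "eng")            # positional access only: key_as_tuple[0]
    key_as_row = Row(id=2, dept="eng")   # named access too: key_as_row.id, key_as_row["dept"]
    ```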

