xinrong-meng opened a new pull request, #40486:
URL: https://github.com/apache/spark/pull/40486

   ### What changes were proposed in this pull request?
   Implement the Grouped Map API for Spark Connect: `GroupedData.applyInPandas` and `GroupedData.apply`.
   
   ### Why are the changes needed?
   Parity with vanilla PySpark.
   
   
   ### Does this PR introduce _any_ user-facing change?
   Yes. `GroupedData.applyInPandas` and `GroupedData.apply` are now supported, as shown below.
   ```python
   >>> df = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)], ("id", "v"))
   >>> def normalize(pdf):
   ...     v = pdf.v
   ...     return pdf.assign(v=(v - v.mean()) / v.std())
   ...
   >>> df.groupby("id").applyInPandas(normalize, schema="id long, v double").show()
   +---+-------------------+
   | id|                  v|
   +---+-------------------+
   |  1|-0.7071067811865475|
   |  1| 0.7071067811865475|
   |  2|-0.8320502943378437|
   |  2|-0.2773500981126146|
   |  2| 1.1094003924504583|
   +---+-------------------+
   ```
   
   ```python
   >>> @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
   ... def normalize(pdf):
   ...     v = pdf.v
   ...     return pdf.assign(v=(v - v.mean()) / v.std())
   ...
   >>> df.groupby("id").apply(normalize).show()
   /Users/xinrong.meng/spark/python/pyspark/sql/connect/group.py:228: UserWarning: It is preferred to use 'applyInPandas' over this API. This API will be deprecated in the future releases. See SPARK-28264 for more details.
     warnings.warn(
   +---+-------------------+
   | id|                  v|
   +---+-------------------+
   |  1|-0.7071067811865475|
   |  1| 0.7071067811865475|
   |  2|-0.8320502943378437|
   |  2|-0.2773500981126146|
   |  2| 1.1094003924504583|
   +---+-------------------+
   ```
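   
   For reference, the per-group normalization above can be reproduced with plain pandas alone (a minimal sketch, independent of Spark, to show the semantics the grouped-map UDF is expected to match):
   
   ```python
   import pandas as pd

   # Same data as the Spark example above.
   df = pd.DataFrame({"id": [1, 1, 2, 2, 2], "v": [1.0, 2.0, 3.0, 5.0, 10.0]})

   def normalize(pdf: pd.DataFrame) -> pd.DataFrame:
       # Standardize v within each group using the sample std (ddof=1),
       # matching pandas' Series.std default.
       v = pdf.v
       return pdf.assign(v=(v - v.mean()) / v.std())

   # groupby(...).apply runs normalize once per "id" group, like applyInPandas.
   result = df.groupby("id", group_keys=False).apply(normalize)
   print(result)
   ```
   
   The values it produces match the `show()` output above, which is the parity being tested.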
   
   ### How was this patch tested?
   Parity unit tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
