Github user icexelloss commented on a diff in the pull request:
https://github.com/apache/spark/pull/22329#discussion_r214940744
--- Diff: python/pyspark/sql/functions.py ---
@@ -2804,6 +2804,20 @@ def pandas_udf(f=None, returnType=None,
functionType=None):
| 1|1.5|
| 2|6.0|
+---+---+
+ >>> @pandas_udf("id long, v1 double, v2 double",
PandasUDFType.GROUPED_MAP) # doctest: +SKIP
--- End diff ---
It took me a while to realize `v1` is a grouping key. It's also a bit
uncommon to use a double value as a grouping key. How about we do something like
`id long, additional_key long, v double`?
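For illustration, here is a minimal pandas-only sketch of the grouped-map semantics being discussed, using the column names from the suggestion above (the data values and the `subtract_group_mean` function are hypothetical, not from the PR):

```python
import pandas as pd

# Toy frame matching the suggested schema: id long, additional_key long, v double.
df = pd.DataFrame({
    "id": [1, 1, 2, 2],
    "additional_key": [10, 10, 20, 20],
    "v": [1.0, 2.0, 3.0, 4.0],
})

def subtract_group_mean(pdf: pd.DataFrame) -> pd.DataFrame:
    # The grouping columns (`id`, `additional_key`) arrive inside each chunk,
    # mirroring how a GROUPED_MAP pandas UDF receives them in Spark.
    return pdf.assign(v=pdf["v"] - pdf["v"].mean())

# Grouping on both long-typed keys, as in the suggested schema.
result = (
    df.groupby(["id", "additional_key"], group_keys=False)
      .apply(subtract_group_mean)
)
print(result)
```

With long-typed grouping keys, it is immediately clear which columns partition the data and which column carries the values being transformed.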
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]