HyukjinKwon opened a new pull request, #53958:
URL: https://github.com/apache/spark/pull/53958
### What changes were proposed in this pull request?
Add support for keyword arguments in `GroupedData.agg()` to allow a more
Pythonic syntax for specifying column-to-aggregation-function mappings.
### Why are the changes needed?
Currently, the shorthand form of `agg()` only accepts a dictionary, e.g.
`agg({"age": "min"})`, to map columns to aggregate function names. Keyword
arguments provide a more intuitive and Pythonic alternative:
`agg(age="min", salary="max")`, which is consistent with other Python APIs and
reduces verbosity.
### Does this PR introduce _any_ user-facing change?
Yes. Users can now use keyword arguments for aggregations:
Before:
```python
df.groupBy("name").agg({"age": "min", "salary": "max"})
```
After (new syntax):
```python
df.groupBy("name").agg(age="min", salary="max")
```
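For illustration, a minimal end-to-end sketch showing both forms side by side (this assumes a running `SparkSession`; the sample data and column names are made up for the example and are not part of the PR):
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data for illustration only.
df = spark.createDataFrame(
    [("Alice", 25, 3000), ("Alice", 30, 4000), ("Bob", 40, 5000)],
    ["name", "age", "salary"],
)

# Existing dictionary syntax.
df.groupBy("name").agg({"age": "min", "salary": "max"}).show()

# New keyword-argument syntax proposed in this PR; expected to be
# equivalent to the dictionary form above.
df.groupBy("name").agg(age="min", salary="max").show()
```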
### How was this patch tested?
Unit tests added.
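A rough sketch of what such a test could look like (the actual test name, file, and assertions in the PR may differ):
```python
def test_agg_kwargs(self):
    # Hypothetical test data; checks that the keyword-argument form
    # produces the same result as the existing dictionary form.
    df = self.spark.createDataFrame(
        [("Alice", 25, 3000), ("Bob", 40, 5000)], ["name", "age", "salary"]
    )
    expected = df.groupBy("name").agg({"age": "min", "salary": "max"}).collect()
    actual = df.groupBy("name").agg(age="min", salary="max").collect()
    self.assertEqual(sorted(expected), sorted(actual))
```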
### Was this patch authored or co-authored using generative AI tooling?
No.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]