This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new c4a35bb93b3 [SPARK-40041][PYTHON][DOCS] Add document parameters for pyspark.sql.window
c4a35bb93b3 is described below
commit c4a35bb93b31069a8298916d5ce0f6dca1a6bf10
Author: Qian.Sun <[email protected]>
AuthorDate: Thu Aug 11 14:19:18 2022 +0900
[SPARK-40041][PYTHON][DOCS] Add document parameters for pyspark.sql.window
### What changes were proposed in this pull request?
As mentioned in
https://github.com/apache/spark/pull/37450#pullrequestreview-1067543738, this
PR proposes to add `Parameters` documentation for the `orderBy` and
`partitionBy` methods in `pyspark.sql.window`.
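The added sections follow the numpydoc convention used throughout the PySpark docstrings. As a minimal illustration (a generic stand-in function, not Spark code), a `Parameters` block in that style looks like:

```python
def order_by(*cols):
    """Sort the given column names.

    Parameters
    ----------
    cols : str, :class:`Column` or list
        names of columns or expressions
    """
    return sorted(cols)

print(order_by("b", "a"))
```

Sphinx's napoleon/numpydoc extensions parse this `Parameters` section into the rendered API reference, which is what makes the documentation more readable.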
### Why are the changes needed?
To make the documentation more readable.
### Does this PR introduce _any_ user-facing change?
Yes, it changes the documentation.
### How was this patch tested?
Manually ran each doctest.
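Running doctests manually can be reproduced with the stdlib `doctest` module; the function below is a hypothetical stand-in for a documented method, not Spark code:

```python
import doctest

def partition_key(name):
    """Return the partition key for a column name.

    >>> partition_key("sales")
    'sales'
    """
    return name

# testmod() collects and runs every doctest in this module.
results = doctest.testmod()
print(results.failed)
```

`results.failed` is 0 when every example in the docstrings behaves as documented.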
Closes #37476 from dcoliversun/SPARK-40041.
Authored-by: Qian.Sun <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
---
python/pyspark/sql/window.py | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/python/pyspark/sql/window.py b/python/pyspark/sql/window.py
index f895e2010ce..7bb59f36289 100644
--- a/python/pyspark/sql/window.py
+++ b/python/pyspark/sql/window.py
@@ -74,6 +74,11 @@ class Window:
     def partitionBy(*cols: Union["ColumnOrName", List["ColumnOrName_"]]) -> "WindowSpec":
         """
         Creates a :class:`WindowSpec` with the partitioning defined.
+
+        Parameters
+        ----------
+        cols : str, :class:`Column` or list
+            names of columns or expressions
         """
         sc = SparkContext._active_spark_context
         assert sc is not None and sc._jvm is not None
@@ -85,6 +90,11 @@ class Window:
     def orderBy(*cols: Union["ColumnOrName", List["ColumnOrName_"]]) -> "WindowSpec":
         """
         Creates a :class:`WindowSpec` with the ordering defined.
+
+        Parameters
+        ----------
+        cols : str, :class:`Column` or list
+            names of columns or expressions
         """
         sc = SparkContext._active_spark_context
         assert sc is not None and sc._jvm is not None
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]