Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/20400#discussion_r165254437
--- Diff: python/pyspark/sql/window.py ---
@@ -129,11 +131,34 @@ def rangeBetween(start, end):
:param end: boundary end, inclusive.
The frame is unbounded if this is ``Window.unboundedFollowing``, or
any value greater than or equal to min(sys.maxsize, 9223372036854775807).
+
+        >>> from pyspark.sql import functions as F, SparkSession, Window
+        >>> spark = SparkSession.builder.getOrCreate()
+        >>> df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"),
+        ...                             (3, "b")], ["id", "category"])
+        >>> window = Window.orderBy("id").partitionBy("category").rangeBetween(F.currentRow(),
+        ...                                                                    F.lit(1))
--- End diff ---
ditto:
```python
>>> window = Window.orderBy("id").partitionBy("category").rangeBetween(
... F.currentRow(), F.lit(1))
```
or break the line anywhere else, as long as it complies with PEP 8.
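For illustration, the continuation style being suggested can be sketched without a live Spark session. The `_Window` builder below is a hypothetical stand-in, not the real pyspark `Window` (which needs an active SparkSession/JVM gateway); the point is only the PEP 8-compliant line break after the opening parenthesis of the long chained call:

```python
# Hypothetical stand-in for pyspark.sql.Window, used only to demonstrate
# PEP 8 line breaking for a long method chain. The real suggestion in the
# review uses F.currentRow() and F.lit(1) from the PR under discussion.
class _Window:
    def __init__(self, ops=None):
        # Record each builder call so the chain's effect is inspectable.
        self.ops = ops or []

    def orderBy(self, col):
        return _Window(self.ops + [("orderBy", col)])

    def partitionBy(self, col):
        return _Window(self.ops + [("partitionBy", col)])

    def rangeBetween(self, start, end):
        return _Window(self.ops + [("rangeBetween", (start, end))])


# PEP 8 style: break after the opening parenthesis of the final call and
# indent the arguments, rather than splitting the expression mid-chain.
window = _Window().orderBy("id").partitionBy("category").rangeBetween(
    0, 1)

print(window.ops)
```

The same shape applies to the real pyspark call: keep the chain on one line up to `rangeBetween(`, then continue the argument list on an indented line.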