jagadesh-kiran commented on a change in pull request #23946:
[SPARK-26860][PySpark] [SparkR] Fix for RangeBetween and RowsBetween docs to be
in sync with spark documentation
URL: https://github.com/apache/spark/pull/23946#discussion_r262769820
##########
File path: python/pyspark/sql/window.py
##########
@@ -97,6 +97,33 @@ def rowsBetween(start, end):
         and ``Window.currentRow`` to specify special boundary values, rather than
         using integral values directly.
+        A row based boundary is based on the position of the row within the partition.
+        An offset indicates the number of rows above or below the current row at which the frame
+        for the current row starts or ends. For instance, given a row based sliding frame with a
+        lower bound offset of -1 and an upper bound offset of +2, the frame for the row with
+        index 5 would range from index 4 to index 6.
+        """
+        # from pyspark.sql import Window
Review comment:
Thanks @HyukjinKwon for your review. If `>>>` is used, a compilation failure will happen for the .py file, so I will uncomment the same; would that be fine?
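
Since the open question is whether the snippet should stay as a commented-out import or become a runnable `>>>` doctest, here is a minimal sketch of what a runnable example for the new paragraph could look like. This is an assumption, not code from this PR: the local session, the sample data, and the `id`/`category` column names are illustrative only.

```python
# A minimal sketch, assuming a local SparkSession; the data, the id/category
# column names, and the sum aggregation are illustrative, not part of this PR.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as func

spark = SparkSession.builder.master("local[1]").appName("rowsBetweenExample").getOrCreate()

df = spark.createDataFrame(
    [(1, "a"), (2, "a"), (3, "a"), (1, "b"), (2, "b"), (3, "b")],
    ["id", "category"])

# Row-based frame: from one row before the current row (-1) to two rows
# after it (+2), within each category partition ordered by id.
window = Window.partitionBy("category").orderBy("id").rowsBetween(-1, 2)

df.withColumn("sum", func.sum("id").over(window)).show()
spark.stop()
```

Whether this should appear as a `>>>` doctest or stay out of the docstring is exactly what is being discussed above.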