Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20400#discussion_r164950531
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -809,6 +809,45 @@ def ntile(n):
         return Column(sc._jvm.functions.ntile(int(n)))
     
     
    +@since(2.3)
    +def unboundedPreceding():
    +    """
    +    Window function: returns the special frame boundary that represents the first row
    +    in the window partition.
    +    >>> df = spark.createDataFrame([(5,)])
    +    >>> df.select(unboundedPreceding()).show
    +    <bound method DataFrame.show of DataFrame[UNBOUNDED PRECEDING: null]>
    +    """
    +    sc = SparkContext._active_spark_context
    +    return Column(sc._jvm.functions.unboundedPreceding())
    +
    +
    +@since(2.3)
    +def unboundedFollowing():
    +    """
    +    Window function: returns the special frame boundary that represents the last row
    +    in the window partition.
    +    >>> df = spark.createDataFrame([(5,)])
    --- End diff --
    
    I believe we haven't claimed to follow PEP 257 yet, but it would be good to have a newline between the description and the doctest at least, if you don't mind.
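
    For illustration, a minimal sketch of the suggested layout (the blank line separating the description from the doctest is the point; the function name and body here are placeholders, not the actual Spark implementation):

    ```python
    def unboundedPreceding():
        """
        Window function: returns the special frame boundary that represents
        the first row in the window partition.

        >>> unboundedPreceding()  # placeholder body, for illustration only
        'UNBOUNDED PRECEDING'
        """
        return 'UNBOUNDED PRECEDING'
    ```

    With the blank line in place, the doctest block is still picked up and run by the `doctest` module as usual.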

