Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r872956648


##########
python/pyspark/pandas/groupby.py:
##########
@@ -2110,22 +2110,79 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
         groupkey_scols = [psdf._internal.spark_column_for(label) for label in groupkey_labels]
 
         sdf = psdf._internal.spark_frame
-        tmp_col = verify_temp_column_name(sdf, "__row_number__")
+        tmp_row_num_col = verify_temp_column_name(sdf, "__row_number__")
 
+        window = Window.partitionBy(*groupkey_scols)
         # This part is handled differently depending on whether it is a tail or a head.
-        window = (
-            Window.partitionBy(*groupkey_scols).orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).asc())
+        ordered_window = (
+            window.orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).asc())
             if asc
-            else Window.partitionBy(*groupkey_scols).orderBy(
-                F.col(NATURAL_ORDER_COLUMN_NAME).desc()
-            )
+            else window.orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).desc())
         )
 
-        sdf = (
-            sdf.withColumn(tmp_col, F.row_number().over(window))
-            .filter(F.col(tmp_col) <= n)
-            .drop(tmp_col)
-        )
+        if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+
+            sdf = (
+                sdf.withColumn(tmp_row_num_col, F.row_number().over(ordered_window))
+                .filter(F.col(tmp_row_num_col) <= n)
+                .drop(tmp_row_num_col)
+            )
+        else:
+            # Pandas supports Groupby positional indexing since v1.4.0
+            # https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#groupby-positional-indexing
+            #
+            # To support groupby positional indexing, we need to add two helper columns to
+            # filter the target rows:
+            # - Add `__row_number__` and `__group_count__` columns.
+            # - Use `F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= positional_index_number` to
+            #   filter the target rows.
+            # - Then drop `__row_number__` and `__group_count__` columns.
+            #
+            # For example, for the dataframe:
+            # >>> df = ps.DataFrame([["g", "g0"],
+            # ...                   ["g", "g1"],
+            # ...                   ["g", "g2"],
+            # ...                   ["g", "g3"],
+            # ...                   ["h", "h0"],
+            # ...                   ["h", "h1"]], columns=["A", "B"])
+            # >>> df.groupby("A").head(-1)
+            #
+            # Below is an example to show the `__row_number__` and `__group_count__` columns
+            # for the above df:
+            # >>> sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+            #        .withColumn(tmp_cnt_col, F.count("*").over(window)).show()
+            # +---------------+------------+---+---+------------+--------------+---------------+
+            # |__index_level..|__groupkey..|  A|  B|__natural_..|__row_number__|__group_count__|
+            # +---------------+------------+---+---+------------+--------------+---------------+
+            # |              0|           g|  g| g0| 17179869184|             1|              4|
+            # |              1|           g|  g| g1| 42949672960|             2|              4|
+            # |              2|           g|  g| g2| 60129542144|             3|              4|
+            # |              3|           g|  g| g3| 85899345920|             4|              4|
+            # |              4|           h|  h| h0|111669149696|             1|              2|
+            # |              5|           h|  h| h1|128849018880|             2|              2|
+            # +---------------+------------+---+---+------------+--------------+---------------+
+            #
+            # The limit n is `-1`, so we need to filter rows[:-1] in each group:
+            #
+            # >>> sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+            #        .withColumn(tmp_cnt_col, F.count("*").over(window))
+            #        .filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= -1).show()

Review Comment:
   Thanks for the review, @zhengruifeng! This is a good point to avoid an extra WindowExec.
   
   Currently, using `F.row_number().over(window_desc) > 1` as a filter is not allowed: `pyspark.sql.utils.AnalysisException: It is not allowed to use window functions inside WHERE clause`.
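   
   For reference, here is a minimal standalone sketch that reproduces that analyzer error (the local SparkSession and the toy DataFrame below are just illustrative, not part of the PR):
   
   ```python
   # A window function used directly inside .filter() is rejected by the analyzer
   from pyspark.sql import SparkSession, Window, functions as F
   
   spark = SparkSession.builder.getOrCreate()
   sdf = spark.createDataFrame([("g", "g0"), ("g", "g1"), ("h", "h0")], ["A", "B"])
   window_desc = Window.partitionBy("A").orderBy(F.col("B").desc())
   
   try:
       sdf.filter(F.row_number().over(window_desc) > 1).show()
   except Exception as e:
       # pyspark.sql.utils.AnalysisException: It is not allowed to use window
       # functions inside WHERE clause
       print(type(e).__name__, e)
   ```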
   
   But following ruifeng's idea, an alternative approach could be:
   
   ```python
   # Alternative way: Reverse Sort
   sdf = (
       # Generate the reverse row number (WindowExec + Sort1)
       sdf.withColumn(tmp_row_num_col, F.row_number().over(window_desc))
       # Filter rows by the reverse row number
       .filter(F.col(tmp_row_num_col) > -n)
       # Extra reverse sort to keep the original sort order (Sort2)
       .sortWithinPartitions(F.col(tmp_row_num_col), ascending=False)
       .drop(tmp_row_num_col)
   )
   ```
   <details><summary>== Physical Plan ==</summary>
   
   ```
   == Physical Plan ==
   AdaptiveSparkPlan isFinalPlan=false
   +- Project [__groupkey_0__#15, A#1, B#2]
      +- Sort [__row_number__#22 DESC NULLS LAST], false, 0
         +- Project [__groupkey_0__#15, A#1, B#2, __row_number__#22]
            +- Filter (__row_number__#22 > 1)
               +- Window [row_number() windowspecdefinition(__groupkey_0__#15, __natural_order__#6L DESC NULLS LAST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS __row_number__#22], [__groupkey_0__#15], [__natural_order__#6L DESC NULLS LAST]
                  +- Sort [__groupkey_0__#15 ASC NULLS FIRST, __natural_order__#6L DESC NULLS LAST], false, 0
                     +- Exchange hashpartitioning(__groupkey_0__#15, 200), ENSURE_REQUIREMENTS, [id=#32]
                        +- Project [A#1 AS __groupkey_0__#15, A#1, B#2, __natural_order__#6L]
                           +- Project [A#1, B#2, monotonically_increasing_id() AS __natural_order__#6L]
                              +- Scan ExistingRDD arrow[__index_level_0__#0L,A#1,B#2]
   ```
   
   </details>
   
   
   Compare with the current implementation:
   
   ```python
   # Current way: Group Count Helper
   sdf = (
       # Generate the row number (WindowExec1 + Sort)
       sdf.withColumn(tmp_row_num_col, F.row_number().over(ordered_window))
       # Generate the Group Count (WindowExec2)
       .withColumn(tmp_cnt_col, F.count("*").over(window))
       .filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= n)
       .drop(tmp_row_num_col, tmp_cnt_col)
   )
   ```
   
   <details><summary>== Physical Plan ==</summary>
   
   ```
   == Physical Plan ==
   AdaptiveSparkPlan isFinalPlan=false
   +- Project [__groupkey_0__#15, A#1, B#2]
      +- Filter ((cast(__row_number__#22 as bigint) - __group_count__#30L) <= -1)
         +- Window [count(1) windowspecdefinition(__groupkey_0__#15, specifiedwindowframe(RowFrame, unboundedpreceding$(), unboundedfollowing$())) AS __group_count__#30L], [__groupkey_0__#15]
            +- Project [__groupkey_0__#15, A#1, B#2, __row_number__#22]
               +- Window [row_number() windowspecdefinition(__groupkey_0__#15, __natural_order__#6L ASC NULLS FIRST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS __row_number__#22], [__groupkey_0__#15], [__natural_order__#6L ASC NULLS FIRST]
                  +- Sort [__groupkey_0__#15 ASC NULLS FIRST, __natural_order__#6L ASC NULLS FIRST], false, 0
                     +- Exchange hashpartitioning(__groupkey_0__#15, 200), ENSURE_REQUIREMENTS, [id=#32]
                        +- Project [A#1 AS __groupkey_0__#15, A#1, B#2, __natural_order__#6L]
                           +- Project [A#1, B#2, monotonically_increasing_id() AS __natural_order__#6L]
                              +- Scan ExistingRDD arrow[__index_level_0__#0L,A#1,B#2]
   ```
   </details>
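   
   (For reference, the plans above can be printed with `DataFrame.explain`; a quick sketch, assuming `sdf` is the frame produced by either snippet above:)
   
   ```python
   # Print the physical plan of whichever variant `sdf` was built with
   sdf.explain()                    # physical plan only, as shown above
   sdf.explain(mode="formatted")    # Spark 3.0+: formatted variant with node details
   ```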
   
   We can see:
   - Current way (Group Count Helper): 2 `WindowExec` + 1 `sort` + 1 `shuffle`.
   - Alternative way (Reverse Sort): 1 `WindowExec` + 2 `sort` + 1 `shuffle`.
   
   I had an offline discussion with ruifeng; the conclusion is that we'd better keep the current implementation. Thanks for the discussion, I learned a lot from it!


