[GitHub] [spark] Yikun commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing
Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r873426625

## python/pyspark/pandas/groupby.py:
## @@ -2110,22 +2110,79 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
         groupkey_scols = [psdf._internal.spark_column_for(label) for label in groupkey_labels]

         sdf = psdf._internal.spark_frame
-        tmp_col = verify_temp_column_name(sdf, "__row_number__")
+        tmp_row_num_col = verify_temp_column_name(sdf, "__row_number__")
+        window = Window.partitionBy(*groupkey_scols)
         # This part is handled differently depending on whether it is a tail or a head.
-        window = (
-            Window.partitionBy(*groupkey_scols).orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).asc())
+        ordered_window = (
+            window.orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).asc())
             if asc
-            else Window.partitionBy(*groupkey_scols).orderBy(
-                F.col(NATURAL_ORDER_COLUMN_NAME).desc()
-            )
+            else window.orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).desc())
         )

-        sdf = (
-            sdf.withColumn(tmp_col, F.row_number().over(window))
-            .filter(F.col(tmp_col) <= n)
-            .drop(tmp_col)
-        )
+        if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+            sdf = (
+                sdf.withColumn(tmp_row_num_col, F.row_number().over(ordered_window))
+                .filter(F.col(tmp_row_num_col) <= n)
+                .drop(tmp_row_num_col)
+            )
+        else:
+            # Pandas supports Groupby positional indexing since v1.4.0
+            # https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#groupby-positional-indexing
+            #
+            # To support groupby positional indexing, we need to add two columns to help
+            # filter the target rows:
+            # - Add `__row_number__` and `__group_count__` columns.
+            # - Use `F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= positional_index_number`
+            #   to filter the target rows.
+            # - Then drop the `__row_number__` and `__group_count__` columns.
+            #
+            # For example, for the dataframe:
+            # >>> df = ps.DataFrame([["g", "g0"],
+            # ...                    ["g", "g1"],
+            # ...                    ["g", "g2"],
+            # ...                    ["g", "g3"],
+            # ...                    ["h", "h0"],
+            # ...                    ["h", "h1"]], columns=["A", "B"])
+            # >>> df.groupby("A").head(-1)
+            #
+            # Below is an example showing the `__row_number__` and `__group_count__`
+            # columns for the above df:
+            # >>> sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+            # ...    .withColumn(tmp_cnt_col, F.count("*").over(window)).show()
+            # +---------------+------------+---+---+------------+--------------+---------------+
+            # |__index_level..|__groupkey..|  A|  B|__natural_..|__row_number__|__group_count__|
+            # +---------------+------------+---+---+------------+--------------+---------------+
+            # |              0|           g|  g| g0| 17179869184|             1|              4|
+            # |              1|           g|  g| g1| 42949672960|             2|              4|
+            # |              2|           g|  g| g2| 60129542144|             3|              4|
+            # |              3|           g|  g| g3| 85899345920|             4|              4|
+            # |              4|           h|  h| h0|111669149696|             1|              2|
+            # |              5|           h|  h| h1|128849018880|             2|              2|
+            # +---------------+------------+---+---+------------+--------------+---------------+
+            #
+            # The limit n is `-1`, so we need to filter rows[:-1] in each group:
+            #
+            # >>> sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+            # ...    .withColumn(tmp_cnt_col, F.count("*").over(window))
+            # ...    .filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= -1).show()

Review Comment:
@zhengruifeng I think `lag` is better here: 1 `WindowExec` + 2 `Sort` + 1 `shuffle`, the same cost as the original positive-index implementation. Physical plan:
```
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Project [__index_level_0__#0, __groupkey_0__#19L, a#1L, b#2L, c#3L, __natural_order__#8L]
   +- Filter isnull(__tmp_lag__#447)
      +- Window [lag(0, -2, null) windowspecdefinition(__groupkey_0__#19L, __natural_order__#8L ASC NULLS FIRST, specifiedwindowframe(RowFrame, -2, -2)) AS __tmp_lag__#447], [__groupkey_0__#19L], [__natural_order__#8L ASC NULLS FIRST]
         +- Sort [__groupkey_0__#19L ASC NULLS FIRST, __natural_order__#8L ASC NULLS FIRST], false, 0
```
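For reference, below is a minimal, self-contained sketch of the `lag`-based filter discussed above. It is not the PR code: the session, the toy frame, and the `__tmp_lag__` column name are illustrative. The idea is that for a positive `n`, `lag(lit(0), n)` is NULL exactly on the first `n` rows of each ordered partition, so a single `isnull()` filter replaces the `row_number()` comparison.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("g", "g0"), ("g", "g1"), ("g", "g2"), ("g", "g3"), ("h", "h0"), ("h", "h1")],
    ["A", "B"],
)

n = 2  # head(2): keep the first n rows of each group
w = Window.partitionBy("A").orderBy("B")

head_n = (
    # lag(lit(0), n) looks n rows back, so it is NULL only for the first n rows
    df.withColumn("__tmp_lag__", F.lag(F.lit(0), n).over(w))
    .where(F.isnull("__tmp_lag__"))
    .drop("__tmp_lag__")
)
head_n.explain()  # a single Window over one Sort and one shuffle, as argued above
head_n.show()
```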
[GitHub] [spark] Yikun commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing
Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r873437306

## python/pyspark/pandas/groupby.py:
## @@ -2110,22 +2110,60 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
         groupkey_scols = [psdf._internal.spark_column_for(label) for label in groupkey_labels]

         sdf = psdf._internal.spark_frame
-        tmp_col = verify_temp_column_name(sdf, "__row_number__")
+        window = Window.partitionBy(*groupkey_scols)
         # This part is handled differently depending on whether it is a tail or a head.
-        window = (
-            Window.partitionBy(*groupkey_scols).orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).asc())
+        ordered_window = (
+            window.orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).asc())
             if asc
-            else Window.partitionBy(*groupkey_scols).orderBy(
-                F.col(NATURAL_ORDER_COLUMN_NAME).desc()
-            )
+            else window.orderBy(F.col(NATURAL_ORDER_COLUMN_NAME).desc())
         )

-        sdf = (
-            sdf.withColumn(tmp_col, F.row_number().over(window))
-            .filter(F.col(tmp_col) <= n)
-            .drop(tmp_col)
-        )
+        if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+            tmp_row_num_col = verify_temp_column_name(sdf, "__row_number__")
+            sdf = (
+                sdf.withColumn(tmp_row_num_col, F.row_number().over(ordered_window))
+                .filter(F.col(tmp_row_num_col) <= n)
+                .drop(tmp_row_num_col)
+            )

Review Comment:
BTW, we could also consider unifying this to the `lag` way:

```python
sdf = (
    sdf.withColumn(tmp_lag_col, F.lag(F.lit(0), n).over(window))  # for the positive case
    .where(F.isnull(F.col(tmp_lag_col)))
    .drop(tmp_lag_col)
)
```

If you think it's necessary, I can submit a separate PR to address it. Theoretically, `lag` performs better than `row_number`, especially when the number of rows is very large.
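A quick, hypothetical check of that suggestion (the names `df`, `w`, and `n` are illustrative, not from the PR): for positive `n`, the `lag`-based filter selects exactly the same rows as the existing `row_number` filter.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.range(10).withColumn("k", F.col("id") % 3)
w = Window.partitionBy("k").orderBy("id")
n = 2

via_row_number = (
    df.withColumn("rn", F.row_number().over(w)).filter(F.col("rn") <= n).drop("rn")
)
via_lag = (
    df.withColumn("lg", F.lag(F.lit(0), n).over(w)).where(F.isnull("lg")).drop("lg")
)

# Both directions of exceptAll are empty, so the two results are identical.
assert via_row_number.exceptAll(via_lag).count() == 0
assert via_lag.exceptAll(via_row_number).count() == 0
```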
[GitHub] [spark] Yikun commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing
Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r873100372

## python/pyspark/pandas/groupby.py:
## @@ -2110,22 +2110,79 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
[same hunk as quoted in the first message above]

Review Comment:
@zhengruifeng I will have a try soon, thanks!
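As a companion to the quoted hunk, here is a minimal sketch of its negative-index approach with illustrative names only (not the PR code): add a per-group row number and a per-group count, then keep the rows where `row_number - group_count <= n`.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(
    [("g", "g0"), ("g", "g1"), ("g", "g2"), ("g", "g3"), ("h", "h0"), ("h", "h1")],
    ["A", "B"],
)

n = -1  # head(-1): keep rows[:-1] of each group
window = Window.partitionBy("A")      # unordered: count(*) sees the whole group
ordered_window = window.orderBy("B")  # stand-in for the natural-order column

result = (
    sdf.withColumn("__row_number__", F.row_number().over(ordered_window))
    .withColumn("__group_count__", F.count("*").over(window))
    # row_number - group_count is 0 for the last row, -1 for the one before, ...
    .filter(F.col("__row_number__") - F.col("__group_count__") <= n)
    .drop("__row_number__", "__group_count__")
)
result.show()  # g0, g1, g2 and h0 survive; the last row of each group is dropped
```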
[GitHub] [spark] Yikun commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing
Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r872956648

## python/pyspark/pandas/groupby.py:
## @@ -2110,22 +2110,79 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
[same hunk as quoted in the first message above]

Review Comment:
Thanks for the review, @zhengruifeng! This is a good point to avoid an extra WindowExec. Currently, using `F.row_number().over(window_desc) > 1` as a filter is not allowed: `pyspark.sql.utils.AnalysisException: It is not allowed to use window functions inside WHERE clause`. But following ruifeng's idea, an alternative way can be:

```python
# Alternative way: Reverse Sort
sdf = (
    # Generate the reverse row number (WindowExec + Sort1)
    sdf.withColumn(tmp_row_num_col, F.row_number().over(window_desc))
    # Filter the rows according to the reverse row number
    .filter(F.col(tmp_row_num_col) > -n)
    # Extra reverse sort to keep the original sort behavior (Sort2)
    .sortWithinPartitions(F.col(tmp_row_num_col)
```
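The archived message cuts off mid-expression above. Below is a runnable completion of that "reverse sort" sketch, under the assumption that the final call sorts back by the ordering column; all names here (`sdf`, `window_desc`, `ord`) are illustrative, not from the PR.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(
    [("g", 0), ("g", 1), ("g", 2), ("g", 3), ("h", 0), ("h", 1)], ["A", "ord"]
)

n = -1  # head(-1): drop the last row of each group
window_desc = Window.partitionBy("A").orderBy(F.col("ord").desc())

result = (
    # Generate the reverse row number (WindowExec + Sort1)
    sdf.withColumn("__row_number__", F.row_number().over(window_desc))
    # In reverse order, the last |n| rows of each group have row number <= -n
    .filter(F.col("__row_number__") > -n)
    # Extra sort to restore the original order (Sort2)
    .sortWithinPartitions("ord")
    .drop("__row_number__")
)
result.show()
```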
[GitHub] [spark] Yikun commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing
Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r870137936

## python/pyspark/pandas/groupby.py:
## @@ -2228,6 +2299,20 @@ def tail(self, n: int = 5) -> FrameLike:
         6    5
         9    8
         Name: b, dtype: int64
+
+        Supports Groupby positional indexing since pandas on Spark 3.4 (with pandas 1.4+):
+
+        >>> df = ps.DataFrame([["g", "g0"],
+        ...                    ["g", "g1"],
+        ...                    ["g", "g2"],
+        ...                    ["g", "g3"],
+        ...                    ["h", "h0"],
+        ...                    ["h", "h1"]], columns=["A", "B"])
+        >>> df.groupby("A").tail(-1)  # doctest: +SKIP

Review Comment:
I recorded JIRA SPARK-39150 as a reminder to remove `# doctest: +SKIP` once we upgrade pandas to 1.4.x.
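For context, the behavior that the skipped doctest exercises can be reproduced with plain pandas (1.4 or later): with a negative `n`, `tail` keeps everything except the first `|n|` rows of each group. The printed output below is what pandas 1.4 should produce for this frame.

```python
import pandas as pd  # requires pandas >= 1.4

df = pd.DataFrame(
    [["g", "g0"], ["g", "g1"], ["g", "g2"], ["g", "g3"], ["h", "h0"], ["h", "h1"]],
    columns=["A", "B"],
)
print(df.groupby("A").tail(-1))
#    A   B
# 1  g  g1
# 2  g  g2
# 3  g  g3
# 5  h  h1
```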
[GitHub] [spark] Yikun commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing
Yikun commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r870127549

## python/pyspark/pandas/groupby.py:
## @@ -2121,11 +2121,22 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
             )
         )

-        sdf = (
-            sdf.withColumn(tmp_col, F.row_number().over(window))
-            .filter(F.col(tmp_col) <= n)
-            .drop(tmp_col)
-        )
+        if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+            sdf = (
+                sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+                .filter(F.col(tmp_row_num_col) <= n)
+                .drop(tmp_row_num_col)
+            )
+        else:
+            # Pandas supports Groupby positional indexing since v1.4.0
+            # https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#groupby-positional-indexing
+            tmp_cnt_col = verify_temp_column_name(sdf, "__group_count__")
+            sdf = (
+                sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+                .withColumn(tmp_cnt_col, F.count("*").over(Window.partitionBy(*groupkey_scols)))

Review Comment:
Good suggestion. I renamed the original window to `ordered_window`; `window` is now the new un-ordered window.

## python/pyspark/pandas/groupby.py:
## @@ -2121,11 +2121,22 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
[same hunk as above, continuing with:]
+                .filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= n)

Review Comment:
Added!
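A side note on the un-ordered window in that hunk, shown with a minimal illustration (made-up names, not PR code): once a window has an `orderBy`, its default frame is UNBOUNDED PRECEDING to CURRENT ROW, so `count(*)` becomes a running count rather than the group size. That is why the count is computed over the plain `partitionBy` window.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("g", 1), ("g", 2), ("g", 3)], ["k", "v"])

plain = Window.partitionBy("k")                 # frame covers the whole group
ordered = Window.partitionBy("k").orderBy("v")  # default frame is a running one

df.select(
    "k",
    "v",
    F.count("*").over(plain).alias("group_count"),      # 3, 3, 3
    F.count("*").over(ordered).alias("running_count"),  # 1, 2, 3
).show()
```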