[GitHub] [spark] xinrong-databricks commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing

2022-05-11 Thread GitBox


xinrong-databricks commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r870855910


##
python/pyspark/pandas/groupby.py:
##
@@ -2228,6 +2299,20 @@ def tail(self, n: int = 5) -> FrameLike:
 65
 98
 Name: b, dtype: int64
+
+# Supports groupby positional indexing since pandas-on-Spark 3.4 (with pandas 1.4+)
+>>> df = ps.DataFrame([["g", "g0"],
+...                    ["g", "g1"],
+...                    ["g", "g2"],
+...                    ["g", "g3"],
+...                    ["h", "h0"],
+...                    ["h", "h1"]], columns=["A", "B"])
+>>> df.groupby("A").tail(-1) # doctest: +SKIP

Review Comment:
   That's nice, thanks!
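The negative-`n` semantics shown in the doctest can be emulated in plain Python. This is a hypothetical sketch with the doctest's toy data, not the pandas-on-Spark implementation: `tail(-1)` keeps, for each group, everything except the first row.

```python
# Hypothetical pure-Python emulation of the groupby tail(-1) doctest above;
# toy data only, not the pandas-on-Spark implementation.
data = [("g", "g0"), ("g", "g1"), ("g", "g2"), ("g", "g3"), ("h", "h0"), ("h", "h1")]

def groupby_tail(rows, n):
    """Emulate groupby tail(n): last n rows per group; for negative n,
    all rows except the first |n| of each group."""
    groups = {}  # dicts preserve insertion order in Python 3.7+
    for key, value in rows:
        groups.setdefault(key, []).append(value)
    kept = []
    for values in groups.values():
        kept.extend(values[-n:] if n != 0 else [])
    return kept

print(groupby_tail(data, -1))  # → ['g1', 'g2', 'g3', 'h1']
```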



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] xinrong-databricks commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing

2022-05-10 Thread GitBox


xinrong-databricks commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r869454838


##
python/pyspark/pandas/groupby.py:
##
@@ -2121,11 +2121,22 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
 )
 )
 
-sdf = (
-sdf.withColumn(tmp_col, F.row_number().over(window))
-.filter(F.col(tmp_col) <= n)
-.drop(tmp_col)
-)
+if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+sdf = (
+sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+.filter(F.col(tmp_row_num_col) <= n)
+.drop(tmp_row_num_col)
+)
+else:
+# Pandas supports Groupby positional indexing since v1.4.0
+# https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#groupby-positional-indexing
+tmp_cnt_col = verify_temp_column_name(sdf, "__group_count__")
+sdf = (
+sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+.withColumn(tmp_cnt_col, F.count("*").over(Window.partitionBy(*groupkey_scols)))

Review Comment:
   nit:
   We may also extract `Window.partitionBy(*groupkey_scols)` into a variable, since it is used 3 times in this function.
   Meanwhile, renaming `window` to something more specific may help readers understand the code (and see how it differs from the window above).
   The current code looks good enough if you prefer not to change it.





[GitHub] [spark] xinrong-databricks commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing

2022-05-10 Thread GitBox


xinrong-databricks commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r869460337


##
python/pyspark/pandas/groupby.py:
##
@@ -2121,11 +2121,22 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
 )
 )
 
-sdf = (
-sdf.withColumn(tmp_col, F.row_number().over(window))
-.filter(F.col(tmp_col) <= n)
-.drop(tmp_col)
-)
+if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+sdf = (
+sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+.filter(F.col(tmp_row_num_col) <= n)
+.drop(tmp_row_num_col)
+)
+else:
+# Pandas supports Groupby positional indexing since v1.4.0
+# https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#groupby-positional-indexing
+tmp_cnt_col = verify_temp_column_name(sdf, "__group_count__")
+sdf = (
+sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+.withColumn(tmp_cnt_col, F.count("*").over(Window.partitionBy(*groupkey_scols)))
+.filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= n)

Review Comment:
   Below as a reference in case you need it.
   ```py
   >>> sdf.withColumn(tmp_row_num_col, F.row_number().over(window)).withColumn(tmp_cnt_col, F.count("*").over(Window.partitionBy(*groupkey_scols))).show()
   +-----------------+--------------+---+---+-----------------+--------------+---------------+
   |__index_level_0__|__groupkey_0__|  A|  B|__natural_order__|__row_number__|__group_count__|
   +-----------------+--------------+---+---+-----------------+--------------+---------------+
   |                0|             g|  g| g0|      17179869184|             1|              4|
   |                1|             g|  g| g1|      42949672960|             2|              4|
   |                2|             g|  g| g2|      60129542144|             3|              4|
   |                3|             g|  g| g3|      85899345920|             4|              4|
   |                4|             h|  h| h0|     111669149696|             1|              2|
   |                5|             h|  h| h1|     128849018880|             2|              2|
   +-----------------+--------------+---+---+-----------------+--------------+---------------+

   >>> sdf.withColumn(tmp_row_num_col, F.row_number().over(window)).withColumn(tmp_cnt_col, F.count("*").over(Window.partitionBy(*groupkey_scols))).filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= -1).show()
   +-----------------+--------------+---+---+-----------------+--------------+---------------+
   |__index_level_0__|__groupkey_0__|  A|  B|__natural_order__|__row_number__|__group_count__|
   +-----------------+--------------+---+---+-----------------+--------------+---------------+
   |                0|             g|  g| g0|      17179869184|             1|              4|
   |                1|             g|  g| g1|      42949672960|             2|              4|
   |                2|             g|  g| g2|      60129542144|             3|              4|
   |                4|             h|  h| h0|     111669149696|             1|              2|
   +-----------------+--------------+---+---+-----------------+--------------+---------------+
   ```
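The filter `row_number - group_count <= n` can also be emulated in plain Python. This is a hypothetical sketch over the same toy data as the reference tables, not the PySpark code: for negative `n`, the condition keeps all but the last `|n|` rows of each group.

```python
# Hypothetical pure-Python sketch of the filter `row_number - group_count <= n`
# for negative n, using the same toy data as the reference tables above.
# This is an illustration, not the PySpark implementation.
rows = [("g", "g0"), ("g", "g1"), ("g", "g2"), ("g", "g3"), ("h", "h0"), ("h", "h1")]

def positional_head(rows, n):
    """Emulate groupby head(n) for negative n: drop the last |n| rows per group."""
    counts = {}  # per-group row counts (the __group_count__ column)
    for key, _ in rows:
        counts[key] = counts.get(key, 0) + 1
    kept, seen = [], {}
    for key, value in rows:
        seen[key] = seen.get(key, 0) + 1  # 1-based row number (the __row_number__ column)
        if seen[key] - counts[key] <= n:  # the filter condition from the diff
            kept.append(value)
    return kept

print(positional_head(rows, -1))  # → ['g0', 'g1', 'g2', 'h0']
```

The printed result matches the filtered table above: the last row of each group (`g3`, `h1`) is dropped.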







[GitHub] [spark] xinrong-databricks commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing

2022-05-10 Thread GitBox


xinrong-databricks commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r869458670


##
python/pyspark/pandas/groupby.py:
##
@@ -2121,11 +2121,22 @@ def _limit(self, n: int, asc: bool) -> FrameLike:
 )
 )
 
-sdf = (
-sdf.withColumn(tmp_col, F.row_number().over(window))
-.filter(F.col(tmp_col) <= n)
-.drop(tmp_col)
-)
+if n >= 0 or LooseVersion(pd.__version__) < LooseVersion("1.4.0"):
+sdf = (
+sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+.filter(F.col(tmp_row_num_col) <= n)
+.drop(tmp_row_num_col)
+)
+else:
+# Pandas supports Groupby positional indexing since v1.4.0
+# https://pandas.pydata.org/docs/whatsnew/v1.4.0.html#groupby-positional-indexing
+tmp_cnt_col = verify_temp_column_name(sdf, "__group_count__")
+sdf = (
+sdf.withColumn(tmp_row_num_col, F.row_number().over(window))
+.withColumn(tmp_cnt_col, F.count("*").over(Window.partitionBy(*groupkey_scols)))
+.filter(F.col(tmp_row_num_col) - F.col(tmp_cnt_col) <= n)

Review Comment:
   The filter condition is cool but may not be easy to understand. Shall we add 
(an example as?) a comment?
   
   Below is just a reference.
   ```py
   
+-+--+---+---+-+--+---+
   |__index_level_0__|__groupkey_0__|  A|  
B|__natural_order__|__row_number__|__group_count__|
   
+-+--+---+---+-+--+---+
   |0| g|  g| g0|  17179869184| 1|  
4|
   |1| g|  g| g1|  42949672960| 2|  
4|
   |2| g|  g| g2|  60129542144| 3|  
4|
   |4| h|  h| h0| 111669149696| 1|  
2|
   
+-+--+---+---+-+--+---+
   
   ```







[GitHub] [spark] xinrong-databricks commented on a diff in pull request #36464: [SPARK-38947][PYTHON] Supports groupby positional indexing

2022-05-10 Thread GitBox


xinrong-databricks commented on code in PR #36464:
URL: https://github.com/apache/spark/pull/36464#discussion_r869421364


##
python/pyspark/pandas/tests/test_groupby.py:
##
@@ -2538,40 +2538,19 @@ def test_head(self):
 )
 psdf = ps.from_pandas(pdf)
 
-self.assert_eq(
-pdf.groupby("a").head(2).sort_index(), psdf.groupby("a").head(2).sort_index()
-)
-self.assert_eq(
-pdf.groupby("a").head(-2).sort_index(), psdf.groupby("a").head(-2).sort_index()
-)
-self.assert_eq(
-pdf.groupby("a").head(10).sort_index(), psdf.groupby("a").head(10).sort_index()
-)
-
-self.assert_eq(
-pdf.groupby("a")["b"].head(2).sort_index(), psdf.groupby("a")["b"].head(2).sort_index()
-)
-self.assert_eq(
-pdf.groupby("a")["b"].head(-2).sort_index(),
-psdf.groupby("a")["b"].head(-2).sort_index(),
-)
-self.assert_eq(
-pdf.groupby("a")["b"].head(10).sort_index(),
-psdf.groupby("a")["b"].head(10).sort_index(),
-)
-
-self.assert_eq(
-pdf.groupby("a")[["b"]].head(2).sort_index(),
-psdf.groupby("a")[["b"]].head(2).sort_index(),
-)
-self.assert_eq(
-pdf.groupby("a")[["b"]].head(-2).sort_index(),
-psdf.groupby("a")[["b"]].head(-2).sort_index(),
-)
-self.assert_eq(
-pdf.groupby("a")[["b"]].head(10).sort_index(),
-psdf.groupby("a")[["b"]].head(10).sort_index(),
-)
+for limit in (2, 10, -2, -10, -1):

Review Comment:
   Nice!!
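The refactor being praised — one loop over the limits instead of nine near-identical assertions — can be sketched with a toy stand-in. The `head` function below is hypothetical (Python slicing, not pandas), used only to show the loop pattern:

```python
# Hypothetical sketch of the loop-based refactor praised above: one loop over
# limits replaces nine near-identical assertions. Toy head(), not pandas.
def head(rows, n):
    # pandas-like head semantics: first n rows; for negative n, all but the last |n|
    return rows[:n]

rows = ["g0", "g1", "g2", "g3"]
expected = {2: ["g0", "g1"], 10: rows, -2: ["g0", "g1"], -10: [], -1: ["g0", "g1", "g2"]}
for limit in (2, 10, -2, -10, -1):  # the same limits as the refactored test
    assert head(rows, limit) == expected[limit]
print("all limits pass")
```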


