zhengruifeng commented on code in PR #37845:
URL: https://github.com/apache/spark/pull/37845#discussion_r969130142
##########
python/pyspark/pandas/frame.py:
##########
@@ -1454,11 +1462,196 @@ def corr(self, method: str = "pearson") -> "DataFrame":
There are behavior differences between pandas-on-Spark and pandas.
* the `method` argument only accepts 'pearson', 'spearman'
-        * the data should not contain NaNs. pandas-on-Spark will return an error.
-        * pandas-on-Spark doesn't support the following argument(s).
+        * if the `method` is `spearman`, the data should not contain NaNs.
+        * if the `method` is `spearman`, the `min_periods` argument is not supported.
+ """
+        if method not in ["pearson", "spearman", "kendall"]:
+            raise ValueError(f"Invalid method {method}")
+        if method == "kendall":
+            raise NotImplementedError("method doesn't support kendall for now")
+        if min_periods is not None and not isinstance(min_periods, int):
+            raise TypeError(f"Invalid min_periods type {type(min_periods).__name__}")
Review Comment:
I am also not sure, but `min_periods` is expected to be an `int` in pandas as well.
I think `pdf.corr('pearson', min_periods=1.4)` happens to work in pandas only
because pandas is missing this validation.
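
The validation being discussed can be exercised in isolation; `validate_corr_args` below is a hypothetical standalone restatement of the checks in the diff, not the actual pyspark.pandas API:

```python
# Hypothetical standalone version of the argument validation from the diff
# (the real checks live inside pyspark.pandas.frame.DataFrame.corr).
def validate_corr_args(method, min_periods=None):
    if method not in ["pearson", "spearman", "kendall"]:
        raise ValueError(f"Invalid method {method}")
    if method == "kendall":
        raise NotImplementedError("method doesn't support kendall for now")
    # Unlike pandas, which silently accepts a float here, reject non-int values.
    if min_periods is not None and not isinstance(min_periods, int):
        raise TypeError(f"Invalid min_periods type {type(min_periods).__name__}")


validate_corr_args("pearson", 1)  # OK: int is accepted
try:
    validate_corr_args("pearson", 1.4)  # a float slips through in pandas
except TypeError as e:
    print(e)  # Invalid min_periods type float
```

This is stricter than pandas, which performs no type check on `min_periods` and lets a float flow into the computation.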
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]