This is an automated email from the ASF dual-hosted git repository.
ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 0ccf53ae6faa [SPARK-49609][PYTHON][FOLLOWUP] Correct the typehint for
`filter` and `where`
0ccf53ae6faa is described below
commit 0ccf53ae6faabc4420317d379da77a299794c84c
Author: Ruifeng Zheng <[email protected]>
AuthorDate: Wed Sep 25 19:21:36 2024 +0800
[SPARK-49609][PYTHON][FOLLOWUP] Correct the typehint for `filter` and
`where`
### What changes were proposed in this pull request?
Correct the typehint for `filter` and `where`
### Why are the changes needed?
An input `str` is parsed as a SQL expression and should not be treated as a column name, so the `ColumnOrName` alias (which implies column-name resolution) is the wrong hint; `Union[Column, str]` is accurate.
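To illustrate why the distinction matters, here is a minimal, self-contained sketch of the runtime check that `connect/dataframe.py` performs in `where`. The `Column` class and `where` function below are simplified stand-ins for the PySpark originals (no Spark session involved); only the `isinstance` check mirrors the real code.

```python
from typing import Union


class Column:
    """Stand-in for pyspark.sql.Column, for illustration only."""

    def __init__(self, expr: str):
        self.expr = expr


def where(condition: Union[Column, str]) -> str:
    # Mirrors the runtime check in connect/dataframe.py: a str is
    # accepted as a SQL expression (not resolved as a column name),
    # and a Column is used directly. Anything else is rejected.
    if not isinstance(condition, (str, Column)):
        raise TypeError("NOT_COLUMN_OR_STR: condition must be Column or str")
    return condition if isinstance(condition, str) else condition.expr
```

With the corrected hint, both `where("age > 3")` (a SQL expression string) and `where(Column("age > 3"))` type-check, while a non-`Column`, non-`str` argument is rejected at runtime.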
### Does this PR introduce _any_ user-facing change?
No; this is a documentation-only change to the type hints.
### How was this patch tested?
Existing CI.
### Was this patch authored or co-authored using generative AI tooling?
no
Closes #48244 from zhengruifeng/py_filter_where.
Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
---
python/pyspark/sql/classic/dataframe.py | 2 +-
python/pyspark/sql/connect/dataframe.py | 2 +-
python/pyspark/sql/dataframe.py | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/python/pyspark/sql/classic/dataframe.py
b/python/pyspark/sql/classic/dataframe.py
index 23484fcf0051..0dd66a9d8654 100644
--- a/python/pyspark/sql/classic/dataframe.py
+++ b/python/pyspark/sql/classic/dataframe.py
@@ -1787,7 +1787,7 @@ class DataFrame(ParentDataFrame, PandasMapOpsMixin,
PandasConversionMixin):
def inputFiles(self) -> List[str]:
return list(self._jdf.inputFiles())
- def where(self, condition: "ColumnOrName") -> ParentDataFrame:
+ def where(self, condition: Union[Column, str]) -> ParentDataFrame:
return self.filter(condition)
# Two aliases below were added for pandas compatibility many years ago.
diff --git a/python/pyspark/sql/connect/dataframe.py
b/python/pyspark/sql/connect/dataframe.py
index cb37af8868aa..146cfe11bc50 100644
--- a/python/pyspark/sql/connect/dataframe.py
+++ b/python/pyspark/sql/connect/dataframe.py
@@ -1260,7 +1260,7 @@ class DataFrame(ParentDataFrame):
res._cached_schema = self._merge_cached_schema(other)
return res
- def where(self, condition: "ColumnOrName") -> ParentDataFrame:
+ def where(self, condition: Union[Column, str]) -> ParentDataFrame:
if not isinstance(condition, (str, Column)):
raise PySparkTypeError(
errorClass="NOT_COLUMN_OR_STR",
diff --git a/python/pyspark/sql/dataframe.py b/python/pyspark/sql/dataframe.py
index 2179a844b1e5..142034583dbd 100644
--- a/python/pyspark/sql/dataframe.py
+++ b/python/pyspark/sql/dataframe.py
@@ -3351,7 +3351,7 @@ class DataFrame:
...
@dispatch_df_method
- def filter(self, condition: "ColumnOrName") -> "DataFrame":
+ def filter(self, condition: Union[Column, str]) -> "DataFrame":
"""Filters rows using the given condition.
:func:`where` is an alias for :func:`filter`.
@@ -5902,7 +5902,7 @@ class DataFrame:
...
@dispatch_df_method
- def where(self, condition: "ColumnOrName") -> "DataFrame":
+ def where(self, condition: Union[Column, str]) -> "DataFrame":
"""
:func:`where` is an alias for :func:`filter`.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]