This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
     new 023f07d845c3 [SPARK-47933][CONNECT][PYTHON][FOLLOW-UP] Remove `pyspark.sql.classic` reference in `pyspark.ml.stat`
023f07d845c3 is described below

commit 023f07d845c304cfb7d231e85e0700807ee4a113
Author: Hyukjin Kwon <gurwls...@apache.org>
AuthorDate: Mon Apr 29 08:42:22 2024 +0900

    [SPARK-47933][CONNECT][PYTHON][FOLLOW-UP] Remove `pyspark.sql.classic` reference in `pyspark.ml.stat`

    ### What changes were proposed in this pull request?

    This PR is a followup of https://github.com/apache/spark/pull/46155 that removes the reference to `_to_seq`, which the `pyspark-connect` package does not have.

    ### Why are the changes needed?

    To recover the CI: https://github.com/apache/spark/actions/runs/8861971303

    ### Does this PR introduce _any_ user-facing change?

    No, the main change has not been released yet.

    ### How was this patch tested?

    Manually tested.

    ### Was this patch authored or co-authored using generative AI tooling?

    No.

    Closes #46262 from HyukjinKwon/SPARK-47933-followup4.
Authored-by: Hyukjin Kwon <gurwls...@apache.org>
Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 python/pyspark/ml/stat.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/python/pyspark/ml/stat.py b/python/pyspark/ml/stat.py
index d6020607aff2..4dcc96190952 100644
--- a/python/pyspark/ml/stat.py
+++ b/python/pyspark/ml/stat.py
@@ -23,7 +23,6 @@
 from pyspark.ml.common import _java2py, _py2java
 from pyspark.ml.linalg import Matrix, Vector
 from pyspark.ml.wrapper import JavaWrapper, _jvm
 from pyspark.sql.column import Column
-from pyspark.sql.classic.column import _to_seq
 from pyspark.sql.dataframe import DataFrame
 from pyspark.sql.functions import lit
@@ -432,6 +431,7 @@ class Summarizer:
         :py:class:`pyspark.ml.stat.SummaryBuilder`
         """
         from pyspark.core.context import SparkContext
+        from pyspark.sql.classic.column import _to_seq

         sc = SparkContext._active_spark_context
         assert sc is not None

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
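The patch works by moving the `_to_seq` import from module level into the method body, so that merely importing `pyspark.ml.stat` no longer fails in environments (like the `pyspark-connect` package) that do not ship `pyspark.sql.classic`; the import is only attempted on the code path that actually needs it. A minimal, self-contained sketch of this deferred-import pattern follows — `optional_backend` and `fast_sum` are hypothetical names standing in for the optional dependency, not Spark APIs:

```python
def summarize(values):
    """Sum values, using an optional accelerated backend when available.

    The import lives inside the function body: if the optional module is
    missing, importing the module that defines summarize() still succeeds,
    and only this call path has to handle the absence.
    """
    try:
        # Deferred import: evaluated at call time, not at module import time.
        import optional_backend  # hypothetical optional dependency
        return optional_backend.fast_sum(values)
    except ImportError:
        # Fallback when the optional backend is not installed.
        return sum(values)


print(summarize([1, 2, 3]))
```

Had the `import optional_backend` line sat at module top level instead, any environment lacking the backend would fail on `import`, which is exactly the CI breakage this follow-up fixes for `pyspark-connect`.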