This is an automated email from the ASF dual-hosted git repository.
ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new ca175146970a [SPARK-54643][PYTHON][TESTS][FOLLOW-UP] Restore classic only workflows
ca175146970a is described below
commit ca175146970adfc5bb54c914e3392afb859bd648
Author: Ruifeng Zheng <[email protected]>
AuthorDate: Thu Dec 11 20:50:05 2025 +0800
[SPARK-54643][PYTHON][TESTS][FOLLOW-UP] Restore classic only workflows
### What changes were proposed in this pull request?
Restore the classic-only workflows; see the failing run:
https://github.com/apache/spark/actions/runs/20110097373/job/57705084088
### Why are the changes needed?
To fix CI: in the classic-only image, the module-level Spark Connect imports fail before the class-level skip decorator can take effect, so they must be guarded.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
PR builder with
```
default: '{"PYSPARK_IMAGE_TO_TEST": "python-311-classic-only",
"PYTHON_TO_TEST": "python3.11"}'
```
https://github.com/zhengruifeng/spark/actions/runs/20124816930/job/57754435568
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #53440 from zhengruifeng/fix_test_spark_connect.
Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
---
.../pyspark/pipelines/tests/test_spark_connect.py | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/python/pyspark/pipelines/tests/test_spark_connect.py b/python/pyspark/pipelines/tests/test_spark_connect.py
index 3027de46864f..b0cd38f091ce 100644
--- a/python/pyspark/pipelines/tests/test_spark_connect.py
+++ b/python/pyspark/pipelines/tests/test_spark_connect.py
@@ -21,16 +21,6 @@ Tests that run Pipelines against a Spark Connect server.
import unittest
-from pyspark.errors.exceptions.connect import AnalysisException
-from pyspark.pipelines.graph_element_registry import graph_element_registration_context
-from pyspark.pipelines.spark_connect_graph_element_registry import (
- SparkConnectGraphElementRegistry,
-)
-from pyspark.pipelines.spark_connect_pipeline import (
- create_dataflow_graph,
- start_run,
- handle_pipeline_events,
-)
from pyspark import pipelines as dp
from pyspark.testing.connectutils import (
ReusedConnectTestCase,
@@ -38,6 +28,18 @@ from pyspark.testing.connectutils import (
connect_requirement_message,
)
+if should_test_connect:
+ from pyspark.errors.exceptions.connect import AnalysisException
+ from pyspark.pipelines.graph_element_registry import graph_element_registration_context
+ from pyspark.pipelines.spark_connect_graph_element_registry import (
+ SparkConnectGraphElementRegistry,
+ )
+ from pyspark.pipelines.spark_connect_pipeline import (
+ create_dataflow_graph,
+ start_run,
+ handle_pipeline_events,
+ )
+
@unittest.skipIf(not should_test_connect, connect_requirement_message)
class SparkConnectPipelinesTest(ReusedConnectTestCase):
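The diff above applies a common pattern: imports that need an optional dependency are moved under a capability flag, so the test module still imports cleanly in environments (here, the classic-only image) where that dependency is absent, and the `skipIf` decorator can then skip the tests gracefully. A minimal illustrative sketch of the same pattern, using stand-in names (`grpc` as the optional dependency, `ConnectOnlyTest` as the test class) rather than the actual pyspark symbols:

```python
# Sketch of the guarded-import pattern from the diff above.
# The flag, message, and class names are illustrative stand-ins,
# not the real pyspark.testing.connectutils symbols.
import importlib.util
import unittest

# Probe for the optional dependency without importing it at module load.
should_test_connect = importlib.util.find_spec("grpc") is not None
connect_requirement_message = "grpc is not installed"

if should_test_connect:
    # Only reached when the dependency is present, so a classic-only
    # environment never hits an ImportError while collecting this module.
    import grpc  # noqa: F401


@unittest.skipIf(not should_test_connect, connect_requirement_message)
class ConnectOnlyTest(unittest.TestCase):
    def test_optional_dependency_available(self) -> None:
        # The skipIf guard guarantees the import above succeeded.
        self.assertIsNotNone(importlib.util.find_spec("grpc"))
```

The key point is that an unconditional top-level import fails at collection time, before `@unittest.skipIf` is ever evaluated; guarding the import defers the failure mode into a clean skip.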
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]