LuciferYang commented on code in PR #53106:
URL: https://github.com/apache/spark/pull/53106#discussion_r2540061004


##########
sql/connect/server/src/test/scala/org/apache/spark/sql/connect/pipelines/PythonPipelineSuite.scala:
##########
@@ -1111,4 +1104,13 @@ class PythonPipelineSuite
         |  return spark.range(5)
         |""".stripMargin)
   }
+
+  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit
+      pos: Position): Unit = {
+    if (PythonTestDepsChecker.isConnectDepsAvailable) {

Review Comment:
   With this PR, when a required pyconnect dependency is missing (here, `zstandard`), the whole suite is reported as ignored instead of aborting:
   
   
   ```
   WARNING: Using incubator modules: jdk.incubator.vector
   Traceback (most recent call last):
     File "/Users/yangjie01/SourceCode/git/spark-mine-sbt/python/pyspark/sql/connect/utils.py", line 105, in require_minimum_zstandard_version
       import zstandard  # noqa
       ^^^^^^^^^^^^^^^^
   ModuleNotFoundError: No module named 'zstandard'
   
   The above exception was the direct cause of the following exception:
   
   Traceback (most recent call last):
     File "<string>", line 1, in <module>
     File "/Users/yangjie01/SourceCode/git/spark-mine-sbt/python/pyspark/sql/connect/utils.py", line 40, in check_dependencies
       require_minimum_zstandard_version()
     File "/Users/yangjie01/SourceCode/git/spark-mine-sbt/python/pyspark/sql/connect/utils.py", line 107, in require_minimum_zstandard_version
       raise PySparkImportError(
   pyspark.errors.exceptions.base.PySparkImportError: [PACKAGE_NOT_INSTALLED] zstandard >= 0.25.0 must be installed; however, it was not found.
   [info] PythonPipelineSuite:
   [info] - basic !!! IGNORED !!!
   [info] - failed flow progress event has correct python source code location !!! IGNORED !!!
   [info] - flow progress events have correct python source code location !!! IGNORED !!!
   [info] - basic with inverted topological order !!! IGNORED !!!
   [info] - flows !!! IGNORED !!!
   [info] - external sink !!! IGNORED !!!
   [info] - referencing internal datasets !!! IGNORED !!!
   [info] - referencing external datasets !!! IGNORED !!!
   [info] - referencing internal datasets failed !!! IGNORED !!!
   [info] - referencing external datasets failed !!! IGNORED !!!
   [info] - reading external datasets outside query function works !!! IGNORED !!!
   [info] - reading internal datasets outside query function that don't trigger eager analysis or execution !!! IGNORED !!!
   [info] - reading internal datasets outside query function that trigger eager analysis or execution will fail (spark.sql("SELECT * FROM src")) !!! IGNORED !!!
   [info] - reading internal datasets outside query function that trigger eager analysis or execution will fail (spark.read.table("src").collect()) !!! IGNORED !!!
   [info] - create dataset with the same name will fail !!! IGNORED !!!
   [info] - create datasets with fully/partially qualified names !!! IGNORED !!!
   [info] - create datasets with three part names !!! IGNORED !!!
   [info] - temporary views works !!! IGNORED !!!
   [info] - create named flow with multipart name will fail !!! IGNORED !!!
   [info] - create flow with multipart target and no explicit name succeeds !!! IGNORED !!!
   [info] - create named flow with multipart target succeeds !!! IGNORED !!!
   [info] - groupby and rollup works with internal datasets, referencing with (col, str) !!! IGNORED !!!
   [info] - MV/ST with partition columns works !!! IGNORED !!!
   [info] - create pipeline without table will throw RUN_EMPTY_PIPELINE exception !!! IGNORED !!!
   [info] - create pipeline with only temp view will throw RUN_EMPTY_PIPELINE exception !!! IGNORED !!!
   [info] - create pipeline with only flow will throw RUN_EMPTY_PIPELINE exception !!! IGNORED !!!
   [info] - table with string schema !!! IGNORED !!!
   [info] - table with StructType schema !!! IGNORED !!!
   [info] - string schema validation error - schema mismatch !!! IGNORED !!!
   [info] - StructType schema validation error - schema mismatch !!! IGNORED !!!
   [info] - empty cluster_by list should work and create table with no clustering !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (SET CATALOG some_catalog) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (USE SCHEMA some_schema) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (SET `test_conf` = `true`) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (CREATE TABLE some_table (id INT)) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (CREATE VIEW some_view AS SELECT * FROM some_table) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (INSERT INTO some_table VALUES (1)) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (ALTER TABLE some_table RENAME TO some_new_table) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (CREATE NAMESPACE some_namespace) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (DROP VIEW some_view) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (CREATE MATERIALIZED VIEW some_view AS SELECT * FROM some_table) !!! IGNORED !!!
   [info] - Unsupported SQL command outside query function should result in a failure (CREATE STREAMING TABLE some_table AS SELECT * FROM some_table) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (SET CATALOG some_catalog) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (USE SCHEMA some_schema) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (SET `test_conf` = `true`) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (CREATE TABLE some_table (id INT)) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (CREATE VIEW some_view AS SELECT * FROM some_table) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (INSERT INTO some_table VALUES (1)) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (ALTER TABLE some_table RENAME TO some_new_table) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (CREATE NAMESPACE some_namespace) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (DROP VIEW some_view) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (CREATE MATERIALIZED VIEW some_view AS SELECT * FROM some_table) !!! IGNORED !!!
   [info] - Unsupported SQL command inside query function should result in a failure (CREATE STREAMING TABLE some_table AS SELECT * FROM some_table) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (DESCRIBE TABLE spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW TABLES) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW TBLPROPERTIES spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW NAMESPACES) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW COLUMNS FROM spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW FUNCTIONS) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW VIEWS) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW CATALOGS) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SHOW CREATE TABLE spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SELECT * FROM RANGE(5)) !!! IGNORED !!!
   [info] - Supported SQL command outside query function should work (SELECT * FROM spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (DESCRIBE TABLE spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW TABLES) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW TBLPROPERTIES spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW NAMESPACES) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW COLUMNS FROM spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW FUNCTIONS) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW VIEWS) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW CATALOGS) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SHOW CREATE TABLE spark_catalog.default.src) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SELECT * FROM RANGE(5)) !!! IGNORED !!!
   [info] - Supported SQL command inside query function should work (SELECT * FROM spark_catalog.default.src) !!! IGNORED !!!
   [info] Run completed in 3 seconds, 595 milliseconds.
   [info] Total number of tests run: 0
   [info] Suites: completed 1, aborted 0
   [info] Tests: succeeded 0, failed 0, canceled 0, ignored 75, pending 0
   [info] No tests were executed.
   [success] Total time: 190 s (0:03:10.0), completed Nov 19, 2025, 8:40:22 AM
   yangjie01@localhost spark-mine-sbt % build/sbt clean "connect/testOnly org.apache.spark.sql.connect.pipelines.PythonPipelineSuite"
   ```
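   
   For context, the skip-instead-of-abort behavior above comes from gating every test registration on the dependency check. A minimal sketch of the full pattern, assuming ScalaTest's standard `test`/`ignore` signatures (only the guard is visible in the hunk above, so the `else` branch here is an assumption, not a quote of the PR):
   
   ```scala
   import org.scalactic.source.Position
   import org.scalatest.Tag
   
   // Sketch only: register each test normally when the Python Connect deps
   // (e.g. zstandard) can be imported; otherwise register it as ignored so the
   // suite reports "!!! IGNORED !!!" per test instead of aborting on startup.
   override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit
       pos: Position): Unit = {
     if (PythonTestDepsChecker.isConnectDepsAvailable) {
       super.test(testName, testTags: _*)(testFun)
     } else {
       super.ignore(testName, testTags: _*)(testFun)
     }
   }
   ```
   
   That would match the run above: `Suites: completed 1, aborted 0` and `ignored 75`, rather than a suite-level initialization failure.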



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

