zhengruifeng opened a new pull request, #41986:
URL: https://github.com/apache/spark/pull/41986

   ### What changes were proposed in this pull request?
   
   `SparkSession.sql` in Connect currently returns only an unresolved plan. However, when the query references a temp view, the server should hold the analyzed plan, since the temp view may be dropped later.
   
   ### Why are the changes needed?
   
   It is a common pattern to:
   
   1. create temp views;
   2. reference the temp views in `SparkSession.sql` and get a result dataframe;
   3. drop the temp views;
   4. return the dataframe.
   
   
   for example:
   1. in `sql_formatter`: https://github.com/apache/spark/blob/9aa42a970c4bd8e54603b1795a0f449bd556b11b/python/pyspark/sql/sql_formatter.py#L67-L85
   2. in `mllib`: https://github.com/apache/spark/blob/d679dabdd1b5ad04b8c7deb1c06ce886a154a928/mllib/src/main/scala/org/apache/spark/ml/feature/SQLTransformer.scala#L70-L75
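   The failure mode above can be illustrated with a minimal toy model, outside of Spark: if the dataframe holds only the view name (an "unresolved plan") and looks it up at execution time, dropping the view breaks it; if it captures the resolved data up front (analogous to holding the analyzed plan server-side), it keeps working. All names here (`LazyFrame`, `EagerFrame`, the dict-based catalog) are illustrative, not Spark APIs.

   ```python
   # Toy catalog: view name -> rows. Stands in for the session's temp views.
   catalog = {}

   def create_temp_view(name, rows):
       catalog[name] = rows

   def drop_temp_view(name):
       return catalog.pop(name, None) is not None

   class LazyFrame:
       """Holds only the view name and resolves it at execution time,
       like an unresolved plan re-analyzed on each execute."""
       def __init__(self, view):
           self.view = view

       def collect(self):
           if self.view not in catalog:
               raise LookupError(f"TABLE_OR_VIEW_NOT_FOUND: {self.view}")
           return list(catalog[self.view])

   class EagerFrame:
       """Resolves the view when the query is built, analogous to
       holding the analyzed plan on the server side."""
       def __init__(self, view):
           self.rows = list(catalog[view])  # resolved eagerly, here and now

       def collect(self):
           return list(self.rows)

   create_temp_view("t1", [0, 1, 2])
   lazy = LazyFrame("t1")
   eager = EagerFrame("t1")
   drop_temp_view("t1")

   print(eager.collect())          # still works after the drop
   try:
       lazy.collect()
   except LookupError as e:
       print("lazy failed:", e)    # fails, mirroring the error below
   ```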
   
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   yes
   
   before:
   ```
   In [1]: df = spark.range(0, 10)
   
   In [2]: df.createOrReplaceTempView("t1")
   
   In [3]: df2 = spark.sql("select * from t1")
   
   In [4]: spark.catalog.dropTempView("t1")
   Out[4]: True
   
   In [5]: df2.show()
   23/07/13 18:33:08 ERROR SparkConnectService: Error during: execute. UserId: 
ruifeng.zheng. SessionId: 3ce414b0-c99e-450d-ba11-c0f5fcb2daf3.
   org.apache.spark.sql.AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table 
or view `t1` cannot be found. Verify the spelling and correctness of the schema 
and catalog.
   If you did not qualify the name with a schema, verify the current_schema() 
output, or qualify the name with the correct schema and catalog.
   To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF 
EXISTS.; line 1 pos 14;
   'Project [*]
   +- 'UnresolvedRelation [t1], [], false
   ```
   
   after:
   ```
   In [1]: df = spark.range(0, 10)
   
   In [2]: df.createOrReplaceTempView("t1")
   
   In [3]: df2 = spark.sql("select * from t1")
   
   In [4]: spark.catalog.dropTempView("t1")
   Out[4]: True
   
   In [5]: df2.show()
   +---+
   | id|
   +---+
   |  0|
   |  1|
   |  2|
   |  3|
   |  4|
   |  5|
   |  6|
   |  7|
   |  8|
   |  9|
   +---+
   ```
   
   
   ### How was this patch tested?
   Added a unit test.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

