Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20803#discussion_r174600066
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala 
---
    @@ -635,6 +637,7 @@ class SparkSession private(
        * @since 2.0.0
        */
       def sql(sqlText: String): DataFrame = {
    +    SQLExecution.setSqlText(substitutor.substitute(sqlText))
    --- End diff ---
    
    I think the most difficult part is how to connect the SQL text to the execution. I don't think the current approach works, e.g.
    ```
    val df = spark.sql("xxxxx")
    spark.range(10).count()
    ```
    Here you set the SQL text for the next execution, but the next execution may not happen on this dataframe.
    
    I think the SQL text should belong to a DataFrame, and executions on this dataframe should show the SQL text. e.g.
    ```
    val df = spark.sql("xxxxxx")
    df.collect() // this should show sql text on the UI
    df.count() // shall we show the sql text?
    df.show() // this adds a limit on top of the query plan, but ideally we should still show the sql text.
    df.filter(...).collect() // how about this?
    ```
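    To make the idea concrete, here is a standalone sketch (not Spark's actual API; `MiniDF` and its fields are invented names) of carrying the SQL text as a field on the dataframe, so each execution can decide whether it still represents the original query:
    ```scala
    // Hypothetical model of "SQL text belongs to the DataFrame".
    // Spark's real DataFrame/QueryExecution classes differ; this only
    // illustrates how the text could travel with the plan.
    case class MiniDF(plan: String, sqlText: Option[String]) {
      // collect() executes the plan as-is, so showing the original text is safe.
      def collect(): Option[String] = sqlText
      // filter() builds a new plan on top of the old one; whether the derived
      // frame should still claim the original SQL text is the open question.
      def filter(cond: String): MiniDF =
        MiniDF(s"Filter($cond, $plan)", sqlText)
    }

    object Demo extends App {
      val df = MiniDF("Parsed(xxxxx)", Some("select ..."))
      println(df.collect())                 // Some(select ...)
      println(df.filter("a > 1").sqlText)   // same text on the derived frame?
    }
    ```
    In this model `collect()`/`count()` clearly map to the text, while `show()` and derived frames like `df.filter(...)` are the ambiguous cases listed above.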


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org

Reply via email to