tsarid commented on code in PR #52585:
URL: https://github.com/apache/spark/pull/52585#discussion_r2489418068


##########
docs/spark-connect-gotchas.md:
##########
@@ -0,0 +1,422 @@
+---
+layout: global
+title: "Eager vs Lazy: Spark Connect vs Spark Classic"
+license: |
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+---
+
+This guide highlights key differences between Spark Connect and Spark Classic in terms of execution and analysis behavior. While both use lazy execution for transformations, Spark Connect also defers analysis, introducing unique considerations such as temporary view handling and UDF evaluation. The guide outlines common gotchas and provides strategies for mitigation.
+
+**When does this matter?** These differences are particularly important when migrating existing code from Spark Classic to Spark Connect, or when writing code that needs to work in both modes. Understanding these distinctions helps avoid unexpected behavior and performance issues.
+
+For an overview of Spark Connect, see [Spark Connect Overview](spark-connect-overview.html).
+
+# Query Execution: Both Lazy
+
+## Spark Classic
+
+In traditional Spark, DataFrame transformations (e.g., `filter`, `limit`) are lazy. This means they are not executed immediately but are recorded in a logical plan. The actual computation is triggered only when an action (e.g., `show()`, `collect()`) is invoked.
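+
+For example, a minimal sketch (assuming `spark` is an existing `SparkSession`):
+
+```python
+# Transformations only build up a logical plan; nothing runs yet.
+df = spark.range(10)            # lazy
+filtered = df.filter("id > 5")  # lazy
+limited = filtered.limit(2)     # lazy
+
+# Only an action triggers the actual computation.
+limited.show()                  # eager: executes the recorded plan
+```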
+
+## Spark Connect
+
+Spark Connect follows a similar lazy evaluation model. Transformations are constructed on the client side and sent as unresolved proto plans to the server. The server then performs the necessary analysis and execution when an action is called.
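+
+The same DataFrame code runs unchanged against a Spark Connect server; only the session construction differs. A minimal sketch, assuming a Spark Connect server is reachable at `sc://localhost:15002` (the endpoint is illustrative):
+
+```python
+from pyspark.sql import SparkSession
+
+# Connect to a Spark Connect server instead of starting Spark locally.
+spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()
+
+# Transformations become unresolved proto plans on the client side.
+df = spark.range(10).filter("id > 5")
+
+# The plan is only sent to the server when an action is invoked.
+df.show()
+```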
+
+## Comparison
+
+Both Spark Classic and Spark Connect follow the same lazy execution model for query execution.
+
+| Aspect                                                                                  | Spark Classic   | Spark Connect   |
+|:----------------------------------------------------------------------------------------|:----------------|:----------------|
+| Transformations: `df.filter(...)`, `df.select(...)`, `df.limit(...)`, etc.              | Lazy execution  | Lazy execution  |
+| SQL queries: <br/> `spark.sql("select …")`                                              | Lazy execution  | Lazy execution  |
+| Actions: `df.collect()`, `df.show()`, etc.                                              | Eager execution | Eager execution |
+| SQL commands: <br/> `spark.sql("insert …")`, <br/> `spark.sql("create …")`, <br/> etc.  | Eager execution | Eager execution |
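+
+Note the distinction in the last two rows: a SQL query only builds a lazy DataFrame, while a SQL command runs eagerly in both modes. A minimal sketch (the table name is illustrative):
+
+```python
+# A SQL query is lazy: this only constructs a DataFrame.
+df = spark.sql("SELECT * FROM range(10)")
+
+# A SQL command is eager: the table exists as soon as sql() returns.
+spark.sql("CREATE TABLE demo_table AS SELECT * FROM range(10)")
+```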
+
+# Schema Analysis: Eager vs. Lazy
+
+## Spark Classic
+
+Traditionally, Spark Classic performs schema analysis eagerly during the logical plan construction phase. This means that when you define transformations, Spark immediately analyzes the DataFrame's schema to ensure all referenced columns and data types are valid.
+
+For example, executing `spark.sql("select 1 as a, 2 as b").filter("c > 1")` will throw an error eagerly, indicating that column `c` cannot be found.
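+
+A minimal sketch of the eager failure (the import path assumes PySpark 3.4+; older versions expose `AnalysisException` from `pyspark.sql.utils`):
+
+```python
+from pyspark.errors import AnalysisException
+
+try:
+    # Fails immediately in Spark Classic: column `c` is not in the schema.
+    df = spark.sql("select 1 as a, 2 as b").filter("c > 1")
+except AnalysisException as e:
+    print("Analysis failed eagerly:", e)
+```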
+
+## Spark Connect
+
+Spark Connect differs from Spark Classic in that the client constructs unresolved proto plans during transformation. When accessing a schema or executing an action, the client sends the unresolved plans to the server via RPC (remote procedure call), and the server then performs the analysis and execution. This design defers schema analysis.
+
+For example, `spark.sql("select 1 as a, 2 as b").filter("c > 1")` does not throw any error, because the unresolved plan stays on the client. The error is thrown only when `df.columns` or `df.show()` sends the unresolved plan to the server for analysis.
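+
+The same sketch under Spark Connect, showing where the error surfaces instead:
+
+```python
+# No error here: the client only records an unresolved plan.
+df = spark.sql("select 1 as a, 2 as b").filter("c > 1")
+
+try:
+    # Schema access triggers an analysis RPC; the server reports
+    # that column `c` cannot be found.
+    df.columns
+except Exception as e:
+    print("Analysis failed lazily:", e)
+```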
+
+## Comparison
+
+Unlike query execution, Spark Classic and Spark Connect differ in when schema analysis occurs.
+
+| Aspect                                                                      | Spark Classic | Spark Connect                                                              |
+|:------------------------------------------------------------------------------|:--------------|:------------------------------------------------------------------------------|
+| Transformations: `df.filter(...)`, `df.select(...)`, `df.limit(...)`, etc.  | Eager         | **Lazy**                                                                   |
+| Schema access: `df.columns`, `df.schema`, `df.isStreaming`, etc.            | Eager         | **Eager** <br/> **Triggers an analysis RPC request, unlike Spark Classic** |
+| Actions: `df.collect()`, `df.show()`, etc.                                  | Eager         | Eager                                                                      |
+| Dependent session state: UDFs, temporary views, configs, etc.               | Eager         | **Lazy** <br/> **Evaluated during execution**                              |
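+
+The last row is a frequent source of surprises: under Spark Connect, dependent session state is resolved when the plan executes, not when the DataFrame is defined. A minimal sketch (the view name is illustrative):
+
+```python
+spark.range(10).createOrReplaceTempView("v")
+df = spark.table("v")
+
+# Replace the view before executing df.
+spark.range(100).createOrReplaceTempView("v")
+
+# Spark Classic: df captured the original view, so this prints 10.
+# Spark Connect: the view is resolved at execution time, so this prints 100.
+print(df.count())
+```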
+
+# Common Gotchas (with Mitigations)
+
+If you are not careful about the difference between lazy and eager analysis, there are some gotchas you can run into.

Review Comment:
   It would be nice to replace "some gotchas" with more specific information about what the reader can find in the remainder of this file (e.g., "four gotchas regarding 1) temporary view names, 2) UDFs with mutable external variables, 3) delayed error detection, and 4) schema access on new dataframes").


