This is an automated email from the ASF dual-hosted git repository.

parthc pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/main by this push:
     new 477013e29 doc: Document sql query error propagation (#3651)
477013e29 is described below

commit 477013e29fa7ffc45a0916c5fc5da303ac8592bd
Author: Parth Chandra <[email protected]>
AuthorDate: Tue Mar 10 17:21:00 2026 -0700

    doc: Document sql query error propagation (#3651)
---
 .../contributor-guide/error_pipeline_overview.svg  | 173 +++++++
 .../contributor-guide/query_context_journey.svg    | 130 ++++++
 docs/source/contributor-guide/shim_pattern.svg     |  75 +++
 .../contributor-guide/sql_error_propagation.md     | 511 +++++++++++++++++++++
 4 files changed, 889 insertions(+)

diff --git a/docs/source/contributor-guide/error_pipeline_overview.svg 
b/docs/source/contributor-guide/error_pipeline_overview.svg
new file mode 100644
index 000000000..a0b461438
--- /dev/null
+++ b/docs/source/contributor-guide/error_pipeline_overview.svg
@@ -0,0 +1,173 @@
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 900 760" 
font-family="monospace, sans-serif" font-size="13">
+  <!-- Background -->
+  <rect width="900" height="760" fill="#1e1e2e"/>
+
+  <!-- Title -->
+  <text x="450" y="35" text-anchor="middle" fill="#cdd6f4" font-size="18" 
font-weight="bold">Comet: ANSI SQL Error Propagation Pipeline</text>
+
+  <!-- === LAYER: SPARK / JVM === -->
+  <rect x="20" y="55" width="860" height="200" rx="8" fill="#181825" 
stroke="#89b4fa" stroke-width="1.5" stroke-dasharray="6,3"/>
+  <text x="36" y="74" fill="#89b4fa" font-size="12" font-weight="bold">SPARK / 
JVM LAYER</text>
+
+  <!-- Box: User SQL -->
+  <rect x="40" y="82" width="160" height="50" rx="6" fill="#313244" 
stroke="#cba6f7" stroke-width="1.5"/>
+  <text x="120" y="103" text-anchor="middle" fill="#cba6f7" 
font-weight="bold">User SQL Query</text>
+  <text x="120" y="120" text-anchor="middle" fill="#a6e3a1" 
font-size="11">SELECT a/b FROM t</text>
+
+  <!-- Arrow: SQL → SerPlan -->
+  <line x1="201" y1="107" x2="255" y2="107" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#arrowhead)"/>
+  <text x="228" y="100" text-anchor="middle" fill="#6c7086" 
font-size="10">serialize</text>
+
+  <!-- Box: QueryPlanSerde -->
+  <rect x="256" y="82" width="180" height="50" rx="6" fill="#313244" 
stroke="#89b4fa" stroke-width="1.5"/>
+  <text x="346" y="103" text-anchor="middle" fill="#89b4fa" 
font-weight="bold">QueryPlanSerde.scala</text>
+  <text x="346" y="120" text-anchor="middle" fill="#a6e3a1" 
font-size="11">extracts origin → QueryContext</text>
+
+  <!-- Arrow: SerPlan → Proto -->
+  <line x1="437" y1="107" x2="491" y2="107" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#arrowhead)"/>
+  <text x="464" y="100" text-anchor="middle" fill="#6c7086" 
font-size="10">protobuf</text>
+
+  <!-- Box: Protobuf bytes -->
+  <rect x="492" y="82" width="160" height="50" rx="6" fill="#313244" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="572" y="103" text-anchor="middle" fill="#f38ba8" 
font-weight="bold">Protobuf Bytes</text>
+  <text x="572" y="120" text-anchor="middle" fill="#a6e3a1" 
font-size="11">Expr + expr_id + QueryContext</text>
+
+  <!-- Arrow: Proto → JNI -->
+  <line x1="653" y1="107" x2="707" y2="107" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#arrowhead)"/>
+  <text x="680" y="100" text-anchor="middle" fill="#6c7086" font-size="10">JNI 
call</text>
+
+  <!-- Box: createPlan -->
+  <rect x="708" y="82" width="145" height="50" rx="6" fill="#313244" 
stroke="#89b4fa" stroke-width="1.5"/>
+  <text x="780" y="103" text-anchor="middle" fill="#89b4fa" 
font-weight="bold">Native.createPlan()</text>
+  <text x="780" y="120" text-anchor="middle" fill="#a6e3a1" font-size="11">JNI 
entry point</text>
+
+  <!-- === STEP 7 & 8: CometExecIterator + SparkErrorConverter === -->
+  <!-- Arrow from SparkErrorConverter back up to Exception -->
+  <rect x="40" y="152" width="200" height="50" rx="6" fill="#313244" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="140" y="172" text-anchor="middle" fill="#a6e3a1" 
font-weight="bold">CometExecIterator</text>
+  <text x="140" y="188" text-anchor="middle" fill="#cdd6f4" 
font-size="11">catches CometQueryExecutionException</text>
+
+  <rect x="265" y="152" width="200" height="50" rx="6" fill="#313244" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="365" y="172" text-anchor="middle" fill="#a6e3a1" 
font-weight="bold">SparkErrorConverter</text>
+  <text x="365" y="188" text-anchor="middle" fill="#cdd6f4" 
font-size="11">parses JSON → Spark exception</text>
+
+  <rect x="490" y="152" width="200" height="50" rx="6" fill="#313244" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="590" y="172" text-anchor="middle" fill="#a6e3a1" 
font-weight="bold">ShimSparkErrorConverter</text>
+  <text x="590" y="188" text-anchor="middle" fill="#cdd6f4" 
font-size="11">calls QueryExecutionErrors.*</text>
+
+  <rect x="715" y="152" width="155" height="50" rx="6" fill="#313244" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="792" y="172" text-anchor="middle" fill="#f38ba8" 
font-weight="bold">ArithmeticException</text>
+  <text x="792" y="188" text-anchor="middle" fill="#cdd6f4" 
font-size="11">[DIVIDE_BY_ZERO] + SQL ptr</text>
+
+  <!-- Arrows in return path -->
+  <line x1="240" y1="177" x2="263" y2="177" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#arrowheadGreen)"/>
+  <line x1="465" y1="177" x2="488" y2="177" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#arrowheadGreen)"/>
+  <line x1="690" y1="177" x2="713" y2="177" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#arrowheadGreen)"/>
+
+  <!-- === JNI BOUNDARY === -->
+  <rect x="20" y="265" width="860" height="22" rx="4" fill="#181825" 
stroke="#f38ba8" stroke-width="2"/>
+  <text x="450" y="281" text-anchor="middle" fill="#f38ba8" font-size="12" 
font-weight="bold">──────────────────  JNI BOUNDARY (Java ↔ Rust)  
──────────────────</text>
+
+  <!-- Arrow: Rust error → JNI -->
+  <line x1="450" y1="540" x2="450" y2="285" stroke="#f38ba8" stroke-width="2" 
stroke-dasharray="8,4" marker-end="url(#arrowheadRed)"/>
+  <text x="460" y="420" fill="#f38ba8" font-size="11">throw 
CometQueryExecutionException</text>
+  <text x="460" y="436" fill="#f38ba8" font-size="11">(JSON payload)</text>
+
+  <!-- Arrow: createPlan → Rust planner -->
+  <line x1="780" y1="132" x2="780" y2="285" stroke="#89b4fa" 
stroke-width="1.5" stroke-dasharray="4,3"/>
+  <!-- goes down, continues below -->
+
+  <!-- === LAYER: RUST / NATIVE === -->
+  <rect x="20" y="297" width="860" height="390" rx="8" fill="#181825" 
stroke="#f9e2af" stroke-width="1.5" stroke-dasharray="6,3"/>
+  <text x="36" y="318" fill="#f9e2af" font-size="12" font-weight="bold">RUST / 
NATIVE LAYER  (DataFusion)</text>
+
+  <!-- Box: PhysicalPlanner -->
+  <rect x="40" y="328" width="200" height="70" rx="6" fill="#313244" 
stroke="#89b4fa" stroke-width="1.5"/>
+  <text x="140" y="350" text-anchor="middle" fill="#89b4fa" 
font-weight="bold">PhysicalPlanner</text>
+  <text x="140" y="367" text-anchor="middle" fill="#cdd6f4" 
font-size="11">planner.rs</text>
+  <text x="140" y="384" text-anchor="middle" fill="#cdd6f4" 
font-size="11">deserializes proto, builds plan</text>
+
+  <!-- Box: QueryContextMap -->
+  <rect x="270" y="328" width="200" height="70" rx="6" fill="#313244" 
stroke="#fab387" stroke-width="1.5"/>
+  <text x="370" y="350" text-anchor="middle" fill="#fab387" 
font-weight="bold">QueryContextMap</text>
+  <text x="370" y="367" text-anchor="middle" fill="#cdd6f4" 
font-size="11">query_context.rs</text>
+  <text x="370" y="384" text-anchor="middle" fill="#cdd6f4" 
font-size="11">expr_id → QueryContext registry</text>
+
+  <!-- Arrow: planner → register context -->
+  <line x1="241" y1="363" x2="268" y2="363" stroke="#fab387" 
stroke-width="1.5" marker-end="url(#arrowheadOrange)"/>
+  <text x="255" y="356" text-anchor="middle" fill="#fab387" 
font-size="10">register</text>
+
+  <!-- Box: Native Expressions -->
+  <rect x="500" y="328" width="200" height="70" rx="6" fill="#313244" 
stroke="#cba6f7" stroke-width="1.5"/>
+  <text x="600" y="350" text-anchor="middle" fill="#cba6f7" 
font-weight="bold">Native Expressions</text>
+  <text x="600" y="367" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Cast, CheckOverflow,</text>
+  <text x="600" y="384" text-anchor="middle" fill="#cdd6f4" 
font-size="11">CheckedBinaryExpr, ...</text>
+
+  <!-- Arrow: planner → expressions (with context lookup) -->
+  <line x1="441" y1="363" x2="498" y2="363" stroke="#cba6f7" 
stroke-width="1.5" marker-end="url(#arrowheadPurple)"/>
+  <text x="470" y="356" text-anchor="middle" fill="#cba6f7" 
font-size="10">build + attach ctx</text>
+
+  <!-- Box: SparkError enum -->
+  <rect x="40" y="430" width="200" height="70" rx="6" fill="#313244" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="140" y="452" text-anchor="middle" fill="#f38ba8" 
font-weight="bold">SparkError (enum)</text>
+  <text x="140" y="469" text-anchor="middle" fill="#cdd6f4" 
font-size="11">error.rs — 30+ variants</text>
+  <text x="140" y="485" text-anchor="middle" fill="#a6e3a1" 
font-size="11">DivideByZero, CastInvalidValue...</text>
+
+  <!-- Box: SparkErrorWithContext -->
+  <rect x="270" y="430" width="200" height="70" rx="6" fill="#313244" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="370" y="452" text-anchor="middle" fill="#f38ba8" 
font-weight="bold">SparkErrorWithContext</text>
+  <text x="370" y="469" text-anchor="middle" fill="#cdd6f4" 
font-size="11">error.rs</text>
+  <text x="370" y="485" text-anchor="middle" fill="#cdd6f4" 
font-size="11">SparkError + QueryContext</text>
+
+  <!-- Arrow: SparkError → SparkErrorWithContext -->
+  <line x1="241" y1="465" x2="268" y2="465" stroke="#f38ba8" 
stroke-width="1.5" marker-end="url(#arrowheadRed)"/>
+  <text x="255" y="458" text-anchor="middle" fill="#f38ba8" 
font-size="10">wrap</text>
+
+  <!-- Arrow: expressions emit errors -->
+  <line x1="600" y1="399" x2="600" y2="440" stroke="#f38ba8" 
stroke-width="1.5"/>
+  <line x1="600" y1="440" x2="242" y2="440" stroke="#f38ba8" 
stroke-width="1.5"/>
+  <line x1="242" y1="440" x2="242" y2="432" stroke="#f38ba8" 
stroke-width="1.5" marker-end="url(#arrowheadRed)"/>
+  <text x="500" y="455" fill="#f38ba8" 
font-size="10">Err(SparkError::DivideByZero)</text>
+
+  <!-- Box: throw_exception (JNI boundary handler) -->
+  <rect x="500" y="430" width="200" height="70" rx="6" fill="#313244" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="600" y="452" text-anchor="middle" fill="#f38ba8" 
font-weight="bold">throw_exception()</text>
+  <text x="600" y="469" text-anchor="middle" fill="#cdd6f4" 
font-size="11">errors.rs</text>
+  <text x="600" y="485" text-anchor="middle" fill="#cdd6f4" 
font-size="11">serializes to JSON, calls JNI</text>
+
+  <!-- Arrow: SparkErrorWithContext → throw_exception -->
+  <line x1="471" y1="465" x2="498" y2="465" stroke="#f38ba8" 
stroke-width="1.5" marker-end="url(#arrowheadRed)"/>
+  <text x="485" y="458" text-anchor="middle" fill="#f38ba8" 
font-size="10">.to_json()</text>
+
+  <!-- Box: JSON payload -->
+  <rect x="40" y="530" width="620" height="130" rx="6" fill="#11111b" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="50" y="552" fill="#a6e3a1" font-size="11" font-weight="bold">JSON 
Payload thrown to JVM:</text>
+  <text x="50" y="572" fill="#cdd6f4" font-size="11">{"errorType": 
"DivideByZero", "errorClass": "DIVIDE_BY_ZERO", "params": {},</text>
+  <text x="50" y="590" fill="#cdd6f4" font-size="11"> "context": {"sqlText": 
"SELECT a/b FROM t", "startIndex": 7, "stopIndex": 9,</text>
+  <text x="50" y="608" fill="#cdd6f4" font-size="11">             "line": 1, 
"startPosition": 7},</text>
+  <text x="50" y="626" fill="#cdd6f4" font-size="11"> "summary": "== SQL (line 
1, position 8) ==\nSELECT a/b FROM t\n       ^^^"}</text>
+  <text x="50" y="651" fill="#6c7086" font-size="10">↑ This is the 
getMessage() of CometQueryExecutionException on the Java side</text>
+
+  <!-- Arrow: throw_exception → JSON -->
+  <line x1="600" y1="501" x2="600" y2="528" stroke="#f38ba8" 
stroke-width="1.5"/>
+  <line x1="600" y1="528" x2="350" y2="528" stroke="#f38ba8" 
stroke-width="1.5" marker-end="url(#arrowheadRed)"/>
+
+  <!-- Arrowhead definitions -->
+  <defs>
+    <marker id="arrowhead" markerWidth="8" markerHeight="6" refX="8" refY="3" 
orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#6c7086"/>
+    </marker>
+    <marker id="arrowheadGreen" markerWidth="8" markerHeight="6" refX="8" 
refY="3" orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#a6e3a1"/>
+    </marker>
+    <marker id="arrowheadRed" markerWidth="8" markerHeight="6" refX="8" 
refY="3" orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#f38ba8"/>
+    </marker>
+    <marker id="arrowheadOrange" markerWidth="8" markerHeight="6" refX="8" 
refY="3" orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#fab387"/>
+    </marker>
+    <marker id="arrowheadPurple" markerWidth="8" markerHeight="6" refX="8" 
refY="3" orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#cba6f7"/>
+    </marker>
+  </defs>
+</svg>
\ No newline at end of file
diff --git a/docs/source/contributor-guide/query_context_journey.svg 
b/docs/source/contributor-guide/query_context_journey.svg
new file mode 100644
index 000000000..bdd15359f
--- /dev/null
+++ b/docs/source/contributor-guide/query_context_journey.svg
@@ -0,0 +1,130 @@
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 900 520" 
font-family="monospace, sans-serif" font-size="12">
+  <!-- Background -->
+  <rect width="900" height="520" fill="#1e1e2e"/>
+
+  <!-- Title -->
+  <text x="450" y="30" text-anchor="middle" fill="#cdd6f4" font-size="16" 
font-weight="bold">QueryContext Journey: From SQL Text to Error Pointer</text>
+
+  <!-- === PHASE 1: Spark side === -->
+  <rect x="20" y="48" width="260" height="200" rx="8" fill="#181825" 
stroke="#89b4fa" stroke-width="1.5"/>
+  <text x="150" y="68" text-anchor="middle" fill="#89b4fa" font-size="12" 
font-weight="bold">① Spark Parses SQL</text>
+
+  <text x="35" y="92" fill="#cdd6f4" font-size="11">SQL: SELECT a/b FROM 
t</text>
+  <text x="35" y="110" fill="#6c7086" font-size="11">Spark's parser attaches 
origin:</text>
+  <rect x="35" y="118" width="225" height="115" rx="4" fill="#11111b"/>
+  <text x="45" y="136" fill="#f9e2af" font-size="10">expr.origin:</text>
+  <text x="45" y="152" fill="#a6e3a1" font-size="10">  sqlText    = "SELECT 
a/b FROM t"</text>
+  <text x="45" y="168" fill="#a6e3a1" font-size="10">  startIndex = 7   (char 
'a')</text>
+  <text x="45" y="184" fill="#a6e3a1" font-size="10">  stopIndex  = 9   (char 
'b')</text>
+  <text x="45" y="200" fill="#a6e3a1" font-size="10">  line       = 1</text>
+  <text x="45" y="216" fill="#a6e3a1" font-size="10">  startPos   = 7   
(column)</text>
+
+  <!-- Arrow: Phase 1 → Phase 2 -->
+  <line x1="282" y1="148" x2="318" y2="148" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#arr1)"/>
+  <text x="300" y="140" text-anchor="middle" fill="#6c7086" 
font-size="10">serialize</text>
+
+  <!-- === PHASE 2: Protobuf === -->
+  <rect x="320" y="48" width="240" height="200" rx="8" fill="#181825" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="440" y="68" text-anchor="middle" fill="#f38ba8" font-size="12" 
font-weight="bold">② Protobuf Wire Format</text>
+  <text x="335" y="90" fill="#6c7086" font-size="11">Expr message 
(expr.proto):</text>
+  <rect x="335" y="97" width="210" height="135" rx="4" fill="#11111b"/>
+  <text x="345" y="114" fill="#f9e2af" font-size="10">message Expr {</text>
+  <text x="355" y="130" fill="#cba6f7" font-size="10">  optional uint64 
expr_id = 89;</text>
+  <text x="355" y="146" fill="#cba6f7" font-size="10">  optional 
QueryContext</text>
+  <text x="365" y="162" fill="#cba6f7" font-size="10">    query_context = 
90;</text>
+  <text x="355" y="178" fill="#6c7086" font-size="10">  // ... actual expr 
type ...</text>
+  <text x="345" y="194" fill="#f9e2af" font-size="10">}</text>
+  <text x="345" y="218" fill="#a6e3a1" font-size="10">expr_id=42, sql="SELECT 
a/b.."</text>
+
+  <!-- Arrow: Phase 2 → Phase 3 -->
+  <line x1="562" y1="148" x2="598" y2="148" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#arr1)"/>
+  <text x="580" y="140" text-anchor="middle" fill="#6c7086" 
font-size="10">JNI</text>
+
+  <!-- === PHASE 3: Rust Registry === -->
+  <rect x="600" y="48" width="280" height="200" rx="8" fill="#181825" 
stroke="#fab387" stroke-width="1.5"/>
+  <text x="740" y="68" text-anchor="middle" fill="#fab387" font-size="12" 
font-weight="bold">③ Rust: QueryContextMap</text>
+  <text x="615" y="90" fill="#6c7086" font-size="11">planner.rs registers 
it:</text>
+  <rect x="615" y="97" width="250" height="135" rx="4" fill="#11111b"/>
+  <text x="625" y="114" fill="#f9e2af" font-size="10">registry.register(42, 
QueryContext {</text>
+  <text x="635" y="130" fill="#a6e3a1" font-size="10">  sql_text:      "SELECT 
a/b FROM t"</text>
+  <text x="635" y="146" fill="#a6e3a1" font-size="10">  start_index:   7</text>
+  <text x="635" y="162" fill="#a6e3a1" font-size="10">  stop_index:    9</text>
+  <text x="635" y="178" fill="#a6e3a1" font-size="10">  line:          1</text>
+  <text x="635" y="194" fill="#a6e3a1" font-size="10">  start_position:7</text>
+  <text x="625" y="210" fill="#f9e2af" font-size="10">});</text>
+
+  <!-- === PHASE 4: Error occurs === -->
+  <rect x="20" y="275" width="260" height="160" rx="8" fill="#181825" 
stroke="#f38ba8" stroke-width="1.5"/>
+  <text x="150" y="295" text-anchor="middle" fill="#f38ba8" font-size="12" 
font-weight="bold">④ Error During Execution</text>
+  <text x="35" y="315" fill="#6c7086" font-size="11">CheckedBinaryExpr 
evaluates:</text>
+  <rect x="35" y="322" width="230" height="98" rx="4" fill="#11111b"/>
+  <text x="45" y="340" fill="#f9e2af" 
font-size="10">self.child.evaluate(batch)?;</text>
+  <text x="45" y="356" fill="#6c7086" font-size="10">// ↑ returns 
Err(DivideByZero)</text>
+  <text x="45" y="372" fill="#f9e2af" 
font-size="10">SparkErrorWithContext::with_context(</text>
+  <text x="55" y="388" fill="#a6e3a1" font-size="10">  
SparkError::DivideByZero,</text>
+  <text x="55" y="404" fill="#a6e3a1" font-size="10">  
Arc::clone(&amp;self.query_context),</text>
+  <text x="45" y="420" fill="#f9e2af" font-size="10">);</text>
+
+  <!-- Arrow: Phase 3 → lookup context -->
+  <line x1="600" y1="200" x2="600" y2="350" stroke="#fab387" 
stroke-width="1.5" stroke-dasharray="5,3"/>
+  <line x1="600" y1="350" x2="282" y2="350" stroke="#fab387" 
stroke-width="1.5" stroke-dasharray="5,3" marker-end="url(#arrOrange)"/>
+  <text x="500" y="342" text-anchor="middle" fill="#fab387" 
font-size="10">lookup context for expr_id=42</text>
+
+  <!-- === PHASE 5: JSON === -->
+  <rect x="320" y="275" width="240" height="160" rx="8" fill="#181825" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="440" y="295" text-anchor="middle" fill="#a6e3a1" font-size="12" 
font-weight="bold">⑤ Serialized to JSON</text>
+  <rect x="335" y="304" width="210" height="120" rx="4" fill="#11111b"/>
+  <text x="345" y="321" fill="#a6e3a1" 
font-size="10">{"errorType":"DivideByZero",</text>
+  <text x="345" y="337" fill="#a6e3a1" font-size="10"> 
"errorClass":"DIVIDE_BY_ZERO",</text>
+  <text x="345" y="353" fill="#a6e3a1" font-size="10"> "params":{},</text>
+  <text x="345" y="369" fill="#a6e3a1" font-size="10"> "context":{</text>
+  <text x="355" y="385" fill="#a6e3a1" font-size="10">   "sqlText":"SELECT 
a/b...",</text>
+  <text x="355" y="401" fill="#a6e3a1" font-size="10">   
"startIndex":7,"line":1},</text>
+  <text x="345" y="417" fill="#a6e3a1" font-size="10"> "summary":"== SQL (line 
1)..."}</text>
+
+  <!-- Arrow: Phase 4 → Phase 5 -->
+  <line x1="282" y1="355" x2="318" y2="355" stroke="#f38ba8" 
stroke-width="1.5" marker-end="url(#arr1)"/>
+  <text x="300" y="347" text-anchor="middle" fill="#f38ba8" 
font-size="10">.to_json()</text>
+
+  <!-- === PHASE 6: Result === -->
+  <rect x="600" y="275" width="280" height="160" rx="8" fill="#181825" 
stroke="#cba6f7" stroke-width="1.5"/>
+  <text x="740" y="295" text-anchor="middle" fill="#cba6f7" font-size="12" 
font-weight="bold">⑥ User Sees Proper Error</text>
+  <rect x="615" y="304" width="250" height="120" rx="4" fill="#11111b"/>
+  <text x="625" y="321" fill="#f38ba8" 
font-size="10">SparkArithmeticException:</text>
+  <text x="625" y="337" fill="#cdd6f4" font-size="10">[DIVIDE_BY_ZERO] 
Division by zero.</text>
+  <text x="625" y="353" fill="#cdd6f4" font-size="10">Use `try_divide` to 
tolerate...</text>
+  <text x="625" y="369" fill="#6c7086" font-size="10"></text>
+  <text x="625" y="385" fill="#f9e2af" font-size="10">== SQL (line 1, position 
8) ==</text>
+  <text x="625" y="401" fill="#f9e2af" font-size="10">SELECT a/b FROM t</text>
+  <text x="625" y="417" fill="#f38ba8" font-size="10">       ^^^</text>
+
+  <!-- Arrow: Phase 5 → Phase 6 -->
+  <line x1="562" y1="355" x2="598" y2="355" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#arrGreen)"/>
+  <text x="580" y="347" text-anchor="middle" fill="#a6e3a1" 
font-size="10">convert</text>
+
+  <!-- JNI boundary line -->
+  <line x1="20" y1="262" x2="880" y2="262" stroke="#f38ba8" stroke-width="1" 
stroke-dasharray="8,4"/>
+  <text x="450" y="258" text-anchor="middle" fill="#f38ba8" font-size="10">── 
JNI boundary ──</text>
+
+  <!-- Step number labels below diagram -->
+  <text x="150" y="450" text-anchor="middle" fill="#89b4fa" font-size="11">① 
SQL → origin</text>
+  <text x="150" y="466" text-anchor="middle" fill="#89b4fa" 
font-size="11">(Spark)</text>
+
+  <text x="440" y="450" text-anchor="middle" fill="#f38ba8" font-size="11">②③ 
Proto + Registry</text>
+  <text x="440" y="466" text-anchor="middle" fill="#f38ba8" 
font-size="11">(Serialize → Deserialize)</text>
+
+  <text x="740" y="450" text-anchor="middle" fill="#fab387" font-size="11">④⑤⑥ 
Error → JSON → Exception</text>
+  <text x="740" y="466" text-anchor="middle" fill="#fab387" 
font-size="11">(Rust → JVM)</text>
+
+  <defs>
+    <marker id="arr1" markerWidth="8" markerHeight="6" refX="8" refY="3" 
orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#6c7086"/>
+    </marker>
+    <marker id="arrOrange" markerWidth="8" markerHeight="6" refX="8" refY="3" 
orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#fab387"/>
+    </marker>
+    <marker id="arrGreen" markerWidth="8" markerHeight="6" refX="8" refY="3" 
orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#a6e3a1"/>
+    </marker>
+  </defs>
+</svg>
diff --git a/docs/source/contributor-guide/shim_pattern.svg 
b/docs/source/contributor-guide/shim_pattern.svg
new file mode 100644
index 000000000..0f966c7a6
--- /dev/null
+++ b/docs/source/contributor-guide/shim_pattern.svg
@@ -0,0 +1,75 @@
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 900 440" 
font-family="monospace, sans-serif" font-size="12">
+  <!-- Background -->
+  <rect width="900" height="440" fill="#1e1e2e"/>
+
+  <!-- Title -->
+  <text x="450" y="28" text-anchor="middle" fill="#cdd6f4" font-size="16" 
font-weight="bold">Shim Pattern: Per-Version Spark API Bridging</text>
+
+  <!-- Problem statement -->
+  <text x="450" y="55" text-anchor="middle" fill="#6c7086" 
font-size="12">Spark 3.4, 3.5, and 4.0 all have slightly different 
QueryExecutionErrors APIs.</text>
+  <text x="450" y="72" text-anchor="middle" fill="#6c7086" 
font-size="12">Comet ships a separate ShimSparkErrorConverter implementation 
for each version.</text>
+
+  <!-- === Common code === -->
+  <rect x="280" y="88" width="340" height="80" rx="8" fill="#313244" 
stroke="#89b4fa" stroke-width="2"/>
+  <text x="450" y="110" text-anchor="middle" fill="#89b4fa" 
font-weight="bold">SparkErrorConverter.scala</text>
+  <text x="450" y="128" text-anchor="middle" fill="#cdd6f4" 
font-size="11">(common to all Spark versions)</text>
+  <text x="450" y="146" text-anchor="middle" fill="#a6e3a1" 
font-size="11">Parses JSON → calls convertErrorType()</text>
+
+  <!-- Arrows down to shims -->
+  <line x1="330" y1="168" x2="170" y2="230" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#shimArr)"/>
+  <line x1="450" y1="168" x2="450" y2="230" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#shimArr)"/>
+  <line x1="570" y1="168" x2="730" y2="230" stroke="#6c7086" 
stroke-width="1.5" marker-end="url(#shimArr)"/>
+
+  <text x="225" y="198" text-anchor="middle" fill="#f9e2af" 
font-size="10">spark-3.4</text>
+  <text x="450" y="198" text-anchor="middle" fill="#f9e2af" 
font-size="10">spark-3.5</text>
+  <text x="675" y="198" text-anchor="middle" fill="#f9e2af" 
font-size="10">spark-4.0</text>
+
+  <!-- Shim boxes -->
+  <rect x="40" y="232" width="260" height="80" rx="8" fill="#181825" 
stroke="#f9e2af" stroke-width="1.5"/>
+  <text x="170" y="254" text-anchor="middle" fill="#f9e2af" 
font-weight="bold">ShimSparkErrorConverter</text>
+  <text x="170" y="270" text-anchor="middle" fill="#6c7086" 
font-size="11">spark-3.4 edition</text>
+  <text x="170" y="290" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Uses Spark 3.4 API signatures</text>
+  <text x="170" y="306" text-anchor="middle" fill="#cdd6f4" font-size="11">(no 
SQLQueryContext in some calls)</text>
+
+  <rect x="320" y="232" width="260" height="80" rx="8" fill="#181825" 
stroke="#f9e2af" stroke-width="1.5"/>
+  <text x="450" y="254" text-anchor="middle" fill="#f9e2af" 
font-weight="bold">ShimSparkErrorConverter</text>
+  <text x="450" y="270" text-anchor="middle" fill="#6c7086" 
font-size="11">spark-3.5 edition</text>
+  <text x="450" y="290" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Uses Spark 3.5 API signatures</text>
+  <text x="450" y="306" text-anchor="middle" fill="#cdd6f4" 
font-size="11">(takes SQLQueryContext directly)</text>
+
+  <rect x="600" y="232" width="260" height="80" rx="8" fill="#181825" 
stroke="#f9e2af" stroke-width="1.5"/>
+  <text x="730" y="254" text-anchor="middle" fill="#f9e2af" 
font-weight="bold">ShimSparkErrorConverter</text>
+  <text x="730" y="270" text-anchor="middle" fill="#6c7086" 
font-size="11">spark-4.0 edition</text>
+  <text x="730" y="290" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Uses Spark 4.0 API signatures</text>
+  <text x="730" y="306" text-anchor="middle" fill="#cdd6f4" 
font-size="11">(new functionName param in some)</text>
+
+  <!-- Arrows down to QueryExecutionErrors -->
+  <line x1="170" y1="312" x2="170" y2="360" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#shimArrGreen)"/>
+  <line x1="450" y1="312" x2="450" y2="360" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#shimArrGreen)"/>
+  <line x1="730" y1="312" x2="730" y2="360" stroke="#a6e3a1" 
stroke-width="1.5" marker-end="url(#shimArrGreen)"/>
+
+  <!-- QueryExecutionErrors boxes -->
+  <rect x="40" y="362" width="260" height="55" rx="6" fill="#313244" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="170" y="384" text-anchor="middle" fill="#a6e3a1" 
font-weight="bold">QueryExecutionErrors</text>
+  <text x="170" y="402" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Spark 3.4 internal API</text>
+
+  <rect x="320" y="362" width="260" height="55" rx="6" fill="#313244" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="450" y="384" text-anchor="middle" fill="#a6e3a1" 
font-weight="bold">QueryExecutionErrors</text>
+  <text x="450" y="402" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Spark 3.5 internal API</text>
+
+  <rect x="600" y="362" width="260" height="55" rx="6" fill="#313244" 
stroke="#a6e3a1" stroke-width="1.5"/>
+  <text x="730" y="384" text-anchor="middle" fill="#a6e3a1" 
font-weight="bold">QueryExecutionErrors</text>
+  <text x="730" y="402" text-anchor="middle" fill="#cdd6f4" 
font-size="11">Spark 4.0 internal API</text>
+
+  <!-- Example: diff in BinaryArithmeticOverflow -->
+  <text x="450" y="435" text-anchor="middle" fill="#6c7086" 
font-size="10">Example difference: "BinaryArithmeticOverflow" — Spark 3.x does 
NOT take functionName param, Spark 4.0 does.</text>
+
+  <defs>
+    <marker id="shimArr" markerWidth="8" markerHeight="6" refX="8" refY="3" 
orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#6c7086"/>
+    </marker>
+    <marker id="shimArrGreen" markerWidth="8" markerHeight="6" refX="8" 
refY="3" orient="auto">
+      <polygon points="0 0, 8 3, 0 6" fill="#a6e3a1"/>
+    </marker>
+  </defs>
+</svg>
diff --git a/docs/source/contributor-guide/sql_error_propagation.md 
b/docs/source/contributor-guide/sql_error_propagation.md
new file mode 100644
index 000000000..bafe2fcef
--- /dev/null
+++ b/docs/source/contributor-guide/sql_error_propagation.md
@@ -0,0 +1,511 @@
+# ANSI SQL Error Propagation in Comet
+
+## Overview
+
+Apache Comet is a native query accelerator for Apache Spark. It runs SQL expressions in
+**Rust** (via Apache DataFusion) instead of the JVM, which is typically much faster. But
+there's a catch: when something goes wrong, say a divide-by-zero or a type cast failure,
+the error needs to travel from Rust code all the way back to the Spark/Scala/Java world
+as a proper Spark exception with the right type, the right message, and even a pointer
+to the exact character in the original SQL query where the error happened.
+
+This document explains the end-to-end error-propagation pipeline.
+
+---
+
+## The Big Picture
+
+![Error propagation pipeline overview](./error_pipeline_overview.svg)
+
+```
+SQL Query (Spark/Scala)
+        │
+        │  1. Spark serializes the plan + query context into Protobuf
+        ▼
+  Protobuf bytes ──────────────────────────────────►  JNI boundary
+        │
+        │  2. Rust deserializes the plan and registers query contexts
+        ▼
+   Native execution (DataFusion / Rust)
+        │
+        │  3. An error occurs (e.g. divide by zero)
+        ▼
+   SparkError (Rust enum)
+        │
+        │  4. Error is wrapped with SQL location context
+        ▼
+   SparkErrorWithContext
+        │
+        │  5. Serialized to JSON string
+        ▼
+   JSON string ◄────────────────────────────────────  JNI boundary
+        │
+        │  6. Thrown as CometQueryExecutionException
+        ▼
+   CometExecIterator.scala catches it
+        │
+        │  7. JSON is parsed, proper Spark exception is reconstructed
+        ▼
+   Spark exception (e.g. ArithmeticException with DIVIDE_BY_ZERO errorClass)
+        │
+        │  8. Spark displays it to the user with SQL location pointer
+        ▼
+   User sees:
+   [DIVIDE_BY_ZERO] Division by zero.
+   == SQL (line 1, position 8) ==
+   SELECT a/b FROM t
+          ^^^
+```
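Steps 4–6 can be sketched in plain Rust. This is a hedged, dependency-free illustration of the payload shape only: the `QueryContext` struct, `to_json` function, and its field layout here are simplified stand-ins, not Comet's actual definitions (which live in `error.rs` and use a real serializer).

```rust
// Illustrative sketch: wrap a native error's SQL context and serialize it to
// the JSON payload that crosses the JNI boundary. Names are hypothetical.
struct QueryContext {
    sql_text: String,
    start_index: usize,    // 0-based char offset of expression start (inclusive)
    stop_index: usize,     // 0-based char offset of expression end (inclusive)
    line: usize,           // 1-based line number
    start_position: usize, // 0-based column position
}

// Hand-rolled JSON keeps the sketch self-contained; assumes sql_text needs
// no JSON escaping.
fn to_json(error_type: &str, error_class: &str, ctx: &QueryContext) -> String {
    let context = format!(
        "{{\"sqlText\":\"{}\",\"startIndex\":{},\"stopIndex\":{},\"line\":{},\"startPosition\":{}}}",
        ctx.sql_text, ctx.start_index, ctx.stop_index, ctx.line, ctx.start_position
    );
    format!(
        "{{\"errorType\":\"{}\",\"errorClass\":\"{}\",\"params\":{{}},\"context\":{}}}",
        error_type, error_class, context
    )
}

fn main() {
    let ctx = QueryContext {
        sql_text: "SELECT a/b FROM t".to_string(),
        start_index: 7,
        stop_index: 9,
        line: 1,
        start_position: 7,
    };
    // Prints the same payload shape shown in step 5 above.
    println!("{}", to_json("DivideByZero", "DIVIDE_BY_ZERO", &ctx));
}
```

On the JVM side this string becomes the `getMessage()` of the thrown `CometQueryExecutionException`, which is why it must be fully self-describing.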
+
+---
+
+## Step 1: Spark Serializes Query Context into Protobuf
+
+When Spark compiles a SQL query, it parses it and attaches _origin_ 
information to every
+expression — the line number, column offset, and the full SQL text.
+
+`QueryPlanSerde.scala` is the Scala code that converts Spark's physical 
execution plan into
+a Protobuf binary that gets sent to the Rust side. It extracts origin 
information from each
+expression and encodes it alongside the expression in the Protobuf payload.
+
+### The `extractQueryContext` function
+
+```scala
+// spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala
+
+private def extractQueryContext(expr: Expression): Option[ExprOuterClass.QueryContext] = {
+  val contexts = expr.origin.getQueryContext       // Spark stores context in expr.origin
+  if (contexts != null && contexts.length > 0) {
+    val ctx = contexts(0)
+    ctx match {
+      case sqlCtx: SQLQueryContext =>
+        val builder = ExprOuterClass.QueryContext.newBuilder()
+          .setSqlText(sqlCtx.sqlText.getOrElse(""))              // full SQL text
+          .setStartIndex(sqlCtx.originStartIndex.getOrElse(...)) // char offset of expression start
+          .setStopIndex(sqlCtx.originStopIndex.getOrElse(...))   // char offset of expression end
+          .setLine(sqlCtx.line.getOrElse(0))
+          .setStartPosition(sqlCtx.startPosition.getOrElse(0))
+        // ...
+        Some(builder.build())
+      case _ => None                               // other context kinds carry no SQL location
+    }
+  } else {
+    None
+  }
+}
+```
+
+Then, for **every single expression** converted to Protobuf, a unique numeric ID and the
+context are attached:
+
+```scala
+.map { protoExpr =>
+  val builder = protoExpr.toBuilder
+  builder.setExprId(nextExprId())           // unique ID (monotonically increasing counter)
+  extractQueryContext(expr).foreach { ctx =>
+    builder.setQueryContext(ctx)            // attach the SQL location info
+  }
+  builder.build()
+}
+```
+
+### The Protobuf schema
+
+```protobuf
+// native/proto/src/proto/expr.proto
+
+message Expr {
+  optional uint64 expr_id = 89;             // unique ID for each expression
+  optional QueryContext query_context = 90; // SQL location info
+  // ... actual expression type ...
+}
+
+message QueryContext {
+  string sql_text = 1;      // "SELECT a/b FROM t"
+  int32 start_index = 2;    // 7  (0-based character index of 'a')
+  int32 stop_index = 3;     // 9  (0-based character index of 'b', inclusive)
+  int32 line = 4;           // 1  (1-based line number)
+  int32 start_position = 5; // 7  (0-based column position)
+  optional string object_type = 6; // e.g. "VIEW"
+  optional string object_name = 7; // e.g. "v1"
+}
+```
+
+---
+
+## Step 2: Rust Deserializes the Plan and Registers Query Contexts
+
+On the Rust side, `PhysicalPlanner` in `planner.rs` converts the Protobuf into DataFusion's
+physical plan. A `QueryContextMap` — a global registry — maps expression IDs to their SQL
+context.
+
+### `QueryContextMap` (`native/spark-expr/src/query_context.rs`)
+
+```rust
+pub struct QueryContextMap {
+    contexts: RwLock<HashMap<u64, Arc<QueryContext>>>,
+}
+
+impl QueryContextMap {
+    pub fn register(&self, expr_id: u64, context: QueryContext) { ... }
+    pub fn get(&self, expr_id: u64) -> Option<Arc<QueryContext>> { ... }
+}
+```
+
+This is basically a lookup table: "for expression #42, the SQL context is: text=`SELECT a/b
+FROM t`, characters 7–9, line 1, column 7".
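
The elided method bodies above can be filled in with a few lines of standard-library code. The following is a self-contained sketch of the registry idea (the field set and exact signatures here are simplified assumptions, not Comet's actual code):

```rust
// Simplified, self-contained sketch of the expr_id -> context registry.
// Field set and signatures are assumed for illustration.
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

struct QueryContext {
    sql_text: String,
    start_index: i32,
    stop_index: i32,
}

#[derive(Default)]
struct QueryContextMap {
    contexts: RwLock<HashMap<u64, Arc<QueryContext>>>,
}

impl QueryContextMap {
    fn register(&self, expr_id: u64, context: QueryContext) {
        self.contexts.write().unwrap().insert(expr_id, Arc::new(context));
    }

    fn get(&self, expr_id: u64) -> Option<Arc<QueryContext>> {
        self.contexts.read().unwrap().get(&expr_id).cloned()
    }
}

fn main() {
    let map = QueryContextMap::default();
    map.register(
        42,
        QueryContext {
            sql_text: "SELECT a/b FROM t".to_string(),
            start_index: 7,
            stop_index: 9,
        },
    );
    let ctx = map.get(42).expect("context registered for expr #42");
    assert_eq!(ctx.sql_text, "SELECT a/b FROM t");
    assert_eq!(ctx.stop_index - ctx.start_index + 1, 3); // expression spans 3 chars
    assert!(map.get(99).is_none()); // unknown expr_id has no context
}
```

The `RwLock` allows many concurrent readers during execution while registration (which happens once, at plan-creation time) takes the write lock.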
+
+### The `PhysicalPlanner` registers contexts during plan creation
+
+```rust
+// native/core/src/execution/planner.rs
+
+pub struct PhysicalPlanner {
+    query_context_registry: Arc<QueryContextMap>,
+    // ...
+}
+
+pub(crate) fn create_expr(&self, spark_expr: &Expr, ...) {
+    // 1. If this expression has a query context, register it
+    if let (Some(expr_id), Some(ctx_proto)) =
+        (spark_expr.expr_id, spark_expr.query_context.as_ref()) {
+        let query_ctx = QueryContext::new(
+            ctx_proto.sql_text.clone(),
+            ctx_proto.start_index,
+            ctx_proto.stop_index,
+            ...
+        );
+        self.query_context_registry.register(expr_id, query_ctx);
+    }
+
+    // 2. When building specific expressions (Cast, CheckOverflow, etc.),
+    //    look up the context and pass it to the expression
+    //    (inside the match over the expression variant):
+    ExprStruct::Cast(expr) => {
+        let query_context = spark_expr.expr_id.and_then(|id| {
+            self.query_context_registry.get(id)
+        });
+        Ok(Arc::new(Cast::new(child, datatype, options, spark_expr.expr_id, query_context)))
+    }
+}
+```
+
+---
+
+## Step 3: An Error Occurs During Native Execution
+
+During query execution, a Rust expression might encounter something like division by zero.
+
+### The `SparkError` enum (`native/spark-expr/src/error.rs`)
+
+This enum contains one variant for every kind of error Spark can produce, with exactly the
+same error message format as Spark:
+
+```rust
+pub enum SparkError {
+    #[error("[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor \
+        being 0 and return NULL instead. If necessary set \
+        \"spark.sql.ansi.enabled\" to \"false\" to bypass this error.")]
+    DivideByZero,
+
+    #[error("[CAST_INVALID_INPUT] The value '{value}' of the type \"{from_type}\" \
+        cannot be cast to \"{to_type}\" because it is malformed. ...")]
+    CastInvalidValue { value: String, from_type: String, to_type: String },
+
+    // ... 30+ more variants matching Spark's error codes ...
+}
+```
+
+When a divide-by-zero happens, the arithmetic expression returns:
+
+```rust
+return Err(DataFusionError::External(Box::new(SparkError::DivideByZero)));
+```
+
+---
+
+## Step 4: Error Gets Wrapped with SQL Context
+
+The expression wrappers (`CheckedBinaryExpr`, `CheckOverflow`, `Cast`) catch the
+`SparkError` and attach the SQL context using `SparkErrorWithContext`:
+
+```rust
+// native/core/src/execution/expressions/arithmetic.rs
+// CheckedBinaryExpr wraps arithmetic operations
+
+impl PhysicalExpr for CheckedBinaryExpr {
+    fn evaluate(&self, batch: &RecordBatch) -> Result<...> {
+        match self.child.evaluate(batch) {
+            Err(DataFusionError::External(e)) if self.query_context.is_some() => {
+                if let Some(spark_error) = e.downcast_ref::<SparkError>() {
+                    // Wrap the error with SQL location info
+                    let wrapped = SparkErrorWithContext::with_context(
+                        spark_error.clone(),
+                        Arc::clone(self.query_context.as_ref().unwrap()),
+                    );
+                    return Err(DataFusionError::External(Box::new(wrapped)));
+                }
+                Err(DataFusionError::External(e))
+            }
+            other => other,
+        }
+    }
+}
+```
+
+### `SparkErrorWithContext` (`native/spark-expr/src/error.rs`)
+
+```rust
+pub struct SparkErrorWithContext {
+    pub error: SparkError,                          // the actual error
+    pub context: Option<Arc<QueryContext>>,         // optional SQL location
+}
+```
+
+---
+
+## Step 5: Error Is Serialized to JSON
+
+When DataFusion propagates the error all the way up through the execution engine and it
+reaches the JNI boundary, `throw_exception()` in `errors.rs` is called. It detects the
+`SparkErrorWithContext` type and calls `.to_json()` on it:
+
+```rust
+// native/core/src/errors.rs
+
+fn throw_exception(env: &mut JNIEnv, error: &CometError, ...) {
+    match error {
+        CometError::DataFusion {
+            source: DataFusionError::External(e), ..
+        } => {
+            if let Some(spark_err_ctx) = e.downcast_ref::<SparkErrorWithContext>() {
+                // Has SQL context → throw with JSON payload
+                let json = spark_err_ctx.to_json();
+                env.throw_new("org/apache/comet/exceptions/CometQueryExecutionException", json)
+            } else if let Some(spark_err) = e.downcast_ref::<SparkError>() {
+                // No SQL context → throw with JSON payload (no context field)
+                throw_spark_error_as_json(env, spark_err)
+            }
+        }
+        // ...
+    }
+}
+```
+
+The JSON looks like this for a divide-by-zero in `SELECT a/b FROM t`:
+
+```json
+{
+  "errorType": "DivideByZero",
+  "errorClass": "DIVIDE_BY_ZERO",
+  "params": {},
+  "context": {
+    "sqlText": "SELECT a/b FROM t",
+    "startIndex": 7,
+    "stopIndex": 9,
+    "line": 1,
+    "startPosition": 7,
+    "objectType": null,
+    "objectName": null
+  },
+  "summary": "== SQL (line 1, position 8) ==\nSELECT a/b FROM t\n       ^^^"
+}
+```
+
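The `summary` field in that payload can be derived from the other context fields. Below is a sketch for single-line SQL (a hypothetical helper, not Comet's actual code; it assumes the 0-based `startPosition` is printed 1-based, matching the example above):

```rust
// Hypothetical helper: rebuild the caret "summary" from the context fields.
// Handles single-line SQL only; multi-line SQL would need per-line offsets.
fn summary(sql_text: &str, line: i32, start_position: i32, start_index: usize, stop_index: usize) -> String {
    // start_position is 0-based in the payload but printed 1-based
    let header = format!("== SQL (line {}, position {}) ==", line, start_position + 1);
    // one caret per character of the offending expression (stop_index is inclusive)
    let carets = " ".repeat(start_index) + &"^".repeat(stop_index - start_index + 1);
    format!("{header}\n{sql_text}\n{carets}")
}

fn main() {
    let s = summary("SELECT a/b FROM t", 1, 7, 7, 9);
    assert_eq!(s, "== SQL (line 1, position 8) ==\nSELECT a/b FROM t\n       ^^^");
    println!("{s}");
}
```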
+---
+
+## Step 6: Java Receives `CometQueryExecutionException`
+
+On the Java side, a thin exception class carries the JSON string as its message:
+
+```java
+// common/src/main/java/org/apache/comet/exceptions/CometQueryExecutionException.java
+
+public final class CometQueryExecutionException extends CometNativeException {
+  public CometQueryExecutionException(String jsonMessage) {
+    super(jsonMessage);
+  }
+
+  public boolean isJsonMessage() {
+    String msg = getMessage();
+    return msg != null && msg.trim().startsWith("{") && msg.trim().endsWith("}");
+  }
+}
+```
+
+---
+
+## Step 7: Scala Converts JSON Back to a Real Spark Exception
+
+`CometExecIterator.scala` is the Scala code that drives the native execution. Every time it
+calls into the native engine for the next batch of data, it catches
+`CometQueryExecutionException` and converts it:
+
+```scala
+// spark/src/main/scala/org/apache/comet/CometExecIterator.scala
+
+try {
+  nativeUtil.getNextBatch(...)
+} catch {
+  case e: CometQueryExecutionException =>
+    logError(s"Native execution for task $taskAttemptId failed", e)
+    throw SparkErrorConverter.convertToSparkException(e)   // converts JSON to real exception
+}
+```
+
+### `SparkErrorConverter.scala` parses the JSON
+
+```scala
+// spark/src/main/scala/org/apache/comet/SparkErrorConverter.scala
+
+def convertToSparkException(e: CometQueryExecutionException): Throwable = {
+  val json = parse(e.getMessage)
+  val errorJson = json.extract[ErrorJson]
+
+  // Reconstruct Spark's SQLQueryContext from the embedded context
+  val sparkContext: Array[QueryContext] = errorJson.context match {
+    case Some(ctx) =>
+      Array(SQLQueryContext(
+        sqlText = Some(ctx.sqlText),
+        line = Some(ctx.line),
+        startPosition = Some(ctx.startPosition),
+        originStartIndex = Some(ctx.startIndex),
+        originStopIndex = Some(ctx.stopIndex),
+        originObjectType = ctx.objectType,
+        originObjectName = ctx.objectName))
+    case None => Array.empty
+  }
+
+  // Delegate to version-specific shim
+  convertErrorType(errorJson.errorType, errorClass, params, sparkContext, summary)
+}
+```
+
+### `ShimSparkErrorConverter` calls the real Spark API
+
+Because Spark's `QueryExecutionErrors` API changes between Spark versions (3.4, 3.5, 4.0),
+there is a separate implementation per version (in `spark-3.4/`, `spark-3.5/`, `spark-4.0/`).
+
+![Shim pattern for per-version Spark API bridging](./shim_pattern.svg)
+
+```scala
+// spark/src/main/spark-3.5/org/apache/spark/sql/comet/shims/ShimSparkErrorConverter.scala
+
+def convertErrorType(errorType: String, errorClass: String,
+    params: Map[String, Any], context: Array[QueryContext], summary: String): Option[Throwable] = {
+
+  errorType match {
+    case "DivideByZero" =>
+      Some(QueryExecutionErrors.divideByZeroError(sqlCtx(context)))
+      // This is the REAL Spark method that creates the ArithmeticException
+      // with the SQL context pointer. The error message will include
+      // "== SQL (line 1, position 8) ==" etc.
+
+    case "CastInvalidValue" =>
+      Some(QueryExecutionErrors.castingCauseOverflowError(...))
+
+    // ... all other error types ...
+
+    case _ => None  // fallback to generic SparkException
+  }
+}
+```
+
+The Spark 3.5 and Spark 4.0 shims differ in subtle API details:
+
+```scala
+// 3.5: binaryArithmeticCauseOverflowError does NOT take functionName
+case "BinaryArithmeticOverflow" =>
+  Some(QueryExecutionErrors.binaryArithmeticCauseOverflowError(
+    params("value1").toString.toShort,
+    params("symbol").toString,
+    params("value2").toString.toShort))
+
+// 4.0: the overloaded method takes a functionName parameter
+case "BinaryArithmeticOverflow" =>
+  Some(QueryExecutionErrors.binaryArithmeticCauseOverflowError(
+    params("value1").toString.toShort,
+    params("symbol").toString,
+    params("value2").toString.toShort,
+    params("functionName").toString))   // extra param in 4.0
+```
+
+---
+
+## Step 8: The User Sees a Proper Spark Error
+
+The final exception that propagates out of Spark looks exactly like what native Spark would
+produce for the same error, including the ANSI error code and the SQL pointer:
+
+```
+org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero.
+Use `try_divide` to tolerate divisor being 0 and return NULL instead.
+If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
+
+== SQL (line 1, position 8) ==
+SELECT a/b FROM t
+       ^^^
+```
+
+---
+
+## Key Data Structures: A Summary
+
+| Structure                      | Language | File                                | Purpose                                             |
+| ------------------------------ | -------- | ----------------------------------- | --------------------------------------------------- |
+| `QueryContext` (proto)         | Protobuf | `expr.proto`                        | Wire format for SQL location info                   |
+| `QueryContext` (Rust)          | Rust     | `query_context.rs`                  | In-memory SQL location info                         |
+| `QueryContextMap`              | Rust     | `query_context.rs`                  | Registry: expr_id → QueryContext                    |
+| `SparkError`                   | Rust     | `error.rs`                          | Typed Rust enum matching all Spark error variants   |
+| `SparkErrorWithContext`        | Rust     | `error.rs`                          | SparkError + optional QueryContext                  |
+| `CometQueryExecutionException` | Java     | `CometQueryExecutionException.java` | JNI transport: carries JSON string                  |
+| `SparkErrorConverter`          | Scala    | `SparkErrorConverter.scala`         | Parses JSON, creates real Spark exception           |
+| `ShimSparkErrorConverter`      | Scala    | `ShimSparkErrorConverter.scala`     | Per-Spark-version calls to `QueryExecutionErrors.*` |
+
+---
+
+## Why JSON? Why Not Throw the Right Exception Directly?
+
+JNI does not support throwing arbitrary Java exception subclasses from Rust directly — you
+can only provide a class name and a string message. The class name is fixed (always
+`CometQueryExecutionException`), but the string payload can carry any structured data.
+
+JSON was chosen because:
+
+1. It is self-describing — the receiver can parse it without knowing the structure in advance.
+2. It is easy to add new fields without breaking old parsers.
+3. It maps cleanly to Scala's case class extraction (`json.extract[ErrorJson]`).
+4. All the typed information (error class, parameters, SQL context) can be round-tripped
+   perfectly.
+
+The alternative — throwing different Java exception classes from Rust — would require a
+separate JNI throw path for each of the 30+ error types, which would be much harder to
+maintain.
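
To make the tradeoff concrete, here is a hand-rolled sketch of flattening a typed error into one JSON string that fits in a single JNI exception message (illustrative only: no string escaping, and not Comet's actual serializer):

```rust
// Illustrative: a structured error flattened into a single JSON string that can
// cross JNI as the exception message. Real code must escape quotes in sql_text.
fn to_json(error_type: &str, error_class: &str, sql_text: &str) -> String {
    format!(
        "{{\"errorType\":\"{}\",\"errorClass\":\"{}\",\"context\":{{\"sqlText\":\"{}\"}}}}",
        error_type, error_class, sql_text
    )
}

fn main() {
    let json = to_json("DivideByZero", "DIVIDE_BY_ZERO", "SELECT a/b FROM t");
    // the same cheap shape check the Java side's isJsonMessage() performs
    assert!(json.starts_with('{') && json.ends_with('}'));
    assert!(json.contains("\"errorClass\":\"DIVIDE_BY_ZERO\""));
}
```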
+
+---
+
+## Why `SparkError` Has Its Own `error_class()` Method
+
+Each `SparkError` variant knows its own ANSI error class code (e.g. `"DIVIDE_BY_ZERO"`,
+`"CAST_INVALID_INPUT"`). This is used in two places:
+
+- In the JSON payload's `"errorClass"` field (so the Java side can pass it to
+  `SparkException(errorClass = ...)` as a fallback)
+- In the legacy `exception_class()` method that maps to the right Java exception class (e.g.
+  `"java/lang/ArithmeticException"`)
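
A minimal sketch of the two mappings each variant carries (the variant list and the exception-class choices below are illustrative, not Comet's exact tables):

```rust
// Sketch: each error variant carries both an ANSI error class code and a
// legacy JNI class name. Variants and mappings here are simplified examples.
enum SparkError {
    DivideByZero,
    ArithmeticOverflow,
}

impl SparkError {
    // ANSI error class, used in the JSON payload's "errorClass" field
    fn error_class(&self) -> &'static str {
        match self {
            SparkError::DivideByZero => "DIVIDE_BY_ZERO",
            SparkError::ArithmeticOverflow => "ARITHMETIC_OVERFLOW",
        }
    }

    // legacy mapping to a JNI class name for direct throws
    fn exception_class(&self) -> &'static str {
        match self {
            SparkError::DivideByZero | SparkError::ArithmeticOverflow => {
                "java/lang/ArithmeticException"
            }
        }
    }
}

fn main() {
    assert_eq!(SparkError::DivideByZero.error_class(), "DIVIDE_BY_ZERO");
    assert_eq!(
        SparkError::ArithmeticOverflow.exception_class(),
        "java/lang/ArithmeticException"
    );
}
```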
+
+---
+
+## Diagrams
+
+### End-to-End Pipeline
+
+![Error propagation pipeline overview](./error_pipeline_overview.svg)
+
+### QueryContext Journey: From SQL Text to Error Pointer
+
+![QueryContext journey from SQL parser to error message](./query_context_journey.svg)
+
+### Shim Pattern: Per-Version Spark API Bridging
+
+![Shim pattern for per-version Spark API bridging](./shim_pattern.svg)
+
+---
+
+_This document and its diagrams were written by [Claude](https://claude.ai) (Anthropic)._

