This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new ab678f30f Publish built docs triggered by 2056ace6303c2d8f97b09b59c2ca2538a28edf46
ab678f30f is described below

commit ab678f30f3aba0c82bb5fb1e7dcb416d6decd61c
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue Sep 9 20:09:44 2025 +0000

    Publish built docs triggered by 2056ace6303c2d8f97b09b59c2ca2538a28edf46
---
 _sources/user-guide/latest/iceberg.md.txt | 5 +----
 searchindex.js                            | 2 +-
 user-guide/latest/iceberg.html            | 8 +-------
 3 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/_sources/user-guide/latest/iceberg.md.txt b/_sources/user-guide/latest/iceberg.md.txt
index aa3caa79b..052542f17 100644
--- a/_sources/user-guide/latest/iceberg.md.txt
+++ b/_sources/user-guide/latest/iceberg.md.txt
@@ -80,7 +80,7 @@ $SPARK_HOME/bin/spark-shell \
     --conf spark.sql.catalog.spark_catalog.type=hadoop \
     --conf spark.sql.catalog.spark_catalog.warehouse=/tmp/warehouse \
     --conf spark.plugins=org.apache.spark.CometPlugin \
-    --conf spark.comet.exec.shuffle.enabled=false \
+    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
     --conf spark.sql.iceberg.parquet.reader-type=COMET \
     --conf spark.comet.explainFallback.enabled=true \
     --conf spark.memory.offHeap.enabled=true \
@@ -143,8 +143,5 @@ scala> spark.sql(s"SELECT * from t1").explain()
 ```
 
 ## Known issues
- - We temporarily disable Comet when there are delete files in Iceberg scan, see Iceberg [1.8.1 diff](https://github.com/apache/datafusion-comet/blob/main/dev/diffs/iceberg/1.8.1.diff) and this [PR](https://github.com/apache/iceberg/pull/13793)
-   - Iceberg scan w/ delete files lead to [runtime exceptions](https://github.com/apache/datafusion-comet/issues/2117) and [incorrect results](https://github.com/apache/datafusion-comet/issues/2118)
- - Enabling `CometShuffleManager` leads to [runtime exceptions](https://github.com/apache/datafusion-comet/issues/2086)
  - Spark Runtime Filtering isn't [working](https://github.com/apache/datafusion-comet/issues/2116)
    - You can bypass the issue by either setting `spark.sql.adaptive.enabled=false` or `spark.comet.exec.broadcastExchange.enabled=false`
diff --git a/searchindex.js b/searchindex.js
index 7942b43db..899967107 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], "2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[43, "ansi-mode"]], "ANSI mode": [[17, "ansi-mode"], [30, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[22, null], [35, null], [48, null]],  [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], "2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[43, "ansi-mode"]], "ANSI mode": [[17, "ansi-mode"], [30, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[22, null], [35, null], [48, null]],  [...]
\ No newline at end of file
diff --git a/user-guide/latest/iceberg.html b/user-guide/latest/iceberg.html
index 11ee74cfb..a83fd4c57 100644
--- a/user-guide/latest/iceberg.html
+++ b/user-guide/latest/iceberg.html
@@ -585,7 +585,7 @@ git<span class="w"> </span>apply<span class="w"> </span>../datafusion-comet/dev/
 <span class="w">    </span>--conf<span class="w"> </span>spark.sql.catalog.spark_catalog.type<span class="o">=</span>hadoop<span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> </span>spark.sql.catalog.spark_catalog.warehouse<span class="o">=</span>/tmp/warehouse<span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> </span>spark.plugins<span class="o">=</span>org.apache.spark.CometPlugin<span class="w"> </span><span class="se">\</span>
-<span class="w">    </span>--conf<span class="w"> </span>spark.comet.exec.shuffle.enabled<span class="o">=</span><span class="nb">false</span><span class="w"> </span><span class="se">\</span>
+<span class="w">    </span>--conf<span class="w"> </span>spark.shuffle.manager<span class="o">=</span>org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager<span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> </span>spark.sql.iceberg.parquet.reader-type<span class="o">=</span>COMET<span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> </span>spark.comet.explainFallback.enabled<span class="o">=</span><span class="nb">true</span><span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> </span>spark.memory.offHeap.enabled<span class="o">=</span><span class="nb">true</span><span class="w"> </span><span class="se">\</span>
@@ -643,12 +643,6 @@ git<span class="w"> </span>apply<span class="w"> </span>../datafusion-comet/dev/
 <section id="known-issues">
 <h2>Known issues<a class="headerlink" href="#known-issues" title="Link to this heading">¶</a></h2>
 <ul class="simple">
-<li><p>We temporarily disable Comet when there are delete files in Iceberg scan, see Iceberg <a class="reference external" href="https://github.com/apache/datafusion-comet/blob/main/dev/diffs/iceberg/1.8.1.diff">1.8.1 diff</a> and this <a class="reference external" href="https://github.com/apache/iceberg/pull/13793">PR</a></p>
-<ul>
-<li><p>Iceberg scan w/ delete files lead to <a class="reference external" href="https://github.com/apache/datafusion-comet/issues/2117">runtime exceptions</a> and <a class="reference external" href="https://github.com/apache/datafusion-comet/issues/2118">incorrect results</a></p></li>
-</ul>
-</li>
-<li><p>Enabling <code class="docutils literal notranslate"><span class="pre">CometShuffleManager</span></code> leads to <a class="reference external" href="https://github.com/apache/datafusion-comet/issues/2086">runtime exceptions</a></p></li>
 <li><p>Spark Runtime Filtering isn’t <a class="reference external" href="https://github.com/apache/datafusion-comet/issues/2116">working</a></p>
 <ul>
 <li><p>You can bypass the issue by either setting <code class="docutils literal notranslate"><span class="pre">spark.sql.adaptive.enabled=false</span></code> or <code class="docutils literal notranslate"><span class="pre">spark.comet.exec.broadcastExchange.enabled=false</span></code></p></li>

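For reference, the net effect of the iceberg.md change is to replace `spark.comet.exec.shuffle.enabled=false` with an explicit Comet shuffle manager. Pieced together from the diff context, the updated spark-shell invocation in the guide reads roughly as follows (a sketch assuming `$SPARK_HOME` points at a Spark install with the Comet plugin and Iceberg runtime jars available; the off-heap size value is a placeholder, not part of this diff):

```shell
$SPARK_HOME/bin/spark-shell \
    --conf spark.sql.catalog.spark_catalog.type=hadoop \
    --conf spark.sql.catalog.spark_catalog.warehouse=/tmp/warehouse \
    --conf spark.plugins=org.apache.spark.CometPlugin \
    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
    --conf spark.sql.iceberg.parquet.reader-type=COMET \
    --conf spark.comet.explainFallback.enabled=true \
    --conf spark.memory.offHeap.enabled=true \
    --conf spark.memory.offHeap.size=4g  # placeholder size; Comet requires off-heap memory
```

Note that `spark.memory.offHeap.enabled=true` is required when the Comet shuffle manager is active, and `spark.comet.explainFallback.enabled=true` logs which operators fall back to Spark.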

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@datafusion.apache.org
For additional commands, e-mail: commits-h...@datafusion.apache.org
