This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new f6c5d591 Publish built docs triggered by 
3808306b19f44253c087bd92742d93992ee6522d
f6c5d591 is described below

commit f6c5d5912ebda9e072dc8c5a957b72f384598ffb
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue May 14 02:04:48 2024 +0000

    Publish built docs triggered by 3808306b19f44253c087bd92742d93992ee6522d
---
 _sources/user-guide/configs.md.txt      |  1 +
 _sources/user-guide/installation.md.txt | 14 +++++++++++++-
 searchindex.js                          |  2 +-
 user-guide/configs.html                 | 22 +++++++++++++---------
 user-guide/installation.html            | 12 +++++++++++-
 5 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/_sources/user-guide/configs.md.txt 
b/_sources/user-guide/configs.md.txt
index d75059a9..24f408a0 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -40,6 +40,7 @@ Comet provides the following configuration settings.
 | spark.comet.exec.memoryFraction | The fraction of memory from Comet memory 
overhead that the native memory manager can use for execution. The purpose of 
this config is to set aside memory for untracked data structures, as well as 
imprecise size estimation during memory acquisition. Default value is 0.7. | 
0.7 |
 | spark.comet.exec.shuffle.codec | The codec of Comet native shuffle used to 
compress shuffle data. Only zstd is supported. | zstd |
 | spark.comet.exec.shuffle.enabled | Whether to enable Comet native shuffle. 
By default, this config is false. Note that this requires setting 
'spark.shuffle.manager' to 
'org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager'. 
'spark.shuffle.manager' must be set before starting the Spark application and 
cannot be changed during the application. | false |
+| spark.comet.explainFallback.enabled | When this setting is enabled, Comet 
will provide logging explaining the reason(s) why a query stage cannot be 
executed natively. | false |
 | spark.comet.memory.overhead.factor | Fraction of executor memory to be 
allocated as additional non-heap memory per executor process for Comet. Default 
value is 0.2. | 0.2 |
 | spark.comet.memory.overhead.min | Minimum amount of additional memory to be 
allocated per executor process for Comet, in MiB. | 402653184b |
 | spark.comet.nativeLoadRequired | Whether to require Comet native library to 
load successfully when Comet is enabled. If not, Comet will silently fallback 
to Spark when it fails to load the native lib. Otherwise, an error will be 
thrown and the Spark job will be aborted. | false |
diff --git a/_sources/user-guide/installation.md.txt 
b/_sources/user-guide/installation.md.txt
index b948d50c..e9149019 100644
--- a/_sources/user-guide/installation.md.txt
+++ b/_sources/user-guide/installation.md.txt
@@ -67,7 +67,8 @@ $SPARK_HOME/bin/spark-shell \
     --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
     --conf spark.comet.enabled=true \
     --conf spark.comet.exec.enabled=true \
-    --conf spark.comet.exec.all.enabled=true
+    --conf spark.comet.exec.all.enabled=true \
+    --conf spark.comet.explainFallback.enabled=true
 ```
 
 ### Verify Comet enabled for Spark SQL query
@@ -95,6 +96,17 @@ INFO src/lib.rs: Comet native library initialized
              PushedFilters: [IsNotNull(a), GreaterThan(a,5)], ReadSchema: 
struct<a:int>
 ```
 
+With the configuration `spark.comet.explainFallback.enabled=true`, Comet will 
log any reasons that prevent a plan from
+being executed natively.
+
+```scala
+scala> Seq(1,2,3,4).toDF("a").write.parquet("/tmp/test.parquet")
+WARN CometSparkSessionExtensions$CometExecRule: Comet cannot execute some 
parts of this plan natively because:
+  - LocalTableScan is not supported
+  - WriteFiles is not supported
+  - Execute InsertIntoHadoopFsRelationCommand is not supported
+```
+
 ### Enable Comet shuffle
 
 Comet shuffle feature is disabled by default. To enable it, please add related 
configs:
diff --git a/searchindex.js b/searchindex.js
index bc580ade..17360ede 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"ANSI mode": [[5, "ansi-mode"], [6, 
"ansi-mode"]], "ASF Links": [[4, null]], "Additional Info": [[1, 
"additional-info"]], "After your debugging is done,": [[1, 
"after-your-debugging-is-done"]], "Apache DataFusion Comet": [[4, 
"apache-datafusion-comet"]], "Architecture": [[13, "architecture"]], "Asking 
for Help": [[0, "asking-for-help"]], "Benchmark": [[2, "benchmark"]], "Build & 
Test": [[2, "build-test"]], "Building From Source": [[11, "building-from-source 
[...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"ANSI mode": [[5, "ansi-mode"], [6, 
"ansi-mode"]], "ASF Links": [[4, null]], "Additional Info": [[1, 
"additional-info"]], "After your debugging is done,": [[1, 
"after-your-debugging-is-done"]], "Apache DataFusion Comet": [[4, 
"apache-datafusion-comet"]], "Architecture": [[13, "architecture"]], "Asking 
for Help": [[0, "asking-for-help"]], "Benchmark": [[2, "benchmark"]], "Build & 
Test": [[2, "build-test"]], "Building From Source": [[11, "building-from-source 
[...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 3ee2d74f..01dd6cf7 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -361,39 +361,43 @@ under the License.
 <td><p>Whether to enable Comet native shuffle. By default, this config is 
false. Note that this requires setting ‘spark.shuffle.manager’ to 
‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. 
‘spark.shuffle.manager’ must be set before starting the Spark application and 
cannot be changed during the application.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
+<td><p>When this setting is enabled, Comet will provide logging explaining the 
reason(s) why a query stage cannot be executed natively.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
 <td><p>Fraction of executor memory to be allocated as additional non-heap 
memory per executor process for Comet. Default value is 0.2.</p></td>
 <td><p>0.2</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
 <td><p>Minimum amount of additional memory to be allocated per executor 
process for Comet, in MiB.</p></td>
 <td><p>402653184b</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
 <td><p>Whether to require Comet native library to load successfully when Comet 
is enabled. If not, Comet will silently fallback to Spark when it fails to load 
the native lib. Otherwise, an error will be thrown and the Spark job will be 
aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
 <td><p>Whether to use Java direct byte buffer when reading Parquet. By 
default, this is false</p></td>
 <td><p>false</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.rowToColumnar.supportedOperatorList</p></td>
+<tr 
class="row-even"><td><p>spark.comet.rowToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of row-based operators that will be converted to 
columnar format when ‘spark.comet.rowToColumnar.enabled’ is true</p></td>
 <td><p>Range,InMemoryTableScan</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
 <td><p>Whether to enable Comet scan. When this is turned on, Spark will use 
Comet to read Parquet data source. Note that to enable native vectorized 
execution, both this config and ‘spark.comet.exec.enabled’ need to be enabled. 
By default, this config is true.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan. By default is 
disabled.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if 
spark.comet.scan.preFetch.enabled is enabled. By default it is 2. Note that 
more pre-fetching threads means more memory requirement to store pre-fetched 
row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column to 
decide whether to prefer dictionary encoding when shuffling the column. If the 
ratio is higher than this config, dictionary encoding will be used on shuffling 
string column. This config is effective if it is higher than 1.0. By default, 
this config is 10.0. Note that this config is only used when 
‘spark.comet.columnar.shuffle.enabled’ is true.</p></td>
 <td><p>10.0</p></td>
 </tr>
diff --git a/user-guide/installation.html b/user-guide/installation.html
index c8e00f1f..d8bfddb1 100644
--- a/user-guide/installation.html
+++ b/user-guide/installation.html
@@ -370,7 +370,8 @@ make release PROFILES=&quot;-Pspark-3.4&quot;
     --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
     --conf spark.comet.enabled=true \
     --conf spark.comet.exec.enabled=true \
-    --conf spark.comet.exec.all.enabled=true
+    --conf spark.comet.exec.all.enabled=true \
+    --conf spark.comet.explainFallback.enabled=true
 </pre></div>
 </div>
 <section id="verify-comet-enabled-for-spark-sql-query">
@@ -395,6 +396,15 @@ make release PROFILES=&quot;-Pspark-3.4&quot;
 <span class="w">             </span><span class="nc">PushedFilters</span><span 
class="p">:</span><span class="w"> </span><span class="p">[</span><span 
class="nc">IsNotNull</span><span class="p">(</span><span 
class="n">a</span><span class="p">),</span><span class="w"> </span><span 
class="nc">GreaterThan</span><span class="p">(</span><span 
class="n">a</span><span class="p">,</span><span class="mi">5</span><span 
class="p">)],</span><span class="w"> </span><span class="nc">ReadSchema</span>< 
[...]
 </pre></div>
 </div>
+<p>With the configuration <code class="docutils literal notranslate"><span 
class="pre">spark.comet.explainFallback.enabled=true</span></code>, Comet will 
log any reasons that prevent a plan from
+being executed natively.</p>
+<div class="highlight-scala notranslate"><div 
class="highlight"><pre><span></span><span class="n">scala</span><span 
class="o">&gt;</span><span class="w"> </span><span class="nc">Seq</span><span 
class="p">(</span><span class="mi">1</span><span class="p">,</span><span 
class="mi">2</span><span class="p">,</span><span class="mi">3</span><span 
class="p">,</span><span class="mi">4</span><span class="p">).</span><span 
class="n">toDF</span><span class="p">(</span><span class="s">&quot;a&quot;</s 
[...]
+<span class="nc">WARN</span><span class="w"> </span><span 
class="nc">CometSparkSessionExtensions$CometExecRule</span><span 
class="p">:</span><span class="w"> </span><span class="nc">Comet</span><span 
class="w"> </span><span class="n">cannot</span><span class="w"> </span><span 
class="n">execute</span><span class="w"> </span><span 
class="n">some</span><span class="w"> </span><span class="n">parts</span><span 
class="w"> </span><span class="n">of</span><span class="w"> </span><span 
class="bp [...]
+<span class="w">  </span><span class="o">-</span><span class="w"> </span><span 
class="nc">LocalTableScan</span><span class="w"> </span><span 
class="n">is</span><span class="w"> </span><span class="n">not</span><span 
class="w"> </span><span class="n">supported</span>
+<span class="w">  </span><span class="o">-</span><span class="w"> </span><span 
class="nc">WriteFiles</span><span class="w"> </span><span 
class="n">is</span><span class="w"> </span><span class="n">not</span><span 
class="w"> </span><span class="n">supported</span>
+<span class="w">  </span><span class="o">-</span><span class="w"> </span><span 
class="nc">Execute</span><span class="w"> </span><span 
class="nc">InsertIntoHadoopFsRelationCommand</span><span class="w"> 
</span><span class="n">is</span><span class="w"> </span><span 
class="n">not</span><span class="w"> </span><span class="n">supported</span>
+</pre></div>
+</div>
 </section>
 <section id="enable-comet-shuffle">
 <h3>Enable Comet shuffle<a class="headerlink" href="#enable-comet-shuffle" 
title="Link to this heading">¶</a></h3>

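For reference, the complete spark-shell invocation produced by this change looks like the sketch below. The `--conf` lines come directly from the updated installation guide; the `--jars` line is a hypothetical placeholder (it falls outside the diff hunk above), so substitute the actual path to your Comet jar for your Spark and Scala versions:

```shell
# Launch spark-shell with Comet enabled and fallback explanations turned on.
# NOTE: the jar path below is illustrative only; point it at your built Comet jar.
$SPARK_HOME/bin/spark-shell \
    --jars /path/to/comet-spark-jar-with-dependencies.jar \
    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
    --conf spark.comet.enabled=true \
    --conf spark.comet.exec.enabled=true \
    --conf spark.comet.exec.all.enabled=true \
    --conf spark.comet.explainFallback.enabled=true
```

With `spark.comet.explainFallback.enabled=true`, operators that Comet cannot execute natively (such as `LocalTableScan` or `WriteFiles` in the example above) are reported as WARN log lines rather than silently falling back to Spark.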

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
