This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 388280a0 Publish built docs triggered by 
cb3e9775661ca09dac5f5fb86154ae1096d6d4a2
388280a0 is described below

commit 388280a0bc386de36a0837c6635da789753cd117
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Mon Oct 21 19:44:08 2024 +0000

    Publish built docs triggered by cb3e9775661ca09dac5f5fb86154ae1096d6d4a2
---
 _sources/contributor-guide/benchmarking.md.txt  |  1 +
 _sources/user-guide/configs.md.txt              |  1 +
 _sources/user-guide/tuning.md.txt               | 12 +++++
 contributor-guide/adding_a_new_expression.html  |  2 +-
 contributor-guide/benchmark-results/tpc-ds.html |  2 +-
 contributor-guide/benchmark-results/tpc-h.html  |  2 +-
 contributor-guide/benchmarking.html             |  3 +-
 contributor-guide/contributing.html             |  2 +-
 contributor-guide/debugging.html                |  2 +-
 contributor-guide/development.html              |  2 +-
 contributor-guide/plugin_overview.html          |  2 +-
 contributor-guide/profiling_native_code.html    |  2 +-
 contributor-guide/spark-sql-tests.html          |  2 +-
 genindex.html                                   |  2 +-
 index.html                                      |  2 +-
 search.html                                     |  2 +-
 searchindex.js                                  |  2 +-
 user-guide/compatibility.html                   |  2 +-
 user-guide/configs.html                         | 60 +++++++++++++------------
 user-guide/datasources.html                     |  2 +-
 user-guide/datatypes.html                       |  2 +-
 user-guide/expressions.html                     |  2 +-
 user-guide/installation.html                    |  2 +-
 user-guide/kubernetes.html                      |  2 +-
 user-guide/operators.html                       |  2 +-
 user-guide/overview.html                        |  2 +-
 user-guide/source.html                          |  2 +-
 user-guide/tuning.html                          | 17 ++++++-
 28 files changed, 86 insertions(+), 52 deletions(-)

diff --git a/_sources/contributor-guide/benchmarking.md.txt 
b/_sources/contributor-guide/benchmarking.md.txt
index 456cacef..8c8d53e6 100644
--- a/_sources/contributor-guide/benchmarking.md.txt
+++ b/_sources/contributor-guide/benchmarking.md.txt
@@ -64,6 +64,7 @@ $SPARK_HOME/bin/spark-submit \
     --conf spark.executor.extraClassPath=$COMET_JAR \
     --conf spark.plugins=org.apache.spark.CometPlugin \
     --conf spark.comet.cast.allowIncompatible=true \
+    --conf spark.comet.exec.replaceSortMergeJoin=true \
     --conf spark.comet.exec.shuffle.enabled=true \
     --conf 
spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager
 \
     tpcbench.py \
diff --git a/_sources/user-guide/configs.md.txt 
b/_sources/user-guide/configs.md.txt
index f7ef1d55..1618a034 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -50,6 +50,7 @@ Comet provides the following configuration settings.
 | spark.comet.exec.localLimit.enabled | Whether to enable localLimit by 
default. | true |
 | spark.comet.exec.memoryFraction | The fraction of memory from Comet memory 
overhead that the native memory manager can use for execution. The purpose of 
this config is to set aside memory for untracked data structures, as well as 
imprecise size estimation during memory acquisition. Default value is 0.7. | 
0.7 |
 | spark.comet.exec.project.enabled | Whether to enable project by default. | 
true |
+| spark.comet.exec.replaceSortMergeJoin | Experimental feature to force Spark 
to replace SortMergeJoin with ShuffledHashJoin for improved performance. This 
feature is not stable yet. For more information, refer to the Comet Tuning 
Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
 | spark.comet.exec.shuffle.codec | The codec of Comet native shuffle used to 
compress shuffle data. Only zstd is supported. | zstd |
 | spark.comet.exec.shuffle.enabled | Whether to enable Comet native shuffle. 
Note that this requires setting 'spark.shuffle.manager' to 
'org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager'. 
'spark.shuffle.manager' must be set before starting the Spark application and 
cannot be changed during the application. | true |
 | spark.comet.exec.sort.enabled | Whether to enable sort by default. | true |
diff --git a/_sources/user-guide/tuning.md.txt 
b/_sources/user-guide/tuning.md.txt
index 2baced09..30ada4c9 100644
--- a/_sources/user-guide/tuning.md.txt
+++ b/_sources/user-guide/tuning.md.txt
@@ -78,6 +78,18 @@ Note that there is currently a known issue where this will 
be inaccurate when us
 does not take executor concurrency into account. The tracking issue for this is
 https://github.com/apache/datafusion-comet/issues/949.
 
+## Optimizing Joins
+
+Spark often chooses `SortMergeJoin` over `ShuffledHashJoin` for stability 
reasons. If the build-side of a
+`ShuffledHashJoin` is very large then it could lead to OOM in Spark.
+
+Vectorized query engines tend to perform better with `ShuffledHashJoin`, so 
for best performance it is often preferable
+to configure Comet to convert `SortMergeJoin` to `ShuffledHashJoin`. Comet 
does not yet provide spill-to-disk for
+`ShuffledHashJoin` so this could result in OOM. Also, `SortMergeJoin` may 
still be faster in some cases. It is best
+to test with both for your specific workloads.
+
+To configure Comet to convert `SortMergeJoin` to `ShuffledHashJoin`, set 
`spark.comet.exec.replaceSortMergeJoin=true`.
+
 ## Shuffle
 
 Comet provides accelerated shuffle implementations that can be used to improve 
the performance of your queries.
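
As context for the hunk above, the new `spark.comet.exec.replaceSortMergeJoin` option can be passed per-job via `--conf` alongside the other Comet settings shown in the benchmarking diff. A minimal sketch follows; the JAR path and application script are placeholders, not values from this commit:

```shell
# Hypothetical invocation; adjust COMET_JAR and the application for your setup.
COMET_JAR=/path/to/comet-spark.jar

$SPARK_HOME/bin/spark-submit \
    --conf spark.driver.extraClassPath=$COMET_JAR \
    --conf spark.executor.extraClassPath=$COMET_JAR \
    --conf spark.plugins=org.apache.spark.CometPlugin \
    --conf spark.comet.exec.replaceSortMergeJoin=true \
    --conf spark.comet.exec.shuffle.enabled=true \
    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
    your_app.py
```

As the tuning guide text notes, Comet does not yet spill to disk for ShuffledHashJoin, so it is worth benchmarking workloads with the option both on and off before enabling it broadly.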
diff --git a/contributor-guide/adding_a_new_expression.html 
b/contributor-guide/adding_a_new_expression.html
index 452cbbf2..0699189c 100644
--- a/contributor-guide/adding_a_new_expression.html
+++ b/contributor-guide/adding_a_new_expression.html
@@ -614,7 +614,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/benchmark-results/tpc-ds.html 
b/contributor-guide/benchmark-results/tpc-ds.html
index ba53cf02..43930201 100644
--- a/contributor-guide/benchmark-results/tpc-ds.html
+++ b/contributor-guide/benchmark-results/tpc-ds.html
@@ -368,7 +368,7 @@ and we encourage you to run these benchmarks in your own 
environments.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/benchmark-results/tpc-h.html 
b/contributor-guide/benchmark-results/tpc-h.html
index c5b95d3a..a98351ea 100644
--- a/contributor-guide/benchmark-results/tpc-h.html
+++ b/contributor-guide/benchmark-results/tpc-h.html
@@ -369,7 +369,7 @@ and we encourage you to run these benchmarks in your own 
environments.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/benchmarking.html 
b/contributor-guide/benchmarking.html
index 1220293f..e53c88a2 100644
--- a/contributor-guide/benchmarking.html
+++ b/contributor-guide/benchmarking.html
@@ -384,6 +384,7 @@ repository.</p>
 <span class="w">    </span>--conf<span class="w"> 
</span>spark.executor.extraClassPath<span class="o">=</span><span 
class="nv">$COMET_JAR</span><span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> </span>spark.plugins<span 
class="o">=</span>org.apache.spark.CometPlugin<span class="w"> </span><span 
class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> 
</span>spark.comet.cast.allowIncompatible<span class="o">=</span><span 
class="nb">true</span><span class="w"> </span><span class="se">\</span>
+<span class="w">    </span>--conf<span class="w"> 
</span>spark.comet.exec.replaceSortMergeJoin<span class="o">=</span><span 
class="nb">true</span><span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> 
</span>spark.comet.exec.shuffle.enabled<span class="o">=</span><span 
class="nb">true</span><span class="w"> </span><span class="se">\</span>
 <span class="w">    </span>--conf<span class="w"> 
</span>spark.shuffle.manager<span 
class="o">=</span>org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager<span
 class="w"> </span><span class="se">\</span>
 <span class="w">    </span>tpcbench.py<span class="w"> </span><span 
class="se">\</span>
@@ -445,7 +446,7 @@ repository.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/contributing.html 
b/contributor-guide/contributing.html
index 05e86dc8..3eefa9ff 100644
--- a/contributor-guide/contributing.html
+++ b/contributor-guide/contributing.html
@@ -420,7 +420,7 @@ coordinate on issues that they are working on.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/debugging.html b/contributor-guide/debugging.html
index f287e73f..a3d30460 100644
--- a/contributor-guide/debugging.html
+++ b/contributor-guide/debugging.html
@@ -531,7 +531,7 @@ To enable this option with Comet it is needed to include 
<code class="docutils l
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/development.html 
b/contributor-guide/development.html
index 184f5855..aa7440e5 100644
--- a/contributor-guide/development.html
+++ b/contributor-guide/development.html
@@ -553,7 +553,7 @@ automatically format the code. Before submitting a pull 
request, you can simply
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/plugin_overview.html 
b/contributor-guide/plugin_overview.html
index bd34811e..5d6640f5 100644
--- a/contributor-guide/plugin_overview.html
+++ b/contributor-guide/plugin_overview.html
@@ -454,7 +454,7 @@ addresses of these Arrow arrays to the native code.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/profiling_native_code.html 
b/contributor-guide/profiling_native_code.html
index 3ea5d136..d538c345 100644
--- a/contributor-guide/profiling_native_code.html
+++ b/contributor-guide/profiling_native_code.html
@@ -440,7 +440,7 @@ running flamegraph.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/contributor-guide/spark-sql-tests.html 
b/contributor-guide/spark-sql-tests.html
index 25b1fef4..0ca76eac 100644
--- a/contributor-guide/spark-sql-tests.html
+++ b/contributor-guide/spark-sql-tests.html
@@ -490,7 +490,7 @@ new version.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/genindex.html b/genindex.html
index d973c14e..aee84d66 100644
--- a/genindex.html
+++ b/genindex.html
@@ -315,7 +315,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/index.html b/index.html
index 42718630..e681db74 100644
--- a/index.html
+++ b/index.html
@@ -391,7 +391,7 @@ as a native runtime to achieve improvement in terms of 
query efficiency and quer
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/search.html b/search.html
index 343b0b67..7cb76e4b 100644
--- a/search.html
+++ b/search.html
@@ -342,7 +342,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/searchindex.js b/searchindex.js
index ba2a82d1..ec810ac6 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[9, "install-comet"]], "2. 
Clone Spark and Apply Diff": [[9, "clone-spark-and-apply-diff"]], "3. Run Spark 
SQL Tests": [[9, "run-spark-sql-tests"]], "ANSI mode": [[11, "ansi-mode"]], 
"API Differences Between Spark Versions": [[0, 
"api-differences-between-spark-versions"]], "ASF Links": [[10, null]], "Adding 
Spark-side Tests for the New Expression": [[0, 
"adding-spark-side-tests-for-the-new-expression"]], "Adding a New Expression": 
[[0,  [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[9, "install-comet"]], "2. 
Clone Spark and Apply Diff": [[9, "clone-spark-and-apply-diff"]], "3. Run Spark 
SQL Tests": [[9, "run-spark-sql-tests"]], "ANSI mode": [[11, "ansi-mode"]], 
"API Differences Between Spark Versions": [[0, 
"api-differences-between-spark-versions"]], "ASF Links": [[10, null]], "Adding 
Spark-side Tests for the New Expression": [[0, 
"adding-spark-side-tests-for-the-new-expression"]], "Adding a New Expression": 
[[0,  [...]
\ No newline at end of file
diff --git a/user-guide/compatibility.html b/user-guide/compatibility.html
index 1bb201a1..8c39a118 100644
--- a/user-guide/compatibility.html
+++ b/user-guide/compatibility.html
@@ -792,7 +792,7 @@ Spark.</p></li>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/configs.html b/user-guide/configs.html
index d59bee6c..8f23dce4 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -441,111 +441,115 @@ under the License.
 <td><p>Whether to enable project by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.codec</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
+<td><p>Experimental feature to force Spark to replace SortMergeJoin with 
ShuffledHashJoin for improved performance. This feature is not stable yet. For 
more information, refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.codec</p></td>
 <td><p>The codec of Comet native shuffle used to compress shuffle data. Only 
zstd is supported.</p></td>
 <td><p>zstd</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
 <td><p>Whether to enable Comet native shuffle. Note that this requires setting 
‘spark.shuffle.manager’ to 
‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. 
‘spark.shuffle.manager’ must be set before starting the Spark application and 
cannot be changed during the application.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.sort.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.sort.enabled</p></td>
 <td><p>Whether to enable sort by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
 <td><p>Whether to enable sortMergeJoin by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
+<tr 
class="row-even"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
 <td><p>Experimental support for Sort Merge Join with filter</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.stddev.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.stddev.enabled</p></td>
 <td><p>Whether to enable stddev by default. stddev is slower than Spark’s 
implementation.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
+<tr 
class="row-even"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
 <td><p>Whether to enable takeOrderedAndProject by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.union.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.union.enabled</p></td>
 <td><p>Whether to enable union by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.window.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.window.enabled</p></td>
 <td><p>Whether to enable window by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.explain.native.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.explain.native.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide a tree representation 
of the native query plan before execution and again after execution, with 
metrics.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explain.verbose.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explain.verbose.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide a verbose tree 
representation of the extended information.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.explainFallback.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide logging explaining the 
reason(s) why a query stage cannot be executed natively. Set this to false to 
reduce the amount of logging.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
 <td><p>Fraction of executor memory to be allocated as additional non-heap 
memory per executor process for Comet. Default value is 0.2.</p></td>
 <td><p>0.2</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
 <td><p>Minimum amount of additional memory to be allocated per executor 
process for Comet, in MiB.</p></td>
 <td><p>402653184b</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
 <td><p>Whether to require Comet native library to load successfully when Comet 
is enabled. If not, Comet will silently fallback to Spark when it fails to load 
the native lib. Otherwise, an error will be thrown and the Spark job will be 
aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
 <td><p>Whether to use Java direct byte buffer when reading Parquet. By 
default, this is false</p></td>
 <td><p>false</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
+<tr 
class="row-even"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
 <td><p>In the parallel reader, if the read ranges submitted are skewed in 
sizes, this option will cause the reader to break up larger read ranges into 
smaller ranges to reduce the skew. This will result in a slightly larger number 
of connections opened to the file system but may give improved performance. The 
option is off by default.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
 <td><p>When enabled the parallel reader will try to merge ranges of data that 
are separated by less than ‘comet.parquet.read.io.mergeRanges.delta’ bytes. 
Longer continuous reads are faster on cloud storage. The default behavior is to 
merge consecutive ranges.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
+<tr 
class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
 <td><p>The delta in bytes between consecutive read ranges below which the 
parallel reader will try to merge the ranges. The default is 8MB.</p></td>
 <td><p>8388608</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
+<tr 
class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
 <td><p>Whether to enable Comet’s parallel reader for Parquet files. The 
parallel reader reads ranges of consecutive data in a  file in parallel. It is 
faster for large files and row groups but uses more resources. The parallel 
reader is enabled by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
+<tr 
class="row-even"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
 <td><p>The maximum number of parallel threads the parallel reader will use in 
a single executor. For executors configured with a smaller number of cores, use 
a smaller number.</p></td>
 <td><p>16</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.regexp.allowIncompatible</p></td>
+<tr class="row-odd"><td><p>spark.comet.regexp.allowIncompatible</p></td>
 <td><p>Comet is not currently fully compatible with Spark for all regular 
expressions. Set this config to true to allow them anyway using Rust’s regular 
expression engine. See compatibility guide for more information.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
 <td><p>Whether to enable native scans. When this is turned on, Spark will use 
Comet to read supported data sources (currently only Parquet is supported 
natively). Note that to enable native vectorized execution, both this config 
and ‘spark.comet.exec.enabled’ need to be enabled. By default, this config is 
true.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan. By default is 
disabled.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if 
spark.comet.scan.preFetch.enabled is enabled. By default it is 2. Note that 
more pre-fetching threads means more memory requirement to store pre-fetched 
row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column to 
decide whether to prefer dictionary encoding when shuffling the column. If the 
ratio is higher than this config, dictionary encoding will be used on shuffling 
string column. This config is effective if it is higher than 1.0. By default, 
this config is 10.0. Note that this config is only used when <code 
class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.shuffle.mode</span></code> is <code class= [...]
 <td><p>10.0</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
+<tr 
class="row-even"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of operators that will be converted to Arrow 
columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
 <td><p>Range,InMemoryTableScan</p></td>
 </tr>
@@ -595,7 +599,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/datasources.html b/user-guide/datasources.html
index 0e2eeac4..9de7b14c 100644
--- a/user-guide/datasources.html
+++ b/user-guide/datasources.html
@@ -405,7 +405,7 @@ converted into Arrow format, allowing native execution to 
happen after that.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/datatypes.html b/user-guide/datatypes.html
index 1230412c..88265922 100644
--- a/user-guide/datatypes.html
+++ b/user-guide/datatypes.html
@@ -394,7 +394,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/expressions.html b/user-guide/expressions.html
index 3ec7bd05..f56c2521 100644
--- a/user-guide/expressions.html
+++ b/user-guide/expressions.html
@@ -945,7 +945,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/installation.html b/user-guide/installation.html
index 59e8d07f..b8fc4726 100644
--- a/user-guide/installation.html
+++ b/user-guide/installation.html
@@ -543,7 +543,7 @@ allocation for native execution. See <a class="reference 
internal" href="tuning.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/kubernetes.html b/user-guide/kubernetes.html
index 39c6f880..cd21df24 100644
--- a/user-guide/kubernetes.html
+++ b/user-guide/kubernetes.html
@@ -472,7 +472,7 @@ spec:
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/operators.html b/user-guide/operators.html
index bb739a2a..c127ade3 100644
--- a/user-guide/operators.html
+++ b/user-guide/operators.html
@@ -412,7 +412,7 @@ not supported by Comet will fall back to regular Spark 
execution.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/overview.html b/user-guide/overview.html
index 90aca87a..98af6825 100644
--- a/user-guide/overview.html
+++ b/user-guide/overview.html
@@ -422,7 +422,7 @@ enabled.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/source.html b/user-guide/source.html
index 22428e44..82f7d2fd 100644
--- a/user-guide/source.html
+++ b/user-guide/source.html
@@ -421,7 +421,7 @@ under the License.
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     
diff --git a/user-guide/tuning.html b/user-guide/tuning.html
index c01343f3..2f226a38 100644
--- a/user-guide/tuning.html
+++ b/user-guide/tuning.html
@@ -308,6 +308,11 @@ under the License.
    Configuring spark.executor.memoryOverhead
   </a>
  </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" href="#optimizing-joins">
+   Optimizing Joins
+  </a>
+ </li>
  <li class="toc-h2 nav-item toc-entry">
   <a class="reference internal nav-link" href="#shuffle">
    Shuffle
@@ -437,6 +442,16 @@ resource managers respect Apache Spark memory 
configuration before starting the
 does not take executor concurrency into account. The tracking issue for this is
 https://github.com/apache/datafusion-comet/issues/949.</p>
 </section>
+<section id="optimizing-joins">
+<h2>Optimizing Joins<a class="headerlink" href="#optimizing-joins" title="Link 
to this heading">¶</a></h2>
+<p>Spark often chooses <code class="docutils literal notranslate"><span 
class="pre">SortMergeJoin</span></code> over <code class="docutils literal 
notranslate"><span class="pre">ShuffledHashJoin</span></code> for stability 
reasons. If the build-side of a
+<code class="docutils literal notranslate"><span 
class="pre">ShuffledHashJoin</span></code> is very large then it could lead to 
OOM in Spark.</p>
+<p>Vectorized query engines tend to perform better with <code class="docutils 
literal notranslate"><span class="pre">ShuffledHashJoin</span></code>, so for 
best performance it is often preferable
+to configure Comet to convert <code class="docutils literal notranslate"><span 
class="pre">SortMergeJoin</span></code> to <code class="docutils literal 
notranslate"><span class="pre">ShuffledHashJoin</span></code>. Comet does not 
yet provide spill-to-disk for
+<code class="docutils literal notranslate"><span 
class="pre">ShuffledHashJoin</span></code> so this could result in OOM. Also, 
<code class="docutils literal notranslate"><span 
class="pre">SortMergeJoin</span></code> may still be faster in some cases. It 
is best
+to test with both for your specific workloads.</p>
+<p>To configure Comet to convert <code class="docutils literal 
notranslate"><span class="pre">SortMergeJoin</span></code> to <code 
class="docutils literal notranslate"><span 
class="pre">ShuffledHashJoin</span></code>, set <code class="docutils literal 
notranslate"><span 
class="pre">spark.comet.exec.replaceSortMergeJoin=true</span></code>.</p>
+</section>
 <section id="shuffle">
 <h2>Shuffle<a class="headerlink" href="#shuffle" title="Link to this 
heading">¶</a></h2>
 <p>Comet provides accelerated shuffle implementations that can be used to 
improve the performance of your queries.</p>
@@ -524,7 +539,7 @@ of 41 seconds reported as 23 seconds for example.</p>
     
     <div class="footer-item">
       <p class="sphinx-version">
-Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.0.<br>
+Created using <a href="http://sphinx-doc.org/";>Sphinx</a> 8.1.3.<br>
 </p>
     </div>
     


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
