This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 4f25691f Publish built docs triggered by
ea6d20511e813a2698c47a964b3a0739e9543add
4f25691f is described below
commit 4f25691fe03a8d71c7a2f88d15b35bbc7fa4d4ce
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Dec 20 18:11:44 2024 +0000
Publish built docs triggered by ea6d20511e813a2698c47a964b3a0739e9543add
---
_sources/user-guide/configs.md.txt | 3 +-
_sources/user-guide/tuning.md.txt | 6 ++++
searchindex.js | 2 +-
user-guide/configs.html | 60 ++++++++++++++++++++------------------
user-guide/tuning.html | 11 +++++++
5 files changed, 52 insertions(+), 30 deletions(-)
diff --git a/_sources/user-guide/configs.md.txt
b/_sources/user-guide/configs.md.txt
index 69da7922..7881f076 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -50,7 +50,8 @@ Comet provides the following configuration settings.
| spark.comet.exec.memoryFraction | The fraction of memory from Comet memory
overhead that the native memory manager can use for execution. The purpose of
this config is to set aside memory for untracked data structures, as well as
imprecise size estimation during memory acquisition. | 0.7 |
| spark.comet.exec.project.enabled | Whether to enable project by default. |
true |
| spark.comet.exec.replaceSortMergeJoin | Experimental feature to force Spark
to replace SortMergeJoin with ShuffledHashJoin for improved performance. This
feature is not stable yet. For more information, refer to the Comet Tuning
Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
-| spark.comet.exec.shuffle.codec | The codec of Comet native shuffle used to
compress shuffle data. Only zstd is supported. | zstd |
+| spark.comet.exec.shuffle.compression.codec | The codec used by Comet native
shuffle to compress shuffle data. Only zstd is supported. Compression can
be disabled by setting spark.shuffle.compress=false. | zstd |
+| spark.comet.exec.shuffle.compression.level | The compression level to use
when compressing shuffle files. | 1 |
| spark.comet.exec.shuffle.enabled | Whether to enable Comet native shuffle.
Note that this requires setting 'spark.shuffle.manager' to
'org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager'.
'spark.shuffle.manager' must be set before starting the Spark application and
cannot be changed during the application. | true |
| spark.comet.exec.sort.enabled | Whether to enable sort by default. | true |
| spark.comet.exec.sortMergeJoin.enabled | Whether to enable sortMergeJoin by
default. | true |
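For readers adopting the renamed options above, a hedged sketch of how they might be passed at submit time. The config keys and the CometShuffleManager class name come from the table; the application jar name is a placeholder, and the values shown are the documented defaults, not tuning recommendations.

```shell
# Illustrative spark-submit invocation wiring up Comet native shuffle with the
# renamed compression settings. spark.shuffle.manager must be set at startup
# and cannot be changed while the application is running.
spark-submit \
  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
  --conf spark.comet.exec.shuffle.enabled=true \
  --conf spark.comet.exec.shuffle.compression.codec=zstd \
  --conf spark.comet.exec.shuffle.compression.level=1 \
  my-app.jar
```

Note that jobs still using the old key, spark.comet.exec.shuffle.codec, would need to migrate to the spark.comet.exec.shuffle.compression.* names introduced in this change.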
diff --git a/_sources/user-guide/tuning.md.txt
b/_sources/user-guide/tuning.md.txt
index d68481d1..e04e750b 100644
--- a/_sources/user-guide/tuning.md.txt
+++ b/_sources/user-guide/tuning.md.txt
@@ -103,6 +103,12 @@ native shuffle currently only supports `HashPartitioning`
and `SinglePartitionin
To enable native shuffle, set `spark.comet.exec.shuffle.mode` to `native`. If
this mode is explicitly set,
then any shuffle operations that cannot be supported in this mode will fall
back to Spark.
+### Shuffle Compression
+
+By default, Spark compresses shuffle files using LZ4 compression. Comet
overrides this behavior with ZSTD compression.
+Compression can be disabled by setting `spark.shuffle.compress=false`, which
may result in faster shuffle times in
+certain environments, such as single-node setups with fast NVMe drives, at the
expense of increased disk space usage.
+
## Explain Plan
### Extended Explain
With Spark 4.0.0 and newer, Comet can provide extended explain plan
information in the Spark UI. Currently this lists
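The new tuning section above notes that compression can be turned off entirely. A hedged sketch of that configuration, under the assumptions stated in the section (fast local storage, disk space traded for shuffle speed); the jar name is a placeholder.

```shell
# Illustrative: disable shuffle compression as described in the Shuffle
# Compression tuning note. Worth benchmarking only in environments such as
# single-node setups with fast NVMe drives; shuffle files will be larger.
spark-submit \
  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
  --conf spark.comet.exec.shuffle.enabled=true \
  --conf spark.shuffle.compress=false \
  my-app.jar
```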
diff --git a/searchindex.js b/searchindex.js
index cd9a2048..3c4dbc66 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[9, "install-comet"]], "2.
Clone Spark and Apply Diff": [[9, "clone-spark-and-apply-diff"]], "3. Run Spark
SQL Tests": [[9, "run-spark-sql-tests"]], "ANSI mode": [[11, "ansi-mode"]],
"API Differences Between Spark Versions": [[0,
"api-differences-between-spark-versions"]], "ASF Links": [[10, null]], "Adding
Spark-side Tests for the New Expression": [[0,
"adding-spark-side-tests-for-the-new-expression"]], "Adding a New Expression":
[[0, [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[9, "install-comet"]], "2.
Clone Spark and Apply Diff": [[9, "clone-spark-and-apply-diff"]], "3. Run Spark
SQL Tests": [[9, "run-spark-sql-tests"]], "ANSI mode": [[11, "ansi-mode"]],
"API Differences Between Spark Versions": [[0,
"api-differences-between-spark-versions"]], "ASF Links": [[10, null]], "Adding
Spark-side Tests for the New Expression": [[0,
"adding-spark-side-tests-for-the-new-expression"]], "Adding a New Expression":
[[0, [...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index f4235bb1..735311c6 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -446,111 +446,115 @@ under the License.
<td><p>Experimental feature to force Spark to replace SortMergeJoin with
ShuffledHashJoin for improved performance. This feature is not stable yet. For
more information, refer to the Comet Tuning Guide
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.codec</p></td>
-<td><p>The codec of Comet native shuffle used to compress shuffle data. Only
zstd is supported.</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
+<td><p>The codec used by Comet native shuffle to compress shuffle data. Only
zstd is supported. Compression can be disabled by setting
spark.shuffle.compress=false.</p></td>
<td><p>zstd</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.compression.level</p></td>
+<td><p>The compression level to use when compressing shuffle files.</p></td>
+<td><p>1</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
<td><p>Whether to enable Comet native shuffle. Note that this requires setting
‘spark.shuffle.manager’ to
‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’.
‘spark.shuffle.manager’ must be set before starting the Spark application and
cannot be changed during the application.</p></td>
<td><p>true</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.sort.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.sort.enabled</p></td>
<td><p>Whether to enable sort by default.</p></td>
<td><p>true</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
<td><p>Whether to enable sortMergeJoin by default.</p></td>
<td><p>true</p></td>
</tr>
-<tr
class="row-odd"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
+<tr
class="row-even"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
<td><p>Experimental support for Sort Merge Join with filter</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.exec.stddev.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.stddev.enabled</p></td>
<td><p>Whether to enable stddev by default. stddev is slower than Spark’s
implementation.</p></td>
<td><p>true</p></td>
</tr>
-<tr
class="row-odd"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
+<tr
class="row-even"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
<td><p>Whether to enable takeOrderedAndProject by default.</p></td>
<td><p>true</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.exec.union.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.union.enabled</p></td>
<td><p>Whether to enable union by default.</p></td>
<td><p>true</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.window.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.window.enabled</p></td>
<td><p>Whether to enable window by default.</p></td>
<td><p>true</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.explain.native.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.explain.native.enabled</p></td>
<td><p>When this setting is enabled, Comet will provide a tree representation
of the native query plan before execution and again after execution, with
metrics.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.explain.verbose.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explain.verbose.enabled</p></td>
<td><p>When this setting is enabled, Comet will provide a verbose tree
representation of the extended information.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.explainFallback.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
<td><p>When this setting is enabled, Comet will provide logging explaining the
reason(s) why a query stage cannot be executed natively. Set this to false to
reduce the amount of logging.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
<td><p>Fraction of executor memory to be allocated as additional non-heap
memory per executor process for Comet.</p></td>
<td><p>0.2</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
<td><p>Minimum amount of additional memory to be allocated per executor
process for Comet, in MiB.</p></td>
<td><p>402653184b</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
<td><p>Whether to require Comet native library to load successfully when Comet
is enabled. If not, Comet will silently fallback to Spark when it fails to load
the native lib. Otherwise, an error will be thrown and the Spark job will be
aborted.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
<td><p>Whether to use Java direct byte buffer when reading Parquet.</p></td>
<td><p>false</p></td>
</tr>
-<tr
class="row-odd"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
+<tr
class="row-even"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
<td><p>In the parallel reader, if the read ranges submitted are skewed in
sizes, this option will cause the reader to break up larger read ranges into
smaller ranges to reduce the skew. This will result in a slightly larger number
of connections opened to the file system but may give improved
performance.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
<td><p>When enabled the parallel reader will try to merge ranges of data that
are separated by less than ‘comet.parquet.read.io.mergeRanges.delta’ bytes.
Longer continuous reads are faster on cloud storage.</p></td>
<td><p>true</p></td>
</tr>
-<tr
class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
+<tr
class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
<td><p>The delta in bytes between consecutive read ranges below which the
parallel reader will try to merge the ranges. The default is 8MB.</p></td>
<td><p>8388608</p></td>
</tr>
-<tr
class="row-even"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
+<tr
class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
<td><p>Whether to enable Comet’s parallel reader for Parquet files. The
parallel reader reads ranges of consecutive data in a file in parallel. It is
faster for large files and row groups but uses more resources.</p></td>
<td><p>true</p></td>
</tr>
-<tr
class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
+<tr
class="row-even"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
<td><p>The maximum number of parallel threads the parallel reader will use in
a single executor. For executors configured with a smaller number of cores, use
a smaller number.</p></td>
<td><p>16</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.regexp.allowIncompatible</p></td>
+<tr class="row-odd"><td><p>spark.comet.regexp.allowIncompatible</p></td>
<td><p>Comet is not currently fully compatible with Spark for all regular
expressions. Set this config to true to allow them anyway using Rust’s regular
expression engine. See compatibility guide for more information.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
<td><p>Whether to enable native scans. When this is turned on, Spark will use
Comet to read supported data sources (currently only Parquet is supported
natively). Note that to enable native vectorized execution, both this config
and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
<td><p>true</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
<td><p>Whether to enable pre-fetching feature of CometScan.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
<td><p>The number of threads running pre-fetching for CometScan. Effective if
spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching
threads means more memory requirement to store pre-fetched row groups.</p></td>
<td><p>2</p></td>
</tr>
-<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
<td><p>The ratio of total values to distinct values in a string column to
decide whether to prefer dictionary encoding when shuffling the column. If the
ratio is higher than this config, dictionary encoding will be used on shuffling
string column. This config is effective if it is higher than 1.0. Note that
this config is only used when <code class="docutils literal notranslate"><span
class="pre">spark.comet.exec.shuffle.mode</span></code> is <code
class="docutils literal notranslate"><s [...]
<td><p>10.0</p></td>
</tr>
-<tr
class="row-odd"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
+<tr
class="row-even"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
<td><p>A comma-separated list of operators that will be converted to Arrow
columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
<td><p>Range,InMemoryTableScan</p></td>
</tr>
diff --git a/user-guide/tuning.html b/user-guide/tuning.html
index 30fecd2c..b3a4ae27 100644
--- a/user-guide/tuning.html
+++ b/user-guide/tuning.html
@@ -335,6 +335,11 @@ under the License.
</li>
</ul>
</li>
+ <li class="toc-h3 nav-item toc-entry">
+ <a class="reference internal nav-link" href="#shuffle-compression">
+ Shuffle Compression
+ </a>
+ </li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
@@ -467,6 +472,12 @@ native shuffle currently only supports <code
class="docutils literal notranslate
then any shuffle operations that cannot be supported in this mode will fall
back to Spark.</p>
</section>
</section>
+<section id="shuffle-compression">
+<h3>Shuffle Compression<a class="headerlink" href="#shuffle-compression"
title="Link to this heading">¶</a></h3>
+<p>By default, Spark compresses shuffle files using LZ4 compression. Comet
overrides this behavior with ZSTD compression.
+Compression can be disabled by setting <code class="docutils literal
notranslate"><span class="pre">spark.shuffle.compress=false</span></code>,
which may result in faster shuffle times in
+certain environments, such as single-node setups with fast NVMe drives, at the
expense of increased disk space usage.</p>
+</section>
</section>
<section id="explain-plan">
<h2>Explain Plan<a class="headerlink" href="#explain-plan" title="Link to this
heading">¶</a></h2>
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]