This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 6c8ecaa98 Publish built docs triggered by 9320aedc8df2e8f7e5acb42ecdc44f33dff5d592
6c8ecaa98 is described below

commit 6c8ecaa98416a8ab55adff623c4d1faf4235479f
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Jan 3 06:34:23 2025 +0000

    Publish built docs triggered by 9320aedc8df2e8f7e5acb42ecdc44f33dff5d592
---
 _sources/user-guide/configs.md.txt |  1 +
 searchindex.js                     |  2 +-
 user-guide/configs.html            | 64 ++++++++++++++++++++------------------
 3 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/_sources/user-guide/configs.md.txt b/_sources/user-guide/configs.md.txt
index 7881f0763..ecea70254 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -48,6 +48,7 @@ Comet provides the following configuration settings.
 | spark.comet.exec.hashJoin.enabled | Whether to enable hashJoin by default. | true |
 | spark.comet.exec.localLimit.enabled | Whether to enable localLimit by default. | true |
 | spark.comet.exec.memoryFraction | The fraction of memory from Comet memory overhead that the native memory manager can use for execution. The purpose of this config is to set aside memory for untracked data structures, as well as imprecise size estimation during memory acquisition. | 0.7 |
+| spark.comet.exec.memoryPool | The type of memory pool to be used for Comet native execution. Available memory pool types are 'greedy', 'fair_spill', 'greedy_task_shared', 'fair_spill_task_shared', 'greedy_global' and 'fair_spill_global'. By default, this config is 'greedy_task_shared'. | greedy_task_shared |
 | spark.comet.exec.project.enabled | Whether to enable project by default. | true |
 | spark.comet.exec.replaceSortMergeJoin | Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the Comet Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
 | spark.comet.exec.shuffle.compression.codec | The codec of Comet native shuffle used to compress shuffle data. Only zstd is supported. Compression can be disabled by setting spark.shuffle.compress=false. | zstd |
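
The new spark.comet.exec.memoryPool entry added above is an ordinary Spark configuration key, so it can be supplied the same way as any other conf. A minimal Scala sketch, assuming a standard SparkSession setup (the application name and the 'fair_spill' choice are illustrative, not taken from this commit):

    import org.apache.spark.sql.SparkSession

    // Select one of the documented Comet memory pool types; 'fair_spill' is
    // just an example value, the documented default is 'greedy_task_shared'.
    val spark = SparkSession.builder()
      .appName("comet-memory-pool-example") // illustrative name
      // .master("local[*]")                // uncomment for local testing
      .config("spark.comet.exec.memoryPool", "fair_spill")
      .config("spark.comet.exec.memoryFraction", "0.7") // documented default
      .getOrCreate()

The same key can equally be passed on the command line, e.g. --conf spark.comet.exec.memoryPool=fair_spill on spark-submit.
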
diff --git a/searchindex.js b/searchindex.js
index 3c4dbc666..1bc13a52d 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[9, "install-comet"]], "2. Clone Spark and Apply Diff": [[9, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[9, "run-spark-sql-tests"]], "ANSI mode": [[11, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "ASF Links": [[10, null]], "Adding Spark-side Tests for the New Expression": [[0, "adding-spark-side-tests-for-the-new-expression"]], "Adding a New Expression": [[0,  [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[9, "install-comet"]], "2. Clone Spark and Apply Diff": [[9, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[9, "run-spark-sql-tests"]], "ANSI mode": [[11, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "ASF Links": [[10, null]], "Adding Spark-side Tests for the New Expression": [[0, "adding-spark-side-tests-for-the-new-expression"]], "Adding a New Expression": [[0,  [...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 735311c69..ff5ac6112 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -438,123 +438,127 @@ under the License.
 <td><p>The fraction of memory from Comet memory overhead that the native memory manager can use for execution. The purpose of this config is to set aside memory for untracked data structures, as well as imprecise size estimation during memory acquisition.</p></td>
 <td><p>0.7</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.project.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.memoryPool</p></td>
+<td><p>The type of memory pool to be used for Comet native execution. Available memory pool types are ‘greedy’, ‘fair_spill’, ‘greedy_task_shared’, ‘fair_spill_task_shared’, ‘greedy_global’ and ‘fair_spill_global’. By default, this config is ‘greedy_task_shared’.</p></td>
+<td><p>greedy_task_shared</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.exec.project.enabled</p></td>
 <td><p>Whether to enable project by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
 <td><p>Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the Comet Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
 <td><p>The codec of Comet native shuffle used to compress shuffle data. Only zstd is supported. Compression can be disabled by setting spark.shuffle.compress=false.</p></td>
 <td><p>zstd</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.shuffle.compression.level</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.compression.level</p></td>
 <td><p>The compression level to use when compressing shuffle files.</p></td>
 <td><p>1</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
 <td><p>Whether to enable Comet native shuffle. Note that this requires setting ‘spark.shuffle.manager’ to ‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. ‘spark.shuffle.manager’ must be set before starting the Spark application and cannot be changed during the application.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.sort.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sort.enabled</p></td>
 <td><p>Whether to enable sort by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
 <td><p>Whether to enable sortMergeJoin by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
 <td><p>Experimental support for Sort Merge Join with filter</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.stddev.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.stddev.enabled</p></td>
 <td><p>Whether to enable stddev by default. stddev is slower than Spark’s implementation.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
 <td><p>Whether to enable takeOrderedAndProject by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.union.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.union.enabled</p></td>
 <td><p>Whether to enable union by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.window.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.window.enabled</p></td>
 <td><p>Whether to enable window by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explain.native.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explain.native.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide a tree representation of the native query plan before execution and again after execution, with metrics.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.explain.verbose.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.explain.verbose.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide a verbose tree representation of the extended information.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explainFallback.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide logging explaining the reason(s) why a query stage cannot be executed natively. Set this to false to reduce the amount of logging.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
 <td><p>Fraction of executor memory to be allocated as additional non-heap memory per executor process for Comet.</p></td>
 <td><p>0.2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
 <td><p>Minimum amount of additional memory to be allocated per executor process for Comet, in MiB.</p></td>
 <td><p>402653184b</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
 <td><p>Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fall back to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
 <td><p>Whether to use Java direct byte buffer when reading Parquet.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
 <td><p>In the parallel reader, if the read ranges submitted are skewed in sizes, this option will cause the reader to break up larger read ranges into smaller ranges to reduce the skew. This will result in a slightly larger number of connections opened to the file system but may give improved performance.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
 <td><p>When enabled, the parallel reader will try to merge ranges of data that are separated by less than ‘comet.parquet.read.io.mergeRanges.delta’ bytes. Longer continuous reads are faster on cloud storage.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
 <td><p>The delta in bytes between consecutive read ranges below which the parallel reader will try to merge the ranges. The default is 8MB.</p></td>
 <td><p>8388608</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
 <td><p>Whether to enable Comet’s parallel reader for Parquet files. The parallel reader reads ranges of consecutive data in a file in parallel. It is faster for large files and row groups but uses more resources.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
 <td><p>The maximum number of parallel threads the parallel reader will use in a single executor. For executors configured with a smaller number of cores, use a smaller number.</p></td>
 <td><p>16</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.regexp.allowIncompatible</p></td>
+<tr class="row-even"><td><p>spark.comet.regexp.allowIncompatible</p></td>
 <td><p>Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway using Rust’s regular expression engine. See compatibility guide for more information.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
 <td><p>Whether to enable native scans. When this is turned on, Spark will use Comet to read supported data sources (currently only Parquet is supported natively). Note that to enable native vectorized execution, both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used on shuffling string column. This config is effective if it is higher than 1.0. Note that this config is only used when <code class="docutils literal notranslate"><span class="pre">spark.comet.exec.shuffle.mode</span></code> is <code class="docutils literal notranslate"><s [...]
 <td><p>10.0</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
+<tr class="row-odd"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of operators that will be converted to Arrow columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
 <td><p>Range,InMemoryTableScan</p></td>
 </tr>
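
As the spark.comet.exec.shuffle.enabled row above notes, native shuffle additionally requires ‘spark.shuffle.manager’ to point at CometShuffleManager before the application starts. A minimal Scala sketch of satisfying that requirement at session creation time (the application name is illustrative, not from this commit):

    import org.apache.spark.sql.SparkSession

    // spark.shuffle.manager cannot be changed once the application is running,
    // so it is set on the builder (or via --conf on spark-submit) before the
    // SparkContext is created.
    val spark = SparkSession.builder()
      .appName("comet-native-shuffle-example") // illustrative name
      .config("spark.shuffle.manager",
        "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
      .config("spark.comet.exec.shuffle.enabled", "true")
      .config("spark.comet.exec.shuffle.compression.codec", "zstd") // documented default
      .getOrCreate()
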


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
