This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 801118ed Publish built docs triggered by 
2bf7d12109ff2644855db357306fc752dccddc6e
801118ed is described below

commit 801118ed5b64aa32b8c1900e19d00e8fef79f11f
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue May 14 21:17:44 2024 +0000

    Publish built docs triggered by 2bf7d12109ff2644855db357306fc752dccddc6e
---
 _sources/user-guide/configs.md.txt |  1 -
 searchindex.js                     |  2 +-
 user-guide/configs.html            | 32 ++++++++++++++------------------
 3 files changed, 15 insertions(+), 20 deletions(-)

diff --git a/_sources/user-guide/configs.md.txt 
b/_sources/user-guide/configs.md.txt
index 24f408a0..0204b0c5 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -35,7 +35,6 @@ Comet provides the following configuration settings.
 | spark.comet.enabled | Whether to enable Comet extension for Spark. When this 
is turned on, Spark will use Comet to read Parquet data source. Note that to 
enable native vectorized execution, both this config and 
'spark.comet.exec.enabled' need to be enabled. By default, this config is the 
value of the env var `ENABLE_COMET` if set, or true otherwise. | true |
 | spark.comet.exceptionOnDatetimeRebase | Whether to throw exception when 
seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. 
Since Spark 3, dates/timestamps were written according to the Proleptic 
Gregorian calendar. When this is true, Comet will throw exceptions when seeing 
these dates/timestamps that were written by Spark version before 3.0. If this 
is false, these dates/timestamps will be read as if they were written to the 
Proleptic Gregorian calendar and w [...]
 | spark.comet.exec.all.enabled | Whether to enable all Comet operators. By 
default, this config is false. Note that this config precedes all separate 
config 'spark.comet.exec.<operator_name>.enabled'. That being said, if this 
config is enabled, separate configs are ignored. | false |
-| spark.comet.exec.all.expr.enabled | Whether to enable all Comet exprs. By 
default, this config is false. Note that this config precedes all separate 
config 'spark.comet.exec.<expr_name>.enabled'. That being said, if this config 
is enabled, separate configs are ignored. | false |
 | spark.comet.exec.enabled | Whether to enable Comet native vectorized 
execution for Spark. This controls whether Spark should convert operators into 
their Comet counterparts and execute them in native space. Note: each operator 
is associated with a separate config in the format of 
'spark.comet.exec.<operator_name>.enabled' at the moment, and both the config 
and this need to be turned on, in order for the operator to be executed in 
native. By default, this config is false. | false |
 | spark.comet.exec.memoryFraction | The fraction of memory from Comet memory 
overhead that the native memory manager can use for execution. The purpose of 
this config is to set aside memory for untracked data structures, as well as 
imprecise size estimation during memory acquisition. Default value is 0.7. | 
0.7 |
 | spark.comet.exec.shuffle.codec | The codec of Comet native shuffle used to 
compress shuffle data. Only zstd is supported. | zstd |
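
The settings in the table above are normally combined when a Spark session is created. Below is a minimal Scala sketch of that, not part of this commit: the spark.sql.extensions class name and the Parquet path are assumptions for illustration, while the spark.comet.* keys and their defaults come from the table itself.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: enable Comet and its native execution for a local session.
// The extension class name is an assumption about how Comet registers itself;
// the spark.comet.* keys and defaults match the configuration table above.
object CometEnableExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .config("spark.sql.extensions", "org.apache.comet.CometSparkSessionExtensions") // assumed class name
      .config("spark.comet.enabled", "true")            // use Comet to read Parquet (default: true)
      .config("spark.comet.exec.enabled", "true")       // convert supported operators to native execution
      .config("spark.comet.exec.all.enabled", "true")   // enable all operators; per-operator configs are ignored
      .config("spark.comet.exec.memoryFraction", "0.7") // fraction of Comet memory overhead for native execution
      .getOrCreate()

    // Illustrative read: with the configs above, the Parquet scan goes through Comet.
    spark.read.parquet("/path/to/data.parquet").show()
    spark.stop()
  }
}
```

Per the first row of the table, both spark.comet.enabled and spark.comet.exec.enabled need to be on for native vectorized execution.
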
diff --git a/searchindex.js b/searchindex.js
index 515c36b3..32a410b3 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"ANSI mode": [[5, "ansi-mode"]], "ASF Links": 
[[4, null]], "Additional Info": [[1, "additional-info"]], "After your debugging 
is done": [[1, "after-your-debugging-is-done"]], "Apache DataFusion Comet": 
[[4, "apache-datafusion-comet"]], "Architecture": [[11, "architecture"]], 
"Asking for Help": [[0, "asking-for-help"]], "Benchmark": [[2, "benchmark"]], 
"Build & Test": [[2, "build-test"]], "Building From Source": [[9, 
"building-from-source"]], "CLion": [[2, " [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"ANSI mode": [[5, "ansi-mode"]], "ASF Links": 
[[4, null]], "Additional Info": [[1, "additional-info"]], "After your debugging 
is done": [[1, "after-your-debugging-is-done"]], "Apache DataFusion Comet": 
[[4, "apache-datafusion-comet"]], "Architecture": [[11, "architecture"]], 
"Asking for Help": [[0, "asking-for-help"]], "Benchmark": [[2, "benchmark"]], 
"Build & Test": [[2, "build-test"]], "Building From Source": [[9, 
"building-from-source"]], "CLion": [[2, " [...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 01dd6cf7..a82aee63 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -341,63 +341,59 @@ under the License.
 <td><p>Whether to enable all Comet operators. By default, this config is 
false. Note that this config precedes all separate config 
‘spark.comet.exec.&lt;operator_name&gt;.enabled’. That being said, if this 
config is enabled, separate configs are ignored.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.all.expr.enabled</p></td>
-<td><p>Whether to enable all Comet exprs. By default, this config is false. 
Note that this config precedes all separate config 
‘spark.comet.exec.&lt;expr_name&gt;.enabled’. That being said, if this config 
is enabled, separate configs are ignored.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.enabled</p></td>
 <td><p>Whether to enable Comet native vectorized execution for Spark. This 
controls whether Spark should convert operators into their Comet counterparts 
and execute them in native space. Note: each operator is associated with a 
separate config in the format of 
‘spark.comet.exec.&lt;operator_name&gt;.enabled’ at the moment, and both the 
config and this need to be turned on, in order for the operator to be executed 
in native. By default, this config is false.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.memoryFraction</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.memoryFraction</p></td>
 <td><p>The fraction of memory from Comet memory overhead that the native 
memory manager can use for execution. The purpose of this config is to set 
aside memory for untracked data structures, as well as imprecise size 
estimation during memory acquisition. Default value is 0.7.</p></td>
 <td><p>0.7</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.codec</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.codec</p></td>
 <td><p>The codec of Comet native shuffle used to compress shuffle data. Only 
zstd is supported.</p></td>
 <td><p>zstd</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
 <td><p>Whether to enable Comet native shuffle. By default, this config is 
false. Note that this requires setting ‘spark.shuffle.manager’ to 
‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. 
‘spark.shuffle.manager’ must be set before starting the Spark application and 
cannot be changed during the application.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explainFallback.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide logging explaining the 
reason(s) why a query stage cannot be executed natively.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
 <td><p>Fraction of executor memory to be allocated as additional non-heap 
memory per executor process for Comet. Default value is 0.2.</p></td>
 <td><p>0.2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
 <td><p>Minimum amount of additional memory to be allocated per executor 
process for Comet, in MiB.</p></td>
 <td><p>402653184b</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
 <td><p>Whether to require Comet native library to load successfully when Comet 
is enabled. If not, Comet will silently fallback to Spark when it fails to load 
the native lib. Otherwise, an error will be thrown and the Spark job will be 
aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
 <td><p>Whether to use Java direct byte buffer when reading Parquet. By 
default, this is false</p></td>
 <td><p>false</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.rowToColumnar.supportedOperatorList</p></td>
+<tr 
class="row-odd"><td><p>spark.comet.rowToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of row-based operators that will be converted to 
columnar format when ‘spark.comet.rowToColumnar.enabled’ is true</p></td>
 <td><p>Range,InMemoryTableScan</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
 <td><p>Whether to enable Comet scan. When this is turned on, Spark will use 
Comet to read Parquet data source. Note that to enable native vectorized 
execution, both this config and ‘spark.comet.exec.enabled’ need to be enabled. 
By default, this config is true.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan. By default is 
disabled.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if 
spark.comet.scan.preFetch.enabled is enabled. By default it is 2. Note that 
more pre-fetching threads means more memory requirement to store pre-fetched 
row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column to 
decide whether to prefer dictionary encoding when shuffling the column. If the 
ratio is higher than this config, dictionary encoding will be used on shuffling 
string column. This config is effective if it is higher than 1.0. By default, 
this config is 10.0. Note that this config is only used when 
‘spark.comet.columnar.shuffle.enabled’ is true.</p></td>
 <td><p>10.0</p></td>
 </tr>
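
The shuffle and memory rows in this table can be wired up the same way. Here is a hedged Scala sketch (suitable for pasting into spark-shell) that assumes the enablement configs from the earlier example are also set. The shuffle-manager class name is taken from the table itself, and, as the table notes, spark.shuffle.manager must be in place before the application starts, so it is set on the builder rather than changed later.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: Comet native shuffle plus memory overhead settings from the
// table above. spark.shuffle.manager has to be set before the SparkContext
// exists, hence it is passed at session construction time.
val spark = SparkSession.builder()
  .appName("comet-shuffle-sketch") // illustrative application name
  .config("spark.shuffle.manager",
    "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager") // class name from the table
  .config("spark.comet.exec.shuffle.enabled", "true")  // Comet native shuffle (default: false)
  .config("spark.comet.exec.shuffle.codec", "zstd")    // only zstd is supported
  .config("spark.comet.memory.overhead.factor", "0.2") // extra non-heap memory per executor (default: 0.2)
  .getOrCreate()
```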


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
