This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new c6eba3db Publish built docs triggered by cc74b7f51133d0b8b78a4d5b9f30adec07756437
c6eba3db is described below

commit c6eba3dbd44ef27db582629b0758d346154c179e
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Mon May 13 21:16:37 2024 +0000

    Publish built docs triggered by cc74b7f51133d0b8b78a4d5b9f30adec07756437
---
 _sources/user-guide/configs.md.txt |  1 -
 searchindex.js                     |  2 +-
 user-guide/configs.html            | 30 +++++++++++++-----------------
 3 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/_sources/user-guide/configs.md.txt b/_sources/user-guide/configs.md.txt
index 22a7a098..d75059a9 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -36,7 +36,6 @@ Comet provides the following configuration settings.
 | spark.comet.exceptionOnDatetimeRebase | Whether to throw an exception when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps have been written according to the Proleptic Gregorian calendar. When this is true, Comet will throw exceptions when seeing dates/timestamps that were written by Spark versions before 3.0. If this is false, these dates/timestamps will be read as if they were written to the Proleptic Gregorian calendar and w [...]
 | spark.comet.exec.all.enabled | Whether to enable all Comet operators. By default, this config is false. Note that this config takes precedence over the individual 'spark.comet.exec.<operator_name>.enabled' configs: if it is enabled, the individual configs are ignored. | false |
 | spark.comet.exec.all.expr.enabled | Whether to enable all Comet expressions. By default, this config is false. Note that this config takes precedence over the individual 'spark.comet.exec.<expr_name>.enabled' configs: if it is enabled, the individual configs are ignored. | false |
-| spark.comet.exec.broadcast.enabled | Whether to force-enable broadcasting for Comet native operators. By default, this config is false. The Comet broadcast feature is enabled automatically by the Comet extension, but unit tests need to force-enable it for invalid cases, so this config is only used in unit tests. | false |
 | spark.comet.exec.enabled | Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them natively. Note: each operator is currently associated with a separate config of the form 'spark.comet.exec.<operator_name>.enabled', and both that config and this one must be turned on for the operator to be executed natively. By default, this config is false. | false |
 | spark.comet.exec.memoryFraction | The fraction of memory from Comet memory overhead that the native memory manager can use for execution. The purpose of this config is to set aside memory for untracked data structures, as well as imprecise size estimation during memory acquisition. Default value is 0.7. | 0.7 |
 | spark.comet.exec.shuffle.codec | The codec used by Comet native shuffle to compress shuffle data. Only zstd is supported. | zstd |
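
As context for the table above: the sketch below shows one way these settings might be applied when building a SparkSession. It is not part of this commit; the config keys are copied from the documentation, while the object name, app name, and chosen values are illustrative assumptions (and the Comet jars are assumed to be on the classpath).

import org.apache.spark.sql.SparkSession

object CometExecConfigSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("comet-exec-config-sketch") // hypothetical app name
      // Turn on Comet native vectorized execution. Per the docs, an operator
      // runs natively only if its own 'spark.comet.exec.<operator_name>.enabled'
      // config is also on...
      .config("spark.comet.exec.enabled", "true")
      // ...or use this config, which takes precedence over the individual
      // operator configs and enables them all at once.
      .config("spark.comet.exec.all.enabled", "true")
      // Fraction of the Comet memory overhead that the native memory manager
      // may use for execution (documented default: 0.7).
      .config("spark.comet.exec.memoryFraction", "0.7")
      // Only zstd is supported for compressing native shuffle data.
      .config("spark.comet.exec.shuffle.codec", "zstd")
      .getOrCreate()

    spark.range(100).count() // trivial action to exercise the session
    spark.stop()
  }
}
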
diff --git a/searchindex.js b/searchindex.js
index 383e9c0b..bc580ade 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"ANSI mode": [[5, "ansi-mode"], [6, 
"ansi-mode"]], "ASF Links": [[4, null]], "Additional Info": [[1, 
"additional-info"]], "After your debugging is done,": [[1, 
"after-your-debugging-is-done"]], "Apache DataFusion Comet": [[4, 
"apache-datafusion-comet"]], "Architecture": [[13, "architecture"]], "Asking 
for Help": [[0, "asking-for-help"]], "Benchmark": [[2, "benchmark"]], "Build & 
Test": [[2, "build-test"]], "Building From Source": [[11, "building-from-source 
[...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"ANSI mode": [[5, "ansi-mode"], [6, 
"ansi-mode"]], "ASF Links": [[4, null]], "Additional Info": [[1, 
"additional-info"]], "After your debugging is done,": [[1, 
"after-your-debugging-is-done"]], "Apache DataFusion Comet": [[4, 
"apache-datafusion-comet"]], "Architecture": [[13, "architecture"]], "Asking 
for Help": [[0, "asking-for-help"]], "Benchmark": [[2, "benchmark"]], "Build & 
Test": [[2, "build-test"]], "Building From Source": [[11, "building-from-source 
[...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 795a315c..3ee2d74f 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -345,59 +345,55 @@ under the License.
 <td><p>Whether to enable all Comet expressions. By default, this config is false. Note that this config takes precedence over the individual ‘spark.comet.exec.&lt;expr_name&gt;.enabled’ configs: if it is enabled, the individual configs are ignored.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.broadcast.enabled</p></td>
-<td><p>Whether to force-enable broadcasting for Comet native operators. By default, this config is false. The Comet broadcast feature is enabled automatically by the Comet extension, but unit tests need to force-enable it for invalid cases, so this config is only used in unit tests.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-even"><td><p>spark.comet.exec.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.enabled</p></td>
 <td><p>Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them natively. Note: each operator is currently associated with a separate config of the form ‘spark.comet.exec.&lt;operator_name&gt;.enabled’, and both that config and this one must be turned on for the operator to be executed natively. By default, this config is false.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.memoryFraction</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.memoryFraction</p></td>
 <td><p>The fraction of memory from Comet memory overhead that the native memory manager can use for execution. The purpose of this config is to set aside memory for untracked data structures, as well as imprecise size estimation during memory acquisition. Default value is 0.7.</p></td>
 <td><p>0.7</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.shuffle.codec</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.codec</p></td>
 <td><p>The codec used by Comet native shuffle to compress shuffle data. Only zstd is supported.</p></td>
 <td><p>zstd</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
 <td><p>Whether to enable Comet native shuffle. By default, this config is false. Note that this requires setting ‘spark.shuffle.manager’ to ‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. ‘spark.shuffle.manager’ must be set before starting the Spark application and cannot be changed while the application is running.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
 <td><p>Fraction of executor memory to be allocated as additional non-heap memory per executor process for Comet. Default value is 0.2.</p></td>
 <td><p>0.2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
 <td><p>Minimum amount of additional memory to be allocated per executor process for Comet, in MiB.</p></td>
 <td><p>402653184b</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
 <td><p>Whether to require the Comet native library to load successfully when Comet is enabled. If not, Comet will silently fall back to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
 <td><p>Whether to use a Java direct byte buffer when reading Parquet. By default, this is false.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.rowToColumnar.supportedOperatorList</p></td>
+<tr class="row-odd"><td><p>spark.comet.rowToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of row-based operators that will be converted to columnar format when ‘spark.comet.rowToColumnar.enabled’ is true.</p></td>
 <td><p>Range,InMemoryTableScan</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
 <td><p>Whether to enable Comet scan. When this is turned on, Spark will use Comet to read Parquet data sources. Note that to enable native vectorized execution, both this config and ‘spark.comet.exec.enabled’ need to be enabled. By default, this config is true.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
 <td><p>Whether to enable the pre-fetching feature of CometScan. By default, it is disabled.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. By default it is 2. Note that more pre-fetching threads mean higher memory requirements for storing pre-fetched row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column used to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used when shuffling the string column. This config is effective only if it is higher than 1.0. By default, this config is 10.0. Note that this config is only used when ‘spark.comet.columnar.shuffle.enabled’ is true.</p></td>
 <td><p>10.0</p></td>
 </tr>
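
As a companion to the shuffle-related rows above, the sketch below shows one way Comet native shuffle might be enabled. Again, it is not part of this commit; the config keys and the CometShuffleManager class name come from the table, while the object name, app name, and values are illustrative assumptions. Since ‘spark.shuffle.manager’ cannot be changed once the application has started, it is set on the builder before the session is created.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object CometShuffleConfigSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("comet-shuffle-config-sketch") // hypothetical app name
      // Must be set before the Spark application starts and cannot be
      // changed while it is running.
      .config("spark.shuffle.manager",
        "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
      .config("spark.comet.exec.shuffle.enabled", "true")
      // Per the table above, both scan and exec need to be enabled for
      // native vectorized execution.
      .config("spark.comet.scan.enabled", "true")
      .config("spark.comet.exec.enabled", "true")
      .getOrCreate()

    // A shuffle-inducing aggregation to exercise the native shuffle path.
    spark.range(1000).groupBy(col("id") % 10).count().show()

    spark.stop()
  }
}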

