This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 16691380e Publish built docs triggered by 79516b6c9eab64adb8ce5fcae698b9f03b655d66
16691380e is described below

commit 16691380ec5a54e7352192910e19d64af7f96fd4
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Thu Sep 11 23:29:53 2025 +0000

    Publish built docs triggered by 79516b6c9eab64adb8ce5fcae698b9f03b655d66
---
 _sources/user-guide/latest/configs.md.txt     |  1 -
 _sources/user-guide/latest/expressions.md.txt | 11 +++-
 searchindex.js                                |  2 +-
 user-guide/latest/configs.html                | 88 +++++++++++++--------------
 user-guide/latest/expressions.html            |  9 ++-
 5 files changed, 59 insertions(+), 52 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index 0582c2902..c923c5668 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -48,7 +48,6 @@ Comet provides the following configuration settings.
 | spark.comet.exec.filter.enabled | Whether to enable filter by default. | true |
 | spark.comet.exec.globalLimit.enabled | Whether to enable globalLimit by default. | true |
 | spark.comet.exec.hashJoin.enabled | Whether to enable hashJoin by default. | true |
-| spark.comet.exec.initCap.enabled | Whether to enable initCap by default. | false |
 | spark.comet.exec.localLimit.enabled | Whether to enable localLimit by default. | true |
 | spark.comet.exec.memoryPool | The type of memory pool to be used for Comet native execution. When running Spark in on-heap mode, available pool types are 'greedy', 'fair_spill', 'greedy_task_shared', 'fair_spill_task_shared', 'greedy_global', 'fair_spill_global', and `unbounded`. When running Spark in off-heap mode, available pool types are 'unified' and `fair_unified`. The default pool type is `greedy_task_shared` for on-heap mode and `unified` for off-heap mode. For more information, [...]
 | spark.comet.exec.project.enabled | Whether to enable project by default. | true |
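
To make the settings documented above concrete, here is a minimal Scala sketch of applying a few of the keys when building a Spark session. The keys come from the table above; the app name, the chosen values, and the assumption that the Comet plugin jar is already on the classpath are illustrative only:

    import org.apache.spark.sql.SparkSession

    // Keys are taken from the configs table above; values are illustrative.
    val spark = SparkSession.builder()
      .appName("comet-configs-example")
      .config("spark.plugins", "org.apache.spark.CometPlugin")
      .config("spark.comet.exec.memoryPool", "greedy_task_shared") // on-heap default
      .config("spark.comet.exec.shuffle.compression.codec", "zstd")
      .config("spark.comet.exec.shuffle.compression.zstd.level", "1")
      .getOrCreate()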
diff --git a/_sources/user-guide/latest/expressions.md.txt b/_sources/user-guide/latest/expressions.md.txt
index 2746b02ff..5f7beb42b 100644
--- a/_sources/user-guide/latest/expressions.md.txt
+++ b/_sources/user-guide/latest/expressions.md.txt
@@ -23,8 +23,15 @@ Comet supports the following Spark expressions. Expressions that are marked as S
 natively in Comet and provide the same results as Spark, or will fall back to Spark for cases that would not
 be compatible.
 
-Expressions that are not Spark-compatible are disabled by default and can be enabled by setting
-`spark.comet.expression.allowIncompatible=true`.
+All expressions are enabled by default, but can be disabled by setting
+`spark.comet.expression.EXPRNAME.enabled=false`, where `EXPRNAME` is the expression name as specified in
+the following tables, such as `Length` or `StartsWith`.
+
+Expressions that are not Spark-compatible will fall back to Spark by default and can be enabled by setting
+`spark.comet.expression.EXPRNAME.allowIncompatible=true`.
+
+It is also possible to specify `spark.comet.expression.allowIncompatible=true` to enable all
+incompatible expressions.
 
 ## Conditional Expressions
 
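The per-expression switches added above follow a uniform key pattern. A minimal Scala sketch (the `StartsWith` name comes from the expression tables in expressions.md; `InitCap` is used purely as a hypothetical example of an incompatible expression):

    import org.apache.spark.sql.SparkSession

    // Key pattern as documented above; values are illustrative.
    val spark = SparkSession.builder()
      .appName("comet-expressions-example")
      // Disable one expression that would otherwise be enabled by default.
      .config("spark.comet.expression.StartsWith.enabled", "false")
      // Allow a single expression that is not fully Spark-compatible...
      .config("spark.comet.expression.InitCap.allowIncompatible", "true")
      // ...or opt in to all incompatible expressions at once.
      .config("spark.comet.expression.allowIncompatible", "true")
      .getOrCreate()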
diff --git a/searchindex.js b/searchindex.js
index 868b5bb89..c1d26170d 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], "2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[43, "ansi-mode"]], "ANSI mode": [[17, "ansi-mode"], [30, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[22, null], [35, null], [48, null]],  [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], "2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[43, "ansi-mode"]], "ANSI mode": [[17, "ansi-mode"], [30, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[22, null], [35, null], [48, null]],  [...]
\ No newline at end of file
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index 462a66732..7a0793cae 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -611,175 +611,171 @@ under the License.
 <td><p>Whether to enable hashJoin by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.initCap.enabled</p></td>
-<td><p>Whether to enable initCap by default.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.localLimit.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.localLimit.enabled</p></td>
 <td><p>Whether to enable localLimit by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.memoryPool</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.memoryPool</p></td>
 <td><p>The type of memory pool to be used for Comet native execution. When running Spark in on-heap mode, available pool types are ‘greedy’, ‘fair_spill’, ‘greedy_task_shared’, ‘fair_spill_task_shared’, ‘greedy_global’, ‘fair_spill_global’, and <code class="docutils literal notranslate"><span class="pre">unbounded</span></code>. When running Spark in off-heap mode, available pool types are ‘unified’ and <code class="docutils literal notranslate"><span class="pre">fair_unified</span></cod [...]
 <td><p>default</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.project.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.project.enabled</p></td>
 <td><p>Whether to enable project by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
 <td><p>Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the Comet Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
 <td><p>The codec of Comet native shuffle used to compress shuffle data. lz4, zstd, and snappy are supported. Compression can be disabled by setting spark.shuffle.compress=false.</p></td>
 <td><p>lz4</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.shuffle.compression.zstd.level</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.shuffle.compression.zstd.level</p></td>
 <td><p>The compression level to use when compressing shuffle files with zstd.</p></td>
 <td><p>1</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
 <td><p>Whether to enable Comet native shuffle. Note that this requires setting ‘spark.shuffle.manager’ to ‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. ‘spark.shuffle.manager’ must be set before starting the Spark application and cannot be changed during the application.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.sort.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sort.enabled</p></td>
 <td><p>Whether to enable sort by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
 <td><p>Whether to enable sortMergeJoin by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
 <td><p>Experimental support for Sort Merge Join with filter</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.stddev.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.stddev.enabled</p></td>
 <td><p>Whether to enable stddev by default. stddev is slower than Spark’s implementation.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
 <td><p>Whether to enable takeOrderedAndProject by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.union.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.union.enabled</p></td>
 <td><p>Whether to enable union by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.window.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.window.enabled</p></td>
 <td><p>Whether to enable window by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explain.native.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explain.native.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide a tree representation of the native query plan before execution and again after execution, with metrics.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.explain.verbose.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.explain.verbose.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide a verbose tree representation of the extended information.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.explainFallback.enabled</p></td>
 <td><p>When this setting is enabled, Comet will provide logging explaining the reason(s) why a query stage cannot be executed natively. Set this to false to reduce the amount of logging.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.expression.allowIncompatible</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.allowIncompatible</p></td>
 <td><p>Comet is not currently fully compatible with Spark for all expressions. Set this config to true to allow them anyway. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.logFallbackReasons.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.logFallbackReasons.enabled</p></td>
 <td><p>When this setting is enabled, Comet will log warnings for all fallback reasons.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memory.overhead.factor</p></td>
+<tr class="row-odd"><td><p>spark.comet.memory.overhead.factor</p></td>
 <td><p>Fraction of executor memory to be allocated as additional memory for Comet when running Spark in on-heap mode. For more information, refer to the Comet Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
 <td><p>0.2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.memory.overhead.min</p></td>
+<tr class="row-even"><td><p>spark.comet.memory.overhead.min</p></td>
 <td><p>Minimum amount of additional memory to be allocated per executor process for Comet, in MiB, when running Spark in on-heap mode. For more information, refer to the Comet Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
 <td><p>402653184b</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.memoryOverhead</p></td>
+<tr class="row-odd"><td><p>spark.comet.memoryOverhead</p></td>
 <td><p>The amount of additional memory to be allocated per executor process for Comet, in MiB, when running Spark in on-heap mode. This config is optional. If this is not specified, it will be set to <code class="docutils literal notranslate"><span class="pre">spark.comet.memory.overhead.factor</span></code> * <code class="docutils literal notranslate"><span class="pre">spark.executor.memory</span></code>. For more information, refer to the Comet Tuning Guide (https://datafusion.apache.o [...]
 <td><p></p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.metrics.updateInterval</p></td>
+<tr class="row-even"><td><p>spark.comet.metrics.updateInterval</p></td>
 <td><p>The interval in milliseconds to update metrics. If interval is negative, metrics will be updated upon task completion.</p></td>
 <td><p>3000</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.native.shuffle.partitioning.hash.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.native.shuffle.partitioning.hash.enabled</p></td>
 <td><p>Whether to enable hash partitioning for Comet native shuffle.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.native.shuffle.partitioning.range.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.native.shuffle.partitioning.range.enabled</p></td>
 <td><p>Experimental feature to enable range partitioning for Comet native shuffle. This feature is experimental while we investigate scenarios that don’t partition data correctly.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
+<tr class="row-odd"><td><p>spark.comet.nativeLoadRequired</p></td>
 <td><p>Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fallback to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
 <td><p>Whether to use Java direct byte buffer when reading Parquet.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
 <td><p>In the parallel reader, if the read ranges submitted are skewed in sizes, this option will cause the reader to break up larger read ranges into smaller ranges to reduce the skew. This will result in a slightly larger number of connections opened to the file system but may give improved performance.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
 <td><p>When enabled the parallel reader will try to merge ranges of data that are separated by less than ‘comet.parquet.read.io.mergeRanges.delta’ bytes. Longer continuous reads are faster on cloud storage.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
 <td><p>The delta in bytes between consecutive read ranges below which the parallel reader will try to merge the ranges. The default is 8MB.</p></td>
 <td><p>8388608</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
 <td><p>Whether to enable Comet’s parallel reader for Parquet files. The parallel reader reads ranges of consecutive data in a file in parallel. It is faster for large files and row groups but uses more resources.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
+<tr class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
 <td><p>The maximum number of parallel threads the parallel reader will use in a single executor. For executors configured with a smaller number of cores, use a smaller number.</p></td>
 <td><p>16</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.respectFilterPushdown</p></td>
+<tr class="row-even"><td><p>spark.comet.parquet.respectFilterPushdown</p></td>
 <td><p>Whether to respect Spark’s PARQUET_FILTER_PUSHDOWN_ENABLED config. This needs to be respected when running the Spark SQL test suite but the default setting results in poor performance in Comet when using the new native scans, disabled by default</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.regexp.allowIncompatible</p></td>
+<tr class="row-odd"><td><p>spark.comet.regexp.allowIncompatible</p></td>
 <td><p>Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.allowIncompatible</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.allowIncompatible</p></td>
 <td><p>Some Comet scan implementations are not currently fully compatible with Spark for all datatypes. Set this config to true to allow them anyway. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.enabled</p></td>
 <td><p>Whether to enable native scans. When this is turned on, Spark will use Comet to read supported data sources (currently only Parquet is supported natively). Note that to enable native vectorized execution, both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.enabled</p></td>
 <td><p>Whether to enable pre-fetching feature of CometScan.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
 <td><p>The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups.</p></td>
 <td><p>2</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
+<tr class="row-even"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used on shuffling string column. This config is effective if it is higher than 1.0. Note that this config is only used when <code class="docutils literal notranslate"><span class="pre">spark.comet.exec.shuffle.mode</span></code> is <code class="docutils literal notranslate"><s [...]
 <td><p>10.0</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.shuffle.sizeInBytesMultiplier</p></td>
+<tr class="row-odd"><td><p>spark.comet.shuffle.sizeInBytesMultiplier</p></td>
 <td><p>Comet reports smaller sizes for shuffle due to using Arrow’s columnar memory format and this can result in Spark choosing a different join strategy due to the estimated size of the exchange being smaller. Comet will multiply sizeInBytes by this amount to avoid regressions in join strategy.</p></td>
 <td><p>1.0</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
+<tr class="row-even"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of operators that will be converted to Arrow columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
 <td><p>Range,InMemoryTableScan</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.hadoop.fs.comet.libhdfs.schemes</p></td>
+<tr class="row-odd"><td><p>spark.hadoop.fs.comet.libhdfs.schemes</p></td>
 <td><p>Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. Valid only when built with hdfs feature enabled.</p></td>
 <td><p></p></td>
 </tr>
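
Since ‘spark.shuffle.manager’ must be set before the application starts, here is a minimal Scala sketch of a session that enables Comet native shuffle. The manager class name is taken from the table above; the other values are illustrative:

    import org.apache.spark.sql.SparkSession

    // The shuffle manager cannot be changed after startup, so it is set at
    // session build time; values below are illustrative.
    val spark = SparkSession.builder()
      .appName("comet-native-shuffle-example")
      .config("spark.shuffle.manager",
        "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
      .config("spark.comet.exec.shuffle.enabled", "true")
      .config("spark.comet.exec.shuffle.compression.codec", "lz4") // default
      .getOrCreate()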
diff --git a/user-guide/latest/expressions.html b/user-guide/latest/expressions.html
index 1f4d622d6..ce93a31e7 100644
--- a/user-guide/latest/expressions.html
+++ b/user-guide/latest/expressions.html
@@ -585,8 +585,13 @@ under the License.
 <p>Comet supports the following Spark expressions. Expressions that are marked as Spark-compatible will either run
 natively in Comet and provide the same results as Spark, or will fall back to Spark for cases that would not
 be compatible.</p>
-<p>Expressions that are not Spark-compatible are disabled by default and can be enabled by setting
-<code class="docutils literal notranslate"><span class="pre">spark.comet.expression.allowIncompatible=true</span></code>.</p>
+<p>All expressions are enabled by default, but can be disabled by setting
+<code class="docutils literal notranslate"><span class="pre">spark.comet.expression.EXPRNAME.enabled=false</span></code>, where <code class="docutils literal notranslate"><span class="pre">EXPRNAME</span></code> is the expression name as specified in
+the following tables, such as <code class="docutils literal notranslate"><span class="pre">Length</span></code> or <code class="docutils literal notranslate"><span class="pre">StartsWith</span></code>.</p>
+<p>Expressions that are not Spark-compatible will fall back to Spark by default and can be enabled by setting
+<code class="docutils literal notranslate"><span class="pre">spark.comet.expression.EXPRNAME.allowIncompatible=true</span></code>.</p>
+<p>It is also possible to specify <code class="docutils literal notranslate"><span class="pre">spark.comet.expression.allowIncompatible=true</span></code> to enable all
+incompatible expressions.</p>
 <section id="conditional-expressions">
 <h2>Conditional Expressions<a class="headerlink" href="#conditional-expressions" title="Link to this heading">¶</a></h2>
 <table class="table">


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@datafusion.apache.org
For additional commands, e-mail: commits-h...@datafusion.apache.org
