This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a25e7efd8 Publish built docs triggered by f35d80c39db2a5173ee923b77eb8840feab472f7
a25e7efd8 is described below
commit a25e7efd8f3ceab7261835a73a7c2ecf00bfd320
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Sat Nov 15 18:25:46 2025 +0000
Publish built docs triggered by f35d80c39db2a5173ee923b77eb8840feab472f7
---
_sources/user-guide/latest/compatibility.md.txt | 8 +++---
_sources/user-guide/latest/configs.md.txt | 35 ++++++++++++-------------
_sources/user-guide/latest/expressions.md.txt | 3 ---
_sources/user-guide/latest/kubernetes.md.txt | 1 -
searchindex.js | 2 +-
user-guide/latest/compatibility.html | 7 +++--
user-guide/latest/configs.html | 20 ++++++--------
user-guide/latest/expressions.html | 2 --
user-guide/latest/kubernetes.html | 1 -
9 files changed, 32 insertions(+), 47 deletions(-)
diff --git a/_sources/user-guide/latest/compatibility.md.txt
b/_sources/user-guide/latest/compatibility.md.txt
index 73d27d10a..17a951578 100644
--- a/_sources/user-guide/latest/compatibility.md.txt
+++ b/_sources/user-guide/latest/compatibility.md.txt
@@ -32,8 +32,9 @@ Comet has the following limitations when reading Parquet
files:
## ANSI Mode
-Comet will fall back to Spark for the following expressions when ANSI mode is
enabled, unless
-`spark.comet.expression.allowIncompatible=true`.
+Comet will fall back to Spark for the following expressions when ANSI mode is
enabled. These expressions can be enabled by setting
+`spark.comet.expression.EXPRNAME.allowIncompatible=true`, where `EXPRNAME` is
the Spark expression class name. See
+the [Comet Supported Expressions Guide](expressions.md) for more information
on this configuration setting.
- Average
- Sum
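For illustration, a minimal sketch (Scala, assuming a running spark-shell
session) of the per-expression setting described above, taking `Average` and
`Sum` from the list as the expression class names:

    // Opt in to individual ANSI-incompatible expressions; each key follows
    // spark.comet.expression.EXPRNAME.allowIncompatible, where EXPRNAME is
    // the Spark expression class name.
    spark.conf.set("spark.comet.expression.Average.allowIncompatible", "true")
    spark.conf.set("spark.comet.expression.Sum.allowIncompatible", "true")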
@@ -58,9 +59,6 @@ Expressions that are not 100% Spark-compatible will fall back
to Spark by defaul
`spark.comet.expression.EXPRNAME.allowIncompatible=true`, where `EXPRNAME` is
the Spark expression class name. See
the [Comet Supported Expressions Guide](expressions.md) for more information
on this configuration setting.
-It is also possible to specify `spark.comet.expression.allowIncompatible=true`
to enable all
-incompatible expressions.
-
## Regular Expressions
Comet uses the Rust regexp crate for evaluating regular expressions, and this
has different behavior from Java's
diff --git a/_sources/user-guide/latest/configs.md.txt
b/_sources/user-guide/latest/configs.md.txt
index ea8589e94..7e3d2a79f 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -58,21 +58,20 @@ Comet provides the following configuration settings.
<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
<!--BEGIN:CONFIG_TABLE[exec]-->
-| Config | Description
[...]
-| ------------------------------------------ |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[...]
-| `spark.comet.caseConversion.enabled` | Java uses locale-specific rules
when converting strings to upper or lower case and Rust does not, so we disable
upper and lower by default.
[...]
-| `spark.comet.debug.enabled` | Whether to enable debug mode
for Comet. When enabled, Comet will do additional checks for debugging purpose.
For example, validating array when importing arrays from JVM at native side.
Note that these checks may be expensive in performance and should only be
enabled for debugging purpose.
[...]
-| `spark.comet.dppFallback.enabled` | Whether to fall back to Spark
for queries that use DPP.
[...]
-| `spark.comet.enabled` | Whether to enable Comet
extension for Spark. When this is turned on, Spark will use Comet to read
Parquet data source. Note that to enable native vectorized execution, both this
config and `spark.comet.exec.enabled` need to be enabled. Can be overridden by
environment variable `ENABLE_COMET`.
[...]
-| `spark.comet.exceptionOnDatetimeRebase` | Whether to throw exception when
seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar.
Since Spark 3, dates/timestamps were written according to the Proleptic
Gregorian calendar. When this is true, Comet will throw exceptions when seeing
these dates/timestamps that were written by Spark version before 3.0. If this
is false, these dates/timestamps will be read as if they were written to the
Proleptic Gregorian calendar [...]
-| `spark.comet.exec.enabled` | Whether to enable Comet native
vectorized execution for Spark. This controls whether Spark should convert
operators into their Comet counterparts and execute them in native space. Note:
each operator is associated with a separate config in the format of
`spark.comet.exec.<operator_name>.enabled` at the moment, and both the config
and this need to be turned on, in order for the operator to be executed in
native. [...]
-| `spark.comet.exec.replaceSortMergeJoin` | Experimental feature to force
Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance.
This feature is not stable yet. For more information, refer to the [Comet
Tuning Guide](https://datafusion.apache.org/comet/user-guide/tuning.html).
[...]
-| `spark.comet.exec.strictFloatingPoint` | When enabled, fall back to
Spark for floating-point operations that may differ from Spark, such as when
comparing or sorting -0.0 and 0.0. For more information, refer to the [Comet
Compatibility
Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html).
[...]
-| `spark.comet.expression.allowIncompatible` | Comet is not currently fully
compatible with Spark for all expressions. Set this config to true to allow
them anyway. For more information, refer to the [Comet Compatibility
Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html).
[...]
-| `spark.comet.maxTempDirectorySize` | The maximum amount of data (in
bytes) stored inside the temporary directories.
[...]
-| `spark.comet.metrics.updateInterval` | The interval in milliseconds to
update metrics. If interval is negative, metrics will be updated upon task
completion.
[...]
-| `spark.comet.nativeLoadRequired` | Whether to require Comet native
library to load successfully when Comet is enabled. If not, Comet will silently
fallback to Spark when it fails to load the native lib. Otherwise, an error
will be thrown and the Spark job will be aborted.
[...]
-| `spark.comet.regexp.allowIncompatible` | Comet is not currently fully
compatible with Spark for all regular expressions. Set this config to true to
allow them anyway. For more information, refer to the [Comet Compatibility
Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html).
[...]
+| Config | Description
[...]
+| --------------------------------------- |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[...]
+| `spark.comet.caseConversion.enabled` | Java uses locale-specific rules
when converting strings to upper or lower case and Rust does not, so we disable
upper and lower by default.
[...]
+| `spark.comet.debug.enabled` | Whether to enable debug mode for
Comet. When enabled, Comet will perform additional checks for debugging
purposes, for example validating arrays when importing them from the JVM on
the native side. Note that these checks may be expensive and should only be
enabled for debugging purposes.
[...]
+| `spark.comet.dppFallback.enabled` | Whether to fall back to Spark for
queries that use DPP.
[...]
+| `spark.comet.enabled` | Whether to enable Comet extension
for Spark. When this is turned on, Spark will use Comet to read Parquet data
source. Note that to enable native vectorized execution, both this config and
`spark.comet.exec.enabled` need to be enabled. It can be overridden by the
environment variable `ENABLE_COMET`.
[...]
+| `spark.comet.exceptionOnDatetimeRebase` | Whether to throw an exception
when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian)
calendar. Since Spark 3, dates/timestamps have been written according to the
Proleptic Gregorian calendar. When this is true, Comet will throw exceptions
when seeing dates/timestamps that were written by Spark versions before 3.0.
If this is false, these dates/timestamps will be read as if they were written
to the Proleptic Gregorian calendar and [...]
+| `spark.comet.exec.enabled` | Whether to enable Comet native
vectorized execution for Spark. This controls whether Spark should convert
operators into their Comet counterparts and execute them in native space. Note:
each operator is associated with a separate config in the format of
`spark.comet.exec.<operator_name>.enabled` at the moment, and both the config
and this one must be enabled for the operator to be executed
natively. [...]
+| `spark.comet.exec.replaceSortMergeJoin` | Experimental feature to force
Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance.
This feature is not stable yet. For more information, refer to the [Comet
Tuning Guide](https://datafusion.apache.org/comet/user-guide/tuning.html).
[...]
+| `spark.comet.exec.strictFloatingPoint` | When enabled, fall back to Spark
for floating-point operations that may differ from Spark, such as when
comparing or sorting -0.0 and 0.0. For more information, refer to the [Comet
Compatibility
Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html).
[...]
+| `spark.comet.maxTempDirectorySize` | The maximum amount of data (in
bytes) stored inside the temporary directories.
[...]
+| `spark.comet.metrics.updateInterval` | The interval in milliseconds to
update metrics. If interval is negative, metrics will be updated upon task
completion.
[...]
+| `spark.comet.nativeLoadRequired` | Whether to require Comet native
library to load successfully when Comet is enabled. If not, Comet will silently
fall back to Spark when it fails to load the native lib. Otherwise, an error
will be thrown and the Spark job will be aborted.
[...]
+| `spark.comet.regexp.allowIncompatible` | Comet is not currently fully
compatible with Spark for all regular expressions. Set this config to true to
allow them anyway. For more information, refer to the [Comet Compatibility
Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html).
[...]
<!--END:CONFIG_TABLE-->
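To illustrate how the core settings in this table combine, a minimal sketch
(Scala, assuming a running SparkSession named `spark`; the operator name
`project` is a hypothetical placeholder, not a confirmed operator key):

    // Native vectorized execution requires both of these to be enabled.
    spark.conf.set("spark.comet.enabled", "true")      // also via ENABLE_COMET
    spark.conf.set("spark.comet.exec.enabled", "true")
    // Each operator additionally has its own config of the form
    // spark.comet.exec.<operator_name>.enabled; "project" is a hypothetical
    // placeholder for an operator name.
    spark.conf.set("spark.comet.exec.project.enabled", "true")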
@@ -89,7 +88,7 @@ These settings can be used to determine which parts of the
plan are accelerated
| `spark.comet.explain.native.enabled` | When this setting is enabled,
Comet will provide a tree representation of the native query plan before
execution and again after execution, with metrics.
| false |
| `spark.comet.explain.rules` | When this setting is enabled,
Comet will log all plan transformations performed in physical optimizer rules.
Default: false
|
false |
| `spark.comet.explainFallback.enabled` | When this setting is enabled,
Comet will provide logging explaining the reason(s) why a query stage cannot be
executed natively. Set this to false to reduce the amount of logging.
|
false |
-| `spark.comet.logFallbackReasons.enabled` | When this setting is enabled,
Comet will log warnings for all fallback reasons. Can be overridden by
environment variable `ENABLE_COMET_LOG_FALLBACK_REASONS`.
| false |
+| `spark.comet.logFallbackReasons.enabled` | When this setting is enabled,
Comet will log warnings for all fallback reasons. It can be overridden by the
environment variable `ENABLE_COMET_LOG_FALLBACK_REASONS`.
| false |
<!--END:CONFIG_TABLE-->
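A minimal sketch (Scala, assuming a running SparkSession named `spark`) of
turning on the fallback diagnostics described in this table:

    // Log the reason(s) why a query stage cannot be executed natively.
    spark.conf.set("spark.comet.explainFallback.enabled", "true")
    // Log warnings for all fallback reasons; this can instead be controlled
    // by the ENABLE_COMET_LOG_FALLBACK_REASONS environment variable.
    spark.conf.set("spark.comet.logFallbackReasons.enabled", "true")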
@@ -139,12 +138,12 @@ These settings can be used to determine which parts of
the plan are accelerated
| `spark.comet.convert.csv.enabled` | When enabled, data
from Spark (non-native) CSV v1 and v2 scans will be converted to Arrow format.
This is an experimental feature and has known issues with non-UTC timezones.
| false |
| `spark.comet.convert.json.enabled` | When enabled, data
from Spark (non-native) JSON v1 and v2 scans will be converted to Arrow format.
This is an experimental feature and has known issues with non-UTC timezones.
| false |
| `spark.comet.convert.parquet.enabled` | When enabled, data
from Spark (non-native) Parquet v1 and v2 scans will be converted to Arrow
format. This is an experimental feature and has known issues with non-UTC
timezones.
| false |
-| `spark.comet.exec.onHeap.enabled` | Whether to allow Comet
to run in on-heap mode. Required for running Spark SQL tests. Can be overridden
by environment variable `ENABLE_COMET_ONHEAP`.
| false |
+| `spark.comet.exec.onHeap.enabled` | Whether to allow Comet
to run in on-heap mode. Required for running Spark SQL tests. It can be
overridden by the environment variable `ENABLE_COMET_ONHEAP`.
| false |
| `spark.comet.exec.onHeap.memoryPool` | The type of memory
pool to be used for Comet native execution when running Spark in on-heap mode.
Available pool types are `greedy`, `fair_spill`, `greedy_task_shared`,
`fair_spill_task_shared`, `greedy_global`, `fair_spill_global`, and
`unbounded`. | greedy_task_shared
|
| `spark.comet.memoryOverhead` | The amount of
additional memory to be allocated per executor process for Comet, in MiB, when
running Spark in on-heap mode.
| 1024 MiB |
| `spark.comet.sparkToColumnar.enabled` | Whether to enable
Spark to Arrow columnar conversion. When this is turned on, Comet will convert
operators in `spark.comet.sparkToColumnar.supportedOperatorList` into Arrow
columnar format before processing. This is an experimental feature and has
known issues with non-UTC timezones. | false |
| `spark.comet.sparkToColumnar.supportedOperatorList` | A comma-separated list
of operators that will be converted to Arrow columnar format when
`spark.comet.sparkToColumnar.enabled` is true.
| Range,InMemoryTableScan,RDDScan |
-| `spark.comet.testing.strict` | Experimental option to
enable strict testing, which will fail tests that could be more comprehensive,
such as checking for a specific fallback reason. Can be overridden by
environment variable `ENABLE_COMET_STRICT_TESTING`.
| false |
+| `spark.comet.testing.strict` | Experimental option to
enable strict testing, which will fail tests that could be more comprehensive,
such as checking for a specific fallback reason. It can be overridden by the
environment variable `ENABLE_COMET_STRICT_TESTING`.
| false |
<!--END:CONFIG_TABLE-->
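A sketch of the on-heap settings above, supplied at session startup (Scala;
the values shown are the documented defaults where one is listed):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      // Allow Comet to run in on-heap mode (also via ENABLE_COMET_ONHEAP).
      .config("spark.comet.exec.onHeap.enabled", "true")
      // Memory pool type for native execution in on-heap mode.
      .config("spark.comet.exec.onHeap.memoryPool", "greedy_task_shared")
      // Extra per-executor memory for Comet in MiB (documented default 1024).
      .config("spark.comet.memoryOverhead", "1024")
      .getOrCreate()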
diff --git a/_sources/user-guide/latest/expressions.md.txt
b/_sources/user-guide/latest/expressions.md.txt
index f56fe1975..d58fc8a90 100644
--- a/_sources/user-guide/latest/expressions.md.txt
+++ b/_sources/user-guide/latest/expressions.md.txt
@@ -31,9 +31,6 @@ of expressions that can be disabled.
Expressions that are not Spark-compatible will fall back to Spark by default
and can be enabled by setting
`spark.comet.expression.EXPRNAME.allowIncompatible=true`.
-It is also possible to specify `spark.comet.expression.allowIncompatible=true`
to enable all
-incompatible expressions.
-
## Conditional Expressions
| Expression | SQL | Spark-Compatible?
|
diff --git a/_sources/user-guide/latest/kubernetes.md.txt
b/_sources/user-guide/latest/kubernetes.md.txt
index 4aa5a88ad..2fb037d63 100644
--- a/_sources/user-guide/latest/kubernetes.md.txt
+++ b/_sources/user-guide/latest/kubernetes.md.txt
@@ -79,7 +79,6 @@ spec:
"spark.plugins": "org.apache.spark.CometPlugin"
"spark.comet.enabled": "true"
"spark.comet.exec.enabled": "true"
- "spark.comet.expression.allowIncompatible": "true"
"spark.comet.exec.shuffle.enabled": "true"
"spark.comet.exec.shuffle.mode": "auto"
"spark.shuffle.manager":
"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager"
diff --git a/searchindex.js b/searchindex.js
index 9f59b19ee..309dcdb44 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[18, "install-comet"]],
"2. Clone Spark and Apply Diff": [[18, "clone-spark-and-apply-diff"]], "3. Run
Spark SQL Tests": [[18, "run-spark-sql-tests"]], "ANSI Mode": [[21,
"ansi-mode"], [34, "ansi-mode"], [74, "ansi-mode"]], "ANSI mode": [[47,
"ansi-mode"], [60, "ansi-mode"]], "API Differences Between Spark Versions":
[[3, "api-differences-between-spark-versions"]], "ASF Links": [[2, null], [2,
null]], "Accelerating Apache Iceberg Parque [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[18, "install-comet"]],
"2. Clone Spark and Apply Diff": [[18, "clone-spark-and-apply-diff"]], "3. Run
Spark SQL Tests": [[18, "run-spark-sql-tests"]], "ANSI Mode": [[21,
"ansi-mode"], [34, "ansi-mode"], [74, "ansi-mode"]], "ANSI mode": [[47,
"ansi-mode"], [60, "ansi-mode"]], "API Differences Between Spark Versions":
[[3, "api-differences-between-spark-versions"]], "ASF Links": [[2, null], [2,
null]], "Accelerating Apache Iceberg Parque [...]
\ No newline at end of file
diff --git a/user-guide/latest/compatibility.html
b/user-guide/latest/compatibility.html
index f0f61aceb..8d4d07d1f 100644
--- a/user-guide/latest/compatibility.html
+++ b/user-guide/latest/compatibility.html
@@ -472,8 +472,9 @@ under the License.
</section>
<section id="ansi-mode">
<h2>ANSI Mode<a class="headerlink" href="#ansi-mode" title="Link to this
heading">#</a></h2>
-<p>Comet will fall back to Spark for the following expressions when ANSI mode
is enabled, unless
-<code class="docutils literal notranslate"><span
class="pre">spark.comet.expression.allowIncompatible=true</span></code>.</p>
+<p>Comet will fall back to Spark for the following expressions when ANSI mode
is enabled. These expressions can be enabled by setting
+<code class="docutils literal notranslate"><span
class="pre">spark.comet.expression.EXPRNAME.allowIncompatible=true</span></code>,
where <code class="docutils literal notranslate"><span
class="pre">EXPRNAME</span></code> is the Spark expression class name. See
+the <a class="reference internal" href="expressions.html"><span class="std
std-doc">Comet Supported Expressions Guide</span></a> for more information on
this configuration setting.</p>
<ul class="simple">
<li><p>Average</p></li>
<li><p>Sum</p></li>
@@ -497,8 +498,6 @@ will make relevant operations fall back to Spark.</p>
<p>Expressions that are not 100% Spark-compatible will fall back to Spark by
default and can be enabled by setting
<code class="docutils literal notranslate"><span
class="pre">spark.comet.expression.EXPRNAME.allowIncompatible=true</span></code>,
where <code class="docutils literal notranslate"><span
class="pre">EXPRNAME</span></code> is the Spark expression class name. See
the <a class="reference internal" href="expressions.html"><span class="std
std-doc">Comet Supported Expressions Guide</span></a> for more information on
this configuration setting.</p>
-<p>It is also possible to specify <code class="docutils literal
notranslate"><span
class="pre">spark.comet.expression.allowIncompatible=true</span></code> to
enable all
-incompatible expressions.</p>
</section>
<section id="regular-expressions">
<h2>Regular Expressions<a class="headerlink" href="#regular-expressions"
title="Link to this heading">#</a></h2>
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index 524efe20b..aaef1fc7a 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -568,7 +568,7 @@ under the License.
<td><p>true</p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.enabled</span></code></p></td>
-<td><p>Whether to enable Comet extension for Spark. When this is turned on,
Spark will use Comet to read Parquet data source. Note that to enable native
vectorized execution, both this config and <code class="docutils literal
notranslate"><span class="pre">spark.comet.exec.enabled</span></code> need to
be enabled. Can be overridden by environment variable <code class="docutils
literal notranslate"><span class="pre">ENABLE_COMET</span></code>.</p></td>
+<td><p>Whether to enable Comet extension for Spark. When this is turned on,
Spark will use Comet to read Parquet data source. Note that to enable native
vectorized execution, both this config and <code class="docutils literal
notranslate"><span class="pre">spark.comet.exec.enabled</span></code> need to
be enabled. It can be overridden by the environment variable <code
class="docutils literal notranslate"><span
class="pre">ENABLE_COMET</span></code>.</p></td>
<td><p>true</p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.exceptionOnDatetimeRebase</span></code></p></td>
@@ -587,23 +587,19 @@ under the License.
<td><p>When enabled, fall back to Spark for floating-point operations that may
differ from Spark, such as when comparing or sorting -0.0 and 0.0. For more
information, refer to the <a class="reference external"
href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet
Compatibility Guide</a>.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.expression.allowIncompatible</span></code></p></td>
-<td><p>Comet is not currently fully compatible with Spark for all expressions.
Set this config to true to allow them anyway. For more information, refer to
the <a class="reference external"
href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet
Compatibility Guide</a>.</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.maxTempDirectorySize</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.maxTempDirectorySize</span></code></p></td>
<td><p>The maximum amount of data (in bytes) stored inside the temporary
directories.</p></td>
<td><p>107374182400b</p></td>
</tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.metrics.updateInterval</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.metrics.updateInterval</span></code></p></td>
<td><p>The interval in milliseconds to update metrics. If interval is
negative, metrics will be updated upon task completion.</p></td>
<td><p>3000</p></td>
</tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.nativeLoadRequired</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.nativeLoadRequired</span></code></p></td>
<td><p>Whether to require Comet native library to load successfully when Comet
is enabled. If not, Comet will silently fallback to Spark when it fails to load
the native lib. Otherwise, an error will be thrown and the Spark job will be
aborted.</p></td>
<td><p>false</p></td>
</tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.regexp.allowIncompatible</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.regexp.allowIncompatible</span></code></p></td>
<td><p>Comet is not currently fully compatible with Spark for all regular
expressions. Set this config to true to allow them anyway. For more
information, refer to the <a class="reference external"
href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet
Compatibility Guide</a>.</p></td>
<td><p>false</p></td>
</tr>
@@ -642,7 +638,7 @@ under the License.
<td><p>false</p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.logFallbackReasons.enabled</span></code></p></td>
-<td><p>When this setting is enabled, Comet will log warnings for all fallback
reasons. Can be overridden by environment variable <code class="docutils
literal notranslate"><span
class="pre">ENABLE_COMET_LOG_FALLBACK_REASONS</span></code>.</p></td>
+<td><p>When this setting is enabled, Comet will log warnings for all fallback
reasons. It can be overridden by the environment variable <code class="docutils
literal notranslate"><span
class="pre">ENABLE_COMET_LOG_FALLBACK_REASONS</span></code>.</p></td>
<td><p>false</p></td>
</tr>
</tbody>
@@ -773,7 +769,7 @@ under the License.
<td><p>false</p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.exec.onHeap.enabled</span></code></p></td>
-<td><p>Whether to allow Comet to run in on-heap mode. Required for running
Spark SQL tests. Can be overridden by environment variable <code
class="docutils literal notranslate"><span
class="pre">ENABLE_COMET_ONHEAP</span></code>.</p></td>
+<td><p>Whether to allow Comet to run in on-heap mode. Required for running
Spark SQL tests. It can be overridden by the environment variable <code
class="docutils literal notranslate"><span
class="pre">ENABLE_COMET_ONHEAP</span></code>.</p></td>
<td><p>false</p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.exec.onHeap.memoryPool</span></code></p></td>
@@ -793,7 +789,7 @@ under the License.
<td><p>Range,InMemoryTableScan,RDDScan</p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span
class="pre">spark.comet.testing.strict</span></code></p></td>
-<td><p>Experimental option to enable strict testing, which will fail tests
that could be more comprehensive, such as checking for a specific fallback
reason. Can be overridden by environment variable <code class="docutils literal
notranslate"><span
class="pre">ENABLE_COMET_STRICT_TESTING</span></code>.</p></td>
+<td><p>Experimental option to enable strict testing, which will fail tests
that could be more comprehensive, such as checking for a specific fallback
reason. It can be overridden by the environment variable <code class="docutils
literal notranslate"><span
class="pre">ENABLE_COMET_STRICT_TESTING</span></code>.</p></td>
<td><p>false</p></td>
</tr>
</tbody>
diff --git a/user-guide/latest/expressions.html
b/user-guide/latest/expressions.html
index d34d471ab..ebf15d910 100644
--- a/user-guide/latest/expressions.html
+++ b/user-guide/latest/expressions.html
@@ -469,8 +469,6 @@ the following tables, such as <code class="docutils literal
notranslate"><span c
of expressions that can be disabled.</p>
<p>Expressions that are not Spark-compatible will fall back to Spark by
default and can be enabled by setting
<code class="docutils literal notranslate"><span
class="pre">spark.comet.expression.EXPRNAME.allowIncompatible=true</span></code>.</p>
-<p>It is also possible to specify <code class="docutils literal
notranslate"><span
class="pre">spark.comet.expression.allowIncompatible=true</span></code> to
enable all
-incompatible expressions.</p>
<section id="conditional-expressions">
<h2>Conditional Expressions<a class="headerlink"
href="#conditional-expressions" title="Link to this heading">#</a></h2>
<div class="pst-scrollable-table-container"><table class="table">
diff --git a/user-guide/latest/kubernetes.html
b/user-guide/latest/kubernetes.html
index 63f030f8f..29ca538ba 100644
--- a/user-guide/latest/kubernetes.html
+++ b/user-guide/latest/kubernetes.html
@@ -513,7 +513,6 @@ spec:
<span class="w"> </span><span
class="s2">"spark.plugins"</span>:<span class="w"> </span><span
class="s2">"org.apache.spark.CometPlugin"</span>
<span class="w"> </span><span
class="s2">"spark.comet.enabled"</span>:<span class="w"> </span><span
class="s2">"true"</span>
<span class="w"> </span><span
class="s2">"spark.comet.exec.enabled"</span>:<span class="w">
</span><span class="s2">"true"</span>
-<span class="w"> </span><span
class="s2">"spark.comet.expression.allowIncompatible"</span>:<span
class="w"> </span><span class="s2">"true"</span>
<span class="w"> </span><span
class="s2">"spark.comet.exec.shuffle.enabled"</span>:<span class="w">
</span><span class="s2">"true"</span>
<span class="w"> </span><span
class="s2">"spark.comet.exec.shuffle.mode"</span>:<span class="w">
</span><span class="s2">"auto"</span>
<span class="w"> </span><span
class="s2">"spark.shuffle.manager"</span>:<span class="w">
</span><span
class="s2">"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager"</span>
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]