This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 6d9b51892 Publish built docs triggered by 
f8eb57a4213165060b5b9a1620dbc4782ae56d79
6d9b51892 is described below

commit 6d9b518928ada0d2289140356ba3881432fd71a5
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue Oct 14 23:09:27 2025 +0000

    Publish built docs triggered by f8eb57a4213165060b5b9a1620dbc4782ae56d79
---
 _sources/user-guide/latest/configs.md.txt     |  288 ++++++-
 _sources/user-guide/latest/expressions.md.txt |    6 +-
 searchindex.js                                |    2 +-
 user-guide/latest/configs.html                | 1052 +++++++++++++++++++++----
 user-guide/latest/expressions.html            |    5 +-
 5 files changed, 1179 insertions(+), 174 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt 
b/_sources/user-guide/latest/configs.md.txt
index 918fbc4b1..1a7cba1a4 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -21,72 +21,286 @@ under the License.
 
 Comet provides the following configuration settings.
 
-<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+## Scan Configuration Settings
 
-<!--BEGIN:CONFIG_TABLE-->
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[scan]-->
 | Config | Description | Default Value |
 |--------|-------------|---------------|
-| spark.comet.batchSize | The columnar batch size, i.e., the maximum number of 
rows that a batch can contain. | 8192 |
-| spark.comet.caseConversion.enabled | Java uses locale-specific rules when 
converting strings to upper or lower case and Rust does not, so we disable 
upper and lower by default. | false |
-| spark.comet.columnar.shuffle.async.enabled | Whether to enable asynchronous 
shuffle for Arrow-based shuffle. | false |
-| spark.comet.columnar.shuffle.async.max.thread.num | Maximum number of 
threads on an executor used for Comet async columnar shuffle. This is the upper 
bound of total number of shuffle threads per executor. In other words, if the 
number of cores * the number of shuffle threads per task 
`spark.comet.columnar.shuffle.async.thread.num` is larger than this config. 
Comet will use this config as the number of shuffle threads per executor 
instead. | 100 |
-| spark.comet.columnar.shuffle.async.thread.num | Number of threads used for 
Comet async columnar shuffle per shuffle task. Note that more threads means 
more memory requirement to buffer shuffle data before flushing to disk. Also, 
more threads may not always improve performance, and should be set based on the 
number of cores available. | 3 |
 | spark.comet.convert.csv.enabled | When enabled, data from Spark (non-native) 
CSV v1 and v2 scans will be converted to Arrow format. Note that to enable 
native vectorized execution, both this config and 'spark.comet.exec.enabled' 
need to be enabled. | false |
 | spark.comet.convert.json.enabled | When enabled, data from Spark 
(non-native) JSON v1 and v2 scans will be converted to Arrow format. Note that 
to enable native vectorized execution, both this config and 
'spark.comet.exec.enabled' need to be enabled. | false |
 | spark.comet.convert.parquet.enabled | When enabled, data from Spark 
(non-native) Parquet v1 and v2 scans will be converted to Arrow format. Note 
that to enable native vectorized execution, both this config and 
'spark.comet.exec.enabled' need to be enabled. | false |
+| spark.comet.scan.allowIncompatible | Some Comet scan implementations are not 
currently fully compatible with Spark for all datatypes. Set this config to 
true to allow them anyway. For more information, refer to the Comet 
Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
+| spark.comet.scan.enabled | Whether to enable native scans. When this is 
turned on, Spark will use Comet to read supported data sources (currently only 
Parquet is supported natively). Note that to enable native vectorized 
execution, both this config and 'spark.comet.exec.enabled' need to be enabled. 
| true |
+| spark.comet.scan.preFetch.enabled | Whether to enable the pre-fetching 
feature of CometScan. | false |
+| spark.comet.scan.preFetch.threadNum | The number of threads running 
pre-fetching for CometScan. Effective only if spark.comet.scan.preFetch.enabled 
is enabled. Note that more pre-fetching threads means a higher memory 
requirement to store pre-fetched row groups. | 2 |
+| spark.comet.sparkToColumnar.enabled | Whether to enable Spark to Arrow 
columnar conversion. When this is turned on, Comet will convert operators in 
`spark.comet.sparkToColumnar.supportedOperatorList` into Arrow columnar format 
before processing. | false |
+| spark.comet.sparkToColumnar.supportedOperatorList | A comma-separated list 
of operators that will be converted to Arrow columnar format when 
'spark.comet.sparkToColumnar.enabled' is true | Range,InMemoryTableScan,RDDScan 
|
+| spark.hadoop.fs.comet.libhdfs.schemes | Defines filesystem schemes (e.g., 
hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. 
Valid only when built with the hdfs feature enabled. | |
+<!--END:CONFIG_TABLE-->
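+
+For example, these scan settings might be applied when building a
+`SparkSession`. This is an illustrative sketch: the config keys come from the
+table above, while the session setup itself is generic Spark and the values
+are examples only.
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+val spark = SparkSession.builder()
+  .appName("comet-scan-example")
+  // Native scans also require spark.comet.exec.enabled, per the
+  // spark.comet.scan.enabled description above.
+  .config("spark.comet.scan.enabled", "true")
+  .config("spark.comet.exec.enabled", "true")
+  // Optionally convert additional operators to Arrow columnar format.
+  .config("spark.comet.sparkToColumnar.enabled", "true")
+  .config("spark.comet.sparkToColumnar.supportedOperatorList",
+    "Range,InMemoryTableScan,RDDScan")
+  .getOrCreate()
+```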
+
+## Parquet Reader Configuration Settings
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[parquet]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
+| spark.comet.parquet.enable.directBuffer | Whether to use Java direct byte 
buffer when reading Parquet. | false |
+| spark.comet.parquet.read.io.adjust.readRange.skew | In the parallel reader, 
if the read ranges submitted are skewed in sizes, this option will cause the 
reader to break up larger read ranges into smaller ranges to reduce the skew. 
This will result in a slightly larger number of connections opened to the file 
system but may give improved performance. | false |
+| spark.comet.parquet.read.io.mergeRanges | When enabled, the parallel reader 
will try to merge ranges of data that are separated by less than 
'comet.parquet.read.io.mergeRanges.delta' bytes. Longer continuous reads are 
faster on cloud storage. | true |
+| spark.comet.parquet.read.io.mergeRanges.delta | The delta in bytes between 
consecutive read ranges below which the parallel reader will try to merge the 
ranges. The default is 8MB. | 8388608 |
+| spark.comet.parquet.read.parallel.io.enabled | Whether to enable Comet's 
parallel reader for Parquet files. The parallel reader reads ranges of 
consecutive data in a file in parallel. It is faster for large files and row 
groups but uses more resources. | true |
+| spark.comet.parquet.read.parallel.io.thread-pool.size | The maximum number 
of parallel threads the parallel reader will use in a single executor. For 
executors configured with a smaller number of cores, use a smaller number. | 16 
|
+| spark.comet.parquet.respectFilterPushdown | Whether to respect Spark's 
PARQUET_FILTER_PUSHDOWN_ENABLED config. This needs to be respected when running 
the Spark SQL test suite, but the default setting results in poor performance 
in Comet when using the new native scans, so it is disabled by default. | false |
+<!--END:CONFIG_TABLE-->
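+
+As an illustrative sketch, the parallel reader might be tuned like this for a
+smaller executor. The keys come from the table above; the thread count and
+merge delta are example values, not recommendations.
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+val spark = SparkSession.builder()
+  .config("spark.comet.parquet.read.parallel.io.enabled", "true")
+  // Use a smaller pool on executors with fewer cores (see table above).
+  .config("spark.comet.parquet.read.parallel.io.thread-pool.size", "4")
+  // Merge read ranges separated by less than 16MB instead of the 8MB default.
+  .config("spark.comet.parquet.read.io.mergeRanges", "true")
+  .config("spark.comet.parquet.read.io.mergeRanges.delta",
+    (16 * 1024 * 1024).toString)
+  .getOrCreate()
+```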
+
+## Query Execution Settings
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[exec]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
+| spark.comet.caseConversion.enabled | Java uses locale-specific rules when 
converting strings to upper or lower case and Rust does not, so we disable 
upper and lower by default. | false |
 | spark.comet.debug.enabled | Whether to enable debug mode for Comet. When 
enabled, Comet will perform additional checks for debugging purposes, for 
example validating arrays when importing them from the JVM on the native side. 
Note that these checks may be expensive and should only be enabled for 
debugging. | false |
 | spark.comet.dppFallback.enabled | Whether to fall back to Spark for queries 
that use DPP. | true |
 | spark.comet.enabled | Whether to enable Comet extension for Spark. When this 
is turned on, Spark will use Comet to read Parquet data source. Note that to 
enable native vectorized execution, both this config and 
'spark.comet.exec.enabled' need to be enabled. By default, this config is the 
value of the env var `ENABLE_COMET` if set, or true otherwise. | true |
 | spark.comet.exceptionOnDatetimeRebase | Whether to throw an exception when 
seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. 
Since Spark 3, dates/timestamps were written according to the Proleptic 
Gregorian calendar. When this is true, Comet will throw exceptions when seeing 
these dates/timestamps that were written by Spark versions before 3.0. If this 
is false, these dates/timestamps will be read as if they were written according 
to the Proleptic Gregorian calendar and will not be rebased. | false |
+| spark.comet.exec.enabled | Whether to enable Comet native vectorized 
execution for Spark. This controls whether Spark should convert operators into 
their Comet counterparts and execute them in native space. Note: each operator 
is associated with a separate config in the format of 
'spark.comet.exec.<operator_name>.enabled' at the moment, and both the config 
and this need to be turned on in order for the operator to be executed 
natively. | true |
+| spark.comet.exec.replaceSortMergeJoin | Experimental feature to force Spark 
to replace SortMergeJoin with ShuffledHashJoin for improved performance. This 
feature is not stable yet. For more information, refer to the Comet Tuning 
Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
+| spark.comet.explain.native.enabled | When this setting is enabled, Comet 
will provide a tree representation of the native query plan before execution 
and again after execution, with metrics. | false |
+| spark.comet.explain.verbose.enabled | When this setting is enabled, Comet's 
extended explain output will provide the full query plan annotated with 
fallback reasons as well as a summary of how much of the plan was accelerated 
by Comet. When this setting is disabled, a list of fallback reasons will be 
provided instead. | false |
+| spark.comet.explainFallback.enabled | When this setting is enabled, Comet 
will provide logging explaining the reason(s) why a query stage cannot be 
executed natively. Set this to false to reduce the amount of logging. | false |
+| spark.comet.expression.allowIncompatible | Comet is not currently fully 
compatible with Spark for all expressions. Set this config to true to allow 
them anyway. For more information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
+| spark.comet.logFallbackReasons.enabled | When this setting is enabled, Comet 
will log warnings for all fallback reasons. | false |
+| spark.comet.maxTempDirectorySize | The maximum amount of data (in bytes) 
stored inside the temporary directories. | 107374182400b |
+| spark.comet.metrics.updateInterval | The interval in milliseconds to update 
metrics. If interval is negative, metrics will be updated upon task completion. 
| 3000 |
+| spark.comet.nativeLoadRequired | Whether to require the Comet native library 
to load successfully when Comet is enabled. If not, Comet will silently fall 
back to Spark when it fails to load the native lib; otherwise, an error will be 
thrown and the Spark job will be aborted. | false |
+| spark.comet.regexp.allowIncompatible | Comet is not currently fully 
compatible with Spark for all regular expressions. Set this config to true to 
allow them anyway. For more information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
+<!--END:CONFIG_TABLE-->
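+
+As a minimal sketch (keys from the table above, generic Spark session setup),
+native execution and fallback logging might be enabled together:
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+val spark = SparkSession.builder()
+  .config("spark.comet.enabled", "true")
+  .config("spark.comet.exec.enabled", "true")
+  // Log the reason(s) why a query stage cannot be executed natively.
+  .config("spark.comet.explainFallback.enabled", "true")
+  .getOrCreate()
+```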
+
+## Enabling or Disabling Individual Operators
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[enable_exec]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
 | spark.comet.exec.aggregate.enabled | Whether to enable aggregate by default. 
| true |
 | spark.comet.exec.broadcastExchange.enabled | Whether to enable 
broadcastExchange by default. | true |
 | spark.comet.exec.broadcastHashJoin.enabled | Whether to enable 
broadcastHashJoin by default. | true |
 | spark.comet.exec.coalesce.enabled | Whether to enable coalesce by default. | 
true |
 | spark.comet.exec.collectLimit.enabled | Whether to enable collectLimit by 
default. | true |
-| spark.comet.exec.enabled | Whether to enable Comet native vectorized 
execution for Spark. This controls whether Spark should convert operators into 
their Comet counterparts and execute them in native space. Note: each operator 
is associated with a separate config in the format of 
'spark.comet.exec.<operator_name>.enabled' at the moment, and both the config 
and this need to be turned on, in order for the operator to be executed in 
native. | true |
 | spark.comet.exec.expand.enabled | Whether to enable expand by default. | 
true |
 | spark.comet.exec.filter.enabled | Whether to enable filter by default. | 
true |
 | spark.comet.exec.globalLimit.enabled | Whether to enable globalLimit by 
default. | true |
 | spark.comet.exec.hashJoin.enabled | Whether to enable hashJoin by default. | 
true |
 | spark.comet.exec.localLimit.enabled | Whether to enable localLimit by 
default. | true |
-| spark.comet.exec.memoryPool | The type of memory pool to be used for Comet 
native execution when running Spark in off-heap mode. Available pool types are 
'greedy_unified' and `fair_unified`. For more information, refer to the Comet 
Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | 
fair_unified |
-| spark.comet.exec.memoryPool.fraction | Fraction of off-heap memory pool that 
is available to Comet. Only applies to off-heap mode. For more information, 
refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html). | 1.0 |
 | spark.comet.exec.project.enabled | Whether to enable project by default. | 
true |
-| spark.comet.exec.replaceSortMergeJoin | Experimental feature to force Spark 
to replace SortMergeJoin with ShuffledHashJoin for improved performance. This 
feature is not stable yet. For more information, refer to the Comet Tuning 
Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
-| spark.comet.exec.shuffle.compression.codec | The codec of Comet native 
shuffle used to compress shuffle data. lz4, zstd, and snappy are supported. 
Compression can be disabled by setting spark.shuffle.compress=false. | lz4 |
-| spark.comet.exec.shuffle.compression.zstd.level | The compression level to 
use when compressing shuffle files with zstd. | 1 |
-| spark.comet.exec.shuffle.enabled | Whether to enable Comet native shuffle. 
Note that this requires setting 'spark.shuffle.manager' to 
'org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager'. 
'spark.shuffle.manager' must be set before starting the Spark application and 
cannot be changed during the application. | true |
 | spark.comet.exec.sort.enabled | Whether to enable sort by default. | true |
 | spark.comet.exec.sortMergeJoin.enabled | Whether to enable sortMergeJoin by 
default. | true |
 | spark.comet.exec.sortMergeJoinWithJoinFilter.enabled | Experimental support 
for Sort Merge Join with filter | false |
-| spark.comet.exec.stddev.enabled | Whether to enable stddev by default. 
stddev is slower than Spark's implementation. | true |
 | spark.comet.exec.takeOrderedAndProject.enabled | Whether to enable 
takeOrderedAndProject by default. | true |
 | spark.comet.exec.union.enabled | Whether to enable union by default. | true |
 | spark.comet.exec.window.enabled | Whether to enable window by default. | 
true |
-| spark.comet.explain.native.enabled | When this setting is enabled, Comet 
will provide a tree representation of the native query plan before execution 
and again after execution, with metrics. | false |
-| spark.comet.explain.verbose.enabled | When this setting is enabled, Comet's 
extended explain output will provide the full query plan annotated with 
fallback reasons as well as a summary of how much of the plan was accelerated 
by Comet. When this setting is disabled, a list of fallback reasons will be 
provided instead. | false |
-| spark.comet.explainFallback.enabled | When this setting is enabled, Comet 
will provide logging explaining the reason(s) why a query stage cannot be 
executed natively. Set this to false to reduce the amount of logging. | false |
-| spark.comet.expression.allowIncompatible | Comet is not currently fully 
compatible with Spark for all expressions. Set this config to true to allow 
them anyway. For more information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
-| spark.comet.logFallbackReasons.enabled | When this setting is enabled, Comet 
will log warnings for all fallback reasons. | false |
-| spark.comet.maxTempDirectorySize | The maximum amount of data (in bytes) 
stored inside the temporary directories. | 107374182400b |
-| spark.comet.metrics.updateInterval | The interval in milliseconds to update 
metrics. If interval is negative, metrics will be updated upon task completion. 
| 3000 |
+<!--END:CONFIG_TABLE-->
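+
+Per the `spark.comet.exec.enabled` description above, both the global flag and
+the per-operator flag must be on for an operator to run natively. A hedged
+sketch, assuming these settings can be changed on an existing session
+(otherwise set them when building the `SparkSession`):
+
+```scala
+// Assumes an existing SparkSession named `spark`.
+spark.conf.set("spark.comet.exec.enabled", "true")
+// Fall back to Spark for window operators only (illustrative choice).
+spark.conf.set("spark.comet.exec.window.enabled", "false")
+```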
+
+## Enabling or Disabling Individual Scalar Expressions
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[enable_expr]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
+| spark.comet.expression.Acos.enabled | Enable Comet acceleration for Acos | 
true |
+| spark.comet.expression.Add.enabled | Enable Comet acceleration for Add | 
true |
+| spark.comet.expression.Alias.enabled | Enable Comet acceleration for Alias | 
true |
+| spark.comet.expression.And.enabled | Enable Comet acceleration for And | 
true |
+| spark.comet.expression.ArrayAppend.enabled | Enable Comet acceleration for 
ArrayAppend | true |
+| spark.comet.expression.ArrayCompact.enabled | Enable Comet acceleration for 
ArrayCompact | true |
+| spark.comet.expression.ArrayContains.enabled | Enable Comet acceleration for 
ArrayContains | true |
+| spark.comet.expression.ArrayDistinct.enabled | Enable Comet acceleration for 
ArrayDistinct | true |
+| spark.comet.expression.ArrayExcept.enabled | Enable Comet acceleration for 
ArrayExcept | true |
+| spark.comet.expression.ArrayFilter.enabled | Enable Comet acceleration for 
ArrayFilter | true |
+| spark.comet.expression.ArrayInsert.enabled | Enable Comet acceleration for 
ArrayInsert | true |
+| spark.comet.expression.ArrayIntersect.enabled | Enable Comet acceleration 
for ArrayIntersect | true |
+| spark.comet.expression.ArrayJoin.enabled | Enable Comet acceleration for 
ArrayJoin | true |
+| spark.comet.expression.ArrayMax.enabled | Enable Comet acceleration for 
ArrayMax | true |
+| spark.comet.expression.ArrayMin.enabled | Enable Comet acceleration for 
ArrayMin | true |
+| spark.comet.expression.ArrayRemove.enabled | Enable Comet acceleration for 
ArrayRemove | true |
+| spark.comet.expression.ArrayRepeat.enabled | Enable Comet acceleration for 
ArrayRepeat | true |
+| spark.comet.expression.ArrayUnion.enabled | Enable Comet acceleration for 
ArrayUnion | true |
+| spark.comet.expression.ArraysOverlap.enabled | Enable Comet acceleration for 
ArraysOverlap | true |
+| spark.comet.expression.Ascii.enabled | Enable Comet acceleration for Ascii | 
true |
+| spark.comet.expression.Asin.enabled | Enable Comet acceleration for Asin | 
true |
+| spark.comet.expression.Atan.enabled | Enable Comet acceleration for Atan | 
true |
+| spark.comet.expression.Atan2.enabled | Enable Comet acceleration for Atan2 | 
true |
+| spark.comet.expression.AttributeReference.enabled | Enable Comet 
acceleration for AttributeReference | true |
+| spark.comet.expression.BitLength.enabled | Enable Comet acceleration for 
BitLength | true |
+| spark.comet.expression.BitwiseAnd.enabled | Enable Comet acceleration for 
BitwiseAnd | true |
+| spark.comet.expression.BitwiseCount.enabled | Enable Comet acceleration for 
BitwiseCount | true |
+| spark.comet.expression.BitwiseGet.enabled | Enable Comet acceleration for 
BitwiseGet | true |
+| spark.comet.expression.BitwiseNot.enabled | Enable Comet acceleration for 
BitwiseNot | true |
+| spark.comet.expression.BitwiseOr.enabled | Enable Comet acceleration for 
BitwiseOr | true |
+| spark.comet.expression.BitwiseXor.enabled | Enable Comet acceleration for 
BitwiseXor | true |
+| spark.comet.expression.CaseWhen.enabled | Enable Comet acceleration for 
CaseWhen | true |
+| spark.comet.expression.Cast.enabled | Enable Comet acceleration for Cast | 
true |
+| spark.comet.expression.Ceil.enabled | Enable Comet acceleration for Ceil | 
true |
+| spark.comet.expression.CheckOverflow.enabled | Enable Comet acceleration for 
CheckOverflow | true |
+| spark.comet.expression.Chr.enabled | Enable Comet acceleration for Chr | 
true |
+| spark.comet.expression.Coalesce.enabled | Enable Comet acceleration for 
Coalesce | true |
+| spark.comet.expression.ConcatWs.enabled | Enable Comet acceleration for 
ConcatWs | true |
+| spark.comet.expression.Contains.enabled | Enable Comet acceleration for 
Contains | true |
+| spark.comet.expression.Cos.enabled | Enable Comet acceleration for Cos | 
true |
+| spark.comet.expression.CreateArray.enabled | Enable Comet acceleration for 
CreateArray | true |
+| spark.comet.expression.CreateNamedStruct.enabled | Enable Comet acceleration 
for CreateNamedStruct | true |
+| spark.comet.expression.DateAdd.enabled | Enable Comet acceleration for 
DateAdd | true |
+| spark.comet.expression.DateSub.enabled | Enable Comet acceleration for 
DateSub | true |
+| spark.comet.expression.DayOfMonth.enabled | Enable Comet acceleration for 
DayOfMonth | true |
+| spark.comet.expression.DayOfWeek.enabled | Enable Comet acceleration for 
DayOfWeek | true |
+| spark.comet.expression.DayOfYear.enabled | Enable Comet acceleration for 
DayOfYear | true |
+| spark.comet.expression.Divide.enabled | Enable Comet acceleration for Divide 
| true |
+| spark.comet.expression.ElementAt.enabled | Enable Comet acceleration for 
ElementAt | true |
+| spark.comet.expression.EndsWith.enabled | Enable Comet acceleration for 
EndsWith | true |
+| spark.comet.expression.EqualNullSafe.enabled | Enable Comet acceleration for 
EqualNullSafe | true |
+| spark.comet.expression.EqualTo.enabled | Enable Comet acceleration for 
EqualTo | true |
+| spark.comet.expression.Exp.enabled | Enable Comet acceleration for Exp | 
true |
+| spark.comet.expression.Expm1.enabled | Enable Comet acceleration for Expm1 | 
true |
+| spark.comet.expression.Flatten.enabled | Enable Comet acceleration for 
Flatten | true |
+| spark.comet.expression.Floor.enabled | Enable Comet acceleration for Floor | 
true |
+| spark.comet.expression.FromUnixTime.enabled | Enable Comet acceleration for 
FromUnixTime | true |
+| spark.comet.expression.GetArrayItem.enabled | Enable Comet acceleration for 
GetArrayItem | true |
+| spark.comet.expression.GetArrayStructFields.enabled | Enable Comet 
acceleration for GetArrayStructFields | true |
+| spark.comet.expression.GetMapValue.enabled | Enable Comet acceleration for 
GetMapValue | true |
+| spark.comet.expression.GetStructField.enabled | Enable Comet acceleration 
for GetStructField | true |
+| spark.comet.expression.GreaterThan.enabled | Enable Comet acceleration for 
GreaterThan | true |
+| spark.comet.expression.GreaterThanOrEqual.enabled | Enable Comet 
acceleration for GreaterThanOrEqual | true |
+| spark.comet.expression.Hex.enabled | Enable Comet acceleration for Hex | 
true |
+| spark.comet.expression.Hour.enabled | Enable Comet acceleration for Hour | 
true |
+| spark.comet.expression.If.enabled | Enable Comet acceleration for If | true |
+| spark.comet.expression.In.enabled | Enable Comet acceleration for In | true |
+| spark.comet.expression.InSet.enabled | Enable Comet acceleration for InSet | 
true |
+| spark.comet.expression.InitCap.enabled | Enable Comet acceleration for 
InitCap | true |
+| spark.comet.expression.IntegralDivide.enabled | Enable Comet acceleration 
for IntegralDivide | true |
+| spark.comet.expression.IsNaN.enabled | Enable Comet acceleration for IsNaN | 
true |
+| spark.comet.expression.IsNotNull.enabled | Enable Comet acceleration for 
IsNotNull | true |
+| spark.comet.expression.IsNull.enabled | Enable Comet acceleration for IsNull 
| true |
+| spark.comet.expression.Length.enabled | Enable Comet acceleration for Length 
| true |
+| spark.comet.expression.LessThan.enabled | Enable Comet acceleration for 
LessThan | true |
+| spark.comet.expression.LessThanOrEqual.enabled | Enable Comet acceleration 
for LessThanOrEqual | true |
+| spark.comet.expression.Like.enabled | Enable Comet acceleration for Like | 
true |
+| spark.comet.expression.Literal.enabled | Enable Comet acceleration for 
Literal | true |
+| spark.comet.expression.Log.enabled | Enable Comet acceleration for Log | 
true |
+| spark.comet.expression.Log10.enabled | Enable Comet acceleration for Log10 | 
true |
+| spark.comet.expression.Log2.enabled | Enable Comet acceleration for Log2 | 
true |
+| spark.comet.expression.Lower.enabled | Enable Comet acceleration for Lower | 
true |
+| spark.comet.expression.MapEntries.enabled | Enable Comet acceleration for 
MapEntries | true |
+| spark.comet.expression.MapFromArrays.enabled | Enable Comet acceleration for 
MapFromArrays | true |
+| spark.comet.expression.MapKeys.enabled | Enable Comet acceleration for 
MapKeys | true |
+| spark.comet.expression.MapValues.enabled | Enable Comet acceleration for 
MapValues | true |
+| spark.comet.expression.Md5.enabled | Enable Comet acceleration for Md5 | 
true |
+| spark.comet.expression.Minute.enabled | Enable Comet acceleration for Minute 
| true |
+| spark.comet.expression.MonotonicallyIncreasingID.enabled | Enable Comet 
acceleration for MonotonicallyIncreasingID | true |
+| spark.comet.expression.Month.enabled | Enable Comet acceleration for Month | 
true |
+| spark.comet.expression.Multiply.enabled | Enable Comet acceleration for 
Multiply | true |
+| spark.comet.expression.Murmur3Hash.enabled | Enable Comet acceleration for 
Murmur3Hash | true |
+| spark.comet.expression.Not.enabled | Enable Comet acceleration for Not | 
true |
+| spark.comet.expression.OctetLength.enabled | Enable Comet acceleration for 
OctetLength | true |
+| spark.comet.expression.Or.enabled | Enable Comet acceleration for Or | true |
+| spark.comet.expression.Pow.enabled | Enable Comet acceleration for Pow | 
true |
+| spark.comet.expression.Quarter.enabled | Enable Comet acceleration for 
Quarter | true |
+| spark.comet.expression.RLike.enabled | Enable Comet acceleration for RLike | 
true |
+| spark.comet.expression.Rand.enabled | Enable Comet acceleration for Rand | 
true |
+| spark.comet.expression.Randn.enabled | Enable Comet acceleration for Randn | 
true |
+| spark.comet.expression.RegExpReplace.enabled | Enable Comet acceleration for 
RegExpReplace | true |
+| spark.comet.expression.Remainder.enabled | Enable Comet acceleration for 
Remainder | true |
+| spark.comet.expression.Reverse.enabled | Enable Comet acceleration for 
Reverse | true |
+| spark.comet.expression.Round.enabled | Enable Comet acceleration for Round | 
true |
+| spark.comet.expression.Second.enabled | Enable Comet acceleration for Second 
| true |
+| spark.comet.expression.Sha2.enabled | Enable Comet acceleration for Sha2 | 
true |
+| spark.comet.expression.ShiftLeft.enabled | Enable Comet acceleration for 
ShiftLeft | true |
+| spark.comet.expression.ShiftRight.enabled | Enable Comet acceleration for 
ShiftRight | true |
+| spark.comet.expression.Signum.enabled | Enable Comet acceleration for Signum 
| true |
+| spark.comet.expression.Sin.enabled | Enable Comet acceleration for Sin | 
true |
+| spark.comet.expression.SparkPartitionID.enabled | Enable Comet acceleration 
for SparkPartitionID | true |
+| spark.comet.expression.Sqrt.enabled | Enable Comet acceleration for Sqrt | 
true |
+| spark.comet.expression.StartsWith.enabled | Enable Comet acceleration for 
StartsWith | true |
+| spark.comet.expression.StringInstr.enabled | Enable Comet acceleration for 
StringInstr | true |
+| spark.comet.expression.StringLPad.enabled | Enable Comet acceleration for 
StringLPad | true |
+| spark.comet.expression.StringRPad.enabled | Enable Comet acceleration for 
StringRPad | true |
+| spark.comet.expression.StringRepeat.enabled | Enable Comet acceleration for 
StringRepeat | true |
+| spark.comet.expression.StringReplace.enabled | Enable Comet acceleration for 
StringReplace | true |
+| spark.comet.expression.StringSpace.enabled | Enable Comet acceleration for 
StringSpace | true |
+| spark.comet.expression.StringTranslate.enabled | Enable Comet acceleration 
for StringTranslate | true |
+| spark.comet.expression.StringTrim.enabled | Enable Comet acceleration for 
StringTrim | true |
+| spark.comet.expression.StringTrimBoth.enabled | Enable Comet acceleration 
for StringTrimBoth | true |
+| spark.comet.expression.StringTrimLeft.enabled | Enable Comet acceleration 
for StringTrimLeft | true |
+| spark.comet.expression.StringTrimRight.enabled | Enable Comet acceleration 
for StringTrimRight | true |
+| spark.comet.expression.StructsToJson.enabled | Enable Comet acceleration for 
StructsToJson | true |
+| spark.comet.expression.Substring.enabled | Enable Comet acceleration for 
Substring | true |
+| spark.comet.expression.Subtract.enabled | Enable Comet acceleration for 
Subtract | true |
+| spark.comet.expression.Tan.enabled | Enable Comet acceleration for Tan | 
true |
+| spark.comet.expression.TruncDate.enabled | Enable Comet acceleration for 
TruncDate | true |
+| spark.comet.expression.TruncTimestamp.enabled | Enable Comet acceleration 
for TruncTimestamp | true |
+| spark.comet.expression.UnaryMinus.enabled | Enable Comet acceleration for 
UnaryMinus | true |
+| spark.comet.expression.Unhex.enabled | Enable Comet acceleration for Unhex | 
true |
+| spark.comet.expression.Upper.enabled | Enable Comet acceleration for Upper | 
true |
+| spark.comet.expression.WeekDay.enabled | Enable Comet acceleration for 
WeekDay | true |
+| spark.comet.expression.WeekOfYear.enabled | Enable Comet acceleration for 
WeekOfYear | true |
+| spark.comet.expression.XxHash64.enabled | Enable Comet acceleration for 
XxHash64 | true |
+| spark.comet.expression.Year.enabled | Enable Comet acceleration for Year | 
true |
+<!--END:CONFIG_TABLE-->
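+
+A hedged sketch of the per-expression toggle described above; `Upper` is an
+arbitrary example, and the aggregate expressions in the next table follow the
+same `spark.comet.expression.<name>.enabled` pattern:
+
+```scala
+// Assumes an existing SparkSession named `spark`.
+spark.conf.set("spark.comet.expression.Upper.enabled", "false")
+// Re-enable Comet acceleration for Upper later if desired.
+spark.conf.set("spark.comet.expression.Upper.enabled", "true")
+```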
+
+## Enabling or Disabling Individual Aggregate Expressions
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[enable_agg_expr]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
+| spark.comet.expression.Average.enabled | Enable Comet acceleration for 
Average | true |
+| spark.comet.expression.BitAndAgg.enabled | Enable Comet acceleration for 
BitAndAgg | true |
+| spark.comet.expression.BitOrAgg.enabled | Enable Comet acceleration for 
BitOrAgg | true |
+| spark.comet.expression.BitXorAgg.enabled | Enable Comet acceleration for 
BitXorAgg | true |
+| spark.comet.expression.BloomFilterAggregate.enabled | Enable Comet 
acceleration for BloomFilterAggregate | true |
+| spark.comet.expression.Corr.enabled | Enable Comet acceleration for Corr | 
true |
+| spark.comet.expression.Count.enabled | Enable Comet acceleration for Count | 
true |
+| spark.comet.expression.CovPopulation.enabled | Enable Comet acceleration for 
CovPopulation | true |
+| spark.comet.expression.CovSample.enabled | Enable Comet acceleration for 
CovSample | true |
+| spark.comet.expression.First.enabled | Enable Comet acceleration for First | 
true |
+| spark.comet.expression.Last.enabled | Enable Comet acceleration for Last | 
true |
+| spark.comet.expression.Max.enabled | Enable Comet acceleration for Max | 
true |
+| spark.comet.expression.Min.enabled | Enable Comet acceleration for Min | 
true |
+| spark.comet.expression.StddevPop.enabled | Enable Comet acceleration for 
StddevPop | true |
+| spark.comet.expression.StddevSamp.enabled | Enable Comet acceleration for 
StddevSamp | true |
+| spark.comet.expression.Sum.enabled | Enable Comet acceleration for Sum | 
true |
+| spark.comet.expression.VariancePop.enabled | Enable Comet acceleration for 
VariancePop | true |
+| spark.comet.expression.VarianceSamp.enabled | Enable Comet acceleration for 
VarianceSamp | true |
+<!--END:CONFIG_TABLE-->
+
+## Shuffle Configuration Settings
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[shuffle]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
+| spark.comet.columnar.shuffle.async.enabled | Whether to enable asynchronous 
shuffle for Arrow-based shuffle. | false |
+| spark.comet.columnar.shuffle.async.max.thread.num | Maximum number of 
threads on an executor used for Comet async columnar shuffle. This is the upper 
bound of the total number of shuffle threads per executor. In other words, if 
the number of cores * the number of shuffle threads per task 
(`spark.comet.columnar.shuffle.async.thread.num`) is larger than this config, 
Comet will use this config as the number of shuffle threads per executor 
instead. | 100 |
+| spark.comet.columnar.shuffle.async.thread.num | Number of threads used for 
Comet async columnar shuffle per shuffle task. Note that more threads means a 
higher memory requirement to buffer shuffle data before flushing to disk. 
Also, more threads may not always improve performance; this should be set 
based on the number of cores available. | 3 |
+| spark.comet.columnar.shuffle.batch.size | Batch size when writing out sorted 
spill files on the native side. Note that this should not be larger than batch 
size (i.e., `spark.comet.batchSize`). Otherwise it will produce larger batches 
than expected in the native operator after shuffle. | 8192 |
+| spark.comet.exec.shuffle.compression.codec | The codec of Comet native 
shuffle used to compress shuffle data. lz4, zstd, and snappy are supported. 
Compression can be disabled by setting spark.shuffle.compress=false. | lz4 |
+| spark.comet.exec.shuffle.compression.zstd.level | The compression level to 
use when compressing shuffle files with zstd. | 1 |
+| spark.comet.exec.shuffle.enabled | Whether to enable Comet native shuffle. 
Note that this requires setting 'spark.shuffle.manager' to 
'org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager'. 
'spark.shuffle.manager' must be set before starting the Spark application and 
cannot be changed during the application. | true |
 | spark.comet.native.shuffle.partitioning.hash.enabled | Whether to enable 
hash partitioning for Comet native shuffle. | true |
 | spark.comet.native.shuffle.partitioning.range.enabled | Whether to enable 
range partitioning for Comet native shuffle. | true |
-| spark.comet.nativeLoadRequired | Whether to require Comet native library to 
load successfully when Comet is enabled. If not, Comet will silently fallback 
to Spark when it fails to load the native lib. Otherwise, an error will be 
thrown and the Spark job will be aborted. | false |
-| spark.comet.parquet.enable.directBuffer | Whether to use Java direct byte 
buffer when reading Parquet. | false |
-| spark.comet.parquet.read.io.adjust.readRange.skew | In the parallel reader, 
if the read ranges submitted are skewed in sizes, this option will cause the 
reader to break up larger read ranges into smaller ranges to reduce the skew. 
This will result in a slightly larger number of connections opened to the file 
system but may give improved performance. | false |
-| spark.comet.parquet.read.io.mergeRanges | When enabled the parallel reader 
will try to merge ranges of data that are separated by less than 
'comet.parquet.read.io.mergeRanges.delta' bytes. Longer continuous reads are 
faster on cloud storage. | true |
-| spark.comet.parquet.read.io.mergeRanges.delta | The delta in bytes between 
consecutive read ranges below which the parallel reader will try to merge the 
ranges. The default is 8MB. | 8388608 |
-| spark.comet.parquet.read.parallel.io.enabled | Whether to enable Comet's 
parallel reader for Parquet files. The parallel reader reads ranges of 
consecutive data in a  file in parallel. It is faster for large files and row 
groups but uses more resources. | true |
-| spark.comet.parquet.read.parallel.io.thread-pool.size | The maximum number 
of parallel threads the parallel reader will use in a single executor. For 
executors configured with a smaller number of cores, use a smaller number. | 16 
|
-| spark.comet.parquet.respectFilterPushdown | Whether to respect Spark's 
PARQUET_FILTER_PUSHDOWN_ENABLED config. This needs to be respected when running 
the Spark SQL test suite but the default setting results in poor performance in 
Comet when using the new native scans, disabled by default | false |
-| spark.comet.regexp.allowIncompatible | Comet is not currently fully 
compatible with Spark for all regular expressions. Set this config to true to 
allow them anyway. For more information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
-| spark.comet.scan.allowIncompatible | Some Comet scan implementations are not 
currently fully compatible with Spark for all datatypes. Set this config to 
true to allow them anyway. For more information, refer to the Comet 
Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
-| spark.comet.scan.enabled | Whether to enable native scans. When this is 
turned on, Spark will use Comet to read supported data sources (currently only 
Parquet is supported natively). Note that to enable native vectorized 
execution, both this config and 'spark.comet.exec.enabled' need to be enabled. 
| true |
-| spark.comet.scan.preFetch.enabled | Whether to enable pre-fetching feature 
of CometScan. | false |
-| spark.comet.scan.preFetch.threadNum | The number of threads running 
pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is 
enabled. Note that more pre-fetching threads means more memory requirement to 
store pre-fetched row groups. | 2 |
 | spark.comet.shuffle.preferDictionary.ratio | The ratio of total values to 
distinct values in a string column to decide whether to prefer dictionary 
encoding when shuffling the column. If the ratio is higher than this config, 
dictionary encoding will be used when shuffling the string column. This config 
is effective only if it is higher than 1.0. Note that this config is only used 
when `spark.comet.exec.shuffle.mode` is `jvm`. | 10.0 |
 | spark.comet.shuffle.sizeInBytesMultiplier | Comet reports smaller sizes for 
shuffle due to using Arrow's columnar memory format, and this can result in 
Spark choosing a different join strategy due to the estimated size of the 
exchange being smaller. Comet will multiply sizeInBytes by this amount to avoid 
regressions in join strategy. | 1.0 |
-| spark.comet.sparkToColumnar.supportedOperatorList | A comma-separated list 
of operators that will be converted to Arrow columnar format when 
'spark.comet.sparkToColumnar.enabled' is true | Range,InMemoryTableScan,RDDScan 
|
-| spark.hadoop.fs.comet.libhdfs.schemes | Defines filesystem schemes (e.g., 
hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. 
Valid only when built with hdfs feature enabled. | |
+<!--END:CONFIG_TABLE-->
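+
+As an illustrative sketch of the shuffle settings above: per the
+`spark.comet.exec.shuffle.enabled` description, the shuffle manager must be
+set before the application starts, so these go on the session builder (the
+codec and level are example values):
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+val spark = SparkSession.builder()
+  .config("spark.shuffle.manager",
+    "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
+  .config("spark.comet.exec.shuffle.enabled", "true")
+  // Compress shuffle files with zstd at level 3 instead of the lz4 default.
+  .config("spark.comet.exec.shuffle.compression.codec", "zstd")
+  .config("spark.comet.exec.shuffle.compression.zstd.level", "3")
+  .getOrCreate()
+```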
+
+## Memory & Tuning Configuration Settings
+
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[tuning]-->
+| Config | Description | Default Value |
+|--------|-------------|---------------|
+| spark.comet.batchSize | The columnar batch size, i.e., the maximum number of 
rows that a batch can contain. | 8192 |
+| spark.comet.exec.memoryPool | The type of memory pool to be used for Comet 
native execution when running Spark in off-heap mode. Available pool types are 
`greedy_unified` and `fair_unified`. For more information, refer to the Comet 
Tuning Guide (https://datafusion.apache.org/comet/user-guide/tuning.html). | 
fair_unified |
+| spark.comet.exec.memoryPool.fraction | Fraction of off-heap memory pool that 
is available to Comet. Only applies to off-heap mode. For more information, 
refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html). | 1.0 |
+| spark.comet.tracing.enabled | Enable fine-grained tracing of events and 
memory usage. For more information, refer to the Comet Tracing Guide 
(https://datafusion.apache.org/comet/user-guide/tracing.html). | false |
 <!--END:CONFIG_TABLE-->
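+
+A hedged sketch combining the settings above with Spark's standard off-heap
+memory options (`spark.memory.offHeap.*` are generic Spark settings, assumed
+here as prerequisites for the off-heap pool; the sizes and fraction are
+example values):
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+val spark = SparkSession.builder()
+  .config("spark.memory.offHeap.enabled", "true")
+  .config("spark.memory.offHeap.size", "4g")
+  // Comet keys from the table above.
+  .config("spark.comet.exec.memoryPool", "fair_unified")
+  .config("spark.comet.exec.memoryPool.fraction", "0.8")
+  .config("spark.comet.batchSize", "8192")
+  .getOrCreate()
+```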
diff --git a/_sources/user-guide/latest/expressions.md.txt 
b/_sources/user-guide/latest/expressions.md.txt
index 003d03a53..b07d788db 100644
--- a/_sources/user-guide/latest/expressions.md.txt
+++ b/_sources/user-guide/latest/expressions.md.txt
@@ -23,9 +23,10 @@ Comet supports the following Spark expressions. Expressions 
that are marked as S
 natively in Comet and provide the same results as Spark, or will fall back to 
Spark for cases that would not
 be compatible.
 
-All expressions are enabled by default, but can be disabled by setting
+All expressions are enabled by default, but most can be disabled by setting
 `spark.comet.expression.EXPRNAME.enabled=false`, where `EXPRNAME` is the 
expression name as specified in
-the following tables, such as `Length`, or `StartsWith`.
+the following tables, such as `Length` or `StartsWith`. See the [Comet 
Configuration Guide] for a full list
+of expressions that can be disabled.
 
 Expressions that are not Spark-compatible will fall back to Spark by default 
and can be enabled by setting
 `spark.comet.expression.EXPRNAME.allowIncompatible=true`.
@@ -269,4 +270,5 @@ incompatible expressions.
 | ToPrettyString               | Yes               |                           
                                                  |
 | UnscaledValue                | Yes               |                           
                                                  |
 
+[Comet Configuration Guide]: configs.md
 [Comet Compatibility Guide]: compatibility.md
diff --git a/searchindex.js b/searchindex.js
index add786aee..46094e263 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], 
"2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run 
Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[17, 
"ansi-mode"], [56, "ansi-mode"]], "ANSI mode": [[30, "ansi-mode"], [43, 
"ansi-mode"]], "API Differences Between Spark Versions": [[0, 
"api-differences-between-spark-versions"]], "Accelerating Apache Iceberg 
Parquet Scans using Comet (Experimental)": [[22, null], [35, n [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], 
"2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run 
Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[17, 
"ansi-mode"], [56, "ansi-mode"]], "ANSI mode": [[30, "ansi-mode"], [43, 
"ansi-mode"]], "API Differences Between Spark Versions": [[0, 
"api-differences-between-spark-versions"]], "Accelerating Apache Iceberg 
Parquet Scans using Comet (Experimental)": [[22, null], [35, n [...]
\ No newline at end of file
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index d8518ba6a..273142c14 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -535,9 +535,54 @@ under the License.
               
               <div class="toc-item">
                 
+<div class="tocsection onthispage pt-5 pb-3">
+    <i class="fas fa-list"></i> On this page
+</div>
 
 <nav id="bd-toc-nav">
-    
+    <ul class="visible nav section-nav flex-column">
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" href="#scan-configuration-settings">
+   Scan Configuration Settings
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" 
href="#parquet-reader-configuration-settings">
+   Parquet Reader Configuration Settings
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" href="#query-execution-settings">
+   Query Execution Settings
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" 
href="#enabling-or-disabling-individual-operators">
+   Enabling or Disabling Individual Operators
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" 
href="#enabling-or-disabling-individual-scalar-expressions">
+   Enabling or Disabling Individual Scalar Expressions
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" 
href="#enabling-or-disabling-individual-aggregate-expressions">
+   Enabling or Disabling Individual Aggregate Expressions
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" 
href="#shuffle-configuration-settings">
+   Shuffle Configuration Settings
+  </a>
+ </li>
+ <li class="toc-h2 nav-item toc-entry">
+  <a class="reference internal nav-link" 
href="#memory-tuning-configuration-settings">
+   Memory &amp; Tuning Configuration Settings
+  </a>
+ </li>
+</ul>
+
 </nav>
               </div>
               
@@ -585,8 +630,10 @@ under the License.
 <section id="comet-configuration-settings">
 <h1>Comet Configuration Settings<a class="headerlink" 
href="#comet-configuration-settings" title="Link to this heading">¶</a></h1>
 <p>Comet provides the following configuration settings.</p>
+<section id="scan-configuration-settings">
+<h2>Scan Configuration Settings<a class="headerlink" 
href="#scan-configuration-settings" title="Link to this heading">¶</a></h2>
 <!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
-<!--BEGIN:CONFIG_TABLE-->
+<!--BEGIN:CONFIG_TABLE[scan]-->
 <table class="table">
 <thead>
 <tr class="row-odd"><th class="head"><p>Config</p></th>
@@ -595,54 +642,186 @@ under the License.
 </tr>
 </thead>
 <tbody>
-<tr class="row-even"><td><p>spark.comet.batchSize</p></td>
-<td><p>The columnar batch size, i.e., the maximum number of rows that a batch 
can contain.</p></td>
-<td><p>8192</p></td>
+<tr class="row-even"><td><p>spark.comet.convert.csv.enabled</p></td>
+<td><p>When enabled, data from Spark (non-native) CSV v1 and v2 scans will be 
converted to Arrow format. Note that to enable native vectorized execution, 
both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
+<td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.caseConversion.enabled</p></td>
-<td><p>Java uses locale-specific rules when converting strings to upper or 
lower case and Rust does not, so we disable upper and lower by default.</p></td>
+<tr class="row-odd"><td><p>spark.comet.convert.json.enabled</p></td>
+<td><p>When enabled, data from Spark (non-native) JSON v1 and v2 scans will be 
converted to Arrow format. Note that to enable native vectorized execution, 
both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.columnar.shuffle.async.enabled</p></td>
-<td><p>Whether to enable asynchronous shuffle for Arrow-based shuffle.</p></td>
+<tr class="row-even"><td><p>spark.comet.convert.parquet.enabled</p></td>
+<td><p>When enabled, data from Spark (non-native) Parquet v1 and v2 scans will 
be converted to Arrow format. Note that to enable native vectorized execution, 
both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.columnar.shuffle.async.max.thread.num</p></td>
-<td><p>Maximum number of threads on an executor used for Comet async columnar 
shuffle. This is the upper bound of total number of shuffle threads per 
executor. In other words, if the number of cores * the number of shuffle 
threads per task <code class="docutils literal notranslate"><span 
class="pre">spark.comet.columnar.shuffle.async.thread.num</span></code> is 
larger than this config. Comet will use this config as the number of shuffle 
threads per executor instead.</p></td>
-<td><p>100</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.allowIncompatible</p></td>
+<td><p>Some Comet scan implementations are not currently fully compatible with 
Spark for all datatypes. Set this config to true to allow them anyway. For more 
information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
+<td><p>false</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.columnar.shuffle.async.thread.num</p></td>
-<td><p>Number of threads used for Comet async columnar shuffle per shuffle 
task. Note that more threads means more memory requirement to buffer shuffle 
data before flushing to disk. Also, more threads may not always improve 
performance, and should be set based on the number of cores available.</p></td>
-<td><p>3</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
+<td><p>Whether to enable native scans. When this is turned on, Spark will use 
Comet to read supported data sources (currently only Parquet is supported 
natively). Note that to enable native vectorized execution, both this config 
and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.convert.csv.enabled</p></td>
-<td><p>When enabled, data from Spark (non-native) CSV v1 and v2 scans will be 
converted to Arrow format. Note that to enable native vectorized execution, 
both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
+<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
+<td><p>Whether to enable the pre-fetching feature of CometScan.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.convert.json.enabled</p></td>
-<td><p>When enabled, data from Spark (non-native) JSON v1 and v2 scans will be 
converted to Arrow format. Note that to enable native vectorized execution, 
both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
+<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
+<td><p>The number of threads running pre-fetching for CometScan. Effective 
only if spark.comet.scan.preFetch.enabled is enabled. Note that more 
pre-fetching threads means a higher memory requirement to store pre-fetched 
row groups.</p></td>
+<td><p>2</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.sparkToColumnar.enabled</p></td>
+<td><p>Whether to enable Spark to Arrow columnar conversion. When this is 
turned on, Comet will convert operators in <code class="docutils literal 
notranslate"><span 
class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code> 
into Arrow columnar format before processing.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.convert.parquet.enabled</p></td>
-<td><p>When enabled, data from Spark (non-native) Parquet v1 and v2 scans will 
be converted to Arrow format. Note that to enable native vectorized execution, 
both this config and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
+<tr 
class="row-even"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
+<td><p>A comma-separated list of operators that will be converted to Arrow 
columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
+<td><p>Range,InMemoryTableScan,RDDScan</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.hadoop.fs.comet.libhdfs.schemes</p></td>
+<td><p>Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side 
accesses via libhdfs, separated by commas. Valid only when built with the hdfs 
feature enabled.</p></td>
+<td><p></p></td>
+</tr>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
+</section>
+<section id="parquet-reader-configuration-settings">
+<h2>Parquet Reader Configuration Settings<a class="headerlink" 
href="#parquet-reader-configuration-settings" title="Link to this 
heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[parquet]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="row-even"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
+<td><p>Whether to use Java direct byte buffer when reading Parquet.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
+<td><p>In the parallel reader, if the read ranges submitted are skewed in 
sizes, this option will cause the reader to break up larger read ranges into 
smaller ranges to reduce the skew. This will result in a slightly larger number 
of connections opened to the file system but may give improved 
performance.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
+<td><p>When enabled, the parallel reader will try to merge ranges of data that 
are separated by less than ‘comet.parquet.read.io.mergeRanges.delta’ bytes. 
Longer continuous reads are faster on cloud storage.</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
+<td><p>The delta in bytes between consecutive read ranges below which the 
parallel reader will try to merge the ranges. The default is 8MB.</p></td>
+<td><p>8388608</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
+<td><p>Whether to enable Comet’s parallel reader for Parquet files. The 
parallel reader reads ranges of consecutive data in a file in parallel. It is 
faster for large files and row groups but uses more resources.</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
+<td><p>The maximum number of parallel threads the parallel reader will use in 
a single executor. For executors configured with a smaller number of cores, use 
a smaller number.</p></td>
+<td><p>16</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.parquet.respectFilterPushdown</p></td>
+<td><p>Whether to respect Spark’s PARQUET_FILTER_PUSHDOWN_ENABLED config. This 
needs to be respected when running the Spark SQL test suite, but the default 
setting results in poor performance in Comet when using the new native scans, 
so it is disabled by default.</p></td>
+<td><p>false</p></td>
+</tr>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
+</section>
+<section id="query-execution-settings">
+<h2>Query Execution Settings<a class="headerlink" 
href="#query-execution-settings" title="Link to this heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[exec]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="row-even"><td><p>spark.comet.caseConversion.enabled</p></td>
+<td><p>Java uses locale-specific rules when converting strings to upper or 
lower case and Rust does not, so we disable upper and lower by default.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.debug.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.debug.enabled</p></td>
 <td><p>Whether to enable debug mode for Comet. When enabled, Comet will 
perform additional checks for debugging purposes, for example validating 
arrays when importing them from the JVM on the native side. Note that these 
checks may be expensive and should only be enabled for debugging.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.dppFallback.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.dppFallback.enabled</p></td>
 <td><p>Whether to fall back to Spark for queries that use DPP.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.enabled</p></td>
 <td><p>Whether to enable the Comet extension for Spark. When this is turned 
on, Spark will use Comet to read Parquet data sources. Note that to enable native 
vectorized execution, both this config and ‘spark.comet.exec.enabled’ need to 
be enabled. By default, this config is the value of the env var <code 
class="docutils literal notranslate"><span 
class="pre">ENABLE_COMET</span></code> if set, or true otherwise.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exceptionOnDatetimeRebase</p></td>
+<tr class="row-even"><td><p>spark.comet.exceptionOnDatetimeRebase</p></td>
 <td><p>Whether to throw an exception when seeing dates/timestamps from the 
legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps 
have been written according to the Proleptic Gregorian calendar. When this is 
true, Comet will throw exceptions when seeing dates/timestamps that were 
written by a Spark version before 3.0. If this is false, these dates/timestamps 
will be read as if they were written according to the Proleptic Gregorian 
calendar and will not be rebased.</p></td>
 <td><p>false</p></td>
 </tr>
+<tr class="row-odd"><td><p>spark.comet.exec.enabled</p></td>
+<td><p>Whether to enable Comet native vectorized execution for Spark. This 
controls whether Spark should convert operators into their Comet counterparts 
and execute them natively. Note: each operator is currently associated with a 
separate config of the form 
‘spark.comet.exec.&lt;operator_name&gt;.enabled’, and both that config and 
this one must be enabled for the operator to be executed natively.</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
+<td><p>Experimental feature to force Spark to replace SortMergeJoin with 
ShuffledHashJoin for improved performance. This feature is not stable yet. For 
more information, refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.explain.native.enabled</p></td>
+<td><p>When this setting is enabled, Comet will provide a tree representation 
of the native query plan before execution and again after execution, with 
metrics.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.explain.verbose.enabled</p></td>
+<td><p>When this setting is enabled, Comet’s extended explain output will 
provide the full query plan annotated with fallback reasons as well as a 
summary of how much of the plan was accelerated by Comet. When this setting is 
disabled, a list of fallback reasons will be provided instead.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
+<td><p>When this setting is enabled, Comet will provide logging explaining the 
reason(s) why a query stage cannot be executed natively. Set this to false to 
reduce the amount of logging.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.allowIncompatible</p></td>
+<td><p>Comet is not currently fully compatible with Spark for all expressions. 
Set this config to true to allow them anyway. For more information, refer to 
the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.logFallbackReasons.enabled</p></td>
+<td><p>When this setting is enabled, Comet will log warnings for all fallback 
reasons.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.maxTempDirectorySize</p></td>
+<td><p>The maximum amount of data (in bytes) stored inside the temporary 
directories.</p></td>
+<td><p>107374182400b</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.metrics.updateInterval</p></td>
+<td><p>The interval in milliseconds to update metrics. If the interval is 
negative, metrics will be updated upon task completion.</p></td>
+<td><p>3000</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
+<td><p>Whether to require the Comet native library to load successfully when 
Comet is enabled. If not, Comet will silently fall back to Spark when it fails 
to load the native library. Otherwise, an error will be thrown and the Spark 
job will be aborted.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.regexp.allowIncompatible</p></td>
+<td><p>Comet is not currently fully compatible with Spark for all regular 
expressions. Set this config to true to allow them anyway. For more 
information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
+<td><p>false</p></td>
+</tr>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
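+<p>A minimal sketch of enabling native execution together with fallback
+logging; the values are illustrative:</p>
+<div class="highlight-scala notranslate"><div class="highlight"><pre>
+import org.apache.spark.sql.SparkSession
+
+val spark = SparkSession.builder()
+  // Both configs must be enabled for native vectorized execution.
+  .config("spark.comet.enabled", "true")
+  .config("spark.comet.exec.enabled", "true")
+  // Log the reason(s) when a query stage cannot run natively.
+  .config("spark.comet.explainFallback.enabled", "true")
+  .getOrCreate()
+</pre></div></div>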
+</section>
+<section id="enabling-or-disabling-individual-operators">
+<h2>Enabling or Disabling Individual Operators<a class="headerlink" 
href="#enabling-or-disabling-individual-operators" title="Link to this 
heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[enable_exec]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
+</tr>
+</thead>
+<tbody>
 <tr class="row-even"><td><p>spark.comet.exec.aggregate.enabled</p></td>
 <td><p>Whether to enable aggregate by default.</p></td>
 <td><p>true</p></td>
@@ -663,74 +842,42 @@ under the License.
 <td><p>Whether to enable collectLimit by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.enabled</p></td>
-<td><p>Whether to enable Comet native vectorized execution for Spark. This 
controls whether Spark should convert operators into their Comet counterparts 
and execute them in native space. Note: each operator is associated with a 
separate config in the format of 
‘spark.comet.exec.&lt;operator_name&gt;.enabled’ at the moment, and both the 
config and this need to be turned on, in order for the operator to be executed 
in native.</p></td>
-<td><p>true</p></td>
-</tr>
-<tr class="row-even"><td><p>spark.comet.exec.expand.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.expand.enabled</p></td>
 <td><p>Whether to enable expand by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.filter.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.filter.enabled</p></td>
 <td><p>Whether to enable filter by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.globalLimit.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.globalLimit.enabled</p></td>
 <td><p>Whether to enable globalLimit by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.hashJoin.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.hashJoin.enabled</p></td>
 <td><p>Whether to enable hashJoin by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.localLimit.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.localLimit.enabled</p></td>
 <td><p>Whether to enable localLimit by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.memoryPool</p></td>
-<td><p>The type of memory pool to be used for Comet native execution when 
running Spark in off-heap mode. Available pool types are ‘greedy_unified’ and 
<code class="docutils literal notranslate"><span 
class="pre">fair_unified</span></code>. For more information, refer to the 
Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
-<td><p>fair_unified</p></td>
-</tr>
-<tr class="row-even"><td><p>spark.comet.exec.memoryPool.fraction</p></td>
-<td><p>Fraction of off-heap memory pool that is available to Comet. Only 
applies to off-heap mode. For more information, refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
-<td><p>1.0</p></td>
-</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.project.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.project.enabled</p></td>
 <td><p>Whether to enable project by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.exec.replaceSortMergeJoin</p></td>
-<td><p>Experimental feature to force Spark to replace SortMergeJoin with 
ShuffledHashJoin for improved performance. This feature is not stable yet. For 
more information, refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
-<td><p>false</p></td>
-</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
-<td><p>The codec of Comet native shuffle used to compress shuffle data. lz4, 
zstd, and snappy are supported. Compression can be disabled by setting 
spark.shuffle.compress=false.</p></td>
-<td><p>lz4</p></td>
-</tr>
-<tr 
class="row-even"><td><p>spark.comet.exec.shuffle.compression.zstd.level</p></td>
-<td><p>The compression level to use when compressing shuffle files with 
zstd.</p></td>
-<td><p>1</p></td>
-</tr>
-<tr class="row-odd"><td><p>spark.comet.exec.shuffle.enabled</p></td>
-<td><p>Whether to enable Comet native shuffle. Note that this requires setting 
‘spark.shuffle.manager’ to 
‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. 
‘spark.shuffle.manager’ must be set before starting the Spark application and 
cannot be changed during the application.</p></td>
-<td><p>true</p></td>
-</tr>
-<tr class="row-even"><td><p>spark.comet.exec.sort.enabled</p></td>
+<tr class="row-odd"><td><p>spark.comet.exec.sort.enabled</p></td>
 <td><p>Whether to enable sort by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
+<tr class="row-even"><td><p>spark.comet.exec.sortMergeJoin.enabled</p></td>
 <td><p>Whether to enable sortMergeJoin by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
+<tr 
class="row-odd"><td><p>spark.comet.exec.sortMergeJoinWithJoinFilter.enabled</p></td>
 <td><p>Experimental support for Sort Merge Join with filter</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.exec.stddev.enabled</p></td>
-<td><p>Whether to enable stddev by default. stddev is slower than Spark’s 
implementation.</p></td>
-<td><p>true</p></td>
-</tr>
 <tr 
class="row-even"><td><p>spark.comet.exec.takeOrderedAndProject.enabled</p></td>
 <td><p>Whether to enable takeOrderedAndProject by default.</p></td>
 <td><p>true</p></td>
@@ -743,93 +890,709 @@ under the License.
 <td><p>Whether to enable window by default.</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explain.native.enabled</p></td>
-<td><p>When this setting is enabled, Comet will provide a tree representation 
of the native query plan before execution and again after execution, with 
metrics.</p></td>
-<td><p>false</p></td>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
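+<p>Each operator can be toggled independently. A sketch of disabling a single
+operator at the session level (hypothetical example):</p>
+<div class="highlight-scala notranslate"><div class="highlight"><pre>
+// Fall back to Spark for sort-merge join; other operators stay native.
+spark.conf.set("spark.comet.exec.sortMergeJoin.enabled", "false")
+</pre></div></div>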
+</section>
+<section id="enabling-or-disabling-individual-scalar-expressions">
+<h2>Enabling or Disabling Individual Scalar Expressions<a class="headerlink" 
href="#enabling-or-disabling-individual-scalar-expressions" title="Link to this 
heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[enable_expr]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
 </tr>
-<tr class="row-even"><td><p>spark.comet.explain.verbose.enabled</p></td>
-<td><p>When this setting is enabled, Comet’s extended explain output will 
provide the full query plan annotated with fallback reasons as well as a 
summary of how much of the plan was accelerated by Comet. When this setting is 
disabled, a list of fallback reasons will be provided instead.</p></td>
-<td><p>false</p></td>
+</thead>
+<tbody>
+<tr class="row-even"><td><p>spark.comet.expression.Acos.enabled</p></td>
+<td><p>Enable Comet acceleration for Acos</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.explainFallback.enabled</p></td>
-<td><p>When this setting is enabled, Comet will provide logging explaining the 
reason(s) why a query stage cannot be executed natively. Set this to false to 
reduce the amount of logging.</p></td>
-<td><p>false</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.Add.enabled</p></td>
+<td><p>Enable Comet acceleration for Add</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.expression.allowIncompatible</p></td>
-<td><p>Comet is not currently fully compatible with Spark for all expressions. 
Set this config to true to allow them anyway. For more information, refer to 
the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
-<td><p>false</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.Alias.enabled</p></td>
+<td><p>Enable Comet acceleration for Alias</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.logFallbackReasons.enabled</p></td>
-<td><p>When this setting is enabled, Comet will log warnings for all fallback 
reasons.</p></td>
-<td><p>false</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.And.enabled</p></td>
+<td><p>Enable Comet acceleration for And</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.maxTempDirectorySize</p></td>
-<td><p>The maximum amount of data (in bytes) stored inside the temporary 
directories.</p></td>
-<td><p>107374182400b</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.ArrayAppend.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayAppend</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.metrics.updateInterval</p></td>
-<td><p>The interval in milliseconds to update metrics. If interval is 
negative, metrics will be updated upon task completion.</p></td>
-<td><p>3000</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.ArrayCompact.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayCompact</p></td>
+<td><p>true</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.native.shuffle.partitioning.hash.enabled</p></td>
-<td><p>Whether to enable hash partitioning for Comet native shuffle.</p></td>
+<tr 
class="row-even"><td><p>spark.comet.expression.ArrayContains.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayContains</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.native.shuffle.partitioning.range.enabled</p></td>
-<td><p>Whether to enable range partitioning for Comet native shuffle.</p></td>
+<tr 
class="row-odd"><td><p>spark.comet.expression.ArrayDistinct.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayDistinct</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.nativeLoadRequired</p></td>
-<td><p>Whether to require Comet native library to load successfully when Comet 
is enabled. If not, Comet will silently fallback to Spark when it fails to load 
the native lib. Otherwise, an error will be thrown and the Spark job will be 
aborted.</p></td>
-<td><p>false</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.ArrayExcept.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayExcept</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.enable.directBuffer</p></td>
-<td><p>Whether to use Java direct byte buffer when reading Parquet.</p></td>
-<td><p>false</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.ArrayFilter.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayFilter</p></td>
+<td><p>true</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.parquet.read.io.adjust.readRange.skew</p></td>
-<td><p>In the parallel reader, if the read ranges submitted are skewed in 
sizes, this option will cause the reader to break up larger read ranges into 
smaller ranges to reduce the skew. This will result in a slightly larger number 
of connections opened to the file system but may give improved 
performance.</p></td>
-<td><p>false</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.ArrayInsert.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayInsert</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.read.io.mergeRanges</p></td>
-<td><p>When enabled the parallel reader will try to merge ranges of data that 
are separated by less than ‘comet.parquet.read.io.mergeRanges.delta’ bytes. 
Longer continuous reads are faster on cloud storage.</p></td>
+<tr 
class="row-odd"><td><p>spark.comet.expression.ArrayIntersect.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayIntersect</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.parquet.read.io.mergeRanges.delta</p></td>
-<td><p>The delta in bytes between consecutive read ranges below which the 
parallel reader will try to merge the ranges. The default is 8MB.</p></td>
-<td><p>8388608</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.ArrayJoin.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayJoin</p></td>
+<td><p>true</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.parquet.read.parallel.io.enabled</p></td>
-<td><p>Whether to enable Comet’s parallel reader for Parquet files. The 
parallel reader reads ranges of consecutive data in a  file in parallel. It is 
faster for large files and row groups but uses more resources.</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.ArrayMax.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayMax</p></td>
 <td><p>true</p></td>
 </tr>
-<tr 
class="row-even"><td><p>spark.comet.parquet.read.parallel.io.thread-pool.size</p></td>
-<td><p>The maximum number of parallel threads the parallel reader will use in 
a single executor. For executors configured with a smaller number of cores, use 
a smaller number.</p></td>
-<td><p>16</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.ArrayMin.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayMin</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.parquet.respectFilterPushdown</p></td>
-<td><p>Whether to respect Spark’s PARQUET_FILTER_PUSHDOWN_ENABLED config. This 
needs to be respected when running the Spark SQL test suite but the default 
setting results in poor performance in Comet when using the new native scans, 
disabled by default</p></td>
-<td><p>false</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.ArrayRemove.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayRemove</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.regexp.allowIncompatible</p></td>
-<td><p>Comet is not currently fully compatible with Spark for all regular 
expressions. Set this config to true to allow them anyway. For more 
information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
-<td><p>false</p></td>
+<tr class="row-even"><td><p>spark.comet.expression.ArrayRepeat.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayRepeat</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.allowIncompatible</p></td>
-<td><p>Some Comet scan implementations are not currently fully compatible with 
Spark for all datatypes. Set this config to true to allow them anyway. For more 
information, refer to the Comet Compatibility Guide 
(https://datafusion.apache.org/comet/user-guide/compatibility.html).</p></td>
-<td><p>false</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.ArrayUnion.enabled</p></td>
+<td><p>Enable Comet acceleration for ArrayUnion</p></td>
+<td><p>true</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.enabled</p></td>
-<td><p>Whether to enable native scans. When this is turned on, Spark will use 
Comet to read supported data sources (currently only Parquet is supported 
natively). Note that to enable native vectorized execution, both this config 
and ‘spark.comet.exec.enabled’ need to be enabled.</p></td>
+<tr 
class="row-even"><td><p>spark.comet.expression.ArraysOverlap.enabled</p></td>
+<td><p>Enable Comet acceleration for ArraysOverlap</p></td>
 <td><p>true</p></td>
 </tr>
-<tr class="row-odd"><td><p>spark.comet.scan.preFetch.enabled</p></td>
-<td><p>Whether to enable pre-fetching feature of CometScan.</p></td>
+<tr class="row-odd"><td><p>spark.comet.expression.Ascii.enabled</p></td>
+<td><p>Enable Comet acceleration for Ascii</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Asin.enabled</p></td>
+<td><p>Enable Comet acceleration for Asin</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Atan.enabled</p></td>
+<td><p>Enable Comet acceleration for Atan</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Atan2.enabled</p></td>
+<td><p>Enable Comet acceleration for Atan2</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.AttributeReference.enabled</p></td>
+<td><p>Enable Comet acceleration for AttributeReference</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.BitLength.enabled</p></td>
+<td><p>Enable Comet acceleration for BitLength</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.BitwiseAnd.enabled</p></td>
+<td><p>Enable Comet acceleration for BitwiseAnd</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.BitwiseCount.enabled</p></td>
+<td><p>Enable Comet acceleration for BitwiseCount</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.BitwiseGet.enabled</p></td>
+<td><p>Enable Comet acceleration for BitwiseGet</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.BitwiseNot.enabled</p></td>
+<td><p>Enable Comet acceleration for BitwiseNot</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.BitwiseOr.enabled</p></td>
+<td><p>Enable Comet acceleration for BitwiseOr</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.BitwiseXor.enabled</p></td>
+<td><p>Enable Comet acceleration for BitwiseXor</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.CaseWhen.enabled</p></td>
+<td><p>Enable Comet acceleration for CaseWhen</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Cast.enabled</p></td>
+<td><p>Enable Comet acceleration for Cast</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Ceil.enabled</p></td>
+<td><p>Enable Comet acceleration for Ceil</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.CheckOverflow.enabled</p></td>
+<td><p>Enable Comet acceleration for CheckOverflow</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Chr.enabled</p></td>
+<td><p>Enable Comet acceleration for Chr</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Coalesce.enabled</p></td>
+<td><p>Enable Comet acceleration for Coalesce</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.ConcatWs.enabled</p></td>
+<td><p>Enable Comet acceleration for ConcatWs</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Contains.enabled</p></td>
+<td><p>Enable Comet acceleration for Contains</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Cos.enabled</p></td>
+<td><p>Enable Comet acceleration for Cos</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.CreateArray.enabled</p></td>
+<td><p>Enable Comet acceleration for CreateArray</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.CreateNamedStruct.enabled</p></td>
+<td><p>Enable Comet acceleration for CreateNamedStruct</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.DateAdd.enabled</p></td>
+<td><p>Enable Comet acceleration for DateAdd</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.DateSub.enabled</p></td>
+<td><p>Enable Comet acceleration for DateSub</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.DayOfMonth.enabled</p></td>
+<td><p>Enable Comet acceleration for DayOfMonth</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.DayOfWeek.enabled</p></td>
+<td><p>Enable Comet acceleration for DayOfWeek</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.DayOfYear.enabled</p></td>
+<td><p>Enable Comet acceleration for DayOfYear</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Divide.enabled</p></td>
+<td><p>Enable Comet acceleration for Divide</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.ElementAt.enabled</p></td>
+<td><p>Enable Comet acceleration for ElementAt</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.EndsWith.enabled</p></td>
+<td><p>Enable Comet acceleration for EndsWith</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.EqualNullSafe.enabled</p></td>
+<td><p>Enable Comet acceleration for EqualNullSafe</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.EqualTo.enabled</p></td>
+<td><p>Enable Comet acceleration for EqualTo</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Exp.enabled</p></td>
+<td><p>Enable Comet acceleration for Exp</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Expm1.enabled</p></td>
+<td><p>Enable Comet acceleration for Expm1</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Flatten.enabled</p></td>
+<td><p>Enable Comet acceleration for Flatten</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Floor.enabled</p></td>
+<td><p>Enable Comet acceleration for Floor</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.FromUnixTime.enabled</p></td>
+<td><p>Enable Comet acceleration for FromUnixTime</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.GetArrayItem.enabled</p></td>
+<td><p>Enable Comet acceleration for GetArrayItem</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.GetArrayStructFields.enabled</p></td>
+<td><p>Enable Comet acceleration for GetArrayStructFields</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.GetMapValue.enabled</p></td>
+<td><p>Enable Comet acceleration for GetMapValue</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.GetStructField.enabled</p></td>
+<td><p>Enable Comet acceleration for GetStructField</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.GreaterThan.enabled</p></td>
+<td><p>Enable Comet acceleration for GreaterThan</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.GreaterThanOrEqual.enabled</p></td>
+<td><p>Enable Comet acceleration for GreaterThanOrEqual</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Hex.enabled</p></td>
+<td><p>Enable Comet acceleration for Hex</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Hour.enabled</p></td>
+<td><p>Enable Comet acceleration for Hour</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.If.enabled</p></td>
+<td><p>Enable Comet acceleration for If</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.In.enabled</p></td>
+<td><p>Enable Comet acceleration for In</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.InSet.enabled</p></td>
+<td><p>Enable Comet acceleration for InSet</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.InitCap.enabled</p></td>
+<td><p>Enable Comet acceleration for InitCap</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.IntegralDivide.enabled</p></td>
+<td><p>Enable Comet acceleration for IntegralDivide</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.IsNaN.enabled</p></td>
+<td><p>Enable Comet acceleration for IsNaN</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.IsNotNull.enabled</p></td>
+<td><p>Enable Comet acceleration for IsNotNull</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.IsNull.enabled</p></td>
+<td><p>Enable Comet acceleration for IsNull</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Length.enabled</p></td>
+<td><p>Enable Comet acceleration for Length</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.LessThan.enabled</p></td>
+<td><p>Enable Comet acceleration for LessThan</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.LessThanOrEqual.enabled</p></td>
+<td><p>Enable Comet acceleration for LessThanOrEqual</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Like.enabled</p></td>
+<td><p>Enable Comet acceleration for Like</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Literal.enabled</p></td>
+<td><p>Enable Comet acceleration for Literal</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Log.enabled</p></td>
+<td><p>Enable Comet acceleration for Log</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Log10.enabled</p></td>
+<td><p>Enable Comet acceleration for Log10</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Log2.enabled</p></td>
+<td><p>Enable Comet acceleration for Log2</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Lower.enabled</p></td>
+<td><p>Enable Comet acceleration for Lower</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.MapEntries.enabled</p></td>
+<td><p>Enable Comet acceleration for MapEntries</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.MapFromArrays.enabled</p></td>
+<td><p>Enable Comet acceleration for MapFromArrays</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.MapKeys.enabled</p></td>
+<td><p>Enable Comet acceleration for MapKeys</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.MapValues.enabled</p></td>
+<td><p>Enable Comet acceleration for MapValues</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Md5.enabled</p></td>
+<td><p>Enable Comet acceleration for Md5</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Minute.enabled</p></td>
+<td><p>Enable Comet acceleration for Minute</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.MonotonicallyIncreasingID.enabled</p></td>
+<td><p>Enable Comet acceleration for MonotonicallyIncreasingID</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Month.enabled</p></td>
+<td><p>Enable Comet acceleration for Month</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Multiply.enabled</p></td>
+<td><p>Enable Comet acceleration for Multiply</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Murmur3Hash.enabled</p></td>
+<td><p>Enable Comet acceleration for Murmur3Hash</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Not.enabled</p></td>
+<td><p>Enable Comet acceleration for Not</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.OctetLength.enabled</p></td>
+<td><p>Enable Comet acceleration for OctetLength</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Or.enabled</p></td>
+<td><p>Enable Comet acceleration for Or</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Pow.enabled</p></td>
+<td><p>Enable Comet acceleration for Pow</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Quarter.enabled</p></td>
+<td><p>Enable Comet acceleration for Quarter</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.RLike.enabled</p></td>
+<td><p>Enable Comet acceleration for RLike</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Rand.enabled</p></td>
+<td><p>Enable Comet acceleration for Rand</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Randn.enabled</p></td>
+<td><p>Enable Comet acceleration for Randn</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.RegExpReplace.enabled</p></td>
+<td><p>Enable Comet acceleration for RegExpReplace</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Remainder.enabled</p></td>
+<td><p>Enable Comet acceleration for Remainder</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Reverse.enabled</p></td>
+<td><p>Enable Comet acceleration for Reverse</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Round.enabled</p></td>
+<td><p>Enable Comet acceleration for Round</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Second.enabled</p></td>
+<td><p>Enable Comet acceleration for Second</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Sha2.enabled</p></td>
+<td><p>Enable Comet acceleration for Sha2</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.ShiftLeft.enabled</p></td>
+<td><p>Enable Comet acceleration for ShiftLeft</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.ShiftRight.enabled</p></td>
+<td><p>Enable Comet acceleration for ShiftRight</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Signum.enabled</p></td>
+<td><p>Enable Comet acceleration for Signum</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Sin.enabled</p></td>
+<td><p>Enable Comet acceleration for Sin</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.SparkPartitionID.enabled</p></td>
+<td><p>Enable Comet acceleration for SparkPartitionID</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Sqrt.enabled</p></td>
+<td><p>Enable Comet acceleration for Sqrt</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.StartsWith.enabled</p></td>
+<td><p>Enable Comet acceleration for StartsWith</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.StringInstr.enabled</p></td>
+<td><p>Enable Comet acceleration for StringInstr</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.StringLPad.enabled</p></td>
+<td><p>Enable Comet acceleration for StringLPad</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.StringRPad.enabled</p></td>
+<td><p>Enable Comet acceleration for StringRPad</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.StringRepeat.enabled</p></td>
+<td><p>Enable Comet acceleration for StringRepeat</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.StringReplace.enabled</p></td>
+<td><p>Enable Comet acceleration for StringReplace</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.StringSpace.enabled</p></td>
+<td><p>Enable Comet acceleration for StringSpace</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.StringTranslate.enabled</p></td>
+<td><p>Enable Comet acceleration for StringTranslate</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.StringTrim.enabled</p></td>
+<td><p>Enable Comet acceleration for StringTrim</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.StringTrimBoth.enabled</p></td>
+<td><p>Enable Comet acceleration for StringTrimBoth</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.StringTrimLeft.enabled</p></td>
+<td><p>Enable Comet acceleration for StringTrimLeft</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.StringTrimRight.enabled</p></td>
+<td><p>Enable Comet acceleration for StringTrimRight</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.StructsToJson.enabled</p></td>
+<td><p>Enable Comet acceleration for StructsToJson</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Substring.enabled</p></td>
+<td><p>Enable Comet acceleration for Substring</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Subtract.enabled</p></td>
+<td><p>Enable Comet acceleration for Subtract</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Tan.enabled</p></td>
+<td><p>Enable Comet acceleration for Tan</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.TruncDate.enabled</p></td>
+<td><p>Enable Comet acceleration for TruncDate</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.TruncTimestamp.enabled</p></td>
+<td><p>Enable Comet acceleration for TruncTimestamp</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.UnaryMinus.enabled</p></td>
+<td><p>Enable Comet acceleration for UnaryMinus</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Unhex.enabled</p></td>
+<td><p>Enable Comet acceleration for Unhex</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Upper.enabled</p></td>
+<td><p>Enable Comet acceleration for Upper</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.WeekDay.enabled</p></td>
+<td><p>Enable Comet acceleration for WeekDay</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.WeekOfYear.enabled</p></td>
+<td><p>Enable Comet acceleration for WeekOfYear</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.XxHash64.enabled</p></td>
+<td><p>Enable Comet acceleration for XxHash64</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Year.enabled</p></td>
+<td><p>Enable Comet acceleration for Year</p></td>
+<td><p>true</p></td>
+</tr>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
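+<p>A sketch of disabling one scalar expression so that queries using it fall
+back to Spark (hypothetical example):</p>
+<div class="highlight-scala notranslate"><div class="highlight"><pre>
+// upper() will no longer be accelerated by Comet in this session.
+spark.conf.set("spark.comet.expression.Upper.enabled", "false")
+</pre></div></div>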
+</section>
+<section id="enabling-or-disabling-individual-aggregate-expressions">
+<h2>Enabling or Disabling Individual Aggregate Expressions<a 
class="headerlink" 
href="#enabling-or-disabling-individual-aggregate-expressions" title="Link to 
this heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[enable_agg_expr]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="row-even"><td><p>spark.comet.expression.Average.enabled</p></td>
+<td><p>Enable Comet acceleration for Average</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.BitAndAgg.enabled</p></td>
+<td><p>Enable Comet acceleration for BitAndAgg</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.BitOrAgg.enabled</p></td>
+<td><p>Enable Comet acceleration for BitOrAgg</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.BitXorAgg.enabled</p></td>
+<td><p>Enable Comet acceleration for BitXorAgg</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.expression.BloomFilterAggregate.enabled</p></td>
+<td><p>Enable Comet acceleration for BloomFilterAggregate</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Corr.enabled</p></td>
+<td><p>Enable Comet acceleration for Corr</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Count.enabled</p></td>
+<td><p>Enable Comet acceleration for Count</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.expression.CovPopulation.enabled</p></td>
+<td><p>Enable Comet acceleration for CovPopulation</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.CovSample.enabled</p></td>
+<td><p>Enable Comet acceleration for CovSample</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.First.enabled</p></td>
+<td><p>Enable Comet acceleration for First</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Last.enabled</p></td>
+<td><p>Enable Comet acceleration for Last</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Max.enabled</p></td>
+<td><p>Enable Comet acceleration for Max</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.Min.enabled</p></td>
+<td><p>Enable Comet acceleration for Min</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.StddevPop.enabled</p></td>
+<td><p>Enable Comet acceleration for StddevPop</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.StddevSamp.enabled</p></td>
+<td><p>Enable Comet acceleration for StddevSamp</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.Sum.enabled</p></td>
+<td><p>Enable Comet acceleration for Sum</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.expression.VariancePop.enabled</p></td>
+<td><p>Enable Comet acceleration for VariancePop</p></td>
+<td><p>true</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.expression.VarianceSamp.enabled</p></td>
+<td><p>Enable Comet acceleration for VarianceSamp</p></td>
+<td><p>true</p></td>
+</tr>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
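+<p>Aggregate expressions follow the same pattern (hypothetical example):</p>
+<div class="highlight-scala notranslate"><div class="highlight"><pre>
+// Fall back to Spark's implementation of AVG.
+spark.conf.set("spark.comet.expression.Average.enabled", "false")
+</pre></div></div>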
+</section>
+<section id="shuffle-configuration-settings">
+<h2>Shuffle Configuration Settings<a class="headerlink" 
href="#shuffle-configuration-settings" title="Link to this heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[shuffle]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
+</tr>
+</thead>
+<tbody>
+<tr class="row-even"><td><p>spark.comet.columnar.shuffle.async.enabled</p></td>
+<td><p>Whether to enable asynchronous shuffle for Arrow-based shuffle.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p>spark.comet.scan.preFetch.threadNum</p></td>
-<td><p>The number of threads running pre-fetching for CometScan. Effective if 
spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching 
threads means more memory requirement to store pre-fetched row groups.</p></td>
-<td><p>2</p></td>
+<tr 
class="row-odd"><td><p>spark.comet.columnar.shuffle.async.max.thread.num</p></td>
+<td><p>Maximum number of threads on an executor used for Comet async columnar 
shuffle. This is the upper bound of the total number of shuffle threads per 
executor. In other words, if the number of cores * the number of shuffle 
threads per task <code class="docutils literal notranslate"><span 
class="pre">spark.comet.columnar.shuffle.async.thread.num</span></code> is 
larger than this config, Comet will use this config as the number of shuffle 
threads per executor instead.</p></td>
+<td><p>100</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.columnar.shuffle.async.thread.num</p></td>
+<td><p>Number of threads used for Comet async columnar shuffle per shuffle 
task. Note that more threads require more memory to buffer shuffle data before 
flushing to disk. Also, more threads may not always improve performance, and 
the value should be set based on the number of cores available.</p></td>
+<td><p>3</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.columnar.shuffle.batch.size</p></td>
+<td><p>Batch size when writing out sorted spill files on the native side. Note 
that this should not be larger than batch size (i.e., <code class="docutils 
literal notranslate"><span class="pre">spark.comet.batchSize</span></code>). 
Otherwise it will produce larger batches than expected in the native operator 
after shuffle.</p></td>
+<td><p>8192</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.compression.codec</p></td>
+<td><p>The codec of Comet native shuffle used to compress shuffle data. lz4, 
zstd, and snappy are supported. Compression can be disabled by setting 
spark.shuffle.compress=false.</p></td>
+<td><p>lz4</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.exec.shuffle.compression.zstd.level</p></td>
+<td><p>The compression level to use when compressing shuffle files with 
zstd.</p></td>
+<td><p>1</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.exec.shuffle.enabled</p></td>
+<td><p>Whether to enable Comet native shuffle. Note that this requires setting 
‘spark.shuffle.manager’ to 
‘org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager’. 
‘spark.shuffle.manager’ must be set before starting the Spark application and 
cannot be changed during the application.</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-odd"><td><p>spark.comet.native.shuffle.partitioning.hash.enabled</p></td>
+<td><p>Whether to enable hash partitioning for Comet native shuffle.</p></td>
+<td><p>true</p></td>
+</tr>
+<tr 
class="row-even"><td><p>spark.comet.native.shuffle.partitioning.range.enabled</p></td>
+<td><p>Whether to enable range partitioning for Comet native shuffle.</p></td>
+<td><p>true</p></td>
 </tr>
 <tr class="row-odd"><td><p>spark.comet.shuffle.preferDictionary.ratio</p></td>
 <td><p>The ratio of total values to distinct values in a string column to 
decide whether to prefer dictionary encoding when shuffling the column. If the 
ratio is higher than this config, dictionary encoding will be used on shuffling 
string column. This config is effective if it is higher than 1.0. Note that 
this config is only used when <code class="docutils literal notranslate"><span 
class="pre">spark.comet.exec.shuffle.mode</span></code> is <code 
class="docutils literal notranslate"><s [...]
@@ -839,17 +1602,42 @@ under the License.
 <td><p>Comet reports smaller sizes for shuffle due to using Arrow’s columnar 
memory format, and this can result in Spark choosing a different join strategy 
because the estimated size of the exchange is smaller. Comet will multiply 
sizeInBytes by this amount to avoid regressions in join strategy.</p></td>
 <td><p>1.0</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
-<td><p>A comma-separated list of operators that will be converted to Arrow 
columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
-<td><p>Range,InMemoryTableScan,RDDScan</p></td>
+</tbody>
+</table>
+<!--END:CONFIG_TABLE-->
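+<p>A sketch of a native shuffle setup. Because the shuffle manager cannot be
+changed after startup, it belongs in SparkConf or on the spark-submit command
+line rather than in a running session; the codec and level are illustrative:</p>
+<div class="highlight-scala notranslate"><div class="highlight"><pre>
+import org.apache.spark.SparkConf
+
+val conf = new SparkConf()
+  // Required for Comet native shuffle; must be set before startup.
+  .set("spark.shuffle.manager",
+    "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
+  .set("spark.comet.exec.shuffle.enabled", "true")
+  // lz4 (default), zstd, and snappy are supported.
+  .set("spark.comet.exec.shuffle.compression.codec", "zstd")
+  .set("spark.comet.exec.shuffle.compression.zstd.level", "3")
+</pre></div></div>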
+</section>
+<section id="memory-tuning-configuration-settings">
+<h2>Memory &amp; Tuning Configuration Settings<a class="headerlink" 
href="#memory-tuning-configuration-settings" title="Link to this 
heading">¶</a></h2>
+<!-- WARNING! DO NOT MANUALLY MODIFY CONTENT BETWEEN THE BEGIN AND END TAGS -->
+<!--BEGIN:CONFIG_TABLE[tuning]-->
+<table class="table">
+<thead>
+<tr class="row-odd"><th class="head"><p>Config</p></th>
+<th class="head"><p>Description</p></th>
+<th class="head"><p>Default Value</p></th>
 </tr>
-<tr class="row-even"><td><p>spark.hadoop.fs.comet.libhdfs.schemes</p></td>
-<td><p>Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side 
accesses via libhdfs, separated by commas. Valid only when built with hdfs 
feature enabled.</p></td>
-<td><p></p></td>
+</thead>
+<tbody>
+<tr class="row-even"><td><p>spark.comet.batchSize</p></td>
+<td><p>The columnar batch size, i.e., the maximum number of rows that a batch 
can contain.</p></td>
+<td><p>8192</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.exec.memoryPool</p></td>
+<td><p>The type of memory pool to be used for Comet native execution when 
running Spark in off-heap mode. Available pool types are <code class="docutils 
literal notranslate"><span class="pre">greedy_unified</span></code> and <code 
class="docutils literal notranslate"><span 
class="pre">fair_unified</span></code>. For more information, refer to the 
Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
+<td><p>fair_unified</p></td>
+</tr>
+<tr class="row-even"><td><p>spark.comet.exec.memoryPool.fraction</p></td>
+<td><p>Fraction of off-heap memory pool that is available to Comet. Only 
applies to off-heap mode. For more information, refer to the Comet Tuning Guide 
(https://datafusion.apache.org/comet/user-guide/tuning.html).</p></td>
+<td><p>1.0</p></td>
+</tr>
+<tr class="row-odd"><td><p>spark.comet.tracing.enabled</p></td>
+<td><p>Enable fine-grained tracing of events and memory usage. For more 
information, refer to the Comet Tracing Guide 
(https://datafusion.apache.org/comet/user-guide/tracing.html).</p></td>
+<td><p>false</p></td>
 </tr>
 </tbody>
 </table>
 <!--END:CONFIG_TABLE-->
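+<p>A sketch of off-heap memory settings for Comet. The sizes, pool type, and
+fraction are illustrative; spark.memory.offHeap.* are standard Spark
+configs:</p>
+<div class="highlight-scala notranslate"><div class="highlight"><pre>
+import org.apache.spark.SparkConf
+
+val conf = new SparkConf()
+  // Comet's unified memory pools apply when Spark runs in off-heap mode.
+  .set("spark.memory.offHeap.enabled", "true")
+  .set("spark.memory.offHeap.size", "8g")
+  .set("spark.comet.exec.memoryPool", "greedy_unified")
+  // Give Comet 80% of the off-heap pool instead of the default 1.0.
+  .set("spark.comet.exec.memoryPool.fraction", "0.8")
+</pre></div></div>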
+</section>
 </section>
 
 
diff --git a/user-guide/latest/expressions.html 
b/user-guide/latest/expressions.html
index 037190858..eabfe594b 100644
--- a/user-guide/latest/expressions.html
+++ b/user-guide/latest/expressions.html
@@ -657,9 +657,10 @@ under the License.
 <p>Comet supports the following Spark expressions. Expressions that are marked 
as Spark-compatible will either run
 natively in Comet and provide the same results as Spark, or will fall back to 
Spark for cases that would not
 be compatible.</p>
-<p>All expressions are enabled by default, but can be disabled by setting
+<p>All expressions are enabled by default, but most can be disabled by setting
 <code class="docutils literal notranslate"><span 
class="pre">spark.comet.expression.EXPRNAME.enabled=false</span></code>, where 
<code class="docutils literal notranslate"><span 
class="pre">EXPRNAME</span></code> is the expression name as specified in
-the following tables, such as <code class="docutils literal notranslate"><span 
class="pre">Length</span></code>, or <code class="docutils literal 
notranslate"><span class="pre">StartsWith</span></code>.</p>
+the following tables, such as <code class="docutils literal notranslate"><span 
class="pre">Length</span></code>, or <code class="docutils literal 
notranslate"><span class="pre">StartsWith</span></code>. See the <a 
class="reference internal" href="configs.html"><span class="std std-doc">Comet 
Configuration Guide</span></a> for a full list
of expressions that can be disabled.</p>
 <p>Expressions that are not Spark-compatible will fall back to Spark by 
default and can be enabled by setting
 <code class="docutils literal notranslate"><span 
class="pre">spark.comet.expression.EXPRNAME.allowIncompatible=true</span></code>.</p>
 <p>It is also possible to specify <code class="docutils literal 
notranslate"><span 
class="pre">spark.comet.expression.allowIncompatible=true</span></code> to 
enable all


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
