This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new b0b946e68 Publish built docs triggered by 3778f34aee30746f4430db1acb1e49e452bce16a
b0b946e68 is described below

commit b0b946e6895a19250ddd99b280b0675e97f9fef1
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Sat Sep 20 16:33:53 2025 +0000

    Publish built docs triggered by 3778f34aee30746f4430db1acb1e49e452bce16a
---
 _sources/user-guide/latest/configs.md.txt | 2 +-
 searchindex.js                            | 2 +-
 user-guide/latest/configs.html            | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index c923c5668..fdfc2da35 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -88,6 +88,6 @@ Comet provides the following configuration settings.
 | spark.comet.scan.preFetch.threadNum | The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups. | 2 |
 | spark.comet.shuffle.preferDictionary.ratio | The ratio of total values to distinct values in a string column to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used on shuffling string column. This config is effective if it is higher than 1.0. Note that this config is only used when `spark.comet.exec.shuffle.mode` is `jvm`. | 10.0 |
 | spark.comet.shuffle.sizeInBytesMultiplier | Comet reports smaller sizes for shuffle due to using Arrow's columnar memory format and this can result in Spark choosing a different join strategy due to the estimated size of the exchange being smaller. Comet will multiply sizeInBytes by this amount to avoid regressions in join strategy. | 1.0 |
-| spark.comet.sparkToColumnar.supportedOperatorList | A comma-separated list of operators that will be converted to Arrow columnar format when 'spark.comet.sparkToColumnar.enabled' is true | Range,InMemoryTableScan |
+| spark.comet.sparkToColumnar.supportedOperatorList | A comma-separated list of operators that will be converted to Arrow columnar format when 'spark.comet.sparkToColumnar.enabled' is true | Range,InMemoryTableScan,RDDScan |
 | spark.hadoop.fs.comet.libhdfs.schemes | Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. Valid only when built with hdfs feature enabled. | |
 <!--END:CONFIG_TABLE-->
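The dictionary-encoding decision described in the `spark.comet.shuffle.preferDictionary.ratio` entry above can be sketched in a few lines. This is a minimal illustration of the documented heuristic only; `prefer_dictionary` is a hypothetical helper, not Comet's actual implementation.

```python
def prefer_dictionary(total_values: int, distinct_values: int,
                      ratio_config: float = 10.0) -> bool:
    """Sketch of the heuristic documented for
    spark.comet.shuffle.preferDictionary.ratio (hypothetical helper)."""
    # Per the docs, the config is only effective when it is higher than 1.0.
    if ratio_config <= 1.0:
        return False
    # Prefer dictionary encoding when total/distinct exceeds the ratio.
    return total_values / distinct_values > ratio_config

# A column with 1000 values but only 20 distinct strings (ratio 50)
# prefers dictionary encoding under the default threshold of 10.0.
print(prefer_dictionary(1000, 20))   # True
print(prefer_dictionary(1000, 500))  # False (ratio 2 < 10.0)
```

Note this applies only when `spark.comet.exec.shuffle.mode` is `jvm`, as the table states.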
diff --git a/searchindex.js b/searchindex.js
index 5f7868dfd..24b473174 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], "2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[17, "ansi-mode"], [56, "ansi-mode"]], "ANSI mode": [[30, "ansi-mode"], [43, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[22, null], [35, n [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[12, "install-comet"]], "2. Clone Spark and Apply Diff": [[12, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[12, "run-spark-sql-tests"]], "ANSI Mode": [[17, "ansi-mode"], [56, "ansi-mode"]], "ANSI mode": [[30, "ansi-mode"], [43, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[22, null], [35, n [...]
\ No newline at end of file
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index cd309751a..e6e821f15 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -845,7 +845,7 @@ under the License.
 </tr>
 <tr class="row-even"><td><p>spark.comet.sparkToColumnar.supportedOperatorList</p></td>
 <td><p>A comma-separated list of operators that will be converted to Arrow columnar format when ‘spark.comet.sparkToColumnar.enabled’ is true</p></td>
-<td><p>Range,InMemoryTableScan</p></td>
+<td><p>Range,InMemoryTableScan,RDDScan</p></td>
 </tr>
 <tr class="row-odd"><td><p>spark.hadoop.fs.comet.libhdfs.schemes</p></td>
 <td><p>Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side accesses via libhdfs, separated by commas. Valid only when built with hdfs feature enabled.</p></td>


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
