This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new dc6c0032a Publish built docs triggered by f538424d37f69019c7eed7032bd813d299f8d3cc
dc6c0032a is described below

commit dc6c0032ad3f1c676e119f9205f73bd0a002d29b
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Jan 23 13:44:55 2026 +0000

    Publish built docs triggered by f538424d37f69019c7eed7032bd813d299f8d3cc
---
 _sources/user-guide/latest/configs.md.txt |  1 +
 searchindex.js                            |  2 +-
 user-guide/latest/configs.html            | 10 +++++++---
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index 575c2ee80..7b32c2a4b 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -144,6 +144,7 @@ These settings can be used to determine which parts of the plan are accelerated
 | `spark.comet.exec.onHeap.memoryPool` | The type of memory pool to be used for Comet native execution when running Spark in on-heap mode. Available pool types are `greedy`, `fair_spill`, `greedy_task_shared`, `fair_spill_task_shared`, `greedy_global`, `fair_spill_global`, and `unbounded`. | greedy_task_shared |
 | `spark.comet.memoryOverhead` | The amount of additional memory to be allocated per executor process for Comet, in MiB, when running Spark in on-heap mode. | 1024 MiB |
 | `spark.comet.parquet.write.enabled` | Whether to enable native Parquet write through Comet. When enabled, Comet will intercept Parquet write operations and execute them natively. This feature is highly experimental and only partially implemented. It should not be used in production. | false |
+| `spark.comet.scan.csv.v2.enabled` | Whether to use the native Comet V2 CSV reader for improved performance. Default: false (uses the standard Spark CSV reader). Experimental: performance benefits are workload-dependent. | false |
 | `spark.comet.sparkToColumnar.enabled` | Whether to enable Spark to Arrow columnar conversion. When this is turned on, Comet will convert operators in `spark.comet.sparkToColumnar.supportedOperatorList` into Arrow columnar format before processing. This is an experimental feature and has known issues with non-UTC timezones. | false |
 | `spark.comet.sparkToColumnar.supportedOperatorList` | A comma-separated list of operators that will be converted to Arrow columnar format when `spark.comet.sparkToColumnar.enabled` is true. | Range,InMemoryTableScan,RDDScan |
 | `spark.comet.testing.strict` | Experimental option to enable strict testing, which will fail tests that could be more comprehensive, such as checking for a specific fallback reason. It can be overridden by the environment variable `ENABLE_COMET_STRICT_TESTING`. | false |
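
For reference, a minimal sketch (Scala) of how the new option might be enabled on a SparkSession. The config keys come from the table above; the plugin class name follows the Comet install guide, and the app name, master, and input path are placeholders:

    import org.apache.spark.sql.SparkSession

    object CometCsvV2Sketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .appName("comet-csv-v2-sketch")
          .master("local[*]")
          // Register the Comet plugin (class name per the Comet install docs).
          .config("spark.plugins", "org.apache.spark.CometPlugin")
          // Opt in to the experimental native V2 CSV reader (defaults to false).
          .config("spark.comet.scan.csv.v2.enabled", "true")
          // Additional per-executor memory for Comet in on-heap mode, in MiB.
          .config("spark.comet.memoryOverhead", "1024")
          .getOrCreate()

        // "/path/to/data.csv" is a placeholder. When the V2 reader does not
        // apply, Spark falls back to its standard CSV reader.
        val df = spark.read.option("header", "true").csv("/path/to/data.csv")
        df.show()

        spark.stop()
      }
    }
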
diff --git a/searchindex.js b/searchindex.js
index 448a32ced..29f590a44 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Format Your Code": [[12, "format-your-code"]], "1. Install Comet": [[21, "install-comet"]], "1. Native Operators (nativeExecs map)": [[4, "native-operators-nativeexecs-map"]], "2. Build and Verify": [[12, "build-and-verify"]], "2. Clone Spark and Apply Diff": [[21, "clone-spark-and-apply-diff"]], "2. Sink Operators (sinks map)": [[4, "sink-operators-sinks-map"]], "3. Comet JVM Operators": [[4, "comet-jvm-operators"]], "3. Run Clippy (Recommended)": [[12 [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Format Your Code": [[12, "format-your-code"]], "1. Install Comet": [[21, "install-comet"]], "1. Native Operators (nativeExecs map)": [[4, "native-operators-nativeexecs-map"]], "2. Build and Verify": [[12, "build-and-verify"]], "2. Clone Spark and Apply Diff": [[21, "clone-spark-and-apply-diff"]], "2. Sink Operators (sinks map)": [[4, "sink-operators-sinks-map"]], "3. Comet JVM Operators": [[4, "comet-jvm-operators"]], "3. Run Clippy (Recommended)": [[12 [...]
\ No newline at end of file
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index 4102c60cd..3418c13a3 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -806,15 +806,19 @@ under the License.
 <td><p>Whether to enable native Parquet write through Comet. When enabled, Comet will intercept Parquet write operations and execute them natively. This feature is highly experimental and only partially implemented. It should not be used in production.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.enabled</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.scan.csv.v2.enabled</span></code></p></td>
+<td><p>Whether to use the native Comet V2 CSV reader for improved performance. Default: false (uses the standard Spark CSV reader). Experimental: performance benefits are workload-dependent.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.enabled</span></code></p></td>
 <td><p>Whether to enable Spark to Arrow columnar conversion. When this is turned on, Comet will convert operators in <code class="docutils literal notranslate"><span class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code> into Arrow columnar format before processing. This is an experimental feature and has known issues with non-UTC timezones.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.sparkToColumnar.supportedOperatorList</span></code></p></td>
 <td><p>A comma-separated list of operators that will be converted to Arrow columnar format when <code class="docutils literal notranslate"><span class="pre">spark.comet.sparkToColumnar.enabled</span></code> is true.</p></td>
 <td><p>Range,InMemoryTableScan,RDDScan</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.testing.strict</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span 
class="pre">spark.comet.testing.strict</span></code></p></td>
 <td><p>Experimental option to enable strict testing, which will fail tests that could be more comprehensive, such as checking for a specific fallback reason. It can be overridden by the environment variable <code class="docutils literal notranslate"><span class="pre">ENABLE_COMET_STRICT_TESTING</span></code>.</p></td>
 <td><p>false</p></td>
 </tr>


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
