This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 5c58f20dd1 Publish built docs triggered by 3c4e39ac0cf83bd8ead45722a5873bac731b53f1
5c58f20dd1 is described below

commit 5c58f20dd1090a39dd0047141b715df4d1dbb8d1
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Jun 20 07:28:07 2025 +0000

    Publish built docs triggered by 3c4e39ac0cf83bd8ead45722a5873bac731b53f1
---
 _sources/user-guide/configs.md.txt |   1 +
 searchindex.js                     |   2 +-
 user-guide/configs.html            | 126 +++++++++++++++++++------------------
 3 files changed, 67 insertions(+), 62 deletions(-)

diff --git a/_sources/user-guide/configs.md.txt b/_sources/user-guide/configs.md.txt
index b55e63293f..23a35c896d 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -83,6 +83,7 @@ Environment variables are read during `SessionConfig` initialisation so they mus
 | datafusion.execution.parquet.maximum_buffered_record_batches_per_stream | 2 | (writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile [...]
 | datafusion.execution.planning_concurrency | 0 | Fan-out during initial physical planning. This is mostly used to plan `UNION` children in parallel. Defaults to the number of CPU cores on the system [...]
 | datafusion.execution.skip_physical_aggregate_schema_check | false | When set to true, skips verifying that the schema produced by planning the input of `LogicalPlan::Aggregate` exactly matches the schema of the input plan. When set to false, if the schema does not match exactly (including nullability and metadata), a planning error will be raised. This is used to work around bugs in the planner that are now caught by the new schema verification step. [...]
+| datafusion.execution.spill_compression | uncompressed | Sets the compression codec used when spilling data to disk. Since datafusion writes spill files using the Arrow IPC Stream format, only codecs supported by the Arrow IPC Stream Writer are allowed. Valid values are: uncompressed, lz4_frame, zstd. Note: lz4_frame offers faster (de)compression, but typically results in larger spill files. In contrast, zstd achieves higher compression ratios at the cost of slower (de)compression speed. [...]
 | datafusion.execution.sort_spill_reservation_bytes | 10485760 | Specifies the reserved memory for each spillable sort operation to facilitate an in-memory merge. When a sort operation spills to disk, the in-memory data must be sorted and merged before being written to a file. This setting reserves a specific amount of memory for that in-memory sort/merge process. Note: This setting is irrelevant if the sort operation cannot spill (i.e., if there's [...]
 | datafusion.execution.sort_in_place_threshold_bytes | 1048576 | When sorting, below what size should data be concatenated and sorted in a single RecordBatch rather than sorted in batches and merged. [...]
 | datafusion.execution.meta_fetch_concurrency | 32 | Number of files to read in parallel when inferring schema and statistics [...]
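
For reference, the new `datafusion.execution.spill_compression` option documented in the hunk above can be supplied when a session is built. The following is a minimal, hypothetical Rust sketch, not part of this commit; it assumes a DataFusion release that already exposes this key and the `SessionConfig::set_str` builder.

    // Minimal sketch (not part of this commit): build a SessionContext whose
    // spill files are compressed with zstd, matching the option documented above.
    use datafusion::prelude::{SessionConfig, SessionContext};

    fn context_with_zstd_spills() -> SessionContext {
        // Any key from the configuration table can be set as a string here.
        let config = SessionConfig::new()
            .set_str("datafusion.execution.spill_compression", "zstd");
        SessionContext::new_with_config(config)
    }
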
diff --git a/searchindex.js b/searchindex.js
index bf091943de..44a689de8b 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles":{"!=":[[57,"op-neq"]],"!~":[[57,"op-re-not-match"]],"!~*":[[57,"op-re-not-match-i"]],"!~~":[[57,"id19"]],"!~~*":[[57,"id20"]],"#":[[57,"op-bit-xor"]],"%":[[57,"op-modulo"]],"&":[[57,"op-bit-and"]],"(relation,
 name) tuples in logical fields and logical columns are 
unique":[[12,"relation-name-tuples-in-logical-fields-and-logical-columns-are-unique"]],"*":[[57,"op-multiply"]],"+":[[57,"op-plus"]],"-":[[57,"op-minus"]],"/":[[57,"op-divide"]],"<":[[57,"op-lt"]],"<
 [...]
\ No newline at end of file
+Search.setIndex({"alltitles":{"!=":[[57,"op-neq"]],"!~":[[57,"op-re-not-match"]],"!~*":[[57,"op-re-not-match-i"]],"!~~":[[57,"id19"]],"!~~*":[[57,"id20"]],"#":[[57,"op-bit-xor"]],"%":[[57,"op-modulo"]],"&":[[57,"op-bit-and"]],"(relation,
 name) tuples in logical fields and logical columns are 
unique":[[12,"relation-name-tuples-in-logical-fields-and-logical-columns-are-unique"]],"*":[[57,"op-multiply"]],"+":[[57,"op-plus"]],"-":[[57,"op-minus"]],"/":[[57,"op-divide"]],"<":[[57,"op-lt"]],"<
 [...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 5e14371193..990a64af8f 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -797,247 +797,251 @@ Environment variables are read during <code class="docutils literal notranslate"
 <td><p>false</p></td>
 <td><p>When set to true, skips verifying that the schema produced by planning the input of <code class="docutils literal notranslate"><span class="pre">LogicalPlan::Aggregate</span></code> exactly matches the schema of the input plan. When set to false, if the schema does not match exactly (including nullability and metadata), a planning error will be raised. This is used to work around bugs in the planner that are now caught by the new schema verification step.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.sort_spill_reservation_bytes</p></td>
+<tr class="row-even"><td><p>datafusion.execution.spill_compression</p></td>
+<td><p>uncompressed</p></td>
+<td><p>Sets the compression codec used when spilling data to disk. Since datafusion writes spill files using the Arrow IPC Stream format, only codecs supported by the Arrow IPC Stream Writer are allowed. Valid values are: uncompressed, lz4_frame, zstd. Note: lz4_frame offers faster (de)compression, but typically results in larger spill files. In contrast, zstd achieves higher compression ratios at the cost of slower (de)compression speed.</p></td>
+</tr>
+<tr class="row-odd"><td><p>datafusion.execution.sort_spill_reservation_bytes</p></td>
 <td><p>10485760</p></td>
 <td><p>Specifies the reserved memory for each spillable sort operation to 
facilitate an in-memory merge. When a sort operation spills to disk, the 
in-memory data must be sorted and merged before being written to a file. This 
setting reserves a specific amount of memory for that in-memory sort/merge 
process. Note: This setting is irrelevant if the sort operation cannot spill 
(i.e., if there’s no <code class="docutils literal notranslate"><span 
class="pre">DiskManager</span></code> configu [...]
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.execution.sort_in_place_threshold_bytes</p></td>
+<tr 
class="row-even"><td><p>datafusion.execution.sort_in_place_threshold_bytes</p></td>
 <td><p>1048576</p></td>
 <td><p>When sorting, below what size should data be concatenated and sorted in 
a single RecordBatch rather than sorted in batches and merged.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.meta_fetch_concurrency</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.meta_fetch_concurrency</p></td>
 <td><p>32</p></td>
 <td><p>Number of files to read in parallel when inferring schema and 
statistics</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.execution.minimum_parallel_output_files</p></td>
+<tr 
class="row-even"><td><p>datafusion.execution.minimum_parallel_output_files</p></td>
 <td><p>4</p></td>
 <td><p>Guarantees a minimum level of output files running in parallel. 
RecordBatches will be distributed in round robin fashion to each parallel 
writer. Each writer is closed and a new file opened once 
soft_max_rows_per_output_file is reached.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.soft_max_rows_per_output_file</p></td>
+<tr 
class="row-odd"><td><p>datafusion.execution.soft_max_rows_per_output_file</p></td>
 <td><p>50000000</p></td>
 <td><p>Target number of rows in output files when writing multiple. This is a 
soft max, so it can be exceeded slightly. There also will be one file smaller 
than the limit if the total number of rows written is not roughly divisible by 
the soft max</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.execution.max_buffered_batches_per_output_file</p></td>
+<tr 
class="row-even"><td><p>datafusion.execution.max_buffered_batches_per_output_file</p></td>
 <td><p>2</p></td>
 <td><p>This is the maximum number of RecordBatches buffered for each output 
file being worked. Higher values can potentially give faster write performance 
at the cost of higher peak memory consumption</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.listing_table_ignore_subdirectory</p></td>
+<tr 
class="row-odd"><td><p>datafusion.execution.listing_table_ignore_subdirectory</p></td>
 <td><p>true</p></td>
 <td><p>Should sub directories be ignored when scanning directories for data 
files. Defaults to true (ignores subdirectories), consistent with Hive. Note 
that this setting does not affect reading partitioned tables (e.g. <code 
class="docutils literal notranslate"><span 
class="pre">/table/year=2021/month=01/data.parquet</span></code>).</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.enable_recursive_ctes</p></td>
+<tr class="row-even"><td><p>datafusion.execution.enable_recursive_ctes</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion support recursive CTEs</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.split_file_groups_by_statistics</p></td>
+<tr 
class="row-odd"><td><p>datafusion.execution.split_file_groups_by_statistics</p></td>
 <td><p>false</p></td>
 <td><p>Attempt to eliminate sorts by packing &amp; sorting files with 
non-overlapping statistics into the same file groups. Currently 
experimental</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.execution.keep_partition_by_columns</p></td>
+<tr 
class="row-even"><td><p>datafusion.execution.keep_partition_by_columns</p></td>
 <td><p>false</p></td>
 <td><p>Should DataFusion keep the columns used for partition_by in the output 
RecordBatches</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.skip_partial_aggregation_probe_ratio_threshold</p></td>
+<tr 
class="row-odd"><td><p>datafusion.execution.skip_partial_aggregation_probe_ratio_threshold</p></td>
 <td><p>0.8</p></td>
 <td><p>Aggregation ratio (number of distinct groups / number of input rows) threshold for skipping partial aggregation. If the ratio is greater than this threshold, partial aggregation will skip aggregation for further input</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.execution.skip_partial_aggregation_probe_rows_threshold</p></td>
+<tr 
class="row-even"><td><p>datafusion.execution.skip_partial_aggregation_probe_rows_threshold</p></td>
 <td><p>100000</p></td>
 <td><p>Number of input rows a partial aggregation partition should process before checking the aggregation ratio and trying to switch to skipping aggregation mode</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.use_row_number_estimates_to_optimize_partitioning</p></td>
+<tr 
class="row-odd"><td><p>datafusion.execution.use_row_number_estimates_to_optimize_partitioning</p></td>
 <td><p>false</p></td>
 <td><p>Should DataFusion use row number estimates at the input to decide whether increasing parallelism is beneficial or not. By default, only exact row numbers (not estimates) are used for this decision. Setting this flag to <code class="docutils literal notranslate"><span class="pre">true</span></code> will likely produce better plans if the source of statistics is accurate. We plan to make this the default in the future.</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.execution.enforce_batch_size_in_joins</p></td>
+<tr 
class="row-even"><td><p>datafusion.execution.enforce_batch_size_in_joins</p></td>
 <td><p>false</p></td>
 <td><p>Should DataFusion enforce batch size in joins or not. By default, 
DataFusion will not enforce batch size in joins. Enforcing batch size in joins 
can reduce memory usage when joining large tables with a highly-selective join 
filter, but is also slightly slower.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.execution.objectstore_writer_buffer_size</p></td>
+<tr 
class="row-odd"><td><p>datafusion.execution.objectstore_writer_buffer_size</p></td>
 <td><p>10485760</p></td>
 <td><p>Size (bytes) of data buffer DataFusion uses when writing output files. 
This affects the size of the data chunks that are uploaded to remote object 
stores (e.g. AWS S3). If very large (&gt;= 100 GiB) output files are being 
written, it may be necessary to increase this size to avoid errors from the 
remote end point.</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.optimizer.enable_distinct_aggregation_soft_limit</p></td>
+<tr 
class="row-even"><td><p>datafusion.optimizer.enable_distinct_aggregation_soft_limit</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the optimizer will push a limit operation into 
grouped aggregations which have no aggregate expressions, as a soft limit, 
emitting groups once the limit is reached, before all rows in the group are 
read.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.enable_round_robin_repartition</p></td>
+<tr 
class="row-odd"><td><p>datafusion.optimizer.enable_round_robin_repartition</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will try to add round 
robin repartitioning to increase parallelism to leverage more CPU cores</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.optimizer.enable_topk_aggregation</p></td>
+<tr 
class="row-even"><td><p>datafusion.optimizer.enable_topk_aggregation</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the optimizer will attempt to perform limit 
operations during aggregations, if possible</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.enable_dynamic_filter_pushdown</p></td>
+<tr 
class="row-odd"><td><p>datafusion.optimizer.enable_dynamic_filter_pushdown</p></td>
 <td><p>true</p></td>
 <td><p>When set to true attempts to push down dynamic filters generated by 
operators into the file scan phase. For example, for a query such as <code 
class="docutils literal notranslate"><span class="pre">SELECT</span> <span 
class="pre">*</span> <span class="pre">FROM</span> <span class="pre">t</span> 
<span class="pre">ORDER</span> <span class="pre">BY</span> <span 
class="pre">timestamp</span> <span class="pre">DESC</span> <span 
class="pre">LIMIT</span> <span class="pre">10</span></code> [...]
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.filter_null_join_keys</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.filter_null_join_keys</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the optimizer will insert filters before a join 
between a nullable and non-nullable column to filter out nulls on the nullable 
side. This filter can add additional overhead when the file format does not 
fully support predicate push down.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.repartition_aggregations</p></td>
+<tr 
class="row-odd"><td><p>datafusion.optimizer.repartition_aggregations</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the aggregate keys to execute 
aggregates in parallel using the provided <code class="docutils literal 
notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.optimizer.repartition_file_min_size</p></td>
+<tr 
class="row-even"><td><p>datafusion.optimizer.repartition_file_min_size</p></td>
 <td><p>10485760</p></td>
 <td><p>Minimum total files size in bytes to perform file scan 
repartitioning.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_joins</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_joins</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the join keys to execute joins 
in parallel using the provided <code class="docutils literal notranslate"><span 
class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.optimizer.allow_symmetric_joins_without_pruning</p></td>
+<tr 
class="row-even"><td><p>datafusion.optimizer.allow_symmetric_joins_without_pruning</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion allow symmetric hash joins for unbounded data sources even when its inputs do not have any ordering or filtering. If the flag is not enabled, the SymmetricHashJoin operator will be unable to prune its internal buffers, resulting in certain join types - such as Full, Left, LeftAnti, LeftSemi, Right, RightAnti, and RightSemi - being produced only at the end of the execution. This is not typical in stream processing. Additionally, without proper design for long runne [...]
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.repartition_file_scans</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_file_scans</p></td>
 <td><p>true</p></td>
 <td><p>When set to <code class="docutils literal notranslate"><span 
class="pre">true</span></code>, datasource partitions will be repartitioned to 
achieve maximum parallelism. This applies to both in-memory partitions and 
FileSource’s file groups (1 group is 1 partition). For FileSources, only 
Parquet and CSV formats are currently supported. If set to <code 
class="docutils literal notranslate"><span class="pre">true</span></code> for a 
FileSource, all files will be repartitioned evenly ( [...]
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_windows</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_windows</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the partitions keys to execute 
window functions in parallel using the provided <code class="docutils literal 
notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_sorts</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_sorts</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion execute sorts in a per-partition fashion and merge afterwards instead of coalescing first and sorting globally. When this flag is enabled, plans in the form below <code class="docutils literal notranslate"><span class="pre">text</span> <span class="pre">&quot;SortExec:</span> <span class="pre">[a&#64;0</span> <span class="pre">ASC]&quot;,</span> <span class="pre">&quot;</span> <span class="pre">CoalescePartitionsExec&quot;,</span> <span class="pre">&quot;</span>  [...]
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.prefer_existing_sort</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.prefer_existing_sort</p></td>
 <td><p>false</p></td>
 <td><p>When true, DataFusion will opportunistically remove sorts when the data 
is already sorted, (i.e. setting <code class="docutils literal 
notranslate"><span class="pre">preserve_order</span></code> to true on <code 
class="docutils literal notranslate"><span 
class="pre">RepartitionExec</span></code> and using <code class="docutils 
literal notranslate"><span class="pre">SortPreservingMergeExec</span></code>) 
When false, DataFusion will maximize plan parallelism using <code class="docut 
[...]
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.skip_failed_rules</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.skip_failed_rules</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the logical plan optimizer will produce warning 
messages if any optimization rules produce errors and then proceed to the next 
rule. When set to false, any rules that produce errors will cause the query to 
fail</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.max_passes</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.max_passes</p></td>
 <td><p>3</p></td>
 <td><p>Number of times that the optimizer will attempt to optimize the 
plan</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.top_down_join_key_reordering</p></td>
+<tr 
class="row-odd"><td><p>datafusion.optimizer.top_down_join_key_reordering</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will run a top down 
process to reorder the join keys</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.prefer_hash_join</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.prefer_hash_join</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will prefer HashJoin over 
SortMergeJoin. HashJoin can work more efficiently than SortMergeJoin but 
consumes more memory</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.hash_join_single_partition_threshold</p></td>
+<tr 
class="row-odd"><td><p>datafusion.optimizer.hash_join_single_partition_threshold</p></td>
 <td><p>1048576</p></td>
 <td><p>The maximum estimated size in bytes for one input side of a HashJoin 
will be collected into a single partition</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.optimizer.hash_join_single_partition_threshold_rows</p></td>
+<tr 
class="row-even"><td><p>datafusion.optimizer.hash_join_single_partition_threshold_rows</p></td>
 <td><p>131072</p></td>
 <td><p>The maximum estimated size in rows for one input side of a HashJoin 
will be collected into a single partition</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.default_filter_selectivity</p></td>
+<tr 
class="row-odd"><td><p>datafusion.optimizer.default_filter_selectivity</p></td>
 <td><p>20</p></td>
 <td><p>The default filter selectivity used by Filter Statistics when an exact 
selectivity cannot be determined. Valid values are between 0 (no selectivity) 
and 100 (all rows are selected).</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.prefer_existing_union</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.prefer_existing_union</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the optimizer will not attempt to convert Union to 
Interleave</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.optimizer.expand_views_at_output</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.expand_views_at_output</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, if the returned type is a view type then the output 
will be coerced to a non-view. Coerces <code class="docutils literal 
notranslate"><span class="pre">Utf8View</span></code> to <code class="docutils 
literal notranslate"><span class="pre">LargeUtf8</span></code>, and <code 
class="docutils literal notranslate"><span class="pre">BinaryView</span></code> 
to <code class="docutils literal notranslate"><span 
class="pre">LargeBinary</span></code>.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.yield_period</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.yield_period</p></td>
 <td><p>64</p></td>
 <td><p>When DataFusion detects that a plan might not be promptly cancellable due to the presence of tight-looping operators, it will attempt to mitigate this by inserting explicit yielding (in as few places as possible to avoid performance degradation). This value represents the yielding period (in batches) at such explicit yielding points. The default value is 64. If set to 0, DataFusion will not perform any explicit yielding.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.logical_plan_only</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.logical_plan_only</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will only print logical 
plans</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.explain.physical_plan_only</p></td>
+<tr class="row-even"><td><p>datafusion.explain.physical_plan_only</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will only print physical 
plans</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.show_statistics</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.show_statistics</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will print operator statistics 
for physical plans</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.explain.show_sizes</p></td>
+<tr class="row-even"><td><p>datafusion.explain.show_sizes</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the explain statement will print the partition 
sizes</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.show_schema</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.show_schema</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will print schema 
information</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.explain.format</p></td>
+<tr class="row-even"><td><p>datafusion.explain.format</p></td>
 <td><p>indent</p></td>
 <td><p>Display format of explain. Default is “indent”. When set to “tree”, it 
will print the plan in a tree-rendered format.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.sql_parser.parse_float_as_decimal</p></td>
+<tr 
class="row-odd"><td><p>datafusion.sql_parser.parse_float_as_decimal</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, SQL parser will parse float as decimal type</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.sql_parser.enable_ident_normalization</p></td>
+<tr 
class="row-even"><td><p>datafusion.sql_parser.enable_ident_normalization</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, SQL parser will normalize ident (convert ident to 
lowercase when not quoted)</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.sql_parser.enable_options_value_normalization</p></td>
+<tr 
class="row-odd"><td><p>datafusion.sql_parser.enable_options_value_normalization</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, SQL parser will normalize options value (convert 
value to lowercase). Note that this option is ignored and will be removed in 
the future. All case-insensitive values are normalized automatically.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.sql_parser.dialect</p></td>
+<tr class="row-even"><td><p>datafusion.sql_parser.dialect</p></td>
 <td><p>generic</p></td>
 <td><p>Configure the SQL dialect used by DataFusion’s parser; supported values 
include: Generic, MySQL, PostgreSQL, Hive, SQLite, Snowflake, Redshift, MsSQL, 
ClickHouse, BigQuery, Ansi, DuckDB and Databricks.</p></td>
 </tr>
-<tr 
class="row-even"><td><p>datafusion.sql_parser.support_varchar_with_length</p></td>
+<tr 
class="row-odd"><td><p>datafusion.sql_parser.support_varchar_with_length</p></td>
 <td><p>true</p></td>
 <td><p>If true, permit lengths for <code class="docutils literal 
notranslate"><span class="pre">VARCHAR</span></code> such as <code 
class="docutils literal notranslate"><span 
class="pre">VARCHAR(20)</span></code>, but ignore the length. If false, error 
if a <code class="docutils literal notranslate"><span 
class="pre">VARCHAR</span></code> with a length is specified. The Arrow type 
system does not have a notion of maximum string length and thus DataFusion can 
not enforce such limits.</p></td>
 </tr>
-<tr 
class="row-odd"><td><p>datafusion.sql_parser.map_string_types_to_utf8view</p></td>
+<tr 
class="row-even"><td><p>datafusion.sql_parser.map_string_types_to_utf8view</p></td>
 <td><p>true</p></td>
 <td><p>If true, string types (VARCHAR, CHAR, Text, and String) are mapped to 
<code class="docutils literal notranslate"><span 
class="pre">Utf8View</span></code> during SQL planning. If false, they are 
mapped to <code class="docutils literal notranslate"><span 
class="pre">Utf8</span></code>. Default is true.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.sql_parser.collect_spans</p></td>
+<tr class="row-odd"><td><p>datafusion.sql_parser.collect_spans</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the source locations relative to the original SQL 
query (i.e. <a class="reference external" 
href="https://docs.rs/sqlparser/latest/sqlparser/tokenizer/struct.Span.html";><code
 class="docutils literal notranslate"><span class="pre">Span</span></code></a>) 
will be collected and recorded in the logical plan nodes.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.sql_parser.recursion_limit</p></td>
+<tr class="row-even"><td><p>datafusion.sql_parser.recursion_limit</p></td>
 <td><p>50</p></td>
 <td><p>Specifies the recursion depth limit when parsing complex SQL 
Queries</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.format.safe</p></td>
+<tr class="row-odd"><td><p>datafusion.format.safe</p></td>
 <td><p>true</p></td>
 <td><p>If set to <code class="docutils literal notranslate"><span 
class="pre">true</span></code> any formatting errors will be written to the 
output instead of being converted into a [<code class="docutils literal 
notranslate"><span class="pre">std::fmt::Error</span></code>]</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.format.null</p></td>
+<tr class="row-even"><td><p>datafusion.format.null</p></td>
 <td><p></p></td>
 <td><p>Format string for nulls</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.format.date_format</p></td>
+<tr class="row-odd"><td><p>datafusion.format.date_format</p></td>
 <td><p>%Y-%m-%d</p></td>
 <td><p>Date format for date arrays</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.format.datetime_format</p></td>
+<tr class="row-even"><td><p>datafusion.format.datetime_format</p></td>
 <td><p>%Y-%m-%dT%H:%M:%S%.f</p></td>
 <td><p>Format for DateTime arrays</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.format.timestamp_format</p></td>
+<tr class="row-odd"><td><p>datafusion.format.timestamp_format</p></td>
 <td><p>%Y-%m-%dT%H:%M:%S%.f</p></td>
 <td><p>Timestamp format for timestamp arrays</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.format.timestamp_tz_format</p></td>
+<tr class="row-even"><td><p>datafusion.format.timestamp_tz_format</p></td>
 <td><p>NULL</p></td>
 <td><p>Timestamp format for timestamp with timezone arrays. When <code 
class="docutils literal notranslate"><span class="pre">None</span></code>, ISO 
8601 format is used.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.format.time_format</p></td>
+<tr class="row-odd"><td><p>datafusion.format.time_format</p></td>
 <td><p>%H:%M:%S%.f</p></td>
 <td><p>Time format for time arrays</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.format.duration_format</p></td>
+<tr class="row-even"><td><p>datafusion.format.duration_format</p></td>
 <td><p>pretty</p></td>
 <td><p>Duration format. Can be either <code class="docutils literal 
notranslate"><span class="pre">&quot;pretty&quot;</span></code> or <code 
class="docutils literal notranslate"><span 
class="pre">&quot;ISO8601&quot;</span></code></p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.format.types_info</p></td>
+<tr class="row-odd"><td><p>datafusion.format.types_info</p></td>
 <td><p>false</p></td>
 <td><p>Show types in visual representation batches</p></td>
 </tr>
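
The keys shown in the rendered table above can also be changed per session at runtime with a SQL SET statement. A small hypothetical usage sketch follows; it assumes the information schema must be enabled for SHOW, as in recent DataFusion releases.

    // Hypothetical usage sketch: adjust and inspect a configuration option via SQL.
    use datafusion::prelude::{SessionConfig, SessionContext};

    #[tokio::main]
    async fn main() -> datafusion::error::Result<()> {
        // Enable the information schema so `SHOW <option>` works.
        let ctx = SessionContext::new_with_config(
            SessionConfig::new().with_information_schema(true),
        );
        // Applies only to this session; the default remains `uncompressed`.
        ctx.sql("SET datafusion.execution.spill_compression = 'zstd'").await?;
        ctx.sql("SHOW datafusion.execution.spill_compression").await?.show().await?;
        Ok(())
    }
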


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@datafusion.apache.org
For additional commands, e-mail: commits-h...@datafusion.apache.org
