This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 856fe823c8 Publish built docs triggered by b54e648a7ee82e1ef292ef36df7d49902171b94f
856fe823c8 is described below
commit 856fe823c80d12d9600bb5fbdbbb855076d3dfb0
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Wed Jan 1 13:03:44 2025 +0000
Publish built docs triggered by b54e648a7ee82e1ef292ef36df7d49902171b94f
---
_sources/user-guide/configs.md.txt | 1 +
searchindex.js | 2 +-
user-guide/configs.html | 132 +++++++++++++++++++------------------
3 files changed, 70 insertions(+), 65 deletions(-)
diff --git a/_sources/user-guide/configs.md.txt b/_sources/user-guide/configs.md.txt
index 329b9a95c8..1c39064c15 100644
--- a/_sources/user-guide/configs.md.txt
+++ b/_sources/user-guide/configs.md.txt
@@ -61,6 +61,7 @@ Environment variables are read during `SessionConfig` initialisation so they mus
 | datafusion.execution.parquet.data_pagesize_limit | 1048576 | (writing) Sets best effort maximum size of data page in bytes [...]
 | datafusion.execution.parquet.write_batch_size | 1024 | (writing) Sets write_batch_size in bytes [...]
 | datafusion.execution.parquet.writer_version | 1.0 | (writing) Sets parquet writer version valid values are "1.0" and "2.0" [...]
+| datafusion.execution.parquet.skip_arrow_metadata | false | (writing) Skip encoding the embedded arrow metadata in the KV_meta This is analogous to the `ArrowWriterOptions::with_skip_arrow_metadata`. Refer to <https://docs.rs/parquet/53.3.0/parquet/arrow/arrow_writer/struct.ArrowWriterOptions.html#method.with_skip_arrow_metadata> [...]
 | datafusion.execution.parquet.compression | zstd(3) | (writing) Sets default parquet compression codec. Valid values are: uncompressed, snappy, gzip(level), lzo, brotli(level), lz4, zstd(level), and lz4_raw. These values are not case sensitive. If NULL, uses default parquet writer setting Note that this default setting is not the same as the default parquet writer setting. [...]
 | datafusion.execution.parquet.dictionary_enabled | true | (writing) Sets if dictionary encoding is enabled. If NULL, uses default parquet writer setting [...]
 | datafusion.execution.parquet.dictionary_page_size_limit | 1048576 | (writing) Sets best effort maximum dictionary page size, in bytes [...]
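
For reference, all of the `datafusion.*` keys documented above can be set programmatically on a `SessionConfig` before the context is built. A minimal Rust sketch, assuming the `datafusion` crate at roughly the version shown in this table (44.0.0) plus tokio; the key/value `set_str` API and SQL `SET` support are part of the public `SessionConfig`/`SessionContext` surface, while the specific option values below are illustrative:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Newly documented writer option: omit the embedded Arrow schema
    // from the Parquet key/value metadata (requires datafusion 44+).
    let config = SessionConfig::new()
        .set_str("datafusion.execution.parquet.skip_arrow_metadata", "true")
        // Default is zstd(3); any codec listed in the table is accepted.
        .set_str("datafusion.execution.parquet.compression", "zstd(3)");
    let ctx = SessionContext::new_with_config(config);

    // The same keys can be changed per session through SQL.
    ctx.sql("SET datafusion.execution.parquet.dictionary_enabled = false")
        .await?;
    Ok(())
}
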
diff --git a/searchindex.js b/searchindex.js
index 9a77e304ef..828dae53c5 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"!=": [[48, "op-neq"]], "!~": [[48, "op-re-not-match"]], "!~*": [[48, "op-re-not-match-i"]], "!~~": [[48, "id18"]], "!~~*": [[48, "id19"]], "#": [[48, "op-bit-xor"]], "%": [[48, "op-modulo"]], "&": [[48, "op-bit-and"]], "(relation, name) tuples in logical fields and logical columns are unique": [[9, "relation-name-tuples-in-logical-fields-and-logical-columns-are-unique"]], "*": [[48, "op-multiply"]], "+": [[48, "op-plus"]], "-": [[48, "op-minus"]], "/": [[4 [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"!=": [[48, "op-neq"]], "!~": [[48, "op-re-not-match"]], "!~*": [[48, "op-re-not-match-i"]], "!~~": [[48, "id18"]], "!~~*": [[48, "id19"]], "#": [[48, "op-bit-xor"]], "%": [[48, "op-modulo"]], "&": [[48, "op-bit-and"]], "(relation, name) tuples in logical fields and logical columns are unique": [[9, "relation-name-tuples-in-logical-fields-and-logical-columns-are-unique"]], "*": [[48, "op-multiply"]], "+": [[48, "op-plus"]], "-": [[48, "op-minus"]], "/": [[4 [...]
\ No newline at end of file
diff --git a/user-guide/configs.html b/user-guide/configs.html
index 764e3ddbc7..43bfe5a61c 100644
--- a/user-guide/configs.html
+++ b/user-guide/configs.html
@@ -663,259 +663,263 @@ Environment variables are read during <code class="docutils literal notranslate"
 <td><p>1.0</p></td>
 <td><p>(writing) Sets parquet writer version valid values are “1.0” and “2.0”</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.compression</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.skip_arrow_metadata</p></td>
+<td><p>false</p></td>
+<td><p>(writing) Skip encoding the embedded arrow metadata in the KV_meta This is analogous to the <code class="docutils literal notranslate"><span class="pre">ArrowWriterOptions::with_skip_arrow_metadata</span></code>. Refer to <a class="reference external" href="https://docs.rs/parquet/53.3.0/parquet/arrow/arrow_writer/struct.ArrowWriterOptions.html#method.with_skip_arrow_metadata">https://docs.rs/parquet/53.3.0/parquet/arrow/arrow_writer/struct.ArrowWriterOptions.html#method.with_skip [...]
+</tr>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.compression</p></td>
 <td><p>zstd(3)</p></td>
 <td><p>(writing) Sets default parquet compression codec. Valid values are: uncompressed, snappy, gzip(level), lzo, brotli(level), lz4, zstd(level), and lz4_raw. These values are not case sensitive. If NULL, uses default parquet writer setting Note that this default setting is not the same as the default parquet writer setting.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.dictionary_enabled</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.dictionary_enabled</p></td>
 <td><p>true</p></td>
 <td><p>(writing) Sets if dictionary encoding is enabled. If NULL, uses default parquet writer setting</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.dictionary_page_size_limit</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.dictionary_page_size_limit</p></td>
 <td><p>1048576</p></td>
 <td><p>(writing) Sets best effort maximum dictionary page size, in bytes</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.statistics_enabled</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.statistics_enabled</p></td>
 <td><p>page</p></td>
 <td><p>(writing) Sets if statistics are enabled for any column Valid values are: “none”, “chunk”, and “page” These values are not case sensitive. If NULL, uses default parquet writer setting</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.max_statistics_size</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.max_statistics_size</p></td>
 <td><p>4096</p></td>
 <td><p>(writing) Sets max statistics size for any column. If NULL, uses default parquet writer setting</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.max_row_group_size</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.max_row_group_size</p></td>
 <td><p>1048576</p></td>
 <td><p>(writing) Target maximum number of rows in each row group (defaults to 1M rows). Writing larger row groups requires more memory to write, but can get better compression and be faster to read.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.created_by</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.created_by</p></td>
 <td><p>datafusion version 44.0.0</p></td>
 <td><p>(writing) Sets “created by” property</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.column_index_truncate_length</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.column_index_truncate_length</p></td>
 <td><p>64</p></td>
 <td><p>(writing) Sets column index truncate length</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.data_page_row_count_limit</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.data_page_row_count_limit</p></td>
 <td><p>20000</p></td>
 <td><p>(writing) Sets best effort maximum number of rows in data page</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.encoding</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.encoding</p></td>
 <td><p>NULL</p></td>
 <td><p>(writing) Sets default encoding for any column. Valid values are: plain, plain_dictionary, rle, bit_packed, delta_binary_packed, delta_length_byte_array, delta_byte_array, rle_dictionary, and byte_stream_split. These values are not case sensitive. If NULL, uses default parquet writer setting</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.bloom_filter_on_read</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.bloom_filter_on_read</p></td>
 <td><p>true</p></td>
 <td><p>(writing) Use any available bloom filters when reading parquet files</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.bloom_filter_on_write</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.bloom_filter_on_write</p></td>
 <td><p>false</p></td>
 <td><p>(writing) Write bloom filters for all columns when creating parquet files</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.bloom_filter_fpp</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.bloom_filter_fpp</p></td>
 <td><p>NULL</p></td>
 <td><p>(writing) Sets bloom filter false positive probability. If NULL, uses default parquet writer setting</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.bloom_filter_ndv</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.bloom_filter_ndv</p></td>
 <td><p>NULL</p></td>
 <td><p>(writing) Sets bloom filter number of distinct values. If NULL, uses default parquet writer setting</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.allow_single_file_parallelism</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.allow_single_file_parallelism</p></td>
 <td><p>true</p></td>
 <td><p>(writing) Controls whether DataFusion will attempt to speed up writing parquet files by serializing them in parallel. Each column in each row group in each output file are serialized in parallel leveraging a maximum possible core count of n_files<em>n_row_groups</em>n_columns.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.parquet.maximum_parallel_row_group_writers</p></td>
+<tr class="row-even"><td><p>datafusion.execution.parquet.maximum_parallel_row_group_writers</p></td>
 <td><p>1</p></td>
 <td><p>(writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.parquet.maximum_buffered_record_batches_per_stream</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.parquet.maximum_buffered_record_batches_per_stream</p></td>
 <td><p>2</p></td>
 <td><p>(writing) By default parallel parquet writer is tuned for minimum memory usage in a streaming execution plan. You may see a performance benefit when writing large parquet files by increasing maximum_parallel_row_group_writers and maximum_buffered_record_batches_per_stream if your system has idle cores and can tolerate additional memory usage. Boosting these values is likely worthwhile when writing out already in-memory data, such as from a cached data frame.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.planning_concurrency</p></td>
+<tr class="row-even"><td><p>datafusion.execution.planning_concurrency</p></td>
 <td><p>0</p></td>
 <td><p>Fan-out during initial physical planning. This is mostly use to plan <code class="docutils literal notranslate"><span class="pre">UNION</span></code> children in parallel. Defaults to the number of CPU cores on the system</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.skip_physical_aggregate_schema_check</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.skip_physical_aggregate_schema_check</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, skips verifying that the schema produced by planning the input of <code class="docutils literal notranslate"><span class="pre">LogicalPlan::Aggregate</span></code> exactly matches the schema of the input plan. When set to false, if the schema does not match exactly (including nullability and metadata), a planning error will be raised. This is used to workaround bugs in the planner that are now caught by the new schema verification step.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.sort_spill_reservation_bytes</p></td>
+<tr class="row-even"><td><p>datafusion.execution.sort_spill_reservation_bytes</p></td>
 <td><p>10485760</p></td>
 <td><p>Specifies the reserved memory for each spillable sort operation to facilitate an in-memory merge. When a sort operation spills to disk, the in-memory data must be sorted and merged before being written to a file. This setting reserves a specific amount of memory for that in-memory sort/merge process. Note: This setting is irrelevant if the sort operation cannot spill (i.e., if there’s no <code class="docutils literal notranslate"><span class="pre">DiskManager</span></code> configu [...]
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.sort_in_place_threshold_bytes</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.sort_in_place_threshold_bytes</p></td>
 <td><p>1048576</p></td>
 <td><p>When sorting, below what size should data be concatenated and sorted in a single RecordBatch rather than sorted in batches and merged.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.meta_fetch_concurrency</p></td>
+<tr class="row-even"><td><p>datafusion.execution.meta_fetch_concurrency</p></td>
 <td><p>32</p></td>
 <td><p>Number of files to read in parallel when inferring schema and statistics</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.minimum_parallel_output_files</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.minimum_parallel_output_files</p></td>
 <td><p>4</p></td>
 <td><p>Guarantees a minimum level of output files running in parallel. RecordBatches will be distributed in round robin fashion to each parallel writer. Each writer is closed and a new file opened once soft_max_rows_per_output_file is reached.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.soft_max_rows_per_output_file</p></td>
+<tr class="row-even"><td><p>datafusion.execution.soft_max_rows_per_output_file</p></td>
 <td><p>50000000</p></td>
 <td><p>Target number of rows in output files when writing multiple. This is a soft max, so it can be exceeded slightly. There also will be one file smaller than the limit if the total number of rows written is not roughly divisible by the soft max</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.max_buffered_batches_per_output_file</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.max_buffered_batches_per_output_file</p></td>
 <td><p>2</p></td>
 <td><p>This is the maximum number of RecordBatches buffered for each output file being worked. Higher values can potentially give faster write performance at the cost of higher peak memory consumption</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.listing_table_ignore_subdirectory</p></td>
+<tr class="row-even"><td><p>datafusion.execution.listing_table_ignore_subdirectory</p></td>
 <td><p>true</p></td>
 <td><p>Should sub directories be ignored when scanning directories for data files. Defaults to true (ignores subdirectories), consistent with Hive. Note that this setting does not affect reading partitioned tables (e.g. <code class="docutils literal notranslate"><span class="pre">/table/year=2021/month=01/data.parquet</span></code>).</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.enable_recursive_ctes</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.enable_recursive_ctes</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion support recursive CTEs</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.split_file_groups_by_statistics</p></td>
+<tr class="row-even"><td><p>datafusion.execution.split_file_groups_by_statistics</p></td>
 <td><p>false</p></td>
 <td><p>Attempt to eliminate sorts by packing & sorting files with non-overlapping statistics into the same file groups. Currently experimental</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.keep_partition_by_columns</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.keep_partition_by_columns</p></td>
 <td><p>false</p></td>
 <td><p>Should DataFusion keep the columns used for partition_by in the output RecordBatches</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.skip_partial_aggregation_probe_ratio_threshold</p></td>
+<tr class="row-even"><td><p>datafusion.execution.skip_partial_aggregation_probe_ratio_threshold</p></td>
 <td><p>0.8</p></td>
 <td><p>Aggregation ratio (number of distinct groups / number of input rows) threshold for skipping partial aggregation. If the value is greater then partial aggregation will skip aggregation for further input</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.skip_partial_aggregation_probe_rows_threshold</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.skip_partial_aggregation_probe_rows_threshold</p></td>
 <td><p>100000</p></td>
 <td><p>Number of input rows partial aggregation partition should process, before aggregation ratio check and trying to switch to skipping aggregation mode</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.execution.use_row_number_estimates_to_optimize_partitioning</p></td>
+<tr class="row-even"><td><p>datafusion.execution.use_row_number_estimates_to_optimize_partitioning</p></td>
 <td><p>false</p></td>
 <td><p>Should DataFusion use row number estimates at the input to decide whether increasing parallelism is beneficial or not. By default, only exact row numbers (not estimates) are used for this decision. Setting this flag to <code class="docutils literal notranslate"><span class="pre">true</span></code> will likely produce better plans. if the source of statistics is accurate. We plan to make this the default in the future.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.execution.enforce_batch_size_in_joins</p></td>
+<tr class="row-odd"><td><p>datafusion.execution.enforce_batch_size_in_joins</p></td>
 <td><p>false</p></td>
 <td><p>Should DataFusion enforce batch size in joins or not. By default, DataFusion will not enforce batch size in joins. Enforcing batch size in joins can reduce memory usage when joining large tables with a highly-selective join filter, but is also slightly slower.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.enable_distinct_aggregation_soft_limit</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.enable_distinct_aggregation_soft_limit</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the optimizer will push a limit operation into grouped aggregations which have no aggregate expressions, as a soft limit, emitting groups once the limit is reached, before all rows in the group are read.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.enable_round_robin_repartition</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.enable_round_robin_repartition</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will try to add round robin repartitioning to increase parallelism to leverage more CPU cores</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.enable_topk_aggregation</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.enable_topk_aggregation</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the optimizer will attempt to perform limit operations during aggregations, if possible</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.filter_null_join_keys</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.filter_null_join_keys</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the optimizer will insert filters before a join between a nullable and non-nullable column to filter out nulls on the nullable side. This filter can add additional overhead when the file format does not fully support predicate push down.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_aggregations</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_aggregations</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the aggregate keys to execute aggregates in parallel using the provided <code class="docutils literal notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_file_min_size</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_file_min_size</p></td>
 <td><p>10485760</p></td>
 <td><p>Minimum total files size in bytes to perform file scan repartitioning.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_joins</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_joins</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the join keys to execute joins in parallel using the provided <code class="docutils literal notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.allow_symmetric_joins_without_pruning</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.allow_symmetric_joins_without_pruning</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion allow symmetric hash joins for unbounded data sources even when its inputs do not have any ordering or filtering If the flag is not enabled, the SymmetricHashJoin operator will be unable to prune its internal buffers, resulting in certain join types - such as Full, Left, LeftAnti, LeftSemi, Right, RightAnti, and RightSemi - being produced only at the end of the execution. This is not typical in stream processing. Additionally, without proper design for long runne [...]
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_file_scans</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_file_scans</p></td>
 <td><p>true</p></td>
 <td><p>When set to <code class="docutils literal notranslate"><span class="pre">true</span></code>, file groups will be repartitioned to achieve maximum parallelism. Currently Parquet and CSV formats are supported. If set to <code class="docutils literal notranslate"><span class="pre">true</span></code>, all files will be repartitioned evenly (i.e., a single large file might be partitioned into smaller chunks) for parallel scanning. If set to <code class="docutils literal notranslate"><s [...]
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.repartition_windows</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.repartition_windows</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion repartition data using the partitions keys to execute window functions in parallel using the provided <code class="docutils literal notranslate"><span class="pre">target_partitions</span></code> level</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.repartition_sorts</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.repartition_sorts</p></td>
 <td><p>true</p></td>
 <td><p>Should DataFusion execute sorts in a per-partition fashion and merge afterwards instead of coalescing first and sorting globally. With this flag is enabled, plans in the form below <code class="docutils literal notranslate"><span class="pre">text</span> <span class="pre">"SortExec:</span> <span class="pre">[a@0</span> <span class="pre">ASC]",</span> <span class="pre">"</span> <span class="pre">CoalescePartitionsExec",</span> <span class="pre">"</span> [...]
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.prefer_existing_sort</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.prefer_existing_sort</p></td>
 <td><p>false</p></td>
 <td><p>When true, DataFusion will opportunistically remove sorts when the data is already sorted, (i.e. setting <code class="docutils literal notranslate"><span class="pre">preserve_order</span></code> to true on <code class="docutils literal notranslate"><span class="pre">RepartitionExec</span></code> and using <code class="docutils literal notranslate"><span class="pre">SortPreservingMergeExec</span></code>) When false, DataFusion will maximize plan parallelism using <code class="docut [...]
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.skip_failed_rules</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.skip_failed_rules</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the logical plan optimizer will produce warning messages if any optimization rules produce errors and then proceed to the next rule. When set to false, any rules that produce errors will cause the query to fail</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.max_passes</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.max_passes</p></td>
 <td><p>3</p></td>
 <td><p>Number of times that the optimizer will attempt to optimize the plan</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.top_down_join_key_reordering</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.top_down_join_key_reordering</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will run a top down process to reorder the join keys</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.prefer_hash_join</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.prefer_hash_join</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the physical plan optimizer will prefer HashJoin over SortMergeJoin. HashJoin can work more efficiently than SortMergeJoin but consumes more memory</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.hash_join_single_partition_threshold</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.hash_join_single_partition_threshold</p></td>
 <td><p>1048576</p></td>
 <td><p>The maximum estimated size in bytes for one input side of a HashJoin will be collected into a single partition</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.hash_join_single_partition_threshold_rows</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.hash_join_single_partition_threshold_rows</p></td>
 <td><p>131072</p></td>
 <td><p>The maximum estimated size in rows for one input side of a HashJoin will be collected into a single partition</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.default_filter_selectivity</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.default_filter_selectivity</p></td>
 <td><p>20</p></td>
 <td><p>The default filter selectivity used by Filter Statistics when an exact selectivity cannot be determined. Valid values are between 0 (no selectivity) and 100 (all rows are selected).</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.optimizer.prefer_existing_union</p></td>
+<tr class="row-odd"><td><p>datafusion.optimizer.prefer_existing_union</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the optimizer will not attempt to convert Union to Interleave</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.optimizer.expand_views_at_output</p></td>
+<tr class="row-even"><td><p>datafusion.optimizer.expand_views_at_output</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, if the returned type is a view type then the output will be coerced to a non-view. Coerces <code class="docutils literal notranslate"><span class="pre">Utf8View</span></code> to <code class="docutils literal notranslate"><span class="pre">LargeUtf8</span></code>, and <code class="docutils literal notranslate"><span class="pre">BinaryView</span></code> to <code class="docutils literal notranslate"><span class="pre">LargeBinary</span></code>.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.logical_plan_only</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.logical_plan_only</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will only print logical plans</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.explain.physical_plan_only</p></td>
+<tr class="row-even"><td><p>datafusion.explain.physical_plan_only</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will only print physical plans</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.show_statistics</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.show_statistics</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will print operator statistics for physical plans</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.explain.show_sizes</p></td>
+<tr class="row-even"><td><p>datafusion.explain.show_sizes</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, the explain statement will print the partition sizes</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.explain.show_schema</p></td>
+<tr class="row-odd"><td><p>datafusion.explain.show_schema</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, the explain statement will print schema information</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.sql_parser.parse_float_as_decimal</p></td>
+<tr class="row-even"><td><p>datafusion.sql_parser.parse_float_as_decimal</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, SQL parser will parse float as decimal type</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.sql_parser.enable_ident_normalization</p></td>
+<tr class="row-odd"><td><p>datafusion.sql_parser.enable_ident_normalization</p></td>
 <td><p>true</p></td>
 <td><p>When set to true, SQL parser will normalize ident (convert ident to lowercase when not quoted)</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.sql_parser.enable_options_value_normalization</p></td>
+<tr class="row-even"><td><p>datafusion.sql_parser.enable_options_value_normalization</p></td>
 <td><p>false</p></td>
 <td><p>When set to true, SQL parser will normalize options value (convert value to lowercase). Note that this option is ignored and will be removed in the future. All case-insensitive values are normalized automatically.</p></td>
 </tr>
-<tr class="row-even"><td><p>datafusion.sql_parser.dialect</p></td>
+<tr class="row-odd"><td><p>datafusion.sql_parser.dialect</p></td>
 <td><p>generic</p></td>
 <td><p>Configure the SQL dialect used by DataFusion’s parser; supported values include: Generic, MySQL, PostgreSQL, Hive, SQLite, Snowflake, Redshift, MsSQL, ClickHouse, BigQuery, and Ansi.</p></td>
 </tr>
-<tr class="row-odd"><td><p>datafusion.sql_parser.support_varchar_with_length</p></td>
+<tr class="row-even"><td><p>datafusion.sql_parser.support_varchar_with_length</p></td>
 <td><p>true</p></td>
 <td><p>If true, permit lengths for <code class="docutils literal notranslate"><span class="pre">VARCHAR</span></code> such as <code class="docutils literal notranslate"><span class="pre">VARCHAR(20)</span></code>, but ignore the length. If false, error if a <code class="docutils literal notranslate"><span class="pre">VARCHAR</span></code> with a length is specified. The Arrow type system does not have a notion of maximum string length and thus DataFusion can not enforce such limits.</p></td>
 </tr>
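
As the hunk headers above note, environment variables are read during `SessionConfig` initialisation, so they must be set before the config is created. A short Rust sketch of that path, under the same crate-version assumption as the earlier example; `SessionConfig::from_env` and `with_information_schema` are existing `SessionConfig` APIs, and the env-var naming (upper-case the key, replace `.` with `_`) follows the convention described on the configs page:

use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // datafusion.sql_parser.dialect -> DATAFUSION_SQL_PARSER_DIALECT
    std::env::set_var("DATAFUSION_SQL_PARSER_DIALECT", "PostgreSQL");
    std::env::set_var("DATAFUSION_OPTIMIZER_MAX_PASSES", "5");

    // Env vars are read once, here; SHOW ALL needs information_schema enabled.
    let config = SessionConfig::from_env()?.with_information_schema(true);
    let ctx = SessionContext::new_with_config(config);

    // Prints the resolved settings, matching the table on this page.
    ctx.sql("SHOW ALL").await?.show().await?;
    Ok(())
}
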
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]