This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 8fc968ad0 Publish built docs triggered by bab70d2a9d2760fdf71a7749a568135d60272648
8fc968ad0 is described below

commit 8fc968ad0c9d939939ff56c86dbc8775b884663b
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Wed May 28 18:19:29 2025 +0000

    Publish built docs triggered by bab70d2a9d2760fdf71a7749a568135d60272648
---
 _sources/user-guide/compatibility.md.txt | 1 +
 searchindex.js                           | 2 +-
 user-guide/compatibility.html            | 1 +
 3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/_sources/user-guide/compatibility.md.txt b/_sources/user-guide/compatibility.md.txt
index 4316c1c8c..b76e2616c 100644
--- a/_sources/user-guide/compatibility.md.txt
+++ b/_sources/user-guide/compatibility.md.txt
@@ -65,6 +65,7 @@ types (regardless of the logical type). This behavior can be disabled by setting
 - There is a known performance issue when pushing filters down to Parquet. See the [Comet Tuning Guide] for more
 information.
 - There are failures in the Spark SQL test suite when enabling these new scans (tracking issues: [#1542] and [#1545]).
+- No support for default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.
 
 [#1545]: https://github.com/apache/datafusion-comet/issues/1545
 [#1542]: https://github.com/apache/datafusion-comet/issues/1542
diff --git a/searchindex.js b/searchindex.js
index 7e502dc3c..8b488d8a5 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[11, "install-comet"]], "2. Clone Spark and Apply Diff": [[11, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[11, "run-spark-sql-tests"]], "ANSI mode": [[14, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "ASF Links": [[13, null]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[19, null]], "Adding Spark-side Tests for the New Expression":  [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[11, "install-comet"]], "2. Clone Spark and Apply Diff": [[11, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[11, "run-spark-sql-tests"]], "ANSI mode": [[14, "ansi-mode"]], "API Differences Between Spark Versions": [[0, "api-differences-between-spark-versions"]], "ASF Links": [[13, null]], "Accelerating Apache Iceberg Parquet Scans using Comet (Experimental)": [[19, null]], "Adding Spark-side Tests for the New Expression":  [...]
\ No newline at end of file
diff --git a/user-guide/compatibility.html b/user-guide/compatibility.html
index c7e07b8fc..90aec5298 100644
--- a/user-guide/compatibility.html
+++ b/user-guide/compatibility.html
@@ -456,6 +456,7 @@ types (regardless of the logical type). This behavior can be disabled by setting
 <li><p>There is a known performance issue when pushing filters down to Parquet. See the <a class="reference internal" href="tuning.html"><span class="std std-doc">Comet Tuning Guide</span></a> for more
 information.</p></li>
 <li><p>There are failures in the Spark SQL test suite when enabling these new scans (tracking issues: <a class="reference external" href="https://github.com/apache/datafusion-comet/issues/1542">#1542</a> and <a class="reference external" href="https://github.com/apache/datafusion-comet/issues/1545">#1545</a>).</p></li>
+<li><p>No support for default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.</p></li>
 </ul>
 </section>
 <section id="ansi-mode">

