This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/datafusion-comet.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 5c1a052bc Publish built docs triggered by 52e6b34021b4ee803024eb3e57a35cb00315f10b
5c1a052bc is described below

commit 5c1a052bc03bf6627929261acf4a5efdf0bbf0be
Author: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
AuthorDate: Fri Nov 14 17:34:36 2025 +0000

    Publish built docs triggered by 52e6b34021b4ee803024eb3e57a35cb00315f10b
---
 _sources/user-guide/latest/compatibility.md.txt |  9 ++++-----
 _sources/user-guide/latest/configs.md.txt       |  1 +
 searchindex.js                                  |  2 +-
 user-guide/latest/compatibility.html            |  8 ++++----
 user-guide/latest/configs.html                  | 14 +++++++++-----
 5 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/_sources/user-guide/latest/compatibility.md.txt b/_sources/user-guide/latest/compatibility.md.txt
index 6042b4240..c67536c4e 100644
--- a/_sources/user-guide/latest/compatibility.md.txt
+++ b/_sources/user-guide/latest/compatibility.md.txt
@@ -47,11 +47,10 @@ Spark normalizes NaN and zero for floating point numbers for several cases. See
 However, one exception is comparison. Spark does not normalize NaN and zero when comparing values
 because they are handled well in Spark (e.g., `SQLOrderingUtil.compareFloats`). But the comparison
 functions of arrow-rs used by DataFusion do not normalize NaN and zero (e.g., [arrow::compute::kernels::cmp::eq](https://docs.rs/arrow/latest/arrow/compute/kernels/cmp/fn.eq.html#)).
-So Comet will add additional normalization expression of NaN and zero for comparison.
-
-Sorting on floating-point data types (or complex types containing floating-point values) is not compatible with
-Spark if the data contains both zero and negative zero. This is likely an edge case that is not of concern for many users
-and sorting on floating-point data can be enabled by setting `spark.comet.expression.SortOrder.allowIncompatible=true`.
+So Comet adds additional normalization expression of NaN and zero for comparisons, and may still have differences
+to Spark in some cases, especially when the data contains both positive and negative zero. This is likely an edge
+case that is not of concern for many users. If it is a concern, setting `spark.comet.exec.strictFloatingPoint=true`
+will make relevant operations fall back to Spark.
 
 ## Incompatible Expressions
 
diff --git a/_sources/user-guide/latest/configs.md.txt b/_sources/user-guide/latest/configs.md.txt
index 468ac5948..8236574b8 100644
--- a/_sources/user-guide/latest/configs.md.txt
+++ b/_sources/user-guide/latest/configs.md.txt
@@ -62,6 +62,7 @@ Comet provides the following configuration settings.
 | `spark.comet.exceptionOnDatetimeRebase` | Whether to throw exception when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps were written according to the Proleptic Gregorian calendar. When this is true, Comet will throw exceptions when seeing these dates/timestamps that were written by Spark version before 3.0. If this is false, these dates/timestamps will be read as if they were written to the Proleptic Gregorian calendar and [...]
 | `spark.comet.exec.enabled` | Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them in native space. Note: each operator is associated with a separate config in the format of `spark.comet.exec.<operator_name>.enabled` at the moment, and both the config and this need to be turned on, in order for the operator to be executed in native. | true |
 | `spark.comet.exec.replaceSortMergeJoin` | Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the [Comet Tuning Guide](https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
+| `spark.comet.exec.strictFloatingPoint` | When enabled, fall back to Spark for floating-point operations that differ from Spark, such as when comparing or sorting -0.0 and 0.0. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
 | `spark.comet.expression.allowIncompatible` | Comet is not currently fully compatible with Spark for all expressions. Set this config to true to allow them anyway. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
 | `spark.comet.maxTempDirectorySize` | The maximum amount of data (in bytes) stored inside the temporary directories. | 107374182400b |
 | `spark.comet.metrics.updateInterval` | The interval in milliseconds to update metrics. If interval is negative, metrics will be updated upon task completion. | 3000 |
diff --git a/searchindex.js b/searchindex.js
index 97ba1794f..9709b5adc 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Search.setIndex({"alltitles": {"1. Install Comet": [[18, "install-comet"]], "2. Clone Spark and Apply Diff": [[18, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[18, "run-spark-sql-tests"]], "ANSI Mode": [[21, "ansi-mode"], [34, "ansi-mode"], [74, "ansi-mode"]], "ANSI mode": [[47, "ansi-mode"], [60, "ansi-mode"]], "API Differences Between Spark Versions": [[3, "api-differences-between-spark-versions"]], "ASF Links": [[2, null], [2, null]], "Accelerating Apache Iceberg Parque [...]
\ No newline at end of file
+Search.setIndex({"alltitles": {"1. Install Comet": [[18, "install-comet"]], "2. Clone Spark and Apply Diff": [[18, "clone-spark-and-apply-diff"]], "3. Run Spark SQL Tests": [[18, "run-spark-sql-tests"]], "ANSI Mode": [[21, "ansi-mode"], [34, "ansi-mode"], [74, "ansi-mode"]], "ANSI mode": [[47, "ansi-mode"], [60, "ansi-mode"]], "API Differences Between Spark Versions": [[3, "api-differences-between-spark-versions"]], "ASF Links": [[2, null], [2, null]], "Accelerating Apache Iceberg Parque [...]
\ No newline at end of file
diff --git a/user-guide/latest/compatibility.html b/user-guide/latest/compatibility.html
index 92ea51a74..f0f61aceb 100644
--- a/user-guide/latest/compatibility.html
+++ b/user-guide/latest/compatibility.html
@@ -487,10 +487,10 @@ under the License.
 However, one exception is comparison. Spark does not normalize NaN and zero when comparing values
 because they are handled well in Spark (e.g., <code class="docutils literal notranslate"><span class="pre">SQLOrderingUtil.compareFloats</span></code>). But the comparison
 functions of arrow-rs used by DataFusion do not normalize NaN and zero (e.g., <a class="reference external" href="https://docs.rs/arrow/latest/arrow/compute/kernels/cmp/fn.eq.html#">arrow::compute::kernels::cmp::eq</a>).
-So Comet will add additional normalization expression of NaN and zero for comparison.</p>
-<p>Sorting on floating-point data types (or complex types containing floating-point values) is not compatible with
-Spark if the data contains both zero and negative zero. This is likely an edge case that is not of concern for many users
-and sorting on floating-point data can be enabled by setting <code class="docutils literal notranslate"><span class="pre">spark.comet.expression.SortOrder.allowIncompatible=true</span></code>.</p>
+So Comet adds additional normalization expression of NaN and zero for comparisons, and may still have differences
+to Spark in some cases, especially when the data contains both positive and negative zero. This is likely an edge
+case that is not of concern for many users. If it is a concern, setting <code class="docutils literal notranslate"><span class="pre">spark.comet.exec.strictFloatingPoint=true</span></code>
+will make relevant operations fall back to Spark.</p>
 </section>
 <section id="incompatible-expressions">
 <h2>Incompatible Expressions<a class="headerlink" href="#incompatible-expressions" title="Link to this heading">#</a></h2>
diff --git a/user-guide/latest/configs.html b/user-guide/latest/configs.html
index 1559e3a32..52aff947f 100644
--- a/user-guide/latest/configs.html
+++ b/user-guide/latest/configs.html
@@ -583,23 +583,27 @@ under the License.
 <td><p>Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the <a class="reference external" href="https://datafusion.apache.org/comet/user-guide/tuning.html">Comet Tuning Guide</a>.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.expression.allowIncompatible</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.exec.strictFloatingPoint</span></code></p></td>
+<td><p>When enabled, fall back to Spark for floating-point operations that differ from Spark, such as when comparing or sorting -0.0 and 0.0. For more information, refer to the <a class="reference external" href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet Compatibility Guide</a>.</p></td>
+<td><p>false</p></td>
+</tr>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.expression.allowIncompatible</span></code></p></td>
 <td><p>Comet is not currently fully compatible with Spark for all expressions. Set this config to true to allow them anyway. For more information, refer to the <a class="reference external" href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet Compatibility Guide</a>.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.maxTempDirectorySize</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.maxTempDirectorySize</span></code></p></td>
 <td><p>The maximum amount of data (in bytes) stored inside the temporary directories.</p></td>
 <td><p>107374182400b</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.metrics.updateInterval</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.metrics.updateInterval</span></code></p></td>
 <td><p>The interval in milliseconds to update metrics. If interval is negative, metrics will be updated upon task completion.</p></td>
 <td><p>3000</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.nativeLoadRequired</span></code></p></td>
+<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.nativeLoadRequired</span></code></p></td>
 <td><p>Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fallback to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted.</p></td>
 <td><p>false</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.regexp.allowIncompatible</span></code></p></td>
+<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">spark.comet.regexp.allowIncompatible</span></code></p></td>
 <td><p>Comet is not currently fully compatible with Spark for all regular expressions. Set this config to true to allow them anyway. For more information, refer to the <a class="reference external" href="https://datafusion.apache.org/comet/user-guide/compatibility.html">Comet Compatibility Guide</a>.</p></td>
 <td><p>false</p></td>
 </tr>
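The compatibility note updated above hinges on an IEEE 754 subtlety: 0.0 and -0.0 compare equal numerically but are distinct values with different bit patterns, so an engine that compares floats numerically (as Spark does) and one that compares raw values can disagree. A minimal, Spark-independent Python sketch of that subtlety (the `bits` helper is illustrative only, not part of Comet):

```python
import math
import struct

def bits(x: float) -> int:
    """Raw IEEE 754 bit pattern of a double (illustrative helper)."""
    return struct.unpack(">Q", struct.pack(">d", x))[0]

# Numerically, IEEE 754 says the two zeros are equal...
assert 0.0 == -0.0

# ...but they are distinct values with different bit patterns,
# so a comparison over raw values can tell them apart.
assert bits(0.0) == 0x0000000000000000
assert bits(-0.0) == 0x8000000000000000

# The sign of zero survives inspection:
assert math.copysign(1.0, -0.0) == -1.0

# Hence a column holding both zeros has one distinct value under
# numeric equality, but two under a bit-level view:
assert len({v for v in [0.0, -0.0]}) == 1        # numeric eq/hash
assert len({bits(v) for v in [0.0, -0.0]}) == 2  # raw bits
```

Normalizing -0.0 to 0.0 (and NaN payloads to a canonical NaN) before comparison, as the patch describes Comet doing, collapses the two representations so both engines observe a single value.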


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
