[ https://issues.apache.org/jira/browse/SPARK-39845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navin Kumar updated SPARK-39845:
--------------------------------
    Description: 
This is a continuation of the issue described in SPARK-32110.

The array set-based functions {{array_union}}, {{array_intersect}}, 
{{array_except}}, and {{arrays_overlap}} handle {{0.0}} and {{-0.0}} 
inconsistently.

When parsed, the literal {{-0.0}} is normalized to {{0.0}}. Therefore, if I 
call {{array_union}}, for example, with these literals directly, 
{{array(-0.0)}} becomes {{array(0.0)}}. See the example below using 
{{array_union}}:

{code:java}
scala> val df = spark.sql("SELECT array_union(array(0.0), array(-0.0))")
df: org.apache.spark.sql.DataFrame = [array_union(array(0.0), array(0.0)): array<decimal(1,1)>]

scala> df.collect()
res2: Array[org.apache.spark.sql.Row] = Array([WrappedArray(0.0)])
{code}

In this case, {{0.0}} and {{-0.0}} are considered equal and the union of the 
arrays produces a single value: {{0.0}}.

However, if I try the same operation on a constructed DataFrame, these values 
are not treated as equal, and the result is an array containing both {{0.0}} 
and {{-0.0}}:

{code:java}
scala> val df = List((Array(0.0), Array(-0.0))).toDF("a", "b")
df: org.apache.spark.sql.DataFrame = [a: array<double>, b: array<double>]

scala> df.selectExpr("array_union(a, b)").collect()
res3: Array[org.apache.spark.sql.Row] = Array([WrappedArray(0.0, -0.0)])
{code}

{{arrays_overlap}} shows the same inconsistency:

{code:java}
scala> val df = spark.sql("SELECT arrays_overlap(array(0.0), array(-0.0))")
df: org.apache.spark.sql.DataFrame = [arrays_overlap(array(0.0), array(0.0)): boolean]

scala> df.collect
res4: Array[org.apache.spark.sql.Row] = Array([true])
{code}

{code:java}
scala> val df = List((Array(0.0), Array(-0.0))).toDF("a", "b")
df: org.apache.spark.sql.DataFrame = [a: array<double>, b: array<double>]

scala> df.selectExpr("arrays_overlap(a, b)")
res5: org.apache.spark.sql.DataFrame = [arrays_overlap(a, b): boolean]

scala> df.selectExpr("arrays_overlap(a, b)").collect
res6: Array[org.apache.spark.sql.Row] = Array([false])
{code}

This appears to happen because, in the constructed DataFrame case, the Double 
value is hashed with {{java.lang.Double.doubleToLongBits}}, which treats 
{{0.0}} and {{-0.0}} as distinct because their sign bits differ.
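
The bit-level difference is easy to verify in the REPL: the two zeros compare 
equal, but their raw IEEE-754 bit patterns (and therefore any hash derived 
from those bits) differ:

{code:java}
scala> 0.0 == -0.0
res0: Boolean = true

scala> java.lang.Double.doubleToLongBits(0.0)
res1: Long = 0

scala> java.lang.Double.doubleToLongBits(-0.0)
res2: Long = -9223372036854775808
{code}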

See here for more information: 
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala#L312-L321
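
Paraphrasing the hashers defined at that link (a rough sketch, not the 
verbatim source): both specializations derive the hash directly from the raw 
bits, so the sign bit of {{-0.0}} yields a different hash than {{0.0}}:

{code:java}
// Rough sketch of the bit-based hashing; see the link above for the actual code.
def hashDouble(d: Double): Int = {
  val bits = java.lang.Double.doubleToLongBits(d)
  (bits ^ (bits >>> 32)).toInt  // same folding as java.lang.Double.hashCode
}

def hashFloat(f: Float): Int = java.lang.Float.floatToIntBits(f)
{code}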

I can also confirm that the same behavior occurs with {{FloatType}}, which is 
hashed with {{java.lang.Float.floatToIntBits}}.
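
The analogous REPL check for floats, where the sign bit of {{-0.0f}} lands in 
the most significant bit of the {{Int}}:

{code:java}
scala> 0.0f == -0.0f
res0: Boolean = true

scala> java.lang.Float.floatToIntBits(0.0f)
res1: Int = 0

scala> java.lang.Float.floatToIntBits(-0.0f)
res2: Int = -2147483648
{code}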


> 0.0 and -0.0 are not consistent in set operations 
> --------------------------------------------------
>
>                 Key: SPARK-39845
>                 URL: https://issues.apache.org/jira/browse/SPARK-39845
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.2.1
>            Reporter: Navin Kumar
>            Priority: Major
>


