[ 
https://issues.apache.org/jira/browse/SPARK-32672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17182345#comment-17182345
 ] 

Robert Joseph Evans commented on SPARK-32672:
---------------------------------------------

Honestly, what happened is not a big deal.  I have worked on enough 
open-source projects to know that all of this is a best effort run by 
volunteers.  Plus, my involvement in the Spark project has not been frequent 
enough for a lot of people to know that I am a PMC member, and honestly I have 
not been involved enough lately to know the process myself.  So I am happy to 
have people correct me or treat me as a contributor instead. The 
important thing is that we fixed the bug, and it should start rolling out soon.

> Data corruption in some cached compressed boolean columns
> ---------------------------------------------------------
>
>                 Key: SPARK-32672
>                 URL: https://issues.apache.org/jira/browse/SPARK-32672
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.4, 2.4.6, 3.0.0, 3.0.1, 3.1.0
>            Reporter: Robert Joseph Evans
>            Assignee: Robert Joseph Evans
>            Priority: Blocker
>              Labels: correctness
>             Fix For: 2.4.7, 3.0.1, 3.1.0
>
>         Attachments: bad_order.snappy.parquet, small_bad.snappy.parquet
>
>
> I found that when sorting some boolean data into the cache, the results 
> can change when the data is read back out.
> It needs to be a non-trivial amount of data, and it is highly dependent on 
> the order of the data.  If I disable compression in the cache, the issue goes 
> away.  I was able to make this happen in 3.0.0, and I am going to try to 
> reproduce it in other versions too.
> I'll attach the parquet file with boolean data in an order that causes this 
> to happen. As you can see, after the data is cached a single null value 
> switches over to false.
> {code}
> scala> val bad_order = spark.read.parquet("./bad_order.snappy.parquet")
> bad_order: org.apache.spark.sql.DataFrame = [b: boolean]
> scala> bad_order.groupBy("b").count.show
> +-----+-----+
> |    b|count|
> +-----+-----+
> | null| 7153|
> | true|54334|
> |false|54021|
> +-----+-----+
> scala> bad_order.cache()
> res1: bad_order.type = [b: boolean]
> scala> bad_order.groupBy("b").count.show
> +-----+-----+
> |    b|count|
> +-----+-----+
> | null| 7152|
> | true|54334|
> |false|54022|
> +-----+-----+
> scala> 
> {code}
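> As a workaround sketch (not part of the original report): the issue goes 
> away with cache compression disabled, which corresponds to the 
> {{spark.sql.inMemoryColumnarStorage.compressed}} SQL conf. Assuming a 
> standard spark-shell session, something like the following should sidestep 
> the corrupt path until a fixed release is available:
> {code}
> // Workaround sketch: disable in-memory columnar compression before caching.
> // spark.sql.inMemoryColumnarStorage.compressed defaults to true.
> spark.conf.set("spark.sql.inMemoryColumnarStorage.compressed", "false")
> val bad_order = spark.read.parquet("./bad_order.snappy.parquet")
> bad_order.cache()
> // With compression off, the cached counts should match the uncached ones.
> bad_order.groupBy("b").count.show
> {code}
> Note this trades cache memory footprint for correctness, so it is only a 
> stopgap until the fixed versions in "Fix For" ship.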



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
