[ https://issues.apache.org/jira/browse/SPARK-23304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349436#comment-16349436 ]

Xiao Li commented on SPARK-23304:
---------------------------------

In this release, we also changed the default of another SQLConf, 
`spark.sql.hive.convertMetastoreOrc`. Could you set this conf to `false` and 
rerun the query in the 2.3 release?
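
For reference, one way to do that from the spark-shell could look like the minimal 
sketch below (the table and column names are copied from the description; a 2.3 
session with `spark` as the active SparkSession is assumed):

```scala
// Run in a 2.3 spark-shell, where `spark` is the active SparkSession.
// Disable the metastore ORC conversion that is now on by default,
// then rerun the reported query.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "false")

spark.sql(
  """SELECT COUNT(DISTINCT(something)) FROM sometable
    |WHERE dt >= '20170301' AND dt <= '20170331'
    |AND something IS NOT NULL""".stripMargin
).coalesce(160000).show()
```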

 

I am also wondering whether the following predicate is part of your original 
query; I am unable to reproduce this one:

```NOT (something#226 = 00000000000000000000000000000000)))```
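
For illustration only (this is not taken from the original report), a filter of 
that shape would typically come from a not-equals predicate in the WHERE clause, 
e.g.:

```scala
// Hypothetical query, just to show where such a filter would come from:
// a `<>` comparison shows up as NOT (something#... = ...) in the plan.
spark.sql(
  """SELECT COUNT(DISTINCT(something)) FROM sometable
    |WHERE something <> '00000000000000000000000000000000'""".stripMargin
).explain()
```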

 

> Spark SQL coalesce() against hive not working
> ---------------------------------------------
>
>                 Key: SPARK-23304
>                 URL: https://issues.apache.org/jira/browse/SPARK-23304
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: Thomas Graves
>            Assignee: Xiao Li
>            Priority: Major
>         Attachments: spark22_oldorc_explain.txt, spark23_oldorc_explain.txt
>
>
> The query below seems to ignore the coalesce. This happens running Spark 2.2 or 
> Spark 2.3 against Hive, reading ORC:
>  
>  Query:
>  spark.sql("SELECT COUNT(DISTINCT(something)) FROM sometable WHERE dt >= 
> '20170301' AND dt <= '20170331' AND something IS NOT 
> NULL").coalesce(160000).show()
>   


