[jira] [Assigned] (SPARK-24879) NPE in Hive partition filter pushdown for `partCol IN (NULL, ....)`

2018-07-20 Thread Xiao Li (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-24879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Li reassigned SPARK-24879:
---

Assignee: William Sheu

> NPE in Hive partition filter pushdown for `partCol IN (NULL, ....)`
> ---
>
> Key: SPARK-24879
> URL: https://issues.apache.org/jira/browse/SPARK-24879
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.3.0, 2.3.1
> Reporter: William Sheu
> Assignee: William Sheu
> Priority: Major
> Fix For: 2.3.2, 2.4.0
>
>
> The following query triggers an NPE:
> {code:java}
> create table foo (col1 int) partitioned by (col2 int);
> select * from foo where col2 in (1, NULL);
> {code}
> We try to push down the filter to Hive in order to do partition pruning, but the filter converter breaks on the `null` literal.
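> For context, here is a minimal, self-contained sketch of the failure mode and the obvious guard, assuming a literal extractor shaped like the one in HiveShim.scala (the types and names below are illustrative, not Spark's actual classes):
> {code:scala}
> // Toy expression tree standing in for Catalyst's (illustrative only).
> sealed trait Expression
> case class Literal(value: Any) extends Expression
> case class Column(name: String) extends Expression
>
> // An extractor in the style of Shim_v0_13's ExtractableLiteral: it renders
> // a literal as the string Hive's filter grammar expects. Without the first
> // case, `v.toString` throws NullPointerException when the literal's value
> // is null -- exactly what `col2 IN (1, NULL)` feeds it for the NULL element.
> object ExtractableLiteral {
>   def unapply(expr: Expression): Option[String] = expr match {
>     case Literal(null) => None // skip null literals instead of converting them
>     case Literal(v)    => Some(v.toString)
>     case _             => None
>   }
> }
>
> object Repro extends App {
>   Seq(Literal(1), Literal(null), Column("col2")).foreach {
>     case ExtractableLiteral(s) => println(s"pushed down: $s")
>     case _                     => println("skipped: not convertible to a Hive filter string")
>   }
> }
> {code}
> With the `Literal(null)` guard the NULL element is simply not pushed down; without it, the second element reproduces the NullPointerException in the stack below.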
> Here's the stack:
> {code:java}
> java.lang.NullPointerException
> at org.apache.spark.sql.hive.client.Shim_v0_13$ExtractableLiteral$2$.unapply(HiveShim.scala:601)
> at org.apache.spark.sql.hive.client.Shim_v0_13$ExtractableLiterals$2$$anonfun$5.apply(HiveShim.scala:609)
> at org.apache.spark.sql.hive.client.Shim_v0_13$ExtractableLiterals$2$$anonfun$5.apply(HiveShim.scala:609)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.spark.sql.hive.client.Shim_v0_13$ExtractableLiterals$2$.unapply(HiveShim.scala:609)
> at org.apache.spark.sql.hive.client.Shim_v0_13.org$apache$spark$sql$hive$client$Shim_v0_13$$convert$1(HiveShim.scala:671)
> at org.apache.spark.sql.hive.client.Shim_v0_13$$anonfun$convertFilters$1.apply(HiveShim.scala:704)
> at org.apache.spark.sql.hive.client.Shim_v0_13$$anonfun$convertFilters$1.apply(HiveShim.scala:704)
> at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
> at scala.collection.immutable.List.flatMap(List.scala:355)
> at org.apache.spark.sql.hive.client.Shim_v0_13.convertFilters(HiveShim.scala:704)
> at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:725)
> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:678)
> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:676)
> at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:275)
> at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:213)
> at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:212)
> at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:258)
> at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:676)
> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1221)
> at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1214)
> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
> at org.apache.spark.sql.hive.HiveExternalCatalog.listPartitionsByFilter(HiveExternalCatalog.scala:1214)
> at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.listPartitionsByFilter(ExternalCatalogWithListener.scala:254)
> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listPartitionsByFilter(SessionCatalog.scala:955)
> at org.apache.spark.sql.hive.execution.HiveTableScanExec.rawPartitions$lzycompute(HiveTableScanExec.scala:172)
> at org.apache.spark.sql.hive.execution.HiveTableScanExec.rawPartitions(HiveTableScanExec.scala:164)
> at org.apache.spark.sql.hive.execution.HiveTableScanExec$$anonfun$11.apply(HiveTableScanExec.scala:190)
> at org.apache.spark.sql.hive.execution.HiveTableScanExec$$anonfun$11.apply(HiveTableScanExec.scala:190)
> at org.apache.spark.util.Utils$.withDummyCallSite(Utils.scala:2418)
> at org.apache.spark.sql.hive.execution.HiveTableScanExec.doExecute(HiveTableScanExec.scala:189)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
> {code}

[jira] [Assigned] (SPARK-24879) NPE in Hive partition filter pushdown for `partCol IN (NULL, ....)`

2018-07-20 Thread Apache Spark (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-24879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-24879:


Assignee: (was: Apache Spark)

> NPE in Hive partition filter pushdown for `partCol IN (NULL, ....)`
> ---
>
> Key: SPARK-24879
> URL: https://issues.apache.org/jira/browse/SPARK-24879
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.3.0, 2.3.1
> Reporter: William Sheu
> Priority: Major
>

[jira] [Assigned] (SPARK-24879) NPE in Hive partition filter pushdown for `partCol IN (NULL, ....)`

2018-07-20 Thread Apache Spark (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-24879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-24879:


Assignee: Apache Spark

> NPE in Hive partition filter pushdown for `partCol IN (NULL, ....)`
> ---
>
> Key: SPARK-24879
> URL: https://issues.apache.org/jira/browse/SPARK-24879
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.3.0, 2.3.1
> Reporter: William Sheu
> Assignee: Apache Spark
> Priority: Major
>