[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/17510



[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-03 Thread adrian-ionescu
Github user adrian-ionescu commented on a diff in the pull request:

https://github.com/apache/spark/pull/17510#discussion_r109388628
  
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1039,37 +1039,14 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
       defaultTimeZoneId: String): Seq[CatalogTablePartition] = withClient {
     val rawTable = getRawTable(db, table)
     val catalogTable = restoreTableMetadata(rawTable)
-    val partitionColumnNames = catalogTable.partitionColumnNames.toSet
-    val nonPartitionPruningPredicates = predicates.filterNot {
-      _.references.map(_.name).toSet.subsetOf(partitionColumnNames)
-    }

-    if (nonPartitionPruningPredicates.nonEmpty) {
-      sys.error("Expected only partition pruning predicates: " +
-        predicates.reduceLeft(And))
-    }
+    val partColNameMap = buildLowerCasePartColNameMap(catalogTable)

-    val partitionSchema = catalogTable.partitionSchema
-    val partColNameMap = buildLowerCasePartColNameMap(getTable(db, table))
-
-    if (predicates.nonEmpty) {
-      val clientPrunedPartitions = client.getPartitionsByFilter(rawTable, predicates).map { part =>
+    val clientPrunedPartitions =
+      client.getPartitionsByFilter(rawTable, predicates).map { part =>
--- End diff --

A similar optimization is done in the function itself: [client.getPartitionsByFilter()](https://github.com/adrian-ionescu/apache-spark/blob/d6bdcf480cbd1a4e3dff51bdd299e8eea6f4306d/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala#L617-L628), while the Hive `Shim_v0_12` delegates to `getAllPartitions` anyway.

I've now made the only piece of non-trivial code along that path lazy, so I think we're good.
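
To illustrate the kind of laziness meant here, a minimal self-contained sketch (all names below are stand-ins, not the actual `HiveShim` code):
```
object LazyFilterSketch {
  // Stand-in for the non-trivial predicate-to-Hive-filter translation.
  def convertFilters(predicates: Seq[String]): String = {
    println("expensive conversion ran")
    predicates.mkString(" and ")
  }

  def prunedPartitions(all: Seq[String], predicates: Seq[String]): Seq[String] = {
    // Declared lazily: a code path that never reads it (like Shim_v0_12
    // falling back to "get all partitions") never pays for the conversion.
    lazy val filterString = convertFilters(predicates)
    if (predicates.isEmpty) all
    else all.filter(_ == filterString) // placeholder for the filtered metastore call
  }

  def main(args: Array[String]): Unit = {
    // With no predicates, "expensive conversion ran" is never printed.
    println(prunedPartitions(Seq("ds=1", "ds=2"), Nil))
  }
}
```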





[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-03 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/17510#discussion_r109374888
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala ---
@@ -436,6 +439,42 @@ abstract class ExternalCatalogSuite extends SparkFunSuite with BeforeAndAfterEac
     assert(catalog.listPartitions("db2", "tbl2", Some(Map("a" -> "unknown"))).isEmpty)
   }

+  test("list partitions by filter") {
+    val tz = TimeZone.getDefault.getID
+    val catalog = newBasicCatalog()
+
+    def checkAnswer(table: CatalogTable, filters: Seq[Expression],
+        expected: Set[CatalogTablePartition]): Unit = {
--- End diff --

nit: code style — when the parameters don't fit on one line, put each on its own line with a 4-space indent:
```
def checkAnswer(
    table: CatalogTable,
    filters: Seq[Expression],
    expected: Set[CatalogTablePartition]): Unit = {
```





[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-03 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/17510#discussion_r109374757
  
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1039,37 +1039,14 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
       defaultTimeZoneId: String): Seq[CatalogTablePartition] = withClient {
     val rawTable = getRawTable(db, table)
     val catalogTable = restoreTableMetadata(rawTable)
-    val partitionColumnNames = catalogTable.partitionColumnNames.toSet
-    val nonPartitionPruningPredicates = predicates.filterNot {
-      _.references.map(_.name).toSet.subsetOf(partitionColumnNames)
-    }

-    if (nonPartitionPruningPredicates.nonEmpty) {
-      sys.error("Expected only partition pruning predicates: " +
-        predicates.reduceLeft(And))
-    }
+    val partColNameMap = buildLowerCasePartColNameMap(catalogTable)

-    val partitionSchema = catalogTable.partitionSchema
-    val partColNameMap = buildLowerCasePartColNameMap(getTable(db, table))
-
-    if (predicates.nonEmpty) {
-      val clientPrunedPartitions = client.getPartitionsByFilter(rawTable, predicates).map { part =>
+    val clientPrunedPartitions =
+      client.getPartitionsByFilter(rawTable, predicates).map { part =>
--- End diff --

if `predicates.isEmpty`, the previous code will run `client.getPartitions`. 
Can you double check there is no performance regression?
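
To make the question concrete, a sketch of the fast path the previous code effectively had (types below are stand-ins, not the real `HiveClient` API):
```
case class Partition(spec: Map[String, String])

trait Client {
  def getPartitions(table: String): Seq[Partition]
  def getPartitionsByFilter(table: String, predicates: Seq[String]): Seq[Partition]
}

// Old behavior, spelled out: empty predicates skip filter conversion entirely.
def listPartitions(client: Client, table: String, predicates: Seq[String]): Seq[Partition] =
  if (predicates.isEmpty) client.getPartitions(table)
  else client.getPartitionsByFilter(table, predicates)
```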





[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-02 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/17510#discussion_r109314055
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogUtils.scala ---
@@ -125,6 +126,37 @@ object ExternalCatalogUtils {
     }
     escapePathName(col) + "=" + partitionString
   }
+
+  def prunePartitionsByFilter(
+      catalogTable: CatalogTable,
+      inputPartitions: Seq[CatalogTablePartition],
+      predicates: Seq[Expression],
+      defaultTimeZoneId: String): Seq[CatalogTablePartition] = {
+    if (predicates.isEmpty) {
+      inputPartitions
+    } else {
+      val partitionSchema = catalogTable.partitionSchema
+      val partitionColumnNames = catalogTable.partitionColumnNames.toSet
+
+      val nonPartitionPruningPredicates = predicates.filterNot {
+        _.references.map(_.name).toSet.subsetOf(partitionColumnNames)
+      }
+      if (nonPartitionPruningPredicates.nonEmpty) {
+        sys.error("Expected only partition pruning predicates: " + nonPartitionPruningPredicates)
--- End diff --

Could you add negative test cases to the newly added tests?
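
A hypothetical shape for such a case, reusing the suite's `newBasicCatalog()`/`db2`/`tbl2` fixtures and assuming `org.apache.spark.sql.catalyst.expressions._` and `org.apache.spark.sql.types.IntegerType` are in scope (the `sys.error` above surfaces as a `RuntimeException`):
```
test("listPartitionsByFilter rejects non-partition predicates") {
  val catalog = newBasicCatalog()
  // "notAPartCol" is a made-up data column, i.e. not a partition column.
  val nonPartitionFilter = Seq(
    EqualTo(AttributeReference("notAPartCol", IntegerType)(), Literal(1)))
  intercept[RuntimeException] { // would become AnalysisException per the other review nit
    catalog.listPartitionsByFilter(
      "db2", "tbl2", nonPartitionFilter, TimeZone.getDefault.getID)
  }
}
```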





[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-02 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/17510#discussion_r109313930
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala ---
@@ -436,6 +438,37 @@ abstract class ExternalCatalogSuite extends SparkFunSuite with BeforeAndAfterEac
     assert(catalog.listPartitions("db2", "tbl2", Some(Map("a" -> "unknown"))).isEmpty)
   }

+  test("list partitions by filter") {
+    val tz = TimeZone.getDefault().getID()
--- End diff --

Nit: `val tz = TimeZone.getDefault.getID`





[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-02 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/17510#discussion_r109313904
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogUtils.scala ---
@@ -125,6 +126,37 @@ object ExternalCatalogUtils {
     }
     escapePathName(col) + "=" + partitionString
   }
+
+  def prunePartitionsByFilter(
+      catalogTable: CatalogTable,
+      inputPartitions: Seq[CatalogTablePartition],
+      predicates: Seq[Expression],
+      defaultTimeZoneId: String): Seq[CatalogTablePartition] = {
+    if (predicates.isEmpty) {
+      inputPartitions
+    } else {
+      val partitionSchema = catalogTable.partitionSchema
+      val partitionColumnNames = catalogTable.partitionColumnNames.toSet
+
+      val nonPartitionPruningPredicates = predicates.filterNot {
+        _.references.map(_.name).toSet.subsetOf(partitionColumnNames)
+      }
+      if (nonPartitionPruningPredicates.nonEmpty) {
+        sys.error("Expected only partition pruning predicates: " + nonPartitionPruningPredicates)
--- End diff --

Nit: Throwing an `AnalysisException` is preferred. 
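
Concretely, that would replace the `sys.error` call above with something like (message text kept from the diff):
```
if (nonPartitionPruningPredicates.nonEmpty) {
  throw new AnalysisException(
    "Expected only partition pruning predicates: " + nonPartitionPruningPredicates)
}
```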





[GitHub] spark pull request #17510: [SPARK-20194] Add support for partition pruning t...

2017-04-02 Thread adrian-ionescu
GitHub user adrian-ionescu opened a pull request:

https://github.com/apache/spark/pull/17510

[SPARK-20194] Add support for partition pruning to in-memory catalog

## What changes were proposed in this pull request?
This patch implements `listPartitionsByFilter()` for `InMemoryCatalog`, resolving an outstanding TODO that prevented the `PruneFileSourcePartitions` optimizer rule from applying when "spark.sql.catalogImplementation" is set to "in-memory" (the default).

The change is straightforward: the code that further filters the partition list returned by the metastore's `getPartitionsByFilter()` is extracted from `HiveExternalCatalog` into `ExternalCatalogUtils`, and `InMemoryCatalog` now calls this new function on its full list of partitions, as sketched below.
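
Under that description, the `InMemoryCatalog` side reduces to roughly the following (a simplified sketch of the approach, not the exact final code):
```
override def listPartitionsByFilter(
    db: String,
    table: String,
    predicates: Seq[Expression],
    defaultTimeZoneId: String): Seq[CatalogTablePartition] = {
  val catalogTable = getTable(db, table)
  // Fetch the whole partition list, then delegate pruning to the shared helper.
  val allPartitions = listPartitions(db, table)
  prunePartitionsByFilter(catalogTable, allPartitions, predicates, defaultTimeZoneId)
}
```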

Now that this method is implemented, we can always pass the `CatalogTable` to the `DataSource` in `FindDataSourceTable`, so that the latter resolves to a relation with a `CatalogFileIndex`, which is what the `PruneFileSourcePartitions` rule matches on.
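
As a hypothetical end-to-end check (table and column names are illustrative; exact plan text varies by build), a partitioned data source table should now show pushed partition filters in its scan even with the default in-memory catalog:
```
spark.range(10).selectExpr("id", "id % 2 AS p")
  .write.partitionBy("p").saveAsTable("t")
// With pruning applied, the scan node should report PartitionFilters
// including (p = 0) instead of reading every partition.
spark.table("t").where("p = 0").explain()
```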

## How was this patch tested?
Ran existing tests and added a new test for `listPartitionsByFilter` in `ExternalCatalogSuite`, which is subclassed by both `InMemoryCatalogSuite` and `HiveExternalCatalogSuite`.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/adrian-ionescu/apache-spark InMemoryCatalog

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/17510.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17510


commit 3b031c75ba35324242993e8742961d23ce087445
Author: Adrian Ionescu 
Date:   2017-04-02T13:01:41Z

Implement listPartitionsByFilter() for InMemoryCatalog



