cfmcgrady commented on a change in pull request #32488:
URL: https://github.com/apache/spark/pull/32488#discussion_r638534679
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/UnwrapCastInBinaryComparison.scala
##########
@@ -21,15 +21,15 @@ import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.expressions.Literal.FalseLiteral
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule
-import org.apache.spark.sql.catalyst.trees.TreePattern.BINARY_COMPARISON
+import org.apache.spark.sql.catalyst.trees.TreePattern.{BINARY_COMPARISON, IN}
import org.apache.spark.sql.types._
/**
- * Unwrap casts in binary comparison operations with patterns like following:
+ * Unwrap casts in binary comparison or `In` operations with patterns like following:
*
- * `BinaryComparison(Cast(fromExp, toType), Literal(value, toType))`
- * or
- * `BinaryComparison(Literal(value, toType), Cast(fromExp, toType))`
+ * - `BinaryComparison(Cast(fromExp, toType), Literal(value, toType))`
+ * - `BinaryComparison(Literal(value, toType), Cast(fromExp, toType))`
+ * - `In(Cast(fromExp, toType), Seq(Literal(v1, toType), Literal(v2, toType), ...)`
Review comment:
Yes, it does.
```scala
import org.apache.spark.sql.catalyst.expressions.Literal
import spark.implicits._ // for the $"id" syntax (automatic in spark-shell)

spark.range(50)
  .selectExpr("cast(id as int) as id")
  .write
  .mode("overwrite")
  .parquet("/tmp/parquet/t1")

spark.sql("SET spark.sql.planChangeLog.level=WARN")

// bigint literals, so the int column ends up wrapped in a cast to bigint
val in = (1 to 20).map(i => Literal.create(i.toLong))

spark.read
  .load("/tmp/parquet/t1")
  .filter($"id".isin(in: _*))
  .explain
```
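The same shape also falls out of plain SQL when the IN-list literals are bigints (a sketch, assuming the same /tmp/parquet/t1 data; the temp view name is just for illustration):
```scala
// The implicit cast of the int column to bigint comes from the L-suffixed (bigint) IN-list literals.
spark.read.parquet("/tmp/parquet/t1").createOrReplaceTempView("t1")
spark.sql("SELECT * FROM t1 WHERE id IN (1L, 2L, 3L)").explain()
```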
Before this PR:
```
== Physical Plan ==
*(1) Filter cast(id#105 as bigint) INSET (5,10,14,20,1,6,9,13,2,17,12,7,3,18,16,11,8,19,4,15)
+- *(1) ColumnarToRow
   +- FileScan parquet [id#105] Batched: true, DataFilters: [cast(id#105 as bigint) INSET (5,10,14,20,1,6,9,13,2,17,12,7,3,18,16,11,8,19,4,15)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/parquet/t1], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int>
```
After this PR:
```
15:33:02.191 WARN org.apache.spark.sql.catalyst.rules.PlanChangeLogger:
=== Applying Rule org.apache.spark.sql.catalyst.optimizer.UnwrapCastInBinaryComparison ===
!Filter cast(id#105 as bigint) IN (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)   Filter id#105 IN (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)
 +- Relation [id#105] parquet                                                              +- Relation [id#105] parquet

15:33:02.197 WARN org.apache.spark.sql.catalyst.rules.PlanChangeLogger:
=== Applying Rule org.apache.spark.sql.catalyst.optimizer.OptimizeIn ===
!Filter id#105 IN (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)   Filter id#105 INSET (5,10,14,20,1,6,9,13,2,17,12,7,3,18,16,11,8,19,4,15)
 +- Relation [id#105] parquet                                              +- Relation [id#105] parquet

== Physical Plan ==
*(1) Filter id#105 INSET (5,10,14,20,1,6,9,13,2,17,12,7,3,18,16,11,8,19,4,15)
+- *(1) ColumnarToRow
   +- FileScan parquet [id#105] Batched: true, DataFilters: [id#105 INSET (5,10,14,20,1,6,9,13,2,17,12,7,3,18,16,11,8,19,4,15)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/parquet/t1], PartitionFilters: [], PushedFilters: [In(id, [5,10,14,20,1,6,9,13,2,17,12,7,3,18,16,11,8,19,4,15])], ReadSchema: struct<id:int>
```
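For reference, a minimal sketch of the rewrite that the PlanChangeLogger output shows for the `In` pattern, built directly from catalyst expressions (the attribute and values are assumed for illustration; this is not the rule's implementation):
```scala
import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Cast, In, Literal}
import org.apache.spark.sql.types.{IntegerType, LongType}

// an int column that gets promoted to bigint when compared against long literals
val id = AttributeReference("id", IntegerType)()

// pattern before the rule fires: In(Cast(fromExp, toType), Seq(Literal(v1, toType), ...))
val before = In(Cast(id, LongType), Seq(Literal(1L), Literal(2L), Literal(3L)))

// after unwrapping: the cast on the attribute is gone and the literals are
// narrowed to the attribute's type, so the predicate can be pushed down to Parquet
val after = In(id, Seq(Literal(1), Literal(2), Literal(3)))
```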