Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19494#discussion_r144713232
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -104,7 +104,8 @@ case class InMemoryTableScanExec(
       case In(a: AttributeReference, list: Seq[Expression]) if
         list.forall(_.isInstanceOf[Literal]) =>
         list.map(l => statsFor(a).lowerBound <= l.asInstanceOf[Literal] &&
-          l.asInstanceOf[Literal] <= statsFor(a).upperBound).reduce(_ || _)
+          l.asInstanceOf[Literal] <= statsFor(a).upperBound)
--- End diff ---
I think this version is a bit more elegant and concise as code style, but your idea may be
more efficient (though I am not sure how much overhead a `False` evaluation introduces).
I am updating the PR according to your suggestion, thanks.
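
To make the trade-off concrete, here is a minimal, self-contained sketch (not the actual Spark
code): it models the partition-stats check for `In(a, list)` with plain `Int`s instead of
`Expression`/`Literal`, and the names `ColumnStats`, `keepPartitionReduce` and
`keepPartitionFold` are made up for illustration. The fold-from-`false` variant is only my
reading of the suggestion, based on the `False` evaluation mentioned above.

```scala
object InFilterSketch {

  // Simplified stand-in for the column statistics returned by statsFor(a).
  final case class ColumnStats(lowerBound: Int, upperBound: Int)

  // Variant currently in the PR: map each literal to a bounds check and OR them.
  // Requires a non-empty IN list, since reduce throws on an empty collection.
  def keepPartitionReduce(stats: ColumnStats, list: Seq[Int]): Boolean =
    list.map(l => stats.lowerBound <= l && l <= stats.upperBound).reduce(_ || _)

  // Fold-from-false variant (my reading of the suggestion): an empty IN list is
  // handled for free, at the cost of one extra `false || ...` evaluation.
  def keepPartitionFold(stats: ColumnStats, list: Seq[Int]): Boolean =
    list.foldLeft(false)((acc, l) => acc || (stats.lowerBound <= l && l <= stats.upperBound))

  def main(args: Array[String]): Unit = {
    val stats = ColumnStats(lowerBound = 10, upperBound = 20)
    println(keepPartitionReduce(stats, Seq(5, 15))) // true: 15 falls inside the bounds
    println(keepPartitionFold(stats, Seq.empty))    // false: empty IN list prunes the partition
  }
}
```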
---