Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21882#discussion_r205511472

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala ---
@@ -183,6 +183,13 @@ case class InMemoryTableScanExec(

   private val stats = relation.partitionStatistics
   private def statsFor(a: Attribute) = stats.forAttribute(a)

+  // For some ColumnStats, for instance, ObjectColumnStats always has nulls for lower and upper
+  // bounds.
+  private def nullSafeEval(
+      attr: AttributeReference)(func: AttributeReference => Expression): Expression = {
+    attr.isNull || func(attr)
--- End diff --

This basically means turning off the filtering for complex types. Although that may not be a big deal, since we probably won't often have complex types here, can't we instead add the isNull filter only for complex types?
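To illustrate the suggestion: instead of wrapping every stats predicate in `attr.isNull || ...` (which makes the filter trivially true whenever the bounds are null, effectively disabling pruning), the guard could be applied only to attributes whose column stats cannot track bounds, i.e. complex types. The following is a minimal self-contained sketch, not Spark's actual Catalyst classes; `StatsPredicate`, `IsNull`, `Or`, and the toy `DataType` hierarchy are stand-ins for illustration only.

```scala
// Toy stand-ins for Catalyst types (hypothetical, for illustration).
sealed trait DataType
case object IntType extends DataType
case object ArrayType extends DataType // stands in for a complex type

sealed trait Expr
case class IsNull(attr: String) extends Expr
case class StatsPredicate(attr: String) extends Expr // the lower/upper-bound check
case class Or(left: Expr, right: Expr) extends Expr

// Complex types have ColumnStats (like ObjectColumnStats) with null bounds.
def isComplex(dt: DataType): Boolean = dt match {
  case ArrayType => true
  case _         => false
}

// Add the isNull guard only for complex types, so atomic types still get
// real partition pruning from their lower/upper bound statistics.
def buildFilter(attr: String, dt: DataType): Expr = {
  val base = StatsPredicate(attr)
  if (isComplex(dt)) Or(IsNull(attr), base) else base
}
```

Under this shape, `buildFilter("a", IntType)` stays a bare stats predicate, while only `buildFilter("b", ArrayType)` picks up the `IsNull` guard, matching the reviewer's proposal.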