nsivabalan commented on code in PR #17601:
URL: https://github.com/apache/hudi/pull/17601#discussion_r2744056120
##########
hudi-spark-datasource/hudi-spark3.4.x/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/Spark34LegacyHoodieParquetFileFormat.scala:
##########
@@ -500,4 +516,71 @@ object Spark34LegacyHoodieParquetFileFormat {
original.getBlocks
)
}
+
+ // Helper to replace filters on timestamp-millis columns with AlwaysTrue to avoid incorrect filter pushdown.
+ // This preserves compound filters (And/Or) so other parts can still be pushed down.
+ private def replaceTimestampMillisFiltersWithAlwaysTrue(filters: Seq[Filter],
Review Comment:
we should remove this as well
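
For reviewers skimming the diff, a minimal sketch of the rewrite this helper appears to perform, under stated assumptions: `relax` and `tsMillisCols` are illustrative names, not the PR's actual code, and only standard `org.apache.spark.sql.sources` filter types are used.

```scala
import org.apache.spark.sql.sources.{AlwaysTrue, And, Filter, Not, Or}

// Sketch only; names are assumptions, not the PR's implementation.
// Replacing a predicate with AlwaysTrue is a safe relaxation for pushdown:
// the scan may read more rows, never fewer.
def relax(filter: Filter, tsMillisCols: Set[String]): Filter = filter match {
  // Recurse into And/Or so unaffected legs can still be pushed down.
  case And(left, right) => And(relax(left, tsMillisCols), relax(right, tsMillisCols))
  case Or(left, right)  => Or(relax(left, tsMillisCols), relax(right, tsMillisCols))
  // Replace a Not wholesale: rewriting inside it would flip AlwaysTrue to "match nothing".
  case n: Not if n.references.exists(tsMillisCols.contains) => AlwaysTrue()
  // Neutralize any leaf predicate that references an affected column.
  case leaf if leaf.references.exists(tsMillisCols.contains) => AlwaysTrue()
  case other => other
}
```

Note that `And(AlwaysTrue(), p)` still lets `p` prune files, while `Or(AlwaysTrue(), p)` degrades to a full scan, which is the conservative behavior the code comment describes.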
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieFileIndex.scala:
##########
@@ -366,10 +365,34 @@ case class HoodieFileIndex(spark: SparkSession,
// threshold (of 100k records)
val shouldReadInMemory = columnStatsIndex.shouldReadInMemory(this, queryReferencedColumns)
+ // Identify timestamp-millis columns from the Avro schema to skip from filter translation
+ // (even if they're in the index, they may have been indexed before the fix and should not be used for filtering)
Review Comment:
And did we add UTs or functional tests (not end-to-end) directly against the data skipping layer?
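
To make the ask concrete, a hedged sketch of the kind of direct test this suggests, exercising only the filter-rewrite step (it reuses the hypothetical `relax` helper sketched above; no Spark session or end-to-end table setup involved):

```scala
import org.apache.spark.sql.sources.{AlwaysTrue, And, EqualTo, GreaterThan}
import org.scalatest.funsuite.AnyFunSuite

// Sketch only: verifies the timestamp-millis predicate is neutralized
// while the sibling predicate survives for data skipping.
class TimestampMillisFilterRewriteSuite extends AnyFunSuite {
  test("timestamp-millis filter becomes AlwaysTrue, sibling filter survives") {
    val input     = And(EqualTo("event_ts", 1700000000000L), GreaterThan("id", 5))
    val rewritten = relax(input, tsMillisCols = Set("event_ts"))
    assert(rewritten == And(AlwaysTrue(), GreaterThan("id", 5)))
  }
}
```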
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]