yihua commented on code in PR #18126:
URL: https://github.com/apache/hudi/pull/18126#discussion_r2824963879


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/SparkHoodieTableFileIndex.scala:
##########
@@ -222,10 +225,16 @@ class SparkHoodieTableFileIndex(spark: SparkSession,
   def listMatchingPartitionPaths(predicates: Seq[Expression]): Seq[PartitionPath] = {
     val resolve = spark.sessionState.analyzer.resolver
     val partitionColumnNames = getPartitionColumns
+    // Strip Spark's internal exprId suffix (e.g. #136) so nested_record#136 matches nested_record.level
+    def logicalRefName(ref: String): String = ref.replaceAll("#\\d+$", "")
     val partitionPruningPredicates = predicates.filter {
       _.references.map(_.name).forall { ref =>
-        // NOTE: We're leveraging Spark's resolver here to appropriately handle case-sensitivity
-        partitionColumnNames.exists(partCol => resolve(ref, partCol))
+        val logicalRef = logicalRefName(ref)
+        // NOTE: We're leveraging Spark's resolver here to appropriately handle case-sensitivity.
+        // For nested partition columns (e.g. nested_record.level), ref may be the struct root
+        // (e.g. nested_record#136); match when logicalRef equals partCol or is a prefix of partCol.
+        partitionColumnNames.exists(partCol =>
+          resolve(logicalRef, partCol) || partCol.startsWith(logicalRef + "."))

Review Comment:
   The resolved column object (instead of a `String`) should have both the column name and the ID.  Could we avoid custom parsing of the column reference?
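   A minimal self-contained sketch of the suggestion (this is not Hudi/Spark code; the `Attribute` class, `resolver` function, and `matchesPartitionColumn` helper are hypothetical stand-ins): if the reference is kept as a resolved attribute object, the name and the expression ID live in separate fields, the `#136` suffix only appears in the string rendering, and no regex stripping is needed.

   ```scala
   // Hypothetical stand-in for a resolved attribute: name and exprId are
   // separate fields; the "#136" suffix exists only in toString, not in `name`.
   case class Attribute(name: String, exprId: Long) {
     override def toString: String = s"$name#$exprId"
   }

   // Case-insensitive matcher standing in for the session resolver.
   val resolver: (String, String) => Boolean = (a, b) => a.equalsIgnoreCase(b)

   // Matches a predicate reference against partition columns, covering the
   // nested case where the reference is the struct root of a nested column.
   def matchesPartitionColumn(attr: Attribute,
                              partitionColumnNames: Seq[String]): Boolean =
     partitionColumnNames.exists { partCol =>
       resolver(attr.name, partCol) || partCol.startsWith(attr.name + ".")
     }

   val ref = Attribute("nested_record", 136)
   println(ref)                                                      // nested_record#136
   println(matchesPartitionColumn(ref, Seq("nested_record.level")))  // true, no parsing
   ```

   The point of the sketch: because `attr.name` never contains the ID suffix, the `logicalRefName` regex in the patch becomes unnecessary once the attribute object itself is threaded through instead of its string form.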


