olaky commented on code in PR #39408:
URL: https://github.com/apache/spark/pull/39408#discussion_r1063498040
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala:
##########
@@ -258,6 +258,24 @@ object FileSourceStrategy extends Strategy with PredicateHelper with Logging {
       val outputAttributes = readDataColumns ++ generatedMetadataColumns ++
         partitionColumns ++ constantMetadataColumns
+
+      // The metadata attribute references in the filters also have to be categorized as either
+      // constant or generated metadata attributes. Only data filters can contain metadata filters.
+      def categorizeFileSourceMetadataAttributesInFilters(
+          filters: Seq[Expression]): Seq[Expression] =
+        filters.map { filter =>
+          filter.transform {
+            case attr: AttributeReference if FileSourceMetadataAttribute.unapply(attr).isDefined =>
+              if (attr.dataType.asInstanceOf[StructType].fieldNames
+                  .forall(fieldName => constantMetadataColumns.exists(fieldName == _.name))) {
Review Comment:
That is a really good idea. I settled for something along those lines: I am rebinding the references to the flattened columns. This way the filters again reference columns that are actually part of the schema.