viirya commented on a change in pull request #29412:
URL: https://github.com/apache/spark/pull/29412#discussion_r470205227
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFiltersBase.scala
##########
@@ -67,18 +65,12 @@ trait OrcFiltersBase {
}
}
- val primitiveFields = getPrimitiveFields(schema.fields)
- if (caseSensitive) {
- primitiveFields.toMap
- } else {
-      // Don't consider ambiguity here, i.e. more than one field are matched in case insensitive
-      // mode, just skip pushdown for these fields, they will trigger Exception when reading,
-      // See: SPARK-25175.
- val dedupPrimitiveFields = primitiveFields
- .groupBy(_._1.toLowerCase(Locale.ROOT))
- .filter(_._2.size == 1)
- .mapValues(_.head._2)
- CaseInsensitiveMap(dedupPrimitiveFields)
- }
+ // Different with Parquet case, for case insensitive analysis, we will set
+    // `OrcConf.IS_SCHEMA_EVOLUTION_CASE_SENSITIVE`. So we don't need to worry about
Review comment:
The test was added in #29427. It seems the ORC library does not process pushed
predicates case-insensitively as expected.
That is, even when `OrcConf.IS_SCHEMA_EVOLUTION_CASE_SENSITIVE` is set to
false in the Spark ORC datasource, the column names in pushed predicates are
still matched case-sensitively.
This is shown in the added test.
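For context, the removed branch skipped pushdown for any field whose name becomes ambiguous once lower-cased (SPARK-25175). A minimal standalone sketch of that dedup logic, with hypothetical field/type pairs standing in for `schema.fields`:

```scala
import java.util.Locale

object DedupSketch {
  // In case-insensitive mode, fields whose lower-cased names collide are
  // excluded from pushdown entirely; only unambiguous names survive.
  def dedupPrimitiveFields(fields: Seq[(String, String)]): Map[String, String] =
    fields
      .groupBy { case (name, _) => name.toLowerCase(Locale.ROOT) }
      .filter { case (_, group) => group.size == 1 }
      .map { case (_, group) => group.head._1.toLowerCase(Locale.ROOT) -> group.head._2 }

  def main(args: Array[String]): Unit = {
    // "Name" and "NAME" collide case-insensitively, so both are dropped.
    val fields = Seq("id" -> "int", "Name" -> "string", "NAME" -> "string")
    println(dedupPrimitiveFields(fields))
  }
}
```

This is why the case-sensitive matching observed in the ORC reader matters: if Spark stops deduplicating on its side and the ORC library also matches pushed predicate columns case-sensitively, ambiguous or differently-cased columns would no longer be filtered correctly.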
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]