SaurabhChawla100 commented on a change in pull request #29045:
URL: https://github.com/apache/spark/pull/29045#discussion_r453171980
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFileFormat.scala
##########
@@ -181,10 +183,19 @@ class OrcFileFormat
val readerOptions = OrcFile.readerOptions(conf).filesystem(fs)
val requestedColIdsOrEmptyFile =
Utils.tryWithResource(OrcFile.createReader(filePath, readerOptions)) {
reader =>
+ // For an ORC file written by Hive there are no field names in the
+ // physical schema, so we need to send the entire dataSchema
+ // instead of the required schema.
+ val orcFieldNames = reader.getSchema.getFieldNames.asScala
+ if (orcFieldNames.forall(_.startsWith("_col"))) {
Review comment:
So this is for an ORC file written by Hive, where there are no field names in the
physical schema. In that case the columns have names like _col1, _col2, etc.
See this code for reference:
https://github.com/apache/spark/blob/84db660ebef4f9c543ab2709103c4542b407a829/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala#L133
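To illustrate the positional fallback, here is a minimal, self-contained sketch. The `HiveOrcColumnMapping` object and its `requestedColumnIds` helper are hypothetical names that only mirror the idea behind `OrcUtils.requestedColumnIds`; this is not the actual Spark implementation.

```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object HiveOrcColumnMapping {

  // Hypothetical helper mirroring the idea behind OrcUtils.requestedColumnIds:
  // resolve the requested logical columns to physical ORC column indices.
  def requestedColumnIds(
      physicalFieldNames: Seq[String],
      requiredSchema: StructType,
      dataSchema: StructType): Option[Array[Int]] = {
    if (physicalFieldNames.isEmpty) {
      // Empty physical schema: nothing to read from this file.
      None
    } else if (physicalFieldNames.forall(_.startsWith("_col"))) {
      // ORC file written by Hive: the physical names (_col0, _col1, ...) carry no
      // information, so each required column is resolved by its ordinal position
      // in the full dataSchema rather than by name.
      Some(requiredSchema.fieldNames.map(name => dataSchema.fieldIndex(name)))
    } else {
      // Field names are present in the file, so resolve by name.
      Some(requiredSchema.fieldNames.map(name => physicalFieldNames.indexOf(name)))
    }
  }

  def main(args: Array[String]): Unit = {
    val dataSchema = StructType(Seq(
      StructField("id", IntegerType),
      StructField("name", StringType),
      StructField("age", IntegerType)))
    val requiredSchema = StructType(Seq(StructField("age", IntegerType)))

    // Hive-written ORC file: physical column names are "_col0", "_col1", "_col2".
    // "age" is the third field of dataSchema, so the positional mapping yields index 2.
    println(requestedColumnIds(Seq("_col0", "_col1", "_col2"), requiredSchema, dataSchema)
      .map(_.mkString(",")))   // prints Some(2)
  }
}
```

The positional fallback is viable because Hive writes ORC columns in the same order as the table schema, so ordinal position is a workable substitute for the missing field names.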