dongjoon-hyun commented on a change in pull request #29045:
URL: https://github.com/apache/spark/pull/29045#discussion_r453199114



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFileFormat.scala
##########
@@ -181,10 +183,19 @@ class OrcFileFormat
       val readerOptions = OrcFile.readerOptions(conf).filesystem(fs)
       val requestedColIdsOrEmptyFile =
        Utils.tryWithResource(OrcFile.createReader(filePath, readerOptions)) { reader =>
+          // ORC files written by Hive carry no field names in the
+          // physical schema (columns are named _col0, _col1, ...), so we
+          // need to send the entire dataSchema instead of the required schema
+          val orcFieldNames = reader.getSchema.getFieldNames.asScala
+          if (orcFieldNames.forall(_.startsWith("_col"))) {
+            resultSchemaString = OrcUtils.orcTypeDescriptionString(actualSchema)
+          }

Review comment:
       Do you think we can have the above logic inside the `OrcUtils.requestedColumnIds` function instead of this file, @SaurabhChawla100?
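
       The check from the diff could be factored out roughly as follows. This is only a sketch: the object name `OrcSchemaCheck` and the helper `isHiveWrittenSchema` are hypothetical, not part of Spark's actual `OrcUtils` API; the real change would fold the predicate into `OrcUtils.requestedColumnIds`.

       ```scala
       // Sketch only: a standalone predicate mirroring the check in the diff.
       // Hive writes ORC files with positional column names (_col0, _col1, ...),
       // so such field names cannot be matched against the requested schema
       // and the caller must fall back to the full dataSchema.
       object OrcSchemaCheck {
         // Hypothetical helper; names and signature are assumptions.
         def isHiveWrittenSchema(orcFieldNames: Seq[String]): Boolean =
           orcFieldNames.nonEmpty && orcFieldNames.forall(_.startsWith("_col"))
       }
       ```

       For example, `isHiveWrittenSchema(Seq("_col0", "_col1"))` would be true, while `isHiveWrittenSchema(Seq("id", "name"))` would be false; note the added `nonEmpty` guard, since `forall` on an empty field-name list is vacuously true.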




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
