beliefer commented on a change in pull request #34984:
URL: https://github.com/apache/spark/pull/34984#discussion_r782720716
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
##########
@@ -279,22 +275,40 @@ object OrcUtils extends Logging {
}
/**
-   * Given a `StructType` object, this methods converts it to corresponding string representation
-   * in ORC.
+   * Given two `StructType` objects, this method converts them to the corresponding string
+   * representation in ORC. The second `StructType` is used to replace `TimestampNTZType` with
+   * `LongType` in the result schema string when reading `TimestampNTZ` as `TimestampLTZ`.
*/
-  def orcTypeDescriptionString(dt: DataType): String = dt match {
-    case s: StructType =>
+  def getOrcSchemaString(
+      dt: DataType, orcDt: Option[DataType] = None): String = (dt, orcDt) match {
+    case (s1: StructType, Some(s2: StructType)) =>
+      val fieldTypes = s1.fields.map { f =>
+        val idx = s2.fieldNames.indexWhere(caseSensitiveResolution(_, f.name))
Review comment:
`....map { f => ....indexWhere }` is widely used in Spark. I think it could simplify the code.
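
For illustration only, here is a minimal, dependency-free Scala sketch of the `.map { f => ... indexWhere ... }` lookup pattern referred to above. `Field`, `readSchema`, and `fileSchema` are hypothetical stand-ins for Spark's `StructField`/`StructType`, and the sketch is not the code from this PR.

```scala
// Sketch of looking up each requested field's position in a file schema by name,
// using the `.map { f => ... indexWhere ... }` idiom discussed in the comment above.
object MapIndexWhereSketch {
  final case class Field(name: String, dataType: String)

  // Case-sensitive name comparison, standing in for Spark's `caseSensitiveResolution`.
  private def caseSensitiveResolution(a: String, b: String): Boolean = a == b

  // For each requested field, return its index in the file schema, or -1 if absent.
  def resolveFieldIndexes(readSchema: Seq[Field], fileSchema: Seq[Field]): Seq[Int] =
    readSchema.map { f =>
      fileSchema.indexWhere(ff => caseSensitiveResolution(ff.name, f.name))
    }

  def main(args: Array[String]): Unit = {
    val fileSchema = Seq(Field("id", "bigint"), Field("ts", "timestamp"))
    val readSchema = Seq(Field("ts", "timestamp_ntz"), Field("missing", "string"))
    println(resolveFieldIndexes(readSchema, fileSchema)) // prints List(1, -1)
  }
}
```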
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]