cloud-fan commented on a change in pull request #34984:
URL: https://github.com/apache/spark/pull/34984#discussion_r823685909
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcUtils.scala
##########
@@ -280,22 +276,41 @@ object OrcUtils extends Logging {
}
 /**
- * Given a `StructType` object, this methods converts it to corresponding string representation
- * in ORC.
+ * Given two `StructType` objects, this method converts them to the corresponding string
+ * representation in ORC. The second `StructType` is used to change `TimestampNTZType` to
+ * `LongType` in the result schema string when reading `TimestampNTZ` as `TimestampLTZ`.
  */
- def orcTypeDescriptionString(dt: DataType): String = dt match {
-   case s: StructType =>
+ def getOrcSchemaString(
+     dt: DataType, orcDtOpt: Option[DataType] = None): String = (dt, orcDtOpt) match {
+   case (s1: StructType, Some(s2: StructType)) =>
+     val orcDataTypeMap = s2.groupBy(_.name)
+     val fieldTypes = s1.fields.map { f =>
+       if (orcDataTypeMap.contains(f.name)) {
+         val orcFields = orcDataTypeMap(f.name)
Review comment:
This looks overly complicated, as we need to consider column reordering,
which is part of schema evolution that should be taken care of by ORC today.
How about we don't allow reading NTZ as LTZ for now? We can support it later
when Spark controls the schema evolution for ORC. Then the code can be very
simple: just replace TimestampNTZType with LongType in the catalyst schema.
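The replacement the reviewer suggests could be sketched as a simple recursive rewrite over the schema. The sketch below uses a minimal hypothetical stand-in for Spark's `org.apache.spark.sql.types` ADT (real code would pattern-match on Spark's own `DataType` classes), just to show the shape of the traversal:

```scala
// Minimal stand-ins for Spark's catalyst types (hypothetical, for illustration;
// real code would import org.apache.spark.sql.types._).
sealed trait DataType
case object LongType extends DataType
case object TimestampNTZType extends DataType
case class StructField(name: String, dataType: DataType)
case class StructType(fields: Seq[StructField]) extends DataType
case class ArrayType(elementType: DataType) extends DataType
case class MapType(keyType: DataType, valueType: DataType) extends DataType

// Recursively replace TimestampNTZType with LongType everywhere in the schema,
// descending into structs, arrays, and maps.
def replaceNTZWithLong(dt: DataType): DataType = dt match {
  case StructType(fields) =>
    StructType(fields.map(f => f.copy(dataType = replaceNTZWithLong(f.dataType))))
  case ArrayType(et)      => ArrayType(replaceNTZWithLong(et))
  case MapType(kt, vt)    => MapType(replaceNTZWithLong(kt), replaceNTZWithLong(vt))
  case TimestampNTZType   => LongType   // the actual substitution
  case other              => other
}
```

This avoids any per-field name matching against the ORC file schema, leaving column reordering and other schema evolution entirely to ORC.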
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]