dongjoon-hyun commented on a change in pull request #31133:
URL: https://github.com/apache/spark/pull/31133#discussion_r570554343
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -388,6 +393,7 @@ private[hive] object HiveTableUtil {
private[hive] object DeserializerLock
private[hive] object HadoopTableReader extends HiveInspectors with Logging {
+
Review comment:
Could you remove this to reduce the diff?
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -239,7 +240,6 @@ class HadoopTableReader(
fillPartitionKeys(partValues, mutableRow)
val tableProperties = tableDesc.getProperties
-
Review comment:
Let's not remove this line~
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -248,11 +248,16 @@ class HadoopTableReader(
     // SPARK-13709: For SerDes like AvroSerDe, some essential information (e.g. Avro schema
     // information) may be defined in table properties. Here we should merge table properties
     // and partition properties before initializing the deserializer. Note that partition
-    // properties take a higher priority here. For example, a partition may have a different
-    // SerDe as the one defined in table properties.
+    // properties take a higher priority here except for the Avro table properties
+    // to support schema evolution: in that case the properties given at table level will
+    // be used (for details please check SPARK-26836).
+    // For example, a partition may have a different SerDe as the one defined in table
+    // properties.
     val props = new Properties(tableProperties)
-    partProps.asScala.foreach {
-      case (key, value) => props.setProperty(key, value)
+    partProps.asScala.filterNot { case (k, _) =>
+      k == AvroTableProperties.SCHEMA_LITERAL.getPropName() && tableProperties.containsKey(k)
Review comment:
`&& tableProperties.containsKey(k)` looks risky to me and beyond this
PR's test coverage. According to the test case, it looks like `k ==
AvroTableProperties.SCHEMA_LITERAL.getPropName()` is enough to pass the test
case. Did I understand correctly?
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -248,11 +248,16 @@ class HadoopTableReader(
     // SPARK-13709: For SerDes like AvroSerDe, some essential information (e.g. Avro schema
     // information) may be defined in table properties. Here we should merge table properties
     // and partition properties before initializing the deserializer. Note that partition
-    // properties take a higher priority here. For example, a partition may have a different
-    // SerDe as the one defined in table properties.
+    // properties take a higher priority here except for the Avro table properties
+    // to support schema evolution: in that case the properties given at table level will
+    // be used (for details please check SPARK-26836).
+    // For example, a partition may have a different SerDe as the one defined in table
+    // properties.
     val props = new Properties(tableProperties)
-    partProps.asScala.foreach {
-      case (key, value) => props.setProperty(key, value)
+    partProps.asScala.filterNot { case (k, _) =>
+      k == AvroTableProperties.SCHEMA_LITERAL.getPropName() && tableProperties.containsKey(k)
Review comment:
~`&& tableProperties.containsKey(k)` looks risky to me and beyond this
PR's test coverage. According to the test case, it looks like `k ==
AvroTableProperties.SCHEMA_LITERAL.getPropName()` is enough to pass the test
case. Did I understand correctly?~ My bad. Never mind.
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -248,11 +249,16 @@ class HadoopTableReader(
     // SPARK-13709: For SerDes like AvroSerDe, some essential information (e.g. Avro schema
     // information) may be defined in table properties. Here we should merge table properties
     // and partition properties before initializing the deserializer. Note that partition
-    // properties take a higher priority here. For example, a partition may have a different
-    // SerDe as the one defined in table properties.
+    // properties take a higher priority here except for the Avro table properties
+    // to support schema evolution: in that case the properties given at table level will
+    // be used (for details please check SPARK-26836).
+    // For example, a partition may have a different SerDe as the one defined in table
+    // properties.
     val props = new Properties(tableProperties)
-    partProps.asScala.foreach {
-      case (key, value) => props.setProperty(key, value)
+    partProps.asScala.filterNot { case (k, _) =>
+      k == AvroTableProperties.SCHEMA_LITERAL.getPropName() && tableProperties.containsKey(k)
+    }.foreach { case (key, value) =>
+      props.setProperty(key, value)
Review comment:
Let's revert to the original form, like the following. It will make your PR's content clearer.
```scala
- }.foreach { case (key, value) =>
- props.setProperty(key, value)
+ }.foreach {
+ case (key, value) => props.setProperty(key, value)
```
##########
File path:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##########
@@ -1883,6 +1883,58 @@ class HiveDDLSuite
}
}
+ test("SPARK-26836: support Avro schema evolution") {
Review comment:
Could you add an opposite test case which covers column-removal schema evolution, please?
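For illustration, perhaps something along these lines (just a rough sketch of the requested test, not code from this PR; the table name, the DDL shape, and the expected answer are my assumptions, and whether this exact DDL exercises the new code path would need to be verified):
```scala
test("SPARK-26836: support Avro schema evolution - hide a column") {
  // Wide schema the partition data is written with, and a narrowed
  // table-level schema that later hides the `extra` column.
  val wide =
    """{"type":"record","name":"t","fields":[
      |{"name":"id","type":"int"},{"name":"extra","type":"string"}]}""".stripMargin
  val narrow =
    """{"type":"record","name":"t","fields":[{"name":"id","type":"int"}]}""".stripMargin
  withTable("evolved") {
    sql(
      s"""CREATE TABLE evolved PARTITIONED BY (p INT)
         |ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
         |STORED AS
         |  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
         |  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
         |TBLPROPERTIES ('avro.schema.literal' = '$wide')""".stripMargin)
    sql("INSERT INTO evolved PARTITION (p = 1) VALUES (1, 'to be hidden')")
    // Narrow the table-level schema; the partition keeps the wide literal.
    sql(s"ALTER TABLE evolved SET TBLPROPERTIES ('avro.schema.literal' = '$narrow')")
    // With table-level properties taking priority, `extra` is no longer read.
    checkAnswer(sql("SELECT id, p FROM evolved"), Row(1, 1))
  }
}
```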
##########
File path:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##########
@@ -1883,6 +1883,58 @@ class HiveDDLSuite
}
}
+ test("SPARK-26836: support Avro schema evolution") {
Review comment:
`Avro` is known to support (1), (2), and (3) of the following four.
```
* 1. Add a column
* 2. Hide a column
* 3. Change a column position
* 4. Change a column type (Upcast)
```
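For reference, a self-contained sketch of kind (1) using the plain Avro Java API (independent of Spark and of this PR): a record written with the old schema is read back through an evolved reader schema that adds a defaulted column. Kinds (2) and (3) are resolved by the same writer/reader-schema mechanism.
```scala
import java.io.ByteArrayOutputStream
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericDatumReader, GenericDatumWriter, GenericRecord}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}

object AvroEvolutionSketch {
  def main(args: Array[String]): Unit = {
    // Old (writer) schema: a single `id` column.
    val writerSchema = new Schema.Parser().parse(
      """{"type":"record","name":"r","fields":[{"name":"id","type":"int"}]}""")
    // Evolved (reader) schema: adds a nullable `value` column with a default.
    val readerSchema = new Schema.Parser().parse(
      """{"type":"record","name":"r","fields":[
        |{"name":"id","type":"int"},
        |{"name":"value","type":["null","string"],"default":null}]}""".stripMargin)

    // Write a record with the old schema.
    val record = new GenericData.Record(writerSchema)
    record.put("id", 1)
    val out = new ByteArrayOutputStream()
    val encoder = EncoderFactory.get().binaryEncoder(out, null)
    new GenericDatumWriter[GenericRecord](writerSchema).write(record, encoder)
    encoder.flush()

    // Read it back with the evolved schema: the new column gets its default.
    val decoder = DecoderFactory.get().binaryDecoder(out.toByteArray, null)
    val evolved =
      new GenericDatumReader[GenericRecord](writerSchema, readerSchema).read(null, decoder)
    println(evolved) // {"id": 1, "value": null}
  }
}
```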
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -248,11 +249,16 @@ class HadoopTableReader(
     // SPARK-13709: For SerDes like AvroSerDe, some essential information (e.g. Avro schema
     // information) may be defined in table properties. Here we should merge table properties
     // and partition properties before initializing the deserializer. Note that partition
-    // properties take a higher priority here. For example, a partition may have a different
-    // SerDe as the one defined in table properties.
+    // properties take a higher priority here except for the Avro table properties
+    // to support schema evolution: in that case the properties given at table level will
+    // be used (for details please check SPARK-26836).
+    // For example, a partition may have a different SerDe as the one defined in table
+    // properties.
     val props = new Properties(tableProperties)
-    partProps.asScala.foreach {
-      case (key, value) => props.setProperty(key, value)
+    partProps.asScala.filterNot { case (k, _) =>
+      k == AvroTableProperties.SCHEMA_LITERAL.getPropName() && tableProperties.containsKey(k)
+    }.foreach { case (key, value) =>
+      props.setProperty(key, value)
Review comment:
Yes, that one.
##########
File path:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##########
@@ -1883,6 +1883,58 @@ class HiveDDLSuite
}
}
+ test("SPARK-26836: support Avro schema evolution") {
Review comment:
Sure, take your time~
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]