attilapiros commented on a change in pull request #31133:
URL: https://github.com/apache/spark/pull/31133#discussion_r570304183
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -388,6 +394,9 @@ private[hive] object HiveTableUtil {
private[hive] object DeserializerLock
private[hive] object HadoopTableReader extends HiveInspectors with Logging {
+
+ val avroTableProperties = AvroTableProperties.values().map(_.getPropName()).toSet
Review comment:
@dongjoon-hyun I can offer a compromise solution where
`AvroTableProperties` is not used at all. Instead, the Avro property
names would be detected with a `startsWith("avro.")` check.
Would that be OK with you?
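Roughly this (just an untested sketch of the idea, reusing the names from the diff above):
```
partProps.asScala.filterNot { case (k, _) =>
  k.startsWith("avro.") && tableProperties.containsKey(k)
}.foreach { case (key, value) =>
  props.setProperty(key, value)
}
```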
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -388,6 +394,9 @@ private[hive] object HiveTableUtil {
private[hive] object DeserializerLock
private[hive] object HadoopTableReader extends HiveInspectors with Logging {
+
+ val avroTableProperties = AvroTableProperties.values().map(_.getPropName()).toSet
Review comment:
@xkrogen thanks for the correct pattern: `startsWith("avro.schema.")`!
I will use that one.
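So the `filterNot` predicate would become something like (sketch):
```
k.startsWith("avro.schema.") && tableProperties.containsKey(k)
```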
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -388,6 +394,9 @@ private[hive] object HiveTableUtil {
private[hive] object DeserializerLock
private[hive] object HadoopTableReader extends HiveInspectors with Logging {
+
+ val avroTableProperties = AvroTableProperties.values().map(_.getPropName()).toSet
Review comment:
Sorry, I read this message only after my git push, but sure, I will check that.
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -388,6 +394,9 @@ private[hive] object HiveTableUtil {
private[hive] object DeserializerLock
private[hive] object HadoopTableReader extends HiveInspectors with Logging {
+
+ val avroTableProperties = AvroTableProperties.values().map(_.getPropName()).toSet
Review comment:
Yes, but thanks for all your input.
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -248,11 +249,16 @@ class HadoopTableReader(
// SPARK-13709: For SerDes like AvroSerDe, some essential information (e.g. Avro schema
// information) may be defined in table properties. Here we should merge table properties
// and partition properties before initializing the deserializer. Note that partition
- // properties take a higher priority here. For example, a partition may have a different
- // SerDe as the one defined in table properties.
+ // properties take a higher priority here except for the Avro table properties
+ // to support schema evolution: in that case the properties given at table level will
+ // be used (for details please check SPARK-26836).
+ // For example, a partition may have a different SerDe as the one defined in table
+ // properties.
val props = new Properties(tableProperties)
- partProps.asScala.foreach {
- case (key, value) => props.setProperty(key, value)
+ partProps.asScala.filterNot { case (k, _) =>
+ k == AvroTableProperties.SCHEMA_LITERAL.getPropName() && tableProperties.containsKey(k)
+ }.foreach { case (key, value) =>
+ props.setProperty(key, value)
Review comment:
Even though, two lines above, the other style looks good for staying
under the 100-character line limit? So do you prefer this?
```
partProps.asScala.filterNot { case (k, _) =>
  k == AvroTableProperties.SCHEMA_LITERAL.getPropName() &&
    tableProperties.containsKey(k)
}.foreach {
  case (key, value) => props.setProperty(key, value)
}
```
##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala
##########
@@ -248,11 +249,16 @@ class HadoopTableReader(
// SPARK-13709: For SerDes like AvroSerDe, some essential information (e.g. Avro schema
// information) may be defined in table properties. Here we should merge table properties
// and partition properties before initializing the deserializer. Note that partition
- // properties take a higher priority here. For example, a partition may have a different
- // SerDe as the one defined in table properties.
+ // properties take a higher priority here except for the Avro table properties
+ // to support schema evolution: in that case the properties given at table level will
+ // be used (for details please check SPARK-26836).
+ // For example, a partition may have a different SerDe as the one defined in table
+ // properties.
val props = new Properties(tableProperties)
- partProps.asScala.foreach {
- case (key, value) => props.setProperty(key, value)
+ partProps.asScala.filterNot { case (k, _) =>
+ k == AvroTableProperties.SCHEMA_LITERAL.getPropName() && tableProperties.containsKey(k)
+ }.foreach { case (key, value) =>
+ props.setProperty(key, value)
Review comment:
I have chosen the consistent style over keeping the old one. Or should I
modify the `filterNot` body too?
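For reference, restyling the `filterNot` body the same way would look like this (sketch):
```
partProps.asScala.filterNot {
  case (k, _) =>
    k == AvroTableProperties.SCHEMA_LITERAL.getPropName() &&
      tableProperties.containsKey(k)
}.foreach {
  case (key, value) => props.setProperty(key, value)
}
```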
##########
File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##########
@@ -1883,6 +1883,58 @@ class HiveDDLSuite
}
}
+ test("SPARK-26836: support Avro schema evolution") {
Review comment:
It's 22:53 here, so the rest of the requirements will remain for later.
##########
File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
##########
@@ -1883,6 +1883,58 @@ class HiveDDLSuite
}
}
+ test("SPARK-26836: support Avro schema evolution") {
Review comment:
I have added a new unit test: `SPARK-26836: support Avro schema evolution (remove column)`.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]