Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/14537#discussion_r75904621
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -237,21 +237,26 @@ private[hive] class HiveMetastoreCatalog(sparkSession: SparkSession) extends Log
           new Path(metastoreRelation.catalogTable.storage.locationUri.get),
           partitionSpec)
-        val inferredSchema = if (fileType.equals("parquet")) {
-          val inferredSchema =
-            defaultSource.inferSchema(sparkSession, options, fileCatalog.allFiles())
-          inferredSchema.map { inferred =>
-            ParquetFileFormat.mergeMetastoreParquetSchema(metastoreSchema, inferred)
-          }.getOrElse(metastoreSchema)
-        } else {
-          defaultSource.inferSchema(sparkSession, options, fileCatalog.allFiles()).get
+        val inferredSchema =
+          defaultSource.inferSchema(sparkSession, options, fileCatalog.allFiles())
+        val schema = fileType match {
+          case "parquet" =>
+            // For Parquet, get correct schema by merging Metastore schema data types
+            // and Parquet schema field names.
+            inferredSchema.map { schema =>
+              ParquetFileFormat.mergeMetastoreParquetSchema(metastoreSchema, schema)
+            }.getOrElse(metastoreSchema)
+          case "orc" =>
+            inferredSchema.getOrElse(metastoreSchema)
+          case _ =>
+            inferredSchema.get
--- End diff --
IMHO returning `null` here would be worse than `inferredSchema.get`. Nobody
likes debugging a `NullPointerException`.
If we're going to assert that `fileType` must be either `parquet` or `orc`,
I think throwing an exception like

    throw new RuntimeException(s"Cannot convert a $fileType to a HadoopFsRelation")

here would be most appropriate.
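
To make the suggestion concrete, a minimal sketch of what the `match` from
this diff could look like with that exception in the default case (the
exception type and message wording are just my suggestion, and `other` is a
hypothetical binding name, not something in the PR):

    val schema = fileType match {
      case "parquet" =>
        // For Parquet, merge Metastore data types with Parquet field names.
        inferredSchema.map { schema =>
          ParquetFileFormat.mergeMetastoreParquetSchema(metastoreSchema, schema)
        }.getOrElse(metastoreSchema)
      case "orc" =>
        inferredSchema.getOrElse(metastoreSchema)
      case other =>
        // Fail fast with a descriptive error instead of a bare `.get`,
        // which would surface as an opaque NoSuchElementException.
        throw new RuntimeException(s"Cannot convert a $other to a HadoopFsRelation")
    }

That way an unexpected `fileType` fails with a message naming the offending
value rather than a `None.get` stack trace.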