Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21757#discussion_r202248225
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
---
@@ -254,13 +254,15 @@ class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]
    override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
      case i @ InsertIntoTable(UnresolvedCatalogRelation(tableMeta), _, _, _, _)
-         if DDLUtils.isDatasourceTable(tableMeta) =>
+         if DDLUtils.isDatasourceTable(tableMeta) &&
+           DDLUtils.convertSchema(tableMeta, sparkSession) =>
--- End diff ---
I do not think this is the right fix. If the original table is a native
data source table, we will always use our Parquet/ORC reader instead of
the Hive SerDe.
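
To illustrate the distinction the comment relies on, here is a minimal,
self-contained sketch of the kind of provider check that
`DDLUtils.isDatasourceTable` performs. The names `CatalogTableSketch` and
`DDLUtilsSketch` are hypothetical stand-ins, not Spark's actual classes;
the real logic lives in `org.apache.spark.sql.execution.command.DDLUtils`
and may differ in detail:

```scala
// Hypothetical, simplified model of a catalog table entry: only the
// "provider" field matters for this check.
case class CatalogTableSketch(provider: Option[String])

object DDLUtilsSketch {
  // A table counts as a native data source table when it has a provider
  // and that provider is not "hive". Native tables are read with Spark's
  // built-in Parquet/ORC readers rather than the Hive SerDe path.
  def isDatasourceTable(table: CatalogTableSketch): Boolean =
    table.provider.exists(p => !p.equalsIgnoreCase("hive"))
}
```

Under this sketch, a `parquet` table always takes the native reader path,
which is why gating the rule on an extra schema-conversion check would
change behavior for tables that were native to begin with.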
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]