Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21757#discussion_r202416374
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
---
@@ -254,13 +254,15 @@ class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]
  override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
    case i @ InsertIntoTable(UnresolvedCatalogRelation(tableMeta), _, _, _, _)
-      if DDLUtils.isDatasourceTable(tableMeta) =>
+      if DDLUtils.isDatasourceTable(tableMeta) &&
+        DDLUtils.convertSchema(tableMeta, sparkSession) =>
--- End diff ---
If you create a new table using `format("parquet")`, it will be a
data source table. We always use the native reader/writer to read from and
write to such a table.
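For context, a minimal sketch of what "creating a data source table" looks like from the user side; the session setup and the table name `example_parquet_table` are illustrative assumptions, not part of this PR:

```scala
import org.apache.spark.sql.SparkSession

object DataSourceTableExample {
  def main(args: Array[String]): Unit = {
    // Illustrative local session; any existing SparkSession behaves the same.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("data-source-table-sketch")
      .getOrCreate()

    import spark.implicits._

    // Creating a table through the DataFrameWriter with an explicit format
    // yields a *data source* table (DDLUtils.isDatasourceTable is true for
    // its catalog entry), so Spark's native Parquet reader/writer is used
    // rather than a Hive SerDe.
    Seq((1, "a"), (2, "b")).toDF("id", "value")
      .write
      .format("parquet")
      .saveAsTable("example_parquet_table") // hypothetical table name

    // Inserts into this table also go through the native writer, which is
    // the path the guard in the diff above applies to.
    spark.sql("INSERT INTO example_parquet_table VALUES (3, 'c')")
  }
}
```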
---