Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17758#discussion_r124221270
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -185,8 +183,21 @@ case class DataSource(
}
SchemaUtils.checkColumnNameDuplication(
-      (dataSchema ++ partitionSchema).map(_.name), "in the data schema and the partition schema",
-      sparkSession.sessionState.conf.caseSensitiveAnalysis)
+      dataSchema.map(_.name), "in the data schema", equality)
+    SchemaUtils.checkColumnNameDuplication(
+      partitionSchema.map(_.name), "in the partition schema", equality)
+
+    // We just print a warning message if the data schema and the partition schema have
+    // duplicate columns. This is because we allowed users to do so in previous Spark
+    // releases and we have existing tests for those cases (e.g., `ParquetHadoopFsRelationSuite`).
+    // See SPARK-18108 and SPARK-21144 for related discussions.
+    try {
+      SchemaUtils.checkColumnNameDuplication(
--- End diff --
shall we put this check in the constructor of `DataSource`? so it works for
both read and write paths
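
To make the discussion concrete, here is a minimal, self-contained sketch of what a duplicate-column-name check driven by a case-sensitivity-aware `equality` resolver might look like. This is a simplified stand-in, not Spark's actual `SchemaUtils.checkColumnNameDuplication` implementation; the object and method names below are illustrative assumptions:

```scala
// Hedged sketch: a stand-in for a resolver-driven duplicate-name check,
// illustrating how "equality" controls case sensitivity. Not Spark's real code.
object DuplicateCheckSketch {
  // Case-sensitive resolution compares names exactly.
  val caseSensitiveResolution: (String, String) => Boolean = _ == _
  // Case-insensitive resolution treats "a" and "A" as the same column.
  val caseInsensitiveResolution: (String, String) => Boolean =
    (a, b) => a.equalsIgnoreCase(b)

  def checkColumnNameDuplication(
      names: Seq[String],
      context: String,
      resolver: (String, String) => Boolean): Unit = {
    // Simple O(n^2) pairwise scan under the given resolver.
    val dups = names.combinations(2).filter {
      case Seq(a, b) => resolver(a, b)
    }.toList
    if (dups.nonEmpty) {
      throw new IllegalArgumentException(
        s"Found duplicate column(s) $context: ${dups.map(_.head).mkString(", ")}")
    }
  }
}
```

Under this sketch, `Seq("a", "A")` passes the check with `caseSensitiveResolution` but fails with `caseInsensitiveResolution`, which is why checking the data schema and partition schema separately (and only warning on cross-schema duplicates) changes observable behavior depending on the session's case-sensitivity setting.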