GitHub user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17758#discussion_r122911093
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala ---
    @@ -62,13 +63,8 @@ case class InsertIntoHadoopFsRelationCommand(
         assert(children.length == 1)
     
         // Most formats don't do well with duplicate columns, so lets not allow that
    -    if (query.schema.fieldNames.length != query.schema.fieldNames.distinct.length) {
    -      val duplicateColumns = query.schema.fieldNames.groupBy(identity).collect {
    -        case (x, ys) if ys.length > 1 => "\"" + x + "\""
    -      }.mkString(", ")
    -      throw new AnalysisException(s"Duplicate column(s): $duplicateColumns found, " +
    -        "cannot save to file.")
    -    }
    +    SchemaUtils.checkSchemaColumnNameDuplication(
    +      query.schema, "the query", sparkSession.sessionState.conf.caseSensitiveAnalysis)
    --- End diff --
    
    I updated the message to read like `when inserting into $outputPath`: https://github.com/apache/spark/pull/17758/commits/7ab8c4913b5645ef82275d0c3f1ae3ce76a16302#diff-5d2ebf4e9ca5a990136b276859769289R1126
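    For reference, here is a minimal, self-contained sketch of the duplicate-column check that the new call centralizes. The helper name `checkNoDuplicateColumns` and its internals are illustrative assumptions, not the actual `SchemaUtils.checkSchemaColumnNameDuplication` implementation; only the observable behavior (reject duplicate column names, honoring case sensitivity, with a message naming the context) follows the diff and the discussion above.
    
    ```scala
    import org.apache.spark.sql.AnalysisException
    import org.apache.spark.sql.types.{StringType, StructField, StructType}
    
    // Illustrative stand-in for SchemaUtils.checkSchemaColumnNameDuplication.
    // Case-insensitive analysis is modeled by lowercasing names before comparing.
    def checkNoDuplicateColumns(
        schema: StructType,
        colType: String,
        caseSensitiveAnalysis: Boolean): Unit = {
      val names =
        if (caseSensitiveAnalysis) schema.fieldNames.toSeq
        else schema.fieldNames.toSeq.map(_.toLowerCase)
      if (names.distinct.length != names.length) {
        val duplicates = names.groupBy(identity).collect {
          case (name, occurrences) if occurrences.length > 1 => "\"" + name + "\""
        }.mkString(", ")
        // Per the comment above, the PR rewords colType to read like
        // s"when inserting into $outputPath".
        throw new AnalysisException(s"Found duplicate column(s) in $colType: $duplicates")
      }
    }
    
    // Example: fails under case-insensitive analysis because "a" and "A" collide.
    val schema = StructType(Seq(
      StructField("a", StringType),
      StructField("A", StringType)))
    checkNoDuplicateColumns(schema, "the query", caseSensitiveAnalysis = false)
    ```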

