Github user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19124#discussion_r137390245
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
    @@ -201,13 +201,14 @@ case class AlterTableAddColumnsCommand(
     
         // make sure any partition columns are at the end of the fields
         val reorderedSchema = catalogTable.dataSchema ++ columns ++ catalogTable.partitionSchema
    +    val newSchema = catalogTable.schema.copy(fields = reorderedSchema.toArray)
     
         SchemaUtils.checkColumnNameDuplication(
           reorderedSchema.map(_.name), "in the table definition of " + table.identifier,
           conf.caseSensitiveAnalysis)
    +    DDLUtils.checkDataSchemaFieldNames(catalogTable.copy(schema = newSchema))
    --- End diff --
    
    It's okay. Inside `checkDataSchemaFieldNames`, we only use `table.dataSchema`, like the following:
    ```
    ParquetSchemaConverter.checkFieldNames(table.dataSchema)
    ```
    
    For partition columns, we have been allowing special characters.
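
    To illustrate the point, here is a minimal self-contained sketch (not Spark's actual classes; `Field`, `Table`, and the invalid-character set mirroring Parquet's restriction are simplified stand-ins) showing why a validation that only walks `table.dataSchema` never rejects partition column names:
    ```scala
    object FieldNameCheckSketch {
      case class Field(name: String)
      case class Table(dataSchema: Seq[Field], partitionSchema: Seq[Field])

      // Sketch in the spirit of ParquetSchemaConverter.checkFieldNames:
      // reject field names containing characters Parquet disallows.
      def checkFieldNames(fields: Seq[Field]): Unit =
        fields.foreach { f =>
          require(!f.name.exists(c => " ,;{}()\n\t=".contains(c)),
            s"""Attribute name "${f.name}" contains invalid character(s)""")
        }

      // Like checkDataSchemaFieldNames: only the data schema is validated,
      // so partition column names are never inspected here.
      def checkDataSchemaFieldNames(table: Table): Unit =
        checkFieldNames(table.dataSchema)

      def main(args: Array[String]): Unit = {
        val table = Table(
          dataSchema = Seq(Field("id"), Field("value")),
          partitionSchema = Seq(Field("part col"))) // space allowed here
        checkDataSchemaFieldNames(table) // passes: partition names skipped
        println("ok")
      }
    }
    ```
    The same name with a space would fail if it appeared in `dataSchema`, which is why only moving it into the data schema (not partitioning by it) triggers the check.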


---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
