Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19124#discussion_r137383267
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -201,13 +201,14 @@ case class AlterTableAddColumnsCommand(
// make sure any partition columns are at the end of the fields
      val reorderedSchema = catalogTable.dataSchema ++ columns ++ catalogTable.partitionSchema
+     val newSchema = catalogTable.schema.copy(fields = reorderedSchema.toArray)
      SchemaUtils.checkColumnNameDuplication(
        reorderedSchema.map(_.name), "in the table definition of " + table.identifier,
        conf.caseSensitiveAnalysis)
+     DDLUtils.checkDataSchemaFieldNames(catalogTable.copy(schema = newSchema))
--- End diff --
`newSchema` also contains the partition schema. What about the partition schema? Do we have the same limits on it?
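
To illustrate the concern, here is a minimal, self-contained sketch (not code from this PR; the names `dataSchema`, `addedColumns`, `partitionSchema`, and the object name are illustrative). It mirrors the diff's concatenation order and shows that the resulting `newSchema` carries the partition columns, so any field-name restriction enforced via the new check would also need a decision for the partition columns:

```scala
import org.apache.spark.sql.types._

object NewSchemaContainsPartitionColumns {
  def main(args: Array[String]): Unit = {
    // Hypothetical schemas standing in for catalogTable.dataSchema,
    // the ADD COLUMNS list, and catalogTable.partitionSchema.
    val dataSchema      = StructType(Seq(StructField("id", IntegerType)))
    val addedColumns    = Seq(StructField("value", StringType))
    val partitionSchema = StructType(Seq(StructField("dt", StringType)))

    // Mirrors the diff: partition columns are appended at the end.
    val reorderedSchema = dataSchema ++ addedColumns ++ partitionSchema
    val newSchema = StructType(reorderedSchema.toArray)

    // Prints: id, value, dt -- the partition column "dt" is part of newSchema.
    println(newSchema.fieldNames.mkString(", "))
  }
}
```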
---