allisonwang-db commented on code in PR #37099:
URL: https://github.com/apache/spark/pull/37099#discussion_r917189063
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala:
##########
@@ -141,7 +143,19 @@ case class CreateDataSourceTableAsSelectCommand(
mode: SaveMode,
query: LogicalPlan,
outputColumnNames: Seq[String])
- extends DataWritingCommand {
+ extends V1WriteCommand {
+
+  override def requiredOrdering: Seq[SortOrder] = {
+    val unresolvedPartitionColumns =
+      table.partitionColumnNames.map(UnresolvedAttribute.quoted)
+ val partitionColumns = DataSource.resolvePartitionColumns(
+ unresolvedPartitionColumns,
+ outputColumns,
+ query,
+ SparkSession.active.sessionState.conf.resolver)
+    // We do not need the path option from the table location to get writer bucket spec.
Review Comment:
I've removed the confusing comment. It meant that we don't need the other
options from the table storage, such as the table path, when building the
writer bucket spec.
https://github.com/apache/spark/blob/3331d4ccb7df9aeb1972ed86472269a9dbd261ff/sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala#L176-L180
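For context, the resolution step in the diff matches each declared partition
column name against the query's output columns using the session's resolver
(which encodes the case-sensitivity rule). The following is a minimal,
self-contained sketch of that name-based lookup; it uses hypothetical
standalone types (`Column`, `Resolver`), not Spark's actual
`DataSource.resolvePartitionColumns` implementation:

```scala
// Sketch of name-based partition column resolution (assumed simplification,
// not Spark's real code). A Resolver decides whether two names match, e.g.
// case-insensitively under the default spark.sql.caseSensitive=false.
object ResolvePartitionColumnsSketch {
  type Resolver = (String, String) => Boolean
  val caseInsensitive: Resolver = (a, b) => a.equalsIgnoreCase(b)

  final case class Column(name: String)

  // For each declared partition column name, find the matching output
  // column of the query, or fail if it cannot be resolved.
  def resolvePartitionColumns(
      partitionNames: Seq[String],
      outputColumns: Seq[Column],
      resolver: Resolver): Seq[Column] =
    partitionNames.map { p =>
      outputColumns
        .find(c => resolver(c.name, p))
        .getOrElse(throw new IllegalArgumentException(
          s"Partition column $p not found among ${outputColumns.map(_.name)}"))
    }

  def main(args: Array[String]): Unit = {
    val out = Seq(Column("id"), Column("Year"), Column("month"))
    // Resolves despite case differences under a case-insensitive resolver.
    println(resolvePartitionColumns(Seq("year", "MONTH"), out, caseInsensitive))
  }
}
```

The resolved columns are what the command can then turn into the
`requiredOrdering` it reports to the writer.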
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]