Hisoka-X commented on code in PR #42220:
URL: https://github.com/apache/spark/pull/42220#discussion_r1297920816
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##########
@@ -238,11 +238,17 @@ object TableOutputResolver {
     if (reordered.length == expectedCols.length) {
       if (matchedCols.size < inputCols.length) {
-        val extraCols = inputCols.filterNot(col => matchedCols.contains(col.name))
-          .map(col => s"${toSQLId(col.name)}").mkString(", ")
-        throw QueryCompilationErrors.incompatibleDataToTableExtraStructFieldsError(
-          tableName, colPath.quoted, extraCols
-        )
+        if (colPath.isEmpty) {
+          val cannotFindCol = expectedCols.filter(col => !matchedCols.contains(col.name)).head.name
+          throw QueryCompilationErrors.incompatibleDataToTableCannotFindDataError(tableName,
Review Comment:
https://github.com/apache/spark/pull/42220#discussion_r1297314416
The main reason is that by the time we reach here, the top-level columns can no longer hit
`TOO_MANY_DATA_COLUMNS`, because that case is already checked at
`TableOutputResolver:51`. The only remaining possibility is an extra column together with a
missing column, so we throw `CANNOT_FIND_DATA` to keep v1 and v2 aligned.
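
To make the reasoning concrete, here is a minimal, self-contained Scala sketch (not the actual Spark code; the object, column names, and the println are illustrative assumptions) of why only the cannot-find-data case can remain at this point: the column-count mismatch is assumed to have been rejected earlier, so an unmatched input column implies some expected column is missing.

```scala
// Illustrative sketch only: names and data are hypothetical, not Spark internals.
object CannotFindDataSketch {
  def main(args: Array[String]): Unit = {
    val expectedCols = Seq("a", "b", "c")   // columns of the target table
    val inputCols    = Seq("a", "b", "d")   // columns produced by the query
    val matchedCols  = expectedCols.toSet.intersect(inputCols.toSet)

    // An earlier check (analogous to TableOutputResolver:51) is assumed to have
    // already rejected expectedCols.length != inputCols.length for top-level
    // columns, so a mismatch here can only mean one extra input column and one
    // missing expected column.
    if (matchedCols.size < inputCols.length) {
      val cannotFindCol = expectedCols.filterNot(matchedCols.contains).head
      println(s"CANNOT_FIND_DATA: cannot find data for output column `$cannotFindCol`")
    }
  }
}
```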
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]