viirya commented on a change in pull request #31440:
URL: https://github.com/apache/spark/pull/31440#discussion_r568891478



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -4034,3 +4009,35 @@ object ApplyCharTypePadding extends Rule[LogicalPlan] {
     if (targetLength > charLength) StringRPad(expr, Literal(targetLength)) else expr
   }
 }
+
+/**
+ * This rule removes metadata columns from `DataSourceV2Relation` under 2 cases:
+ *   - A single v2 scan (can be produced by `spark.table`), which is similar to star expansion, and
+ *     metadata columns should only be picked by explicit references.
+ *   - V2 scans under writing commands, as we can't insert into metadata columns.

Review comment:
       This is for the `table` in `InsertIntoStatement`. How about the `query`? E.g. `spark.table(...).write.insertInto(...)`. Do we need to remove metadata columns for the `query` here if it is also a v2 scan?
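
       To make the question concrete, here is a minimal toy model (plain Scala, not Spark's actual `DataSourceV2Relation` or `InsertIntoStatement` classes; the names `Scan`, `InsertInto`, and `removeMetadata` are hypothetical) sketching a write command whose `query` side is itself a v2 scan carrying metadata columns, and a rule that strips them from both sides:

```scala
// Toy model of the scenario under review. `Attr`, `Scan`, and `InsertInto`
// are simplified stand-ins for Catalyst attributes, a DSv2 scan relation,
// and an insert command; they are NOT Spark's real classes.
case class Attr(name: String, isMetadata: Boolean = false)
case class Scan(output: Seq[Attr])
case class InsertInto(table: Scan, query: Scan)

// Drop metadata columns from a scan's output, as the rule being reviewed
// does for v2 scans that are not explicitly referencing them.
def removeMetadata(scan: Scan): Scan =
  scan.copy(output = scan.output.filterNot(_.isMetadata))

// The reviewer's point: handling only `table` is not enough when the
// `query` (e.g. spark.table(...).write.insertInto(...)) is also a v2 scan,
// so this sketch strips metadata columns from both sides.
def applyRule(cmd: InsertInto): InsertInto =
  cmd.copy(
    table = removeMetadata(cmd.table),
    query = removeMetadata(cmd.query))

object Demo extends App {
  val meta = Attr("_partition", isMetadata = true)
  val cmd = InsertInto(
    table = Scan(Seq(Attr("id"), meta)),
    query = Scan(Seq(Attr("id"), meta)))
  val rewritten = applyRule(cmd)
  // Metadata columns are gone from both the target table and the query.
  println(rewritten.query.output.map(_.name))
}
```

       If the rule only matches the relation in `table`, the metadata columns in the `query` scan would survive and could not be mapped onto any insertable column.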




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
