dtenedor opened a new pull request, #37423:
URL: https://github.com/apache/spark/pull/37423

   ### What changes were proposed in this pull request?
   
   Enable implicit DEFAULT column values in inserts from DataFrames.
   
   This mostly already worked, since DataFrame inserts already convert to 
LogicalPlans. I added tests and a small analysis change, because the operators 
are resolved one-by-one rather than all at once.
   
   Note that explicit column "default" references are not supported in write 
operations from DataFrames: since the operators are resolved one-by-one, any 
`.select` referring to "default" generates a "column not found" error before 
any following `.insertInto`.
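   
   To illustrate, here is a minimal sketch of the implicit-default behavior this PR enables (the table name, column names, and default value are invented for the example, and assume a Spark session with an implicit `toDF` conversion in scope):
   
   ```scala
   // The destination table declares a DEFAULT for one of its columns.
   spark.sql("CREATE TABLE t(i INT, s STRING DEFAULT 'abc') USING PARQUET")
   
   // A DataFrame insert that omits the defaulted column should now fill in
   // the declared default, matching the equivalent SQL INSERT command.
   Seq(42).toDF("i").write.insertInto("t")
   
   // SELECT * FROM t should then return the row (42, 'abc').
   ```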
   
   ### Why are the changes needed?
   
   This makes inserts from DataFrames produce the same results as those from 
SQL commands, for consistency and correctness.
   
   ### Does this PR introduce _any_ user-facing change?
   
   No.
   
   ### How was this patch tested?
   
   Extended the `InsertSuite` in this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]