morvenhuang commented on PR #36020:
URL: https://github.com/apache/spark/pull/36020#issuecomment-1090092627

   @HyukjinKwon Hi Hyukjin, thanks for the comment. That implementation is 
great work, but there is a small issue caused by an unnecessary column-count 
check: given a table t1 with two columns (c1 int, c2 int), `insert into 
t1(c1) values(100)` will still fail even when 
`useNullsForMissingDefaultValues` is enabled.
   
   I believe the column-count check is no longer necessary when this option 
is enabled, since that implementation internally puts all of the table's 
columns into the query output, which means `insert into t1(c1) values(100)` 
is effectively `insert into t1(c1, c2) values(100, null)` after parsing.
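   The rewrite described above can be sketched in a few lines of Python. This is purely illustrative (the function name, signature, and data shapes are my own, not Spark's actual analyzer code): with the option enabled, every table column appears in the output in table order, and columns missing from the user-specified list are filled with NULL, so the column counts match again.

```python
# Minimal sketch (illustrative names, not Spark's analyzer code) of the
# column-padding rewrite: columns missing from the user-specified column
# list are filled with NULL, so the column counts line up afterwards.

def pad_insert_columns(table_cols, user_cols, values):
    """Return the (columns, values) pair as they would look after the
    rewrite; Python None stands in for SQL NULL."""
    assignment = dict(zip(user_cols, values))
    # Emit every table column in table order; missing ones get NULL.
    padded_values = [assignment.get(col) for col in table_cols]
    return list(table_cols), padded_values

# INSERT INTO t1(c1) VALUES (100) on t1(c1 int, c2 int)
cols, vals = pad_insert_columns(["c1", "c2"], ["c1"], [100])
# cols == ["c1", "c2"], vals == [100, None]  i.e. VALUES (100, NULL)
```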
   
   I've pushed a commit that removes the check, so the `insert into t1(c1) 
values(100)` statement now works.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
