gengliangwang edited a comment on pull request #35690:
URL: https://github.com/apache/spark/pull/35690#issuecomment-1057620996


   Supporting default column values is very common among DBMSs. However, this will be a breaking change for Spark SQL.
   Currently, Spark SQL rejects an INSERT statement that supplies fewer columns than the target table:
   ```
   > create table t(i int, j int);
   > insert into t values(1);
   Error in query: `default`.`t` requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).
   ```
   
   After supporting default column values:
   ```
   > create table t(i int, j int);
   > insert into t values(1);
   > select * from t;
   1    NULL
   
   > create table t2(i int, j int default 0);
   > insert into t2 values(1);
   > select * from t2;
   1    0
   ```
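   
   As an aside, a hypothetical sketch of how the SQL-standard `DEFAULT` keyword could combine with this feature (whether this PR supports the explicit keyword is my assumption, not confirmed above):
   ```
   -- assumes t2 from above: create table t2(i int, j int default 0)
   > insert into t2 values(2, default);
   > select * from t2;
   1    0
   2    0
   ```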
   
   I am +1 on the change.
   Before merging this PR, I would like to collect opinions from more committers. We can start an SPIP vote if necessary.
   cc @cloud-fan  @dongjoon-hyun @viirya @dbtsai @huaxingao @maropu @zsxwing 
@wangyum @yaooqinn WDYT? 

