dtenedor opened a new pull request #35855:
URL: https://github.com/apache/spark/pull/35855


   ### What changes were proposed in this pull request?
   
Extend CREATE TABLE and REPLACE TABLE statements to support columns with 
DEFAULT values. Subsequent INSERT INTO statements may then omit these columns 
or refer to them explicitly with the DEFAULT keyword, in which case the Spark 
analyzer automatically substitutes the corresponding default values in the 
right places.
   
   Example:
   ```sql
   CREATE TABLE T(a INT DEFAULT 4, b INT NOT NULL DEFAULT 5);
   INSERT INTO T VALUES (1, 2);
   INSERT INTO T VALUES (1, DEFAULT);
   INSERT INTO T VALUES (DEFAULT, 6);
   SELECT * FROM T;
   (1, 2)
   (1, 5)
   (4, 6)
   ```
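   The substitution performed by the analyzer can be sketched roughly as follows 
(a hypothetical Python illustration of the behavior, not Spark's actual 
implementation):
   ```python
   # Hypothetical sketch: replace DEFAULT markers in an inserted row with the
   # column's declared default value. Not the actual Spark analyzer code.
   DEFAULT = object()  # sentinel standing in for the DEFAULT keyword

   def resolve_defaults(row, defaults):
       """Return the row with each DEFAULT sentinel replaced by the
       corresponding column's declared default value."""
       return tuple(defaults[i] if v is DEFAULT else v
                    for i, v in enumerate(row))

   # Table T(a INT DEFAULT 4, b INT NOT NULL DEFAULT 5) from the example above:
   defaults = [4, 5]
   rows = [(1, 2), (1, DEFAULT), (DEFAULT, 6)]
   print([resolve_defaults(r, defaults) for r in rows])
   # [(1, 2), (1, 5), (4, 6)]
   ```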
   
   ### Why are the changes needed?
   
   This reduces the effort needed to write INSERT INTO statements, and lets 
users who create or update tables add optional columns whose values are filled 
in automatically when not specified.
   
   ### How was this patch tested?
   
   This change is covered by new and existing unit tests, as well as new 
INSERT INTO query test cases covering a variety of positive and negative 
scenarios.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


