gengliangwang commented on code in PR #37256:
URL: https://github.com/apache/spark/pull/37256#discussion_r929562347


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -2919,6 +2919,17 @@ object SQLConf {
       .stringConf
       .createWithDefault("csv,json,orc,parquet")
 
+  val ADD_DEFAULT_COLUMN_EXISTING_TABLE_BANNED_PROVIDERS =
+    buildConf("spark.sql.defaultColumn.addColumnExistingTableBannedProviders")
+      .internal()
+      .doc("List of table providers wherein SQL commands are NOT permitted to assign DEFAULT " +
+        "values to new columns in existing tables, such as when using the ALTER TABLE ... " +
+        "ADD COLUMNS command in SQL. Comma-separated list, whitespace ignored, case-insensitive.")

Review Comment:
   We do support enum configurations by checking string values; see STORE_ASSIGNMENT_POLICY as an example.
   For data sources, we can't make it an enum, since there are external data sources, e.g. Delta, BigQuery, etc.
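
   As a rough illustration of the pattern being referenced (paraphrased from Spark's SQLConf, not a verbatim copy), an enum-style string config validates its value with `checkValues` against the enumeration's string forms:

   ```scala
   // Sketch of the checkValues pattern used by STORE_ASSIGNMENT_POLICY
   // in SQLConf.scala (approximate, for illustration only).
   object StoreAssignmentPolicy extends Enumeration {
     val ANSI, LEGACY, STRICT = Value
   }

   val STORE_ASSIGNMENT_POLICY =
     buildConf("spark.sql.storeAssignmentPolicy")
       .doc("Policy for type coercion when inserting a value into a " +
         "column with a different data type.")
       .stringConf
       .transform(_.toUpperCase(java.util.Locale.ROOT))
       // Rejects any value outside the enum's string forms at set time,
       // which is what makes a stringConf behave like an enum config.
       .checkValues(StoreAssignmentPolicy.values.map(_.toString))
       .createWithDefault(StoreAssignmentPolicy.ANSI.toString)
   ```

   By contrast, the provider list above cannot be validated this way, because the set of valid providers is open-ended (external sources can register arbitrary names).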



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

