amaliujia commented on code in PR #37256:
URL: https://github.com/apache/spark/pull/37256#discussion_r930307735


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -2919,6 +2919,17 @@ object SQLConf {
       .stringConf
       .createWithDefault("csv,json,orc,parquet")
 
+  val ADD_DEFAULT_COLUMN_EXISTING_TABLE_BANNED_PROVIDERS =
+    buildConf("spark.sql.defaultColumn.addColumnExistingTableBannedProviders")
+      .internal()
+      .doc("List of table providers wherein SQL commands are NOT permitted to assign DEFAULT " +
+        "values to new columns in existing tables, such as when using the ALTER TABLE ... " +
+        "ADD COLUMNS command in SQL. Comma-separated list, whitespace ignored, case-insensitive.")

Review Comment:
   To me, it's OK to have a mixed known list of data formats that might cover many usages (or to re-use `DEFAULT_COLUMN_ALLOWED_PROVIDERS`), while leaving room to check unknown or new but working data formats.
   
   I honestly don't have a good idea on this. I will also be happy to hear different opinions from the Spark community side.
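
   The config above describes a comma-separated provider list where whitespace is ignored and matching is case-insensitive. A minimal sketch of how such a value might be parsed and checked; the object and helper names here are illustrative, not from the PR:
   
   ```scala
   // Hypothetical sketch: parse a comma-separated, case-insensitive provider
   // list (whitespace ignored) and test membership, as the doc string describes.
   object ProviderListSketch {
     // Split on commas, trim whitespace, lower-case for case-insensitive lookup.
     def parseProviderList(conf: String): Set[String] =
       conf.split(",").map(_.trim.toLowerCase).filter(_.nonEmpty).toSet
   
     def isBanned(banned: Set[String], provider: String): Boolean =
       banned.contains(provider.trim.toLowerCase)
   
     def main(args: Array[String]): Unit = {
       val banned = parseProviderList(" Csv , JSON ,parquet ")
       println(banned.toList.sorted.mkString(","))
       println(isBanned(banned, "PARQUET"))
     }
   }
   ```
   
   With this shape, `" Csv , JSON ,parquet "` and `"csv,json,parquet"` would resolve to the same set, so a provider check is insensitive to spacing and case.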



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
