cloud-fan commented on a change in pull request #30888:
URL: https://github.com/apache/spark/pull/30888#discussion_r547822642



##########
File path: docs/sql-ref-syntax-dml-insert-into.md
##########
@@ -40,11 +40,20 @@ INSERT INTO [ TABLE ] table_identifier [ partition_spec ]
 
 * **partition_spec**
 
-    An optional parameter that specifies a comma separated list of key and value pairs
+    An optional parameter that specifies a comma-separated list of key and value pairs
     for partitions.
 
     **Syntax:** `PARTITION ( partition_col_name = partition_col_val [ , ... ] )`
 
+* **column_list**
+
+    An optional parameter that specifies a comma-separated list of columns belonging to the `table_identifier` table.
+
+    **Note:** The current behaviour has some limitations:
+    - All specified columns must exist in the table and must not contain duplicates. The list may include all columns except the static partition columns.
+    - The size of the column list must exactly match the size of the data from the `VALUES` clause or query.
+    - The column list may be given in any order and determines the position at which data from the `VALUES` clause or query is inserted.

Review comment:
       Can we move the last point to the description?
   ```
   An optional parameter that specifies .... Spark will reorder the columns of the input query to match the table schema according to the specified column list.
   ```
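
   The reordering behaviour being documented can be illustrated with plain SQL. The sketch below uses Python's built-in `sqlite3` rather than Spark (so there is no partitioning, and the table and column names are purely hypothetical), but the column-list semantics are the same: the listed columns, not the table's declared order, determine which value goes where.

   ```python
   import sqlite3

   conn = sqlite3.connect(":memory:")
   conn.execute(
       "CREATE TABLE students (name TEXT, address TEXT, student_id INTEGER)"
   )

   # The column list (student_id, name, address) differs from the table's
   # declared order (name, address, student_id). Values are matched to the
   # listed columns by position, then stored in the right table columns.
   conn.execute(
       "INSERT INTO students (student_id, name, address) "
       "VALUES (222222, 'Dora Williams', '134 Forest Ave')"
   )

   row = conn.execute(
       "SELECT name, address, student_id FROM students"
   ).fetchone()
   print(row)  # ('Dora Williams', '134 Forest Ave', 222222)
   ```

   The same positional matching is what the proposed wording describes: the engine reorders the input to fit the table schema according to the specified column list.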



