srowen commented on a change in pull request #25522:
[SPARK-28787][DOC][SQL]Document LOAD DATA statement in SQL Reference
URL: https://github.com/apache/spark/pull/25522#discussion_r316333339
##########
File path: docs/sql-ref-syntax-dml-load.md
##########
@@ -19,4 +19,37 @@ license: |
limitations under the License.
---
-**This page is under construction**
+### Description
+The LOAD DATA statement can be used to load data from a file into a table or a
+partition in the table. The target table must not be temporary. A partition
+spec must be provided if and only if the target table is partitioned. The LOAD
+DATA statement is only supported for tables created using the Hive format.
+
+### Syntax
+{% highlight sql %}
+LOAD DATA [LOCAL] INPATH path [OVERWRITE] INTO TABLE [db_name.]table_name
+ [PARTITION partition_spec]
+
+partition_spec:
+ : (part_col_name1=val1, part_col_name2=val2, ...)
+{% endhighlight %}
+
+### Example
+{% highlight sql %}
+LOAD DATA LOCAL INPATH 'data/files/f1.txt'
+ OVERWRITE INTO TABLE testDB.testTable PARTITION (p1 = 3, p2 = 4)
+{% endhighlight %}
+
+### Parameters
+
+#### ***path***:
+Path of the file to be loaded.
+
+#### ***table_name***:
+The name of an existing table.
+
+#### ***partition_spec***:
+Specifies one or more partition column and value pairs.
+
+#### ***LOCAL***:
+If specified, the local file system is used. Otherwise, the default file
+system is used.
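+
+For instance, a load from the local file system might look like the following
+(the path and table names here are hypothetical, for illustration only):
+{% highlight sql %}
+LOAD DATA LOCAL INPATH 'data/files/f2.txt'
+  INTO TABLE testDB.testTable PARTITION (p1 = 3, p2 = 4)
+{% endhighlight %}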
Review comment:
Maybe say that it causes the INPATH to be resolved against the local file
system, instead of the default file system, which is typically distributed
storage.