afeldman1 commented on a change in pull request #1761:
URL: https://github.com/apache/hudi/pull/1761#discussion_r446372801



##########
File path: docs/_docs/2_2_writing_data.md
##########
@@ -176,15 +176,49 @@ In some cases, you may want to migrate your existing table into Hudi beforehand.
 
 ## Datasource Writer
 
-The `hudi-spark` module offers the DataSource API to write (and also read) any data frame into a Hudi table.
-Following is how we can upsert a dataframe, while specifying the field names that need to be used
-for `recordKey => _row_key`, `partitionPath => partition` and `precombineKey => timestamp`
+The `hudi-spark` module offers the DataSource API to write (and read) a Spark DataFrame into a Hudi table. There are a number of options available:
 
+**`HoodieWriteConfig`**:
+
+**TABLE_NAME** (Required)<br>
+
+
+**`DataSourceWriteOptions`**:
+
+**RECORDKEY_FIELD_OPT_KEY** (Required): Primary key field(s). Nested fields can be specified using dot notation, e.g. `a.b.c`. When using multiple columns as the primary key, use comma-separated notation, e.g. `"col1,col2,col3"`. Whether a single column or multiple columns form the primary key is determined by the `KEYGENERATOR_CLASS_OPT_KEY` property.<br>
+Default value: `"uuid"`<br>
+
+**PARTITIONPATH_FIELD_OPT_KEY** (Required): Columns to be used for partitioning the table. To create an unpartitioned table, provide an empty string as the value, e.g. `""`. Specify partitioning/no partitioning using `HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY`.<br>
+Default value: `"partitionpath"`<br>
+
+**PRECOMBINE_FIELD_OPT_KEY** (Required): When two records have the same key value, the record with the largest value in the field specified here will be chosen.<br>
+Default value: `"ts"`<br>
+
+**OPERATION_OPT_KEY**: The [write operations](#write-operations) to use. Note: this cannot change across writes.<br>

Review comment:
       Good catch, that was meant to go in the description of the next parameter, TABLE_TYPE_OPT_KEY.
   
   The text can be made clearer with the following wording.
   Instead of "Note: this cannot change across writes."
   use "Note: After the initial creation of a table, this value must stay consistent when writing to (updating) the table using the Spark SaveMode.Append mode."
   
   Also, I believe that the key and partition columns cannot be changed after the table is first created either, so this note could be added to those fields as well. Is this correct?
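   
   For reference, here is a minimal sketch (not part of this PR) of how these options fit together when writing a DataFrame. The field names match the `recordKey => _row_key`, `partitionPath => partition`, `precombineKey => timestamp` example this section replaces; the `df` variable, table name, and base path are placeholders assumed for illustration:
   
   ```scala
   import org.apache.hudi.DataSourceWriteOptions._
   import org.apache.hudi.config.HoodieWriteConfig
   import org.apache.spark.sql.SaveMode
   
   // df is an existing DataFrame with _row_key, partition and timestamp columns (assumed)
   df.write.format("hudi")
     .option(RECORDKEY_FIELD_OPT_KEY, "_row_key")       // primary key field
     .option(PARTITIONPATH_FIELD_OPT_KEY, "partition")  // partition column ("" for an unpartitioned table)
     .option(PRECOMBINE_FIELD_OPT_KEY, "timestamp")     // largest value wins when two records share a key
     .option(OPERATION_OPT_KEY, "upsert")               // one of the write operations
     .option(HoodieWriteConfig.TABLE_NAME, "my_table")  // placeholder table name
     .mode(SaveMode.Append)                             // keep key/partition settings consistent across Append writes
     .save("/tmp/hudi/my_table")                        // placeholder base path
   ```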
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

