yihua commented on code in PR #5998:
URL: https://github.com/apache/hudi/pull/5998#discussion_r909189508


##########
website/docs/faq.md:
##########
@@ -236,7 +237,7 @@ set hive.input.format=org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat
 
 ### Can I register my Hudi dataset with Apache Hive metastore?
 
-Yes. This can be performed either via the standalone [Hive Sync tool](https://hudi.apache.org/docs/syncing_metastore#hive-sync-tool) or using options in [deltastreamer](https://github.com/apache/hudi/blob/d3edac4612bde2fa9deca9536801dbc48961fb95/docker/demo/sparksql-incremental.commands#L50) tool or [datasource](https://hudi.apache.org/docs/configurations#hoodiedatasourcehive_syncenable).
+Yes. This can be performed either via the standalone [Hive Sync tool](https://hudi.apache.org/docs/writing_data/#syncing-to-hive) or using options in [deltastreamer](https://github.com/apache/hudi/blob/d3edac4612bde2fa9deca9536801dbc48961fb95/docker/demo/sparksql-incremental.commands#L50) tool or [datasource](https://hudi.apache.org/docs/configurations#hoodiedatasourcehive_syncenable).

Review Comment:
   https://hudi.apache.org/docs/writing_data/#syncing-to-hive is not a valid link.



##########
website/docs/faq.md:
##########
@@ -94,7 +95,7 @@ At a high level, Hudi is based on MVCC design that writes data to versioned parq
 
 ### What are some ways to write a Hudi dataset?
 
-Typically, you obtain a set of partial updates/inserts from your source and issue [write operations](https://hudi.apache.org/docs/write_operations/) against a Hudi dataset.  If you ingesting data from any of the standard sources like Kafka, or tailing DFS, the [delta streamer](https://hudi.apache.org/docs/hoodie_deltastreamer#deltastreamer) tool is invaluable and provides an easy, self-managed solution to getting data written into Hudi. You can also write your own code to capture data from a custom source using the Spark datasource API and use a [Hudi datasource](https://hudi.apache.org/docs/writing_data/#spark-datasource-writer) to write into Hudi. 
+Typically, you obtain a set of partial updates/inserts from your source and issue [write operations](https://hudi.apache.org/docs/writing_data/) against a Hudi dataset.  If you ingesting data from any of the standard sources like Kafka, or tailing DFS, the [delta streamer](https://hudi.apache.org/docs/writing_data/#deltastreamer) tool is invaluable and provides an easy, self-managed solution to getting data written into Hudi. You can also write your own code to capture data from a custom source using the Spark datasource API and use a [Hudi datasource](https://hudi.apache.org/docs/writing_data/#datasource-writer) to write into Hudi. 

Review Comment:
   https://hudi.apache.org/docs/writing_data/#deltastreamer does not point to deltastreamer.



##########
website/docs/faq.md:
##########
@@ -94,7 +95,7 @@ At a high level, Hudi is based on MVCC design that writes data to versioned parq
 
 ### What are some ways to write a Hudi dataset?
 
-Typically, you obtain a set of partial updates/inserts from your source and issue [write operations](https://hudi.apache.org/docs/write_operations/) against a Hudi dataset.  If you ingesting data from any of the standard sources like Kafka, or tailing DFS, the [delta streamer](https://hudi.apache.org/docs/hoodie_deltastreamer#deltastreamer) tool is invaluable and provides an easy, self-managed solution to getting data written into Hudi. You can also write your own code to capture data from a custom source using the Spark datasource API and use a [Hudi datasource](https://hudi.apache.org/docs/writing_data/#spark-datasource-writer) to write into Hudi. 
+Typically, you obtain a set of partial updates/inserts from your source and issue [write operations](https://hudi.apache.org/docs/writing_data/) against a Hudi dataset.  If you ingesting data from any of the standard sources like Kafka, or tailing DFS, the [delta streamer](https://hudi.apache.org/docs/writing_data/#deltastreamer) tool is invaluable and provides an easy, self-managed solution to getting data written into Hudi. You can also write your own code to capture data from a custom source using the Spark datasource API and use a [Hudi datasource](https://hudi.apache.org/docs/writing_data/#datasource-writer) to write into Hudi. 

Review Comment:
   I think https://hudi.apache.org/docs/write_operations/ should be used as the link for write operations.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
