n3nash commented on a change in pull request #2168:
URL: https://github.com/apache/hudi/pull/2168#discussion_r520127705
##########
File path: hudi-integ-test/README.md
##########
@@ -267,3 +269,170 @@ spark-submit \
--table-type MERGE_ON_READ \
--compact-scheduling-minshare 1
```
+
+For a long-running test suite, validation has to be done differently, since the same dag is run repeatedly.
+Hence "ValidateDatasetNode" is introduced, which reads the entire input data and compares it with the hudi
+contents both via the spark datasource and via the hive table using the spark sql engine.
+
+If you have "ValidateDatasetNode" in your dag, do not replace the hive jars as instructed above, since the
+spark sql engine does not work well with the hive2* jars. So, after running the docker setup, just copy
+test.properties and your dag of interest and you are good to go.
+
+For repeated runs, two additional configs need to be set: "--num-rounds N" and "--delay-between-rounds-mins Y".
+This means your dag will be repeated N times with a delay of Y mins between successive rounds.
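+
+For example, the repeat configs can be appended to the spark-submit invocation shown earlier (other options
+and paths elided here, shown only as a sketch):
+```
+spark-submit \
+  ... \
+  --table-type MERGE_ON_READ \
+  --num-rounds 10 \
+  --delay-between-rounds-mins 10
+```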
+
+Also, ValidateDatasetNode can be configured in two ways: with "delete_input_data: true" set, or not set.
+When "delete_input_data" is set for ValidateDatasetNode, the entire input data is deleted once validation
+completes. So, the suggestion is to use ValidateDatasetNode with "delete_input_data" as the last node in
+the dag.
+Example dag:
+```
+ Insert
+ Upsert
+ ValidateDatasetNode with delete_input_data = true
+```
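+
+Dags for the test suite are defined as YAML files. A hypothetical sketch of the dag above (node names,
+config keys, and values here are illustrative only; consult the bundled example dags for the exact schema):
+```
+dag_name: insert-upsert-validate
+dag_rounds: 10                    # illustrative: repeat count, cf. --num-rounds
+dag_intermittent_delay_mins: 10   # illustrative: delay, cf. --delay-between-rounds-mins
+dag_content:
+  first_insert:
+    type: InsertNode
+    deps: none
+  first_upsert:
+    type: UpsertNode
+    deps: first_insert
+  validate_all:
+    type: ValidateDatasetNode
+    deps: first_upsert
+    config:
+      delete_input_data: true     # clean up input once validation passes
+```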
+
+If the above dag is run with "--num-rounds 10 --delay-between-rounds-mins 10", then it will run 10 times with 10
Review comment:
What happens if the execution of one round takes more than 10 mins?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]