This is an automated email from the ASF dual-hosted git repository.
danny0405 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/master by this push:
new bdfaa4e116 HUDI-5398. Fix Typo in hudi-integ-test#README.md. (#7477)
bdfaa4e116 is described below
commit bdfaa4e1167efd6e47ca2c62d6115e7c4a0f98db
Author: slfan1989 <[email protected]>
AuthorDate: Fri Dec 23 13:03:20 2022 +0800
HUDI-5398. Fix Typo in hudi-integ-test#README.md. (#7477)
Co-authored-by: komao <[email protected]>
---
hudi-integ-test/README.md | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/hudi-integ-test/README.md b/hudi-integ-test/README.md
index bea9219294..ac62e61e03 100644
--- a/hudi-integ-test/README.md
+++ b/hudi-integ-test/README.md
@@ -359,7 +359,7 @@ If you wish to do a cumulative validation, do not set delete_input_data in Valid
may not scale beyond certain point since input data as well as hudi content's keeps occupying the disk and grows for every cycle.
-Lets see an example where you don't set "delete_input_data" as part of Validation.
+Let's see an example where you don't set "delete_input_data" as part of Validation.
```
Insert
Upsert
@@ -491,7 +491,7 @@ cow-long-running-multi-partitions.yaml: long running dag wit 50 iterations with
```
To run test suite jobs for MOR table, pretty much any of these dags can be used as is. Only change is with the
-spark-shell commnad, you need to fix the table type.
+spark-shell command, you need to fix the table type.
```
--table-type MERGE_ON_READ
```
@@ -547,7 +547,7 @@ multi-writer-local-2.properties
multi-writer-local-3.properties
multi-writer-local-4.properties
-These have configs that uses InProcessLockProvider. Configs specifc to InProcessLockProvider is:
+These have configs that uses InProcessLockProvider. Configs specific to InProcessLockProvider is:
hoodie.write.lock.provider=org.apache.hudi.client.transaction.lock.InProcessLockProvider
multi-writer-1.properties
@@ -632,7 +632,7 @@ Sample spark-submit command to test one delta streamer and a spark data source w
Properties that differ from previous scenario and this one are:
--input-base-paths refers to 4 paths instead of 2
---props-paths again, refers to 4 paths intead of 2.
+--props-paths again, refers to 4 paths instead of 2.
-- Each property file will contain properties for one spark datasource writer.
--workload-yaml-paths refers to 4 paths instead of 2.
-- Each yaml file used different range of partitions so that there won't be any conflicts while doing concurrent writes.
@@ -646,7 +646,7 @@ die because there is an inflight delta commit from another writer.
=======
### Testing async table services
We can test async table services with deltastreamer using below command. 3 additional arguments are required to test async
-table services comapared to previous command.
+table services compared to previous command.
```shell
--continuous \
@@ -704,7 +704,7 @@ Example command : // execute the command from within docker folder.
./generate_test_suite.sh --execute_test_suite false --include_medium_test_suite_yaml true --include_long_test_suite_yaml true
By default, generate_test_suite will run sanity test. In addition it supports 3 more yamls.
-medium_test_suite, long_test_suite and clustering_test_suite. Users can add the required yamls via command line as per thier
+medium_test_suite, long_test_suite and clustering_test_suite. Users can add the required yamls via command line as per their
necessity.
Also, "--execute_test_suite" false will generate all required files and yamls in a local staging directory if users want to inspect them.