[
https://issues.apache.org/jira/browse/HUDI-2267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17399758#comment-17399758
]
ASF GitHub Bot commented on HUDI-2267:
--------------------------------------
nsivabalan commented on a change in pull request #3482:
URL: https://github.com/apache/hudi/pull/3482#discussion_r689564878
##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+usage="
Review comment:
Have you tried using this script (generate_test_suite.sh) in EMR?
##########
File path: hudi-integ-test/README.md
##########
@@ -177,20 +177,13 @@ cd /opt
Copy the integration tests jar into the docker container
```
-docker cp packaging/hudi-integ-test-bundle/target/hudi-integ-test-bundle-0.8.0-SNAPSHOT.jar adhoc-2:/opt
+docker cp packaging/hudi-integ-test-bundle/target/hudi-integ-test-bundle-0.9.0-SNAPSHOT.jar adhoc-2:/opt
```
```
docker exec -it adhoc-2 /bin/bash
```
-Clean the working directories before starting a new test:
Review comment:
why removed this?
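For context, the removed section presumably pointed at a cleanup step like the following (a sketch only: the paths are taken from the spark-submit command later in this README, and the commands are echoed rather than executed here):

```shell
# Clean the working directories before starting a new test (sketch:
# echoed instead of executed; paths match the spark-submit example's
# --input-base-path / --target-base-path in this README).
for dir in \
  /user/hive/warehouse/hudi-integ-test-suite/input \
  /user/hive/warehouse/hudi-integ-test-suite/output
do
  echo hdfs dfs -rm -r "$dir"
done
```

Note the PR also adds `--clean-input` / `--clean-output` flags to the job itself, which may be why the manual step was dropped.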
##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+usage="
+USAGE:
$(basename "$0") [--help] [--all boolean] -- Script to generate the test suite according to the arguments provided and run these test suites.
Review comment:
Can we add some example commands.
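An example invocation might look like this (a sketch: flag names are taken from the script's usage text, the values are illustrative, and the command is built and printed rather than executed):

```shell
# Build an example generate_test_suite.sh invocation (printed, not run).
# Flag names come from the script's usage text; values are illustrative.
cmd="./generate_test_suite.sh --table_type MERGE_ON_READ"
cmd="$cmd --include_medium_test_suite_yaml true --medium_num_iterations 20"
cmd="$cmd --include_long_test_suite_yaml true --long_num_iterations 30"
cmd="$cmd --intermittent_delay_mins 1"
echo "$cmd"
```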
##########
File path: hudi-integ-test/README.md
##########
@@ -253,23 +254,119 @@ spark-submit \
--conf spark.network.timeout=600s \
--conf spark.yarn.max.executor.failures=10 \
--conf spark.sql.catalogImplementation=hive \
+--conf spark.driver.extraClassPath=/var/demo/jars/* \
+--conf spark.executor.extraClassPath=/var/demo/jars/* \
--class org.apache.hudi.integ.testsuite.HoodieTestSuiteJob \
-/opt/hudi-integ-test-bundle-0.8.0-SNAPSHOT.jar \
+/opt/hudi-integ-test-bundle-0.9.0-SNAPSHOT.jar \
--source-ordering-field test_suite_source_ordering_field \
--use-deltastreamer \
--target-base-path /user/hive/warehouse/hudi-integ-test-suite/output \
--input-base-path /user/hive/warehouse/hudi-integ-test-suite/input \
--target-table table1 \
--props file:/var/hoodie/ws/docker/demo/config/test-suite/test.properties \
---schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider \
+--schemaprovider-class org.apache.hudi.integ.testsuite.schema.TestSuiteFileBasedSchemaProvider \
--source-class org.apache.hudi.utilities.sources.AvroDFSSource \
--input-file-size 125829120 \
--workload-yaml-path file:/var/hoodie/ws/docker/demo/config/test-suite/complex-dag-mor.yaml \
--workload-generator-classname org.apache.hudi.integ.testsuite.dag.WorkflowDagGenerator \
--table-type MERGE_ON_READ \
---compact-scheduling-minshare 1
+--compact-scheduling-minshare 1 \
+--hoodie-conf hoodie.metrics.on=true \
+--hoodie-conf hoodie.metrics.reporter.type=GRAPHITE \
+--hoodie-conf hoodie.metrics.graphite.host=graphite \
+--hoodie-conf hoodie.metrics.graphite.port=2003 \
+--clean-input \
+--clean-output
```
+## Visualize and inspect the hoodie metrics and performance (local)
+The Graphite server is already set up (and running) by `docker/setup_demo.sh`.
+
+Open a browser and access the metrics at
+```
+http://localhost:80
+```
+Dashboard:
+```
+http://localhost/dashboard
+```
+
+## Running test suite on an EMR cluster
+- Copy the necessary files and jars over to your cluster.
Review comment:
Can we call out the files that are required?
If running manually, we need the following files:
test.properties,
source and target schema,
yaml file,
integ-test-suite bundle jar.
But with generate_test_suite.sh, it may not be clear what needs to be
copied over.
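The list above could be sketched as a single copy step (the host, key path, and exact local file locations below are placeholders, and the scp commands are echoed rather than executed):

```shell
# Copy the files required for a manual run over to the cluster (sketch:
# host, key, and schema-file paths are placeholder assumptions; the file
# list mirrors the review comment above; commands are echoed, not run).
EMR_HOST="hadoop@<emr-master-public-dns>"   # placeholder host
for f in \
  docker/demo/config/test-suite/test.properties \
  source.avsc \
  target.avsc \
  docker/demo/config/test-suite/complex-dag-mor.yaml \
  packaging/hudi-integ-test-bundle/target/hudi-integ-test-bundle-0.9.0-SNAPSHOT.jar
do
  echo scp -i ~/emr_key.pem "$f" "$EMR_HOST:/home/hadoop/"
done
```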
##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+usage="
+USAGE:
+$(basename "$0") [--help] [--all boolean] -- Script to generate the test suite according to the arguments provided and run these test suites.
+
+where:
+ --help show this help text
+ --all set the seed value
+ --execute_test_suite flag if the test suite needs to execute (DEFAULT- true)
+ --medium_num_iterations number of medium iterations (DEFAULT- 20)
+ --long_num_iterations number of long iterations (DEFAULT- 30)
+ --intermittent_delay_mins delay after every test run (DEFAULT- 1)
+ --table_type hoodie table type to test (DEFAULT COPY_ON_WRITE)
+ --include_long_test_suite_yaml include long infra test suite (DEFAULT false)
+ --include_medium_test_suite_yaml include medium infra test suite (DEFAULT false)
+ --cluster_num_itr number of cluster iterations (DEFAULT 30)
+ --include_cluster_yaml include cluster infra test suite (DEFAULT false)
+ --include_cluster_yaml include cluster infra test suite (DEFAULT false)
Review comment:
remove repeated lines
--include_cluster_yaml
##########
File path: docker/compose/docker-compose_hadoop284_hive233_spark244.yml
##########
@@ -201,25 +201,34 @@ services:
command: coordinator
presto-worker-1:
- container_name: presto-worker-1
- hostname: presto-worker-1
- image: apachehudi/hudi-hadoop_2.8.4-prestobase_0.217:latest
- depends_on: ["presto-coordinator-1"]
- environment:
- - PRESTO_JVM_MAX_HEAP=512M
- - PRESTO_QUERY_MAX_MEMORY=1GB
- - PRESTO_QUERY_MAX_MEMORY_PER_NODE=256MB
- - PRESTO_QUERY_MAX_TOTAL_MEMORY_PER_NODE=384MB
- - PRESTO_MEMORY_HEAP_HEADROOM_PER_NODE=100MB
- - TERM=xterm
- links:
- - "hivemetastore"
- - "hiveserver"
- - "hive-metastore-postgresql"
- - "namenode"
- volumes:
- - ${HUDI_WS}:/var/hoodie/ws
- command: worker
+ container_name: presto-worker-1
Review comment:
Can we revert the unintended changes?
##########
File path: docker/generate_test_suite.sh
##########
@@ -16,6 +16,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+usage="
+USAGE:
+$(basename "$0") [--help] [--all boolean] -- Script to generate the test suite according to the arguments provided and run these test suites.
Review comment:
You can assume some sample s3 path, e.g.
s3://hudi_test_bucket/
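For instance, on EMR the spark-submit shown earlier would point its base paths at S3 instead of HDFS. A sketch using the sample bucket above (the key layout under the bucket is an assumption, and only the argument strings are built and printed here):

```shell
# Build S3 variants of the base-path arguments (printed, not executed).
# Bucket name is the sample from the review comment; the key layout
# under it is an assumed example.
bucket="s3://hudi_test_bucket"
target_arg="--target-base-path $bucket/hudi-integ-test-suite/output"
input_arg="--input-base-path $bucket/hudi-integ-test-suite/input"
echo "$target_arg $input_arg"
```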
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
> Test suite infra Automate with playbook
> ---------------------------------------
>
> Key: HUDI-2267
> URL: https://issues.apache.org/jira/browse/HUDI-2267
> Project: Apache Hudi
> Issue Type: Improvement
> Components: Usability
> Reporter: sivabalan narayanan
> Priority: Major
> Labels: pull-request-available
> Fix For: 0.9.0
>
>
> Build a test infra (a suite of tests) that can be run w/ jenkins or CI
> (optionally run it), and also script it to run on cluster/AWS infra.
> Purpose:
> There are a lot of additional features in Hudi that do not get tested when
> developing new features. Non-core features such as clustering, archival, and
> the bulk_insert row writer path don't get the necessary attention while
> developing a particular feature. So, we are in need of a test infra that one
> can leverage. One should be able to trigger a script called certify_patch or
> something similar, and it should run all the different tests that one could
> possibly hit out there in the wild and report whether all flows succeeded or
> anything failed.
> Operations to be verified:
> For both types of table:
> bulk insert, insert, upsert, delete, insert overwrite, insert overwrite
> table, delete partition.
> bulk_insert row writer with the above operations.
> Test that cleaning and archival get triggered and executed as expected for
> both of the above flows.
> Clustering.
> Metadata table.
> For MOR:
> Compaction
> Clustering and compaction one after another.
> Clustering and compaction triggered concurrently.
> Note: For all tests, verify the sanity of data after every test, i.e., save
> the input data and verify it against the Hudi dataset.
> * Test infra should have capability to test with schema of user's choice.
> * Should be able to test all 3 levels (write client, deltastreamer, spark
> datasource). Some operations may not be feasible to test at all levels, but
> that's understandable.
> * Once we have end-to-end support for spark, we need to add support for
> flink and java as well. The scope for java might be less since there is no
> spark datasource layer. But we can revisit later once we have covered the
> spark engine.
> Publish a playbook on how to use this test infra, both with an already
> released version and with a locally built hudi bundle jar.
> * cluster/AWS run
> * local docker run.
> * CI integration
> Future scope:
> We can make the versions of spark, hadoop, hive, etc. configurable down the
> line, but for the first cut, we wanted to get an end-to-end flow working
> smoothly. It should be usable by anyone from the community or a new user who
> is looking to use Hudi.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)