n3nash commented on a change in pull request #1100:
URL: https://github.com/apache/hudi/pull/1100#discussion_r441245907
##########
File path: hudi-spark/src/main/java/org/apache/hudi/keygen/KeyGenerator.java
##########
@@ -24,12 +24,17 @@
import org.apache.avro.generic.GenericRecord;
import java.io.Serializable;
+import java.util.List;
/**
* Abstract class to extend for plugging in extraction of {@link HoodieKey} from an Avro record.
*/
public abstract class KeyGenerator implements Serializable {
+ protected List<String> recordKeyFields;
Review comment:
Yes, this was a side effect of merging: the complexKeyGenerator was added
explicitly to the test suite, so it was refactored to conform to the
previous abstractions.
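For illustration, here is a minimal sketch of how a subclass can consume the shared `recordKeyFields` list; the class name, config key, and constructor wiring are assumptions for the sketch, not the exact code in this PR:

```java
import java.util.Arrays;

import org.apache.avro.generic.GenericRecord;
import org.apache.hudi.DataSourceUtils;
import org.apache.hudi.common.model.HoodieKey;
import org.apache.hudi.common.util.TypedProperties;

// Hypothetical subclass: derives the record key from the first entry of the
// recordKeyFields list that this change hoists into KeyGenerator.
public class ExampleKeyGenerator extends KeyGenerator {

  public ExampleKeyGenerator(TypedProperties props) {
    super(props);
    // Assumed config key, mirroring DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY.
    this.recordKeyFields = Arrays.asList(
        props.getString("hoodie.datasource.write.recordkey.field").split(","));
  }

  @Override
  public HoodieKey getKey(GenericRecord record) {
    // Same pattern as the NonpartitionedKeyGenerator change below in this PR.
    String recordKey =
        DataSourceUtils.getNestedFieldValAsString(record, recordKeyFields.get(0), true);
    return new HoodieKey(recordKey, "");
  }
}
```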
##########
File path: hudi-spark/src/main/java/org/apache/hudi/keygen/NonpartitionedKeyGenerator.java
##########
@@ -38,9 +38,10 @@ public NonpartitionedKeyGenerator(TypedProperties props) {
@Override
public HoodieKey getKey(GenericRecord record) {
- String recordKey = DataSourceUtils.getNestedFieldValAsString(record, recordKeyField, true);
+ String recordKey = DataSourceUtils.getNestedFieldValAsString(record, recordKeyFields.get(0), true);
Review comment:
Ack, same as above.
##########
File path: hudi-spark/src/main/scala/org/apache/hudi/DataSourceOptions.scala
##########
@@ -265,6 +265,7 @@ object DataSourceWriteOptions {
val HIVE_ASSUME_DATE_PARTITION_OPT_KEY = "hoodie.datasource.hive_sync.assume_date_partitioning"
val HIVE_USE_PRE_APACHE_INPUT_FORMAT_OPT_KEY = "hoodie.datasource.hive_sync.use_pre_apache_input_format"
val HIVE_USE_JDBC_OPT_KEY = "hoodie.datasource.hive_sync.use_jdbc"
+ val HIVE_ENABLE_TEST_SUITE_OPT_KEY = "hoodie.datasource.hive_sync.run_test_suite"
Review comment:
removed
##########
File path: hudi-test-suite/README.md
##########
@@ -0,0 +1,300 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+This page describes in detail how to run end-to-end tests on a Hudi dataset. These tests help improve our confidence
+in a release and also enable large-scale performance benchmarks.
+
+# Objectives
+
+1. Test with different versions of core libraries and components such as `hdfs`, `parquet`, `spark`,
+`hive` and `avro`.
+2. Generate different types of workloads across different dimensions such as `payload size`, `number of updates`,
+`number of inserts`, `number of partitions`.
+3. Perform multiple types of operations such as `insert`, `bulk_insert`, `upsert`, `compact`, `query`.
+4. Support custom post-process actions and validations.
+
+# High Level Design
+
+The Hudi test suite runs as a long-running Spark job. The suite is divided into the following high-level components:
+
+## Workload Generation
+
+This component generates the workload: `inserts`, `upserts`, etc.
+
+## Workload Scheduling
+
+Depending on the type of workload generated, data is either ingested into the target Hudi
+dataset or the corresponding workload operation is executed. For example, compaction does not necessarily need a
+workload to be generated/ingested but may still require an execution.
+
+## Other actions/operations
+
+The test suite supports different types of operations besides ingestion, such as Hive query execution, the Clean
+action, etc.
+
+# Usage instructions
+
+
+## Entry class to the test suite
+
+```
+org.apache.hudi.testsuite.HoodieTestSuiteJob.java - Entry point of the hudi test suite job. This
+class wraps all the functionalities required to run a configurable integration suite.
+```
+
+## Configurations required to run the job
+```
+org.apache.hudi.testsuite.HoodieTestSuiteJob.HoodieTestSuiteConfig - Config class that drives the behavior of the
+integration test suite. This class extends from com.uber.hoodie.utilities.DeltaStreamerConfig. Look at the
+HudiDeltaStreamer page to learn about all the available configs applicable to your test suite.
+```
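+
+As a rough sketch of what such a config class can look like, the snippet below wires the two flags used in the
+spark-submit examples further down with JCommander-style `@Parameter` annotations (the style the DeltaStreamer
+configs use). The field names are assumptions, and the real class extends the DeltaStreamerConfig mentioned above
+(omitted here so the sketch stands alone):
+
+```java
+import com.beust.jcommander.Parameter;
+
+// Illustrative sketch only; see org.apache.hudi.testsuite.HoodieTestSuiteJob
+// for the actual HoodieTestSuiteConfig.
+public class HoodieTestSuiteConfig {
+
+  @Parameter(names = {"--workload-yaml-path"},
+      description = "Path to the YAML file that defines the workflow DAG to execute")
+  public String workloadYamlPath;
+
+  @Parameter(names = {"--workload-generator-classname"},
+      description = "Fully qualified name of the WorkflowDagGenerator used to build the DAG")
+  public String workloadDagGenerator;
+}
+```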
+
+## Generating a custom Workload Pattern
+
+There are 2 ways to generate a workload pattern:
+
+ 1. Programmatically
+
+Choose to write up the entire DAG of operations programmatically; take a look at the `WorkflowDagGenerator` class.
+Once you're ready with the DAG you want to execute, simply pass the class name as follows (a sketch of what such a
+generator can look like appears after this section):
+
+```
+spark-submit
+...
+...
+--class org.apache.hudi.testsuite.HoodieTestSuiteJob
+--workload-generator-classname org.apache.hudi.testsuite.dag.scheduler.<your_workflowdaggenerator>
+...
+```
+
+ 2. YAML file
+
+Choose to write up the entire DAG of operations in YAML; take a look at `complex-workload-dag-cow.yaml` or
+`complex-workload-dag-mor.yaml`.
+Once you're ready with the DAG you want to execute, simply pass the yaml file path as follows:
+
+```
+spark-submit
+...
+...
+--class org.apache.hudi.testsuite.HoodieTestSuiteJob
+--workload-yaml-path /path/to/your-workflow-dag.yaml
+...
+```
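+
+For a concrete feel of option 1, here is the minimal sketch of a programmatic DAG generator promised above. The
+package paths, node classes, and `Config` builder methods are assumptions patterned on the class names this README
+mentions, not verbatim API; check the `hudi-test-suite` dag package for the real signatures:
+
+```java
+import java.util.Arrays;
+
+// All package paths below are assumptions for the sketch.
+import org.apache.hudi.testsuite.configuration.DeltaConfig.Config;
+import org.apache.hudi.testsuite.dag.WorkflowDag;
+import org.apache.hudi.testsuite.dag.WorkflowDagGenerator;
+import org.apache.hudi.testsuite.dag.nodes.DagNode;
+import org.apache.hudi.testsuite.dag.nodes.InsertNode;
+import org.apache.hudi.testsuite.dag.nodes.UpsertNode;
+
+public class SimpleInsertUpsertDagGenerator implements WorkflowDagGenerator {
+
+  @Override
+  public WorkflowDag build() {
+    // Root node: insert fresh records across a few new partitions.
+    DagNode root = new InsertNode(Config.newBuilder()
+        .withNumRecordsToInsert(1000)
+        .withNumInsertPartitions(3)
+        .build());
+
+    // Child node: upsert a subset of the records produced by the insert.
+    DagNode child = new UpsertNode(Config.newBuilder()
+        .withNumRecordsToUpdate(100)
+        .build());
+    root.addChildNode(child);
+
+    return new WorkflowDag(Arrays.asList(root));
+  }
+}
+```
+
+You would then pass the fully qualified name of `SimpleInsertUpsertDagGenerator` via
+`--workload-generator-classname`, exactly as in the first spark-submit example above.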
+
+## Building the test suite
+
+The test suite can be found in the `hudi-test-suite` module. Use the `prepare_integration_suite.sh` script to build
+the test suite; you can provide different parameters to the script.
+
+```
+shell$ ./prepare_integration_suite.sh --help
+Usage: prepare_integration_suite.sh
+ --spark-command, prints the spark command
+ -h, hdfs-version
+ -s, spark version
+ -p, parquet version
+ -a, avro version
+ -s, hive version
+```
+
+```
+shell$ ./prepare_integration_suite.sh
+....
+....
+Final command : mvn clean install -DskipTests
+```
Review comment:
removed
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]