nsivabalan commented on code in PR #7092:
URL: https://github.com/apache/hudi/pull/7092#discussion_r1017399907


##########
hudi-integ-test/src/main/java/org/apache/hudi/integ/testsuite/dag/nodes/ValidateDatasetNode.java:
##########
@@ -51,7 +51,7 @@ public Dataset<Row> getDatasetToValidate(SparkSession session, ExecutionContext
                                            StructType inputSchema) {
     String partitionPathField = context.getWriterContext().getProps().getString(DataSourceWriteOptions.PARTITIONPATH_FIELD().key());
     String hudiPath = context.getHoodieTestSuiteWriter().getCfg().targetBasePath + (partitionPathField.isEmpty() ? "/" : "/*/*/*");
-    Dataset<Row> hudiDf = session.read().option(HoodieMetadataConfig.ENABLE.key(), String.valueOf(config.isEnableMetadataValidate()))
+    Dataset<Row> hudiDf = session.read().option(HoodieMetadataConfig.ENABLE.key(), String.valueOf(context.getHoodieTestSuiteWriter().getCfg().enableMetadataOnRead))

Review Comment:
   Here is how I am deciding where each config fits: some are top-level configs, like the table base path, index type, and whether metadata is enabled, since we want those to apply to all nodes. Others are node-level configs, e.g. num_records_to_insert, the query to use for validation, etc.
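   To illustrate the split described above, here is a minimal, self-contained sketch (these are hypothetical classes for illustration, not Hudi's actual `HoodieTestSuiteWriter`/`DeltaConfig` types): suite-level settings live in one object shared by every DAG node, while node-level knobs travel with each node's own property map.

   ```java
   import java.util.Map;

   // Hypothetical sketch of the two config scopes; names and shapes
   // are illustrative, not the hudi-integ-test suite's real API.
   public class ConfigSplitSketch {

     // Top-level config: applies to all nodes in the DAG
     // (base path, index type, metadata-table toggle, ...).
     static final class SuiteConfig {
       final String targetBasePath;
       final boolean enableMetadataOnRead;
       SuiteConfig(String targetBasePath, boolean enableMetadataOnRead) {
         this.targetBasePath = targetBasePath;
         this.enableMetadataOnRead = enableMetadataOnRead;
       }
     }

     // Node-level config: applies only to a single DAG node
     // (e.g. num_records_to_insert, the validation query, ...).
     static final class NodeConfig {
       final Map<String, Object> props;
       NodeConfig(Map<String, Object> props) { this.props = props; }
       long getLong(String key, long dflt) {
         Object v = props.get(key);
         return v == null ? dflt : ((Number) v).longValue();
       }
     }

     public static void main(String[] args) {
       SuiteConfig suite = new SuiteConfig("/tmp/hudi-test-table", true);
       NodeConfig insertNode = new NodeConfig(Map.of("num_records_to_insert", 1000L));
       // Every node reads the same suite-level values...
       System.out.println(suite.enableMetadataOnRead);                    // true
       // ...while node-specific knobs stay local to the node.
       System.out.println(insertNode.getLong("num_records_to_insert", 0)); // 1000
     }
   }
   ```

   The design choice this mirrors: a validation node can pick up `enableMetadataOnRead` from the shared writer config (as in the diff above) without each node redeclaring it, while per-node settings stay out of the global namespace.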
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to