leesf commented on a change in pull request #3083:
URL: https://github.com/apache/hudi/pull/3083#discussion_r654496714
##########
File path:
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/metadata/TestHoodieBackedMetadata.java
##########
@@ -120,46 +120,63 @@ public void testDefaultNoMetadataTable() throws Exception {
assertThrows(TableNotFoundException.class, () -> HoodieTableMetaClient.builder().setConf(hadoopConf).setBasePath(metadataTableBasePath).build());
// Metadata table is not created if disabled by config
+ String firstCommitTime = HoodieActiveTimeline.createNewInstantTime();
try (SparkRDDWriteClient client = new SparkRDDWriteClient(engineContext, getWriteConfig(true, false))) {
- client.startCommitWithTime("001");
- client.insert(jsc.emptyRDD(), "001");
+ client.startCommitWithTime(firstCommitTime);
+ client.insert(jsc.parallelize(dataGen.generateInserts(firstCommitTime, 5)), firstCommitTime);
Review comment:
Should we remove the call to `syncTableMetadata()` in `SparkRDDWriteClient#preWrite`? It will do nothing here, since there is always an in-progress instant.
##########
File path:
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/metadata/TestHoodieBackedMetadata.java
##########
@@ -191,8 +208,9 @@ public void testOnlyValidPartitionsAdded() throws Exception {
final HoodieWriteConfig writeConfig = getWriteConfigBuilder(true, true, false)
.withMetadataConfig(HoodieMetadataConfig.newBuilder().enable(true).withDirectoryFilterRegex(filterDirRegex).build()).build();
- try (SparkRDDWriteClient client = new SparkRDDWriteClient(engineContext, writeConfig)) {
+ try (SparkRDDWriteClient client = new SparkRDDWriteClient(engineContext, writeConfig, true)) {
Review comment:
This calls a `Deprecated` method.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]