nsivabalan commented on code in PR #13533:
URL: https://github.com/apache/hudi/pull/13533#discussion_r2214194802
##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/compact/ITTestHoodieFlinkCompactor.java:
##########
@@ -239,9 +239,9 @@ public void testHoodieFlinkCompactorWithUpgradeAndDowngrade(boolean upgrade) thr
     if (upgrade) {
       metaClient.getTableConfig().setTableVersion(HoodieTableVersion.SIX);
       HoodieTableConfig.update(metaClient.getStorage(), metaClient.getMetaPath(), metaClient.getTableConfig().getProps());
-      new UpgradeDowngrade(metaClient, writeClient.getConfig(), writeClient.getEngineContext(), FlinkUpgradeDowngradeHelper.getInstance()).run(HoodieTableVersion.EIGHT, "none");
+      new UpgradeDowngrade(metaClient, writeClient.getConfig(), writeClient.getEngineContext(), FlinkUpgradeDowngradeHelper.getInstance()).run(HoodieTableVersion.NINE, "none");
Review Comment:
Instead of `HoodieTableVersion.NINE`, shouldn't we use `HoodieTableVersion.current()`? But then we also need to fix the tests properly. This test was written to upgrade from 6 to 8 and then downgrade from 8 back to 6. Now that we have 9, should it become 8 to 9 and 9 back to 8?
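The round trip above could be derived from the latest version instead of hardcoded constants, so the test keeps working across version bumps. A minimal sketch of that idea, using a stand-in enum rather than the real `org.apache.hudi.common.table.HoodieTableVersion` (`previous()` is a hypothetical helper, not an existing API):

```java
public class VersionRoundTripSketch {
  // Stand-in for HoodieTableVersion; only the versions relevant here.
  enum TableVersion {
    SIX(6), EIGHT(8), NINE(9);

    private final int code;
    TableVersion(int code) { this.code = code; }
    int versionCode() { return code; }

    // Mirrors HoodieTableVersion.current(): the latest supported version.
    static TableVersion current() { return values()[values().length - 1]; }

    // Hypothetical helper: the version just before current(), so the test
    // round trip is always (current - 1) -> current -> (current - 1).
    static TableVersion previous() { return values()[values().length - 2]; }
  }

  public static void main(String[] args) {
    TableVersion from = TableVersion.previous(); // EIGHT today
    TableVersion to = TableVersion.current();    // NINE today
    System.out.println("upgrade " + from + " -> " + to + ", then downgrade back");
  }
}
```

With this shape, a future version 10 would only require extending the enum; the test logic itself would not change.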
##########
hudi-utilities/src/test/java/org/apache/hudi/utilities/deltastreamer/TestHoodieDeltaStreamer.java:
##########
@@ -670,15 +670,15 @@ public void testSchemaEvolution(String tableType, boolean useUserProvidedSchema,
   private static Stream<Arguments> continuousModeArgs() {
     return Stream.of(
-        Arguments.of("AVRO", "EIGHT"),
-        Arguments.of("SPARK", "EIGHT"),
+        Arguments.of("AVRO", "NINE"),
Review Comment:
Similar comment as above: use `HoodieTableVersion` instead of string literals.
##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/utils/TestStreamerUtil.java:
##########
@@ -88,7 +88,7 @@ void testInitTableIfNotExists() throws IOException {
     assertArrayEquals(metaClient1.getTableConfig().getPartitionFields().get(), new String[] {"p0", "p1"});
     assertEquals(metaClient1.getTableConfig().getPreCombineField(), "ts");
     assertEquals(metaClient1.getTableConfig().getKeyGeneratorClassName(), SimpleAvroKeyGenerator.class.getName());
-    assertEquals(HoodieTableVersion.EIGHT, metaClient1.getTableConfig().getTableVersion());
+    assertEquals(HoodieTableVersion.NINE, metaClient1.getTableConfig().getTableVersion());
Review Comment:
Let's use `HoodieTableVersion.current()` here.
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/functional/TestMORDataSource.scala:
##########
@@ -98,17 +98,17 @@ class TestMORDataSource extends HoodieSparkClientTestBase with SparkDatasetMixin
   @ParameterizedTest
   @CsvSource(Array(
     // Inferred as COMMIT_TIME_ORDERING
-    "AVRO, AVRO, avro, false,,,EIGHT", "AVRO, SPARK, parquet, false,,,EIGHT",
-    "SPARK, AVRO, parquet, false,,,EIGHT", "SPARK, SPARK, parquet, false,,,EIGHT",
+    "AVRO, AVRO, avro, false,,,CURRENT", "AVRO, SPARK, parquet, false,,,CURRENT",
Review Comment:
We should also think about the combinations we are running here. Previously we were running this for v6 and v8; now we should be running it for 6, 8, and 9. I am not saying every test should be parameterized, but a few functional tests like this one could run against all table versions.
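One way to cover every version without hand-maintaining the rows is to generate the cross product programmatically. A sketch under assumed names (the stand-in enum and `rows()` helper are illustrative; the real test would feed these rows into JUnit's `@CsvSource`/`@MethodSource`):

```java
import java.util.ArrayList;
import java.util.List;

public class VersionMatrixSketch {
  // Stand-in for HoodieTableVersion, listing the versions under test.
  enum TableVersion { SIX, EIGHT, NINE }

  // Cross product of record types and all table versions: 2 x 2 x 3 = 12 rows.
  static List<String> rows() {
    List<String> out = new ArrayList<>();
    for (String readerType : new String[] {"AVRO", "SPARK"}) {
      for (String writerType : new String[] {"AVRO", "SPARK"}) {
        for (TableVersion v : TableVersion.values()) {
          out.add(readerType + ", " + writerType + ", " + v);
        }
      }
    }
    return out;
  }

  public static void main(String[] args) {
    rows().forEach(System.out::println);
  }
}
```

Adding a future version then means adding one enum constant, and every parameterized run picks it up automatically.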
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/procedure/TestUpgradeOrDowngradeProcedure.scala:
##########
@@ -59,10 +59,10 @@ class TestUpgradeOrDowngradeProcedure extends HoodieSparkProcedureTestBase {
     var metaClient = createMetaClient(spark, tablePath)
     // verify hoodie.table.version of the original table
-    assertResult(HoodieTableVersion.EIGHT.versionCode) {
+    assertResult(HoodieTableVersion.NINE.versionCode) {
       metaClient.getTableConfig.getTableVersion.versionCode()
     }
-    assertTableVersionFromPropertyFile(metaClient, HoodieTableVersion.EIGHT.versionCode)
+    assertTableVersionFromPropertyFile(metaClient, HoodieTableVersion.NINE.versionCode)
Review Comment:
Use `HoodieTableVersion.current()` here as well.
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/upgrade/EightToNineUpgradeHandler.java:
##########
@@ -30,6 +31,11 @@ public class EightToNineUpgradeHandler implements UpgradeHandler {
   @Override
   public Map<ConfigProperty, String> upgrade(HoodieWriteConfig config, HoodieEngineContext context,
                                              String instantTime, SupportsUpgradeDowngrade upgradeDowngradeHelper) {
+    // If auto upgrade is disabled, set writer version to 8 and return
+    if (!config.autoUpgrade()) {
+      config.setValue(HoodieWriteConfig.WRITE_TABLE_VERSION, String.valueOf(HoodieTableVersion.EIGHT.versionCode()));
+      return Collections.emptyMap();
+    }
Review Comment:
Why are we not triggering rollbacks and full compaction when upgrading to v9?
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/others/TestMergeModeCommitTimeOrdering.scala:
##########
@@ -31,7 +31,7 @@ import org.apache.spark.sql.hudi.common.HoodieSparkSqlTestBase.validateTableConf
 class TestMergeModeCommitTimeOrdering extends HoodieSparkSqlTestBase {
   Seq(
-    "cow,8,false,false", "cow,8,false,true", "cow,8,true,false",
+    "cow,9,false,false", "cow,8,false,true", "cow,9,true,false",
Review Comment:
For these test parameters, can we take `HoodieTableVersion` as an argument? Then we can set it to `HoodieTableVersion.current()` here, and in the validations we can leverage `HoodieTableVersion.current().versionCode`. With this change we won't need to keep fixing these tests when we upgrade to v10; they should just work out of the box.
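A sketch of that suggestion: building the parameter strings from the current version rather than hardcoding `8`/`9`. The stand-in enum and `params` helper below are illustrative assumptions, not the real `HoodieTableVersion` or the actual test helper:

```java
public class MergeModeParamsSketch {
  // Stand-in for HoodieTableVersion; current() mirrors the real API's
  // "latest supported version" semantics.
  enum TableVersion {
    SIX(6), EIGHT(8), NINE(9);

    private final int code;
    TableVersion(int code) { this.code = code; }
    int versionCode() { return code; }
    static TableVersion current() { return NINE; }
  }

  // Hypothetical helper: derives the "tableType,version,flag,flag" rows
  // from current(), so a bump to v10 changes the rows automatically.
  static String[] params(String tableType) {
    int v = TableVersion.current().versionCode();
    return new String[] {
        tableType + "," + v + ",false,false",
        tableType + "," + v + ",false,true",
        tableType + "," + v + ",true,false",
    };
  }

  public static void main(String[] args) {
    for (String p : params("cow")) {
      System.out.println(p); // e.g. cow,9,false,false
    }
  }
}
```

The validations would compare against `TableVersion.current().versionCode()` the same way, keeping asserts and parameters in sync.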
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]