vinothchandar commented on a change in pull request #1650:
URL: https://github.com/apache/incubator-hudi/pull/1650#discussion_r429399564
##########
File path: hudi-client/src/main/java/org/apache/hudi/table/action/rollback/MergeOnReadRollbackActionExecutor.java
##########
@@ -124,7 +124,7 @@ public MergeOnReadRollbackActionExecutor(JavaSparkContext jsc,
       case HoodieTimeline.COMMIT_ACTION:
         LOG.info("Rolling back commit action. There are higher delta commits. So only rolling back this instant");
         partitionRollbackRequests.add(
-            RollbackRequest.createRollbackRequestWithDeleteDataAndLogFilesAction(partitionPath, instantToRollback));
+            RollbackRequest.createRollbackRequestWithDeleteBaseAndLogFilesAction(partitionPath, instantToRollback));
Review comment:
   There is no rollback plan that gets serialized here, so this should be fine.
##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/view/RemoteHoodieTableFileSystemView.java
##########
@@ -71,18 +71,18 @@
   public static final String PENDING_COMPACTION_OPS = String.format("%s/%s", BASE_URL, "compactions/pending/");
-  public static final String LATEST_PARTITION_DATA_FILES_URL = String.format("%s/%s", BASE_URL, "datafiles/latest/partition");
-  public static final String LATEST_PARTITION_DATA_FILE_URL = String.format("%s/%s", BASE_URL, "datafile/latest/partition");
-  public static final String ALL_DATA_FILES = String.format("%s/%s", BASE_URL, "datafiles/all");
-  public static final String LATEST_ALL_DATA_FILES = String.format("%s/%s", BASE_URL, "datafiles/all/latest/");
-  public static final String LATEST_DATA_FILE_ON_INSTANT_URL = String.format("%s/%s", BASE_URL, "datafile/on/latest/");
-
-  public static final String LATEST_DATA_FILES_RANGE_INSTANT_URL = String.format("%s/%s", BASE_URL, "datafiles/range/latest/");
-  public static final String LATEST_DATA_FILES_BEFORE_ON_INSTANT_URL = String.format("%s/%s", BASE_URL, "datafiles/beforeoron/latest/");
+  public static final String LATEST_PARTITION_BASE_FILES_URL = String.format("%s/%s", BASE_URL, "basefiles/latest/partition");
Review comment:
   Since a job is deployed in one shot, this may be okay; there is no need to support backwards compatibility for these endpoints.
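To make the compatibility point above concrete, here is a minimal sketch (the class, method, and path values are illustrative, not the actual Hudi embedded timeline-server code): both sides build the path from the same constant, so when client and server redeploy together there is no cross-version wire contract to preserve.

```java
// A minimal sketch (hypothetical names, not the actual Hudi timeline-server
// code) of why the endpoint rename is safe when client and server are
// deployed in one shot: both sides build the path from the SAME constant,
// so no old-path request can ever reach a new-path server.
public class EndpointRenameSketch {
    static final String BASE_URL = "/v1/hoodie/view"; // illustrative base path
    static final String LATEST_PARTITION_BASE_FILES_URL =
        String.format("%s/%s", BASE_URL, "basefiles/latest/partition");

    // Hypothetical server-side check, keyed by the shared constant.
    static boolean serverHandles(String path) {
        return LATEST_PARTITION_BASE_FILES_URL.equals(path);
    }

    public static void main(String[] args) {
        // The client builds its request from the same constant the server registered.
        System.out.println(serverHandles(LATEST_PARTITION_BASE_FILES_URL)); // prints "true"
    }
}
```

A rolling upgrade, by contrast, would need the old `datafiles/...` paths kept alive alongside the new ones.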
##########
File path: hudi-common/src/main/java/org/apache/hudi/common/util/RocksDBSchemaHelper.java
##########
@@ -78,12 +78,12 @@ public String getPrefixForSliceViewByPartitionFile(String partitionPath, String
     return String.format("type=slice,part=%s,id=%s,instant=", partitionPath, fileId);
   }
-  public String getPrefixForDataFileViewByPartitionFile(String partitionPath, String fileId) {
-    return String.format("type=df,part=%s,id=%s,instant=", partitionPath, fileId);
+  public String getPrefixForBaseFileViewByPartitionFile(String partitionPath, String fileId) {
+    return String.format("type=bf,part=%s,id=%s,instant=", partitionPath, fileId);
Review comment:
   Hmmm. This should be fine, since we rebuild the RocksDB cache every time we deploy? @bvaradar ?
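A sketch of why the `type=df` to `type=bf` key rename only matters if an existing RocksDB instance were reused across the upgrade (a sorted `TreeMap` stands in for RocksDB prefix scans here; the key layout mirrors `RocksDBSchemaHelper`, everything else is hypothetical):

```java
import java.util.Map;
import java.util.TreeMap;

// A sketch of the "type=df" -> "type=bf" concern: a prefix scan under the
// new prefix cannot see entries written under the old one. A sorted TreeMap
// stands in for RocksDB; the key layout mirrors RocksDBSchemaHelper's format.
public class PrefixRenameSketch {
    // Emulate a RocksDB prefix scan with a sorted-map range query.
    static Map<String, String> prefixScan(TreeMap<String, String> db, String prefix) {
        return db.subMap(prefix, prefix + Character.MAX_VALUE);
    }

    public static void main(String[] args) {
        TreeMap<String, String> db = new TreeMap<>();
        // Entry written by the OLD code under the old "df" prefix.
        db.put("type=df,part=2020/05/20,id=file1,instant=001", "file-slice-entry");

        // The NEW code scans with the renamed prefix and sees nothing...
        System.out.println(prefixScan(db, "type=bf,part=2020/05/20").size()); // prints "0"
        // ...which is harmless only because the view is rebuilt from scratch on deploy.
    }
}
```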
##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/view/RocksDbBasedFileSystemView.java
##########
@@ -167,15 +167,15 @@ protected void storePartitionView(String partitionPath, List<HoodieFileGroup> fi
     rocksDB.prefixDelete(schemaHelper.getColFamilyForView(),
         schemaHelper.getPrefixForSliceViewByPartition(partitionPath));
     rocksDB.prefixDelete(schemaHelper.getColFamilyForView(),
-        schemaHelper.getPrefixForDataFileViewByPartition(partitionPath));
+        schemaHelper.getPrefixForBaseFileViewByPartition(partitionPath));
Review comment:
   Another thing to ensure: no RocksDB-level schema changes due to this. Seems fine.
##########
File path: hudi-cli/src/main/java/org/apache/hudi/cli/commands/CompactionCommand.java
##########
@@ -370,7 +370,7 @@ private String printCompaction(HoodieCompactionPlan compactionPlan,
     List<Comparable[]> rows = new ArrayList<>();
     if ((null != compactionPlan) && (null != compactionPlan.getOperations())) {
       for (HoodieCompactionOperation op : compactionPlan.getOperations()) {
-        rows.add(new Comparable[]{op.getPartitionPath(), op.getFileId(), op.getBaseInstantTime(), op.getDataFilePath(),
+        rows.add(new Comparable[]{op.getPartitionPath(), op.getFileId(), op.getBaseInstantTime(), op.getBaseFilePath(),
Review comment:
   cc @bvaradar: is this field name serialized with the plan? If so, we can't change it easily.
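A sketch of that serialization concern (a plain `Map` stands in for the Avro record in the persisted compaction plan; the accessor name is hypothetical, and real Avro would need a schema alias to bridge such a field rename):

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the concern: if the field name travels with the persisted
// compaction plan (as it would in an Avro record), a plan written by OLD
// code carries "dataFilePath", and a reader that only knows the renamed
// field finds nothing. A Map stands in for the Avro record here.
public class FieldRenameSketch {
    // Hypothetical new-code accessor that only knows the renamed field.
    static Object readBaseFilePath(Map<String, Object> serializedOperation) {
        return serializedOperation.get("baseFilePath");
    }

    public static void main(String[] args) {
        Map<String, Object> oldOperation = new HashMap<>();
        oldOperation.put("dataFilePath", "2020/05/20/file1_001.parquet"); // old field name

        System.out.println(readBaseFilePath(oldOperation)); // prints "null"
    }
}
```

In real Avro, adding `"aliases": ["dataFilePath"]` to the renamed schema field is the standard way to keep old serialized plans readable.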
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]