vinothchandar commented on a change in pull request #4333:
URL: https://github.com/apache/hudi/pull/4333#discussion_r784094553
##########
File path:
hudi-common/src/main/java/org/apache/hudi/common/table/HoodieTableConfig.java
##########
@@ -125,6 +126,11 @@
.withAlternatives("hoodie.table.rt.file.format")
.withDocumentation("Log format used for the delta logs.");
+ public static final ConfigProperty<String> LOG_BLOCK_TYPE = ConfigProperty
Review comment:
Yes, it will be a write config. We want to facilitate newer file groups being
written out as avro from, say, Kafka Connect, while a delete job adds bulk
deletes/updates into log blocks in parquet.
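For context, a minimal standalone sketch of the resolution order such a per-writer config could enable: an explicit write-config value wins, otherwise the block type falls back to the table's base file format. The class and method names below are illustrative only, not Hudi's actual API.

```java
// Hypothetical sketch (not Hudi's API): resolving the log data block type
// from a per-writer config instead of the table config, so different writers
// on the same table can emit different block types (e.g. Kafka Connect ->
// avro data blocks, a bulk-delete job -> parquet data blocks).
enum LogBlockType { AVRO_DATA_BLOCK, HFILE_DATA_BLOCK, PARQUET_DATA_BLOCK }

class LogBlockTypeResolver {
    /**
     * @param writeConfigValue per-writer override; null means "not set".
     * @param baseFileFormat   the table's base file format name.
     */
    static LogBlockType resolve(String writeConfigValue, String baseFileFormat) {
        if (writeConfigValue != null) {
            // The writer explicitly chose a block type.
            return LogBlockType.valueOf(writeConfigValue);
        }
        // Default: match the table's base file format.
        switch (baseFileFormat) {
            case "PARQUET": return LogBlockType.PARQUET_DATA_BLOCK;
            case "HFILE":   return LogBlockType.HFILE_DATA_BLOCK;
            default:        return LogBlockType.AVRO_DATA_BLOCK;
        }
    }
}
```

With this shape, a Kafka Connect writer that sets nothing gets avro blocks on a parquet table only if it overrides the default, while a delete job can pass `"PARQUET_DATA_BLOCK"` explicitly; the table config is never consulted.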
##########
File path:
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/HoodieTable.java
##########
@@ -719,7 +720,13 @@ public HoodieFileFormat getLogFileFormat() {
return metaClient.getTableConfig().getLogFileFormat();
}
- public HoodieLogBlockType getLogDataBlockFormat() {
+ public HoodieLogBlockType getLogDataBlockType() {
+ HoodieLogBlock.HoodieLogBlockType logBlockType = metaClient.getTableConfig().getLogBlockFormat();
Review comment:
We need to move the log block format away from the table config.
##########
File path:
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieAppendHandle.java
##########
@@ -497,4 +489,32 @@ private void flushToDiskIfRequired(HoodieRecord record) {
numberOfRecords = 0;
}
}
+
+ protected void appendDataAndDeleteBlocks(Map<HeaderMetadataType, String> header) {
Review comment:
@alexeykudinkin can you call out what has changed in this block of code?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]