yihua commented on code in PR #6157:
URL: https://github.com/apache/hudi/pull/6157#discussion_r927760952
##########
hudi-common/src/main/java/org/apache/hudi/common/table/log/HoodieLogFormatWriter.java:
##########
@@ -94,7 +94,8 @@ private FSDataOutputStream getOutputStream() throws IOException, InterruptedExce
     Path path = logFile.getPath();
     if (fs.exists(path)) {
       boolean isAppendSupported = StorageSchemes.isAppendSupported(fs.getScheme());
-      if (isAppendSupported) {
+      boolean needRollOverToNewFile = fs.getFileStatus(path).getLen() > sizeThreshold;
+      if (isAppendSupported && !needRollOverToNewFile) {
Review Comment:
The max data block size and log file size are configured by
`hoodie.logfile.data.block.max.size` (256MB) and `hoodie.logfile.max.size`
(1GB) respectively, and size-based rollover already happens in
`appendBlocks`. Can you check that logic? I'm not sure the change here is
needed.
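
To make the discussion concrete, here is a minimal, standalone sketch (not Hudi's actual implementation) of the size-based rollover decision the diff adds: roll over to a new log file once the current file's length exceeds the configured threshold. The class and method names are hypothetical; the 1GB constant mirrors the `hoodie.logfile.max.size` default mentioned above.

```java
// Hypothetical sketch of the rollover check in the diff above;
// not the actual HoodieLogFormatWriter code.
public class RolloverCheck {
  // Mirrors the hoodie.logfile.max.size default (1GB) for illustration.
  static final long DEFAULT_MAX_LOG_FILE_SIZE = 1024L * 1024L * 1024L;

  // True when the existing file already exceeds the size threshold,
  // i.e. the next block should be appended to a new log file instead.
  static boolean needRollOverToNewFile(long currentFileLen, long sizeThreshold) {
    return currentFileLen > sizeThreshold;
  }

  public static void main(String[] args) {
    long threshold = DEFAULT_MAX_LOG_FILE_SIZE;
    System.out.println(needRollOverToNewFile(threshold + 1, threshold)); // true
    System.out.println(needRollOverToNewFile(threshold, threshold));     // false
  }
}
```

The open question in this review is whether this pre-append check is redundant with the rollover already performed inside `appendBlocks`.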
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]