nsivabalan edited a comment on pull request #3865:
URL: https://github.com/apache/hudi/pull/3865#issuecomment-965480577


   @vinothchandar @bvaradar : need your help here. This patch avoids a 
direct fs call within the log file reader. The call of interest is used when 
detecting the schema from a log file: we read the file in reverse, and so we do 
`fs.getFileStatus(logFile).getLen()`. I traced the call back to 
HoodieRealtimeInputFormatUtils, where we have a file slice from which we get 
the log files. At that point I fetch the log file length and carry that info 
all the way to RealtimeSplit, where it is used in LogReaderUtils and 
HoodieLogFileReader. 
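   To illustrate the idea (a minimal self-contained sketch, not Hudi's actual classes — `readTail` and the threading of `lengthFromListing` are hypothetical names): if the split already carries the file's length captured once at listing time, the reader can seek straight to the tail for reverse reading without issuing a fresh `getFileStatus` metadata call on every open.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TailReaderSketch {

    // Read the last `tailBytes` of the file using a length supplied by the
    // caller (e.g. captured when the file slice was listed), rather than
    // asking the filesystem for the length again at open time.
    static String readTail(Path file, long knownLength, int tailBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            long start = Math.max(0, knownLength - tailBytes);
            raf.seek(start); // jump straight to the tail, no metadata call needed
            byte[] buf = new byte[(int) (knownLength - start)];
            raf.readFully(buf);
            return new String(buf, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("log", ".data");
        byte[] data = "header|block1|block2|SCHEMA".getBytes(StandardCharsets.UTF_8);
        Files.write(tmp, data);
        // The length is recorded once at listing time and threaded through,
        // mirroring how this PR carries the size from the file slice into the split.
        long lengthFromListing = data.length;
        System.out.println(readTail(tmp, lengthFromListing, 6)); // prints "SCHEMA"
        Files.delete(tmp);
    }
}
```

   Of course, this only works if the length carried forward is correct — which is exactly what breaks when the HoodieLogFile reports a size of 0, as described below.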
   
   This fix worked well until we recently landed [mor-incremental 
support](https://github.com/apache/hudi/commit/a40ac62e0ce7ec807a00803d23ed223d7c607459)
 for hive. If not for that recent patch, this PR was in good shape. 
Specifically, 
   the [HoodieLogFile from the 
fileSlice](https://github.com/apache/hudi/blob/2d362af00ae61ab76ff45fd83f142d3af4bd0909/hudi-hadoop-mr/src/main/java/org/apache/hudi/hadoop/realtime/HoodieParquetRealtimeInputFormat.java#L201)
 has its file size set to 0. My assumption was that a HoodieLogFile object 
will always have its file size set. 
   So, my question is: is there a bug somewhere on that end, or can we not 
assume the length will always be set for a HoodieLogFile? In other words, when 
we do getLatestFileSlice().getLogFiles(), can we assume each HoodieLogFile 
will have its file size set appropriately? Greatly appreciate pointers from 
you folks on this. 
    

