weitianpei commented on issue #11090:
URL: https://github.com/apache/hudi/issues/11090#issuecomment-2097307624

Small files are continuously merged in the background until they reach 600 MB. If we do this, are you sure the downstream program won't end up missing data or reading the same data repeatedly?
   
For example, suppose my program reads a newly generated file that is only 30 MB. After a while, this file is merged with other files into a 600 MB file. When my program later reads that large file, won't it read duplicate data?
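For context, a minimal sketch of what such a downstream read might look like if the consumer were a Spark job using Hudi's incremental query. The table path, begin instant, and the choice of Spark are assumptions for illustration only; they are not taken from this issue:

```scala
import org.apache.spark.sql.SparkSession

object IncrementalReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hudi-incremental-read-sketch")
      .getOrCreate()

    // Hypothetical table path and last-seen commit instant, only for illustration.
    val basePath = "/tmp/hudi/my_table"
    val lastSeenInstant = "20240501000000000"

    // Incremental query: ask Hudi for records committed after the given instant time.
    val incDf = spark.read.format("hudi")
      .option("hoodie.datasource.query.type", "incremental")
      .option("hoodie.datasource.read.begin.instanttime", lastSeenInstant)
      .load(basePath)

    incDf.show(false)
  }
}
```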

