weitianpei commented on issue #11090:
URL: https://github.com/apache/hudi/issues/11090#issuecomment-2097301076

   Small files will keep getting compacted in the background until they reach 600 MB. Are you sure that doing this will not cause downstream programs to miss data or read it twice?
   
   
   > On May 7, 2024, at 10:14, Danny Chan ***@***.***> wrote:
   > 
   > 
   > We already have tests in the repo for clustering- and compaction-skipping reads. Can you ensure the option takes effect, and increase the number of retained commits before cleaning with the option clean.retain_commits?
   > 
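   For anyone hitting the same issue, a minimal sketch of the suggestion above as a Flink SQL table definition. The table name, schema, path, and the value 30 are all illustrative placeholders, not recommendations; the key point is raising `clean.retain_commits` so that commits survive long enough for downstream readers.

   ```sql
   -- Illustrative Hudi sink table; tune 'clean.retain_commits' to cover the
   -- longest expected lag of any incremental/streaming reader, so the cleaner
   -- does not remove file slices that a downstream job still needs.
   CREATE TABLE hudi_sink (
     id   BIGINT,
     name STRING,
     ts   TIMESTAMP(3),
     PRIMARY KEY (id) NOT ENFORCED
   ) WITH (
     'connector' = 'hudi',
     'path' = 'hdfs:///tmp/hudi_sink',          -- placeholder path
     'table.type' = 'MERGE_ON_READ',
     'clean.retain_commits' = '30'              -- illustrative value; increase if readers lag further behind
   );
   ```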
   
   

