On Sun, Oct 20, 2024 at 02:30:43PM -0700, Linus Torvalds wrote:
> On Sun, 20 Oct 2024 at 14:29, Kent Overstreet <[email protected]> wrote:
> >
> > That's the same as limiting the amount of dirty metadata.
> 
> Exactly.
> 
> We limit dirty data for a reason. That was what I said, and that was my point.
> 
> Excessive dirty data is a huge latency concern. It's a latency concern
> not just for journal replay, it's a latency concern for anybody who
> then does a "sync" or whatever.

And my counterpoint is that on a huge filesystem, limiting dirty
metadata forces it to be written out as a whole bunch of tiny random
writes instead of letting those writes be batched up. That's horrible
for overall throughput.
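
To put a rough number on it, here's a toy model (emphatically not the
actual bcachefs writeback path; the working set size, op count, and
limits are all made up for illustration). It models only one half of
the batching win: with a bigger dirty limit, a hot btree node that gets
modified over and over is written back once instead of once per flush
cycle. It doesn't even model the other half, that a big batch can be
sorted into mostly-sequential I/O:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WORKING_SET     4096      /* distinct btree nodes being modified */
#define NR_OPS          (1 << 20) /* total dirtying operations */

/*
 * Run NR_OPS random modifications; re-dirtying an already-dirty node
 * is free, but once @limit distinct nodes are dirty, flush them all,
 * one write each.  Return the total number of writes issued.
 */
static unsigned long writes_for_limit(unsigned limit)
{
        static unsigned char dirty[WORKING_SET];
        unsigned long writes = 0;
        unsigned nr_dirty = 0;
        unsigned i;

        memset(dirty, 0, sizeof(dirty));
        srand(1);               /* same workload for every limit */

        for (i = 0; i < NR_OPS; i++) {
                unsigned node = rand() % WORKING_SET;

                if (dirty[node])
                        continue;

                dirty[node] = 1;
                if (++nr_dirty == limit) {
                        writes += nr_dirty;
                        memset(dirty, 0, sizeof(dirty));
                        nr_dirty = 0;
                }
        }
        return writes + nr_dirty;       /* final flush */
}

int main(void)
{
        unsigned limits[] = { 16, 128, 1024, 4096 };
        unsigned i;

        for (i = 0; i < sizeof(limits) / sizeof(limits[0]); i++)
                printf("dirty limit %4u -> %8lu writes\n",
                       limits[i], writes_for_limit(limits[i]));
        return 0;
}

With a tight limit nearly every modification ends up as its own write;
let the limit cover the working set and repeated hits on the same node
collapse into one write per flush cycle, an order of magnitude fewer
writes in this (made up) workload.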
