On Wed, Feb 19, 2025 at 6:03 PM Thomas Munro <thomas.mu...@gmail.com> wrote:
>
> On Fri, Feb 14, 2025 at 9:32 AM Andres Freund <and...@anarazel.de> wrote:
> > I think we'll need to add some logic in read stream that only disables
> > advice after a longer sequential sequence. Writing logic for that
> > shouldn't be too hard, I think? Determining the concrete cutoffs is
> > probably harder, although I think even fairly simplistic logic will be
> > "good enough".
>
> (Sorry for taking time to respond, I had to try some bogus things
> first before I hit on a principled answer.)
>
> Yeah, it is far too stupid.  I think I have figured out the ideal
> cutoff: just keep issuing advice for a given sequential run of blocks
> until the pread() end of the stream catches up with the *start* of it,
> if ever (ie until the kernel sees the actual sequential reads).

So, for example: fadvise blocks 2,3,4,5, but then once we see pread of
block 2, stop fadvising for that run of blocks?
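
To double-check my mental model, here's a minimal sketch of that rule;
the names are invented for illustration and aren't meant to match the
actual read_stream.c code:

/*
 * Hypothetical sketch, not the real patch: keep issuing advice for the
 * current sequential run of blocks until the pread() end of the stream
 * reaches the run's first block.  After that the kernel has seen real
 * sequential reads there, so its own readahead should take over and we
 * stop advising for that run.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t BlockNumber;   /* stand-in for PostgreSQL's BlockNumber */

static inline bool
keep_advising(BlockNumber run_start_block,   /* first block of current run */
              BlockNumber next_pread_block)  /* next block to be pread() */
{
    return next_pread_block < run_start_block;
}

If that's roughly the shape of it, I'm guessing the one new variable you
mentioned is something like run_start_block.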

> That turned out to require only a small tweak and one new variable.  It
> avoids those stalls on reads of sequential clusters > io_combine_limit,
> with no change in behaviour for pure sequential streams and random
> streams containing sequential clusters <= io_combine_limit that I'd
> previously been fixating on.

Hmm. My understanding must be incorrect, because I don't see how this
would behave differently for these two I/O patterns:

fadvise 2,3,4,5, pread 2

fadvise 2,100,717,999, pread 2

> I've added a
> patch for that to the v2 of my read stream improvements series[1] for
> experimentation... will post shortly.

I don't think you've posted those yet. I'd like to confirm that these
work as expected before we merge the bitmap heap scan code.

> I also see a couple of other reasons why streaming BHS can be less
> aggressive with I/O than master: the silly arbitrary buffer queue cap
> (there's already a patch in the series for that, but I have a slightly
> better one now and plan to commit it today), and silly heuristics for
> look-ahead distance reduction, also related to sequential detection,
> which I'll explain with a patch on the other thread.  All of these are
> cases where I was basically a bit too chicken to open the throttle all
> the way in early versions.  Will post those at [1] too after lunch...
> more soon...

I'd like to hear more about the buffer queue cap, as I can't say I
remember what that is.

I'd also be interested to see how these other patches affect BHS
performance.

Other than that, we need to decide on effective_io_concurrency
defaults, which have also been discussed on this thread.

- Melanie

