Andres Freund <and...@anarazel.de> writes:
> On 2022-03-22 16:55:49 -0400, Tom Lane wrote:
>> BTW, I've had a bee in my bonnet for a long time about whether some of
>> nbtree's scan setup work could be done once during planning, rather than
>> over again during each indexscan start.

> It does show up in simple-index-lookup heavy workloads. Not as a major thing,
> but it's there. And it's just architecturally displeasing :)

> Are you thinking of just moving the setup stuff in nbtree (presumably parts of
> _bt_first() / _bt_preprocess_keys()) or also stuff in
> ExecIndexBuildScanKeys()?

Didn't really have specifics in mind.  The key stumbling block is that
some (not all) of the work depends on knowing the specific values of
the indexqual comparison keys, so while you could do that work in
advance for constant keys, you'd still have to be prepared to do work
at scan start for non-constant keys.  I don't have a clear idea about
how to factorize that effectively.

A couple of other random ideas in this space:

* I suspect that a lot of this work overlaps with the efforts that
btcostestimate makes along the way to getting a cost estimate.
So it's interesting to wonder whether we could refactor so that
btcostestimate is integrated with this hypothetical plan-time
key preprocessing and doesn't duplicate work.

* I think that we run through most or all of that preprocessing
logic even for internal catalog accesses, where we know darn well
how the keys are set up.  We ought to think harder about how we
could short-circuit pointless work in those code paths.

I don't think any of this is an essential prerequisite to getting
something done for loose index scans, which ISTM ought to be the
first point of attack for v16.  Loose index scans per se shouldn't
add much to the key preprocessing costs.  But these ideas likely
would be useful to look into before anyone starts on the more
complicated preprocessing that would be needed for the ideas in
the MDAM paper.

			regards, tom lane
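
To make the factorization question a bit more concrete, here is a minimal
toy sketch in plain C.  It is not nbtree or executor code and every type
and function name in it is hypothetical; it only illustrates the split
described above: per-key work that needs nothing beyond a constant
comparison value could be done once up front, while keys whose values come
from runtime parameters would still need a pass at each scan start.

/*
 * Toy model of splitting index-scan key preprocessing into a plan-time
 * phase and a scan-start phase.  All names here are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { KEY_CONSTANT, KEY_PARAM } KeySource;

typedef struct ScanKeySketch
{
    KeySource   source;        /* is the comparison value known at plan time? */
    int         value;         /* valid only once the value is known */
    bool        preprocessed;  /* has per-key work been done yet? */
} ScanKeySketch;

/*
 * Stand-in for the work that needs the actual comparison value
 * (e.g. deciding how to position the initial descent of the scan).
 */
static void
preprocess_one_key(ScanKeySketch *key)
{
    key->preprocessed = true;
    printf("preprocessed key with value %d\n", key->value);
}

/* Plan-time phase: handle only keys whose values are constants. */
static void
preprocess_at_plan_time(ScanKeySketch *keys, int nkeys)
{
    for (int i = 0; i < nkeys; i++)
        if (keys[i].source == KEY_CONSTANT)
            preprocess_one_key(&keys[i]);
}

/* Scan-start phase: bind runtime parameter values, finish the rest. */
static void
preprocess_at_scan_start(ScanKeySketch *keys, int nkeys, const int *params)
{
    for (int i = 0; i < nkeys; i++)
    {
        if (keys[i].preprocessed)
            continue;           /* constant key: work was done once already */
        keys[i].value = params[i];
        preprocess_one_key(&keys[i]);
    }
}

int
main(void)
{
    /* One constant key (x = 42) and one parameter key (y = $1). */
    ScanKeySketch keys[2] = {
        { KEY_CONSTANT, 42, false },
        { KEY_PARAM,     0, false },
    };
    int         params[2] = { 0, 7 };   /* $1 = 7, bound at executor startup */

    preprocess_at_plan_time(keys, 2);            /* done once, at planning */
    preprocess_at_scan_start(keys, 2, params);   /* repeated per scan start */
    return 0;
}

The real factorization would of course be messier than this, since the
existing preprocessing reasons about interactions among keys (redundant
and contradictory quals) rather than one key at a time; the sketch only
shows the constant-versus-parameter split that looks hoistable into
planning.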