On 12/21/23 07:49, Dilip Kumar wrote:
> On Wed, Dec 20, 2023 at 7:11 AM Tomas Vondra
> <tomas.von...@enterprisedb.com> wrote:
>>
> I was going through the patch to understand the idea, and I have a
> couple of observations.
> 
> --
> + for (int i = 0; i < PREFETCH_LRU_SIZE; i++)
> + {
> +     entry = &prefetch->prefetchCache[lru * PREFETCH_LRU_SIZE + i];
> +
> +     /* Is this the oldest prefetch request in this LRU? */
> +     if (entry->request < oldestRequest)
> +     {
> +         oldestRequest = entry->request;
> +         oldestIndex = i;
> +     }
> +
> +     /*
> +      * If the entry is unused (identified by request being set to 0),
> +      * we're done. Notice the field is uint64, so empty entry is
> +      * guaranteed to be the oldest one.
> +      */
> +     if (entry->request == 0)
> +         continue;
> 
> If 'entry->request == 0', then we should break instead of continue, right?
> 

Yes, I think that's true. The small LRU caches are accessed/filled
linearly, so once we find an empty entry, all following entries are
going to be empty too.
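
In other words, something like this - just a minimal sketch of the fix,
keeping the variables and structure from the snippet quoted above:

    for (int i = 0; i < PREFETCH_LRU_SIZE; i++)
    {
        entry = &prefetch->prefetchCache[lru * PREFETCH_LRU_SIZE + i];

        /* Is this the oldest prefetch request in this LRU? */
        if (entry->request < oldestRequest)
        {
            oldestRequest = entry->request;
            oldestIndex = i;
        }

        /*
         * Unused entry (request == 0) - the cache is filled linearly,
         * so all the remaining entries are empty too, and this one is
         * already recorded as the oldest. No point in looking further.
         */
        if (entry->request == 0)
            break;
    }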

I thought this wouldn't make any difference, because the LRUs are very
small (only 8 entries, and I don't think we should make them larger),
and the extra iterations go away once the cache fills up anyway. But now
that I think about it, maybe this could matter for small queries that
only ever hit a couple of rows. Hmmm, I'll have to check.

Thanks for noticing this!

> ---
> /*
>  * Used to detect sequential patterns (and disable prefetching).
>  */
> #define PREFETCH_QUEUE_HISTORY 8
> #define PREFETCH_SEQ_PATTERN_BLOCKS 4
> 
> If we only look at 4 blocks to detect sequential patterns, why are we
> maintaining a history of 8 blocks?
> 
> ---

Right, I think there's no reason to keep these two separate constants. I
believe this is a remnant from an earlier patch version which tried to
do something smarter, but I ended up abandoning that.
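
I'll probably just collapse them into a single constant in the next
version, roughly like this (only a sketch - whether the right value ends
up being 4 or 8 still needs a closer look):

    /*
     * How many recent blocks to track in the queue history, and also how
     * many of them need to be sequential to disable prefetching.
     */
    #define PREFETCH_QUEUE_HISTORY    8

with PREFETCH_QUEUE_HISTORY used in the pattern-detection loop instead
of PREFETCH_SEQ_PATTERN_BLOCKS.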

> 
> + *
> + * XXX Perhaps this should be tied to effective_io_concurrency somehow?
> + *
> + * XXX Could it be harmful that we read the queue backwards? Maybe memory
> + * prefetching works better for the forward direction?
> + */
> + for (int i = 1; i < PREFETCH_SEQ_PATTERN_BLOCKS; i++)
> 
> Correct, I think if we read this forward it will benefit from memory
> prefetching.
> 

OK, although we only really have a couple of uint32 values here, so they
likely all fit on the same cache line anyway.
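
Still, reading the history forward is easy enough. Roughly something
like this - just an illustration of the forward direction, the queue
accessor here is made up and not the actual patch code:

    bool        is_sequential = true;
    BlockNumber prev_block = recent_block(prefetch, 0);    /* hypothetical accessor */

    for (int i = 1; i < PREFETCH_SEQ_PATTERN_BLOCKS; i++)
    {
        BlockNumber block = recent_block(prefetch, i);     /* hypothetical accessor */

        /* not a consecutive block, so not a sequential pattern */
        if (block != prev_block + 1)
        {
            is_sequential = false;
            break;
        }

        prev_block = block;
    }

    /* if is_sequential, skip prefetching for this request */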


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

