The attached patch is a new version based on v3 (not including Andrei's
test case). There is no need to call datumCopy when
isnull is true.

I have not added a new runtime memory context so far and continue to use
mstate->tableContext; I'm not sure whether the memory used by the probeslot
will count against mstate->mem_limit.
Maybe adding a new memory context would be better. I think I should spend a
little time studying nodeMemoize.c more deeply.

Andrei Lepikhov <a.lepik...@postgrespro.ru> wrote on Mon, Feb 26, 2024 at 20:29:

> On 26/2/2024 18:34, Richard Guo wrote:
> >
> > On Mon, Feb 26, 2024 at 3:54 PM Andrei Lepikhov
> > <a.lepik...@postgrespro.ru <mailto:a.lepik...@postgrespro.ru>> wrote:
> >
> >     On 26/2/2024 12:44, Tender Wang wrote:
> >      > Makes sense. I found MemoizeState already has a MemoryContext, so
> >     I used it.
> >      > I updated the patch.
> >     This approach is better for me. In the next version of this patch, I
> >     included a test case. I am still unsure about the context chosen and
> >     the
> >     stability of the test case. Richard, you recently fixed some Memoize
> >     issues, could you look at this problem and patch?
> >
> >
> > I looked at this issue a bit.  It seems to me what happens is that at
> > first the memory areas referenced by probeslot->tts_values[] are
> > allocated in the per tuple context (see prepare_probe_slot).  And then
> > in MemoizeHash_hash, after we've calculated the hashkey, we will reset
> > the per tuple context.  However, later in MemoizeHash_equal, we still
> > need to reference the values in probeslot->tts_values[], which have been
> > cleared.
> Agreed.
> >
> > Actually the context would always be reset in MemoizeHash_equal, for
> > both binary and logical mode.  So I kind of wonder if it's necessary to
> > reset the context in MemoizeHash_hash.
> I can only offer one thought against this solution: what if we have a
> lot of unique hash values, maybe all of them? In that case, we would
> still have the kind of 'leak' David fixed in commit 0b053e78b5.
> Also, I have a segfault report from one client. As far as I can tell,
> it was caused by an overly long text column in the table slot: the key
> value stored in the Memoize hash table was corrupted, and the most
> plausible cause is this bug. Should we add a test for this bug, and
> what do you think about the one proposed in v3?
>
> --
> regards,
> Andrei Lepikhov
> Postgres Professional
>
>

-- 
Tender Wang
OpenPie:  https://en.openpie.com/

Attachment: v4-0001-Fix-RangeType-oid-not-found-when-doing-Memoize.patch
Description: Binary data
