Thanks for testing!
> Why is it necessary to allocate a new dsa_pointer for tranche names that are
> the same size and then
> free the old one?
> Is there a reason we can't just assign new_ptrs[i] = old_ptrs[i]?
Fair point. I will update in the next rev. We don't need to free the
existing tranche names; we can simply reuse the old pointers.
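A sketch of the suggested simplification, using the names from the quoted review comment:

```
/*
 * Sketch only (not the patch itself): when growing the tranche-name array,
 * carry the existing per-name DSA allocations over instead of copying and
 * freeing them. old_ptrs/new_ptrs are the dsa_pointer arrays from the
 * quoted review comment; newalloc is the new capacity.
 */
for (int i = 0; i < LWLockTrancheNames.shmem->count; i++)
    new_ptrs[i] = old_ptrs[i];          /* reuse; no dsa_allocate/dsa_free */
for (int i = LWLockTrancheNames.shmem->count; i < newalloc; i++)
    new_ptrs[i] = InvalidDsaPointer;    /* fresh slots start empty */
```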
Thanks for testing!
> which is what we expect.
>
> But if I look at LWLockTrancheNames.local (for the process that queried
> pg_stat_activity):
>
> (gdb) p LWLockTrancheNames.local[0]
> $1 = 0x0
> (gdb) p LWLockTrancheNames.local[1]
> $2 = 0x7b759a81b038 "BDT_Lck"
>
> It looks like there is an in
Hi,
I've begun reviewing this patch and have a few questions listed below:
1. + if (i < LWLockTrancheNames.shmem->allocated && DsaPointerIsValid(old_ptrs[i]))
Should an assert be used for the second condition instead?
Since for i < LWLockTrancheNames.shmem->allocated, the dsa pointer is
expected to be valid.
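For illustration, the suggested assertion form (a sketch using the quoted names):

```
/* Sketch: if validity is an invariant below 'allocated', assert it. */
if (i < LWLockTrancheNames.shmem->allocated)
{
    Assert(DsaPointerIsValid(old_ptrs[i]));
    new_ptrs[i] = old_ptrs[i];
}
```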
Hi,
On Tue, Aug 05, 2025 at 11:24:04PM -0500, Sami Imseih wrote:
> > I'll fix this in the next rev.
>
> v5 fixes the 2 issues found above.
Thanks!
> For the issue that was throwing " segment that has been freed",
> indeed we should be freeing LWLockTrancheNames.shmem->list_ptr,
Yeah.
> I'll fix this in the next rev.
v5 fixes the 2 issues found above.
For the issue that was throwing " segment that has been freed",
indeed we should be freeing LWLockTrancheNames.shmem->list_ptr.
list_ptr is a dsa_pointer that stores an array of dsa_pointers, which
then made me realize
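Illustrating the fix being described (names assumed from the thread):

```
/*
 * Sketch: after the element pointers are copied into the new array, only
 * the old array itself must be released; the name allocations it pointed
 * at are still referenced by the new array.
 */
dsa_pointer old_list = LWLockTrancheNames.shmem->list_ptr;

LWLockTrancheNames.shmem->list_ptr = new_list;  /* publish the new array */
if (DsaPointerIsValid(old_list))
    dsa_free(area, old_list);       /* free the array, not the names */
```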
Thanks for reviewing!
> Issue 1 --
>
> If I register enough tranches to go to:
>
> + /* Resize if needed */
> + if (LWLockTrancheNames.shmem->count >= LWLockTrancheNames.shmem->allocated)
> + {
> + newalloc = pg_nextpower2_32(Ma
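The usual shape of such a growth path, for reference (a sketch; the quoted hunk is cut off at pg_nextpower2_32, so the Max() starting point here is an assumption):

```
/* Sketch of a power-of-two growth step for the shared name array. */
if (LWLockTrancheNames.shmem->count >= LWLockTrancheNames.shmem->allocated)
{
    uint32      newalloc;
    dsa_pointer new_list;

    newalloc = pg_nextpower2_32(Max(LWLockTrancheNames.shmem->allocated + 1, 16));
    new_list = dsa_allocate(area, newalloc * sizeof(dsa_pointer));
    /* ... copy old entries, publish new_list, free the old array ... */
}
```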
Hi,
On Mon, Aug 04, 2025 at 10:47:45PM -0500, Sami Imseih wrote:
> > > > With a local hash table, I don't think it's necessary to introduce new
> > > > code for managing a DSA based list of tranche names as is done in v3.
> > > > We can go back to storing the shared tranche names in
> > > With a local hash table, I don't think it's necessary to introduce new
> > > code for managing a DSA based list of tranche names as is done in v3.
> > > We can go back to storing the shared tranche names in dshash.
> > >
> > > What do you think?
> >
> > My first thought is that a p
> On Mon, Aug 04, 2025 at 11:32:19AM -0500, Sami Imseih wrote:
> > With a local hash table, I don't think it's necessary to introduce new
> > code for managing a DSA based list of tranche names as is done in v3.
> > We can go back to storing the shared tranche names in dshash.
> >
> > What do you think?
On Mon, Aug 04, 2025 at 11:32:19AM -0500, Sami Imseih wrote:
> With a local hash table, I don't think it's necessary to introduce new
> code for managing a DSA based list of tranche names as is done in v3.
> We can go back to storing the shared tranche names in dshash.
>
> What do you think?
My first thought is that a p
> > > I think we could add a local backend copy that stays up to date with the
> > > DSA. One idea would be to use an atomic counter to track the number of
> > > entries in the DSA and compare it with a local backend counter whenever
> > > the
> > > tranche name lookup occurs. If the atomic counte
> > I think we could add a local backend copy that stays up to date with the
> > DSA. One idea would be to use an atomic counter to track the number of
> > entries in the DSA and compare it with a local backend counter whenever the
> > tranche name lookup occurs. If the atomic counter is higher (si
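A sketch of that invalidation scheme (all names hypothetical):

```
/*
 * Sketch of the proposed invalidation: a shared atomic entry count,
 * checked against a backend-local count on every name lookup. 'shmem'
 * stands in for the shared tranche-name struct.
 */
static uint32 local_count = 0;
static char **local_names = NULL;       /* backend-local cache */

static const char *
GetTrancheNameCached(int tranche_off)
{
    uint32      shared_count = pg_atomic_read_u32(&shmem->num_entries);

    if (shared_count > local_count)
    {
        refresh_local_cache(shared_count);  /* hypothetical: copy the new
                                             * DSA entries into local_names */
        local_count = shared_count;
    }
    return local_names[tranche_off];
}
```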
Hi,
On Fri, Aug 01, 2025 at 01:15:24PM -0500, Sami Imseih wrote:
> I think we could add a local backend copy that stays up to date with the
> DSA. One idea would be to use an atomic counter to track the number of
> entries in the DSA and compare it with a local backend counter whenever the
> tranche name lookup occurs.
> > Also, I suspect that there might be some concerns about the API changes.
> > While it should be a very easy fix, that seems likely to break a lot of
> > extensions.
> > I don't know if it's possible to make this stuff backward
> > compatible, and I also don't know if we really want to, as that'
> > > Unlike the local list of tranche names, appending to and searching the
> > > shared memory list requires an LWLock; in exclusive mode to append, and
> > > shared mode to search.
> >
> > The thing that stood out the most to me is how much more expensive looking
> > up the lock name is. At the
Hi,
On Thu, Jul 31, 2025 at 04:24:38PM -0500, Nathan Bossart wrote:
> I've attached a rebased version of the patch.
>
> On Mon, Jul 21, 2025 at 11:26:44PM -0500, Sami Imseih wrote:
> > Unlike the local list of tranche names, appending to and searching the
> > shared memory list requires an LWLock; in exclusive mode to append, and
> > shared mode to search.
I've attached a rebased version of the patch.
On Mon, Jul 21, 2025 at 11:26:44PM -0500, Sami Imseih wrote:
> Unlike the local list of tranche names, appending to and searching the
> shared memory list requires an LWLock; in exclusive mode to append, and
> shared mode to search.
The thing that sto
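The locking discipline quoted above amounts to (sketch; the lock name is assumed):

```
/* Append path: exclusive lock while mutating the shared list. */
LWLockAcquire(LWLockTrancheNamesLock, LW_EXCLUSIVE);
/* ... append the new name ... */
LWLockRelease(LWLockTrancheNamesLock);

/* Lookup path: shared lock is enough for readers. */
LWLockAcquire(LWLockTrancheNamesLock, LW_SHARED);
/* ... find the name for the tranche id ... */
LWLockRelease(LWLockTrancheNamesLock);
```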
> >> I was imagining putting the array in one big DSA allocation instead of
> >> carting around a pointer for each tranche name. (Sorry, I realize I am
> >> hand-waving over some of the details.)
> >
> > I understood it like this. Here is a sketch:
> >
> > ```
> > dsa_pointer p;
> >
> > dsa = dsa_
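The cut-off sketch presumably continues along these lines (a guess at the shape, not the author's actual code; fixed NAMEDATALEN-sized slots are an assumption):

```
/* One DSA allocation of fixed-width slots, indexed by tranche offset. */
dsa_pointer p;
char       *names;

p = dsa_allocate(dsa, (size_t) max_tranches * NAMEDATALEN);
names = dsa_get_address(dsa, p);
strlcpy(names + tranche_off * NAMEDATALEN, tranche_name, NAMEDATALEN);
```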
> > Hi,
> >
> > If a dshash table is used to store tranche names and IDs, where would the
> > tranche name for this table
> > be registered?
>
> I guess it could be a new BuiltinTrancheId for this dsa but not sure what
> Nathan
> and Sami have in mind.
Yes, it will be a BuiltinTrancheId for a sha
Hi,
On Tue, Jul 15, 2025 at 12:59:04PM +0530, Rahila Syed wrote:
> Hi,
>
> If a dshash table is used to store tranche names and IDs, where would the
> tranche name for this table
> be registered?
I guess it could be a new BuiltinTrancheId for this dsa but not sure what Nathan
and Sami have in mind.
On Tue, Jul 15, 2025 at 12:06:00PM -0500, Sami Imseih wrote:
> On Tue, Jul 15, 2025 at 11:57 AM Nathan Bossart
> wrote:
>> I was imagining putting the array in one big DSA allocation instead of
>> carting around a pointer for each tranche name. (Sorry, I realize I am
>> hand-waving over some of t
On Tue, Jul 15, 2025 at 11:57 AM Nathan Bossart
wrote:
>
> On Tue, Jul 15, 2025 at 11:52:19AM -0500, Sami Imseih wrote:
> >> Another random thought: I worry that the dshash approach might be quite a
> >> bit slower, and IIUC we just need to map an integer to a string. Maybe we
> >> should just us
On Tue, Jul 15, 2025 at 11:52:19AM -0500, Sami Imseih wrote:
>> Another random thought: I worry that the dshash approach might be quite a
>> bit slower, and IIUC we just need to map an integer to a string. Maybe we
>> should just use a DSA for LWLockTrancheNames. IOW we'd leave it as a char**
> > but put it in shared memory.
> Another random thought: I worry that the dshash approach might be quite a
> bit slower, and IIUC we just need to map an integer to a string. Maybe we
> should just use a DSA for LWLockTrancheNames. IOW we'd leave it as a char**
> but put it in shared memory.
To use DSA just for this purpose, we
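The char**-in-DSA layout being discussed might look like this (a sketch, names hypothetical):

```
/* Per-name allocations behind an array of dsa_pointer, i.e. the
 * shared-memory analogue of a char **. */
dsa_pointer *ptrs = dsa_get_address(area, shmem->list_ptr);
dsa_pointer  name_ptr = dsa_allocate(area, strlen(name) + 1);

strcpy((char *) dsa_get_address(area, name_ptr), name);
ptrs[tranche_off] = name_ptr;

/* Lookup converts an element back to a backend-local address: */
const char *found = dsa_get_address(area, ptrs[tranche_off]);
```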
Hi,
On Fri, Jul 11, 2025 at 04:32:13PM -0500, Sami Imseih wrote:
> > > and instead reuse the existing static hash table, which is
> > > capped at 128 custom wait events:
> > >
> > > ```
> > > #define WAIT_EVENT_CUSTOM_HASH_MAX_SIZE 128
> > > ```
> >
> > That's probably still high enough, thoughts?
Hi,
>
> Since DSM is not available during postmaster, if we were to create a DSA
> segment in place, similar to what's done in StatsShmemInit(), we would also
> need to ensure that the initial shared memory is sized appropriately. This
> is
> because it would need to be large enough to accommodat
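A sketch of the in-place-creation pattern being referenced (the size constant and tranche id are assumptions):

```
/* Following the StatsShmemInit() pattern: reserve a fixed region in the
 * main shared memory segment at startup and create the DSA in place, so
 * it is usable without DSM. */
#define TRANCHE_NAMES_DSA_SIZE  (64 * 1024)

void       *place = ShmemAlloc(TRANCHE_NAMES_DSA_SIZE);
dsa_area   *area = dsa_create_in_place(place, TRANCHE_NAMES_DSA_SIZE,
                                       LWTRANCHE_TRANCHE_NAMES, /* hypothetical */
                                       NULL);
```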
On Mon, Jul 14, 2025 at 03:45:02PM -0500, Sami Imseih wrote:
>> Ah, I missed the problem with postmaster. Could we have the first backend
>> that needs to access the table be responsible for creating it and
>> populating it with the built-in/requested-at-startup entries?
>
> We can certainly main
> Ah, I missed the problem with postmaster. Could we have the first backend
> that needs to access the table be responsible for creating it and
> populating it with the built-in/requested-at-startup entries?
We can certainly maintain a flag in the shared state that is set once
the first backend l
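A sketch of that lazily-initialized state (all names hypothetical):

```
/* Shared state created at startup; the dsa itself is created lazily. */
typedef struct TrancheNamesShmem
{
    bool        initialized;    /* set once the first backend creates the dsa */
    dsa_handle  dsah;           /* dsa created by that backend */
} TrancheNamesShmem;

static void
tranche_names_attach(TrancheNamesShmem *shmem)
{
    LWLockAcquire(LWLockTrancheNamesLock, LW_EXCLUSIVE);
    if (!shmem->initialized)
    {
        /* First backend here creates the dsa and loads the
         * built-in/requested-at-startup entries. */
        create_and_populate(shmem);     /* hypothetical helper */
        shmem->initialized = true;
    }
    LWLockRelease(LWLockTrancheNamesLock);
}
```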
Nathan Bossart writes:
> Ah, I missed the problem with postmaster. Could we have the first backend
> that needs to access the table be responsible for creating it and
> populating it with the built-in/requested-at-startup entries? Also, is
> there any chance that postmaster might need to access
On Mon, Jul 14, 2025 at 02:34:00PM -0500, Sami Imseih wrote:
>> Why do we need three different places for the lock names? Is there a
>> reason we can't put it all in shared memory?
>
> The real reason I felt it was better to keep three separate locations is that
> it allows for a clear separation
>> Attached is a proof of concept that does not alter the
>> LWLockRegisterTranche API.
> IMHO we should consider modifying the API, because right now you have to
> call LWLockRegisterTranche() in each backend. Why not accept the name as
> an argument for LWLockNewTrancheId() and only require it t
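For reference, the shape of the change being suggested (the new signature is hypothetical):

```
/* Current API: id allocation and name registration are separate, and the
 * registration call must be repeated in every backend. */
extern int  LWLockNewTrancheId(void);
extern void LWLockRegisterTranche(int tranche_id, const char *tranche_name);

/*
 * Suggested shape (hypothetical): accept the name up front and store it
 * in shared memory, so per-backend registration goes away:
 *
 *     extern int LWLockNewTrancheId(const char *tranche_name);
 */
```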
On Fri, Jul 11, 2025 at 04:32:13PM -0500, Sami Imseih wrote:
> Now, what I think will be a good API is to provide an alternative to
> LWLockRegisterTranche,
> which now takes in both a tranche ID and tranche_name. The new API I propose
> is
> LWLockRegisterTrancheWaitEventCustom which takes only a
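A guess at the proposed declaration (the message is cut off, so taking only a tranche name and returning the new id is an assumption):

```
/* Hypothetical sketch of the proposed alternative, mirroring the custom
 * wait event API. */
extern int LWLockRegisterTrancheWaitEventCustom(const char *tranche_name);
```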
> > and instead reuse the existing static hash table, which is
> > capped at 128 custom wait events:
> >
> > ```
> > #define WAIT_EVENT_CUSTOM_HASH_MAX_SIZE 128
> > ```
>
> That's probably still high enough, thoughts?
I have no reason to believe that this number could be too low.
I am not aware of
On Wed, Jul 09, 2025 at 04:39:48PM -0500, Sami Imseih wrote:
> Attached is a proof of concept that does not alter the
> LWLockRegisterTranche API.
IMHO we should consider modifying the API, because right now you have to
call LWLockRegisterTranche() in each backend. Why not accept the name as
an a
Hi,
On Thu, Jul 10, 2025 at 04:34:34PM -0500, Sami Imseih wrote:
> Thanks for the feedback!
>
> > > Attached is a proof of concept that does not alter the
> > > LWLockRegisterTranche API. Instead, it detects when a registration is
> > > performed by a normal backend and stores the tranche name in
Thanks for the feedback!
> > Attached is a proof of concept that does not alter the
> > LWLockRegisterTranche API. Instead, it detects when a registration is
> > performed by a normal backend and stores the tranche name in shared memory,
> > using a dshash keyed by tranche ID. Tranche name lookup
Hi,
On Wed, Jul 09, 2025 at 04:39:48PM -0500, Sami Imseih wrote:
> Hi,
>
> When querying pg_stat_activity, the function pgstat_get_wait_event is
> called, which internally uses GetLWLockIdentifier and GetLWTrancheName
> to map the LWLock to its tranche name. If the backend does not recognize
> th
Hi,
This is a follow-up to a discussion started in [0].
LWLocks in PostgreSQL are categorized into tranches, and the tranche name
appears as the wait_event in pg_stat_activity. There are both built-in
tranche names and tranche names that can be registered by extensions using
RequestNamedLWLockTranche().
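For context, typical usage of the existing named-tranche API (a sketch; hook wiring elided):

```
/* The tranche name registered here is what later shows up as the
 * wait_event in pg_stat_activity. */
#include "storage/lwlock.h"

static LWLock *my_lock;

static void
my_shmem_request(void)
{
    /* from shmem_request_hook, before shared memory is created */
    RequestNamedLWLockTranche("my_extension", 1);
}

static void
my_shmem_startup(void)
{
    /* from shmem_startup_hook, once shared memory exists */
    my_lock = &GetNamedLWLockTranche("my_extension")[0].lock;
}
```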