On 29/07/2024 21:10, Robert Haas wrote:
On Mon, Jul 29, 2024 at 5:57 AM Rui Zhao wrote:
> Prior to PG16, postmaster children would manually detach from shared memory
> if it was not needed. However, this behavior was removed in fork mode in
> commit aafc05d.
Oh. The commit message makes no mention of that. I wonder wheth
Thanks for your reply.
> Thanks for the patch. How do you estimate its performance impact?
In my patch, only child processes that set
(child_process_kinds[child_type].shmem_attach == false)
will detach from shared memory.
Child processes with B_STANDALONE_BACKEND and B_INVALID don
Hi Rui,
> Prior to PG16, postmaster children would manually detach from shared memory
> if it was not needed. However, this behavior was removed in fork mode in
> commit aafc05d.
>
> Detaching shared memory when it is no longer needed is beneficial, as
> postmaster children (lik
Prior to PG16, postmaster children would manually detach from shared memory
if it was not needed. However, this behavior was removed in fork mode in
commit aafc05d.
Detaching shared memory when it is no longer needed is beneficial, as
postmaster children (like syslogger) don't wish to tak
During “ambuild”, I use “ShmemInitStruct” to initialize a segment of shared
memory and save the pointer to this location in my static, global pointer. I
then set some values of the structure that the pointer points to, which I
believe works correctly. I have made sure to acquire, and later release
Committed.
--
nathan
Nathan Bossart writes:
> The only thing stopping me from committing this right now is Tom's upthread
> objection about adding more GUCs that just expose values that you can't
> actually set. If that objection still stands, I'll withdraw this patch
> (and maybe try introducing a new way to surface
On Sun, Jun 09, 2024 at 02:04:17PM -0500, Nathan Bossart wrote:
> Here's a new version of the patch with the GUC renamed to
> num_os_semaphores.
The only thing stopping me from committing this right now is Tom's upthread
objection about adding more GUCs that just expose values that you can't
actua
--- a/src/backend/storage/ipc/ipci.c
+++ b/src/backend/storage/ipc/ipci.c
@@ -372,11 +372,12 @@ InitializeShmemGUCs(void)
 	Size		size_b;
 	Size		size_mb;
 	Size		hp_size;
+	int			num_semas;

 	/*
 	 * Calculate the shared memory size and round up to the nearest
On Thu, Jun 06, 2024 at 03:31:53PM -0400, Robert Haas wrote:
> I don't really like making this a GUC, but what's the other option?
> It's reasonable for people to want to ask the server how many
> resources it will need to start, and -C is the only tool we have for
> that right now. So I feel like
On Thu, Jun 6, 2024 at 3:21 PM Nathan Bossart wrote:
> Here is a rebased version of the patch for v18 that adds a runtime-computed
> GUC. As I noted earlier, there still isn't a consensus on this approach.
I don't really like making this a GUC, but what's the other option?
It's reasonable for pe
ipci.c
@@ -372,11 +372,12 @@ InitializeShmemGUCs(void)
 	Size		size_b;
 	Size		size_mb;
 	Size		hp_size;
+	int			num_semas;

 	/*
 	 * Calculate the shared memory size and round up to the nearest
 	 * megabyte.
On Mon, Jun 03, 2024 at 02:04:19PM -0500, Nathan Bossart wrote:
> Of course, as soon as I committed this, I noticed another missing reference
> to max_wal_senders in the paragraph about POSIX semaphores. I plan to
> commit/back-patch the attached patch within the next couple days.
Committed.
--
On Mon, Jun 03, 2024 at 12:18:21PM -0500, Nathan Bossart wrote:
> On Tue, May 21, 2024 at 11:15:14PM +0000, Imseih (AWS), Sami wrote:
>> As far as backpatching the present inconsistencies in the docs,
>> [0] looks good to me.
>
> Committed.
Of course, as soon as I committed this, I noticed anothe
On Tue, May 21, 2024 at 11:15:14PM +0000, Imseih (AWS), Sami wrote:
>> Any concerns with doing something like this [0] for the back-branches? The
>> constant would be 6 instead of 7 on v14 through v16.
>
> As far as backpatching the present inconsistencies in the docs,
> [0] looks good to me.
Committed.
> Any concerns with doing something like this [0] for the back-branches? The
> constant would be 6 instead of 7 on v14 through v16.
As far as backpatching the present inconsistencies in the docs,
[0] looks good to me.
[0]
https://postgr.es/m/attachment/160360/v1-0001-fix-kernel-resources-docs-on
 	Size		size_mb;
 	Size		hp_size;
+	int			num_semas;

 	/*
 	 * Calculate the shared memory size and round up to the nearest
 	 * megabyte.
 	 */
-	size_b = CalculateShmemSize(NULL);
+	size_b = CalculateShmemSize(&num_semas);
> postgres -D pgdev-dev -c shared_buffers=16MB -C
> shared_memory_size_in_huge_pages
> 13
> postgres -D pgdev-dev -c shared_buffers=16MB -c huge_page_size=1GB -C
> shared_memory_size_in_huge_pages
> 1
> Which is very useful to be able to actually configure that number of huge
> pages. I don't t
Hi,
On 2024-05-17 18:30:08 +, Imseih (AWS), Sami wrote:
> > The advantage of the GUC is that its value could be seen before trying to
> > actually start the server.
>
> Only if they have a sample in postgresql.conf file, right?
> A GUC like shared_memory_size_in_huge_pages will not be.
You ca
On Fri, May 17, 2024 at 06:30:08PM +, Imseih (AWS), Sami wrote:
>> The advantage of the GUC is that its value could be seen before trying to
>> actually start the server.
>
> Only if they have a sample in postgresql.conf file, right?
> A GUC like shared_memory_size_in_huge_pages will not be.
On Fri, May 17, 2024 at 12:48:37PM -0500, Nathan Bossart wrote:
> On Fri, May 17, 2024 at 01:09:55PM -0400, Tom Lane wrote:
>> Nathan Bossart writes:
>>> At a bare minimum, we should probably fix the obvious problems, but I
>>> wonder if we could simplify this section a bit, too.
>>
>> Yup. "The
>>> If the exact values
>>> are important, maybe we could introduce more GUCs like
>>> shared_memory_size_in_huge_pages that can be consulted (instead of
>>> requiring users to break out their calculators).
>>
>> I don't especially like shared_memory_size_in_huge_pages, and I don't
>> want to intro
On Fri, May 17, 2024 at 01:09:55PM -0400, Tom Lane wrote:
> Nathan Bossart writes:
>> [ many, many problems in documented formulas ]
>
>> At a bare minimum, we should probably fix the obvious problems, but I
>> wonder if we could simplify this section a bit, too.
>
> Yup. "The definition of ins
Nathan Bossart writes:
> [ many, many problems in documented formulas ]
> At a bare minimum, we should probably fix the obvious problems, but I
> wonder if we could simplify this section a bit, too.
Yup. "The definition of insanity is doing the same thing over and
over and expecting different results."
(moving to a new thread)
On Thu, May 16, 2024 at 09:16:46PM -0500, Nathan Bossart wrote:
> On Thu, May 16, 2024 at 04:37:10PM +, Imseih (AWS), Sami wrote:
>> Also, Not sure if I am mistaken here, but the "+ 5" in the existing docs
>> seems wrong.
>>
>> If it refers to NUM_AUXILIARY_PROCS def
On 2024-Mar-05, Hayato Kuroda (Fujitsu) wrote:
> Basically sounds good. My concerns are:
>
> * GetNamedDSMSegment() does not returns a raw pointer to dsm_segment. This
> means
> that it may be difficult to do dsm_unpin_segment on the caller side.
Maybe we don't need a "named" DSM segment at all
Dear Alvaro,
Thanks for giving comments!
> > I agreed it sounds good, but I don't think it can be implemented by
> > current interface. An interface for dynamically allocating memory is
> > GetNamedDSMSegment(), and it returns the same shared memory region if
>
> I agreed it sounds good, but I don't think it can be implemented by
> current interface. An interface for dynamically allocating memory is
> GetNamedDSMSegment(), and it returns the same shared memory region if
> input names are the same. Therefore, there is no way to re-a
About the suggestion, you imagined AutoVacuumRequestWork() and brininsert(),
right? I agreed it sounds good, but I don't think it can be implemented by the
current interface. An interface for dynamically allocating memory is
GetNamedDSMSegment(), and it returns the same shared memory region if input
names are
On 2024-Mar-04, Hayato Kuroda (Fujitsu) wrote:
> However, the second idea is still valid, which allows the allocation
> of shared memory dynamically. This is a bit efficient for the system
> which tuples won't be frozen. Thought?
I think it would be worth allocating AutoVacuumShmem
ed by the
configuration parameter autovacuum_freeze_max_age. (This will happen even
if autovacuum is disabled.)
```
This means that my first idea won't work well. Even if the postmaster does not
initially allocate shared memory, backends may request to start auto vacuum and
use the region. However, the second
On 2024-Mar-04, Hayato Kuroda (Fujitsu) wrote:
> Dear hackers,
>
> While reading codes, I found that ApplyLauncherShmemInit() and
> AutoVacuumShmemInit() are always called even if they would not be
> launched.
Note that there are situations where the autovacuum launcher is started
even though au
Dear Tom,
Thanks for replying!
> "Hayato Kuroda (Fujitsu)" writes:
> > While reading codes, I found that ApplyLauncherShmemInit() and
> AutoVacuumShmemInit()
> > are always called even if they would not be launched.
> > It may be able to reduce the start time to avoid the unnecessary allocation.
"Hayato Kuroda (Fujitsu)" writes:
> While reading codes, I found that ApplyLauncherShmemInit() and
> AutoVacuumShmemInit()
> are always called even if they would not be launched.
> It may be able to reduce the start time to avoid the unnecessary allocation.
Why would this be a good idea? It wou
/launcher.c
@@ -962,6 +962,9 @@ ApplyLauncherShmemInit(void)
 {
 	bool		found;

+	if (max_logical_replication_workers == 0 || IsBinaryUpgrade)
+		return;
+
```
2)
Dynamically allocate the shared memory. This was allowed by recent commit [1].
I made a small PoC only for logical launcher
On Mon, Jan 22, 2024 at 05:00:48PM +0530, Bharath Rupireddy wrote:
> On Mon, Jan 22, 2024 at 3:43 AM Nathan Bossart
> wrote:
>> Oops. I've attached an attempt at fixing this. I took the opportunity to
>> clean up the surrounding code a bit.
>
> The code looks cleaner and readable with the patc
On Mon, Jan 22, 2024 at 3:43 AM Nathan Bossart wrote:
>
> Oops. I've attached an attempt at fixing this. I took the opportunity to
> clean up the surrounding code a bit.
The code looks cleaner and readable with the patch. All the call sites
are taking care of dsm_attach returning NULL value. So
On Sun, Jan 21, 2024 at 04:13:20PM -0600, Nathan Bossart wrote:
> Oops. I've attached an attempt at fixing this. I took the opportunity to
> clean up the surrounding code a bit.
Thanks for the patch. Your proposed attempt looks correct to me with
an ERROR when no segments are found..
--
Michael
isting segment")));
}
-	else if (!dsm_find_mapping(entry->handle))
+	else
 	{
-		/* Attach to existing segment. */
-		dsm_segment *seg = dsm_attach(entry->handle);
+		dsm_segment *seg = dsm_find_mapping(entry->handle);
+
+		/* If the existing segment is not already attached, attach it now.
Nathan Bossart writes:
> Committed. Thanks everyone for reviewing!
Coverity complained about this:
*** CID 1586660: Null pointer dereferences (NULL_RETURNS)
/srv/coverity/git/pgsql-git/postgresql/src/backend/storage/ipc/dsm_registry.c:
185 in GetNamedDSMSegment()
179 }
180
On Wed, Jan 17, 2024 at 08:00:00AM +0530, Bharath Rupireddy wrote:
> The v8 patches look good to me.
Committed. Thanks everyone for reviewing!
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
On Wed, Jan 17, 2024 at 06:48:37AM +0530, Bharath Rupireddy wrote:
> LGTM.
Committed. Thanks for reviewing!
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
On Tue, Jan 16, 2024 at 9:37 PM Nathan Bossart wrote:
>
> The autoprewarm change (0003) does use this variable. I considered making
> it optional (i.e., you could pass in NULL if you didn't want it), but I
> didn't feel like the extra code in GetNamedDSMSegment() to allow this was
> worth it so t
On Tue, Jan 16, 2024 at 9:22 PM Nathan Bossart wrote:
>
> I fixed this in v4.
LGTM.
--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
>From 402eaf87776fb6a9d212da66947f47c63bd53f2a Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Thu, 11 Jan 2024 21:55:25 -0600
Subject: [PATCH v8 1/3] reorganize shared memory and lwlocks documentation
---
doc/src/sgml/xfunc.sgml | 184 +---
1 file c
7f47c63bd53f2a Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Thu, 11 Jan 2024 21:55:25 -0600
Subject: [PATCH v4 1/1] reorganize shared memory and lwlocks documentation
---
doc/src/sgml/xfunc.sgml | 184 +---
1 file changed, 115 insertions(+), 69 deletions(-)
On Tue, Jan 16, 2024 at 10:02:15AM +0530, Bharath Rupireddy wrote:
> The v3 patch looks good to me except for a nitpick: the input
> parameter for RequestAddinShmemSpace is 'Size' not 'int'
>
>
> void RequestAddinShmemSpace(int size)
>
Hah, I think this mistake is nearly old enough to vote (
but in production (without the
assertion), I'm seeing the following errors.
2024-01-16 04:49:28.961 UTC client backend[864369]
pg_regress/test_dsm_registry ERROR: could not resize shared memory
segment "/PostgreSQL.3701090278" to 0 bytes: Invalid argument
2024-01-16 04:49:29.264 UT
On Sun, Jan 14, 2024 at 2:58 AM Nathan Bossart wrote:
>
> Great. I've attached a v3 with a couple of fixes suggested in the other
> thread [0]. I'll wait a little while longer in case anyone else wants to
> take a look.
The v3 patch looks good to me except for a nitpick: the input
parameter for
On Fri, Jan 12, 2024 at 01:45:55PM -0600, Nathan Bossart wrote:
> On Fri, Jan 12, 2024 at 11:13:46PM +0530, Abhijit Menon-Sen wrote:
>> At 2024-01-12 11:21:52 -0600, nathandboss...@gmail.com wrote:
>>> + Each backend sould obtain a pointer to the reserved shared memo
>From b56931edc4488a7376b27ba0e5519cc3a61b4899 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Thu, 11 Jan 2024 21:55:25 -0600
Subject: [PATCH v3 1/1] reorganize shared memory and lwlocks documentation
---
doc/src/sgml/xfunc.sgml | 182 +---
1 file changed, 114 ins
ute the registered shmem_startup_hook shortly
> > after it attaches to shared memory.
That's much better, thanks.
I think the patch could use another pair of eyes, ideally from a
native English speaker. But if no one will express any objections for
a while I suggest merging it.
--
Best regards,
Aleksander Alekseev
On Fri, Jan 12, 2024 at 11:13:46PM +0530, Abhijit Menon-Sen wrote:
> At 2024-01-12 11:21:52 -0600, nathandboss...@gmail.com wrote:
>> + Each backend sould obtain a pointer to the reserved shared memory by
>
> sould → should
D'oh. Thanks.
>> + Add-ins can
At 2024-01-12 11:21:52 -0600, nathandboss...@gmail.com wrote:
>
> From: Nathan Bossart
> Date: Thu, 11 Jan 2024 21:55:25 -0600
> Subject: [PATCH v6 1/3] reorganize shared memory and lwlocks documentation
>
> ---
> doc/src/sgml/xfunc.sgml | 182 +
On Fri, Jan 12, 2024 at 09:46:50AM -0600, Nathan Bossart wrote:
> On Fri, Jan 12, 2024 at 05:12:28PM +0300, Aleksander Alekseev wrote:
>> """
>> Any registered shmem_startup_hook will be executed shortly after each
>> backend attaches to shared memory.
it here.
[0] https://postgr.es/m/20240112041430.GA3557928%40nathanxps13
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 7cf22727a96757bf212ec106bd471bf55a6981b9 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Thu, 11 Jan 2024 21:55:25 -0600
Subject: [PATCH v6 1/3] reorg
Thanks for reviewing.
On Fri, Jan 12, 2024 at 05:12:28PM +0300, Aleksander Alekseev wrote:
> """
> Any registered shmem_startup_hook will be executed shortly after each
> backend attaches to shared memory.
> """
>
> IMO the word "each" he
Hi,
> I recently began trying to write documentation for the dynamic shared
> memory registry feature [0], and I noticed that the "Shared Memory and
> LWLocks" section of the documentation might need some improvement.
I know that feeling.
> Thoughts?
"""
A
I recently began trying to write documentation for the dynamic shared
memory registry feature [0], and I noticed that the "Shared Memory and
LWLocks" section of the documentation might need some improvement. At
least, I felt that it would be hard to add any new content to this secti
u of this new
> >> feature? I think it's quite possible to get rid of the shmem request
> >> and startup hooks (of course, not now but at some point in future to
> >> not break the external modules), because all the external modules can
> >> allocate and in
On Mon, Jan 08, 2024 at 11:16:27AM -0600, Nathan Bossart wrote:
> On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:
>> 1. I think we need to add some notes about this new way of getting
>> shared memory for external modules in the Shared Memory and
>> LWLocks
nts in Postgres". Do you see anything wrong with this?
>>
>> Why do you feel it should be renamed? I don't see anything wrong with it,
>> but I also don't see any particular advantage with that name compared to
>> "dynamic shared memory registry."
On 9/1/2024 00:16, Nathan Bossart wrote:
On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:
1. I think we need to add some notes about this new way of getting
shared memory for external modules in the Shared Memory and
LWLocks section in xfunc.sgml? This will at least tell
On Mon, Jan 8, 2024 at 10:48 PM Nathan Bossart
wrote:
> On Mon, Jan 08, 2024 at 11:13:42AM +0530, Amul Sul wrote:
> > +void *
> > +dsm_registry_init_or_attach(const char *key, size_t size,
> >
> > I think the name could be simple as dsm_registry_init() like we use
> > elsewhere e.g. ShmemInitHash
On Mon, Jan 08, 2024 at 11:13:42AM +0530, Amul Sul wrote:
> +void *
> +dsm_registry_init_or_attach(const char *key, size_t size,
>
> I think the name could be simple as dsm_registry_init() like we use
> elsewhere e.g. ShmemInitHash() which doesn't say attach explicitly.
That seems reasonable to m
On Mon, Jan 08, 2024 at 10:53:17AM +0530, Bharath Rupireddy wrote:
> 1. I think we need to add some notes about this new way of getting
> shared memory for external modules in the Shared Memory and
> LWLocks section in xfunc.sgml? This will at least tell there's
> another way for
> Thanks. A fresh look at the v5 patches left me with the following thoughts:
>
> 1. I think we need to add some notes about this new way of getting
> shared memory for external modules in the Shared Memory and
> LWLocks section in xfunc.sgml? This will at least tell there's
> an
getting
shared memory for external modules in the Shared Memory and
LWLocks section in xfunc.sgml? This will at least tell there's
another way for external modules to get shared memory, not just with
the shmem_request_hook and shmem_startup_hook. What do you think?
2. FWIW, I'd like to
/src/backend/storage/ipc/dsm_registry.c
@@ -0,0 +1,176 @@
+/*-
+ *
+ * dsm_registry.c
+ *
+ * Functions for interfacing with the dynamic shared memory registry. This
+ * provides a way for libraries to use shared memory without needing to
+ * request it at startup time via a shmem_request_hook.
+ *
+ * Portions Copyri
On Wed, Jan 3, 2024 at 4:19 AM Nathan Bossart wrote:
>
> Here's a new version of the patch set with Bharath's feedback addressed.
Thanks. The v4 patches look good to me except for a few minor
comments. I've marked it as RfC in CF.
1. Update all the copyright to the new year. A run of
src/tools/c
>> > hence limiting it to a higher value 256 instead of just NAMEDATALEN?
>> > My thoughts were around saving a few bytes of shared memory space that
>> > can get higher when multiple modules using a DSM registry with
>> > multiple DSM segments.
>>
On Tue, Jan 2, 2024 at 11:21 AM Nathan Bossart wrote:
> > Are we expecting, for instance, a 128-bit UUID being used as a key and
> > hence limiting it to a higher value 256 instead of just NAMEDATALEN?
> > My thoughts were around saving a few bytes of shared memory space that
having a dedicated test suite, if for no other reason than to guarantee some
coverage even if the other in-tree uses disappear.
> I've tried with a shared memory structure size of 10GB on my
> development machine and I have seen an intermittent crash (I haven't
> got a chance
ate test module.
> > 1. IIUC, this feature lets external modules create as many DSM
> > segments as possible with different keys right? If yes, is capping the
> > max number of DSMs a good idea?
>
> Why? Even if it is a good idea, what limit could we choose that wouldn't
istry.c
@@ -0,0 +1,209 @@
+/*-
+ *
+ * dsm_registry.c
+ *
+ * Functions for interfacing with the dynamic shared memory registry. This
+ * provides a way for libraries to use shared memory without needing to
+ * request it
gistry.c
+ *
+ * Functions for interfacing with the dynamic shared memory registry. This
+ * provides a way for libraries to use shared memory without needing to
+ * request it at startup time via a shmem_request_hook.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ *
educe the number of questions in mailing lists.
2. Looking into existing extensions, I see that the most common case of
using a shared memory segment is maintaining some hash table or state
structure that needs at least one lock.
Try to rewrite the pg_prewarm according to this new feature, an
On Thu, Dec 21, 2023 at 12:03:18AM +0800, Zhang Mingli wrote:
> I see most xxxShmemInit functions have the logic to handle IsUnderPostmaster
> env.
> Do we need to consider it in DSMRegistryShmemInit() too? For example, add
> some assertions.
> Others LGTM.
Good point. I _think_ the registry is
nce in a while, I find myself wanting to use shared memory in a
> loadable module without requiring it to be loaded at server start via
> shared_preload_libraries. The DSM API offers a nice way to create and
> manage dynamic shared memory segments, so creating a segment after server
>
> with?
Why is it too much?
> 4. Do we need on_dsm_detach for each DSM created?
Presently, I've designed this such that the DSM remains attached for the
lifetime of a session (and stays present even if all attached sessions go
away) to mimic what you get when you allocate shared memory duri
On Wed, Dec 20, 2023 at 11:02:58AM +0200, Andrei Lepikhov wrote:
> In that case, maybe change the test case to make it closer to real-life
> usage - with locks and concurrent access (See attachment)?
I'm not following why we should make this test case more complicated. It
is only intended to test
On Tue, Dec 5, 2023 at 9:17 AM Nathan Bossart wrote:
>
> Every once in a while, I find myself wanting to use shared memory in a
> loadable module without requiring it to be loaded at server start via
> shared_preload_libraries. The DSM API offers a nice way to create and
> manage
On 20/12/2023 07:04, Michael Paquier wrote:
On Tue, Dec 19, 2023 at 10:14:44AM -0600, Nathan Bossart wrote:
On Tue, Dec 19, 2023 at 10:49:23AM -0500, Robert Haas wrote:
On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov
wrote:
2. I think a separate file for this feature looks too expensive.
Acco
On Tue, Dec 19, 2023 at 10:14:44AM -0600, Nathan Bossart wrote:
> On Tue, Dec 19, 2023 at 10:49:23AM -0500, Robert Haas wrote:
>> On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov
>> wrote:
>>> 2. I think a separate file for this feature looks too expensive.
>>> According to the gist of that code, i
> What is the use-case for only verifying the existence of a segment?
One case I was thinking about is parallel aggregates that can define
combining and serial/deserial functions, where some of the operations
could happen in shared memory, requiring a DSM, and where each process
doing some a
On Fri, Dec 08, 2023 at 04:36:52PM +0900, Michael Paquier wrote:
> Yes, tracking that in a more central way can have many usages, so your
> patch sounds like a good idea. Note that we have one case in core
> that be improved and make use of what you have here: autoprewarm.c.
I'll add a patch for
On Tue, Dec 19, 2023 at 10:49:23AM -0500, Robert Haas wrote:
> On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov
> wrote:
>> 2. I think a separate file for this feature looks too expensive.
>> According to the gist of that code, it is a part of the DSA module.
>
> -1. I think this is a totally diff
certainly DROP EXTENSION, in which case you might expect the segment to go
away or at least be reset. But even today, once a preloaded library is
loaded, it stays loaded and its shared memory remains regardless of whether
you CREATE/DROP extension. Can you think of problems with keeping the
se
On Mon, Dec 18, 2023 at 03:32:08PM +0700, Andrei Lepikhov wrote:
> 3. The dsm_registry_init_or_attach routine allocates a DSM segment. Why not
> create dsa_area for a requestor and return it?
My assumption is that most modules just need a fixed-size segment, and if
they really needed a DSA segment
On Mon, Dec 18, 2023 at 3:32 AM Andrei Lepikhov
wrote:
> 2. I think a separate file for this feature looks too expensive.
> According to the gist of that code, it is a part of the DSA module.
-1. I think this is a totally different thing than DSA. More files
aren't nearly as expensive as the conf
Hi!
This patch looks like a good solution for a pain in the ass; I'm all for
this patch being committed.
I have looked through the code and agree with Andrei: the code looks good.
Just a suggestion - maybe it is worth adding a function for detaching the
segment,
for cases when we unload and/or re-lo
On 18/12/2023 13:39, Andrei Lepikhov wrote:
On 5/12/2023 10:46, Nathan Bossart wrote:
I don't presently have any concrete plans to use this for anything, but I
thought it might be useful for extensions for caching, etc. and wanted to
see whether there was any interest in the feature.
I am deli
On 5/12/2023 10:46, Nathan Bossart wrote:
I don't presently have any concrete plans to use this for anything, but I
thought it might be useful for extensions for caching, etc. and wanted to
see whether there was any interest in the feature.
I am delighted that you commenced this thread.
Designi
of dynamic workers but the library may
not be loaded with shared_preload_libraries, meaning that it can
allocate a chunk of shared memory worth a size of
AutoPrewarmSharedState without having requested it in a
shmem_request_hook. AutoPrewarmSharedState could be moved to a DSM
and tracked with the
On Tue, Dec 5, 2023 at 10:35 AM Joe Conway wrote:
> Notwithstanding any dragons there may be, and not having actually looked
> at the the patches, I love the concept! +
Seems fine to me too. I haven't looked at the patches or searched for
dragons either, though.
--
Robert Haas
EDB: http://www.e
On 12/4/23 22:46, Nathan Bossart wrote:
Every once in a while, I find myself wanting to use shared memory in a
loadable module without requiring it to be loaded at server start via
shared_preload_libraries. The DSM API offers a nice way to create and
manage dynamic shared memory segments, so
Every once in a while, I find myself wanting to use shared memory in a
loadable module without requiring it to be loaded at server start via
shared_preload_libraries. The DSM API offers a nice way to create and
manage dynamic shared memory segments, so creating a segment after server
start is
nging relevant variables
> :)
Yeah.
> I guess I don't really know what you mean with global or local
> pointers?
I meant "global pointers to shared memory" (like XLogCtl) vs "local
pointers to shared memory" (like other_xids in
TransactionIdIsActive()).
> We
Hi,
On 2023-11-03 17:44:44 -0700, Jeff Davis wrote:
> On Fri, 2023-11-03 at 15:59 -0700, Andres Freund wrote:
> > I don't think so. We used to use volatile for most shared memory
> > accesses, but
> > volatile doesn't provide particularly useful semantics - and