Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-22 Thread Tom Lane
Martijn van Oosterhout  writes:
> On Mon, 16 Sep 2019 at 15:33, Tom Lane  wrote:
>> But do we care?  With asyncQueueAdvanceTail gone from the listeners,
>> there's no longer an exclusive lock for them to contend on.  And,
>> again, I failed to see any significant contention even in HEAD as it
>> stands; so I'm unconvinced that you're solving a live problem.

> You're right: they only acquire a shared lock, which is much less of a
> problem. And I forgot that we're still reducing the load from a few
> hundred signals and exclusive locks per NOTIFY to perhaps a dozen
> shared locks every thousand messages. You'd be hard-pressed to
> demonstrate there's a real problem here.

> So I think your patch is fine as is.

OK, pushed.

> Looking at the release cycle it looks like the earliest either of
> these patches will appear in a release is PG13, right?

Right.

regards, tom lane




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-17 Thread Martijn van Oosterhout
Hoi Tom,

On Mon, 16 Sep 2019 at 15:33, Tom Lane  wrote:
>
> Martijn van Oosterhout  writes:
> I think I like the idea of having SignalBackends do the waking up of
> a slow backend, but I'm not enthused by the "let's wake up (at once)
> everyone that is behind". That's one of the issues I was explicitly
> trying to solve. If there are a significant number of "slow" backends
> then we get the "thundering herd" again.
>
> But do we care?  With asyncQueueAdvanceTail gone from the listeners,
> there's no longer an exclusive lock for them to contend on.  And,
> again, I failed to see any significant contention even in HEAD as it
> stands; so I'm unconvinced that you're solving a live problem.

You're right: they only acquire a shared lock, which is much less of a
problem. And I forgot that we're still reducing the load from a few
hundred signals and exclusive locks per NOTIFY to perhaps a dozen
shared locks every thousand messages. You'd be hard-pressed to
demonstrate there's a real problem here.

So I think your patch is fine as is.

Looking at the release cycle it looks like the earliest either of
these patches will appear in a release is PG13, right?

Thanks again.
-- 
Martijn van Oosterhout  http://svana.org/kleptog/




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-16 Thread Tom Lane
Martijn van Oosterhout  writes:
> On Mon, 16 Sep 2019 at 00:14, Tom Lane  wrote:
>> ... I also think we
>> can simplify the handling of other-database listeners by including
>> them in the set signaled by SignalBackends, but only if they're
>> several pages behind.  So that leads me to the attached patch;
>> what do you think?

> I think I like the idea of having SignalBackends do the waking up of
> a slow backend, but I'm not enthused by the "let's wake up (at once)
> everyone that is behind". That's one of the issues I was explicitly
> trying to solve. If there are a significant number of "slow" backends
> then we get the "thundering herd" again.

But do we care?  With asyncQueueAdvanceTail gone from the listeners,
there's no longer an exclusive lock for them to contend on.  And,
again, I failed to see any significant contention even in HEAD as it
stands; so I'm unconvinced that you're solving a live problem.

regards, tom lane




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-16 Thread Martijn van Oosterhout
Hoi Tom,

On Mon, 16 Sep 2019 at 00:14, Tom Lane  wrote:
>
> I spent some more time thinking about this, and I'm still not too
> satisfied with this patch's approach.  It seems to me the key insights
> we're trying to make use of are:
>
> 1. We don't really need to keep the global tail pointer exactly
> up to date.  It's bad if it falls way behind, but a few pages back
> is fine.

Agreed.

> 2. When sending notifies, only listening backends connected to our
> own database need be awakened immediately.  Backends connected to
> other DBs will need to advance their queue pointer sometime, but
> again it doesn't need to be right away.

Agreed.

> 3. It's bad for multiple processes to all be trying to do
> asyncQueueAdvanceTail concurrently: they'll contend for exclusive
> access to the AsyncQueueLock.  Therefore, having the listeners
> do it is really the wrong thing, and instead we should do it on
> the sending side.

Agreed, but I'd add that in databases that are largely idle there may
never be a sender, so the listeners there need to be advanced some
other way.

> However, the patch as presented doesn't go all the way on point 3,
> instead having listeners maybe-or-maybe-not do asyncQueueAdvanceTail
> in asyncQueueReadAllNotifications.  I propose that we should go all
> the way and just define tail-advancing as something that happens on
> the sending side, and only once every few pages.  I also think we
> can simplify the handling of other-database listeners by including
> them in the set signaled by SignalBackends, but only if they're
> several pages behind.  So that leads me to the attached patch;
> what do you think?

I think I like the idea of having SignalBackends do the waking up of
a slow backend, but I'm not enthused by the "let's wake up (at once)
everyone that is behind". That's one of the issues I was explicitly
trying to solve. If there are a significant number of "slow" backends
then we get the "thundering herd" again. If the number of slow
backends exceeds the number of cores, commits across the whole system
can be held up for quite a while (which is what prompted this patch
in the first place; delays of multiple seconds were not unusual).

The maybe/maybe-not in asyncQueueReadAllNotifications amounts to: "if
I was behind, then I probably got woken up, hence I need to wake up
someone else". This ensures the cleanup proceeds in an orderly
fashion, leaving gaps where the lock isn't held so that COMMITs can
proceed.
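
To make that concrete, the reader side has roughly this shape (a
simplified sketch of the idea, not the patch verbatim):

    /* in asyncQueueReadAllNotifications(), heavily simplified */
    bool    advanceTail;

    LWLockAcquire(AsyncQueueLock, LW_SHARED);
    advanceTail = QUEUE_SLOW_BACKEND(MyBackendId);  /* were we lagging? */
    LWLockRelease(AsyncQueueLock);

    /* ... read and deliver notifications, updating our position ... */

    if (advanceTail)
        asyncQueueAdvanceTail();    /* may wake the next slow backend */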

> BTW, in my hands it seems like point 2 (skip wakening other-database
> listeners) is the only really significant win here, and of course
> that only wins when the notify traffic is spread across a fair number
> of databases.  Which I fear is not the typical use-case.  In single-DB
> use-cases, point 2 helps not at all.  I had a really hard time measuring
> any benefit from point 3 --- I eventually saw a noticeable savings
> when I tried having one notifier and 100 listen-only backends, but
> again that doesn't seem like a typical use-case.  I could not replicate
> your report of lots of time spent in asyncQueueAdvanceTail's lock
> acquisition.  I wonder whether you're using a very large max_connections
> setting and we already fixed most of the problem with that in bca6e6435.
> Still, this patch doesn't seem to make any cases worse, so I don't mind
> if it's just improving unusual use-cases.

I'm not sure if it's an unusual use-case, but it is my use-case :).
Specifically, there are 100+ instances of the same application running
on the same cluster with wildly different usage patterns. Some will be
idle because no-one is logged in, some will be quite busy. Although
there are only 2 listeners per database, that's still a lot of
listeners that can be behind. Though I agree that bca6e6435 will have
mitigated quite a lot (yes, max_connections is quite high). Another
mitigation would be to spread across more, smaller database clusters,
which we need to do anyway.

That said, your approach is conceptually simpler, which is also worth
something, and it gets essentially all the same benefits for more
normal use cases. If QUEUE_CLEANUP_DELAY were raised a bit, we could
mitigate the rest on the client side by having idle databases send
dummy notifies every now and then to trigger cleanup for their
database. The flip side is that slow backends will then have further
to catch up, thus holding the lock longer. It's not worth making it
configurable, so we have to guess; 16 is perhaps a good compromise.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-15 Thread Tom Lane
Martijn van Oosterhout  writes:
> On Sat, 14 Sep 2019 at 17:08, Tom Lane  wrote:
>> None of this seems to respond to my point: it looks to me like it would
>> work fine if you simply dropped the patch's additions in PreCommit_Notify
>> and ProcessCompletedNotifies, because there is already enough logic to
>> decide when to call asyncQueueAdvanceTail.

> ...
> However, I guess you're thinking of asyncQueueReadAllNotifications()
> triggering if the queue as a whole was too long. This could in
> principle work, but it does mean that at some point all backends
> sending NOTIFY are going to start calling asyncQueueAdvanceTail()
> every time, until the tail gets advanced, and if there are many idle
> listening backends behind, this could take a while. The slowest
> backend might receive more signals while it is processing and so end
> up running asyncQueueAdvanceTail() twice. The fact that signals
> coalesce stops the process from getting completely out of hand, but
> it does feel a little uncontrolled.
> The whole point of this patch is to ensure that only one backend at
> a time is woken up to call asyncQueueAdvanceTail().

I spent some more time thinking about this, and I'm still not too
satisfied with this patch's approach.  It seems to me the key insights
we're trying to make use of are:

1. We don't really need to keep the global tail pointer exactly
up to date.  It's bad if it falls way behind, but a few pages back
is fine.

2. When sending notifies, only listening backends connected to our
own database need be awakened immediately.  Backends connected to
other DBs will need to advance their queue pointer sometime, but
again it doesn't need to be right away.

3. It's bad for multiple processes to all be trying to do
asyncQueueAdvanceTail concurrently: they'll contend for exclusive
access to the AsyncQueueLock.  Therefore, having the listeners
do it is really the wrong thing, and instead we should do it on
the sending side.

However, the patch as presented doesn't go all the way on point 3,
instead having listeners maybe-or-maybe-not do asyncQueueAdvanceTail
in asyncQueueReadAllNotifications.  I propose that we should go all
the way and just define tail-advancing as something that happens on
the sending side, and only once every few pages.  I also think we
can simplify the handling of other-database listeners by including
them in the set signaled by SignalBackends, but only if they're
several pages behind.  So that leads me to the attached patch;
what do you think?
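
To sketch the per-backend test in SignalBackends that I have in mind
(simplified -- the attached patch has the details; asyncQueuePageDiff
already exists in async.c, QUEUE_CLEANUP_DELAY is the new threshold):

    QueuePosition pos = QUEUE_BACKEND_POS(i);

    if (QUEUE_BACKEND_DBOID(i) == MyDatabaseId)
    {
        /* same database: signal unless already caught up */
        if (QUEUE_POS_EQUAL(pos, QUEUE_HEAD))
            continue;
    }
    else
    {
        /* other database: signal only if several pages behind */
        if (asyncQueuePageDiff(QUEUE_POS_PAGE(QUEUE_HEAD),
                               QUEUE_POS_PAGE(pos)) < QUEUE_CLEANUP_DELAY)
            continue;
    }
    /* else fall through and send PROCSIG_NOTIFY_INTERRUPT to backend i */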

BTW, in my hands it seems like point 2 (skip wakening other-database
listeners) is the only really significant win here, and of course
that only wins when the notify traffic is spread across a fair number
of databases.  Which I fear is not the typical use-case.  In single-DB
use-cases, point 2 helps not at all.  I had a really hard time measuring
any benefit from point 3 --- I eventually saw a noticeable savings
when I tried having one notifier and 100 listen-only backends, but
again that doesn't seem like a typical use-case.  I could not replicate
your report of lots of time spent in asyncQueueAdvanceTail's lock
acquisition.  I wonder whether you're using a very large max_connections
setting and we already fixed most of the problem with that in bca6e6435.
Still, this patch doesn't seem to make any cases worse, so I don't mind
if it's just improving unusual use-cases.

regards, tom lane

diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index f26269b..7791f78 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -75,8 +75,10 @@
  *	  list of listening backends and send a PROCSIG_NOTIFY_INTERRUPT signal
  *	  to every listening backend (we don't know which backend is listening on
  *	  which channel so we must signal them all). We can exclude backends that
- *	  are already up to date, though.  We don't bother with a self-signal
- *	  either, but just process the queue directly.
+ *	  are already up to date, though, and we can also exclude backends that
+ *	  are in other databases (unless they are way behind and should be kicked
+ *	  to make them advance their pointers).  We don't bother with a
+ *	  self-signal either, but just process the queue directly.
  *
  * 5. Upon receipt of a PROCSIG_NOTIFY_INTERRUPT signal, the signal handler
  *	  sets the process's latch, which triggers the event to be processed
@@ -89,13 +91,14 @@
  *	  Inbound-notify processing consists of reading all of the notifications
  *	  that have arrived since scanning last time. We read every notification
  *	  until we reach either a notification from an uncommitted transaction or
- *	  the head pointer's position. Then we check if we were the laziest
- *	  backend: if our pointer is set to the same position as the global tail
- *	  pointer is set, then we move the global tail pointer ahead to where the
- *	  second-laziest backend is (in general, we take the MIN of the current
- *	

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-15 Thread Martijn van Oosterhout
On Sat, 14 Sep 2019 at 17:08, Tom Lane  wrote:
> Martijn van Oosterhout  writes:
> > On Fri, 13 Sep 2019 at 22:04, Tom Lane  wrote:
> >> But, really ... do we need the backendTryAdvanceTail flag at all?

> None of this seems to respond to my point: it looks to me like it would
> work fine if you simply dropped the patch's additions in PreCommit_Notify
> and ProcessCompletedNotifies, because there is already enough logic to
> decide when to call asyncQueueAdvanceTail.  In particular, the result from
> Signal[MyDB]Backends tells us whether anyone else was awakened, and
> ProcessCompletedNotifies already does asyncQueueAdvanceTail if not.
> As long as we did awaken someone, the ball's now in their court to
> make sure asyncQueueAdvanceTail happens eventually.

Ah, I think I see what you're getting at. As written,
asyncQueueReadAllNotifications() only calls asyncQueueAdvanceTail() if
*it* was a slow backend (advanceTail =
QUEUE_SLOW_BACKEND(MyBackendId)). In a situation where some databases
regularly use NOTIFY and a few others never do (but still listen),
this leads to the tail never being advanced.

However, I guess you're thinking of asyncQueueReadAllNotifications()
triggering if the queue as a whole was too long. This could in
principle work, but it does mean that at some point all backends
sending NOTIFY are going to start calling asyncQueueAdvanceTail()
every time, until the tail gets advanced, and if there are many idle
listening backends behind, this could take a while. The slowest
backend might receive more signals while it is processing and so end
up running asyncQueueAdvanceTail() twice. The fact that signals
coalesce stops the process from getting completely out of hand, but
it does feel a little uncontrolled.

The whole point of this patch is to ensure that only one backend at
a time is woken up to call asyncQueueAdvanceTail().

But you do point out that the return value of SignalMyDBBackends() is
used wrongly. The fact that no-one got signalled only means there were
no other listeners on this database, which says nothing about global
queue cleanup. What you want to know is whether you're the only
listener in the whole system, and you can test for that directly
(QUEUE_FIRST_BACKEND == MyBackendId &&
QUEUE_NEXT_BACKEND(MyBackendId) == InvalidBackendId). I can adjust
this in the next version if necessary; it's fairly harmless as-is,
since it only triggers in the case where a database is only notifying
itself, which probably isn't that common.

I hope I have correctly understood this time.

Have a nice weekend.
-- 
Martijn van Oosterhout  http://svana.org/kleptog/




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-14 Thread Tom Lane
Martijn van Oosterhout  writes:
> On Fri, 13 Sep 2019 at 22:04, Tom Lane  wrote:
>> But, really ... do we need the backendTryAdvanceTail flag at all?

> There are multiple issues here. asyncQueueReadAllNotifications() is
> going to be called by each listener simultaneously, so each listener
> is going to come to the same conclusion. On the other side, there is
> no guarantee we wake up anyone as a result of the NOTIFY, e.g. if
> there are no listeners in the current database. To be sure you try to
> advance the tail, you have to trigger on the sending side. The global
> is there because at the point we are inserting entries we are still in
> a user transaction, potentially holding many table locks (the issue we
> were running into in the first place). By setting
> backendTryAdvanceTail we can move the work to
> ProcessCompletedNotifies() which is after the transaction has
> committed and the locks released.

None of this seems to respond to my point: it looks to me like it would
work fine if you simply dropped the patch's additions in PreCommit_Notify
and ProcessCompletedNotifies, because there is already enough logic to
decide when to call asyncQueueAdvanceTail.  In particular, the result from
Signal[MyDB]Backends tells us whether anyone else was awakened, and
ProcessCompletedNotifies already does asyncQueueAdvanceTail if not.
As long as we did awaken someone, the ball's now in their court to
make sure asyncQueueAdvanceTail happens eventually.
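
For reference, the pre-existing flow in ProcessCompletedNotifies is
roughly this (simplified from HEAD):

    signalled = SignalBackends();

    if (listenChannels != NIL)
        asyncQueueReadAllNotifications();   /* we listen: read ourselves */
    else if (!signalled)
        asyncQueueAdvanceTail();            /* nobody else will, so we do */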

There are corner cases where someone else might get signaled but never
do asyncQueueAdvanceTail -- for example, if they're in process of exiting
--- but I think the whole point of this patch is that we don't care too
much if that occasionally fails to happen.  If there's a continuing
stream of NOTIFY activity, asyncQueueAdvanceTail will happen often
enough to ensure that the queue storage doesn't bloat unreasonably.

regards, tom lane




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-14 Thread Martijn van Oosterhout
Hoi Tom,


On Fri, 13 Sep 2019 at 22:04, Tom Lane  wrote:
>
> This throws multiple compiler warnings for me:

Fixed.

> Also, I don't exactly believe this bit:
[snip]
> It seems unlikely that insertion would stop exactly at a page boundary,
> but that seems to be what this is looking for.

This is how asyncQueueAddEntries() works: entries are never split
across pages, so if there is not enough room it advances to the
beginning of the next page and returns, which is why the offset here
is zero. I could set the global inside asyncQueueAddEntries() but that
seems icky. Another alternative is to have asyncQueueAddEntries()
return a boolean "moved to new page", but that's just a long-winded
way of doing what it does now.
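
To illustrate, the check relies on exactly that never-split behaviour
(a simplified sketch of the insertion loop in PreCommit_Notify):

    while (nextNotify != NULL)
    {
        /* ... acquire AsyncQueueLock exclusively ... */
        nextNotify = asyncQueueAddEntries(nextNotify);

        /*
         * Entries are never split across pages, so QUEUE_HEAD can only
         * be at offset 0 here if asyncQueueAddEntries() just stepped to
         * a fresh page.
         */
        if (QUEUE_POS_OFFSET(QUEUE_HEAD) == 0)
            backendTryAdvanceTail = true;
        /* ... release AsyncQueueLock ... */
    }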

> But, really ... do we need the backendTryAdvanceTail flag at all?
> I'm dubious, because it seems like asyncQueueReadAllNotifications
> would have already covered the case if we're listening.  If we're
> not listening, but we signalled some other listeners, it falls
> to them to kick us if we're the slowest backend.  If we're not the
> slowest backend then doing asyncQueueAdvanceTail isn't useful.

There are multiple issues here. asyncQueueReadAllNotifications() is
going to be called by each listener simultaneously, so each listener
is going to come to the same conclusion. On the other side, there is
no guarantee we wake up anyone as a result of the NOTIFY, e.g. if
there are no listeners in the current database. To be sure you try to
advance the tail, you have to trigger on the sending side. The global
is there because at the point we are inserting entries we are still in
a user transaction, potentially holding many table locks (the issue we
were running into in the first place). By setting
backendTryAdvanceTail we can move the work to
ProcessCompletedNotifies() which is after the transaction has
committed and the locks released.
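
In other words, the global just defers the work past the commit
(sketch):

    /* PreCommit_Notify(): inside the user's transaction -- note it only */
    backendTryAdvanceTail = true;

    /* ProcessCompletedNotifies(): after commit, locks released -- act */
    if (backendTryAdvanceTail)
    {
        backendTryAdvanceTail = false;
        asyncQueueAdvanceTail();
    }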

> I agree with getting rid of the asyncQueueAdvanceTail call in
> asyncQueueUnregister; on reflection doing that there seems pretty unsafe,
> because we're not necessarily in a transaction and hence anything that
> could possibly error is a bad idea.  However, it'd be good to add a
> comment explaining that we're not doing that and why it's ok not to.

Comment added.

> I'm fairly unimpressed with the "kick a random slow backend" logic.
> There can be no point in kicking any but the slowest backend, ie
> one whose pointer is exactly the oldest.  Since we're already computing
> the min pointer in that loop, it would actually take *less* logic inside
> the loop to remember the/a backend that had that pointer value, and then
> decide afterwards whether it's slow enough to merit a kick.

Adjusted this. I'm not sure it's actually clearer this way, but it is
less work inside the loop. A small change is that now it won't signal
anyone if this backend is itself the slowest, which is more correct.

Thanks for the feedback. Attached is version 3.

Have a nice weekend,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From 539d97b47c4535314c23df22e5e87ecc43149f3a Mon Sep 17 00:00:00 2001
From: Martijn van Oosterhout 
Date: Sat, 14 Sep 2019 11:01:11 +0200
Subject: [PATCH 1/2] Improve performance of async notifications

Advancing the tail pointer requires an exclusive lock which can block
backends from other databases, so it's worth keeping these attempts to a
minimum.

Instead of tracking the slowest backend exactly, we update the queue tail
more lazily, checking only when we switch to a new SLRU page.  Additionally,
instead of waking up every slow backend at once, we wake them one at a time.
---
 src/backend/commands/async.c | 167 ++-
 1 file changed, 124 insertions(+), 43 deletions(-)

diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index f26269b5ea..ffd7c7e90b 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -73,10 +73,11 @@
  *	  Finally, after we are out of the transaction altogether, we check if
  *	  we need to signal listening backends.  In SignalBackends() we scan the
  *	  list of listening backends and send a PROCSIG_NOTIFY_INTERRUPT signal
- *	  to every listening backend (we don't know which backend is listening on
- *	  which channel so we must signal them all). We can exclude backends that
- *	  are already up to date, though.  We don't bother with a self-signal
- *	  either, but just process the queue directly.
+ *	  to every listening backend for the relevant database (we don't know
+ *	  which backend is listening on which channel so we must signal them
+ *	  all).  We can exclude backends that are already up to date, though.
+ *	  We don't bother with a self-signal either, but just process the queue
+ *	  directly.
  *
  * 5. Upon receipt of a PROCSIG_NOTIFY_INTERRUPT signal, the signal handler
  *	  sets the process's latch, which triggers the event to be processed
@@ -89,13 +90,25 @@
  *	  Inbound-notify processing consists of reading all of the notifications
  *	  that have arrived since 

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-13 Thread Tom Lane
Martijn van Oosterhout  writes:
> Here is the rebased second patch.

This throws multiple compiler warnings for me:

async.c: In function 'asyncQueueUnregister':
async.c:1293: warning: unused variable 'advanceTail'
async.c: In function 'asyncQueueAdvanceTail':
async.c:2153: warning: 'slowbackendpid' may be used uninitialized in this function

Also, I don't exactly believe this bit:

+/* If we are advancing to a new page, remember this so after the
+ * transaction commits we can attempt to advance the tail
+ * pointer, see ProcessCompletedNotifies() */
+if (QUEUE_POS_OFFSET(QUEUE_HEAD) == 0)
+backendTryAdvanceTail = true;

It seems unlikely that insertion would stop exactly at a page boundary,
but that seems to be what this is looking for.

But, really ... do we need the backendTryAdvanceTail flag at all?
I'm dubious, because it seems like asyncQueueReadAllNotifications
would have already covered the case if we're listening.  If we're
not listening, but we signalled some other listeners, it falls
to them to kick us if we're the slowest backend.  If we're not the
slowest backend then doing asyncQueueAdvanceTail isn't useful.

I agree with getting rid of the asyncQueueAdvanceTail call in
asyncQueueUnregister; on reflection doing that there seems pretty unsafe,
because we're not necessarily in a transaction and hence anything that
could possibly error is a bad idea.  However, it'd be good to add a
comment explaining that we're not doing that and why it's ok not to.

I'm fairly unimpressed with the "kick a random slow backend" logic.
There can be no point in kicking any but the slowest backend, ie
one whose pointer is exactly the oldest.  Since we're already computing
the min pointer in that loop, it would actually take *less* logic inside
the loop to remember the/a backend that had that pointer value, and then
decide afterwards whether it's slow enough to merit a kick.
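
Concretely, something like this inside the existing min-computing loop
(a sketch, spelled with the existing macros; slowBackendPid plays the
role of the patch's slowbackendpid):

    QueuePosition min = QUEUE_HEAD;
    int32       slowBackendPid = InvalidPid;

    for (i = 1; i <= MaxBackends; i++)
    {
        if (QUEUE_BACKEND_PID(i) != InvalidPid)
        {
            QueuePosition pos = QUEUE_BACKEND_POS(i);

            min = QUEUE_POS_MIN(min, pos);
            if (QUEUE_POS_EQUAL(pos, min))
                slowBackendPid = QUEUE_BACKEND_PID(i);  /* the/a slowest */
        }
    }

    /* afterwards: signal slowBackendPid only if min is pages behind */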

regards, tom lane




Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-11 Thread Martijn van Oosterhout
Hoi Tom,


On Wed, 11 Sep 2019 at 00:18, Tom Lane  wrote:

>
> I pushed 0001 after doing some hacking on it --- it was sloppy about
> datatypes, and about whether the invalid-entry value is 0 or -1,
> and it was just wrong about keeping the list in backendid order.
> (You can't conditionally skip looking for where to put the new
> entry, if you want to maintain the order.  I thought about just
> defining the list as unordered, which would simplify joining the
> list initially, but that could get pretty cache-unfriendly when
> there are lots of entries.)
>
> 0002 is now going to need a rebase, so please do that.
>
>
Thanks for this, and good catch. Looks like I didn't test the first patch
by itself very well.

Here is the rebased second patch.

Thanks in advance,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From bc4b1b458564f758b7fa1c1f7b0397aade71db06 Mon Sep 17 00:00:00 2001
From: Martijn van Oosterhout 
Date: Mon, 3 Jun 2019 17:13:31 +0200
Subject: [PATCH 1/2] Improve performance of async notifications

Advancing the tail pointer requires an exclusive lock which can block
backends from other databases, so it's worth keeping these attempts to a
minimum.

Instead of tracking the slowest backend exactly, we update the queue tail
more lazily, checking only when we switch to a new SLRU page.  Additionally,
instead of waking up every slow backend at once, we wake them one at a time.
---
 src/backend/commands/async.c | 142 +--
 1 file changed, 101 insertions(+), 41 deletions(-)

diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index f26269b5ea..b9dd0ca139 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -73,10 +73,11 @@
  *	  Finally, after we are out of the transaction altogether, we check if
  *	  we need to signal listening backends.  In SignalBackends() we scan the
  *	  list of listening backends and send a PROCSIG_NOTIFY_INTERRUPT signal
- *	  to every listening backend (we don't know which backend is listening on
- *	  which channel so we must signal them all). We can exclude backends that
- *	  are already up to date, though.  We don't bother with a self-signal
- *	  either, but just process the queue directly.
+ *	  to every listening backend for the relevant database (we don't know
+ *	  which backend is listening on which channel so we must signal them
+ *	  all).  We can exclude backends that are already up to date, though.
+ *	  We don't bother with a self-signal either, but just process the queue
+ *	  directly.
  *
  * 5. Upon receipt of a PROCSIG_NOTIFY_INTERRUPT signal, the signal handler
  *	  sets the process's latch, which triggers the event to be processed
@@ -89,13 +90,25 @@
  *	  Inbound-notify processing consists of reading all of the notifications
  *	  that have arrived since scanning last time. We read every notification
  *	  until we reach either a notification from an uncommitted transaction or
- *	  the head pointer's position. Then we check if we were the laziest
- *	  backend: if our pointer is set to the same position as the global tail
- *	  pointer is set, then we move the global tail pointer ahead to where the
- *	  second-laziest backend is (in general, we take the MIN of the current
- *	  head position and all active backends' new tail pointers). Whenever we
- *	  move the global tail pointer we also truncate now-unused pages (i.e.,
- *	  delete files in pg_notify/ that are no longer used).
+ *	  the head pointer's position.
+ *
+ * 6. To avoid SLRU wraparound and minimize disk space the tail pointer
+ *	  needs to be advanced so that old pages can be truncated.  This
+ *	  however requires an exclusive lock and as such should be done
+ *	  infrequently.
+ *
+ *	  When a new notification is added, the writer checks to see if the
+ *	  tail pointer is more than QUEUE_CLEANUP_DELAY pages behind.  If
+ *	  so, it attempts to advance the tail, and if there are slow
+ *	  backends (perhaps because all the notifications were for other
+ *	  databases), wake one of them up by sending a signal.
+ *
+ *	  When the slow backend processes the queue it notes it was behind
+ *	  and so also tries to advance the tail, possibly waking up another
+ *	  slow backend.  Eventually all backends will have processed the
+ *	  queue, the global tail pointer is moved to a new page, and we
+ *	  also truncate now-unused pages (i.e., delete files in pg_notify/
+ *	  that are no longer used).
  *
  * An application that listens on the same channel it notifies will get
  * NOTIFY messages for its own NOTIFYs.  These can be ignored, if not useful,
@@ -211,6 +224,12 @@ typedef struct QueuePosition
 	 (x).page != (y).page ? (x) : \
 	 (x).offset > (y).offset ? (x) : (y))
 
+/* how many pages does a backend need to be behind before it needs to be signalled */
+#define QUEUE_CLEANUP_DELAY 4
+
+/* is a backend so far behind it needs to be signalled? */
+#define QUEUE_SLOW_BACKEND(i) \
+	

Re: [PATCH] Improve performance of NOTIFY over many databases (v2)

2019-09-10 Thread Tom Lane
Martijn van Oosterhout  writes:
> The original three patches have been collapsed into one as given the
> changes discussed it didn't make sense to keep them separate. There
> are now two patches (the third is just to help with testing):

> Patch 1: Tracks the listening backends in a list so non-listening
> backends can be quickly skipped over. This is separate because it's
> orthogonal to the rest of the changes and there are other ways to do
> this.

> Patch 2: This is the meat of the change. It implements all the
> suggestions discussed:

I pushed 0001 after doing some hacking on it --- it was sloppy about
datatypes, and about whether the invalid-entry value is 0 or -1,
and it was just wrong about keeping the list in backendid order.
(You can't conditionally skip looking for where to put the new
entry, if you want to maintain the order.  I thought about just
defining the list as unordered, which would simplify joining the
list initially, but that could get pretty cache-unfriendly when
there are lots of entries.)
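
For the record, keeping the list ordered means the insertion (in
Exec_ListenPreCommit) ends up shaped roughly like this -- a sketch,
modulo the macro spellings and the invalid-entry value:

    if (QUEUE_FIRST_LISTENER == InvalidBackendId ||
        QUEUE_FIRST_LISTENER > MyBackendId)
    {
        /* we belong at the front */
        QUEUE_NEXT_LISTENER(MyBackendId) = QUEUE_FIRST_LISTENER;
        QUEUE_FIRST_LISTENER = MyBackendId;
    }
    else
    {
        /* walk to the last entry with a smaller backend id */
        int         prev = QUEUE_FIRST_LISTENER;

        while (QUEUE_NEXT_LISTENER(prev) != InvalidBackendId &&
               QUEUE_NEXT_LISTENER(prev) < MyBackendId)
            prev = QUEUE_NEXT_LISTENER(prev);
        QUEUE_NEXT_LISTENER(MyBackendId) = QUEUE_NEXT_LISTENER(prev);
        QUEUE_NEXT_LISTENER(prev) = MyBackendId;
    }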

0002 is now going to need a rebase, so please do that.

regards, tom lane




[PATCH] Improve performance of NOTIFY over many databases (v2)

2019-08-02 Thread Martijn van Oosterhout
Hoi hackers,

Here is a reworked version of the previous patches.

The original three patches have been collapsed into one as given the
changes discussed it didn't make sense to keep them separate. There
are now two patches (the third is just to help with testing):

Patch 1: Tracks the listening backends in a list so non-listening
backends can be quickly skipped over. This is separate because it's
orthogonal to the rest of the changes and there are other ways to do
this.

Patch 2: This is the meat of the change. It implements all the
suggestions discussed:

- The queue tail is now only updated lazily, whenever the notify queue
moves to a new page. This did require a new global to track this state
through the transaction commit, but it seems worth it.

- Only backends for the current database are signalled when a
notification is made

- Slow backends are woken up one at a time rather than all at once

- A backend is allowed to lag up to 4 SLRU pages (at the standard 8 kB
page size, roughly 32 kB of queued notifications) behind before being
signalled. This is a tradeoff between how often to get woken up versus
how much work to do once woken up.

- All the relevant comments have been updated to describe the new
algorithm. Locking should also be correct now.

This means that in the normal case, where listening backends get a
notification occasionally, no-one will ever be considered slow. An
exclusive lock for cleanup will happen about once per SLRU page.
There are still the exclusive locks taken when adding notifications,
but those are unavoidable.

One minor issue is that pg_notification_queue_usage() will now return
a small but non-zero number (about 3e-6) even when nothing is really
going on. This could be fixed by having it take an exclusive lock
instead and updating to the latest values, but that barely seems
worth it.

Performance-wise it's even better than my original patches, with about
20-25% reduction in CPU usage in my test setup (using the test script
sent previously).

Here is the log output from my postgres, where you see the signalling in action:

--
16:42:48.673 [10188] martijn@test_131 DEBUG:  PreCommit_Notify
16:42:48.673 [10188] martijn@test_131 DEBUG:  NOTIFY QUEUE = (74,896)...(79,0)
16:42:48.673 [10188] martijn@test_131 DEBUG:  backendTryAdvanceTail -> true
16:42:48.673 [10188] martijn@test_131 DEBUG:  AtCommit_Notify
16:42:48.673 [10188] martijn@test_131 DEBUG:  ProcessCompletedNotifies
16:42:48.673 [10188] martijn@test_131 DEBUG:  backendTryAdvanceTail -> false
16:42:48.673 [10188] martijn@test_131 DEBUG:  asyncQueueAdvanceTail
16:42:48.673 [10188] martijn@test_131 DEBUG:  waking backend 137 (pid 10055)
16:42:48.673 [10055] martijn@test_067 DEBUG:  ProcessIncomingNotify
16:42:48.673 [10187] martijn@test_131 DEBUG:  ProcessIncomingNotify
16:42:48.673 [10055] martijn@test_067 DEBUG:  asyncQueueAdvanceTail
16:42:48.673 [10055] martijn@test_067 DEBUG:  waking backend 138 (pid 10056)
16:42:48.673 [10187] martijn@test_131 DEBUG:  ProcessIncomingNotify: done
16:42:48.673 [10055] martijn@test_067 DEBUG:  ProcessIncomingNotify: done
16:42:48.673 [10056] martijn@test_067 DEBUG:  ProcessIncomingNotify
16:42:48.673 [10056] martijn@test_067 DEBUG:  asyncQueueAdvanceTail
16:42:48.673 [10056] martijn@test_067 DEBUG:  ProcessIncomingNotify: done
16:42:48.683 [9991] martijn@test_042 DEBUG:  Async_Notify(changes)
16:42:48.683 [9991] martijn@test_042 DEBUG:  PreCommit_Notify
16:42:48.683 [9991] martijn@test_042 DEBUG:  NOTIFY QUEUE = (75,7744)...(79,32)
16:42:48.683 [9991] martijn@test_042 DEBUG:  AtCommit_Notify
-

Have a nice weekend.
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
From 82366f1dbc0fc234fdd10dbc15519b3cf7104684 Mon Sep 17 00:00:00 2001
From: Martijn van Oosterhout 
Date: Tue, 23 Jul 2019 16:49:30 +0200
Subject: [PATCH 1/3] Maintain queue of listening backends to speed up loops

In particular, the loop that advances the tail pointer is called fairly
often while holding an exclusive lock.
---
 src/backend/commands/async.c | 45 +++-
 1 file changed, 40 insertions(+), 5 deletions(-)

diff --git a/src/backend/commands/async.c b/src/backend/commands/async.c
index 6e9c580ec6..ba0b1baecc 100644
--- a/src/backend/commands/async.c
+++ b/src/backend/commands/async.c
@@ -217,6 +217,7 @@ typedef struct QueueBackendStatus
 {
 	int32		pid;			/* either a PID or InvalidPid */
 	Oid			dboid;			/* backend's database OID, or InvalidOid */
+	int			nextListener;	/* backendid of next listener, 0=last */
 	QueuePosition pos;			/* backend has read queue up to here */
 } QueueBackendStatus;
 
@@ -247,6 +248,7 @@ typedef struct AsyncQueueControl
 	QueuePosition tail;			/* the global tail is equivalent to the pos of
  * the "slowest" backend */
 	TimestampTz lastQueueFillWarn;	/* time of last queue-full msg */
+	int			firstListener;	/* backendId of first listener, 0=none */
 	QueueBackendStatus backend[FLEXIBLE_ARRAY_MEMBER];
 	/* backend[0] is not used; used entries are from [1] to [MaxBackends] */
 } AsyncQueueControl;
@@ -257,8 +259,11 @@ static