On Tue, 2020-08-18 at 19:52 -0400, Jim Jarvie wrote:
> I have a system which implements a message queue with the basic pattern
> that a process selects a group of, for example 250, rows for processing
> via SELECT ... LIMIT 250 FOR UPDATE SKIP LOCKED.
>
> When there are a small number of concurrent connections to process [...]
Also, have you checked how bloated your indexes are getting? Do you run
default autovacuum settings? Did you update to the new default 2ms cost
delay value? With a destructive queue, it would be very important to
ensure autovacuum is keeping up with the churn. Share your basic table
structure and indexes.
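For example (the table and index names here are invented, since the
schema hasn't been shared yet):

    -- check index bloat with the pgstattuple extension
    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT avg_leaf_density, leaf_fragmentation
      FROM pgstatindex('queue_pkey');

    -- is autovacuum keeping up with the dead tuples?
    SELECT last_autovacuum, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'queue';

    -- make autovacuum more aggressive on the queue table
    ALTER TABLE queue SET (autovacuum_vacuum_cost_delay = 2,
                           autovacuum_vacuum_scale_factor = 0.01);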
On Tue, Aug 18, 2020 at 6:22 PM Jim Jarvie wrote:
> There is some ordering on the select [ORDER BY q_id] so each block of
> 250 is sequential-ish queue items; I just need them more or less in the
> order they were queued, so as near FIFO as possible without being
> totally strict on absolute sequence.
Did you try using NOWAIT instead of SKIP LOCKED to see if the behavior
still shows up?
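Something along these lines (column and table names are only
illustrative); NOWAIT raises a lock_not_available error instead of
silently skipping contended rows, so the contrast can show whether lock
contention is what you're hitting:

    SELECT q_id, payload
      FROM queue
     WHERE status = 'pending'
     ORDER BY q_id
     LIMIT 250
       FOR UPDATE NOWAIT;  -- errors out if any selected row is already locked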
On Tue, Aug 18, 2020, 8:22 PM Jim Jarvie wrote:
> Thank you for the quick response.
>
> No adjustments of fill factors. Hadn't thought of that - I'll investigate
> and try some options to see if I can measure an effect.
Thank you for the quick response.

No adjustments of fill factors. Hadn't thought of that - I'll
investigate and try some options to see if I can measure an effect.

There is some ordering on the select [ORDER BY q_id] so each block of
250 is sequential-ish queue items; I just need them more or less in the
order they were queued, so as near FIFO as possible without being
totally strict on absolute sequence.
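If you do experiment with fill factor, it is just a storage parameter
change (names again illustrative); note that existing pages are only
repacked by a table rewrite:

    -- leave free space in heap pages so updates can stay on the same
    -- page and remain HOT (avoiding new index entries)
    ALTER TABLE queue SET (fillfactor = 70);
    ALTER INDEX queue_pkey SET (fillfactor = 70);

    -- applies to newly written pages only; rewrite to repack existing ones
    VACUUM FULL queue;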
Message queue...

Are rows deleted? Are they updated once or many times? Have you
adjusted fillfactor on the table or indexes? How many rows are in the
table currently, or on average? Is there any ordering to which rows you
update?

It seems likely that one of the experts/code contributors will chime in.
Using PostgreSQL v12 on Linux [Ubuntu 16.04 LTS].

I have a system which implements a message queue with the basic pattern
that a process selects a group of, for example 250, rows for processing
via SELECT ... LIMIT 250 FOR UPDATE SKIP LOCKED.

When there are a small number of concurrent connections to process [...]
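For reference, a minimal form of the pattern described (table and
column names invented, since the actual schema wasn't shared):

    BEGIN;

    SELECT q_id, payload
      FROM queue
     WHERE status = 'pending'
     ORDER BY q_id                -- near-FIFO ordering
     LIMIT 250
       FOR UPDATE SKIP LOCKED;    -- concurrent workers skip claimed rows

    -- ... process the batch, then mark (or delete) the rows ...
    -- :claimed_ids is a client-side bind parameter
    UPDATE queue SET status = 'done' WHERE q_id = ANY (:claimed_ids);

    COMMIT;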
Hello,
I wish to use logical replication in Postgres to capture transactions
as CDC and forward them to a custom sink.

To understand the overhead of the logical replication workflow, I
created a toy subscriber using the V3PGReplicationStream that
acknowledges LSNs after every 16k reads by calling setAppliedLSN().
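For anyone wanting to reproduce the server side, the slot and
publication can be set up in plain SQL before attaching the JDBC
client (the slot and publication names here are made up):

    -- publish the tables whose changes should be captured
    CREATE PUBLICATION cdc_pub FOR ALL TABLES;

    -- logical slot using pgoutput, the plugin a pgjdbc
    -- V3PGReplicationStream consumer typically decodes
    SELECT pg_create_logical_replication_slot('cdc_slot', 'pgoutput');

    -- or, for a quick human-readable sanity check, use test_decoding
    -- and peek at changes without consuming them
    SELECT pg_create_logical_replication_slot('cdc_txt', 'test_decoding');
    SELECT * FROM pg_logical_slot_peek_changes('cdc_txt', NULL, NULL);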