On Sun, Feb 24, 2019 at 2:04 AM Justin Pryzby wrote:
> Some ideas:
>
> You could ALTER TABLE SET (fillfactor=50) to try to maximize use of HOT
> indices during UPDATEs (check pg_stat_user_indexes).
>
> You could also ALTER TABLE SET autovacuum parameters for more aggressive
> vacuuming.
>
> You could recreate indices using the CONCURRENTLY trick
> (CREATE INDEX
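
For reference, the three suggestions above might look roughly like this in SQL. This is a sketch, not the poster's actual commands: the table name Queue comes from the thread, but the specific autovacuum thresholds are placeholder values to be tuned per workload.

```sql
-- Leave free space in each heap page so UPDATEs can stay on-page (HOT)
-- and avoid index-entry churn. Affects only newly written pages, so a
-- table rewrite (e.g. VACUUM FULL, or pg_repack online) may be needed
-- for existing data to benefit.
ALTER TABLE Queue SET (fillfactor = 50);

-- Vacuum this table far more aggressively than the global defaults
-- (example values; tune for the actual churn rate).
ALTER TABLE Queue SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 0
);

-- The CONCURRENTLY trick: rebuild a bloated index without blocking
-- writers by creating a fresh index, dropping the old one, and renaming.
CREATE UNIQUE INDEX CONCURRENTLY queue_idx_pending_new
    ON Queue (jobId) WHERE pending;
DROP INDEX CONCURRENTLY queue_idx_pending;
ALTER INDEX queue_idx_pending_new RENAME TO queue_idx_pending;
```

After the ALTER TABLE, pg_stat_user_indexes (and pg_stat_user_tables.n_tup_hot_upd) can confirm whether UPDATEs are actually taking the HOT path.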
On 2/23/2019 16:13, Peter Geoghegan wrote:
On Sat, Feb 23, 2019 at 1:06 PM Gunther wrote:
> I thought to keep my index tight, I would define it like this:
>
> CREATE UNIQUE INDEX Queue_idx_pending ON Queue(jobId) WHERE pending;
>
> so that only pending jobs are in that index.
>
> When a job is done, follow up work is often inserted into
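
For context, the usual way such a partial-index queue is consumed by many parallel workers is with FOR UPDATE SKIP LOCKED (PostgreSQL 9.5+), so workers claim different rows instead of serializing on the same row lock. A generic sketch, not necessarily the poster's code; everything beyond jobId and pending is assumed:

```sql
-- Each worker claims one pending job. SKIP LOCKED lets 12+ concurrent
-- workers each grab a different row rather than blocking on the first
-- locked one; the partial index Queue_idx_pending drives the inner scan.
UPDATE Queue
   SET pending = false
 WHERE jobId = (
         SELECT jobId
           FROM Queue
          WHERE pending
          ORDER BY jobId
          LIMIT 1
          FOR UPDATE SKIP LOCKED
       )
RETURNING jobId;
```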
On Sat, Feb 23, 2019 at 1:06 PM, Gunther wrote:

Hi,
I am using an SQL queue for distributing work to massively parallel
workers. Workers come in servers with 12 parallel threads. One of those
worker sets handles 7 transactions per second. If I add a second one,
for 24 parallel workers, it scales to 14 /s. Even a third, for 36
parallel