o set autovacuum_max_workers sky-high =).
We have a situation where, with thousands of tables, autovacuum can't vacuum all
the tables that need it. It simply vacuums some of the most heavily modified tables
and never reaches the others. Only a manual vacuum helps in this situation. With the
wraparound issue it can be a
All advice very much appreciated, thanks
--
*Rowan Seymour* | +260 964153686
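For reference, the kind of autovacuum tuning mentioned above might look roughly like
this (a sketch only, with made-up values; ALTER SYSTEM needs 9.4+, and
autovacuum_max_workers only takes effect after a server restart):

    -- Give autovacuum more workers and a larger I/O budget for installs with
    -- thousands of tables. All values below are placeholders, not recommendations.
    ALTER SYSTEM SET autovacuum_max_workers = 10;          -- default 3, needs restart
    ALTER SYSTEM SET autovacuum_naptime = '15s';           -- default 1min
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;  -- default -1 (uses vacuum_cost_limit)
    SELECT pg_reload_conf();                               -- picks up the reloadable settings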
Hello! What do you mean by
"Server is an Amazon RDS instance with default settings and Postgres
9.3.10, with one other database in the instance."
Is PG on the default config or something else?
Is it the default config as shipped with the compiled version? If so, you
should definitely do some tuning on it.
Looking at the plan, I see a lot of disk reads. That can be linked to the small
amount of shared memory dedicated to PG, exactly as Tom said.
Can you share your PG config, or try raising the shared_buffers parameter, for example?
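For example, something like this (a sketch; on RDS shared_buffers is changed through the
DB parameter group rather than postgresql.conf, and changing it requires a restart):

    SHOW shared_buffers;        -- the compiled-in default is only 128MB
    SHOW effective_cache_size;  -- worth checking against the instance's RAM as well
    -- e.g. shared_buffers = '2GB' in postgresql.conf / the parameter group
    -- (illustrative value; around 25% of RAM is a common starting point)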
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
              ->  Seq Scan on public.dim_cliente  (cost=0.00..618.90 rows=16890 width=86) (actual time=0.005..13.736 rows=16890 loops=1)
                    Output: dim_cliente.tipocliente, dim_cliente.a1_ibge, dim_cliente.a1_cod, dim_cliente.a1_nome, dim_cliente.key_cliente
              ->  Hash  (cost=18.90..18.90 rows=590 width=59) (actual time=0.715..0.715 rows=590 loops=1)
                    Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor
                    Buckets: 1024  Batches: 1  Memory Usage: 56kB
                    ->  Seq Scan on public.dim_vendedor  (cost=0.00..18.90 rows=590 width=59) (actual time=0.024..0.405 rows=590 loops=1)
                          Output: dim_vendedor.a3_nome, dim_vendedor.key_vendedor
 Total runtime: 37249.268 ms
(25 rows)
Is there anything I can do to solve this problem? Is it a bug or a
config problem?
Here is a link to a dump of the tables:
https://drive.google.com/file/d/0Bwupj61i9BtWZ1NiVXltaWc0dnM/view?usp=sharing
I appreciate your help
Hello!
What is your Postgres version?
Do you have correct statistics on these tables?
Please show your execution plans with buffers, i.e. EXPLAIN
(ANALYZE, BUFFERS) ...
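For example (a sketch; some_table and some_column are placeholders, not from the thread):

    -- Shows node timings plus shared buffer hits vs. reads for the slow statement:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM some_table
    WHERE some_column = 42;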
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
as I know.
Thanks in advance.
Hello Javier!
Our tests show that PG 9.4 scales well up to 60 Intel cores, i.e.
pgbench -S with the DB on tmpfs gave us 700,000 tps. After 60 cores, s_lock
dominates CPU usage. 9.5 scales way better.
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
itions can be checked in the index.
regards, tom lane
Hello Bertrand once again!
What's your status? Has the plan changed after deploying the three-field
index?
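For reference, a sketch of what is meant (the real table and column names come from
earlier in the thread and are not shown in this excerpt, so placeholders are used):

    -- Build the suggested three-column composite index without blocking writes:
    CREATE INDEX CONCURRENTLY some_table_col_a_col_b_col_c_idx
        ON some_table (col_a, col_b, col_c);
    ANALYZE some_table;  -- refresh table statistics afterwards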
--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On 27.10.2015 14:10, Bertrand Paquet wrote:
Yes, I have run VACUUM ANALYZE, no effect.
Bertrand
2015-10-27 12:08 GMT+01:00 Alex Ignatov <a.igna...@postgrespro.ru>:
On 27.10.2015 12:35, Bertrand Paquet wrote:
Hi all,
We have a slow query. After analy
| | in_progress +
| | | error +
| | | sent_to_proxy
(3 rows)
# select count(*) from external_sync_messages;
 count
--------
 992912
(1 row)
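Since the reply below asks about statistics on this table, here is a sketch of how they
could be checked (illustration only; external_sync_messages is the table named above):

    -- When were the optimizer statistics last refreshed?
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'external_sync_messages';

    -- Per-column statistics the planner uses (distinct values, null fraction):
    SELECT attname, n_distinct, null_frac
    FROM pg_stats
    WHERE tablename = 'external_sync_messages';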
Hello, Bertrand!
Maybe statistics on external_sync_