killbit still have to
be read from the index first. And with many millions of dead entries that
isn't free.
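The cost of such dead index entries can be observed with the pgstattuple extension; a minimal sketch, assuming a hypothetical index name:

```sql
-- A sketch, assuming the pgstattuple extension is installed and an
-- index named queue_pending_idx exists (hypothetical name).
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Low avg_leaf_density / high leaf_fragmentation indicate a bloated
-- index: killed entries still occupy leaf pages until vacuum removes them.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('queue_pending_idx');
</imports>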
PS: ignoring killbits on hot standby slaves is a source of endless pain in
many cases.
--
Maxim Boguk
Senior Postgresql DBA
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
cannot keep efficient partial indexes for the queue in
a clean/non-bloated state.
In my opinion the best solution is to keep the list of entries to process ("*around
1000-1500 tasks in pending statuses*") duplicated in a separate tiny
table (via triggers, or implemented on the application level)
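The trigger variant of this pattern could look like the following sketch; the table and column names (tasks, id, status) are assumptions, not taken from the original thread:

```sql
-- Tiny side table that mirrors only pending tasks, so queue scans
-- never touch the bloated main table or its indexes.
CREATE TABLE pending_tasks (
    task_id bigint PRIMARY KEY
);

CREATE OR REPLACE FUNCTION sync_pending_tasks() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('INSERT', 'UPDATE') AND NEW.status = 'pending' THEN
        INSERT INTO pending_tasks (task_id) VALUES (NEW.id)
        ON CONFLICT DO NOTHING;
    ELSIF TG_OP IN ('UPDATE', 'DELETE') THEN
        DELETE FROM pending_tasks WHERE task_id = OLD.id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tasks_sync_pending
AFTER INSERT OR UPDATE OR DELETE ON tasks
FOR EACH ROW EXECUTE FUNCTION sync_pending_tasks();
```

Because the side table stays at 1000-1500 rows, it remains cheap to vacuum and scan regardless of how large the main table grows.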
done quickly) and without pg_stat_statements there will be no
adequate performance monitoring of the database.
But I'm pretty sure that the issue will go away with pg_stat_statements
disabled.
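To test that hypothesis without losing monitoring entirely, a postgresql.conf sketch (both settings are real pg_stat_statements knobs; the choice between them is the trade-off described above):

```
# Option 1: unload the extension completely (requires a restart):
shared_preload_libraries = ''            # was 'pg_stat_statements'

# Option 2: keep it loaded but idle, to compare overhead:
# pg_stat_statements.track = none        # default is 'top'
```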
On Sat, Dec 31, 2022 at 2:26 PM Maxim Boguk wrote:
> Hi,
>
> When performing post-mortem analysis of some short latency spikes on a
> heavily loaded database, I found that the reason for a (less than 10
> second) latency spike wasn't the EXECUTE stage but the BIND stage.
>
CPU core per pgbouncer process
(under 20% most of the time).
So it's unlikely to be a result of pgbouncer being overloaded.
Load average is 10-15 at all times; the servers are heavily overprovisioned
(2-3x in available CPU resources) because the application is quite sensitive
to database latency.
And during these latency spikes EXECUTE works without any issues (i.e.
only PARSE/BIND suffer).
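One way to see the per-phase split in the server logs (a sketch, assuming a threshold of 100 ms is acceptable on a loaded system): with extended-protocol clients, PostgreSQL logs parse, bind, and execute durations as separate lines.

```
# postgresql.conf sketch (hypothetical threshold):
log_min_duration_statement = 100    # ms; slow phases then appear as, e.g.:
#   LOG:  duration: 830.123 ms  bind <unnamed>: SELECT ...
#   LOG:  duration: 12.456 ms  execute <unnamed>: SELECT ...
```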
most
curious log entry of 2022 for me.
On Sat, Dec 31, 2022 at 4:32 PM Justin Pryzby wrote:
> On Sat, Dec 31, 2022 at 02:26:08PM +0200, Maxim Boguk wrote:
> > Hi,
> >
> > When performing post-mortem analysis of some short latency spikes on a
> > heavily loaded database, I found that the reason for (less
duration of such a spike is usually
under 1s, but given how many TPS the database is serving, 1s is too much
and it affects end users).
On Fri, Apr 10, 2020 at 1:16 AM Justin Pryzby wrote:
> On Fri, Apr 10, 2020 at 12:51:03AM +1000, Maxim Boguk wrote:
> > With the database on a dedicated server I encountered an unusual load profile:
> > multi-threaded (200-connection static-size pool via pgbouncer) insert-only
> > int
almost all backends are waiting on the buffer_content lock, and some
backends are waiting for XidGenLock.
And there is always one backend in SLRUSync.
If anyone has a good idea what's going on in this case and how I can
fix it, any ideas are welcome.
So far I'm out of ideas.
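For anyone reproducing this, the wait-event distribution above can be summarized live from pg_stat_activity (a sketch; the wait_event columns exist in PostgreSQL 9.6+):

```sql
-- Snapshot of what active backends are currently waiting on:
-- buffer_content shows up as LWLock, SLRUSync as an IO wait event.
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY 1, 2
ORDER BY 3 DESC;
```

Sampling this query repeatedly during a spike gives a rough profile of where the backends spend their time.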
used during foreign key
checks (see the non-zero seq_scan counters for the linked tables).
The only possible reason (IMHO) is the incorrect use of numeric in place of
bigint. I recommend changing the types of events.eventid (and any other
similar fields) to bigint.
It should resolve your performance issues with deletes on ev
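The suggested type change could be sketched as follows; events.eventid comes from the message, while the referencing table's column name is an assumption:

```sql
-- Note: each ALTER rewrites the whole table and takes an
-- ACCESS EXCLUSIVE lock, so schedule it in a maintenance window.
ALTER TABLE events
    ALTER COLUMN eventid TYPE bigint USING eventid::bigint;

-- Referencing columns must match the new type, otherwise FK checks
-- can still fall back to sequential scans:
ALTER TABLE event_recovery
    ALTER COLUMN eventid TYPE bigint USING eventid::bigint;
```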
pg_stat_user_tables results
(I now expect to see 3 sequential scans on event_recovery, and maybe on some
other tables as well).
Kind Regards,
Maxim
> Resurs Bank AB
> Kristian Ejvind
> Linux System Administrator
> IT Operations | Technical Operations
>
> Ekslingan 8
> Box 222 09, SE-25467 Helsingborg
>
> Direct phone:
> Mobile: +46 728571483
> Switchboard: +46 42 382000
> Fax:
> E-mail: kristian.ejv...