On Mon, May 12, 2025 at 9:07 PM Tom Lane wrote:
> Maxim Boguk writes:
> > Reading the code - probably the lowest-hanging fruit is to make
> > 'The current multiplier of 1000 * cpu_operator_cost' configurable in
> > future versions.
Is the 100x backe
On Mon, May 12, 2025 at 6:33 PM Tom Lane wrote:
> Maxim Boguk writes:
> > And the problem is that the cost of a custom plan ignores the cost of
> > planning itself (which is around two orders of magnitude worse than the
> > cost of real-time partition pruning of a generic p
to help with similar issues (where the planning cost is calculated as
cost_planned * (some heuristic function of the number of tables involved
in the query)).
In my case the high cost of planning itself should force the database to
use the generic plan.
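For what it's worth, it is easy to watch which way the plan cache flips for
a given prepared statement; pg_prepared_statements exposes per-statement
counters in PostgreSQL 14 and later (a generic check, no schema assumptions):

-- Per-prepared-statement plan counters (PostgreSQL 14+).
SELECT name, generic_plans, custom_plans
FROM pg_prepared_statements;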
--
Maxim Boguk
Senior Postgresql DBA
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
On Mon, May 12, 2025 at 4:48 PM Andrei Lepikhov wrote:
> On 5/12/25 15:08, Maxim Boguk wrote:
> > PS: the problem is not the difference between custom and generic planning
> > time but with prepared statements: a generic plan is planned only once,
> > but a custom plan is planned on every call (and
On Mon, May 12, 2025 at 3:08 PM Andrei Lepikhov wrote:
> On 5/12/25 13:49, Maxim Boguk wrote:
> > I suspect this situation should be quite common with queries over
> > partitioned tables (where planning time is usually quite high).
> >
> > Any suggestions what cou
the same time a huge loss in performance (but the actual plans are the same
in both cases).
I suspect this situation should be quite common with queries over
partitioned tables (where planning time is usually quite high).
Any suggestions what could be done there, outside of using
force_generic_plan for a particular DB user (which will surely kill
performance for other queries)?
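For reference, the workaround I mean is just a GUC; "app_user" below is a
placeholder role name, and the same setting can also be applied per session
instead of per role:

-- Force generic plans for one role only (PostgreSQL 12+); app_user is a placeholder.
ALTER ROLE app_user SET plan_cache_mode = force_generic_plan;

-- Or limit the scope to only the sessions that run the partitioned-table queries:
SET plan_cache_mode = force_generic_plan;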
--
Maxim Boguk
Senior Postgresql DBA
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
Tuples with the killbit set still have to
be read from the index first. And with many millions of dead entries it
isn't free.
PS: ignoring killbits on hot standby slaves is a source of endless pain in
many cases.
--
Maxim Boguk
Senior Postgresql DBA
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
as
well, because you cannot keep efficient partial indexes for the queue in
a clean/non-bloated state.
In my opinion the best solution is to keep the list of entries to process
("around 1000-1500 tasks in pending statuses") duplicated in a separate
tiny table (via triggers or implement it on
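A rough sketch of such a trigger-maintained pending-tasks table (all table,
column and status names below are made up; adapt to the real schema):

-- Hypothetical queue table: task(id bigint PRIMARY KEY, status text, ...).
CREATE TABLE task_pending (
    task_id bigint PRIMARY KEY REFERENCES task (id) ON DELETE CASCADE
);

CREATE FUNCTION task_pending_sync() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.status IN ('new', 'retry') THEN
        -- the task entered (or stayed in) a pending status
        INSERT INTO task_pending (task_id) VALUES (NEW.id)
        ON CONFLICT DO NOTHING;
    ELSIF TG_OP = 'UPDATE' THEN
        -- the task left the pending statuses
        DELETE FROM task_pending WHERE task_id = NEW.id;
    END IF;
    RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$$;

CREATE TRIGGER task_pending_sync
AFTER INSERT OR UPDATE OF status ON task
FOR EACH ROW EXECUTE FUNCTION task_pending_sync();

Workers then scan the tiny task_pending table (optionally joined back to task
with FOR UPDATE SKIP LOCKED), so bloat on the big table's indexes stops
affecting the queue scan.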
done quickly) and without pg_stat_statements there will be no
adequate performance monitoring of the database.
But I'm pretty sure that the issue will go away with pg_stat_statements
disabled.
--
Maxim Boguk
Senior Postgresql DBA
https://dataegret.com/
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
On Sat, Dec 31, 2022 at 2:26 PM Maxim Boguk wrote:
> Hi,
>
> When performing post-mortem analysis of some short latency spikes on a
> heavily loaded database, I found that the reason for a (less than 10 second)
> latency spike wasn't on the EXECUTE stage but on the BIND stage
single CPU core per pgbouncer process
(most of the time under 20%).
So it's unlikely to be a result of pgbouncer being overloaded.
--
Maxim Boguk
Senior Postgresql DBA
https://dataegret.com/
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
LA 10-15 all the time; the servers are really overprovisioned (2-3x in
available CPU resources) because the application is quite sensitive to
database latency.
And during these latency spikes EXECUTE works without any issues (i.e.
only PARSE/BIND suck).
--
Maxim Boguk
Senior Postgresql DBA
https
oaded and perfectly healthy server - probably the most
curious log entry of 2022 for me.
--
Maxim Boguk
Senior Postgresql DBA
https://dataegret.com/
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
On Sat, Dec 31, 2022 at 4:32 PM Justin Pryzby wrote:
> On Sat, Dec 31, 2022 at 02:26:08PM +0200, Maxim Boguk wrote:
> > Hi,
> >
> > When performing post-mortem analysis of some short latency spikes on a
> > heavily loaded database, I found that the reason for (less
this issue (the duration of such a spike is usually
under 1s, but given how many TPS the database is serving, 1s is too much
and affects end users).
--
Maxim Boguk
Senior Postgresql DBA
https://dataegret.com/
Phone UA: +380 99 143
Phone AU: +61 45 218 5678
On Fri, Apr 10, 2020 at 1:16 AM Justin Pryzby wrote:
> On Fri, Apr 10, 2020 at 12:51:03AM +1000, Maxim Boguk wrote:
> > With the database on a dedicated server I encountered an unusual load profile:
> > multi-threaded (200 connections, static-size pool via pgbouncer), insert only
> > int
So almost all backends are waiting on the buffer_content lock, some
backends are waiting for XidGenLock, and there is always one backend in
SLRUSync.
If anyone has a good idea of what's going on in this case and how I can
fix it - any ideas are welcome.
So far I'm out of ideas.
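In case it helps to reproduce the picture, a per-wait-event snapshot like
the one above can be taken with something along these lines (a generic
query, PostgreSQL 9.6+):

-- Snapshot of what the active backends are currently waiting on.
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY 1, 2
ORDER BY count(*) DESC;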
--
Maxim Bog
used during foreign key
checks (see the non-zero seq_scan counters for the linked tables).
The only possible reason (IMHO) is the wrong use of numeric in place of
bigint.
I recommend changing the types of events.eventid (and any other similar
fields) to bigint.
It should resolve your performance issues with deletes.
pg_stat_xact_user_tables results
(I now expect to see 3 sequential scans on event_recovery and maybe on some
other tables as well).
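Concretely, the change would be something like this (a sketch; I'm assuming
the referencing column in event_recovery is also named eventid - adjust to
the real schema, and note that changing a column type rewrites the table
under an exclusive lock):

ALTER TABLE events ALTER COLUMN eventid TYPE bigint;
ALTER TABLE event_recovery ALTER COLUMN eventid TYPE bigint;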
Kind Regards,
Maxim
--
Maxim Boguk
Senior Postgresql DBA
https://dataegret.com/
Phone RU: +7 985 433
Phone UA: +380 99 143
Phone AU: +61 45 218 5678