Have you tried adding an index on txn_tbl.txn_type?
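For instance (just a sketch; the right index depends on your query's actual
WHERE and ORDER BY clauses, and only txn_tbl.txn_type is known from this thread):

    -- Plain b-tree index on the filtered column; CONCURRENTLY avoids
    -- blocking writes while it is built on a live table.
    CREATE INDEX CONCURRENTLY idx_txn_tbl_txn_type
        ON txn_tbl (txn_type);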
And a vacuum on all tables? It seems the visibility map is outdated.
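For example, against the tables named in your post (VACUUM also updates the
visibility map, which index-only scans depend on):

    VACUUM (ANALYZE) txn_tbl;
    VACUUM (ANALYZE) txn_status;
    VACUUM (ANALYZE) txn_decision;
    VACUUM (ANALYZE) txn_sale;
    VACUUM (ANALYZE) ath;

    -- How much of each table is currently marked all-visible:
    SELECT relname, relpages, relallvisible
    FROM   pg_class
    WHERE  relname IN ('txn_tbl', 'txn_status', 'txn_decision', 'txn_sale', 'ath');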
I'm using https://explain.dalibo.com to view the plan visually; it's more
convenient.
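You can paste the output of something like this into it:

    -- Replace the SELECT with the actual query from your gist.
    EXPLAIN (ANALYZE, BUFFERS, VERBOSE, SETTINGS, FORMAT JSON)
    SELECT ...;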
You could also periodically save the results of queries with common filters to
another table, and then retrieve the results from that table when a user runs a
query with their own filters.
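A minimal sketch of that idea using a materialized view; txn_date and txn_id
are assumed column names, since only the table names are visible in this thread:

    -- Pre-compute the default "latest 1000 rows" result once,
    -- then refresh it on a schedule (cron, pg_cron, ...).
    CREATE MATERIALIZED VIEW latest_txn AS
    SELECT *
    FROM   txn_tbl
    ORDER  BY txn_date DESC        -- assumed date column
    LIMIT  1000;

    -- A unique index is required for REFRESH ... CONCURRENTLY.
    CREATE UNIQUE INDEX ON latest_txn (txn_id);    -- assumed primary key

    -- Run periodically; readers are not blocked during the refresh.
    REFRESH MATERIALIZED VIEW CONCURRENTLY latest_txn;

Users would then read from latest_txn and apply their own filters on top of it,
which is much cheaper than re-running the full join in every session.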
You should also cache each user's query results somewhere for a while to
prevent excessive database access.
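One way to do that inside the database (as opposed to an application-side cache
such as Redis) is an unlogged table keyed by the user's filter set with an
expiry window; every name below is hypothetical:

    -- Cache table: one row per distinct filter combination.
    CREATE UNLOGGED TABLE user_query_cache (
        filter_hash text        PRIMARY KEY,            -- hash of the user's filter values
        result      jsonb       NOT NULL,               -- cached result rows
        cached_at   timestamptz NOT NULL DEFAULT now()
    );

    -- Serve from the cache if the entry is fresh enough (here: 5 minutes);
    -- otherwise run the real query and upsert the result.
    -- $1 = the filter hash computed by the application.
    SELECT result
    FROM   user_query_cache
    WHERE  filter_hash = $1
      AND  cached_at   > now() - interval '5 minutes';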
I imagine this is some kind of dashboard that each user is taken to after
authenticating. It looks nice in presentations, but after a while in
production, it can make the system unusable. I had to remove similar charts
from the homepage of a system because, after a year of use, they were taking a
minute to load.
On Saturday, January 31, 2026 at 08:30:33 AM GMT-5, yudhi s
<[email protected]> wrote:
Hello Experts,
We have a SELECT query that uses three to five main transaction tables
(txn_tbl, txn_status, txn_decision, txn_sale, ath), each holding ~2 million
rows (which will grow to ~50-100 million in the future), plus 6-7 other tables,
some of which are master tables and others small tables.
When we run this query on its own, it takes ~2-3 seconds. However, when we hit
it from 10-15 sessions at the same time, CPU usage on the DB instance spikes to
~50-60%, and it climbs toward 90% when we increase the concurrent hits further
to 40-50.
This query will be called from the first page of a UI screen and is supposed to
show the latest 1000 rows based on a certain transaction date. Thousands of
users are expected to hit this same query on the landing page at the same time.
It's Postgres version 17. The instance has 2 vCPUs and 16 GB RAM.
I have the following questions.
1) Why is this query causing a high CPU spike? Is there any way in Postgres to
understand which part/line of the query is contributing to the high CPU time?
2) How can we tune this query to further reduce the response time and, mainly,
the CPU consumption? Would any additional index or other change improve this
plan further?
3) Is there any guidance or best practice for designing top-N queries for such
UI scenarios where performance is an important factor?
4) Based on the CPU cores and memory, is there any calculation by which we can
say that this machine can support a maximum of N concurrent queries of this
type, beyond which we need a machine with more CPU cores?
Below is the query and its current plan:
https://gist.github.com/databasetech0073/6688701431dc4bf4eaab8d345c1dc65f
Regards
Yudhi