Hi Christian,

Thank you for your interest and suggestion.

To give you an idea, here are the approximate row counts for the main
tables involved in the query:

   - operations.requisitions: ~50,000 rows
   - operations.requisition_items: ~150,000 rows
   - operations.products: ~20,000 rows
   - operations.price_lists: ~60,000 rows
   - operations.price_lists_view_assort: ~20,000 rows
   - operations.vendors: ~200 rows

While these numbers aren't in the billions, the query joins several of
these tables and then applies window functions and aggregations on top,
and those operations account for a significant share of the CPU
utilization.
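
To give a sense of the query's shape, here is a rough sketch of the
kind of statement involved; the column names below are placeholders
rather than the actual schema:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT r.id,
           r.vendor_id,
           sum(ri.quantity * pl.unit_price) AS requisition_cost,
           rank() OVER (PARTITION BY r.vendor_id
                        ORDER BY sum(ri.quantity * pl.unit_price) DESC)
               AS cost_rank
    FROM operations.requisitions r
    JOIN operations.requisition_items ri ON ri.requisition_id = r.id
    JOIN operations.products p ON p.id = ri.product_id
    JOIN operations.price_lists pl ON pl.product_id = p.id
    GROUP BY r.id, r.vendor_id;
    -- In the resulting plan, the Hash Join, HashAggregate, and
    -- WindowAgg nodes are the ones to watch for actual time.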

Partitioning is definitely something worth considering as the data grows.
For now, I have created indexes and run VACUUM, which reduced CPU
utilization from 40% to 30%. However, I'm aware that further optimization
might be necessary.
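
Concretely, the changes were along these lines (the indexed columns
shown here are illustrative rather than the exact ones I used):

    -- Indexes on the join keys the query relies on:
    CREATE INDEX IF NOT EXISTS requisition_items_requisition_id_idx
        ON operations.requisition_items (requisition_id);
    CREATE INDEX IF NOT EXISTS price_lists_product_id_idx
        ON operations.price_lists (product_id);

    -- Reclaim dead tuples and refresh planner statistics:
    VACUUM (ANALYZE) operations.requisitions;
    VACUUM (ANALYZE) operations.requisition_items;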

Thanks again for the suggestion; I will keep partitioning in mind as a
potential next step if data volumes continue to grow.
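
If it comes to that, I would expect something along the lines of the
sketch below; the partition key and column names are only a guess at
this stage:

    -- Declarative range partitioning on a date column; again, the
    -- column names here are illustrative, not the actual schema.
    CREATE TABLE operations.requisitions_partitioned (
        id         bigint      NOT NULL,
        vendor_id  integer     NOT NULL,
        created_at timestamptz NOT NULL
        -- ... remaining columns ...
    ) PARTITION BY RANGE (created_at);

    -- One partition per year, added as data volumes grow.
    CREATE TABLE operations.requisitions_2024
        PARTITION OF operations.requisitions_partitioned
        FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

Partitioning by date should also let the planner prune older partitions
for queries that only touch recent data.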

Best regards,

Kihaguru

On Sat, Aug 3, 2024 at 3:14 PM Christian Schulte <schulte...@gmail.com>
wrote:

> On 30.07.24 19:29, Kihaguru Gathura wrote:
> > Hi Claudio,
> >
> > Yes, I did run 'Explain Analyze' on the query to diagnose the
> > performance issues. Based on the analysis, I created indexes on the
> > relevant columns and ran a VACUUM on the tables. This resulted in an
> > improvement, reducing CPU utilization from 40% to 30%.
>
> Out of curiosity. How many records (table rows) are you talking about?
> Maybe partitioning is the way to go, if you are dealing with huge
> amounts of data. Indexing a couple of billion records still makes
> selecting expensive, so you may want to partition things?
>
> --
> Christian
>
>
