Thank you Scott!
I currently have work_mem set to 4MB, shared_buffers to 8GB, and huge pages on.
I gradually increased work_mem up to 1GB, but it did not help at all. Am I
missing something obvious?
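
As a sanity check (a sketch; the exact values are illustrative), it is worth confirming that the raised work_mem was actually active in the session running the query, and making any remaining temp-file spills visible in the log:

```sql
-- Confirm the value the current session actually sees
-- (an ALTER SYSTEM / postgresql.conf change needs a reload to take effect)
SHOW work_mem;

-- Log every temp file this session creates, so it's obvious whether the
-- sort/hash is still spilling to disk even at the larger setting
SET log_temp_files = 0;  -- 0 = log all temp files, regardless of size
```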

-----Original Message-----
From: Scott Marlowe []
Sent: Tuesday, January 09, 2018 5:08 PM
To: Kumar, Virendra
Subject: Re: Performance of a Query

On Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra <> 
> Hello Gurus,
> I am struggling to tune a query which is doing a join on top of an
> aggregate for around 3 million rows. The plan and SQL are attached to the
> email.
> Below are system details:
> PGSQL version – 10.1
> OS – RHEL 3.10.0-693.5.2.el7.x86_64
> Binary – Downloaded from, compiled and installed.
> Hardware – Virtual machine with 8 vCPU and 32GB of RAM, on an XFS filesystem.

I uploaded your query plan here:

The most expensive part is the merge join at the end.

Lines like this one: "Buffers: shared hit=676 read=306596, temp
read=135840 written=135972"

tell me that your sorts etc. are spilling to disk, so the first thing to try is
upping work_mem a bit. Don't go crazy, as it can run your machine out of memory
if you do, but doubling or tripling it and seeing the effect on query
performance is a good place to start.
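
The increase can be tried per-session, so the server-wide default stays untouched while you experiment (a sketch; the sizes and the elided query are placeholders for the attached SQL):

```sql
-- Raise work_mem for this session only; the global default stays at 4MB
SET work_mem = '64MB';   -- then try '128MB', '256MB', ... and compare

-- Re-run with buffer stats and watch whether the
-- "temp read=... written=..." numbers shrink or disappear
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the query from the attachment
```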

The good news is that most of your row estimates are about right, so the query
planner is doing what it can to make the query fast. I'm guessing that if you
get work_mem high enough, it will switch from a merge join to a hash join or
something more efficient for large numbers of rows.
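
Whether a hash join would actually win can be checked directly with the planner's enable_* switches, as a diagnostic only (a sketch; not something to leave set in production):

```sql
-- Temporarily forbid merge joins in this session so the planner must
-- pick its next-best strategy (typically a hash join here)
SET enable_mergejoin = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the attached query
RESET enable_mergejoin;                 -- restore normal planning
```

If the hash-join plan is faster at a given work_mem, that's a strong hint the memory setting, not the join strategy itself, is the bottleneck.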

