Hello Guys,

I've found some very strange behavior on one of our master-slave clusters.

If I run the same query several times in the same session, I get a performance 
increase on the second request and a huge performance decrease starting from 
the sixth request. The behavior is repeatable 100% of the time, but only for 
this type of query. The issue happens on both the master and the slave. There 
is no such issue on other clusters.

Query:
SELECT * FROM get_results(1, 1, 1, '1', '1234567890', '1234567890', 
2*get_rate(26, 26, now()), ARRAY[1], '{}', 13, ARRAY[1, 2, 3], ARRAY[1, 2, 3]) 
WHERE a AND b AND c AND d AND NOT e AND f
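
To be precise about the repro: the statement above is repeated verbatim in one 
session, with nothing else run in between, i.e.:

-- same session, same statement, arguments never change
SELECT * FROM get_results(/* args as above */) WHERE a AND b AND c AND d AND NOT e AND f;  -- call 1: ~86 ms
SELECT * FROM get_results(/* args as above */) WHERE a AND b AND c AND d AND NOT e AND f;  -- calls 2-5: ~58 ms
SELECT * FROM get_results(/* args as above */) WHERE a AND b AND c AND d AND NOT e AND f;  -- call 6 onward: ~218 ms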

The query is as simple as calling a set-returning function and applying a few 
filters afterwards. Internally the function runs a pretty complex query joining 
7-8 tables and calling several helper functions. get_results returns ~30 
columns and ~15-30 rows.
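
In shape it is roughly this (a minimal, hypothetical stand-in purely for 
illustration; the real function takes 12 parameters and its body is far more 
complex):

CREATE FUNCTION get_results(p_limit int)
RETURNS TABLE (id int, a bool, b bool, c bool, d bool, e bool, f bool)
LANGUAGE plpgsql STABLE AS $$
BEGIN
    -- the real query joins 7-8 tables and calls several helper functions
    RETURN QUERY
    SELECT s.i, true, true, true, true, false, true
    FROM generate_series(1, p_limit) AS s(i);
END;
$$;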

EXPLAIN (ANALYZE, BUFFERS) shows an interesting picture.

For the first call:

Rows Removed by Filter: 14
Buffers: shared hit=7599
Planning time: 0.190 ms
Execution time: 86.083 ms

For the second through fifth calls:

 Rows Removed by Filter: 14
 Buffers: shared hit=4804
 Planning time: 0.113 ms
 Execution time: 57.835 ms

For the sixth call and afterwards:

 Rows Removed by Filter: 14
 Buffers: shared hit=24474
 Planning time: 0.073 ms
 Execution time: 217.545 ms

So we can see a consistent correlation between 'shared hit' and query 
performance. I tried changing the function definition from STABLE to VOLATILE, 
and I tried setting both very small and very big COST values, but with no 
luck.
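
For the record, the attempts looked like this (using a made-up single-argument 
signature purely for illustration; the real function takes 12 arguments):

ALTER FUNCTION get_results(int) VOLATILE;       -- was STABLE
ALTER FUNCTION get_results(int) COST 1;         -- very small cost
ALTER FUNCTION get_results(int) COST 1000000;   -- very big cost

None of these changed the slowdown from the sixth call onward.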

Server:
256Gb RAM
4 x Intel(R) Xeon(R) CPU E7- 8837  @ 2.67GHz
RAID 10 built from 8 x TOSHIBA AL13SXB600N drives on a MegaRAID controller


postgresql.conf (PostgreSQL 9.4.7):
shared_buffers = 64578MB
work_mem = 64MB
maintenance_work_mem = 256MB
effective_cache_size = 129156MB


Any ideas on how I should investigate this further, or what it could be?

Thanks in advance!


Suren Arustamyan
sure...@inbox.ru