Re: [PERFORM] slow join not using index properly

2014-03-21 Thread Ilya Kosmodemiansky
Hi Stefan! You probably need to rewrite your query like this (check it first): with RECURSIVE qq(cont_key, anc_key) as ( select min(a1.context_key), ancestor_key from virtual_ancestors a1 union select (SELECT a1.context_key, ancestor_key FROM virtual_ancestors a1 where context_key
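
A minimal sketch of the pattern this reply appears to point at: a recursive CTE that emulates a "loose index scan", fetching one context_key at a time so each step can use an index on virtual_ancestors(context_key) instead of scanning the whole table. The quoted query is truncated above, so the names and shape here are only illustrative:

    -- Loose index scan sketch: each recursive step is a single index probe
    -- for the next distinct context_key.
    WITH RECURSIVE qq AS (
        SELECT min(context_key) AS cont_key
        FROM virtual_ancestors
      UNION ALL
        SELECT (SELECT min(a1.context_key)
                FROM virtual_ancestors a1
                WHERE a1.context_key > qq.cont_key)   -- next key above the previous one
        FROM qq
        WHERE qq.cont_key IS NOT NULL                 -- stop once the keys are exhausted
    )
    SELECT cont_key FROM qq WHERE cont_key IS NOT NULL;

Each step is a cheap index probe for the next distinct key, which is usually far faster than letting min()/DISTINCT walk the entire table.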

Re: [PERFORM] Getting query plan alternatives from query planner?

2014-03-21 Thread Stefan Keller
Hi Tom, You wrote: Path alternatives are rejected whenever possible before moving up to the next join level, so that what we have rejected is actually just a plan fragment in most cases. Thanks for the quick answer. This sounds like a fair implementation decision. Background for asking this

Re: [PERFORM] Getting query plan alternatives from query planner?

2014-03-21 Thread Atri Sharma
On Fri, Mar 21, 2014 at 1:07 PM, Stefan Keller sfkel...@gmail.com wrote: Hi Tom, You wrote: Path alternatives are rejected whenever possible before moving up to the next join level, so that what we have rejected is actually just a plan fragment in most cases. Thanks for the quick

Re: [PERFORM] slow join not using index properly

2014-03-21 Thread Vincent
On 21-03-14 00:56, Stefan Amshey wrote: We have a slow performing query that we are trying to improve, and it appears to be performing a sequential scan at a point where it should be utilizing an index. Can anyone tell me why postgres is opting to do it this way? The original query is as
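
A hedged sketch of how such a plan choice is usually investigated; the original query and schema are not shown above, so the table here is a throwaway stand-in:

    -- Build a small test table with an index, then compare the planner's
    -- preferred plan against a forced index plan.
    CREATE TEMP TABLE t AS
    SELECT g AS id, (g % 100) AS grp FROM generate_series(1, 100000) g;
    CREATE INDEX ON t (grp);
    ANALYZE t;

    EXPLAIN ANALYZE SELECT * FROM t WHERE grp = 42;   -- the planner's own choice

    BEGIN;
    SET LOCAL enable_seqscan = off;                   -- temporarily forbid sequential scans
    EXPLAIN ANALYZE SELECT * FROM t WHERE grp = 42;   -- is the index plan actually faster?
    ROLLBACK;

If the forced index plan is not actually faster, the sequential scan was probably the right call; if it is much faster, stale statistics or cost settings are the usual suspects.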

Re: [PERFORM] Getting query plan alternatives from query planner?

2014-03-21 Thread Craig James
There have been many discussions about adding hints to Postgres over the years. All have been firmly rejected by the Postgres developers, with well-argued reasons. Search the archives to learn more about this topic. On the other hand, Postgres does have hints. They're just called settings. You
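
A short illustration of that point, using real planner settings scoped to a single transaction (the specific values are only examples):

    BEGIN;
    SET LOCAL random_page_cost = 1.1;   -- tell the planner random I/O is cheap (e.g. SSD)
    SET LOCAL work_mem = '64MB';        -- allow larger in-memory sorts and hashes
    SET LOCAL enable_nestloop = off;    -- coarse "hint": rule out nested-loop joins
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM pg_class c
    JOIN pg_attribute a ON a.attrelid = c.oid;
    ROLLBACK;                           -- SET LOCAL reverts when the transaction ends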

Re: [PERFORM] Getting query plan alternatives from query planner?

2014-03-21 Thread Tom Lane
Craig James cja...@emolecules.com writes: There have been many discussions about adding hints to Postgres over the years. All have been firmly rejected by the Postgres developers, with well-argued reasons. Search the archives to learn more about this topic. To clarify: there are good reasons

[PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Guillaume Smet
Hi all, Brett Wooldridge, the creator of HikariCP [1] - a high performance Java connection pool - is contemplating changing the way pooling is done in HikariCP to a fixed-size pool of connections that is always open. No maxPoolSize, no minIdle, no minPoolSize, just a poolSize parameter

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread David Johnston
Guillaume Smet wrote Brett Wooldridge, the creator of HikariCP [1] - a high performance Java connection pool - is contemplating changing the way pooling is done in HikariCP to a fixed-size pool of connections that is always open. No maxPoolSize, no minIdle, no minPoolSize, just a

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Guillaume Smet
On Fri, Mar 21, 2014 at 4:49 PM, David Johnston pol...@yahoo.com wrote: Consider this train of thought: no matter how large the pool size, if you are constantly keeping, say, 90% of the connections actively working, then having, on average, 10% of the connections sitting idle is probably not

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Tom Lane
Guillaume Smet guillaume.s...@gmail.com writes: On Fri, Mar 21, 2014 at 4:49 PM, David Johnston pol...@yahoo.com wrote: Consider this train of thought: no matter how large the pool size, if you are constantly keeping, say, 90% of the connections actively working, then having, on average, 10% of

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Sethu Prasad
Reaching the maxPoolSize from the minPoolSize means creating connections at the crucial moment when the client application desperately needs to complete an important query/transaction, which is its primary responsibility since it cannot hold the collected data. So here the connection

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Guillaume Smet
Hi Tom, On Fri, Mar 21, 2014 at 5:17 PM, Tom Lane t...@sss.pgh.pa.us wrote: It will cost you, in ProcArray scans for example. But lots-of-idle-connections is exactly what a pooler is supposed to prevent. If you have a server that can handle say 10 active queries, you should have a pool
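
One way to sanity-check pool sizing from the database side is to look at how many backends are actually doing work versus sitting idle (a query sketch, assuming a reasonably recent pg_stat_activity layout):

    SELECT state,
           count(*)                  AS connections,
           max(now() - state_change) AS longest_in_state
    FROM pg_stat_activity
    WHERE pid <> pg_backend_pid()    -- ignore this monitoring session itself
    GROUP BY state
    ORDER BY connections DESC;

If most connections are permanently idle, the pool is probably larger than the server can usefully keep busy.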

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Guillaume Smet
Hi Sethu, On Fri, Mar 21, 2014 at 6:51 PM, Sethu Prasad sethuprasad...@gmail.com wrote: So here the connection creation action is the costliest among all the other management tasks. So keeping the connections ready is the best option. That's why you often have a minIdle parameter which allows

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread David Johnston
Sethu Prasad wrote Reaching the maxPoolSize from the minPoolSize means creating connections at the crucial moment when the client application desperately needs to complete an important query/transaction, which is its primary responsibility since it cannot hold the collected data.

Re: [PERFORM] Connection pooling - Number of connections

2014-03-21 Thread Claudio Freire
On Fri, Mar 21, 2014 at 3:36 PM, Guillaume Smet guillaume.s...@gmail.com wrote: On Fri, Mar 21, 2014 at 5:17 PM, Tom Lane t...@sss.pgh.pa.us wrote: It will cost you, in ProcArray scans for example. But lots-of-idle-connections is exactly what a pooler is supposed to prevent. If you have a

Re: [PERFORM] Getting query plan alternatives from query planner?

2014-03-21 Thread Shaun Thomas
On 03/21/2014 08:34 AM, Craig James wrote: There have been many discussions about adding hints to Postgres over the years. All have been firmly rejected by the Postgres developers, with well-argued reasons. Search the archives to learn more about this topic. While that's true, and I agree

[PERFORM] Suboptimal query plan when using expensive BCRYPT functions

2014-03-21 Thread Erik van Zijst
Hi there, I've got a relatively simple query that contains expensive BCRYPT functions and gets optimized in a way that causes postgres to compute more bcrypt hashes than necessary, thereby dramatically slowing things down. In a certain part of our application we need to look up users by their
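
A minimal sketch of the general shape of the problem, with a hypothetical users(id, email, password_hash) table since the real schema is truncated above. The idea is to narrow the candidate rows with cheap, indexable predicates first and only then apply pgcrypto's crypt(), so the bcrypt comparison runs as few times as possible; before PostgreSQL 12 a CTE also acts as an optimization fence that keeps the expensive expression from being pushed down into a larger scan:

    WITH candidates AS (
        SELECT id, password_hash
        FROM users
        WHERE lower(email) = lower('someone@example.com')   -- cheap, indexable filter first
    )
    SELECT id
    FROM candidates
    WHERE password_hash = crypt('secret-password', password_hash);  -- expensive bcrypt check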