[PERFORM] out of memory problem

2010-11-09 Thread Till Kirchner
Hello everyone, I get an out-of-memory problem I don't understand. The installed PostgreSQL version is: PostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.3-5) 4.3.3. It is running on a 32-bit Debian machine with 4 GB of RAM. Thanks for any help in advance. Till

Re: [PERFORM] out of memory problem

2010-11-09 Thread Tom Lane
Till Kirchner till.kirch...@vti.bund.de writes: "I get an out-of-memory problem I don't understand." It's pretty clear that something is leaking memory in the per-query context: ExecutorState: 1833967692 total in 230 blocks; 9008 free (3 chunks); 1833958684 used. There doesn't seem to …

Re: [PERFORM] out of memory problem

2010-11-09 Thread Bob Lunney
Be sure that you are starting PostgreSQL using an account with sufficient memory limits (check with ulimit -m). If the account's memory limit is below what the server is configured to use, you may get the out-of-memory error. Bob Lunney
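Alongside the account's shell limit, it can be worth confirming what the server itself is allowed to allocate. A minimal check from psql, offered only as a sketch; the setting names are standard, but whether any of them explains this particular failure is a guess:

    -- Per-backend and shared memory settings that most often interact
    -- with OS-level limits.
    SHOW work_mem;
    SHOW maintenance_work_mem;
    SHOW shared_buffers;

    -- Or list them together, with units, from the catalog view.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('work_mem', 'maintenance_work_mem',
                   'shared_buffers', 'max_connections');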

[PERFORM] anti-join chosen even when slower than old plan

2010-11-09 Thread Kevin Grittner
The semi-join and anti-join have helped us quite a bit, but we have seen a situation where an anti-join is chosen even though it is slower than the old-fashioned plan. I know there have been other reports of this, but I just wanted to go on record with my details. The query: delete from …
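The preview cuts off before the query itself. Purely as an illustration (the table and column names below are invented, not taken from the thread), this is the general shape of a NOT EXISTS delete that the planner can implement as an anti-join, together with one way to compare it against another join strategy:

    -- Hypothetical tables; the real query from the thread is not shown here.
    -- Wrapped in a transaction because EXPLAIN ANALYZE actually runs the DELETE.
    BEGIN;
    EXPLAIN ANALYZE
    DELETE FROM detail d
    WHERE NOT EXISTS (SELECT 1 FROM master m WHERE m.id = d.master_id);
    ROLLBACK;

    -- Turn off hash joins for this transaction only, to see what the
    -- planner falls back to, then compare the timings.
    BEGIN;
    SET LOCAL enable_hashjoin = off;
    EXPLAIN ANALYZE
    DELETE FROM detail d
    WHERE NOT EXISTS (SELECT 1 FROM master m WHERE m.id = d.master_id);
    ROLLBACK;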

[PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread bricklen
Hi, I have a query that is getting a pretty bad plan due to a massively incorrect count of expected rows. All tables in the query were vacuum analyzed right before the query was tested. Disabling nested loops gives a significantly faster result (4s vs 292s). Any thoughts on what I can change to
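The plan itself is cut off in this preview. For readers following along, a rough sketch of the checks such a misestimate usually leads to; the table and column names here are placeholders, not the ones from bricklen's query:

    -- Placeholder query standing in for the real one from the thread.
    EXPLAIN ANALYZE
    SELECT c.id
    FROM conversions c
    JOIN clicks k ON k.id = c.click_id
    WHERE c.created_on >= DATE '2010-11-01';

    -- If one column's estimate is badly off, more detailed statistics
    -- sometimes help: raise the per-column target and re-analyze.
    ALTER TABLE conversions ALTER COLUMN click_id SET STATISTICS 500;
    ANALYZE conversions;

    -- As a quick experiment only, check whether the nested-loop choice
    -- is what makes the plan slow.
    SET enable_nestloop = off;
    EXPLAIN ANALYZE
    SELECT c.id
    FROM conversions c
    JOIN clicks k ON k.id = c.click_id
    WHERE c.created_on >= DATE '2010-11-01';
    RESET enable_nestloop;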

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread Andy Colson
On 11/9/2010 3:26 PM, bricklen wrote: Hi, I have a query that is getting a pretty bad plan due to a massively incorrect count of expected rows. All tables in the query were vacuum analyzed right before the query was tested. Disabling nested loops gives a significantly faster result (4s vs

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread bricklen
On Tue, Nov 9, 2010 at 2:48 PM, Andy Colson a...@squeakycode.net wrote: On 11/9/2010 3:26 PM, bricklen wrote: ->  Seq Scan on conversionrejected cr  (cost=0.00..191921.82 rows=11012682 width=31) (actual time=0.003..1515.816 rows=11012682 loops=72)  Total runtime: 292668.992 ms
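Andy's own reply is not visible in the preview, but the fragment he quotes (a sequential scan over roughly 11 million rows repeated 72 times on the inner side of a nested loop) is the kind of cost an index on the join column can avoid. A purely illustrative sketch; the column name is invented, since the thread does not show the join condition:

    -- Invented column name; shown only to illustrate giving the inner side
    -- of the nested loop an index to probe instead of a repeated seq scan.
    CREATE INDEX CONCURRENTLY conversionrejected_conversion_id_idx
        ON conversionrejected (conversion_id);
    ANALYZE conversionrejected;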

Re: [PERFORM] anti-join chosen even when slower than old plan

2010-11-09 Thread Kevin Grittner
Kevin Grittner kevin.gritt...@wicourts.gov wrote: samples  %  symbol name: 2320174  33.7617  index_getnext. I couldn't resist seeing where the time went within this function. Over 13.7% of the opannotate run time was on this bit of code: /* * The xmin should match the previous xmax …

Re: [PERFORM] anti-join chosen even when slower than old plan

2010-11-09 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes: Kevin Grittner kevin.gritt...@wicourts.gov wrote: samples  %  symbol name: 2320174  33.7617  index_getnext. I couldn't resist seeing where the time went within this function. Over 13.7% of the opannotate run time was on this bit of …

Re: [PERFORM] anti-join chosen even when slower than old plan

2010-11-09 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote: "However, you'd have to be spending a lot of time chasing through long HOT chains before that would happen enough to make this a hotspot..." That makes it all the more mysterious, then. These tables are insert-only except for a weekly delete of one week of …
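Whether long HOT chains are even plausible on a mostly insert-only table can be sanity-checked from the statistics collector's HOT-update counters. A small sketch; which tables to inspect is left open, so this simply lists the most HOT-updated ones:

    -- For insert-only tables, n_tup_hot_upd should stay near zero; a large
    -- value would mean HOT chains really are being built and walked.
    SELECT relname, n_tup_ins, n_tup_upd, n_tup_hot_upd, n_tup_del
    FROM pg_stat_user_tables
    ORDER BY n_tup_hot_upd DESC
    LIMIT 20;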

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread Tom Lane
bricklen brick...@gmail.com writes: "I have a query that is getting a pretty bad plan due to a massively incorrect count of expected rows." The query doesn't seem to match the plan. Where is that OR (c.id = 38441828354::bigint) condition coming from? regards, tom lane

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread bricklen
On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote: bricklen brick...@gmail.com writes: I have a query that is getting a pretty bad plan due to a massively incorrect count of expected rows. The query doesn't seem to match the plan.  Where is that OR (c.id =

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread Tom Lane
bricklen brick...@gmail.com writes: On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote: The query doesn't seem to match the plan.  Where is that OR (c.id = 38441828354::bigint) condition coming from? Ah sorry, I was testing it with and without that part. Here is the corrected

Re: [PERFORM] anti-join chosen even when slower than old plan

2010-11-09 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes: The semi-join and anti-join have helped us quite a bit, but we have seen a situation where anti-join is chosen even though it is slower than the old fashioned plan. I know there have been other reports of this, but I just wanted to go on

Re: [PERFORM] Huge overestimation in rows expected results in bad plan

2010-11-09 Thread bricklen
On Tue, Nov 9, 2010 at 3:55 PM, Tom Lane t...@sss.pgh.pa.us wrote: bricklen brick...@gmail.com writes: On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote: The query doesn't seem to match the plan.  Where is that OR (c.id = 38441828354::bigint) condition coming from? Ah sorry,

Re: [PERFORM] anti-join chosen even when slower than old plan

2010-11-09 Thread Grzegorz Jaśkiewicz
you're joining on more than one key. That always hurts performance.