Re: [PERFORM] Terrible plan for join to nested union

2012-07-09 Thread Nate Allan
Right now, UNION DISTINCT, along with INTERSECT and EXCEPT, have basically no optimization support whatsoever... Sorry to be the bearer of bad news, but this isn't going to change just because you try to label it a bug. Given the medium, I'll try not to read that in a snarky tone, after all,

Re: [PERFORM] Create tables performance

2012-07-09 Thread Sylvain CAILLET
Hi, Thank you all for your help. @Jeff : my daemon creates these tables at start time so it doesn't do anything else at the same time. The CPU is loaded between 20% and 25%. @Richard : Sure, the DB's number of tables is quite big, and sure most of them have the same structure, but it's very

Re: [PERFORM] Paged Query

2012-07-09 Thread Gregg Jaskiewicz
Use cursors. By far the most flexible. offset/limit have their down sides.

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig Ringer
On 07/09/2012 07:55 PM, Gregg Jaskiewicz wrote: "Use cursors. By far the most flexible. offset/limit have their down sides." Do do cursors. Keeping a cursor open across user think time has resource costs on the database. It doesn't necessarily require keeping the transaction open (with hold
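The two approaches being debated can be sketched as follows (table and column names are hypothetical):

```sql
-- Cursor-based paging: one stable snapshot of the result, but it holds
-- server resources for as long as the cursor stays open.
BEGIN;
DECLARE page_cur CURSOR FOR
    SELECT id, title FROM items ORDER BY id;
FETCH 50 FROM page_cur;   -- page 1
FETCH 50 FROM page_cur;   -- page 2
CLOSE page_cur;
COMMIT;

-- Offset/limit paging: stateless between requests, but the server must
-- produce and discard all skipped rows for every later page.
SELECT id, title FROM items ORDER BY id LIMIT 50 OFFSET 100;   -- page 3
```

A cursor declared WITH HOLD survives the enclosing COMMIT, at the cost of materializing the result set — the trade-off Craig alludes to above.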

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
Thanks for the answer. Please, see my answers below: On Fri, Jul 6, 2012 at 2:35 PM, Craig Ringer ring...@ringerc.id.au wrote: On 07/06/2012 07:29 PM, Craig Ringer wrote: On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote: I cannot! use transactions. Everything in PostgreSQL uses

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
On Fri, Jul 6, 2012 at 4:38 PM, Craig Ringer ring...@ringerc.id.au wrote: On 07/06/2012 07:38 PM, Daniel Farina wrote: On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer ring...@ringerc.id.au wrote: 1) Truncate each table. It is too slow, I think, especially for empty tables. Really?!? TRUNCATE

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
On Fri, Jul 6, 2012 at 4:39 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote: Stanislaw Pankevich wrote: PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones I wonder, what is

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
On Fri, Jul 6, 2012 at 5:22 PM, Craig Ringer ring...@ringerc.id.au wrote: On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote: "Question: Is there a possibility in PostgreSQL to do DELETE on many tables massively, like TRUNCATE allows? Like DELETE table1, table2, ...?" Yes, you can do it with a
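For reference, TRUNCATE does take a list of tables in a single statement, while DELETE must be issued per table; a sketch (table names hypothetical):

```sql
-- One statement for many tables; RESTART IDENTITY also resets sequences
-- owned by the tables, and CASCADE includes tables that reference these
-- via foreign keys.
TRUNCATE TABLE table1, table2, table3 RESTART IDENTITY CASCADE;

-- DELETE has no multi-table form; the closest equivalent is one DELETE
-- per table inside a single transaction.
BEGIN;
DELETE FROM table1;
DELETE FROM table2;
DELETE FROM table3;
COMMIT;
```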

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
Interesting catch, I will try to test the behavior of 'DELETE vs multiple TRUNCATE'. I'll post it here if I discover any amazing results. On Fri, Jul 6, 2012 at 2:38 PM, Daniel Farina dan...@heroku.com wrote: On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer ring...@ringerc.id.au wrote: 1)

Re: [PERFORM] how could select id=xx so slow?

2012-07-09 Thread Yan Chunlu
thanks for all the help. I checked the possibilities and found that: 1, the size of the tuple is small; 2, I checked the log manually and it indeed cost that much time, not aggregated. The value of log_min_messages in postgresql.conf is 'error'; I have changed it to 'warning', so far I have not received

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
Marc, thanks for the answer. Nah, these don't seem universal enough or easy enough to hook into the existing truncation strategies used in the Ruby world. On Fri, Jul 6, 2012 at 6:24 PM, Marc Mamin m.ma...@intershop.de wrote: Stanislaw Pankevich wrote: PostgreSQL db, 30 tables with number of rows
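A minimal sketch of the strategy the thread is circling around — truncate only non-empty tables and reset the sequences of empty ones — assuming all tables live in the public schema and their sequences are owned by serial columns:

```sql
DO $$
DECLARE
    tbl      text;
    seq      text;
    has_rows boolean;
BEGIN
    FOR tbl IN SELECT quote_ident(schemaname) || '.' || quote_ident(tablename)
               FROM pg_tables
               WHERE schemaname = 'public'
    LOOP
        EXECUTE format('SELECT EXISTS (SELECT 1 FROM %s)', tbl) INTO has_rows;
        IF has_rows THEN
            -- RESTART IDENTITY resets the sequences owned by the table.
            EXECUTE format('TRUNCATE TABLE %s RESTART IDENTITY CASCADE', tbl);
        ELSE
            -- Empty table: only its sequences may need resetting.
            FOR seq IN SELECT pg_get_serial_sequence(tbl, attname)
                       FROM pg_attribute
                       WHERE attrelid = tbl::regclass
                         AND attnum > 0 AND NOT attisdropped
                         AND pg_get_serial_sequence(tbl, attname) IS NOT NULL
            LOOP
                EXECUTE format('ALTER SEQUENCE %s RESTART WITH 1', seq);
            END LOOP;
        END IF;
    END LOOP;
END $$;
```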

Re: [PERFORM] Paged Query

2012-07-09 Thread Shaun Thomas
On 07/09/2012 07:02 AM, Craig Ringer wrote: "Do do cursors." Did you mean "Do not use cursors" here? "Then the user goes away on a week's holiday and leaves their PC at your next button." This exactly. Cursors have limited functionality that isn't directly disruptive to the database in

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig James
On Mon, Jul 9, 2012 at 6:22 AM, Shaun Thomas stho...@optionshouse.com wrote: On 07/09/2012 07:02 AM, Craig Ringer wrote: "Do do cursors." Did you mean "Do not use cursors" here? "Then the user goes away on a week's holiday and leaves their PC at your next button." This exactly. Cursors

Re: [PERFORM] Paged Query

2012-07-09 Thread Greg Spiegelberg
On Mon, Jul 9, 2012 at 8:16 AM, Craig James cja...@emolecules.com wrote: A good solution to this general problem is hitlists. I wrote about this concept before: http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php I implemented this exact strategy in our product years

Re: [PERFORM] Create tables performance

2012-07-09 Thread Jeff Janes
On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET scail...@alaloop.com wrote: Hi, Thank you all for your help. @Jeff : my daemon creates these tables at start time so it doesn't do anything else at the same time. The CPU is loaded between 20% and 25%. How does it decide which tables to

Re: [PERFORM] Paged Query

2012-07-09 Thread Misa Simic
2012/7/9 Gregg Jaskiewicz gryz...@gmail.com: "Use cursors. By far the most flexible. offset/limit have their down sides." Well, I am not aware of any down sides to LIMIT OFFSET that do not exist in the other solutions for paged queries... But I agree there always must be some compromise
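One concrete downside of OFFSET is that the server still produces and discards every skipped row. Keyset ("seek") pagination avoids this by filtering on the last key seen instead (table, column, and key value are hypothetical):

```sql
-- Page 1
SELECT id, title FROM items ORDER BY id LIMIT 50;

-- Next page: start after the last id seen (say 1050) rather than
-- counting rows to skip; with an index on id this stays cheap no
-- matter how deep the page is.
SELECT id, title FROM items WHERE id > 1050 ORDER BY id LIMIT 50;
```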

Re: [PERFORM] Paged Query

2012-07-09 Thread Andrew Dunstan
On 07/09/2012 01:41 PM, Misa Simic wrote: "From my experience users very rarely go to the ending pages... it would be easier for them to sort the data by a field to get those rows onto the very first pages..." Yeah, the problem really is that most client code wants to know how many pages there are,

Re: [PERFORM] Paged Query

2012-07-09 Thread Gurjeet Singh
On Mon, Jul 9, 2012 at 1:46 PM, Andrew Dunstan and...@dunslane.net wrote: On 07/09/2012 01:41 PM, Misa Simic wrote: "From my experience users very rarely go to the ending pages... it would be easier for them to sort the data by a field to get those rows onto the very first pages..." Yeah, the problem

Re: [PERFORM] Paged Query

2012-07-09 Thread Misa Simic
Hi Andrew, Sure... We are sending data in JSON to clients: { total: 6784, data: [50 rows for desired page] }. SELECT count(*) FROM table - without WHERE, without joins - used to have bad performance... However, in real scenarios we have never had a case without joins and WHERE... Join columns are

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig Ringer
On 07/09/2012 09:22 PM, Shaun Thomas wrote: On 07/09/2012 07:02 AM, Craig Ringer wrote: "Do do cursors." Did you mean "Do not use cursors" here? Oops. So do cursors. "Then the user goes away on a week's holiday and leaves their PC at your next button." This exactly. Cursors have limited

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig Ringer
On 07/10/2012 06:24 AM, Misa Simic wrote: Hi Andrew, Sure... We are sending data in JSON to clients: { total: 6784, data: [50 rows for desired page] }. SELECT count(*) FROM table - without WHERE, without joins - used to have bad performance... However, in real scenarios we have never had the case

Re: [PERFORM] Paged Query

2012-07-09 Thread Jeff Janes
On Mon, Jul 9, 2012 at 4:50 PM, Craig Ringer ring...@ringerc.id.au wrote: When/if you do need a count of a single table without any filters, a common trick is to use table statistics to return an approximation. If your autovaccum is running regularly it's usually a very good approximation,
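The statistics-based approximation Jeff mentions can be read straight out of pg_class (table name hypothetical):

```sql
-- reltuples is the planner's row-count estimate, maintained by
-- VACUUM/ANALYZE/autovacuum; cheap to read, but only approximate.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE oid = 'public.items'::regclass;
```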

Re: [PERFORM] how could select id=xx so slow?

2012-07-09 Thread Craig Ringer
On 07/09/2012 05:20 PM, Yan Chunlu wrote: "the value of log_min_messages in postgresql.conf is 'error', I have changed it to 'warning', so far I have not received any warning, still waiting." When trying to track down performance issues, increasing logging to at least 'info' would seem to be

Re: [PERFORM] how could select id=xx so slow?

2012-07-09 Thread Craig Ringer
On 07/10/2012 10:25 AM, Yan Chunlu wrote: I didn't set log_min_duration_statement in postgresql.conf, but execute dbapi_con.cursor().execute("SET log_min_duration_statement TO 30") for every connection. OK, same effect: you're only logging slow statements. It's not at all surprising
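For completeness: log_min_duration_statement is a threshold in milliseconds, so the per-session setting of 30 above only logs statements slower than 30 ms; setting it to 0 logs every statement with its duration:

```sql
-- Log every statement's duration for this session (0 = no threshold;
-- -1 disables duration logging entirely).
SET log_min_duration_statement = 0;
```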

[PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
Howdy! I'm trying to figure out why checkpointing is completely pegging my I/O under moderate to high write load. I'm on PG 9.1.1, RHEL 6.2 x64. checkpoint_completion_target = 0.7 checkpoint_timeout = 10m Jul 10 00:32:30 perf01 postgres[52619]: [1895-1] user=,db= LOG: checkpoint starting:
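The quoted settings as a postgresql.conf fragment; with a 0.7 target and a 10-minute timeout, the checkpointer is supposed to spread its writes over roughly the first seven minutes of each interval:

```
checkpoint_timeout = 10min           # run a timed checkpoint at least this often
checkpoint_completion_target = 0.7   # aim to finish writing at 70% of the interval
```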

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread Maxim Boguk
But what appears to be happening is that all of the data is being written out at the end of the checkpoint. This happens at every checkpoint while the system is under load. I get the feeling that this isn't the correct behavior and I've done something wrong. It's not an actual
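A common explanation for this pattern is that the kernel's writeback cache absorbs the checkpointer's spread-out writes and then flushes everything at once when the checkpoint's final fsync phase arrives. On Linux, a frequently suggested mitigation is lowering the dirty-page thresholds so the OS starts writing back sooner (the values below are illustrative, not recommendations):

```
# /etc/sysctl.conf (Linux): begin background writeback after ~64 MB of
# dirty pages instead of the default percentage-of-RAM thresholds.
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 536870912
```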

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread Jeff Janes
On Mon, Jul 9, 2012 at 10:39 PM, David Kerr d...@mr-paradox.net wrote: I thought that the idea of checkpoint_completion_target was that we try to finish writing out the data throughout the entire checkpoint (leaving some room to spare, in my case 30% of the total estimated checkpoint time)

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On Jul 9, 2012, at 10:52 PM, Jeff Janes wrote: On Mon, Jul 9, 2012 at 10:39 PM, David Kerr d...@mr-paradox.net wrote: I thought that the idea of checkpoint_completion_target was that we try to finish writing out the data throughout the entire checkpoint (leaving some room to spare, in my