Re: [PERFORM] Create tables performance

2012-07-09 Thread Sylvain CAILLET
Yes, you're right! The process checks whether all these tables exist before creating them, so it might be the SELECT that takes time. To check existence, I use the following query: select * from pg_tables where tablename='the_table'; Maybe it's not the best way. And I launch a query per table! N
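
One query per table can be collapsed into a single round trip, or the existence check can be skipped entirely on PostgreSQL 9.1 and later. A minimal sketch; the table names are placeholders:

```sql
-- One round trip for the whole batch instead of one query per table:
SELECT tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND tablename IN ('table_a', 'table_b', 'table_c');

-- Or avoid the check altogether (available since PostgreSQL 9.1):
CREATE TABLE IF NOT EXISTS table_a (id integer PRIMARY KEY);
```

With `CREATE TABLE IF NOT EXISTS`, the daemon can issue the CREATE unconditionally; existing tables produce only a notice.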

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On 7/9/2012 11:14 PM, Maxim Boguk wrote: On Tue, Jul 10, 2012 at 4:03 PM, David Kerr <d...@mr-paradox.net> wrote: On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote: But what appears to be happening is that all of the data is being written out at the end of the chec

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread Maxim Boguk
On Tue, Jul 10, 2012 at 4:03 PM, David Kerr wrote: > > On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote: > > >> >> But what appears to be happening is that all of the data is being written >> out at the end of the checkpoint. >> >> This happens at every checkpoint while the system is under load. >>

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote: > > > But what appears to be happening is that all of the data is being written out > at the end of the checkpoint. > > This happens at every checkpoint while the system is under load. > > I get the feeling that this isn't the correct behavior

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
On Jul 9, 2012, at 10:52 PM, Jeff Janes wrote: > On Mon, Jul 9, 2012 at 10:39 PM, David Kerr wrote: >> >> I thought that the idea of checkpoint_completion_target was that we try to >> finish writing >> out the data throughout the entire checkpoint (leaving some room to spare, >> in my case 30%

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread Jeff Janes
On Mon, Jul 9, 2012 at 10:39 PM, David Kerr wrote: > > I thought that the idea of checkpoint_completion_target was that we try to > finish writing > out the data throughout the entire checkpoint (leaving some room to spare, > in my case 30% > of the total estimated checkpoint time) > > But what ap

Re: [PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread Maxim Boguk
> > > > But what appears to be happening is that all of the data is being written > out at the end of the checkpoint. > > This happens at every checkpoint while the system is under load. > > I get the feeling that this isn't the correct behavior and i've done > something wrong. > > > It's not an ac

[PERFORM] Massive I/O spikes during checkpoint

2012-07-09 Thread David Kerr
Howdy! I'm trying to figure out why checkpointing is completely pegging my I/O under moderate to high write load. I'm on PG 9.1.1, RHEL 6.2 x64. checkpoint_completion_target = 0.7 checkpoint_timeout = 10m Jul 10 00:32:30 perf01 postgres[52619]: [1895-1] user=,db= LOG: checkpoint starting: time
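
The symptom discussed in this thread (writes spread over the checkpoint but a burst of I/O at the very end) is commonly the OS write cache absorbing the spread-out writes and flushing them all at the final fsync. A sketch of the settings typically adjusted in this situation; the values are illustrative, not recommendations:

```
# postgresql.conf -- spread checkpoint writes over more of the interval
checkpoint_completion_target = 0.9   # the report above used 0.7
checkpoint_timeout = 10min
checkpoint_segments = 32             # pre-9.5 parameter; 9.5+ uses max_wal_size

# /etc/sysctl.conf -- make Linux start background writeback sooner,
# so dirty pages don't accumulate until the checkpoint's fsync
vm.dirty_background_bytes = 33554432   # 32 MB
vm.dirty_bytes = 536870912             # 512 MB
```

Lowering the kernel dirty thresholds trades a little steady-state write activity for much smaller fsync spikes at checkpoint end.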

Re: [PERFORM] how could select id=xx so slow?

2012-07-09 Thread Craig Ringer
On 07/10/2012 10:25 AM, Yan Chunlu wrote: I didn't set log_min_duration_statement in the postgresql.conf, but execute dbapi_con.cursor().execute("SET log_min_duration_statement to 30") for every connection. OK, same effect: You're only logging slow statements. It's not at all surprising t
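
For reference, log_min_duration_statement takes milliseconds, so the session setting above logs only statements slower than 30 ms. To capture everything while diagnosing:

```sql
-- Value is in milliseconds; 30 logs only statements taking longer than 30 ms
SET log_min_duration_statement = 30;

-- 0 logs every statement together with its duration
SET log_min_duration_statement = 0;

-- -1 disables duration-based statement logging entirely
SET log_min_duration_statement = -1;
```

Setting it to 0 for the affected sessions during an investigation makes it possible to see fast statements too, which is often where the missing time hides.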

Re: [PERFORM] how could select id=xx so slow?

2012-07-09 Thread Craig Ringer
On 07/09/2012 05:20 PM, Yan Chunlu wrote: the value of "log_min_messages" in postgresql.conf is error, I have changed it to "warning", so far does not received any warning, still waiting. When trying to track down performance issues, increasing logging to at least `info' would seem to be sens

Re: [PERFORM] Paged Query

2012-07-09 Thread Jeff Janes
On Mon, Jul 9, 2012 at 4:50 PM, Craig Ringer wrote: > > > When/if you do need a count of a single table without any filters, a common > trick is to use table statistics to return an approximation. If your > autovaccum is running regularly it's usually a very good approximation, too. > > Sounds lik
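
The statistics trick mentioned above reads the planner's row estimate instead of counting. A minimal sketch; 'the_table' is a placeholder:

```sql
-- Approximate row count from planner statistics (kept fresh by
-- autovacuum/ANALYZE), avoiding a full SELECT count(*) scan:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'the_table';
```

This is constant-time regardless of table size; the estimate drifts between autovacuum runs, which is usually acceptable for a "page X of N" display.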

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig Ringer
On 07/10/2012 06:24 AM, Misa Simic wrote: Hi Andrew, Sure... We are sending data in Json to clients { total:6784, data:[50 rows for desired page] } SELECT count(*) FROM table - without where, without joins used to have bad performance... However, in real scenario we have never had the case wit

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig Ringer
On 07/09/2012 09:22 PM, Shaun Thomas wrote: On 07/09/2012 07:02 AM, Craig Ringer wrote: Do do cursors. Did you mean "Do not use cursors" here? Oops. "So do cursors". Then the user goes away on a week's holiday and leaves their PC at your "next" button. This exactly. Cursors have limited

Re: [PERFORM] Paged Query

2012-07-09 Thread Misa Simic
Hi Andrew, Sure... We are sending data in Json to clients { total:6784, data:[50 rows for desired page] } SELECT count(*) FROM table - without where, without joins used to have bad performance... However, in real scenario we have never had the case without joins and where... Join columns are alw

Re: [PERFORM] Paged Query

2012-07-09 Thread Gurjeet Singh
On Mon, Jul 9, 2012 at 1:46 PM, Andrew Dunstan wrote: > > On 07/09/2012 01:41 PM, Misa Simic wrote: > >> >> >> From my experience users even very rare go to ending pages... easier to >> them would be to sort data by field to get those rows in very first pages... >> >> >> > > Yeah, the problem rea

Re: [PERFORM] Paged Query

2012-07-09 Thread Andrew Dunstan
On 07/09/2012 01:41 PM, Misa Simic wrote: From my experience users even very rare go to ending pages... easier to them would be to sort data by field to get those rows in very first pages... Yeah, the problem really is that most client code wants to know how many pages there are, even

Re: [PERFORM] Paged Query

2012-07-09 Thread Misa Simic
2012/7/9 Gregg Jaskiewicz > Use cursors. > By far the most flexible. offset/limit have their down sides. > Well, I am not aware of any down sides to LIMIT OFFSET that don't exist in other solutions for paged queries... But I agree there must always be some compromise between flexib
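
One concrete down side of OFFSET is that the server must still produce and discard every skipped row, so late pages get progressively slower. Keyset ("seek") pagination sidesteps that by filtering on the last value seen. A sketch with hypothetical table and column names:

```sql
-- OFFSET paging: cost grows with the page number, because the 5000
-- skipped rows are still fetched and thrown away
SELECT id, title FROM items ORDER BY id LIMIT 50 OFFSET 5000;

-- Keyset paging: remember the last id of the previous page (here 5050)
-- and seek directly to it via the index on id
SELECT id, title FROM items WHERE id > 5050 ORDER BY id LIMIT 50;
```

Keyset paging needs a unique (or tie-broken) sort key and cannot jump to an arbitrary page number, but every page costs the same.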

Re: [PERFORM] Create tables performance

2012-07-09 Thread Jeff Janes
On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET wrote: > Hi, > > Thank you all for your help. > > @Jeff : my daemon creates these tables at start time so it doesn't do > anything else at the same time. The CPU is loaded between 20% and 25%. How does it decide which tables to create? Is it query

Re: [PERFORM] Paged Query

2012-07-09 Thread Greg Spiegelberg
On Mon, Jul 9, 2012 at 8:16 AM, Craig James wrote: > > A good solution to this general problem is "hitlists." I wrote about this > concept before: > > http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php > > I implemented this exact strategy in our product years ago. Our queri
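
The hitlist idea referenced above runs the expensive query once, stores the matching ids with a position column, then pages over the stored ids cheaply. A sketch under assumed names; the table, column, and filter are placeholders:

```sql
-- Materialize the result ids once, numbered in display order
CREATE TABLE hitlist_1234 AS
SELECT row_number() OVER (ORDER BY t.id) AS pos, t.id
FROM big_table t
WHERE t.status = 'active';          -- hypothetical expensive filter

-- Page 2 (rows 51-100): a cheap range scan over the hitlist
SELECT b.*
FROM hitlist_1234 h
JOIN big_table b ON b.id = h.id
WHERE h.pos BETWEEN 51 AND 100
ORDER BY h.pos;
```

The hitlist also gives an exact total (its row count) for free, and it must be dropped or expired when the user's search session ends.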

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig James
On Mon, Jul 9, 2012 at 6:22 AM, Shaun Thomas wrote: > On 07/09/2012 07:02 AM, Craig Ringer wrote: > > Do do cursors. >> > > Did you mean "Do not use cursors" here? > > Then the user goes away on a week's holiday and leaves their PC at >> your "next" button. >> > > This exactly. Cursors have limi

Re: [PERFORM] Paged Query

2012-07-09 Thread Shaun Thomas
On 07/09/2012 07:02 AM, Craig Ringer wrote: Do do cursors. Did you mean "Do not use cursors" here? Then the user goes away on a week's holiday and leaves their PC at your "next" button. This exactly. Cursors have limited functionality that isn't directly disruptive to the database in gene

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
Marc, thanks for the answer. Nah, these don't seem universal enough or easy enough to hook into the existing truncation strategies used in the Ruby world. On Fri, Jul 6, 2012 at 6:24 PM, Marc Mamin wrote: > > > > Stanislaw Pankevich wrote: PostgreSQL db, 30 tables with number of rows < 100 (not h

Re: [PERFORM] how could select id=xx so slow?

2012-07-09 Thread Yan Chunlu
thanks for all the help. I checked the possibilities and found that: 1) the tuple size is small; 2) I checked the log manually and it really did take that much time, not an aggregated value. The value of "log_min_messages" in postgresql.conf is error; I have changed it to "warning" and so far have not received

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
Interesting catch, I will try to test the behavior of 'DELETE vs multiple TRUNCATE'. I'll post it here if I discover any amazing results. On Fri, Jul 6, 2012 at 2:38 PM, Daniel Farina wrote: > On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer wrote: >> 1) Truncate each table. It is too slow, I thin

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
On Fri, Jul 6, 2012 at 5:22 PM, Craig Ringer wrote: > On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote: > >> Question: Is there a possibility in PostgreSQL to do DELETE on many tables >> massively, like TRUNCATE allows. Like DELETE table1, table2, ...? > > > Yes, you can do it with a writable com
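
Both mechanisms discussed here fit in one statement. TRUNCATE already accepts a list of tables and can reset sequences, and the writable CTEs available since 9.1 cover the multi-table DELETE being asked about. The table names are placeholders:

```sql
-- TRUNCATE takes multiple tables and RESTART IDENTITY resets their
-- sequences in the same statement:
TRUNCATE TABLE table1, table2, table3 RESTART IDENTITY;

-- A writable CTE deletes from several tables in a single statement;
-- each data-modifying CTE runs exactly once even if never referenced:
WITH d1 AS (DELETE FROM table1),
     d2 AS (DELETE FROM table2)
DELETE FROM table3;
```

For the test-suite cleanup case in this thread, the TRUNCATE form also satisfies the "reset unique identifier column" requirement directly.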

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
On Fri, Jul 6, 2012 at 4:39 PM, Albe Laurenz wrote: > Stanislaw Pankevich wrote: >> PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the >> fastest way to clean each >> non-empty table and reset unique identifier column of empty ones >> >> I wonder, what is the fastest wa

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
On Fri, Jul 6, 2012 at 4:38 PM, Craig Ringer wrote: > On 07/06/2012 07:38 PM, Daniel Farina wrote: >> >> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer >> wrote: >>> >>> 1) Truncate each table. It is too slow, I think, especially for empty >>> tables. >>> >>> Really?!? TRUNCATE should be extremely

Re: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.

2012-07-09 Thread Stanislaw Pankevich
Thanks for the answer. Please, see my answers below: On Fri, Jul 6, 2012 at 2:35 PM, Craig Ringer wrote: > On 07/06/2012 07:29 PM, Craig Ringer wrote: > > On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote: > > I cannot! use transactions. > > Everything in PostgreSQL uses transactions, they a

Re: [PERFORM] Paged Query

2012-07-09 Thread Craig Ringer
On 07/09/2012 07:55 PM, Gregg Jaskiewicz wrote: Use cursors. By far the most flexible. offset/limit have their down sides. Do do cursors. Keeping a cursor open across user think time has resource costs on the database. It doesn't necessarily require keeping the transaction open (with hold cur
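
The resource cost mentioned above is easiest to see with a WITH HOLD cursor, which survives transaction commit at the price of materializing the full result set on the server at commit time. A sketch with hypothetical table and column names:

```sql
BEGIN;
DECLARE page_cur CURSOR WITH HOLD FOR
    SELECT id, title FROM items ORDER BY id;
FETCH 50 FROM page_cur;    -- first page
COMMIT;                    -- WITH HOLD keeps the cursor usable after commit
FETCH 50 FROM page_cur;    -- next page, outside any transaction
CLOSE page_cur;            -- must be closed explicitly, or it holds
                           -- server resources until the session ends
```

This is why a user who wanders off for a week with the cursor open is a problem: the materialized result and the cursor itself live until CLOSE or disconnect.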

Re: [PERFORM] Paged Query

2012-07-09 Thread Gregg Jaskiewicz
Use cursors. By far the most flexible. offset/limit have their down sides.