Yes, you're right! The process checks whether all these tables exist before
creating them, so it might be the SELECT that takes time. To check existence, I
use the following query:
select * from pg_tables where tablename='the_table';
Maybe it's not the best way. And I launch one query per table!
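For what it's worth, a minimal sketch of doing the existence check in a single round trip instead of one query per table (the schema and table names here are placeholders):

    -- Assumes the tables live in the public schema; adjust as needed.
    SELECT tablename
    FROM pg_tables
    WHERE schemaname = 'public'
      AND tablename IN ('table_a', 'table_b', 'table_c');
    -- Any name missing from the result still needs to be created.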
On 7/9/2012 11:14 PM, Maxim Boguk wrote:
On Tue, Jul 10, 2012 at 4:03 PM, David Kerr <d...@mr-paradox.net> wrote:
On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:
But what appears to be happening is that all of the data is
being written out at the end of the checkpoint.
On Tue, Jul 10, 2012 at 4:03 PM, David Kerr wrote:
> On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:
>
>> But what appears to be happening is that all of the data is being written
>> out at the end of the checkpoint.
>>
>> This happens at every checkpoint while the system is under load.
On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:
>
>
> But what appears to be happening is that all of the data is being written out
> at the end of the checkpoint.
>
> This happens at every checkpoint while the system is under load.
>
> I get the feeling that this isn't the correct behavior and I've done
> something wrong.
On Jul 9, 2012, at 10:52 PM, Jeff Janes wrote:
> On Mon, Jul 9, 2012 at 10:39 PM, David Kerr wrote:
>>
>> I thought that the idea of checkpoint_completion_target was that we try to
>> finish writing out the data throughout the entire checkpoint (leaving some
>> room to spare, in my case 30% of the total estimated checkpoint time)
On Mon, Jul 9, 2012 at 10:39 PM, David Kerr wrote:
>
> I thought that the idea of checkpoint_completion_target was that we try to
> finish writing out the data throughout the entire checkpoint (leaving some
> room to spare, in my case 30% of the total estimated checkpoint time)
>
> But what appears to be happening is that all of the data is being written
> out at the end of the checkpoint.
>
> This happens at every checkpoint while the system is under load.
>
> I get the feeling that this isn't the correct behavior and I've done
> something wrong.
>
It's not an ac
Howdy!
I'm trying to figure out why checkpointing is completely pegging my I/O under
moderate to high write load.
I'm on PG9.1.1, RHEL 6.2 x64
checkpoint_completion_target = 0.7
checkpoint_timeout = 10m
Jul 10 00:32:30 perf01 postgres[52619]: [1895-1] user=,db= LOG: checkpoint
starting: time
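As a side note, one way to watch what the checkpointer is actually doing (a sketch; the pg_stat_bgwriter view and these columns exist on 9.1):

    -- Timed vs. requested checkpoints, and who is writing the buffers:
    -- the checkpointer, the background writer, or the backends themselves.
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;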
On 07/10/2012 10:25 AM, Yan Chunlu wrote:
I didn't set log_min_duration_statement in postgresql.conf, but I execute
dbapi_con.cursor().execute("SET log_min_duration_statement to 30")
for every connection.
OK, same effect: you're only logging slow statements.
It's not at all surprising t
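For reference, a minimal sketch of the two ways to set this (the 30 ms threshold is the one from the snippet above):

    -- Per session: affects only the current connection.
    -- Note: changing this setting per session requires superuser privileges.
    SET log_min_duration_statement = 30;
    SHOW log_min_duration_statement;  -- verify the effective value

Setting log_min_duration_statement = 30 in postgresql.conf followed by a reload would apply it to every connection instead.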
On 07/09/2012 05:20 PM, Yan Chunlu wrote:
The value of "log_min_messages" in postgresql.conf was error; I have
changed it to "warning". So far I have not received any warnings; still
waiting.
When trying to track down performance issues, increasing logging to at
least `info' would seem to be sensible.
On Mon, Jul 9, 2012 at 4:50 PM, Craig Ringer wrote:
>
> When/if you do need a count of a single table without any filters, a common
> trick is to use table statistics to return an approximation. If your
> autovacuum is running regularly it's usually a very good approximation, too.
>
> Sounds lik
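The statistics trick looks roughly like this (a sketch; 'the_table' is a placeholder):

    -- reltuples is maintained by VACUUM and ANALYZE, so this is an
    -- estimate rather than an exact count, but it returns instantly.
    SELECT reltuples::bigint AS approximate_row_count
    FROM pg_class
    WHERE relname = 'the_table';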
On 07/10/2012 06:24 AM, Misa Simic wrote:
Hi Andrew,
Sure... We are sending data in Json to clients
{
total:6784,
data:[50 rows for desired page]
}
SELECT count(*) FROM table (without WHERE, without joins) used to have
bad performance... However, in a real scenario we have never had a case
without joins and WHERE...
On 07/09/2012 09:22 PM, Shaun Thomas wrote:
On 07/09/2012 07:02 AM, Craig Ringer wrote:
Do do cursors.
Did you mean "Do not use cursors" here?
Oops. "So do cursors".
Then the user goes away on a week's holiday and leaves their PC at
your "next" button.
This exactly. Cursors have limited functionality that isn't directly
disruptive to the database in general.
Hi Andrew,
Sure... We are sending data in Json to clients
{
total:6784,
data:[50 rows for desired page]
}
SELECT count(*) FROM table (without WHERE, without joins) used to have
bad performance... However, in a real scenario we have never had a case
without joins and WHERE... Join columns are always...
On Mon, Jul 9, 2012 at 1:46 PM, Andrew Dunstan wrote:
>
> On 07/09/2012 01:41 PM, Misa Simic wrote:
>
>> From my experience users very rarely go to the ending pages... it would be
>> easier for them to sort the data by a field to get those rows in the very
>> first pages...
>
> Yeah, the problem really is that most client code wants to know how many
> pages there are, even
On 07/09/2012 01:41 PM, Misa Simic wrote:
From my experience users very rarely go to the ending pages... it would be
easier for them to sort the data by a field to get those rows in the very
first pages...
Yeah, the problem really is that most client code wants to know how many
pages there are, even
2012/7/9 Gregg Jaskiewicz
> Use cursors.
> By far the most flexible. offset/limit have their down sides.
>
Well, I am not aware of any downsides to LIMIT/OFFSET that do not exist in
the other solutions for paged queries... But I agree there must always be
some compromise between flexibility...
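For concreteness, a minimal LIMIT/OFFSET paged query (table and column names are placeholders). The usual downside is that the server still has to produce and discard every skipped row, so deep pages get progressively slower:

    -- Page 40 at 50 rows per page: rows 1951-2000 of the sorted result.
    SELECT id, title
    FROM items
    ORDER BY id
    LIMIT 50 OFFSET 1950;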
On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET wrote:
> Hi,
>
> Thank you all for your help.
>
> @Jeff : my daemon creates these tables at start time so it doesn't do
> anything else at the same time. The CPU is loaded between 20% and 25%.
How does it decide which tables to create? Is it query
On Mon, Jul 9, 2012 at 8:16 AM, Craig James wrote:
>
> A good solution to this general problem is "hitlists." I wrote about this
> concept before:
>
> http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php
>
>
I implemented this exact strategy in our product years ago. Our queri
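The hitlist idea, roughly sketched (the table, column, and predicate below are illustrative assumptions, not Craig James's exact schema): run the expensive query once, materialize the matching ids with a position, then page with cheap range scans:

    -- Materialize the result of the expensive query once per search.
    CREATE TABLE hitlist_session_42 AS
    SELECT row_number() OVER (ORDER BY score DESC) AS pos, item_id
    FROM items
    WHERE score > 0.5;  -- stand-in for the expensive search predicate

    -- Each page, and the total count, are then trivial.
    SELECT item_id FROM hitlist_session_42 WHERE pos BETWEEN 51 AND 100;
    SELECT count(*) FROM hitlist_session_42;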
On Mon, Jul 9, 2012 at 6:22 AM, Shaun Thomas wrote:
> On 07/09/2012 07:02 AM, Craig Ringer wrote:
>
>> Do do cursors.
>
> Did you mean "Do not use cursors" here?
>
>> Then the user goes away on a week's holiday and leaves their PC at
>> your "next" button.
>
> This exactly. Cursors have limited functionality that isn't directly
> disruptive to the database in general.
On 07/09/2012 07:02 AM, Craig Ringer wrote:
Do do cursors.
Did you mean "Do not use cursors" here?
Then the user goes away on a week's holiday and leaves their PC at
your "next" button.
This exactly. Cursors have limited functionality that isn't directly
disruptive to the database in general.
Marc, thanks for the answer.
Nah, these don't seem universal enough, or easy enough to hook into the
existing truncation strategies used in the Ruby world.
On Fri, Jul 6, 2012 at 6:24 PM, Marc Mamin wrote:
>
> Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows < 100 (not huge)
Thanks for all the help. I checked the possibilities and found that:
1. the tuple size is small;
2. I checked the log manually and the query indeed cost that much time; the
figure is not aggregated.
The value of "log_min_messages" in postgresql.conf was error; I have changed
it to "warning", and so far I have not received any warnings; still waiting.
Interesting catch; I will try to test the behavior of DELETE vs.
multiple TRUNCATEs.
I'll post it here if I discover any amazing results.
On Fri, Jul 6, 2012 at 2:38 PM, Daniel Farina wrote:
> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer wrote:
>> 1) Truncate each table. It is too slow, I think, especially for empty
>> tables.
On Fri, Jul 6, 2012 at 5:22 PM, Craig Ringer wrote:
> On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote:
>
>> Question: Is there a possibility in PostgreSQL to do DELETE on many tables
>> massively, like TRUNCATE allows. Like DELETE table1, table2, ...?
>
> Yes, you can do it with a writable common table expression.
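A minimal sketch of that writable-CTE approach (table names are placeholders; data-modifying CTEs need PostgreSQL 9.1 or later):

    -- One statement that deletes from several tables.
    WITH t1 AS (DELETE FROM table1),
         t2 AS (DELETE FROM table2)
    DELETE FROM table3;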
On Fri, Jul 6, 2012 at 4:39 PM, Albe Laurenz wrote:
> Stanislaw Pankevich wrote:
>> PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the
>> fastest way to clean each non-empty table and reset the unique identifier
>> column of empty ones.
>>
>> I wonder, what is the fastest way
On Fri, Jul 6, 2012 at 4:38 PM, Craig Ringer wrote:
> On 07/06/2012 07:38 PM, Daniel Farina wrote:
>> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer wrote:
>>>
>>> 1) Truncate each table. It is too slow, I think, especially for empty
>>> tables.
>>
>> Really?!? TRUNCATE should be extremely fast.
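For reference, a sketch of cleaning all the tables in one statement (names are placeholders); RESTART IDENTITY also resets the owned sequences, which covers the "reset unique identifier" part of the question:

    TRUNCATE table1, table2, table3 RESTART IDENTITY;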
Thanks for the answer.
Please see my answers below:
On Fri, Jul 6, 2012 at 2:35 PM, Craig Ringer wrote:
> On 07/06/2012 07:29 PM, Craig Ringer wrote:
>
> On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:
>
> I cannot! use transactions.
>
> Everything in PostgreSQL uses transactions; they are not optional.
On 07/09/2012 07:55 PM, Gregg Jaskiewicz wrote:
Use cursors.
By far the most flexible. offset/limit have their down sides.
Do do cursors.
Keeping a cursor open across user think time has resource costs on the
database. It doesn't necessarily require keeping the transaction open
(WITH HOLD cursors).
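A minimal sketch of the WITH HOLD variant (query and cursor name are placeholders). The cursor survives the commit because its result set is materialized at commit time, so it still ties up server-side resources until it is closed:

    BEGIN;
    DECLARE page_cur CURSOR WITH HOLD FOR
        SELECT id, title FROM items ORDER BY id;
    COMMIT;                  -- the cursor outlives the transaction

    FETCH 50 FROM page_cur;  -- first page
    FETCH 50 FROM page_cur;  -- next page
    CLOSE page_cur;          -- release the materialized result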
Use cursors.
By far the most flexible. offset/limit have their down sides.