On 10/31/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
>
> It was a program inserting into the base table. The program ran in 200+
> threads and every thread inserts data into it. Every thread inserts a row every
> 3 seconds approx. (or they used to), but when I added more partitions the
> insert speed...
On 10/30/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
>
> I did some testing. I created an empty table with 300 partitions. Then I
> inserted some rows into it and the performance was SLOW too.
Is the problem with inserting to the partitioned table or selecting from
it? It sounds like inserting is the
On 8/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> You might be able to work around it for now by faking such a reindex
> "by hand"; that is, create a duplicate new index under a different
> name using CREATE INDEX CONCURRENTLY, then exclusive-lock the table
> for just long enough to drop the old index...
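(A minimal sketch of that swap, with a hypothetical index foo_ts_idx on foo(ts); only the drop/rename step needs the exclusive lock:)

-- build the replacement without blocking writers
CREATE INDEX CONCURRENTLY foo_ts_idx_new ON foo (ts);

-- brief exclusive lock on foo while the old index is swapped out
BEGIN;
DROP INDEX foo_ts_idx;
ALTER INDEX foo_ts_idx_new RENAME TO foo_ts_idx;
COMMIT;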
On 8/24/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> "Steven Flatt" <[EMAIL PROTECTED]> writes:
> > Why do we even need to consider calling RelationGetNumberOfBlocks or looking
> > at the pg_class.relpages entry? My understanding of the expected
On 8/24/07, Mark Kirkwood <[EMAIL PROTECTED]> wrote:
>
> Tom Lane wrote:
> >
> > The fly in the ointment is that after collecting the pg_index definition
> > of the index, plancat.c also wants to know how big it is --- it calls
> > RelationGetNumberOfBlocks. And that absolutely does look at the
>
On 8/22/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> postgres=# create table test (i integer);
> CREATE TABLE
> postgres=# insert into test select generate_series(1,1000);
> INSERT 0 1000
> postgres=# create or replace function slow(integer) returns integer as
> 'begin perform pg_sleep(0); return $1; end' language plpgsql;
>
> It makes more sense for us to have ~1 hour's worth of reindexing
> afterwards during which read performance on that partition is "compromised".
>
So, based on the docs, I was expecting read performance to be compromised
during a reindex; specifically, reads would not be allowed to use the index...
On 8/8/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> I'd make the same remark about Steven's case: if possible, don't create
> the indexes at all until you've loaded the table fully.
We considered this, however in some of our 12-hour partitions, there are
upwards of 50 or 60 million rows near the end...
On 8/8/07, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> If all you ever did was insert into that table, then you probably
> don't need to reindex. If you did mass updates/deletes mixed with
> your inserts, then perhaps you do.
>
> Do some experiments comparing pg_class.relpages for your table and
>
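(A rough way to make that comparison, with hypothetical relation names; note that relpages is only refreshed by VACUUM, ANALYZE, and a few DDL commands:)

SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relname IN ('foo', 'foo_ts_idx');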
On 7/28/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
>
> What are your vacuum_cost_* settings? If you set those too aggressively
> you'll be in big trouble.
autovacuum_vacuum_cost_delay = 100
autovacuum_vacuum_cost_limit = 200
These are generally fine, autovacuum keeps up, and there is minimal i
Postgres 8.2.4.
We have a large table, let's call it "foo", into which an automated process
periodically inserts many (hundreds of thousands or millions of) rows
at a time. It's essentially INSERT INTO foo SELECT ... FROM ... WHERE ... .
Recently, for whatever reason, the query started to run out of memory...
We're using Postgres 8.2.4.
I'm trying to decide whether it's worthwhile to implement a process that
does periodic reindexing. In a few ad hoc tests, where I've tried to set up
data similar to how our application does it, I've noticed decent performance
increases after doing a reindex as well as
On 6/25/07, Jim Nasby <[EMAIL PROTECTED]> wrote:
If you set that to 2B, that means you're 2^31-"2 billion"-100
transactions away from a shutdown when autovac finally gets around to
trying to run a wraparound vacuum on a table. If you have any number
of large tables, that could be a big problem...
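(For the arithmetic: 2^31 is 2,147,483,648, so with autovacuum_freeze_max_age at 2 billion that leaves roughly 2,147,483,648 - 2,000,000,000 - 100, i.e. about 147 million transactions, before the forced shutdown.)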
Thanks everyone. It appears that we had hacked the 502.pgsql script for our
8.1 build to disable the daily vacuum. I was not aware of this when
building and upgrading to 8.2.
So it looks like for the past two weeks, that 36-hour db-wide vacuum has
been running every 24 hours. Good for it for b...
Thanks everyone for your responses. I don't think it's realistic to change
our application infrastructure to use COPY from a stream at this point.
It's good to know that multi-row-VALUES is good up into the thousands of
rows (depending on various things, of course). That's a good enough answer
for...
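(For reference, the multi-row VALUES form under discussion looks like this, with a made-up table; it requires 8.2:)

INSERT INTO foo (ts, val)
VALUES ('2007-06-21 10:00:00', 1),
       ('2007-06-21 10:00:01', 2),
       ('2007-06-21 10:00:02', 3);
-- ...extended to a few thousand rows per statement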
On 6/21/07, Francisco Reyes <[EMAIL PROTECTED]> wrote:
Are you on FreeBSD by any chance?
I think the FreeBSD port by default installs a script that does a daily
vacuum.
Yes, FreeBSD. Do you know what script that is? And it does a db-wide
VACUUM ANALYZE every day?! That is certainly not necessary...
I can't seem to find a definitive answer to this.
It looks like Postgres does not enforce a limit on the length of an SQL
string. Great. However, is there some point at which a query string becomes
ridiculously long and affects performance? Here's my particular case:
consider an INSERT statement...
We recently upgraded a very large database (~550 GB) from 8.1.4 to 8.2.4 via
a pg_dump and pg_restore. (Note that the restore took several days.) We
had accepted the default settings:
vacuum_freeze_min_age = 100 million
autovacuum_freeze_max_age = 200 million
Due to our very high transaction rate...
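(A quick way to see how close each database is getting to autovacuum_freeze_max_age; this is a stock catalog query, not from the original thread:)

SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;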
Thanks Tom and Alvaro.
To follow up on this, I re-wrote and tweaked a number of queries (including
the one provided) to change "LEFT OUTER JOIN ... WHERE col IS NULL" clauses
to "WHERE col NOT IN (...)" clauses.
This has brought performance to an acceptable level on 8.2.
Thanks for your time,
Steve
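(A sketch of that rewrite on made-up tables; it assumes the subquery column has no NULLs, since NOT IN changes behavior when it does:)

-- anti-join spelled with an outer join (the form 8.2 was misestimating)
SELECT n.id
FROM nodes n
LEFT OUTER JOIN links l ON l.nodeid = n.id
WHERE l.nodeid IS NULL;

-- the NOT IN form that performed acceptably here
SELECT n.id
FROM nodes n
WHERE n.id NOT IN (SELECT nodeid FROM links);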
On 6/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:
If you're feeling desperate you could revert this patch in your local
copy:
http://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php
regards, tom lane
Reverting that patch has not appeared to solve our problem.
On 5/18/07, Tom Lane <[EMAIL PROTECTED]> wrote:
Yeah. 8.2 is estimating that the "nodeid IS NULL" condition will
discard all or nearly all the rows, presumably because there aren't any
null nodeid's in the underlying table --- it fails to consider that the
LEFT JOIN may inject some nulls. 8.1
Thanks for your answers and feedback.
All things considered, it is easiest (and acceptable) in this case to remove
RI between the tables where the deadlocks were occurring.
We are still looking to upgrade to 8.1.latest but that is another matter...
Steve
Hi, we're using Postgres 8.1.4.
We've been seeing deadlock errors of this form, sometimes as often as
several times per hour:
Apr 17 13:39:50 postgres[53643]: [4-1] ERROR: deadlock detected
Apr 17 13:39:50 postgres[53643]: [4-2] DETAIL: Process 53643 waits for
ShareLock on transaction 11128328
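(The generic shape of such a deadlock, as an illustration only; this is not the actual schema or statements from this thread:)

-- session A                              -- session B
BEGIN;                                    BEGIN;
UPDATE t SET x = 0 WHERE id = 1;          UPDATE t SET x = 0 WHERE id = 2;
UPDATE t SET x = 0 WHERE id = 2;          UPDATE t SET x = 0 WHERE id = 1;
-- each session now waits on the other's transaction; the server aborts one
-- with "deadlock detected". With foreign keys, similar row locks can be
-- taken implicitly by the RI triggers rather than by explicit UPDATEs.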
Yeah, I'm hoping there's an easier way. I'd have to put several thousand
entries in the pg_autovacuum table only to remove them a few minutes later.
What I really want is to disable the daemon.
Any idea why I can't just simply set autovacuum to off?
Steve
On 3/5/07, Tomas Vondra <[EMAIL PROTECTED]> wrote:
Not quite a performance question, but I can't seem to find a simple answer
to this. We're using 8.1.4 and the autovacuum daemon is running every 40
seconds cycling between 3 databases. What is the easiest way to disable the
autovacuumer for a minute or two, do some other work, then re-enable it?
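(A sketch of the obvious approach on 8.1, assuming the setting really is picked up on a plain config reload:)

# postgresql.conf
autovacuum = off     # then "pg_ctl reload"; this should not need a restart
# ... do the other work ...
autovacuum = on      # then reload again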
On 1/10/07, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
Except for the simplest partitioning cases, you'll be much better off
using a trigger on the parent table to direct inserts/updates/deletes to
the children. As a bonus, using a trigger makes it a lot more realistic
to deal with an update moving
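(A bare-bones version of such a trigger, along the lines of the example in the 8.2 partitioning docs; table names and the date range here are made up:)

CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.ts >= DATE '2007-01-10' AND NEW.ts < DATE '2007-01-11' THEN
        INSERT INTO measurements_20070110 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for timestamp %', NEW.ts;
    END IF;
    RETURN NULL;  -- keep the row out of the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();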
On 1/9/07, Simon Riggs <[EMAIL PROTECTED]> wrote:
If you are doing date range partitioning it should be fairly simple to
load data into the latest table directly. That was the way I originally
intended for it to be used. The rules approach isn't something I'd
recommend as a bulk loading option a
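(In practice that means targeting the newest child directly for bulk loads; the table names and file path below are hypothetical:)

INSERT INTO measurements_20070110 SELECT * FROM staging_batch;
-- or straight from a file:
COPY measurements_20070110 FROM '/path/to/batch.csv' WITH CSV;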
On 1/6/07, Colin Taylor <[EMAIL PROTECTED]> wrote:
Hi there, we've partitioned a table (using 8.2) by day due to the 50TB of
data (500k row size, 100G rows) we expect to store in a year.
Our performance on inserts and selects against the master table is
disappointing: 10x slower (with only 1 partition...
On 12/19/06, Tom Lane <[EMAIL PROTECTED]> wrote:
I still can't reproduce this. Using 7.4 branch tip, I did
create temp table foo(f1 varchar);
create table nottemp(f1 varchar);
\timing
insert into foo select stringu1 from tenk1 limit 100;
insert into nottemp select * from foo;
truncate foo;
ins...
Please ignore my post from earlier today. As strange as it sounds, changing
"CREATE TEMP TABLE ... AS" to "CREATE TEMP TABLE ... LIKE" appeared to fix
my performance problem because things errored out so quickly (and silently
in my test program). After checking the pgsql logs, it became clear to
...AS when the table you're
creating is temporary and you're selecting from a view?
Steve
On 12/15/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> I've been trying to reproduce the problem for days now :). I've done pr...
Tom Lane <[EMAIL PROTECTED]> wrote:
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Are the removable rows in pg_class even an issue? So what if 5000-6000 dead
> tuples are generated every hour then vacuumed? Performance continues to
> steadily decline over a few
Good question, and I agree with your point.
Are the removable rows in pg_class even an issue? So what if 5000-6000 dead
tuples are generated every hour then vacuumed? Performance continues to
steadily decline over a few days time. Memory usage does not appear to be
bloating. Open file handles
Our application is such that there is a great deal of activity at the
beginning of the hour and minimal activity near the end of the hour. Those
3 vacuums were done at (approximately) 30 minutes past, 40 minutes past, and
50 minutes past the hour, during low activity. Vacuums of pg_class look
like...
...the INSERTs
to the temp table. SELECTs are performing just as well now (24 hours since
restarting the connection) as they did immediately after restarting the
connection. INSERTs to the temp table are 5 times slower now than they were
24 hours ago.
I wonder if the problem has to do with a l
...hour. So I don't think the answer lies there...
Steve
On 12/13/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Having said that, what kinds of things should I be looking for that could
> deteriorate/bloat over time? Ordi...
...temp table has
15 columns: a timestamp, a double, and the rest integers. It has no
indexes.
Thanks,
Steve
On 12/13/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Any idea where the bloat is happening? I believe that if we were
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table (same structure as a permanent table),
then do a bulk insert from the temp table to the permanent table...
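(Roughly the pattern being described; the staging table name is illustrative:)

-- once per connection
CREATE TEMP TABLE foo_staging (LIKE foo);

-- thousands of single-row INSERTs into foo_staging go here...

-- then flush them to the permanent table in one statement
INSERT INTO foo SELECT * FROM foo_staging;
TRUNCATE foo_staging;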
Sorry, I think there's a misunderstanding here. Our system is not doing anywhere near that number of transactions per second. I meant that a single DB-wide vacuum takes on the order of a couple of weeks. The time between DB-wide vacuums is a little over a year, I believe.
Every couple...
Here is a potential problem with the auto-vacuum daemon, and I'm wondering if anyone has considered this. To avoid transaction ID wraparound, the auto-vacuum daemon will periodically determine that it needs to do a DB-wide vacuum, which takes a long time. On our system, it is on the order of a couple of weeks...