On Tue, 2003-08-05 at 17:40, Christopher Browne wrote:
Unfortunately, a configurable-via-tables pg_autovacuum is also going
to be quite different from the current unconfigurable version.
true, however I would like to preserve the unconfigured functionality
so that it can be run against a
From: Tom Lane [EMAIL PROTECTED]
Matthew T. O'Connor [EMAIL PROTECTED] writes:
So, now is precisely the time to be experimenting to find out what works
well and what features are needed.
Another quick question while I have your attention :-)
Since pg_autovacuum is a contrib module does
Josh Berkus wrote:
Matthew,
True, but I think it would be one hour once, rather than 30 minutes 4
times.
Well, generally it would be about 6-8 times at 2-4 minutes each.
Are you saying that you can vacuum a 1 million row table in 2-4
minutes? While a vacuum of the same table with an
Shridhar Daithankar wrote:
Matthew T. O'Connor wrote:
But we track tuples because we can compare against the count given by
the stats system. I don't know of a way (other than looking at the
FSM, or contrib/pgstattuple) to see how many dead pages exist.
I think making pg_autovacuum dependent
Josh Berkus wrote:
Matthew,
But we could create a config file that would store stuff in a flatfile table,
OR we could add our own system table that would be created when one
initializes pg_avd.
I don't want to add tables to existing databases, as I consider that
clutter and I never like
Josh Berkus wrote:
Matthew,
I don't see how a separate database is better than a table in the databases,
except that it means scanning only one table and not one per database. For
one thing, making it a separate database could make it hard to back up and
move your database+pg_avd
Josh Berkus wrote:
Matthew,
I certainly agree that less than 10% would be excessive, I still feel
that 10% may not be high enough though. That's why I kinda liked the
sliding scale I mentioned earlier, because I agree that for very large
tables, something as low as 10% might be useful,
Josh Berkus wrote:
Matthew,
For small tables, you don't need to vacuum too often. In the testing I
did, a small table (~100 rows) didn't really show significant performance
degradation until it had close to 1000 updates.
This is accounted for by using the threshold value. That way
Tom Lane wrote:
Chester Kustarz [EMAIL PROTECTED] writes:
vacuum is to reclaim dead tuples. this means it depends on update and
delete. analyze depends on data values/distribution. this means it depends on
insert, update, and delete. thus the dependencies are slightly different
between the 2
MTO == Matthew T O'Connor [EMAIL PROTECTED] writes:
MTO I don't run FreeBSD, so I haven't tested with FreeBSD. Recently
MTO Craig Boston reported and submitted a patch for a crash on FreeBSD,
MTO but that
some more debugging data:
(gdb) print now
$2 = {tv_sec = 1070565077, tv_usec = 216477
Joe Conway wrote:
Tom Lane wrote:
Just to be clear on this: you have to restart the postmaster to bring
the time back down? Simply starting a fresh backend session doesn't do
it?
IIRC, shared buffers was reasonable, maybe 128MB. One thing that is
worthy of note is that they are using
Joe Conway wrote:
A few pg_autovacuum questions came out of this:
First, the default vacuum scaling factor is 2, which I think implies
the big table should only get vacuumed every 56 million or so changes.
I didn't come anywhere near that volume in my tests, yet the table did
get
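As a sanity check on that figure: pg_autovacuum's contrib-era vacuum trigger was base threshold + scaling factor * reltuples. The row count below is an assumption, back-solved from the quoted 56 million, not a number from the thread:

```python
# pg_autovacuum vacuum trigger (contrib-era formula):
#   vacuum_threshold = base_threshold + scaling_factor * reltuples
# The reltuples value is hypothetical, chosen to match the quoted figure.
base_threshold = 1000        # default base threshold mentioned later in the thread
scaling_factor = 2           # the default scaling factor discussed here
reltuples = 28_000_000       # assumed size of the "big table"

vacuum_threshold = base_threshold + scaling_factor * reltuples
print(vacuum_threshold)      # 56001000 -- "56 million or so changes"
```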
Joe Conway wrote:
Yeah, I'm sure. Snippets from the log:
[...lots-o-tables...]
[2004-03-14 12:44:48 PM] added table: specdb.public.parametric_states
[2004-03-14 12:49:48 PM] Performing: VACUUM ANALYZE
public.transaction_data
[2004-03-14 01:29:59 PM] Performing: VACUUM ANALYZE
On Tue, 2004-03-16 at 23:49, Joe Conway wrote:
I have tested Tom's original patch now. The good news -- it works great
in terms of reducing the load imposed by vacuum -- almost to the level
of being unnoticeable. The bad news -- in a simulation test which loads
an hour's worth of data, even
Andrew Sullivan wrote:
The vacuum delay stuff that you're working on may help, but I can't
really believe it's your salvation if this is happening after only a
few minutes. No matter how much you're doing inside those functions,
you surely can't be causing so many dead tuples that a vacuum is
Heiko Kehlenbrink wrote:
i use suse 8.1
postgresql 7.2 compiled from the rpms for using postgis, but that is
Try v7.4, there are many performance improvements. It may not make up
all the differences but it should help.
Paul Thomas wrote:
Looks like he's using the default postgresql.conf settings in which
case I'm not surprised at pg looking so slow. His stated use of foreign
keys invalidates the tests anyway as MyISAM tables don't support FKs
so we're probably seeing FK check overheads in pg that are simply
On Wed, 2004-05-19 at 21:59, Robert Creager wrote:
When grilled further on (Wed, 19 May 2004 21:20:20 -0400 (EDT)),
Bruce Momjian [EMAIL PROTECTED] confessed:
Did we ever come to a conclusion about excessive SMP context switching
under load?
I just figured out what was causing the
Is there any way to avoid doing a periodic VACUUM FULL on this table,
given the fairly radical usage pattern? Or is the (ugly) answer to
redesign our application to avoid this usage pattern?
Yes, you should be able to avoid doing periodic VACUUM FULL. The problem
is that your table needs to
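A toy model (illustrative numbers only, nothing from the thread) of why frequent plain VACUUM bounds table size without VACUUM FULL, as long as the free space map can track the reclaimed space:

```python
# Between vacuums, each UPDATE/DELETE leaves a dead tuple behind.
# A plain VACUUM makes that space reusable (via the FSM) instead of
# shrinking the file, so the file settles at roughly:
#   live_rows + changes_per_day * days_between_vacuums
live_rows = 1_000_000        # hypothetical table
changes_per_day = 50_000     # hypothetical update/delete rate

def steady_state_size(days_between_vacuums: int) -> int:
    """Rough steady-state tuple slots occupied by the table file."""
    return live_rows + changes_per_day * days_between_vacuums

print(steady_state_size(7))  # weekly plain vacuum -> 1350000 slots
print(steady_state_size(1))  # daily plain vacuum  -> 1050000 slots
```

The point of the model: vacuuming more often shrinks the steady state, so a sufficiently frequent plain VACUUM can make VACUUM FULL unnecessary.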
Jim Ewert wrote:
When I went to 7.4.3 (Slackware 9.1) w/ JDBC, the improvements are that it doesn't
initially take much memory (have 512M) and didn't swap. I ran a full vacuum and a
cluster before installation, however speed degraded to 1 *second* / update of one row
in 150 rows of data, within
Gaetano Mendola wrote:
Tom Lane wrote:
| Given the nature of the data (login times), I'd imagine that the problem
| is simply that he hasn't analyzed recently enough. A bump in stats
| target may not be needed, but he's going to have to re-analyze that
| column often if he wants this sort of
Gaetano Mendola wrote:
Well I think pg_autovacuum as is in 7.4 can not help me for this particular
table.
The table has 4.8 million rows and I get almost 10252 new
entries per day for that table.
I'm using pg_autovacuum with -a 200 -A 0.8; this means a threshold for
that table equal to: 3849008
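That threshold is reproducible from pg_autovacuum's analyze trigger, -a + -A * reltuples. The exact reltuples below is back-solved from the quoted threshold, so treat it as an assumption:

```python
# pg_autovacuum analyze trigger: threshold = -a + -A * reltuples
base = 200                   # -a 200
scale = 0.8                  # -A 0.8
reltuples = 4_811_010        # back-solved; the post says ~4.8 million rows

threshold = base + scale * reltuples
print(int(threshold))        # 3849008, the quoted threshold
```

At roughly 10252 new entries per day, reaching that threshold would take about a year, which is presumably why autovacuum never touches the table.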
VACUUM FULL ANALYZE every 3 hours seems a little severe. You will
probably be served just as well by VACUUM ANALYZE. But you probably
don't need the VACUUM part most of the time. You might try doing an
ANALYZE on the specific tables you are having issues with. Since
ANALYZE should be
Stephane Tessier wrote:
I think with your help guys I'll do it!
I'm working on it!
I'll work on theses issues:
we have space for more ram (we use 2 gigs out of a possible 3 gigs)
iowait is very high (98%) -- looks like postgresql is waiting for io access
raid5 -- raid0: if i'm right raid5 uses 4
Lending, Rune wrote:
Hello all.
I am managing a large database with lots of transactions in different
tables.
The largest tables have around 5-6 million tuples and around
5-6 inserts and maybe 2 updates per day,
while the smallest tables have only a few tuples and a few updates
Andrew Sullivan wrote:
Probably the most severe objection to doing things this way is that the
selected plan could change unexpectedly as a result of the physical
table size changing. Right now the DBA can keep tight rein on actions
that might affect plan selection (ie, VACUUM and ANALYZE), but
Matt Nuzum wrote:
To me, these three queries seem identical... why doesn't the first one
(simplest to understand and write) go the same speed as the third one?
If you look at the explain output, you will notice that only the 3rd
query is using an Index Scan, where as the 1st and 2nd are doing
David Parker wrote:
We're using postgresql 7.4.5. I've only recently put pg_autovacuum in
place as part of our installation, and I'm basically taking the
defaults. I doubt it's a problem with autovacuum itself, but rather with
my configuration of it. I have some reading to do, so any pointers to
the default threshold of 1000. Are there statistics which only get
generated by vacuum?
I've attached a gzip of the pg_autovacuum log file, with -d 3.
Thanks again.
- DAP
-Original Message-
From: Matthew T. O'Connor [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 17, 2004 11:41 AM
Josh Berkus wrote:
Matt,
I had one comment on the pg_autovacuum section. Near the bottom it
lists some of its limitations, and I want to clarify the 1st one: "Does
not reset the transaction counter." I assume this is talking about the
xid wraparound problem? If so, then that bullet can be
Gaetano Mendola wrote:
pg_class after the vacuum full for that table
 relfilenode | relname  | relpages |  reltuples
-------------+----------+----------+-------------
       18376 | messages |    63307 | 1.60644e+06
pg_class before the vacuum full for that table
relfilenode | relname | relpages |
The version that shipped with 8.0 should be fine. The only versions that
had the problem Tom referred to are the early 7.4.x releases.
Did you get my other message about information from the stats system
(I'm not sure why my other post has yet to show up on the performance
list).
Matthew
I would rather keep this on list since other people can chime in.
Otto Blomqvist wrote:
It does not seem to be a Stats collector problem.
oid | relname | relnamespace | relpages | relisshared | reltuples |
schemaname | n_tup_ins | n_tup_upd | n_tup_del
From: Matthew T. O'Connor matthew@zeut.net
To: Otto Blomqvist [EMAIL PROTECTED];
pgsql-performance@postgresql.org
Sent: Thursday, March 24, 2005 3:58 PM
Subject: Re: [PERFORM] pg_autovacuum not having enough suction ?
I would rather keep this on list since other people can chime in.
Otto Blomqvist wrote:
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
Hmm, the value in reltuples should be accurate after a vacuum (or
vacuum analyze); if it's not, it's a vacuum bug or something is going on
that isn't understood. If you or pg_autovacuum are running plain
analyze commands
Tom Lane wrote:
I wrote:
One thing that is possibly relevant here is that in 8.0 a plain VACUUM
doesn't set reltuples to the exactly correct number, but to an
interpolated value that reflects our estimate of the steady state
average between vacuums. I wonder if that code is wrong, or if it's
Well, the simple answer is that pg_autovacuum didn't see 10,000 inserts,
updates or deletes.
pg_autovacuum saw: 476095 - 471336 = 4759 U/D's relevant for
vacuuming, and
634119 - 629121 = 4998 I/U/D's relevant for performing analyze.
The tough question is why is pg_autovacuum not seeing all
Mindaugas Riauba wrote:
The vacuum cost parameters can be adjusted to make vacuums fired
by pg_autovacuum less of a burden. I haven't got any specific numbers
to suggest, but perhaps someone else does.
It looks like it's not only vacuum that causes our problems. vacuum_cost
seems to lower vacuum
Mindaugas Riauba wrote:
AFAICT the vacuum is doing what it is supposed to, and the problem has
to be just that it's not being done often enough. Which suggests either
an autovacuum bug or your autovacuum settings aren't aggressive enough.
-D -d 1 -v 1000 -V 0.5 -a 1000 -A 0.1 -s 10
Mindaugas Riauba wrote:
Might be aggressive enough, but might not. I have seen some people run
-V 0.1. Also you probably don't need -A that low. This could be an issue
where analyze results in an inaccurate reltuples value which is
preventing autovacuum from doing its job. Could you please run
Robert Creager wrote:
For 8.0.3, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum
analyze every 5 minutes.
Are you sure that pg_autovacuum is doing its job? Meaning are you sure
it's vacuuming as often as needed? Try to run it with -d2 or so and
make sure that it is
always take 30 seconds (when I'm watching).
Cheers,
Rob
When grilled further on (Sun, 17 Jul 2005 23:48:20 -0400),
Matthew T. O'Connor matthew@zeut.net confessed:
Robert Creager wrote:
For 8.0.3, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum
analyze every 5 minutes
Tom Lane wrote:
Robert Creager [EMAIL PROTECTED] writes:
I've got vacuum_cost_delay = 10 in the conf file for 8.0.3.
Hmm, did you read this thread?
http://archives.postgresql.org/pgsql-performance/2005-07/msg00088.php
It's still far from clear what's going on there, but it might be
Jim C. Nasby wrote:
Small tables are most likely to have either very few updates (ie: a
'lookup table') or very frequent updates (ie: a table implementing a
queue). In the former, even with vacuum_threshold = 0, vacuum will be a
very rare occurrence. In the latter case, a high threshold is likely
Aaron Turner wrote:
So I'm trying to figure out how to optimize my PG install (8.0.3) to
get better performance without dropping one of my indexes.
What about something like this:
begin;
drop index slow_index_name;
update;
create index slow_index_name;
commit;
vacuum;
Matt
More detail please. It sounds like you are running 8.1 and talking about
the integrated autovacuum, is that correct? Also, what is the message
specifically from pgadmin?
Matt
Antoine wrote:
Hi,
I have enabled the autovacuum daemon, but occasionally still get a
message telling me I need to run
Gábriel Ákos wrote:
Luke Lonergan wrote:
Gabriel,
On 3/27/06 10:05 AM, Gábriel Ákos [EMAIL PROTECTED] wrote:
That gave me an idea. I thought that autovacuum was doing it right, but I
issued a vacuum full analyze verbose, and it ran all day.
After that I've tweaked memory settings a
Mikael Carneholm wrote:
This is where a last_vacuumed (and last_analyzed) column in
pg_statistic(?) would come in handy. Each time vacuum or analyze has
finished, update the row for the specific table that was
vacuumed/analyzed with a timestamp in the last_vacuumed/last_analyzed
column. No more
Csaba Nagy wrote:
On Wed, 2006-09-27 at 18:08, Edoardo Ceccarelli wrote:
How can I configure the vacuum to run after the daily batch insert/update?
Check out this:
http://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html
By inserting the right row you can disable
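For 8.1's integrated autovacuum, the per-table knobs live in the pg_autovacuum catalog linked above. A hedged sketch of composing such a row; the column list follows the 8.1 documentation and -1 means "use the default", but verify against your exact version before running it:

```python
# Sketch: build the INSERT that disables autovacuum for one table in
# PostgreSQL 8.1 by adding a row to the pg_autovacuum system catalog.
# Column names are taken from the 8.1 docs; -1 means "use the default".
def disable_autovacuum_sql(table: str) -> str:
    return (
        "INSERT INTO pg_autovacuum "
        "(vacrelid, enabled, vac_base_thresh, vac_scale_factor, "
        "anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit) "
        f"VALUES ('{table}'::regclass, false, -1, -1, -1, -1, -1, -1);"
    )

print(disable_autovacuum_sql("daily_batch_table"))  # hypothetical table name
```

With enabled = false, autovacuum skips the table entirely, so a manual VACUUM after the nightly batch becomes the table's only maintenance.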
Tobias Brox wrote:
[Matthew T. O'Connor - Wed at 02:33:10PM -0400]
In addition autovacuum respects the work of manual or cron based
vacuums, so if you issue a vacuum right after a daily batch insert /
update, autovacuum won't repeat the work of that manual vacuum.
I was experimenting
Carlo Stonebanks wrote:
Just a wild guess, but the performance problem sounds like maybe as your
data changes, eventually the planner moves some query from an index scan
to a sequential scan, do you have any details on what queries are taking
so long when things are running slow? You can turn
Jeremy Haile wrote:
I changed the table-specific settings so that the ANALYZE base threshold
was 5000 and the ANALYZE scale factor was 0. According to the documented
formula: analyze threshold = analyze base threshold + analyze scale
factor * number of tuples, I assumed that this would cause the
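Jeremy's reading of the documented formula checks out numerically (values taken from the message):

```python
# analyze_threshold = analyze_base_threshold + analyze_scale_factor * reltuples
base = 5000                  # table-specific ANALYZE base threshold
scale = 0.0                  # table-specific ANALYZE scale factor

def analyze_threshold(reltuples: int) -> float:
    return base + scale * reltuples

# With a scale factor of 0 the threshold is a flat 5000 changes,
# regardless of how large the table grows.
print(analyze_threshold(1_000))       # 5000.0
print(analyze_threshold(10_000_000))  # 5000.0
```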
Jeremy Haile wrote:
Also, are other auto-vacuums and auto-analyzes showing up in the
pg_stats table? Maybe it's a stats system issue.
No tables have been vacuumed or analyzed today. I had thought that this
problem was due to my pg_autovacuum changes, but perhaps not. I
restarted
Matthew wrote:
For instance, the normal B-tree index on (a, b) is able to answer queries
like "a = 5 AND b > 1" or "a > 5". An R-tree would be able to index these,
plus queries like "a > 5 AND b > 1".
Sorry in advance if this is a stupid question, but how is this better
than two indexes, one on a and one
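The distinction can be sketched with a sorted list standing in for a B-tree on (a, b) (illustrative only): equality on the leading column plus a range on the second, or a range on the leading column alone, each map to one contiguous slice of the sort order, while a range on both columns does not.

```python
import bisect

# A sorted list of (a, b) pairs stands in for a B-tree index on (a, b).
index = sorted((a, b) for a in range(10) for b in range(10))

# "a = 5 AND b > 1": one contiguous slice of the sort order.
lo = bisect.bisect_right(index, (5, 1))
hi = bisect.bisect_left(index, (6,))       # entries with a == 5 end here
contiguous = index[lo:hi]
assert all(a == 5 and b > 1 for a, b in contiguous)

# "a > 5": also contiguous (a suffix of the index).
suffix = index[bisect.bisect_left(index, (6,)):]
assert all(a > 5 for a, _ in suffix)

# "a > 5 AND b > 1": the matches are NOT contiguous -- rows with
# b <= 1 are interleaved, so a B-tree has to scan and filter here,
# which is the case an R-tree-style index could serve directly.
matches = [i for i, (a, b) in enumerate(index) if a > 5 and b > 1]
assert matches[-1] - matches[0] + 1 > len(matches)
```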
Steve Poe wrote:
The owners of the animal hospital where I work want to consider live/hot
backups throughout the day so we're less likely to lose a whole
day of transactions. We use Postgresql 8.0.15. We do 3AM
backups, using pg_dumpall, to a file when there is very little activity.
You
Steve Poe wrote:
At this point, I am just moving the pg_dumpall file to another server. Pardon
my question: how would you 'ship the log files'?
[ You should cc the mailing list so that everyone can benefit from the
conversation. ]
RTM: