On Wed, 2006-09-27 at 18:08 +0200, Edoardo Ceccarelli wrote:
I have read that autovacuum cannot check the system load before
launching a vacuum, but is there any patch for that? It would sort
out the problem in a good and simple way.
In some cases the solution to high load is to vacuum
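For reference, PostgreSQL's cost-based vacuum delay (added in 8.0, and usable by autovacuum since 8.1) approaches this from the other direction: rather than checking system load before starting, it throttles vacuum's own I/O while it runs. A sketch of the relevant postgresql.conf settings, with illustrative values only:

```
# postgresql.conf -- cost-based vacuum throttling (example values)
vacuum_cost_delay = 10              # ms to sleep once the cost limit is reached
vacuum_cost_limit = 200             # accumulated I/O cost that triggers a sleep
autovacuum_vacuum_cost_delay = 20   # same mechanism, for autovacuum workers
```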
On Mon, 2005-11-14 at 23:02 -0500, Tom Lane wrote:
Tim Allen [EMAIL PROTECTED] writes:
We've seen reports of people firing this particular foot-gun before,
haven't we? Would it make sense to rename pg_xlog to something that
doesn't sound like it's just full of log files? E.g. pg_wal -
I'd appreciate it if anyone could share your experience
in configuring things on the filer for optimal
performance, or any recommendation that I should be
aware of.
Netapps are great things. Just beware that you'll be using NFS, and NFS
drivers on many operating systems have been known to be
What I did next is put a trigger on pg_attribute that should, in theory,
on insert and update, fire a function that will increment a version
System tables do not use the same process for row insertion / updates as
the rest of the system. Your trigger will rarely be fired.
insert into state (state_code,state) values ('GU','Guam');
drop table whitepage;
delete from state where state_code = 'GU';
ERROR: Relation whitepage does not exist
Old version of PostgreSQL? Effort went into cleaning up inter-object
dependencies in 7.3. I don't recall having that
Hm. Evidently not :-(. The COMMENT ON DATABASE facility is a bit bogus
anyway (since there's no way to make the comments visible across
databases). You might be best advised not to use it.
Hackers: this seems like an extremely bad side-effect of what we thought
was a simple addition of a
3. Ignore the specified DB name, store the comment as the description
of the current DB; possibly give a warning saying we're doing so.
This would allow correct restoration of dumps into different DBs,
but I think people would find it awfully surprising :-(
I like this one for 7.4
have preferred to send a patch. The only website I can find source for
(without trying hard) is developers.postgresql.org.
--
Rod Taylor [EMAIL PROTECTED]
PGP Key: http://www.rbt.ca/rbtpub.asc
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
email_bank_mailing_lists where query_id=499;
NOTICE:  QUERY PLAN:

Aggregate  (cost=6863.24..6863.24 rows=1 width=4)
  ->  Seq Scan on email_bank_mailing_lists  (cost=0.00..6788.24 rows=30001 width=4)

EXPLAIN
--
Rod Taylor
the table. Since the table fetches are random, the hard drive will
probably incur a seek for each tuple found in the index. Those seeks
add up to more time than a sequential scan, which needs far fewer
seeks and drive head movements.
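The arithmetic here can be sketched with a toy I/O cost model. The seek time, throughput, page size, and table sizes below are illustrative assumptions, not PostgreSQL's actual planner constants:

```python
# Toy I/O cost model: why many random index probes can cost more than
# one sequential scan. All numbers are illustrative assumptions.

SEEK_MS = 8.0        # avg seek + rotational latency per random page read
SEQ_MB_PER_S = 80.0  # sustained sequential read throughput
PAGE_KB = 8          # PostgreSQL page size

def seq_scan_ms(table_pages: int) -> float:
    """Time to read the whole table sequentially."""
    mb = table_pages * PAGE_KB / 1024
    return mb / SEQ_MB_PER_S * 1000

def index_scan_ms(matching_tuples: int) -> float:
    """Time if every matching tuple triggers a random page fetch."""
    return matching_tuples * SEEK_MS

# A 10,000-page (~78 MB) table:
print(f"seq scan:   {seq_scan_ms(10_000):.0f} ms")    # under a second
print(f"index scan: {index_scan_ms(30_000):.0f} ms")  # 30k random fetches: minutes
```

With 30,000 matching tuples, the random fetches dwarf the cost of just streaming the whole table, which is why the planner picks the sequential scan above.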
--
Rod Taylor
if they actually try to use PostgreSQL to get at the data.
There are a couple of tools which were designed to recover database data
while the db is not running.
--
Rod Taylor