[snip]
Yes, but it could be a disk issue because you're doing more work than
you need to. If your UPDATEs are chasing down a lot of dead tuples,
for instance, you'll peg your I/O even though you ought to have I/O
to burn.
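A quick way to see whether dead row versions are piling up is VACUUM VERBOSE; a minimal sketch, with a hypothetical table name:

    -- Reports, per table and index, how many removable (dead) row
    -- versions were found; large counts are a sign that UPDATEs are
    -- wading through dead tuples.
    VACUUM VERBOSE orders;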
OK, this sounds interesting, but I don't understand: why would an
[EMAIL PROTECTED] wrote:
Have you tried reindexing your active tables?
It will cause some performance hit while you are doing it. It
sounds like something is bloating rapidly on your system and
the indexes are one possible place where that could be happening.
You might consider using
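If reindexing is the route taken, the commands are along these lines; table and index names are hypothetical:

    -- Rebuild every index on one bloated table (this locks the table
    -- against concurrent use while it runs, hence the performance hit):
    REINDEX TABLE orders;

    -- Or rebuild a single suspect index:
    REINDEX INDEX orders_customer_id_idx;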
Nörder-Tuitje wrote:
Hello,
I am seeing a strange effect with the following structure:
People will be wanting the output of EXPLAIN ANALYSE on that query.
They'll also ask whether you've VACUUMed, ANALYSEd and configured your
postgresql.conf correctly.
--
Richard Huxton
Archonet Ltd
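For anyone following along, those requests translate to something like this; the query and table name are hypothetical:

    -- Show the actual plan and per-node timings. Note that EXPLAIN
    -- ANALYSE really executes the statement, so wrap an UPDATE in a
    -- transaction you can roll back:
    BEGIN;
    EXPLAIN ANALYSE UPDATE orders SET status = 0 WHERE customer_id = 42;
    ROLLBACK;

    -- Reclaim dead row versions and refresh the planner's statistics:
    VACUUM ANALYSE orders;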
Thanks Andrew, this explanation about the dead rows was enlightening.
It might be the reason for the slowdown I see on occasion, but not for the
case I was first observing. In that case the updated rows are
different for each update. It is possible that each row has a few dead
versions, but
On Thu, Oct 13, 2005 at 03:14:44PM +0200, Csaba Nagy wrote:
In any case, I suppose that those disk pages should be in OS cache
pretty soon and stay there, so I still don't understand why the disk
usage is 100% in this case (with very low CPU activity, the CPUs are
mostly waiting/idle)... the
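One way to sanity-check whether the working set could plausibly fit in cache is to look at relation sizes in pg_class (sizes are in 8 kB pages and reflect the last VACUUM/ANALYZE); a sketch:

    -- Largest relations by on-disk pages:
    SELECT relname, relpages, reltuples
    FROM pg_class
    ORDER BY relpages DESC
    LIMIT 10;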
Patrick Hatcher [EMAIL PROTECTED] writes:
Pg 7.4.5
Trying to do an update of fields on a 23M-row table.
Is it normal for this process to take 16 hrs and still be running?
Are there foreign keys pointing at the table being updated? If so,
failure to index the referencing columns could create
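Had an unindexed referencing column been the culprit, the fix would be an index on the child side; hypothetical table and column names:

    -- When a referenced key is updated or deleted, each check of the
    -- referencing table is a sequential scan unless that column is
    -- indexed:
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);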
Thanks. No foreign keys, and I've been bitten by mismatched datatypes
before, so I checked for that before sending out the message :)
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Tom Lane