reindex should be faster, since you're not dumping/reloading the table
contents on top of rebuilding the index, you're just rebuilding the
index.
Robert Treat
emdeon Practice Services
Alachua, Florida
On Wed, 2005-10-12 at 13:32, Steve Poe wrote:
Would it not be faster to do a dump/reload
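The tradeoff described above can be sketched concretely. This is only an illustration, assuming a table named some_table (taken from the VACUUM output elsewhere in the thread); the dump/reload steps are shown as comments since they run outside SQL:

```sql
-- Rebuild only the indexes, in place:
REINDEX TABLE some_table;

-- The dump/reload alternative rebuilds the table contents *and* the
-- indexes, so it does strictly more work:
--   pg_dump -t some_table mydb > some_table.sql
--   psql mydb -c 'DROP TABLE some_table'   -- only after verifying the dump!
--   psql mydb < some_table.sql
```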
In the light of what you've explained below about nonremovable row
versions reported by vacuum, I wonder if I should worry about the
following type of report:
INFO: vacuuming public.some_table
INFO: some_table: removed 29598 row versions in 452 pages
DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.
First of all thanks all for the input.
I probably can't afford even the reindex till Christmas, when we have
about 2 weeks of company holiday... but I guess I'll have to do
something until Christmas.
The system should at least look like it's working all the time. I can have downtime, but only for
On Tue, Oct 18, 2005 at 05:21:37PM +0200, Csaba Nagy wrote:
INFO: vacuuming public.some_table
INFO: some_table: removed 29598 row versions in 452 pages
DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.
INFO: some_table: found 29598 removable, 39684 nonremovable row versions in 851 pages
[snip]
Yes, but it could be a disk issue because you're doing more work than
you need to. If your UPDATEs are chasing down a lot of dead tuples,
for instance, you'll peg your I/O even though you ought to have I/O
to burn.
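One way to see how many dead row versions a table is actually carrying, assuming the contrib/pgstattuple module is installed, is something like the following sketch (some_table is the table from the VACUUM output above):

```sql
-- pgstattuple scans the table and reports live vs. dead tuple counts
-- and the percentage of dead space; requires contrib/pgstattuple.
SELECT * FROM pgstattuple('some_table');
```

A high dead_tuple_percent here would support the theory that UPDATEs are wading through dead tuples.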
OK, this sounds interesting, but I don't understand: why would an
[EMAIL PROTECTED] wrote:
Have you tried reindexing your active tables?
It will cause some performance hit while you are doing it. It sounds like something is bloating rapidly on your system, and the indexes are one possible place where that could be happening.
You might consider using
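A quick way to check whether the indexes are the part that is bloating is to compare on-disk sizes from the system catalogs. A sketch, with hypothetical relation names, and assuming the page counts are fresh from a recent VACUUM or ANALYZE:

```sql
-- relpages is the size in 8 kB pages as of the last VACUUM/ANALYZE;
-- an index much larger than its table is a bloat warning sign.
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('some_table', 'some_table_pkey')
ORDER BY relpages DESC;
```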
Thanks Andrew, this explanation about the dead rows was enlightening.
Might be the reason for the slowdown I see on occasions, but not for the
case which I was first observing. In that case the updated rows are
different for each update. It is possible that each row has a few dead
versions, but
On Thu, Oct 13, 2005 at 03:14:44PM +0200, Csaba Nagy wrote:
In any case, I suppose that those disk pages should be in OS cache
pretty soon and stay there, so I still don't understand why the disk
usage is 100% in this case (with very low CPU activity, the CPUs are
mostly waiting/idle)... the
Hi all,
After a long time of reading the general list it's time to subscribe to
this one...
We have adapted our application (originally written for Oracle) to Postgres, and switched part of our business to a Postgres database. The database has in the main tables around 150 million rows, the
[snip]
Have you tried reindexing your active tables?
Not yet, the db is in production use and I have to plan for a down-time
for that... or is it not impacting the activity on the table ?
Emil
[snip]
Have you tried reindexing your active tables?
Not yet, the db is in production use and I have to plan for a down-time
for that... or is it not impacting the activity on the table ?
It will cause some performance hit while you are doing it. It sounds like
something is bloating
The disk used for the data is an external raid array, I don't know much about that right now except I think it is some relatively fast IDE stuff. In any case the operations should be cache friendly, we don't scan over and over the big tables...
Maybe you are I/O bound. Do you know if your RAID
Emil Briggs [EMAIL PROTECTED] writes:
Not yet, the db is in production use and I have to plan for a down-time
for that... or is it not impacting the activity on the table ?
It will cause some performance hit while you are doing it.
It'll also lock out writes on the table until the index is
Would it not be faster to do a dump/reload of the table than reindex or
is it about the same?
Steve Poe
On Wed, 2005-10-12 at 13:21 -0400, Tom Lane wrote:
Emil Briggs [EMAIL PROTECTED] writes:
Not yet, the db is in production use and I have to plan for a down-time
for that... or is it not
Would it not be faster to do a dump/reload of the table than reindex, or is it about the same?
Reindex is probably faster, but that's not the point. You can reindex a running system, whereas dump/restore requires downtime unless you work everything into a transaction, which is a headache, and
On Wed, Oct 12, 2005 at 06:55:30PM +0200, Csaba Nagy wrote:
Ok, that was the first thing I've done, checking out the explain of the
query. I don't really need the analyze part, as the plan is going for
the index, which is the right decision. The updates are simple one-row
How do you know? You
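The distinction matters: plain EXPLAIN only shows the plan the optimizer chose, while EXPLAIN ANALYZE actually executes the query and reports real row counts and timings, which is what exposes time spent chasing dead tuples. A sketch with a hypothetical statement (the column names are illustrative, not from the thread):

```sql
-- EXPLAIN shows only the estimated plan, no execution:
EXPLAIN UPDATE some_table SET some_col = 0 WHERE id = 42;

-- EXPLAIN ANALYZE runs the statement and reports actual times;
-- wrapping it in a rolled-back transaction avoids changing data:
BEGIN;
EXPLAIN ANALYZE UPDATE some_table SET some_col = 0 WHERE id = 42;
ROLLBACK;
```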