On Tue, Oct 18, 2005 at 05:21:37PM +0200, Csaba Nagy wrote:
> INFO: vacuuming "public.some_table"
> INFO: "some_table": removed 29598 row versions in 452 pages
> DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.
> INFO: "some_table": found 29598 removable, 39684 nonremovable row
> versions in 851 pages
First of all thanks all for the input.
I probably can't afford even the reindex till Christmas, when we have
about 2 weeks of company holiday... but I guess I'll have to do
something until Christmas.
The system should at least appear to be working all the time. I can have
downtime, but only for short periods.
In the light of what you've explained below about "nonremovable" row
versions reported by vacuum, I wonder if I should worry about the
following type of report:
INFO: vacuuming "public.some_table"
INFO: "some_table": removed 29598 row versions in 452 pages
DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.
INFO: "some_table": found 29598 removable, 39684 nonremovable row
versions in 851 pages
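If it would help to see how much of a table is dead space without waiting
for the next vacuum run, contrib/pgstattuple can report it directly. A
minimal sketch, assuming that contrib module has been loaded into the
database and using the table name from the report above:

  -- live vs. dead tuples and free space for one table
  SELECT tuple_count, dead_tuple_count, dead_tuple_percent, free_percent
  FROM pgstattuple('public.some_table');

Keep in mind that vacuum's "nonremovable" count also includes dead rows
that are still visible to some open transaction, so a long-running
transaction can keep that number high.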
reindex should be faster, since you're not dumping/reloading the table
contents on top of rebuilding the index; you're just rebuilding the
index.
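A minimal sketch of that, with a hypothetical index name (REINDEX TABLE
rebuilds every index on the table in one pass):

  -- rebuild a single bloated index
  REINDEX INDEX some_table_pkey;
  -- or rebuild all indexes on the table at once
  REINDEX TABLE some_table;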
Robert Treat
emdeon Practice Services
Alachua, Florida
On Wed, 2005-10-12 at 13:32, Steve Poe wrote:
>
> Would it not be faster to do a dump/reload of the table than reindex or
> is it about the same?
On Thu, Oct 13, 2005 at 03:14:44PM +0200, Csaba Nagy wrote:
> In any case, I suppose that those disk pages should be in OS cache
> pretty soon and stay there, so I still don't understand why the disk
> usage is 100% in this case (with very low CPU activity, the CPUs are
> mostly waiting/idle)... th
On 10/13/05, Csaba Nagy <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-10-13 at 14:40, Andrew Sullivan wrote:
> > On Thu, Oct 13, 2005 at 10:15:03AM +0200, Csaba Nagy wrote:
> > > And how would the analyze help in finding this out ? I thought it would
> > > only show me additionally the actual timings,
Thanks Andrew, this explanation about the dead rows was enlightening.
Might be the reason for the slowdown I see on occasion, but not for the
case I was first observing. In that case the updated rows are different
for each update. It is possible that each row has a few dead versions,
but not that many.
On Thu, Oct 13, 2005 at 10:15:03AM +0200, Csaba Nagy wrote:
>
> OK, this sounds interesting, but I don't understand: why would an update
> "chase down a lot of dead tuples" ? Should I read up on some docs, cause
> I obviously don't know enough about how updates work on postgres...
Right. Here's the thing: in Postgres an UPDATE never overwrites a row in
place. It creates a new version of the row and leaves the old version
behind as a dead tuple, which stays around until VACUUM reclaims it. If
the same rows are updated over and over between vacuums, a later UPDATE
that looks a row up through the index has to wade through all those dead
versions to find the live one, and that can turn into a lot of extra I/O.
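A toy illustration of the effect, with made-up names, that can be run on
a scratch database:

  -- each UPDATE leaves the previous row version behind as a dead tuple
  CREATE TABLE t (id int PRIMARY KEY, val int);
  INSERT INTO t VALUES (1, 0);
  UPDATE t SET val = val + 1;   -- run this a few thousand times
  VACUUM VERBOSE t;             -- reports how many dead row versions it removed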
[EMAIL PROTECTED] wrote:
> Next we'll upgrade the postgres hardware, and then I'll come
> back to report if it's working better... sorry for the noise for now.
There have been some discussions about which hardware suits PostgreSQL's
needs best under certain load-characteristics. We have experienc
[EMAIL PROTECTED] wrote:
>>> Have you tried reindexing your active tables?
> It will cause some performance hit while you are doing it. It
> sounds like something is bloating rapidly on your system and
> the indexes are one possible place where that could be happening.
You might consider using contrib/
[snip]
> Yes, but it could be a disk issue because you're doing more work than
> you need to. If your UPDATEs are chasing down a lot of dead tuples,
> for instance, you'll peg your I/O even though you ought to have I/O
> to burn.
OK, this sounds interesting, but I don't understand: why would an update
"chase down a lot of dead tuples" ?
On Wed, Oct 12, 2005 at 06:55:30PM +0200, Csaba Nagy wrote:
> Ok, that was the first thing I've done, checking out the explain of the
> query. I don't really need the analyze part, as the plan is going for
> the index, which is the right decision. The updates are simple one-row
How do you know? Plain explain only shows you the planner's estimates;
the analyze part actually runs the query and shows where the time really
goes.
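Something along these lines, with hypothetical table and column names;
wrapping it in a transaction and rolling back keeps the test update from
sticking:

  BEGIN;
  EXPLAIN ANALYZE
    UPDATE some_table SET some_col = 42 WHERE id = 12345;
  ROLLBACK;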
>
> Would it not be faster to do a dump/reload of the table than reindex or
> is it about the same?
>
reindex is probably faster, but that's not the point. You can reindex a
running system whereas dump/restore requires downtime unless you work
everything into a transaction, which is a headache, and
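For what it's worth, the all-in-one-transaction route would look roughly
like the sketch below. The names are hypothetical, and every index,
constraint, trigger and grant on the old table has to be recreated by
hand afterwards, which is where the headache comes in:

  BEGIN;
  LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;
  CREATE TABLE some_table_new AS SELECT * FROM some_table;
  DROP TABLE some_table;
  ALTER TABLE some_table_new RENAME TO some_table;
  -- recreate the primary key, other indexes, foreign keys, etc. here
  COMMIT;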
Would it not be faster to do a dump/reload of the table than reindex or
is it about the same?
Steve Poe
On Wed, 2005-10-12 at 13:21 -0400, Tom Lane wrote:
> Emil Briggs <[EMAIL PROTECTED]> writes:
> >> Not yet, the db is in production use and I have to plan for a down-time
> >> for that... or is it not impacting the activity on the table ?
Emil Briggs <[EMAIL PROTECTED]> writes:
>> Not yet, the db is in production use and I have to plan for a down-time
>> for that... or is it not impacting the activity on the table ?
> It will cause some performance hit while you are doing it.
It'll also lock out writes on the table until the index rebuild completes.
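One way to watch that from another session while the reindex runs,
assuming the table is public.some_table:

  -- writers on the table queue up behind the reindex's lock
  SELECT pid, mode, granted
  FROM pg_locks
  WHERE relation = 'some_table'::regclass;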
Ok, that was the first thing I've done, checking out the explain of the
query. I don't really need the analyze part, as the plan is going for
the index, which is the right decision. The updates are simple one-row
updates of one column, qualified by the primary key condition.
This part is OK, the qu
On 10/12/05, Csaba Nagy <[EMAIL PROTECTED]> wrote:
> We have adapted our application (originally written for oracle) to
> postgres, and switched part of our business to a postgres data base.
> The data base has in the main tables around 150 million rows, the whole
> data set takes ~ 30G after the
> The disk used for the data is an external raid array, I don't know much
> about that right now except I think it is some relatively fast IDE stuff.
> In any case the operations should be cache friendly, we don't scan over
> and over the big tables...
Maybe you are I/O bound. Do you know if your RA
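One quick way to check that theory from inside the database, assuming
block-level statistics are being collected, is to look at how many block
reads missed PostgreSQL's buffer cache (a miss here can still be served
from the OS cache, so it's only a rough indicator):

  SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
  FROM pg_statio_user_tables
  ORDER BY heap_blks_read DESC
  LIMIT 10;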
> [snip]
>
> > Have you tried reindexing your active tables?
>
> Not yet, the db is in production use and I have to plan for a down-time
> for that... or is it not impacting the activity on the table ?
>
It will cause some performance hit while you are doing it. It sounds like
something is bloating rapidly on your system and the indexes are one
possible place where that could be happening.
[snip]
> Have you tried reindexing your active tables?
>
Not yet, the db is in production use and I have to plan for a down-time
for that... or is it not impacting the activity on the table ?
> Emil
>
> Hi all,
>
> After a long time of reading the general list it's time to subscribe to
> this one...
>
> We have adapted our application (originally written for oracle) to
> postgres, and switched part of our business to a postgres data base.
>
> The data base has in the main tables around 150 million rows, the whole
> data set takes ~ 30G after the