Hi Tom,
Thanks! That's exactly what it was. There was a discrepancy in the
data that turned this into an endless loop. Everything has been
running smoothly since I made a change.
Thanks so much,
Richard
On Apr 23, 2005, at 12:50 PM, Tom Lane wrote:
Richard Plotkin <[EMAIL PROTECTED]> writes:
> Thanks for your responses this morning. I did the select relname, and
> it returned 0 rows. I do have one function that creates a temp table
> and fills it within the same transaction. I'm pasting it below.
> Perhaps the "ON COMMIT DROP" is causing problems, and I need to drop
> the table ...
Hi Tom,
Thanks for your responses this morning. I did the select relname, and
it returned 0 rows. I do have one function that creates a temp table
and fills it within the same transaction. I'm pasting it below.
Perhaps the "ON COMMIT DROP" is causing problems, and I need to drop
the table a...
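For anyone reading the archive: the "select relname" referred to above is a pg_class lookup by filenode. A hedged sketch of that kind of query (the exact statement Tom suggested isn't preserved here; 42791 is the bloated filenode from the thread):

```sql
-- Map an on-disk filenode back to a relation name.
-- On 8.0, pg_class.relfilenode can be matched directly:
SELECT relname, relkind, relnamespace
FROM pg_class
WHERE relfilenode = 42791;
-- Zero rows while the file is still growing on disk is what points
-- at an orphaned file rather than a live table or index.
```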
Richard Plotkin <[EMAIL PROTECTED]> writes:
> /usr/local/pgsql/data/base/17234/42791
> /usr/local/pgsql/data/base/17234/42791.1
> /usr/local/pgsql/data/base/17234/42791.2
> /usr/local/pgsql/data/base/17234/42791.3
> ...
Well, that is certainly a table or index of some kind.
Go into database 17234 ...
/usr/local/pgsql/data/base/17234/42791
/usr/local/pgsql/data/base/17234/42791.1
/usr/local/pgsql/data/base/17234/42791.2
/usr/local/pgsql/data/base/17234/42791.3
/usr/local/pgsql/data/base/17234/42791.4
/usr/local/pgsql/data/base/17234/42791.5
/usr/local/pgsql/data/base/17234/42791.6
/usr/local/pgs...
Richard Plotkin <[EMAIL PROTECTED]> writes:
> I updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours,
> and 50 minutes after a vacuum the CPU usage still skyrocketed, and the
> disk started filling. This time, there is only a single file that is
> spanning multiple GB, but running ...
I also forgot to mention, vacuumdb fails on the command line now with
the following error:
vacuumdb: could not connect to database smt: FATAL: sorry, too many
clients already
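That "too many clients" failure is vacuumdb being refused a connection slot, not a vacuum problem as such: every slot was already in use when the cron job connected. A hedged postgresql.conf sketch (parameter names from the 8.0 era; the thread doesn't show the actual settings):

```
max_connections = 100               # all slots were occupied when vacuumdb tried to connect
superuser_reserved_connections = 2  # keeps a few slots free for admin tools such as vacuumdb
```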
On Apr 23, 2005, at 9:57 AM, Richard Plotkin wrote:
If anybody has additional advice on this problem, I would really,
really appreciate it...
If anybody has additional advice on this problem, I would really,
really appreciate it...
I updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours,
and 50 minutes after a vacuum the CPU usage still skyrocketed, and the
disk started filling. This time, there is only a single file that is
spanning multiple GB, but running ...
I've also now tried looking at pg_class.relpages. I compared the
results before and after vacuum. The results stayed the same, except
for five rows that increased after the vacuum. Here is the select on
those rows after the vacuum:
relname | relnamespace | reltype | ...
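A hedged sketch of the relpages comparison described above (the original SELECT is cut off in the archive; column choice is an assumption):

```sql
-- Snapshot per-relation page counts; run before and after a vacuum and diff.
-- Note that relpages is only refreshed by VACUUM/ANALYZE, so the post-vacuum
-- numbers are the trustworthy ones.
SELECT relname, relnamespace, reltype, relpages, reltuples
FROM pg_class
ORDER BY relpages DESC
LIMIT 10;
```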
That returned the same result. I also tried oid2name -d smt -x -i -S
and, separately, -s, and also, separately, -d with all other databases,
and none of the databases turned up any listing, in either oid or
filenode, for any of these three bloated files. One thing I've noticed
is that these oid...
On Thu, Apr 21, 2005 at 11:38:22AM -0700, Richard Plotkin wrote:
> More info on what is bloating:
>
> It's only in one database (the one that's most used), and after running
> oid2name on the bloated files, the result is (mysteriously) empty.
> Here's the run on the three enormous files:
>
> $ /usr/local/bin/oid2name -d smt -o 160779
More info on what is bloating:
It's only in one database (the one that's most used), and after running
oid2name on the bloated files, the result is (mysteriously) empty.
Here's the run on the three enormous files:
$ /usr/local/bin/oid2name -d smt -o 160779
From database "smt":
Filenode Table
Hi Tom,
Q: what have you got the FSM parameters set to?
Here's from postgresql.conf -- FSM at default settings.
# - Memory -
shared_buffers = 30400          # min 16, at least max_connections*2, 8KB each
work_mem = 32168                # min 64, size in KB
#maintenance_work_mem = 16384   # min 102...
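For context, "FSM at default settings" on 8.0 means roughly the following (defaults as documented for 8.0). If more pages than this are dirtied between vacuums, the free space map cannot remember them all, the dead space is never reused, and the files grow:

```
#max_fsm_pages = 20000      # 8.0 default; should exceed dead pages across all tables
#max_fsm_relations = 1000   # 8.0 default
```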
Richard Plotkin <[EMAIL PROTECTED]> writes:
> I'm having a pretty serious problem with postgresql's performance.
> Currently, I have a cron task that is set to restart and vacuumdb -faz
> every six hours. If that doesn't happen, the disk goes from 10% full
> to 95% full within 2 days (and it's a 90GB disk...with the database
> being a ...
As a follow-up, I've found a function that used the following code:
CREATE TEMPORARY TABLE results
(nOrder integer,
page_id integer,
name text)
WITHOUT OIDS
ON COMMIT DROP;
I would assume that the "WITHOUT OIDS" would be part of the source of
the problem, so I've commented it out.
On Apr 20, 2005 ...
No, I don't think so. I don't think there are any temp table queries
(and I'll check), but even if there are, site traffic is very low, and
queries would be very infrequent.
On Apr 20, 2005, at 12:36 PM, Rod Taylor wrote:
I'm having a pretty serious problem with postgresql's performance.
Currently, I have a cron task that is set to restart and vacuumdb -faz ...
> I'm having a pretty serious problem with postgresql's performance.
> Currently, I have a cron task that is set to restart and vacuumdb -faz
> every six hours. If that doesn't happen, the disk goes from 10% full
> to 95% full within 2 days (and it's a 90GB disk...with the database
> being a...
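The six-hourly restart-and-vacuum workaround described above might look like this in cron (binary paths and timing offsets are assumptions; the actual crontab isn't shown in the thread):

```
# hypothetical crontab entries for the workaround described in the thread
0 */6 * * * /usr/local/bin/pg_ctl -D /usr/local/pgsql/data restart -m fast
5 */6 * * * /usr/local/bin/vacuumdb -faz
```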