Hi,
I'm having a pretty serious problem with PostgreSQL's performance.
Currently, I have a cron task that restarts the server and runs
vacuumdb -faz every six hours. If that doesn't happen, the disk goes
from 10% full to 95% full within 2 days (and it's a 90GB disk... with
the database being a 2MB do
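The cron schedule described above might look something like the following. This is a hypothetical sketch: the install paths and data directory are assumptions, not taken from the thread.

```cron
# Hypothetical crontab entry: every six hours, restart the server and then
# run a full vacuum/analyze of all databases (paths and PGDATA are assumed).
0 */6 * * * /usr/local/pgsql/bin/pg_ctl restart -D /usr/local/pgsql/data && /usr/local/pgsql/bin/vacuumdb -faz
```

Needing `vacuumdb -f` (VACUUM FULL) this often is itself a symptom; plain VACUUM with adequate FSM settings is normally enough to keep a low-traffic database from growing.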
No, I don't think so. I don't think there are any temp table queries
(and I'll check), but even if there are, site traffic is very low, and
queries would be very infrequent.
On Apr 20, 2005, at 12:36 PM, Rod Taylor wrote:
As a follow-up, I've found a function that used the following code:
CREATE TEMPORARY TABLE results
(nOrder integer,
page_id integer,
name text)
WITHOUT OIDS
ON COMMIT DROP;
I would assume that the "WITHOUT OIDS" would be part of the source of
the problem, so I've commented it out.
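Since that temporary table is created and dropped on every call, it is worth checking which relations are actually taking the space. This is not the diagnostic used in the thread, just a standard query against the system catalogs that works on 8.0:

```sql
-- Largest relations by on-disk pages (8 kB each); run in the affected
-- database. Note relpages is only as fresh as the last VACUUM/ANALYZE.
SELECT relname, relkind, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 10;
```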
On Apr 20, 2005
Hi Tom,
Q: what have you got the FSM parameters set to?
Here's from postgresql.conf -- FSM at default settings.
# - Memory -
shared_buffers = 30400          # min 16, at least max_connections*2, 8KB each
work_mem = 32168                # min 64, size in KB
#maintenance_work_mem = 16384   # min 102
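For reference, "FSM at default settings" in 8.0 means the free-space-map lines are still commented out in postgresql.conf. They would look roughly like this (the values are my recollection of the 8.0 defaults, so treat them as an assumption):

```conf
#max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000       # min 100, ~50 bytes each
```

If more pages have reusable free space than max_fsm_pages can track, plain VACUUM cannot remember that space and the tables grow anyway, which matches the bloat pattern described here.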
Does the empty result mean it's a temporary table? There is one
temporary table (in the function previously mentioned) that does get
created and dropped with some regularity.
Thanks again,
Richard
On Apr 20, 2005, at 2:06 PM, Richard Plotkin wrote:
These OIDs are all extremely large numbers, whereas the rest of
the OIDs in /data/base/* are no higher than 4 or 5.
On Apr 21, 2005, at 1:46 PM, Alvaro Herrera wrote:
On Thu, Apr 21, 2005 at 11:38:22AM -0700, Richard Plotkin wrote:
More info on what is bloating:
It's only in one database
If anybody has additional advice on this problem, I would really,
really appreciate it...
I updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours,
and 50 minutes after a vacuum the CPU usage still skyrocketed, and the
disk started filling. This time, there is only a single file tha
I also forgot to mention, vacuumdb fails on the command line now with
the following error:
vacuumdb: could not connect to database smt: FATAL: sorry, too many
clients already
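The "too many clients" failure can be confirmed independently of vacuumdb. Assuming a superuser connection is still possible (and that the stats collector is enabled, which pg_stat_activity needs), something like:

```sql
-- How close is the server to its connection limit?
SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;
```

If a looping function is holding connections open, the second number will sit at or near the first.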
On Apr 23, 2005, at 9:57 AM, Richard Plotkin wrote:
0 | 32 | 4200 | 0 | 0 | t | f | r | 7 | 0 | 0 | 0 | 0 | 0 | f | f | f | f | {=r/postgres}
On Apr 20, 2005, at 1:51 PM, Tom Lane wrote:
/pgsql/data/base/17234/42791.7
/usr/local/pgsql/data/base/17234/42791.8
/usr/local/pgsql/data/base/17234/42791.9
/usr/local/pgsql/data/base/17234/42791.10
/usr/local/pgsql/data/base/17234/42791.11
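The numeric part of those filenames (42791) is the relation's filenode, so, assuming the database in directory 17234 is the right one, the bloating relation can be identified with a catalog query like:

```sql
-- Map the on-disk files 42791.* back to a relation name; run while
-- connected to the database whose directory contains the files.
SELECT relname, relkind FROM pg_class WHERE relfilenode = 42791;
```

The .7 through .11 suffixes are 1 GB segment files, i.e. this single relation had grown past 11 GB.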
On Apr 23, 2005, at 11:06 AM, Tom Lane wrote:
Richard Plotkin <[EMAIL PROTECTED]> writes:
            path := path || '';
        ELSE
            path := page_results.name;
        END IF;
    ELSE
        IF withLinkTags IS TRUE THEN
            path := path || delimiter;
            path := path || '';
            path := path || page_results.name;
            path := path || '';
        ELSE
            path := path || delimiter || page_results.n
Hi Tom,
Thanks! That's exactly what it was. There was a discrepancy in the
data that turned this into an endless loop. Everything has been
running smoothly since I made a change.
Thanks so much,
Richard
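For anyone hitting the same failure mode: a data discrepancy such as a cycle in a page's parent chain can turn a path-building loop like the one above into an endless one that writes until the disk fills. One defensive sketch (table and variable names here are hypothetical, not from the original function) is to cap the iteration depth:

```plpgsql
-- Hypothetical sketch: walk a parent chain, but raise an error if the
-- data contains a cycle instead of looping forever.
depth := 0;
WHILE current_id IS NOT NULL LOOP
    depth := depth + 1;
    IF depth > 100 THEN
        RAISE EXCEPTION 'parent chain for page % is too deep or cyclic', current_id;
    END IF;
    SELECT name, parent_id INTO page_name, current_id
        FROM pages WHERE page_id = current_id;
    path := page_name || delimiter || path;
END LOOP;
```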
On Apr 23, 2005, at 12:50 PM, Tom Lane wrote: