working
(although the data is mostly going to be static on this installation, we
will have others that won't be).
Can anyone with some insight into VACUUM FULL ANALYZE weigh in on
what is going wrong?
Regards,
Michael Akinde
Database Architect,
met.no
Michael Akinde wrote:
I am encountering problems when trying to run VACUUM FULL ANALYZE on a
particular table in my database; namely that the process crashes out
with the following problem:
Probably just as well, since a VACUUM FULL on an 800GB table is going to
take a rather long time.
Stefan Kaltenbrunner wrote:
Michael Akinde wrote:
Incidentally, in the first error of the two I posted, the shared
memory setting was significantly lower (24 MB, I believe). I'll try
with 128 MB before I leave in the evening, though (assuming the other
tests I'm running complete).
with the shared_buffers limit set at
24 MB and maintenance_work_mem at its default setting (16 MB?), so I
would be rather surprised if the problem did not repeat itself.
Regards,
Michael Akinde
Database Architect, met.no
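For reference, the two memory settings under discussion live in postgresql.conf. A minimal sketch using the values mentioned in this thread (illustrative only, not a recommendation):

```
# postgresql.conf -- values from the thread, for illustration
shared_buffers = 24MB           # shared buffer cache used by all backends
maintenance_work_mem = 16MB     # memory for maintenance ops (VACUUM, CREATE INDEX)
```

Raising maintenance_work_mem is the usual first knob for slow VACUUMs, though as this thread shows, an out-of-memory failure may have a different cause.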
[Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large
pg_catalog.pg_largeobject table.]
Simon Riggs wrote:
Can you run ANALYZE and then VACUUM VERBOSE, both on just
pg_largeobject, please? It will be useful to know whether they succeed
ANALYZE:
INFO: analyzing
I'd expect an
operation on such a table to take time, of course, but not to
consistently crash out of memory.
Any suggestions as to what we can otherwise try to isolate the problem?
Regards,
Michael Akinde
Database Architect, met.no
Michael Akinde wrote:
[Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large
pg_catalog.pg_largeobject table.]
VACUUM FULL doesn't work for tables beyond a certain size. Assuming we have not set
up something completely wrongly, this seems like a bug.
If this is the wrong mailing list to be posting this, then please let me
know.
Regards,
Michael Akinde
Database Architect, met.no
Usama Dar wrote:
On Jan 7, Tom Lane wrote:
Michael Akinde [EMAIL PROTECTED] writes:
INFO: vacuuming pg_catalog.pg_largeobject
ERROR: out of memory
DETAIL: Failed on request of size 536870912
Are you sure this is a VACUUM FULL, and not a plain VACUUM?
Very sure.
Ran a VACUUM FULL again yesterday (the prior query
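Worth noting: the failed request size in the DETAIL line is exactly 512 MB, a suspiciously round number, which suggests a single large allocation (for example, a buffer sized from a memory setting) rather than gradual exhaustion. A quick sanity check of the arithmetic:

```shell
# 536870912 bytes expressed in MB: 536870912 / 1024 / 1024
echo $((536870912 / 1024 / 1024))   # prints 512
```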
Tom Lane wrote:
Michael Akinde [EMAIL PROTECTED] writes:
$ ulimit -a
core file size (blocks, -c) 1
...
processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
What you're showing us is the conditions that prevail in your
interactive session. That doesn't necessarily have a lot to do with
the ulimits that init-scripts run the server under.
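Tom's point can be checked directly on Linux: a running process's effective limits are visible under /proc, so one can inspect the postmaster's actual limits rather than those of an interactive shell. A sketch, using the current shell's PID ($$) as a stand-in for the postmaster's PID (in practice, take the PID from postmaster.pid):

```shell
# Show the effective resource limits of a live process via /proc.
# Substitute the postmaster's PID for $$ to check the actual server limits.
grep -E 'Max (address space|data size|open files)' /proc/$$/limits
```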
Regards,
Michael A.
Tom Lane wrote:
Andrew Sullivan [EMAIL PROTECTED] writes:
On Tue, Jan 08, 2008 at 05:27:16PM +0100, Michael Akinde wrote:
Those