Michael Akinde wrote:
Thanks for the rapid responses.
Stefan Kaltenbrunner wrote:
this seems simply to be a problem of setting maintenance_work_mem too high
(i.e. higher than what your OS can support - maybe an
ulimit/process limit is in effect?). Try reducing maintenance_work_mem
to, say, 128MB and retry.
If you promise PostgreSQL that it can get 1GB it will happily try to
use it ...
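Something like this, run in the same session before the VACUUM, is enough to
test it without touching postgresql.conf (128MB is only an illustrative value,
and the table name is a placeholder):

    -- session-local override, just for testing the theory
    SET maintenance_work_mem = '128MB';
    VACUUM FULL VERBOSE your_big_table;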
I set up the system together with one of our Linux sysops, so I think
the settings should be OK. kernel.shmmax is set to 1.2 GB, but I'll get
him to recheck whether there are any other limits he may have forgotten to
increase.
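(For reference, something along these lines is what we'd check on the box -
the usual Linux commands, nothing exotic:)

    # SysV shared memory ceilings the kernel will allow
    sysctl kernel.shmmax kernel.shmall
    # per-process limits for the user postgres runs as
    su - postgres -c 'ulimit -a'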
The way the process was running, it seemed to just keep allocating memory
until (presumably) it broke through the slightly-less-than-1.2 GB shared
memory allocation we had provided for PostgreSQL (at least the postgres
process was still running by the time its resident size had reached 1.1 GB).
Incidentally, in the first of the two errors I posted, the shared memory
setting was significantly lower (24 MB, I believe). I'll try with 128 MB
before I leave this evening, though (assuming the other tests I'm
running complete by then).
this is most likely not related to your shared memory settings at all,
but to your setting of maintenance_work_mem, which is the amount of
memory a single backend(!) can use for maintenance operations (such as
VACUUM).
Notice that your first error refers to an allocation of about 500MB,
which your ulimit/kernel process limit simply might not be able to give
to a single process.
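The per-process limits that usually matter for a single large allocation like
that are the address-space and data-segment ones, roughly:

    # run as the user the postgres backends run as; values are in kB
    ulimit -v   # max virtual address space per process
    ulimit -d   # max data segment size per process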
And for very large tables VACUUM FULL is generally not a good idea at
all - either look into regular vacuum scheduling, or if you need
to recover from a bloated database use a command that forces a rewrite
of the table (like CLUSTER), which will be heaps faster but also requires
about twice the table's size in free disk space.
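For example (table and index names are placeholders; on releases before 8.3
the syntax is CLUSTER indexname ON tablename):

    -- rewrites the table in index order, reclaiming the dead space;
    -- needs roughly 2x the table's size in free disk space while it runs
    CLUSTER my_big_table USING my_big_table_pkey;
    -- refresh planner statistics after the rewrite
    ANALYZE my_big_table;

Keep in mind that CLUSTER takes an exclusive lock on the table for the
duration of the rewrite, so schedule it accordingly.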
Stefan