Lior,

* Ben Zeev, Lior (lior.ben-z...@hp.com) wrote:
> Does each PostgreSQL process allocate in its own memory (not shared
> memory) a cache of all of the database catalog information which it
> accesses during SQL execution?

PG will look up and cache the catalog information regarding all of the
relations involved, yes.  In *accessing* those relations, PG will pull
the needed blocks into shared buffers.  PG will then use backend-local
memory to process the data (generally on a per-tuple basis).
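
As an illustrative sketch (assuming the contrib pg_buffercache
extension is available and installed), you can watch which relations'
blocks are currently sitting in shared buffers:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Count how many shared-buffer pages each relation in the
    -- current database occupies right now
    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c
        ON b.relfilenode = pg_relation_filenode(c.oid)
     WHERE b.reldatabase IN
           (0, (SELECT oid FROM pg_database
                 WHERE datname = current_database()))
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 10;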

> I mean, does each process hold all of the catalog index data which it
> has accessed, all of the catalog index statistics it has accessed,
> etc.?

Catalog information (eg: information in pg_class) is kept, but the
*data* will only be pulled through shared buffers and then processed.
Anything in shared buffers (eg: the data in the tables or indexes) will
be evicted as new blocks are read in and push out the old ones.
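
(If you want to see roughly how much backend-local memory the catalog
caches are holding, newer PostgreSQL releases - 14 and up - expose the
pg_backend_memory_contexts view; this is only a sketch, and the LIKE
filter is just a heuristic for the cache-related contexts:)

    -- Inspect the current backend's memory contexts; the catalog
    -- caches live under CacheMemoryContext
    SELECT name, total_bytes, used_bytes
      FROM pg_backend_memory_contexts
     WHERE name LIKE '%Cache%'
     ORDER BY total_bytes DESC;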

> If yes is there a way to avoid this behavior?

Catalog information is only cached: if the information isn't being
used, then it should get purged in favor of new data which is needed.
Can you explain a bit more about exactly what the issue is?

> (I asked Josh Berkus from PGExperts and he said that each process
> holds memory for sorts, hashes, temp tables, vacuum, etc.)

Correct; most backend-local usage of memory is for running queries and
doing what is required for those queries.  Regarding temp tables, you
can control how much memory is used for them with the temp_buffers
parameter.
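
For example (a minimal sketch; the 32MB figure is arbitrary), note
that temp_buffers can only be changed in a session before the first
use of temporary tables:

    -- Must be set before any temp table is touched in this session
    SET temp_buffers = '32MB';

    CREATE TEMP TABLE scratch (id int, payload text);
    -- scratch's pages live in the backend-local temp_buffers pool,
    -- not in shared buffers
    DROP TABLE scratch;

(work_mem plays the analogous role for the per-query sort and hash
memory mentioned above.)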

        Thanks,

                Stephen
