"McKinzie, Alan (Alan)" <alan...@avaya.com> writes:
> I am trying to understand how memory is allocated/used by our Postgresql 
> Database connections.  From reading the documentation it appears that 
> work_mem and temp_buffers are the 2 largest contributors (and work_mem usage 
> can grow) to the memory utilized by the Database connections.  In addition, 
> if I understand correctly, work_mem and temp_buffers each have their own 
> pool, and thus database connections use (when needed) and return memory to 
> these pools.  I have not read this anywhere, but based on observation it 
> appears that once these pools grow, they never release any of the memory 
> (e.g. they do not shrink in size if some of the memory has not been used 
> for a given period of time).

Temp buffers, once used within a particular backend process, are kept
for the life of that process.  Memory consumed for work_mem will be
released back to libc at the end of the query.  The net effect of that
is platform-dependent --- my experience is that glibc on Linux is able
to give memory back to the OS, but on other platforms the process memory
size doesn't shrink.
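
To see the limits in effect for your own session, something like this
(a sketch; syntax is standard but the values shown are examples):

```sql
-- Current limits for this session:
SHOW work_mem;       -- per-sort/hash limit, e.g. '1MB'
SHOW temp_buffers;   -- per-session cap on temp-table buffers

-- Both can be changed per session:
SET work_mem = '500kB';
SET temp_buffers = '8MB';   -- only effective before the session first
                            -- touches a temporary table
```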

> With that said, are there any mechanisms available to determine how much 
> work_mem and temp_buffers memory has been allocated by the Postgres database 
> (and by database connection/process would be very useful as well)?  Also, 
> which postgres process manages the memory pools for work_mem and temp_buffers?

There's no "pool"; these allocations are process-local.

> FYI – I am using smem (on a linux server) to monitor the memory allocated 
> to our Database connections.  In an attempt to lower our memory footprint, I 
> lowered our setting for work_mem from 1MB down to 500kB (in addition I 
> enabled log_temp_files to see the SQL statements that now use temp files for 
> sorting and hash operations).  As I expected, the memory used by the 
> connections that were doing large sorts went down in size.  However, one of 
> those DB connections dramatically increased in memory usage with this change. 
>  It went from approx. 6MB up to 37MB in memory usage?

Keep in mind that work_mem is the max per sort or hash operation, so a
complex query could consume a multiple of that.  The most obvious theory
about your result is that the work_mem change caused the planner to
switch to another plan that involved more sorts or hashes than before.
But without a lot more detail than this, we can only speculate.
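
To illustrate the multiplier (a hypothetical query, not the poster's
workload; table names are made up), each Sort or Hash node in a plan
gets its own work_mem allowance:

```sql
-- Each Sort and Hash node below may use up to work_mem, so this one
-- query could consume roughly 3 x work_mem at its peak.
EXPLAIN ANALYZE
SELECT o.customer_id, sum(o.total)
FROM orders o
JOIN customers c ON c.id = o.customer_id   -- Hash: one work_mem allowance
GROUP BY o.customer_id                     -- HashAggregate: another
ORDER BY sum(o.total) DESC;                -- Sort: a third
```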

>  Are temp_buffers used in conjunction with some sorting operations that use 
> temp_files (and thus this connection allocated several temp_buffers)?

No, they're only used in connection with temp tables.
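
So the only way a session allocates temp_buffers is by touching a
temporary table, for example (table name hypothetical):

```sql
CREATE TEMP TABLE scratch (id int, payload text);
INSERT INTO scratch SELECT g, 'x' FROM generate_series(1, 100000) g;
-- Pages of "scratch" are cached in this backend's local temp-buffer
-- area, which is then kept for the life of the connection.
```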

                        regards, tom lane


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance