On Jul 08, John McKown modulated:
...
> I think the "problem" that he is having is fixable only by changing how
> PostgreSQL itself works. His problem is a PL/pgSQL function which is
> 11K lines in length. When invoked, this function is "compiled" into a
> large tokenized parse tree. This parse tree is only usable in the
> session which invoked the function. Apparently this parse tree
> takes a lot of memory.

Right.  I'm not entirely sure the original poster wants to hear
practical solutions, but I think there are three ways someone
familiar with Postgres would avoid this type of problem:

1. Connection pooling. Set the max connection limit and other tuning
   parameters appropriately for your workload and available
   resources. Don't get into the state that was described (thousands
   of open connections and only hundreds "active"); make your client
   applications more disciplined. (A sample pooler configuration
   appears after this list.)

2. Buy more RAM.  You can easily grow to 512GB in a basic
   dual-socket server these days.  The hardware cost may well be
   worth it to avoid the cost of human labor.

3. Rewrite or refactor such complex stored procedures in a different
   programming language such as C or Python, so your PL/pgSQL stored
   procedures remain small glue around libraries of code.  Postgres
   makes it straightforward to extend the system with such procedural
   libraries. (A PL/Python sketch appears below.)
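
For item 1, here is a minimal PgBouncer configuration sketch.  The
database name, file paths, and pool sizes are illustrative
placeholders, not recommendations, and transaction pooling assumes
your clients don't rely on session state (temp tables, prepared
statements, session GUCs) surviving across transactions:

    ; pgbouncer.ini (fragment, illustrative values)
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    ; thousands of idle clients can park here cheaply...
    max_client_conn = 2000
    ; ...while only this many real server backends exist
    default_pool_size = 50

The point is that many client connections map onto a small, fixed set
of server backends, which bounds per-backend memory use (including
those large PL/pgSQL parse trees).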
   

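For item 3, a sketch of the PL/Python route, assuming the untrusted
Python language is installed and the heavy logic lives in an ordinary
Python module importable by the server.  The module and function
names here ("pricing", compute_discount) are made up for
illustration:

    -- one-time setup, as superuser
    CREATE EXTENSION plpythonu;

    -- the database function is thin glue; the thousands of lines of
    -- real logic live in the imported module, not the function body
    CREATE FUNCTION compute_discount(amount numeric)
    RETURNS numeric AS $$
        import pricing
        return pricing.discount(amount)
    $$ LANGUAGE plpythonu;

    SELECT compute_discount(100.00);

Because the function body is only a few lines, the per-session parse
tree stays tiny.  A C extension goes a step further: its shared
library's machine code is loaded once and shared across backends.
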
Karl

