On Thu, Oct 06, 2005 at 08:17:54AM -0500, Kelly Burkhart wrote:
> Given:
> - Processes which are connected to the database for long periods of
>   time (transactions are always short).
> - These processes will use some functions to query data.
> - Lots of data is being inserted into tables that these functions query.
> - Vacuums are done frequently.
>
> Am I at risk of degrading performance after some time due to stale
> execution plans?
Yes. A PL/pgSQL function plans its queries on the first call in a session
and reuses those plans for the life of the session, so they're based on
whatever statistics were current at that first call. For example, if a
sequential scan made sense when you first called the function, then
subsequent calls will also use a sequential scan.

You can see this for yourself with a simple test, sketched below: create
a table, populate it with a handful of records, and call a function that
issues a query that can (but won't necessarily) use an index. Then add a
lot of records to the table and call the function again. You'll probably
notice that the function runs slower than the same query run from outside
the function, and that it runs fast again if you recreate it or call it
in a new session.

If you set debug_print_plan to on and client_min_messages to debug1,
you'll see the plan the function chose, but only on the first call to the
function. If you have statistics collection enabled, you can query
pg_stat_user_tables and pg_stat_user_indexes to see whether subsequent
calls use sequential or index scans; do this while nobody else is
querying the table, so the counters reflect only what you did.

You can avoid cached plans by using EXECUTE, which plans its query string
anew on every call. You'll have to run tests to see whether the potential
gain is worth the extra planning overhead.
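Here's a minimal sketch of such a test. The table t, function f, and the
column names are mine, invented for illustration; note too that recent
PostgreSQL releases re-plan cached queries when statistics change, so the
behavior demonstrated here is that of the server versions current at the
time:

    CREATE TABLE t (id integer PRIMARY KEY, val text);

    CREATE FUNCTION f(p integer) RETURNS text AS $$
    DECLARE
        result text;
    BEGIN
        -- this query is planned on the first call to f in a session
        -- and the plan is then reused
        SELECT val INTO result FROM t WHERE id = p;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;

    INSERT INTO t SELECT g, 'x' FROM generate_series(1, 10) AS g;
    ANALYZE t;
    SELECT f(1);  -- ten rows: a sequential scan wins, and is cached

    INSERT INTO t SELECT g, 'x' FROM generate_series(11, 1000000) AS g;
    ANALYZE t;
    SELECT f(1);                     -- cached sequential scan: slow
    SELECT val FROM t WHERE id = 1;  -- planned fresh: uses the index, fast

Reconnect (or recreate the function) and call f(1) again; it should be
fast, because the query gets planned anew.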
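To watch the plan being chosen, start a fresh session and set the debug
options before the first call (standard server settings, shown with the
hypothetical f from the sketch above):

    SET debug_print_plan = on;
    SET client_min_messages = debug1;
    SELECT f(1);  -- the plan of f's internal query is printed here
    SELECT f(1);  -- only the outer SELECT's plan appears this time;
                  -- the cached plan inside f is not re-planned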
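To check what kind of scans your calls actually performed, query the
statistics views (again using my hypothetical table t; run this when
nobody else is touching the table, and remember that the statistics
collector updates these counters with a slight delay):

    SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
    FROM pg_stat_user_tables
    WHERE relname = 't';

    SELECT indexrelname, idx_scan, idx_tup_fetch
    FROM pg_stat_user_indexes
    WHERE relname = 't';

Call f(1) a few times, re-run these queries, and watch whether seq_scan
or idx_scan increments.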
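And here's how the function might look using EXECUTE so the query is
planned fresh on every call (still my hypothetical t and f; the
string-concatenation form works on old and new releases alike):

    CREATE OR REPLACE FUNCTION f(p integer) RETURNS text AS $$
    DECLARE
        r record;
    BEGIN
        -- EXECUTE builds and plans the query string at run time, so it
        -- always sees current statistics; quote non-numeric values with
        -- quote_literal() to avoid SQL injection
        FOR r IN EXECUTE 'SELECT val FROM t WHERE id = ' || p::text LOOP
            RETURN r.val;
        END LOOP;
        RETURN NULL;  -- no matching row
    END;
    $$ LANGUAGE plpgsql;

The price is that the query is parsed and planned on every call, which is
the overhead you'd be measuring in your tests.

-- 
Michael Fuhr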