Hi, I'm replying to an old thread from -performance: https://www.postgresql.org/message-id/flat/7ffb9dbe-c76f-8ca3-12ee-7914ede872e6%40stormcloud9.net
I was looking at: https://commitfest.postgresql.org/20/1691/ "New GUC to sample log queries"

On Tue, Jul 10, 2018 at 01:54:12PM -0400, Patrick Hemmer wrote:
> I'm looking for a way of gathering performance stats in a more usable
> way than turning on `log_statement_stats` (or other related modules).
> The problem I have with the log_*_stats family of modules is that they
> log every single query, which makes them unusable in production. Aside
> from consuming space, there's also the problem that the log system
> wouldn't be able to keep up with the rate.
>
> There are a couple ideas that pop into mind that would make these stats
> more usable:
> 1. Only log when the statement would otherwise already be logged. Such
> as due to the `log_statement` or `log_min_duration_statement` settings.

..but instead came back to this parallel thread and concluded that I'm
interested in exactly that behavior: log_statement_stats only if
log_min_duration_statement is exceeded.

If there's agreement that this is desirable behavior, should I change
log_statement_stats to a GUC (on/off/logged)? Or would it be reasonable
to instead change the existing behavior of log_statement_stats=on to
mean what I want: only log statement stats if the statement is otherwise
logged. Since log_statement_stats is considered a developer option, it
is most likely to be enabled either in a development or other
segregated, non-production environment, or possibly for only a single
session for diagnostics, so changing its existing behavior may be
acceptable.

My own use case is that I'd like to know whether a long-running query
was doing lots of analytics (aggregation/sorting), or possibly spinning
away on a nested nested loop. But I only care about the longest queries,
and would then probably look at the ratio of user CPU time to clock
time.

Justin
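P.S. To make the proposal concrete, here is how the configuration might
look under the hypothetical tri-state GUC; the "logged" value is
illustrative only, not existing syntax:

```
# Sketch of a postgresql.conf fragment under the proposed behavior.
log_min_duration_statement = 1000   # log statements running 1000ms or longer
log_statement_stats = logged        # hypothetical value: emit statement stats
                                    # only for statements already being logged
```

With settings like these, a long-running query would get both its usual
log line and its resource-usage stats, while fast queries would produce
no log traffic at all.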