Tom Lane <[EMAIL PROTECTED]> writes:
> "Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> > Looking at the log_duration postgresql.conf option.  How about adding an
> > option log_duration_min which is a value in milliseconds that is the
> > minimum time a query must run for before being logged.
>
> Fine with me --- but you'll need to add more logic than that.  Right
> now, log_duration *only* causes the query duration to be printed out;
> if you ain't got log_statement on, you're in the dark as to what the
> query itself was.  You'll need to add some code to print the query
> (the log_min_error_statement logic might be a useful source of
> inspiration).  Not sure how this should interact with the case where
> log_duration is set and the min-duration isn't.  But maybe that case
> is silly, and we should just redefine log_duration as a minimum runtime
> that causes the query *and* its runtime to be printed to the log.
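For illustration, the proposal amounts to a postgresql.conf fragment along these lines (log_duration is an existing setting; log_duration_min is the proposed name and does not exist yet):

```
# existing behavior: print the duration of every completed statement
log_duration = true

# proposed (hypothetical name): only log a statement, together with its
# duration, when it runs for at least this many milliseconds
log_duration_min = 500
```

Under Tom's redefinition, the second setting alone would suffice: any query exceeding the threshold gets both its text and its runtime written to the log.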
Is it even guaranteed to be properly ordered on a busy server with multiple processors anyway?

One option is to have log_query output an identifier with the query, such as a hash of the query text or the pointer value of the plan, suppressing duplicates. Then log_duration prints the identifier along with the duration. On a busy server running lots of prepared queries you would see a whole bunch of queries logged at startup, then hopefully no durations. Any duration printed could set off alarms; to find the offending query you grep the logs for the identifier in the duration message.

This only really works if you're using prepared queries everywhere. But in the long run that will be the case for OLTP systems, which is where log_duration is really useful.

--
greg