On 1/7/17 12:41 PM, Joel Jacobson wrote:
On Sat, Jan 7, 2017 at 3:25 AM, Greg Stark <st...@mit.edu> wrote:
What users need to know is in aggregate how much of the time the
database is spending working on their queries is going into different

This is a separate feature idea, but I think it's really valuable as well.

Maybe something similar to pg_stat_user_functions?
But instead grouping by wait_event_type, wait_event, and showing
accumulated count and sum of waiting time since last stat reset, just
like the other pg_stat_* views?

Maybe something like this?

\d pg_stat_waiting
      View "pg_catalog.pg_stat_waiting"
     Column      |       Type       | Modifiers
-----------------+------------------+-----------
 wait_event_type | name             |
 wait_event      | name             |
 waiting_counter | bigint           |
 waiting_time    | double precision |
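
A rough sketch of how such a view could be consumed, assuming the hypothetical pg_stat_waiting view above existed with waiting_time accumulated in seconds:

```sql
-- Hypothetical: top wait events by accumulated time since the last stats reset.
-- pg_stat_waiting does not exist today; this is only a sketch of the proposal.
SELECT wait_event_type,
       wait_event,
       waiting_counter,
       waiting_time,
       waiting_time / NULLIF(waiting_counter, 0) AS avg_wait_seconds
FROM pg_stat_waiting
ORDER BY waiting_time DESC
LIMIT 10;
```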

Yes, I've wanted this many times in the past. If combined with Robert's idea of a background process that does the expensive time calls, this could provide very useful information even for very short-duration locks.
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)