Hi hackers!

Following is a proposal to add timestamp information to `pg_stat_statements`.

# Use case
- If we want to gather a list (and stats) of queries executed at least once
in the last hour, we have to have reset the stats an hour ago; if we didn't,
there is no way to do it.
- If we find a strange query in `pg_stat_statements`, we might want to
identify when it first ran.

With timestamps of the first and last execution, we can easily gather this
information, and there are many more use cases.

# Implementation
First, I added API version 1.5 to carry the additional fields, and I added two
fields to the Counters structure. It now has 25 fields in total.

@@ -156,6 +158,8 @@ typedef struct Counters
  double    blk_read_time;  /* time spent reading, in msec */
  double    blk_write_time; /* time spent writing, in msec */
  double    usage;      /* usage factor */
+ TimestampTz   created;  /* timestamp of created time */
+ TimestampTz   last_updated; /* timestamp of last updated */
 } Counters;


The `created` field is filled the first time the entry is added to the
hash table.

@@ -1663,6 +1690,8 @@ entry_alloc(pgssHashKey *key, Size query_offset, int
query_len, int encoding,

    /* reset the statistics */
    memset(&entry->counters, 0, sizeof(Counters));
+   /* set the created timestamp */
+   entry->counters.created = GetCurrentTimestamp();
    /* set the appropriate initial usage count */
    entry->counters.usage = sticky ? pgss->cur_median_usage : USAGE_INIT;
    /* re-initialize the mutex each time ... we assume no one using it */

The `last_updated` field is updated every time `pgss_store()` updates the stats.

@@ -1251,6 +1256,7 @@ pgss_store(const char *query, uint32 queryId,
    e->counters.blk_read_time += INSTR_TIME_GET_MILLISEC(bufusage->blk_read_time);
    e->counters.blk_write_time += INSTR_TIME_GET_MILLISEC(bufusage->blk_write_time);
    e->counters.usage += USAGE_EXEC(total_time);
+   e->counters.last_updated = GetCurrentTimestamp();


Attached is my first implementation.

Jason Kim.

Attachment: pg_stat_statements_with_timestamp_v1.patch
Description: Binary data

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)