Hi guys,
Thank you for the feedback.

> I think that this is the job of a tool that aggregates things from 
> pg_stat_statements. It's unfortunate that there isn't a good 
> third-party tool that does that, but there is nothing that prevents 
> it. 

Right. A tool can do this if it aggregates frequently enough. However,
third-party tools can do it better if we expose more information.
I think these timestamps are fundamental pieces of information that
would let third-party tools be stronger.

> The concern I've got about this proposal is that the results get very 
> questionable as soon as we start dropping statement entries for lack 
> of space.  last_executed would be okay, perhaps, but first_executed 
> not so much. 

If we set pg_stat_statements.max large enough, long-lived and
short-lived entries can coexist. In that case, per-call statistics can
be accurate, but per-second statistics cannot. That is why I named the
columns created and last_updated (rather than first_executed and
last_executed): they record when an entry's stats were created and
last updated, so per-second stats can be calculated with simple math.

> Also, for what it's worth, I should point out to Jun that 
> GetCurrentTimestamp() should definitely not be called when a spinlock 
> is held like that. 

I moved it outside the spinlock:

@@ -1204,6 +1209,11 @@ pgss_store(const char *query, uint32 queryId,
    volatile pgssEntry *e = (volatile pgssEntry *) entry;

+   /*
+    * Read the timestamp before grabbing the spinlock.
+    */
+   TimestampTz last_updated = GetCurrentTimestamp();

    /* "Unstick" entry if it was previously sticky */
@@ -1251,6 +1261,7 @@ pgss_store(const char *query, uint32 queryId,
    e->counters.blk_read_time +=
    e->counters.blk_write_time +=
    e->counters.usage += USAGE_EXEC(total_time);
+   e->counters.last_updated = last_updated;



Jason Kim.

Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.
