On Tue, Sep 29, 2015 at 7:52 PM, Shulgin, Oleksandr <
oleksandr.shul...@zalando.de> wrote:

>
> This is not a change of direction, but rather of approach.  Hitting a
> process with a signal and hoping it will produce a meaningful response
> in all circumstances without disrupting its current task was way too
> naive.  I'd really want to make this work with ANALYZE, just not as the
> first step.  I believe this approach can be extended to include
> instrumentation support (obviously we will not be able to contain this in
> the auto_explain module).
>

I was thinking about this, and the biggest problem seems to be deciding
when to actually turn the feature on.  It seems unlikely that anyone
will want to enable it unconditionally.  Enabling it per backend also
doesn't seem like a good approach, because you don't know whether the
next query you'd like to look at is going to run in that exact backend.

What might actually be usable is consulting pg_stat_statements, keyed
by queryid, to decide whether we need to do an explain (and possibly
analyze).  We could make it so that an explain is produced for any
query that is known to run longer on average than a certain
configurable threshold (or we could provide for enabling this
explicitly per entry in pg_stat_statements).  The interested client
can then call pg_explain_backend() on the pid.
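
To make that concrete, here is a rough sketch of the client side.
pg_explain_backend() is the function proposed here, not something that
exists yet, and the 1000 ms threshold is just a stand-in for a GUC:

    -- Statements whose average runtime exceeds the threshold;
    -- queryid, total_time and calls are existing pg_stat_statements
    -- columns (times are in milliseconds).
    SELECT queryid, query, total_time / calls AS mean_time_ms
      FROM pg_stat_statements
     WHERE calls > 0
       AND total_time / calls > 1000.0
     ORDER BY mean_time_ms DESC;

    -- Ask another backend for the plan of its currently running query:
    SELECT pg_explain_backend(pid)
      FROM pg_stat_activity
     WHERE state = 'active'
       AND pid <> pg_backend_pid();

Since pg_stat_activity doesn't expose a queryid, matching the running
query against the slow entries would have to happen inside the backend
itself; from SQL the client can only pick a pid.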

If we also tracked total plan costs, we could make predictions: if the
same query comes along and is planned with a total cost exceeding the
recorded average by a configurable percentage, then we enable the
explain.
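
As a worked example of the trigger condition (all numbers invented, and
pg_stat_statements records no plan costs today):

    -- Recorded mean total cost 10000, allowed deviation 50%:
    -- the cutoff is 10000 * 1.5 = 15000, so a fresh plan costed
    -- at 18000 would get explain enabled for its execution.
    SELECT 18000 > 10000 * (1 + 50 / 100.0) AS enable_explain;  -- true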

Does this make sense to you?  Does this make a good argument for merging
pg_stat_statements and auto_explain into core?

--
Alex
