Pavel Stehule pavel.steh...@gmail.com writes:
My proposal has a different dimension and purpose - for example, we
have a 20-minute limit for almost all queries, and after this limit we
kill the queries. But we need to know a little bit more about these
bad queries - and we hope, so
Tom Lane t...@sss.pgh.pa.us writes:
Set auto_explain.log_min_duration to 19 mins or maybe 17 and be done?
That would only help if he were willing to wait for the long-running
command to be done (instead of canceling it). It's not hard to think
of commands that will still be running after the
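Tom's suggestion above amounts to a small configuration change. A minimal postgresql.conf sketch (values illustrative; auto_explain must be preloaded for the settings to take effect):

```
# Load auto_explain into every backend.
shared_preload_libraries = 'auto_explain'

# Log the plan of anything slower than the threshold,
# just under the 20-minute kill limit.
auto_explain.log_min_duration = '17min'

# Plain plans only; avoid the timing overhead of ANALYZE.
auto_explain.log_analyze = off
```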
Hello
We successfully use auto_explain. What we miss is support for
cancelled queries - we need the same info for queries that we cancel
after x minutes.
A subtask of this feature could be dumping the currently executed
query, with its plan and the complete query string, to the log after
receiving some signal - maybe
Pavel Stehule pavel.steh...@gmail.com writes:
What do you think about this feature?
The idea of expecting an add-on module to execute operations in an
already-failed transaction seems pretty dubious to me. I also think
it's not a great idea to add partial executions into a query's stats.
For
Alvaro Herrera alvhe...@2ndquadrant.com writes:
Tom Lane wrote:
However, auto_explain is even worse on the other problem. You flat out
cannot do catalog lookups in a failed transaction, but there's no way to
print a decompiled plan without such lookups. So it won't work. (It
would also
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
2013/1/11 Stephen Frost sfr...@snowman.net:
Why not an option to auto_explain (or whatever) to log an execution plan
right before actually executing it? If that was something which could
be set with a GUC or similar, you could just do that
Simon Riggs si...@2ndquadrant.com writes:
An even better feature would be to be able to send a signal to a
running query to log its currently executing plan. That way you can
ask "Why so slow?" before deciding to kill it.
That could conceivably work. At least it wouldn't require running
EXPLAIN
* Pavel Stehule (pavel.steh...@gmail.com) wrote:
2013/1/11 Stephen Frost sfr...@snowman.net:
We can send a 'cancel query', how about a 'report on query' which
returns the plan and perhaps whatever other stats are easily available?
There is only one question - which POSIX signal can we use?
On 11 January 2013 16:52, Simon Riggs si...@2ndquadrant.com wrote:
We already overload the signals, so it's just a new type for the signal
handler to cope with.
See procsignal_sigusr1_handler()
I roughed up something to help you here... this is like 50% of a patch.
pg_explain_backend() calls