2015-06-03 9:46 GMT+02:00 Craig Ringer <cr...@2ndquadrant.com>:

> On 3 June 2015 at 15:22, Pavel Stehule <pavel.steh...@gmail.com> wrote:
>
>> 2015-06-03 9:17 GMT+02:00 Craig Ringer <cr...@2ndquadrant.com>:
>>
>>> On 2 June 2015 at 15:11, Pavel Stehule <pavel.steh...@gmail.com> wrote:
>>>
>>>> 2015-06-02 9:07 GMT+02:00 Craig Ringer <cr...@2ndquadrant.com>:
>>>>
>>>>> For the majority of users I'm sure it's sufficient to just have a
>>>>> sample rate.
>>>>>
>>>>> Anything that's trying to match individual queries could be interested
>>>>> in all sorts of different things. Queries that touch a particular table
>>>>> being one of the more obvious things, or queries that mention a particular
>>>>> literal. Rather than try to design something complicated in advance that
>>>>> anticipates all needs, I'm thinking it makes sense to just throw a hook in
>>>>> there. If some patterns start to emerge in terms of useful real-world
>>>>> filtering criteria, then that would better inform any more user-accessible
>>>>> design down the track.
>>>>>
>>>> The same method could be interesting for interactive EXPLAIN ANALYZE too.
>>>> TIMING has about 20-30% overhead, and usually we don't need perfectly
>>>> exact numbers.
>>>>
>>> I don't understand what you are suggesting here.
>>>
>> Using some sampling for the EXPLAIN ANALYZE statement.
>>
> Do you mean that you'd like to be able to set a fraction of queries on
> which auto_explain does ANALYZE, so most of the time it just outputs an
> ordinary EXPLAIN?
>
> Or maybe we're talking about different things re the original proposal? I
> don't see how this would work if you run EXPLAIN ANALYZE interactively,
> like you said above: you'd surely want it to report costs and timings, or
> whatever it is that you ask for, all the time, not just some of the time
> based on some background setting.
>
> Are you advocating a profiling-based approach for EXPLAIN ANALYZE timing,
> where we sample which executor node we're under at regular intervals
> instead of timing everything? Or suggesting a way to filter out sub-trees
> so you only get timing data on some sub-portion of a query?
>

There are a lot of possible variants. I would like to see costs and times for
EXPLAIN ANALYZE every time, but the precision of the times could be reduced to
1 ms. The question is whether we can significantly reduce the cost (or the
number of calls) of getting the system time.
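
As a rough sketch of the kind of thing I have in mind, assuming we only take a
real timestamp on every Nth tuple and extrapolate the rest (the names below are
only illustrative, not the actual executor instrumentation code):

#include <stdint.h>
#include <time.h>

#define SAMPLE_EVERY 16			/* really time 1 of every 16 iterations */

typedef struct SampledInstr
{
	uint64_t	ntuples;		/* total iterations seen */
	uint64_t	sampled;		/* iterations actually timed */
	double		sampled_secs;	/* time accumulated on sampled iterations */
	struct timespec start;
} SampledInstr;

static void
sampled_start(SampledInstr *ins)
{
	/* only pay for the clock on sampled iterations */
	if (ins->ntuples % SAMPLE_EVERY == 0)
		clock_gettime(CLOCK_MONOTONIC, &ins->start);
}

static void
sampled_stop(SampledInstr *ins)
{
	if (ins->ntuples % SAMPLE_EVERY == 0)
	{
		struct timespec end;

		clock_gettime(CLOCK_MONOTONIC, &end);
		ins->sampled_secs += (end.tv_sec - ins->start.tv_sec) +
							 (end.tv_nsec - ins->start.tv_nsec) / 1e9;
		ins->sampled++;
	}
	ins->ntuples++;
}

/* Estimated total time: scale the sampled time up by the sampling ratio. */
static double
sampled_total_secs(const SampledInstr *ins)
{
	if (ins->sampled == 0)
		return 0.0;
	return ins->sampled_secs * ((double) ins->ntuples / ins->sampled);
}

With SAMPLE_EVERY = 16 that is roughly a 16x reduction in clock_gettime calls;
the per-node times become estimates, which seems acceptable if we only need
about 1 ms precision anyway.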
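
And for the filtering hook idea upthread, a minimal sketch of what an
extension-installed filter might look like. Note that explain_filter_hook and
its type are hypothetical here, not an existing auto_explain API:

#include "postgres.h"

#include <string.h>

#include "executor/execdesc.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

/* Hypothetical extension point: a patched auto_explain would call the
 * installed filter before logging a plan, so the filtering criteria can
 * live in a separate extension. */
typedef bool (*explain_filter_hook_type) (QueryDesc *queryDesc);
extern explain_filter_hook_type explain_filter_hook;	/* assumed, not existing */

/* Example filter: sample roughly 1% of queries, and of those only log
 * queries whose text mentions a particular table. */
static bool
my_explain_filter(QueryDesc *queryDesc)
{
	if (random() % 100 != 0)
		return false;

	return queryDesc->sourceText != NULL &&
		strstr(queryDesc->sourceText, "interesting_table") != NULL;
}

void
_PG_init(void)
{
	explain_filter_hook = my_explain_filter;	/* install the hypothetical hook */
}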
Pavel

> --
> Craig Ringer                   http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>