On Thu, Jun 13, 2019 at 03:17:07PM +0200, Rafia Sabih wrote:
On Thu, 13 Jun 2019 at 06:07, Kuntal Ghosh wrote:
On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra
wrote:
>
> >> ...
> >>
> >That'll be interesting work. For the above query, we can definitely
> >calculate the correction coefficient
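For illustration only (not code from the thread), here is a minimal standalone C sketch of the "correction coefficient" idea mentioned above: the coefficient is simply actual rows divided by estimated rows, and a later, similar node's raw estimate can be scaled by it. The function name and numbers are made up for the example.

#include <stdio.h>

/* Toy sketch, not PostgreSQL code: the correction coefficient is
 * actual rows / estimated rows for an executed node, and a later,
 * similar node's raw estimate can be multiplied by it. */
static double
correction_coefficient(double estimated_rows, double actual_rows)
{
    if (estimated_rows <= 0.0)
        return 1.0;             /* nothing to learn from, keep the estimate */
    return actual_rows / estimated_rows;
}

int
main(void)
{
    double coeff = correction_coefficient(100.0, 2500.0);  /* underestimated scan */
    double next_estimate = 100.0 * coeff;                  /* corrected: 2500 rows */

    printf("coefficient = %.2f, corrected estimate = %.0f rows\n",
           coeff, next_estimate);
    return 0;
}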
On Thu, Jun 13, 2019 at 09:37:07AM +0530, Kuntal Ghosh wrote:
On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra
wrote:
For example, we might require 1000 samples for a given node (say, scan
with some quals), before we start using it to tweak the estimates. Once
we get the number of estimates, we ca
On Thu, 13 Jun 2019 at 06:07, Kuntal Ghosh wrote:
>
> On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra
> wrote:
> >
> > For example, we might require 1000 samples for a given node (say, scan
> > with some quals), before we start using it to tweak the estimates. Once
> > we get the number of estimates
On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra
wrote:
>
> For example, we might require 1000 samples for a given node (say, scan
> with some quals), before we start using it to tweak the estimates. Once
> we get the number of estimates, we can continue collecting more data,
> and once in a while upd
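Purely as an illustration (not anything from an actual patch), a small standalone C sketch of the scheme described above: feedback for a node shape is recorded on every execution, but it only starts correcting estimates once a minimum number of samples (1000 here) has been collected, and it keeps refreshing as more data arrives. The struct and function names are invented for the example.

#include <stdio.h>

#define MIN_SAMPLES 1000        /* don't trust the feedback before this */

/* Per-node-shape feedback, e.g. keyed elsewhere by (relation, quals). */
typedef struct NodeFeedback
{
    long    nsamples;           /* executions observed so far */
    double  est_total;          /* sum of planner estimates */
    double  act_total;          /* sum of actual row counts */
} NodeFeedback;

/* Record one more executed instance of the node. */
static void
feedback_record(NodeFeedback *fb, double estimated, double actual)
{
    fb->nsamples++;
    fb->est_total += estimated;
    fb->act_total += actual;
}

/* Apply the learned correction only once enough samples exist. */
static double
feedback_correct(const NodeFeedback *fb, double estimated)
{
    if (fb->nsamples < MIN_SAMPLES || fb->est_total <= 0.0)
        return estimated;       /* still in the "collecting" phase */
    return estimated * (fb->act_total / fb->est_total);
}

int
main(void)
{
    NodeFeedback fb = {0, 0.0, 0.0};

    /* Simulate executions where the planner says 100 but we see 1000 rows. */
    for (int i = 0; i < 1500; i++)
        feedback_record(&fb, 100.0, 1000.0);

    printf("corrected estimate after %ld samples: %.0f rows\n",
           fb.nsamples, feedback_correct(&fb, 100.0));
    return 0;
}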
On Wed, Jun 12, 2019 at 06:14:41PM +0530, Kuntal Ghosh wrote:
Hello,
On Wed, Jun 12, 2019 at 5:06 PM Konstantin Knizhnik
wrote:
On 12.06.2019 0:43, Tomas Vondra wrote:
I don't think "learning phase" is an issue, in fact I think that's
something we need to do - it ensures we have enough data to
Hello,
On Wed, Jun 12, 2019 at 5:06 PM Konstantin Knizhnik
wrote:
> On 12.06.2019 0:43, Tomas Vondra wrote:
> I don't think "learning phase" is an issue, in fact I think that's
> something we need to do - it ensures we have enough data to make good
> decisions.
>
> What is wrong with learning pha
On 12.06.2019 0:43, Tomas Vondra wrote:
I don't think "learning phase" is an issue, in fact I think that's
something we need to do - it ensures we have enough data to make good
decisions.
What is wrong with the learning phase is that it requires some DBA
assistance: somebody should determine wh
>>I tried to create a much simpler version of AQO based on the auto_explain
>>extension.
>>This extension provides all the necessary infrastructure to analyze
>>statements with long execution times.
>>I have added two new modes to auto_explain:
>>1. Auto generation of multicolumn statistics for variables usi
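As a rough illustration of what such a mode could emit (this is not the actual extension code, and the table and column names are made up), the extension would need to build an extended-statistics definition over columns that appear together in a misestimated clause, roughly like this standalone C snippet prints:

#include <stdio.h>

/* Standalone illustration only: build the kind of statement an
 * "auto multicolumn statistics" mode could issue; a subsequent
 * ANALYZE would then populate the statistics object. */
int
main(void)
{
    const char *table = "orders";                         /* made-up example */
    const char *cols[] = {"country", "shipping_method"};  /* correlated columns */
    char        sql[256];

    snprintf(sql, sizeof(sql),
             "CREATE STATISTICS IF NOT EXISTS stat_%s_%s_%s ON %s, %s FROM %s;",
             table, cols[0], cols[1], cols[0], cols[1], table);

    /* Prints:
     *   CREATE STATISTICS IF NOT EXISTS stat_orders_country_shipping_method
     *     ON country, shipping_method FROM orders;
     */
    printf("%s\n", sql);
    return 0;
}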
Hi Alexander,
Thanks for starting this thread. I've had similar ideas in the past and
even hacked together something (very dirty), so it's great someone else
is interested in this topic too.
On Mon, Jun 10, 2019 at 11:53:02AM +0300, Konstantin Knizhnik wrote:
Hi,
Inefficiency of Postgres on so
Hello,
This seems very interesting and makes me think about two other projects:
- https://github.com/trustly/pg_badplan
- https://github.com/ossc-db/pg_plan_advsr
As I understand all this, there are actually 3 steps:
- compare actual / estimated rows (see the sketch after this list)
- suggest some statistics-gathering modification
-
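A toy standalone sketch of the first step only, with an arbitrary 10x threshold that is not taken from either of the projects above: flag a node as misestimated when actual and estimated row counts differ by more than that factor.

#include <stdio.h>
#include <stdbool.h>

/* Step 1 only, as a toy: decide whether a plan node's estimate is "bad"
 * by comparing it with the actual row count.  The factor is an arbitrary
 * example threshold. */
static bool
estimate_is_bad(double estimated_rows, double actual_rows, double factor)
{
    if (estimated_rows < 1.0)
        estimated_rows = 1.0;       /* avoid division by zero */
    if (actual_rows < 1.0)
        actual_rows = 1.0;

    return actual_rows / estimated_rows > factor ||
           estimated_rows / actual_rows > factor;
}

int
main(void)
{
    /* e.g. the planner expected 200 rows, the scan actually returned 45000 */
    if (estimate_is_bad(200.0, 45000.0, 10.0))
        printf("misestimate detected: candidate for extended statistics\n");
    return 0;
}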