I wrote:
I chewed on this for a while and decided that there'd be no real harm in
taking the identification of the unique expressions out of
create_unique_path() and doing it earlier, in initsplan.c; we'd need a
couple more fields in SpecialJoinInfo, but that doesn't seem like a
problem. However,
On 2/28/15 12:01 PM, David Kubečka wrote:
With 'random_fk_dupl':
- Index Scan using facts_fk_idx on facts (cost=0.42..5.75
rows=100 width=15) (actual time=0.009..0.117 rows=98 loops=100)
With 'random_fk_uniq':
- Index Scan using facts_fk_idx on facts (cost=0.42..214.26
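The gap between the two plans is easier to see if the per-loop cost is multiplied back out. A quick sanity check on the numbers quoted in the two EXPLAIN fragments above (pure arithmetic, nothing PostgreSQL-specific):

```python
# Costs quoted in the two EXPLAIN fragments above.
cost_per_loop_dupl = 5.75    # random_fk_dupl plan: estimated cost per inner scan
loops = 100                  # that inner scan is repeated 100 times
cost_uniq = 214.26           # random_fk_uniq plan: one standalone scan

# Total estimated work for the repeated scan across all loops:
total_dupl = cost_per_loop_dupl * loops   # 575.0

# The planner charges ~37x less *per loop* in the duplicated case,
# even though the estimated totals differ by less than a factor of 3.
ratio_per_loop = cost_uniq / cost_per_loop_dupl
```

This is why the thread focuses on how the per-loop cost is derived, rather than on the totals.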
On 3/5/15 7:58 PM, Jim Nasby wrote:
This got answered on one of the other lists, right?
That was supposed to be off-list. I'll answer my own question: yes.
Sorry for the noise. :(
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
David Kubečka kubecka@gmail.com writes:
There is a division by loop_count because of the predicted effect of caching,
and it is exactly this division which makes the run_cost for a single index
lookup so low compared with the query version with random_fk_uniq. So the
main problem
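The division being described can be sketched in a few lines. Below is a simplified rendering of the Mackert-Lohman page-fetch estimate that PostgreSQL's cost_index() uses for repeated (parameterized) inner scans, as found in costsize.c; the concrete numbers are illustrative, not taken from the thread:

```python
# Simplified sketch of index_pages_fetched() (Mackert-Lohman) and of how
# cost_index() applies it to the inner side of a nestloop.  Illustrative
# only; the real logic lives in src/backend/optimizer/path/costsize.c.

def mackert_lohman_pages(tuples_fetched, table_pages, cache_pages):
    """Estimate distinct heap pages fetched when reading tuples_fetched
    tuples from a table of table_pages pages through a cache of
    cache_pages buffers (Mackert & Lohman approximation)."""
    T = max(float(table_pages), 1.0)
    Ns = float(tuples_fetched)
    b = max(float(cache_pages), 1.0)
    if T <= b:
        # Table fits in cache: fetches approach T asymptotically.
        return min((2.0 * T * Ns) / (2.0 * T + Ns), T)
    lim = (2.0 * T * b) / (2.0 * T - b)
    if Ns <= lim:
        return (2.0 * T * Ns) / (2.0 * T + Ns)
    return b + (Ns - lim) * (T - b) / T

def per_loop_pages(tuples_per_scan, loop_count, table_pages, cache_pages):
    """Repeated inner scan of a nestloop: estimate pages fetched over
    *all* loops, then divide by loop_count.  This division is what makes
    the per-loop run_cost so low once loop_count is large."""
    total = mackert_lohman_pages(tuples_per_scan * loop_count,
                                 table_pages, cache_pages)
    return total / loop_count

# One standalone scan of 100 tuples vs. the same scan repeated 100 times:
single = mackert_lohman_pages(100, table_pages=1000, cache_pages=512)
repeated = per_loop_pages(100, loop_count=100,
                          table_pages=1000, cache_pages=512)
```

With these (made-up) table sizes, the per-loop page estimate for the repeated scan comes out roughly half of the standalone estimate, because later loops are assumed to hit cached pages.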
On Mon, Mar 2, 2015 at 2:19 PM, David Kubečka kubecka@gmail.com wrote:
The question is why the optimizer, or rather the cost estimator, produced
such different estimates upon a very small change in input. Moreover, it
seems that the main culprit of the bad estimates isn't actually directly
related
Hi Tomas and others,
2015-03-02 21:29 GMT+01:00 Tomas Vondra tomas.von...@2ndquadrant.com:
Hi David ;-)
On 2.3.2015 20:19, David Kubečka wrote:
The question is why the optimizer, or rather the cost estimator,
produced such different estimates upon a very small change in input.
Moreover
Hi all,
I have encountered a performance problem with a relatively simple query,
which I think is caused by overly pessimistic estimates in the optimizer. I
originally ran into this issue on a table a few GBs large, but it can be
reproduced with a much smaller table as follows:
-- Setup main fact
David Kubečka kubecka@gmail.com wrote:
I have read the optimizer README file and also looked briefly at
the code, but this seems to be something not related to a
particular implementation of an algorithm (e.g. nested loop).
Perhaps it's the way cost estimates are propagated down.
It could
Hi David ;-)
On 2.3.2015 20:19, David Kubečka wrote:
The question is why the optimizer, or rather the cost estimator,
produced such different estimates upon a very small change in input.
Moreover, it seems that the main culprit of the bad estimates isn't
actually directly related to the outer table, but