AI Rumman wrote:
> The ENTITY table has 2164493 rows with data as follows:
>
>        type       |  count
> ------------------+--------
>  Contacts         | 327352
>  Candidate        |  34668
>  Emailst          |  33604
>  Calendar         | 493956
>  Contacts Image   |      7
>  PriceBooks       |      2
>  Notes Attachment |     17
>  SalesOrder       |      6
> A
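A breakdown like the one above can be produced with a plain aggregate; a minimal sketch, assuming the "type" column shown is the setype column named later in this thread:

    -- Per-type row counts for the entity table.
    -- Assumes the type column is named setype, as stated later in the thread.
    SELECT setype AS type, count(*) AS count
    FROM entity
    GROUP BY setype
    ORDER BY count(*) DESC;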
On Fri, Dec 14, 2012 at 3:34 PM, Kevin Grittner wrote:
> Claudio Freire wrote:
>
> > Selectivity is decided based on the number of distinct values on
> > both sides, and the table's name "entity" makes me think it's a
> > table that is reused for several things. That could be a problem,
> > since
Claudio Freire wrote:
> Selectivity is decided based on the number of distinct values on
> both sides, and the table's name "entity" makes me think it's a
> table that is reused for several things. That could be a problem,
> since that inflates distinct values, feeding misinformation to
> the planner.
On Fri, Dec 14, 2012 at 5:25 PM, AI Rumman wrote:
> Are you suggesting making different tables for Contacts, Candidate, etc.?
Yes
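One way to act on that is to carve the big types out into their own tables; a minimal sketch, assuming the setype column described below, with purely illustrative table names:

    -- Illustrative only: materialize one type as its own table, then repeat
    -- for Candidate, Emailst, Calendar, etc.  Indexes, constraints and the
    -- application's queries would all have to follow the new layout.
    CREATE TABLE contacts AS
        SELECT * FROM entity WHERE setype = 'Contacts';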
Yes, I do have a column in the entity table, setype, where the values are
'Contacts', 'Candidate', etc.
I have an index on it as well.
Are you suggesting making different tables for Contacts, Candidate, etc.?
On Fri, Dec 14, 2012 at 3:10 PM, Claudio Freire wrote:
> On Fri, Dec 14, 2012 at 4:22 PM, Tom L
On Fri, Dec 14, 2012 at 4:22 PM, Tom Lane wrote:
> "Kevin Grittner" writes:
>> AI Rumman wrote:
>>> Does an FK constraint help to improve performance? Or is it only
>>> for maintaining data integrity?
>
>> I'm not aware of any situation where adding a foreign key
>> constraint would improve performance.
So I am going to change join_collapse_limit and from_collapse_limit to 20.
Do I need to set geqo_threshold to greater than 20? It is currently 12 (the default).
And could you let me know why the GEQO optimizer is not working well in this case?
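For reference, those settings can be tried per session before touching postgresql.conf; a sketch with the values discussed above. Raising geqo_threshold past the collapse limits (as the question asks) keeps the exhaustive planner, rather than GEQO, in charge of join lists up to that size:

    -- Session-level planner settings for experimentation; move them to
    -- postgresql.conf once the effect is confirmed.
    SET join_collapse_limit = 20;
    SET from_collapse_limit = 20;
    SET geqo_threshold = 21;  -- above the collapse limits, so GEQO is not used first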
On Fri, Dec 14, 2012 at 2:22 PM, Tom Lane wrote:
> "Kevin Gri
"Kevin Grittner" writes:
> AI Rumman wrote:
>> Does an FK constraint help to improve performance? Or is it only
>> for maintaining data integrity?
> I'm not aware of any situation where adding a foreign key
> constraint would improve performance.
There's been talk of teaching the planner to use the
AI Rumman wrote:
> Claudio Freire wrote:
>> I think it's more likely a missing FK constraint.
> Does an FK constraint help to improve performance? Or is it only
> for maintaining data integrity?
I'm not aware of any situation where adding a foreign key
constraint would improve performance.
-Kevin
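For completeness, declaring such a constraint looks like the sketch below; the child table and column names are purely illustrative, not taken from the OP's schema, and the referenced column has to be a primary key or unique:

    -- Illustrative only: a child table whose rows must reference an existing
    -- entity row.  The referenced column must be a primary key or unique.
    ALTER TABLE activity
        ADD CONSTRAINT activity_entity_fk
        FOREIGN KEY (entityid) REFERENCES entity (entityid);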
On Fri, Dec 14, 2012 at 4:01 PM, AI Rumman wrote:
> Does an FK constraint help to improve performance? Or is it only for
> maintaining data integrity?
I'm not entirely sure it's taken into account; I think it is, but an FK
would tell the planner that every non-null value will produce a row.
It seems
Does an FK constraint help to improve performance? Or is it only
for maintaining data integrity?
On Thu, Dec 13, 2012 at 7:38 PM, Claudio Freire wrote:
> On Thu, Dec 13, 2012 at 8:50 PM, Evgeny Shishkin
> wrote:
> >>> OP joins 8 tables, and I suppose join_collapse_limit is set to the default
> >>> 8. I thought PostgreSQL's optimiser is not MySQL's.
On Thu, Dec 13, 2012 at 8:50 PM, Evgeny Shishkin wrote:
>>> OP joins 8 tables, and I suppose join_collapse_limit is set to the default 8. I
>>> thought PostgreSQL's optimiser is not MySQL's.
>>
>> It's not obvious to me that there's anything very wrong with the plan.
>> An 8-way join that produces 15
On Dec 14, 2012, at 3:36 AM, Tom Lane wrote:
> Evgeny Shishkin writes:
>> On Dec 14, 2012, at 3:09 AM, Andrew Dunstan wrote:
>>> Well, it looks like it's choosing a join order that's quite a bit different
>>> from the way the query is expressed, so the OP might need to play around
>>> with forcing the join order some.
Evgeny Shishkin writes:
> On Dec 14, 2012, at 3:09 AM, Andrew Dunstan wrote:
>> Well, it looks like it's choosing a join order that's quite a bit different
>> from the way the query is expressed, so the OP might need to play around
>> with forcing the join order some.
> OP joins 8 tables, and
On Dec 14, 2012, at 3:09 AM, Andrew Dunstan wrote:
>
> On 12/13/2012 05:42 PM, Claudio Freire wrote:
>> And it looks like it all may be starting to go south here:
>>> -> Hash Join  (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)
On 12/13/2012 05:42 PM, Claudio Freire wrote:
And it looks like it all may be starting to go south here:
-> Hash Join  (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)
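The estimate in that line (rows=34489) is off from the actual count (rows=205420) by roughly a factor of six. One way to see what the planner believes about a column such as setype (mentioned earlier in the thread) is the pg_stats view; a sketch, with the column name assumed:

    -- What ANALYZE has recorded for entity.setype (column name assumed from
    -- elsewhere in the thread).
    SELECT attname, n_distinct, null_frac
    FROM pg_stats
    WHERE tablename = 'entity' AND attname = 'setype';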
On Thu, Dec 13, 2012 at 7:36 PM, Andrew Dunstan wrote:
> On 12/13/2012 05:12 PM, AI Rumman wrote:
>>
>> Why is the number of rows different between actual and estimated?
>>
>
>
> Isn't that in the nature of estimates? An estimate is a heuristic guess at
> the number of rows it will find for the given query or part of a query.
On Dec 14, 2012, at 2:36 AM, Andrew Dunstan wrote:
>
> On 12/13/2012 05:12 PM, AI Rumman wrote:
>> Why is the number of rows different between actual and estimated?
>>
>
>
> Isn't that in the nature of estimates? An estimate is a heuristic guess at
> the number of rows it will find for the given query or part of a query.
On 12/13/2012 05:12 PM, AI Rumman wrote:
Why is the number of rows different between actual and estimated?
Isn't that in the nature of estimates? An estimate is a heuristic guess
at the number of rows it will find for the given query or part of a
query. It's not uncommon for estimates to
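Both figures sit side by side in EXPLAIN ANALYZE output: the first rows= is the planner's estimate, the rows= inside the (actual ...) parentheses is what the executor really returned. A trivial, hypothetical example against the entity table:

    -- Hypothetical query: the first rows= figure is the estimate, the rows=
    -- inside the (actual ...) part is what the executor really returned.
    EXPLAIN ANALYZE
    SELECT * FROM entity WHERE setype = 'Contacts';
    --   Seq Scan on entity  (cost=... rows=<estimate> width=...)
    --                       (actual time=... rows=<actual> loops=1)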