On Sat, Aug 9, 2014 at 5:15 AM, Josh Berkus wrote:
Folks,
So one thing we tell users who have chronically long IN() lists is that
they should create a temporary table and join against that instead.
Other than not having the code, is there a reason why PostgreSQL
shouldn't do something like this behind the scenes, automatically?
--
Josh Berkus
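As a minimal sketch of that rewrite (the table and column names here are hypothetical, not from the thread), the IN() form and the temp-table form look like this:

-- original form: a chronically long literal list (hypothetical names)
select * from orders where customer_id in (1, 2, 3 /* ... thousands more ... */);

-- the suggested rewrite: stage the list in a temp table, analyze it, and
-- join, so the planner has real statistics and can pick a hash or merge join
create temp table wanted_ids (customer_id bigint);
insert into wanted_ids values (1), (2), (3); -- ... thousands more ...
analyze wanted_ids;

select o.*
from orders o
join wanted_ids w on w.customer_id = o.customer_id;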
On Wed, Apr 28, 2010 at 5:37 AM, Vlad Arkhipov wrote:
> Even if that is done, it does not solve the original issue. If I
> understood you right, there is currently no decent way of speeding up the query
>
> select *
> from t2
> join t1 on t1.t = t2.t
> where t1.id = X;
>
> other than propagating the
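The workaround alluded to here is to propagate the values by hand. A sketch, assuming t1 maps id = X to the t values 2, 3, 4 (the literal list is an assumption for illustration):

-- hand-propagated version: the t values belonging to t1.id = X are written
-- inline, so the planner can use t2's per-value statistics instead of a
-- generic join selectivity estimate
select t2.*
from t2
where t2.t in (2, 3, 4);  -- assumed t values for id = X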
On Mon, Apr 26, 2010 at 5:33 AM, Cédric Villemain wrote:
> In the first query, the planner doesn't use the information about the values 2, 3, 4.
> It just says: I'll bet I'll have 2 rows in t1 (I think it should
> say 3, but it doesn't).
> So it divides the estimated number of rows in the t2 table by 5
> (distinct values)
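To make the arithmetic concrete: with 5 distinct values of t, that generic estimate works out to roughly rows(t2) / 5 per matching t1 row, no matter which particular values 2, 3, 4 are involved. The inputs the planner is working from can be inspected directly (a sketch; run ANALYZE first so the view is populated):

-- per-column statistics driving the estimate
select tablename, attname, n_distinct, most_common_vals, most_common_freqs
from pg_stats
where tablename in ('t1', 't2') and attname = 't';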
Robert Haas writes:
> Hmm. We currently have a heuristic that we don't record a value as an
> MCV unless it's more frequent than the average frequency. When the
> number of MCVs is substantially smaller than the number of distinct
> values in the table, this is probably a good heuristic, since it
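For anyone following along, the MCV list under discussion is visible in pg_stats, and the per-column statistics target bounds how many entries it can hold. A sketch against the thread's test tables:

-- allow up to 1000 MCV entries for t1.t, then re-sample; values failing
-- the "more frequent than average" heuristic still won't be recorded,
-- which is the behavior under discussion
alter table t1 alter column t set statistics 1000;
analyze t1;
select most_common_vals, most_common_freqs
from pg_stats
where tablename = 't1' and attname = 't';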
Cédric Villemain wrote:
> 2010/4/23 Robert Haas :
>> Since all your data is probably fully cached, at a first cut, I
>> might try setting random_page_cost and seq_page_cost to 0.005 or
>> so, and adjusting effective_cache_size to something appropriate.
>
> that will help correct the situation,
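A sketch of those settings applied per session (the effective_cache_size value below is a placeholder to size for the actual machine):

-- for a fully cached data set: make page fetches nearly free relative to
-- per-row CPU cost, and tell the planner how much cache is available
set random_page_cost = 0.005;
set seq_page_cost = 0.005;
set effective_cache_size = '4GB';  -- placeholder: size to the real cache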
On Thu, Apr 22, 2010 at 10:37 PM, Vlad Arkhipov wrote:
> I don't think this is just an issue with statistics, because the same
> problem arises when I try executing a query like this:
I'm not sure how you think this proves that it isn't a problem with
statistics, but I think what you should be
Greg Smith writes:
I can't replicate your problem on the current development 9.0; all
three plans come back with results quickly when I just tried it:
Nested Loop  (cost=0.00..50.76 rows=204 width=32) (actual time=0.049..0.959 rows=200 loops=1)
  ->  Seq Scan on t1  (cost=0.00..1.06 rows=1
Vlad Arkhipov wrote:
> Please do this small optimization if it is possible. It seems that the
> optimizer has all the information to create a fast plan but it does
> not do that.
This isn't strictly an optimization problem; it's an issue with the
statistics the optimizer has to work with, the ones ANALYZE
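One practical detail for anyone reproducing this: autovacuum never touches temporary tables, so the statistics Greg refers to exist only after an explicit ANALYZE, e.g.:

-- temp tables are not analyzed automatically; without this the planner
-- falls back on default guesses for t1 and t2
analyze t1;
analyze t2;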
Please do this small optimization if it is possible. It seems that the
optimizer has all the information to create a fast plan but it does not
do that.
create temp table t1 (id bigint, t bigint);
insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert i
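The test case is cut off above; the following is a reconstruction of its shape consistent with the discussion (five distinct t values, three of them behind id = 2; the remaining rows and the contents of t2 are assumptions):

create temp table t1 (id bigint, t bigint);
insert into t1 values (1, 1);
insert into t1 values (2, 2);
insert into t1 values (2, 3);
insert into t1 values (2, 4);  -- assumed: id = 2 covers t in (2, 3, 4)
insert into t1 values (3, 5);  -- assumed: five distinct t values in total

create temp table t2 (id bigint, t bigint);
-- assumed skew so that per-value frequencies in t2 differ
insert into t2 select i, 2 from generate_series(1, 200) i;
insert into t2 select i, 3 from generate_series(1, 100) i;

analyze t1;
analyze t2;

explain analyze
select * from t2 join t1 on t1.t = t2.t where t1.id = 2;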