On 9.2.4, I'm running two queries that are identical except for the value of
one column in the WHERE clause. Postgres picks very different query plans;
the first is much slower than the second.
Any ideas on how I can speed this up? I have btree indexes for all the
columns used in the query.
explain analyze
(
https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txt
shows a non-wrapped version of the queries and plans)
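The full queries are only in the linked gist, but a minimal sketch of the comparison being made might look like the following. The `line_items`/`purchased_items` names and join condition come from the plan fragment quoted later in the thread; the `order_id` filter column is an assumption for illustration only.

```sql
-- Hedged sketch: two queries identical except for one WHERE value,
-- run under EXPLAIN ANALYZE to compare the plans the planner picks.
-- (Real queries and plans are in the linked gist.)
EXPLAIN ANALYZE
SELECT *
FROM line_items li
JOIN purchased_items pi ON pi.line_item_id = li.id
WHERE li.order_id = 1;   -- value that gets the slow plan

EXPLAIN ANALYZE
SELECT *
FROM line_items li
JOIN purchased_items pi ON pi.line_item_id = li.id
WHERE li.order_id = 2;   -- value that gets the fast plan
```

The planner's choice can differ per value because row estimates come from per-column statistics (most-common values, histogram), so two literals in the same query shape can legitimately yield different plans.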
On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk wrote:
> On 9.2.4, running two identical queries except for the value of a col
chased_items pi (cost=0.00..7.60 rows=2 width=4) (actual
time=0.005..0.005 rows=1 loops=8178)
Index Cond: (line_item_id = li.id)
Heap Fetches: 144
Total runtime: 103.442 ms
(11 rows)
On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk wrote:
> On 9.2.4, running two
On Fri, Apr 5, 2013 at 6:54 PM, Greg Williamson wrote:
> Joe --
>
> >____
> > From: Joe Van Dyk
> >To: pgsql-performance@postgresql.org
> >Sent: Friday, April 5, 2013 6:42 PM
> >Subject: Re: [PERFORM] slow joins?
> >
>
The actual query selects columns from each of those tables.
If I remove the join on order_shipping_addresses, it's very fast. Likewise,
if I remove the join on skus, base_skus, or products, it's also very fast.
I'm pretty sure I have all the necessary indexes.
The below is also at
https://gist.g
Hm, setting join_collapse_limit = 9 seemed to fix the problem. Is that
my best/only option?
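For context, join_collapse_limit caps how many FROM/JOIN items the planner will consider reordering (the default is 8), so a query joining nine or more relations can get stuck with the textual join order. A sketch of the fix, with the database name as a placeholder:

```sql
-- Allow the planner to reorder up to 9 joined relations
-- for the current session only (default is 8).
SET join_collapse_limit = 9;

-- To persist it, set it in postgresql.conf, or per-database:
-- ALTER DATABASE mydb SET join_collapse_limit = 10;  -- "mydb" is a placeholder
```

Raising the limit enlarges the planner's search space, so planning time grows with it; small increases like this are the usual compromise.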
On Mon, Dec 16, 2013 at 1:52 PM, Joe Van Dyk wrote:
> The actual query selects columns from each of those tables.
>
> If I remove the join on order_shipping_addresses, it's very fast.
On Mon, Dec 16, 2013 at 4:14 PM, Tom Lane wrote:
> Joe Van Dyk writes:
> > Hm, setting set join_collapse_limit = 9 seemed to fix the problem. Is
> that
> > my best/only option?
>
> Yup, that's what I was just about to suggest. You might want to use
> 10 or 12 in
I have an events table that records page views and purchases (type =
'viewed' or type='purchased'). I have a query that figures out "people who
bought/viewed this also bought/viewed that".
It worked fine, taking about 0.1 seconds to complete, until a few hours ago
when it started taking hours to c
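The message is cut off before the query itself, but one common shape for a "people who viewed this also viewed that" query over such an events table can be sketched as follows. The schema (`events(user_id, product_id, type)`) and the literal product id are assumptions for illustration, not the poster's actual query:

```sql
-- Hedged sketch of a co-occurrence ("also viewed") query.
-- Assumed schema: events(user_id, product_id, type).
SELECT e2.product_id, count(*) AS views
FROM events e1
JOIN events e2
  ON e2.user_id = e1.user_id
 AND e2.product_id <> e1.product_id
WHERE e1.product_id = 123          -- placeholder product id
  AND e1.type = 'viewed'
  AND e2.type = 'viewed'
GROUP BY e2.product_id
ORDER BY views DESC
LIMIT 10;
```

A self-join like this is very sensitive to row estimates on `events`, which is consistent with the sudden slowdown described next in the thread.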
On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk wrote:
> I have an events table that records page views and purchases (type =
> 'viewed' or type='purchased'). I have a query that figures out "people who
> bought/viewed this also bought/viewed that".
>
>
Oops, didn't run vacuum analyze after deleting the events. Here is another
'explain analyze': http://explain.depesz.com/s/AviN
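This is the usual remedy after a bulk delete: dead tuples inflate the table and stale statistics mislead the planner, and one command refreshes both:

```sql
-- Reclaim dead tuples and refresh planner statistics after a bulk delete.
VACUUM ANALYZE events;
```

Autovacuum would eventually do this on its own, but running it explicitly right after a large DELETE gets the planner back on accurate estimates immediately.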
On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk wrote:
> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk wrote:
>
>> I have an events table
On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
wrote:
> Hi
>
> this plan looks well
>
> Regards
>
> Pavel
>
Here's one that's not quite as well: http://explain.depesz.com/s/SgT
Joe
>
> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk :
>
>> Oops, di
On Sat, Jan 24, 2015 at 11:14 PM, Pavel Stehule
wrote:
>
>
> 2015-01-25 7:38 GMT+01:00 Joe Van Dyk :
>
>>
>>
>> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
>> wrote:
>>
>>> Hi
>>>
>>> this plan looks well
>>>
I noticed that querying for
product_attributes @> '{"upsell":["true"]}'
is much slower than querying for
product_attributes @> '{"upsell": 1}'
Is that expected? I have a gin index on product_attributes. I'm using 9.4.1.
explain analyze
select count(*) from products where product_attributes
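The setup being described can be sketched as below. The index name is a placeholder; the table, column, and the two containment probes are from the message. The `jsonb_path_ops` variant at the end is a general option for `@>`-heavy workloads, not something the thread confirms was tried:

```sql
-- Assumed setup, for illustration.
CREATE INDEX products_attrs_gin
    ON products USING gin (product_attributes);

-- The two containment probes being compared:
SELECT count(*) FROM products
 WHERE product_attributes @> '{"upsell": ["true"]}';  -- slow

SELECT count(*) FROM products
 WHERE product_attributes @> '{"upsell": 1}';         -- fast

-- A jsonb_path_ops opclass index is often smaller and faster
-- for @> containment queries than the default jsonb_ops:
-- CREATE INDEX products_attrs_path_gin
--     ON products USING gin (product_attributes jsonb_path_ops);
```

With the default `jsonb_ops` opclass, keys and values are indexed as separate entries, so different right-hand shapes can generate index conditions of very different selectivity; `jsonb_path_ops` hashes whole paths, which tends to suit `@>` probes like these.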
You're right, brain fart. Nevermind! :)
On Sat, Nov 7, 2015 at 4:00 PM, Tom Lane wrote:
> Joe Van Dyk writes:
> > I noticed that querying for
> >product_attributes @> '{"upsell":["true"]}'
> > is much slower than querying for