On Wed, Nov 13, 2019 at 4:01 PM Luís Roberto Weck <luisrobe...@siscobra.com.br> wrote:
>
> Maybe PostgreSQL can't find a way to calculate HAVING estimates?
>
I wasn't even thinking of the HAVING estimates; I was thinking of just the
raw aggregates. It can't implement the HAVING until it has the
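For context, a minimal sketch of why the HAVING estimate is hard (the table and column names below are hypothetical, not taken from the plans in the thread): the predicate filters on an aggregate value that only exists after grouping, so the planner has no per-column statistics to estimate its selectivity from.

```sql
-- Hypothetical example: the HAVING clause filters on count(*), a value
-- that is only known after aggregation, so ordinary column statistics
-- cannot predict how many groups will survive the filter.
SELECT project_id, count(*) AS n
  FROM subscriptions           -- table name assumed, for illustration only
 GROUP BY project_id
HAVING count(*) > 1000;        -- evaluated per group, post-aggregation
```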
On Wed, Nov 13, 2019 at 3:59 PM Luís Roberto Weck <luisrobe...@siscobra.com.br> wrote:
>
>
> Indeed, reducing the costs made the query run in parallel, but the
> improvement in speed was not worth the cost (CPU).
>
Could you show the plan for that?
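The thread doesn't show which settings were changed; "reducing the costs" presumably refers to the parallel cost GUCs. A hedged sketch of how one would coax a parallel plan for diagnosis:

```sql
-- Illustrative session-level settings to encourage a parallel plan;
-- zeroing the cost parameters is a diagnostic step, not a recommendation
-- for production use.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET max_parallel_workers_per_gather = 4;
-- Then re-run EXPLAIN (ANALYZE, BUFFERS) on q3 and compare plans.
```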
On 13/11/2019 17:40, Jeff Janes wrote:
On Wed, Nov 13, 2019 at 3:11 PM Luís Roberto Weck
<luisrobe...@siscobra.com.br> wrote:
Hi!
Is there a reason query 3 can't use parallel workers? Using q1 and q2
they seem very similar but can use up to 4 workers to run faster:
On 13/11/2019 17:47, Tomas Vondra wrote:
On Wed, Nov 13, 2019 at 05:16:44PM -0300, Luís Roberto Weck wrote:
Hi!
Is there a reason query 3 can't use parallel workers? Using q1 and q2
they seem very similar but can use up to 4 workers to run faster:
q1: https://pastebin.com/ufkbSmfB
Hi!
Is there a reason query 3 can't use parallel workers? Using q1 and q2
they seem very similar but can use up to 4 workers to run faster:
q1: https://pastebin.com/ufkbSmfB
q2: https://pastebin.com/Yt32zRNX
q3: https://pastebin.com/dqh7yKPb
The sort node on q3 takes almost 12 seconds,
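When a Sort node dominates a plan like this, one common first check (not shown in the truncated snippet above) is whether the sort spills to disk; EXPLAIN (ANALYZE) reports the sort method in its output:

```sql
-- If EXPLAIN (ANALYZE) shows "Sort Method: external merge  Disk: ...",
-- the sort spilled to disk; a larger work_mem for the session may help.
-- The value below is illustrative only.
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- q3 from the pastebin link above
```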
Wow! Thank you very much Jeff!! I am really grateful.
Thanks to the btree (instead of GIN) the query now takes about 500ms
instead of 70s.
On Wed, 13 Nov 2019 at 13:18, Jeff Janes wrote:
> On Wed, Nov 13, 2019 at 6:56 AM Marco Colli
> wrote:
>
>> > the answer is that it is because it is a GIN
On Wed, Nov 13, 2019 at 6:56 AM Marco Colli wrote:
> > the answer is that it is because it is a GIN index. Make the same index
> only as btree, and you should get good performance as it can filter the
> tags within a given project without visiting the table.
>
> Currently I have this GIN index:
>
Currently I have this GIN index:
"index_subscriptions_on_project_id_and_tags" gin (project_id,
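Jeff's suggestion can be sketched as follows, under assumptions the truncated snippet doesn't confirm (that the table is `subscriptions` and that `tags` is a text array; the index name below is hypothetical):

```sql
-- A plain btree on the same columns. Unlike the GIN index, a btree on
-- (project_id, tags) keeps both values together in each index tuple, so
-- a scan restricted to one project can check the tags from the index
-- alone, avoiding heap visits (an index-only scan, when the visibility
-- map allows it).
CREATE INDEX index_subscriptions_on_project_id_and_tags_btree
    ON subscriptions USING btree (project_id, tags);
```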
On Wed, Nov 13, 2019 at 5:47 AM Morris de Oryx wrote:
> Disclaimer: Out over my skis again.
>
> From what you say here, and over on SO, it sounds like you've got two
> problems:
>
> * Matching on *huge* numbers of records because of common tags.
>
> * A dynamic collection of tags as they're customer driven/configured.
>
> An "ideal" solution might look like a
Replying to the previous questions:
- work_mem = 64MB (there are hundreds of connections)
- the project 123 has more than 7M records, and those that don't have the
tag 'en' are 4.8M
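Since work_mem is allocated per sort/hash operation per backend, hundreds of connections constrain how high it can safely be set globally. One hedged alternative, not discussed in the thread, is raising it only for the heavy query:

```sql
-- A global work_mem increase multiplies across every connection, but
-- SET LOCAL scopes the change to the current transaction only
-- (the value is illustrative, not a recommendation).
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the expensive query here ...
COMMIT;
```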
> What was the plan for the one that took 500ms?
This is the query / plan without the filter on tags:
SELECT