Hello everyone,
*1) Context*
I'm working with large tables containing arrays of integers, indexed with "
*gin__int_ops*" GIN indexes offered by the "*intarray*" extension.
The goal I'm trying to achieve is a "nested loop semi join" using the
array inclusion operator (@>) as the join condition.
You will find the query plans obtained with EXPLAIN here:
- Visual query plan: https://explain.dalibo.com/plan#plan
- Raw query plan: https://explain.dalibo.com/plan#raw
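For readers following along, here is a minimal sketch of the kind of semi
join I mean. The table and column names below match the query fragment
quoted later in this thread, but the column types and the fact_pages
definition are assumptions on my part:

```sql
CREATE EXTENSION IF NOT EXISTS intarray;

-- fact_users holds an integer array of page ids per user;
-- fact_pages holds one page id per row (definition assumed).
CREATE TABLE fact_users (user_id bigint, w2_page_idxs integer[]);
CREATE TABLE fact_pages (page_idx integer);

CREATE INDEX ON fact_users USING gin (w2_page_idxs gin__int_ops);

-- Semi join: return each user whose array contains at least one
-- matching page id, without duplicating the user per matching page.
SELECT fu.user_id
FROM fact_users AS fu
WHERE EXISTS (
  SELECT 1
  FROM fact_pages AS fp
  WHERE fu.w2_page_idxs @> ARRAY[fp.page_idx]
);
```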
Thanks for your help,
Mickael
On Wed, Apr 27, 2022 at 4:28 PM Jeff Janes wrote:
> On Wed, Apr 27, 2022 at 8:19 AM Mickael van der Beek wrote:
> FROM
> fact_pages
> WHERE
> attribute_idxs && ARRAY[30160]
> FETCH FIRST 1 ROWS ONLY
> )
> AS fp
> WHERE
> fu.w2_page_idxs && fp.page_idx
> )
> ;
Without any surprises, the planner is using
y of integers and the right side a set
of rows with a single integer column.
The reason I'm using integer arrays is because it is the only way I have
found in PostgreSQL to get fast inclusion / exclusion checks on large
datasets (hundreds of millions of values).
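To illustrate the inclusion / exclusion checks I mean, here is a sketch
(the table name "items" is made up; the attribute_idxs column and the
example value 30160 come from the query quoted above):

```sql
CREATE EXTENSION IF NOT EXISTS intarray;

CREATE TABLE items (id bigint, attribute_idxs integer[]);
CREATE INDEX ON items USING gin (attribute_idxs gin__int_ops);

-- Inclusion: rows whose array contains ALL of the given values.
SELECT id FROM items WHERE attribute_idxs @> ARRAY[30160, 30161];

-- Overlap: rows whose array shares AT LEAST ONE of the given values.
SELECT id FROM items WHERE attribute_idxs && ARRAY[30160];

-- Exclusion: rows whose array shares NONE of the given values.
SELECT id FROM items WHERE NOT (attribute_idxs && ARRAY[30160]);
```

All three forms can use the GIN index (the exclusion case only via a
full index scan), which is what makes them practical at this scale.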
Did I misunderstand your r
On Mon, May 23, 2022 at 4:11 PM Jeff Janes wrote:
> On Mon, May 23, 2022 at 3:57 AM Mickael van der Beek <
> mickael.van.der.b...@gmail.com> wrote:
>
>> Hello Jeff,
>>
>> Sadly, the query you suggested won't work because you are only returning
>> the first row of
Hello everyone,
I'm playing around with BRIN indexes so as to get a feel for the feature.
During my tests, I was unable to make BRIN indexes perform better than a
sequential scan for queries searching for large value sets (20K values in
the example down below).
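A minimal sketch of the kind of setup I'm testing (table and column names
are made up; the point is the shape of the query, not the exact schema):

```sql
-- BRIN stores one summary (min/max by default) per range of heap pages,
-- so it only helps when values correlate with physical row order.
CREATE TABLE measurements (id bigint, val integer);
CREATE INDEX ON measurements USING brin (val);

-- Searching for a large set of scattered values: each value can match
-- many page ranges, so the resulting bitmap scan may touch most of the
-- table and end up slower than a plain sequential scan.
SELECT count(*)
FROM measurements
WHERE val = ANY (ARRAY(SELECT generate_series(1, 20000)));
```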
Creating the table with one single
Mickael
On Fri, Feb 24, 2023 at 6:19 PM Justin Pryzby wrote:
> On Fri, Feb 24, 2023 at 05:40:55PM +0100, Mickael van der Beek wrote:
> > Hello everyone,
> >
> > I'm playing around with BRIN indexes so as to get a feel for the feature.
> > During my tests, I was unable to make BRIN indexes perform better than a
> > sequential scan for queries searching for large value sets (20K values in
> > the example down below).
Hey everyone,
I have a PostgreSQL 10 database that contains two tables, both with two
levels of partitioning (by list, each partition holding a single value),
meaning that each first-level partition is itself partitioned.
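Concretely, the layout looks like this (a sketch with made-up table and
key names; each leaf partition holds one combination of the two values):

```sql
-- First level: LIST-partition by tenant.
CREATE TABLE edges (tenant_id int, entity_type int, src bigint, dst bigint)
  PARTITION BY LIST (tenant_id);

-- Second level: each tenant partition is itself LIST-partitioned.
CREATE TABLE edges_t1 PARTITION OF edges
  FOR VALUES IN (1)
  PARTITION BY LIST (entity_type);

-- Leaf partitions, one per (tenant, entity type) pair.
CREATE TABLE edges_t1_e1 PARTITION OF edges_t1 FOR VALUES IN (1);
CREATE TABLE edges_t1_e2 PARTITION OF edges_t1 FOR VALUES IN (2);
```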
The data in my use case is spread over 5K to 10K partitions (children
and grand-children
Thank both of you for your quick answers,
@Justin Based on your answer, it would seem that partitioning, or at
least partitioning this heavily, is not the right direction to take.
The reason I originally wanted to use partitioning was that I'm storing a
multi-tenant graph and that as the da