On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Rick Otten <rottenwindf...@gmail.com> writes:
> > I'm wrestling with a very similar problem too - except instead of
> > partitions I have a views on top of a bunch (50+) of unioned materialized
> > views, each "partition" with 10M - 100M rows. On 9.6.6 the queries would
> > use the indexes on each materialized view. On 10.1, every materialized
> > view is sequence scanned.
> Can you post a self-contained example of this behavior? My gut reaction
> is that the changes for the partitioning feature broke some optimization
> that used to work ... but it could easily be something else, too. Hard
> to say with nothing concrete to look at.
I think it is worth trying to reproduce in an example. I'll try to cook
something up that illustrates it. It should be doable.
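To be concrete, the shape I'll try to reproduce is roughly this (a minimal sketch; the table and column names here are made up for illustration):

```sql
-- Hypothetical minimal reproduction: two "partition" matviews,
-- each with its own index, unioned under a plain view.
create materialized view part_2018_01 as
    select g as id, md5(g::text) as payload
    from generate_series(1, 1000000) g;
create index on part_2018_01 (id);

create materialized view part_2018_02 as
    select g as id, md5(g::text) as payload
    from generate_series(1000001, 2000000) g;
create index on part_2018_02 (id);

analyze part_2018_01;
analyze part_2018_02;

create view all_parts as
    select * from part_2018_01
    union all
    select * from part_2018_02;

-- On 9.6 this gave index scans on each matview;
-- on 10.1 I'm seeing seq scans instead:
explain analyze select * from all_parts where id = 42;
```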
> > I'm mostly hoping with fingers crossed that something in 10.2, which is
> > coming out next week, fixes it.
> If you'd reported this in suitable detail awhile ago, we might have been
> able to fix it for 10.2. At this point, with barely 30 hours remaining
> before the planned release wrap, it's unlikely that anything but the most
> trivial fixes could get done in time.
I wish I could move faster on identifying and reporting this sort of thing.
We only cut over to 10.1 about 2 weeks ago and didn't discover the issue
until we'd been running for a few days (and eliminated everything else we
could think of - including the bug that is fixed in 10.2 that crashes some
queries when they have parallel gather enabled).
My hope is that 10.2 will fix our issue "by accident" rather than on purpose.
I'll try to build a test case this afternoon.
I use a view on top of the materialized views so I can swap them in and out
with a "create or replace" that doesn't disrupt downstream dependencies.
I'm currently thinking that, to work around this issue in the short term, I
need to build a materialized view on top of the materialized views, and then
put my view on top of that (so I can swap out the big matview without
disrupting downstream dependencies). It means a lot more disk will be
needed, and moving partitions around will be much less elegant, but I can
live with that if it fixes the performance problems caused by the sequential
scanning. Hopefully the planner will use the indexes on the "big"
materialized view.
I'm going to try that hack this afternoon too.
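Roughly, the workaround I have in mind looks like this (the relation and column names below are placeholders):

```sql
-- Materialize the union itself, so the planner sees one physical
-- relation with its own index, instead of a view over many matviews.
create materialized view myschema.my_big_matview as
    select * from myschema.my_matview_partition_2018_01
    union all
    select * from myschema.my_matview_partition_2018_02;

create index on myschema.my_big_matview (id);  -- whatever column we filter on
analyze myschema.my_big_matview;

-- Keep the stable view on top for downstream dependencies.
create or replace view myschema.my_view as
    select * from myschema.my_big_matview;
```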
I was going to blog about this approach of using a view to do partitioning
of materialized views, but I'm not sure when I'll ever get to it. It was
this list that originally gave me the idea to try this approach. The
partitions are actually materialized views of foreign tables from a Hadoop
cluster.
FWIW, here is the function that builds the view:
create or replace function treasure_data."relinkMyView"()
returns text as $$
declare
    wrMatView  text;
    fromString text := '';
begin
    for wrMatView in
        select c.relname from pg_class c
        join pg_namespace n on c.relnamespace = n.oid
        where c.relkind = 'm'
          and n.nspname = 'myschema'
          and c.relname ~ 'my_matview_partition_\d\d\d\d_\d\d$'
        order by c.relname
    loop
        if length(fromString) > 0 then
            fromString := format ('%s union all select * from myschema.%I', fromString, wrMatView);
        else
            fromString := format ('select * from myschema.%I', wrMatView);
        end if;
    end loop;
    execute format ('create or replace view myschema.my_view as %s', fromString);
    grant select on myschema.my_view to some_read_only_role;
    grant select on myschema.my_view to some_read_write_role;
    return format ('create or replace view myschema.my_view as %s', fromString);
end
$$ language plpgsql;
To swap a partition out, I rename it to something that does not conform to
the regex pattern above, and then run the function.
To swap a partition in, I rename it to something that does conform to the
regex pattern, and then run the function.
(of course, that is mostly automated, but it works by hand too)
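For example, swapping a partition out and back in by hand looks roughly like this (the suffix used for the non-matching name is arbitrary):

```sql
-- Swap out: rename so it no longer matches the regex, then relink.
alter materialized view myschema.my_matview_partition_2018_01
    rename to my_matview_partition_2018_01_retired;
select treasure_data."relinkMyView"();

-- Swap in: rename back to a matching name, then relink again.
alter materialized view myschema.my_matview_partition_2018_01_retired
    rename to my_matview_partition_2018_01;
select treasure_data."relinkMyView"();
```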
This has been working great for us until we jumped to PG 10, when suddenly
I can't get the planner to use the indexes in the partitions any more.