On Thu, Jan 21, 2016 at 4:08 PM, David Rowley wrote:
> It's quite simple to test how much of a win it'll be in the serial
> case today, and yes, it's not much, but it's a bit.
> create table t1 as select x from generate_series(1,1000000) x(x);
> vacuum analyze t1;
> select count(*) from (select * from t1 union all select * from t1) t;
> (1 row)
> Time: 185.793 ms
> -- Mock up pushed down aggregation by using sum() as a combine
> function for count(*)
> select sum(c) from (select count(*) c from t1 union all select
> count(*) from t1) t;
> (1 row)
> Time: 162.076 ms
> Not particularly incredible, but we don't normally turn our noses up
> at a 14% improvement, so let's just see how complex it will be to
> implement, once the upper planner changes are done.
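The mocked-up rewrite above can be checked for equivalence outside PostgreSQL too; here is a minimal sketch using Python's sqlite3 (a smaller stand-in table, but the same transformation: aggregate each UNION ALL branch, then combine the partial count(*) results with sum()):

```python
import sqlite3

# In-memory stand-in for the example table; 10k rows instead of 1M,
# but the rewrite being demonstrated is identical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1(x INTEGER)")
conn.executemany("INSERT INTO t1 VALUES (?)", ((i,) for i in range(1, 10001)))

# Plain aggregate over the UNION ALL (the un-pushed-down plan).
total = conn.execute(
    "SELECT count(*) FROM (SELECT * FROM t1 UNION ALL SELECT * FROM t1) t"
).fetchone()[0]

# Mocked-up push-down: aggregate per branch, combine with sum().
pushed = conn.execute(
    "SELECT sum(c) FROM (SELECT count(*) AS c FROM t1 "
    "UNION ALL SELECT count(*) AS c FROM t1) t"
).fetchone()[0]

print(total, pushed)  # both queries return 20000
```

The equivalence holds because count(*) is decomposable: sum() is a valid combine step for partial counts, which is exactly the property the proposed planner change would exploit.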
Mumble mumble. Why is that even any faster? Just because we avoid
the projection overhead of the Append node, which is a no-op here
anyway? If so, a planner change is one thought, but perhaps we also
ought to look at whether we can't reduce the execution-time overhead.
> But as you mention about lack of ability to make use of pre-sorted
> Path for each branch of the UNION ALL; I was really hoping Tom's patch
> will improve that part by allowing the planner to choose a pre-sorted
> Path and perform a MergeAppend instead of an Append, which would allow
> pre-sorted input into a GroupAggregate node.
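The MergeAppend-into-GroupAggregate idea described above amounts to interleaving already-sorted branch outputs and aggregating each group in one streaming pass, with no explicit Sort node. A small Python sketch of the concept (illustrative data, not PostgreSQL executor code):

```python
import heapq
from itertools import groupby

# Two pre-sorted "branches" of a UNION ALL, as (group_key, value) rows.
branch_a = [(1, 10), (2, 20), (2, 5)]
branch_b = [(1, 7), (3, 30)]

# MergeAppend: interleave the sorted branches without re-sorting.
merged = heapq.merge(branch_a, branch_b, key=lambda row: row[0])

# GroupAggregate: rows now arrive clustered by key, so a single
# streaming pass can emit each group's aggregate as it completes.
result = [(k, sum(v for _, v in rows))
          for k, rows in groupby(merged, key=lambda row: row[0])]

print(result)  # [(1, 17), (2, 25), (3, 30)]
```

The point of choosing a pre-sorted Path per branch is precisely that the merge step is O(n) with no sort, which is what would let the planner pick GroupAggregate over HashAggregate here.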
I won't hazard a guess on that point...
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)