> On Wed, Mar 4, 2015 at 4:41 AM, David Rowley <dgrowle...@gmail.com> wrote:
> >> This thread mentions "parallel queries" as a use case, but that means
> >> passing data between processes, and that requires being able to
> >> serialize and deserialize the aggregate state somehow. For actual data
> >> types that's not overly difficult I guess (we can use in/out functions),
> >> but what about aggregates using 'internal' state? I.e. aggregates
> >> passing pointers that we can't serialize?
> >
> > This is a good question. I really don't know the answer to it as I've not
> > looked at Robert's API for passing data between backends yet.
> >
> > Maybe Robert or Amit can answer this?
> 
> I think parallel aggregation will probably require both the
> infrastructure discussed here, namely the ability to combine two
> transition states into a single transition state, and also the ability
> to serialize and de-serialize transition states, which has previously
> been discussed in the context of letting hash aggregates spill to
> disk.  My current thinking is that the parallel plan will look
> something like this:
> 
> HashAggregateFinish
> -> Funnel
>     -> HashAggregatePartial
>       -> PartialHeapScan
> 
> So the workers will all pull from a partial heap scan and each worker
> will aggregate its own portion of the data.  Then it'll need to pass
> the results of that step back to the master, which will aggregate the
> partial results and produce the final results.  I'm guessing that if
> we're grouping on, say, column a, the HashAggregatePartial nodes will
> kick out 2-column tuples of the form (<value for a>, <serialized
> transition state for group with that value for a>).
> 
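
As an aside on the serialize / de-serialize requirement above: for an
aggregate whose transition state is a by-reference 'internal' value,
the worker would presumably have to flatten that state into a plain
byte string before it can cross the process boundary, and the master
would rebuild it on the other side. A toy sketch only, with entirely
hypothetical names (this is not an existing PostgreSQL API):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical 'internal' transition state holding a pointer. */
    typedef struct ToyState
    {
        int     nitems;
        double *items;      /* by-reference data: a raw copy won't do */
    } ToyState;

    /* Flatten the state into a self-contained byte buffer. */
    static char *
    toy_serialize(const ToyState *state, size_t *len)
    {
        size_t  payload = state->nitems * sizeof(double);
        char   *buf = malloc(sizeof(int) + payload);

        memcpy(buf, &state->nitems, sizeof(int));
        memcpy(buf + sizeof(int), state->items, payload);
        *len = sizeof(int) + payload;
        return buf;
    }

    /* Rebuild an equivalent state from the buffer on the other side. */
    static ToyState
    toy_deserialize(const char *buf)
    {
        ToyState state;

        memcpy(&state.nitems, buf, sizeof(int));
        state.items = malloc(state.nitems * sizeof(double));
        memcpy(state.items, buf + sizeof(int),
               state.nitems * sizeof(double));
        return state;
    }

With something along these lines, the (group key, serialized state)
tuples described above could be shipped through the Funnel like any
other data.
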
This may not be an urgent topic for v9.5, but I'd like to raise a
topic about the planner and aggregates.

Once we have two-phase aggregation, the planner will need to take the
possibility of partial aggregation into account while it constructs
paths, whereas the current implementation simply attaches the Agg node
on top of the completed join/scan plan.

A straightforward design is probably to add a FunnelPath with some
child nodes in set_rel_pathlist() or add_paths_to_joinrel(). Its child
could be a PartialAggregate node (or some other parallel-safe node, of
course). In that case, the node must inform the planner that it (more
precisely, its targetlist) returns partial aggregate states rather
than ordinary rows.
The planner then needs to track which kind of data each path delivers
to its parent node. A flag on the Path node could indicate this, and
we may also need to inject a dummy PartialAggregate when joining a
relation that returns ordinary rows with one that returns partial
aggregate states, as in the sketch below.
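
For illustration only, here is a minimal, self-contained sketch of
what such a per-path flag and the dummy-injection step could look
like. The struct and function names are hypothetical, not existing
PostgreSQL APIs; the real logic would presumably live somewhere
around add_paths_to_joinrel():

    #include <stdbool.h>
    #include <stdlib.h>

    /* Hypothetical, much simplified stand-in for a planner Path node. */
    typedef struct ToyPath
    {
        const char     *label;
        bool            returns_partial_agg;  /* states, not plain rows? */
        struct ToyPath *child;
    } ToyPath;

    /* Wrap a row-by-row path in a dummy partial-aggregate step. */
    static ToyPath *
    inject_partial_agg(ToyPath *subpath)
    {
        ToyPath *p = malloc(sizeof(ToyPath));

        p->label = "PartialAggregate";
        p->returns_partial_agg = true;
        p->child = subpath;
        return p;
    }

    /*
     * Before joining two paths, make both sides agree on what they
     * return.  If one side already produces partial aggregate states
     * and the other still produces ordinary rows, wrap the latter.
     */
    static void
    normalize_join_inputs(ToyPath **outer, ToyPath **inner)
    {
        if ((*outer)->returns_partial_agg &&
            !(*inner)->returns_partial_agg)
            *inner = inject_partial_agg(*inner);
        else if (!(*outer)->returns_partial_agg &&
                 (*inner)->returns_partial_agg)
            *outer = inject_partial_agg(*outer);
    }
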
That leads to another problem: the RelOptInfo->targetlist will then
depend on which Path is chosen. Even if AVG() takes a float datum as
its argument, the state to be combined is a float[3]. So we would need
to adjust the targetlist of the RelOptInfo once a Path has been
chosen.
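
To make the float[3] point concrete: AVG(float8) conceptually
accumulates a three-element state {N, sum(x), sum(x*x)}, and a combine
step would merge two such states element-wise before the final
function divides. A minimal sketch with purely illustrative names (no
such combine function exists today):

    #include <stdio.h>

    /* Illustrative model of the AVG(float8) transition state. */
    typedef struct AvgState
    {
        double N;      /* number of input rows */
        double sumX;   /* sum of inputs */
        double sumX2;  /* sum of squared inputs (for stddev/variance) */
    } AvgState;

    /* Combine two partial states, e.g. from two different workers. */
    static AvgState
    avg_combine(AvgState a, AvgState b)
    {
        AvgState r;

        r.N = a.N + b.N;
        r.sumX = a.sumX + b.sumX;
        r.sumX2 = a.sumX2 + b.sumX2;
        return r;
    }

    /* Finalize: the average is sum(x) / N. */
    static double
    avg_final(AvgState s)
    {
        return (s.N > 0) ? s.sumX / s.N : 0.0;
    }

    int
    main(void)
    {
        /* Two workers each aggregated part of the data (rows 1..5). */
        AvgState w1 = {3, 6.0, 14.0};   /* rows 1, 2, 3 */
        AvgState w2 = {2, 9.0, 41.0};   /* rows 4, 5 */

        printf("avg = %g\n", avg_final(avg_combine(w1, w2)));  /* 3 */
        return 0;
    }

The point for the planner is that a targetlist entry for a partially
aggregated column carries a value of this state type, not of the
aggregate's argument or result type.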

Anyway, these are at least the complications I have found around
integrating two-phase aggregation with the planner. Can anyone suggest
a minimally invasive way to do this integration?

Thanks,
--
NEC OSS Promotion Center / PG-Strom Project
KaiGai Kohei <kai...@ak.jp.nec.com>

