Shigeru HANADA wrote:
>> My gut feeling is that planning should be done by the server which
>> will execute the query.
> Agreed; if the selectivities of both local and remote filtering are
> available, we can estimate result rows correctly and choose a better
> plan.  How about getting the row-count estimate by executing EXPLAIN
> for the fully-fledged remote query (IOW, one containing the
> pushed-down WHERE clause), and estimating the selectivity of the
> local filter on the basis of the statistics generated by the FDW via
> do_analyze_rel() and an FDW-specific sampling function?  In this
> design we would get quite accurate row estimates, because we can
> consider the filtering done on each side separately, though it
> requires an expensive remote EXPLAIN for each possible path.
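
For what it's worth, the row estimate could be scraped from the top line
of the remote EXPLAIN output.  A minimal sketch (the function name and
regex are purely illustrative, not pgsql_fdw's actual code):

```python
import re

def rows_from_explain(explain_line):
    """Extract the planner's row estimate from the first line of
    EXPLAIN output, e.g.
    'Seq Scan on t  (cost=0.00..35.50 rows=1230 width=12)'.
    Returns None if no estimate is found."""
    m = re.search(r'rows=(\d+)', explain_line)
    return int(m.group(1)) if m else None

print(rows_from_explain(
    "Seq Scan on t  (cost=0.00..35.50 rows=1230 width=12)"))  # 1230
```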

That sounds nice.
How would that work with a query that has one condition that could be
pushed down and one that has to be filtered locally?
Would you use the (local) statistics for the full table or can you
somehow account for the fact that rows have already been filtered
out remotely, which might influence the distribution?
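
If one simply assumed the remote and local conditions filter
independently (which is exactly what the question above calls into
doubt), the combined estimate would be something like the following
sketch (the function name is hypothetical):

```python
def estimate_result_rows(remote_rows, local_selectivity):
    """Combine the remote EXPLAIN row estimate with the locally
    computed selectivity of the quals that were not pushed down,
    assuming (perhaps wrongly) that the two filters are
    statistically independent."""
    return max(1, int(remote_rows * local_selectivity))

print(estimate_result_rows(1230, 0.05))  # 61
```

The independence assumption is the weak point: if the remote filter
skews the distribution of the column tested locally, the full-table
statistics no longer describe the rows that actually arrive.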

> You are right, built-in equality and inequality operators don't cause
> collation problems.  Perhaps allowing them would cover a significant
> portion of string comparisons, but I'm not sure how to determine
> whether an operator is = or != in a generic way.  We might have to
> hold a list of OIDs of collation-safe operators/functions until we
> support ROUTINE MAPPING or something like that...  Anyway, I'll fix
> pgsql_fdw to allow = and <> for character types.

I believe that this covers a significant percentage of real-world cases.
I'd think that every built-in operator named "=" or "<>" could be
pushed down.
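
That rule could be checked quite cheaply.  A sketch of the test being
proposed, assuming "built-in" means an OID below the boundary where
user-defined objects start (FirstNormalObjectId, 16384, in PostgreSQL);
the function name is illustrative, not actual pgsql_fdw code:

```python
FIRST_NORMAL_OBJECT_ID = 16384  # first OID assigned to user-defined objects

def is_pushdown_safe(opname, opno):
    """Candidate rule from the discussion: an operator is
    collation-safe to push down if it is built in (OID below the
    user-object boundary) and is named "=" or "<>"."""
    return opno < FIRST_NORMAL_OBJECT_ID and opname in ("=", "<>")

print(is_pushdown_safe("=", 98))     # True  (a built-in equality operator)
print(is_pushdown_safe("~~", 1209))  # False (LIKE is collation-sensitive)
print(is_pushdown_safe("=", 20000))  # False (user-defined, semantics unknown)
```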

Laurenz Albe

Sent via pgsql-hackers mailing list