Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-11 Thread Shigeru HANADA
Hi all,

(2012/03/07 3:39), Tom Lane wrote:
 A bigger issue with postgresql_fdw_validator is that it supposes that
 the core backend is authoritative as to what options libpq supports,
 which is bad design on its face.  It would be much more sensible for
 dblink to be asking libpq what options libpq supports, say via
 PQconndefaults().
 
 We might find that we have to leave postgresql_fdw_validator as-is
 for backwards compatibility reasons (in particular, being able to load
 existing FDW definitions) but I think we should migrate away from using
 it.

In the discussion of pgsql_fdw, which was proposed for 9.2, several
issues with postgresql_fdw_validator were pointed out.

* The name postgresql_fdw_validator conflicts with the natural name of
an FDW for PostgreSQL that follows the naming convention of the other
FDWs.
* dblink depends on postgresql_fdw_validator.
* postgresql_fdw_validator assumes that libpq supports certain
particular options.

One way to resolve these issues is to add dblink's own validator, which
assumes little about libpq, and to deprecate postgresql_fdw_validator.

* Add dblink_fdw_validator to contrib/dblink, which is similar to
postgresql_fdw_validator but assumes less about libpq.
* Add dblink_fdw as the default FDW of dblink, which uses
dblink_fdw_validator, and recommend its use.  This would steer users
away from using postgresql_fdw_validator with dblink.
* Mention in the CREATE FOREIGN DATA WRAPPER documentation that
postgresql_fdw_validator might become obsolete in a future release.

To make the behavior of dblink_fdw_validator similar to that of the
current postgresql_fdw_validator, we need to assume that libpq supports
the user option, allow user and secret options only in USER MAPPING
options, allow the others only in SERVER options, and reject all debug
options.  IMO this is not an unreasonable assumption.
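As a rough illustration of the direction (this is not the actual patch;
the function name and structure below are invented), a validator can ask
libpq for the set of supported options via PQconndefaults() and classify
them by their dispchar instead of hard-coding a list:

```c
/* Hypothetical sketch of a libpq-driven option check, not the proposed
 * dblink_fdw_validator itself.  PQconninfoOption's dispchar contains
 * 'D' for debug options and '*' for secret ones (e.g. password). */
#include "postgres.h"

#include <string.h>

#include "libpq-fe.h"

/* Returns true if "keyword" is a non-debug libpq connection option;
 * *is_secret tells the caller whether it belongs in USER MAPPING. */
static bool
is_valid_libpq_option(const char *keyword, bool *is_secret)
{
	PQconninfoOption *options = PQconndefaults();
	bool		found = false;

	for (PQconninfoOption *opt = options; opt && opt->keyword; opt++)
	{
		if (strchr(opt->dispchar, 'D'))
			continue;			/* reject all debug options */
		if (strcmp(opt->keyword, keyword) == 0)
		{
			/* secret options ("*") should be USER MAPPING options */
			*is_secret = (strchr(opt->dispchar, '*') != NULL);
			found = true;
			break;
		}
	}
	PQconninfoFree(options);
	return found;
}
```

A real validator would additionally look at the catalog context OID
(SERVER vs. USER MAPPING) and treat the user option and secret options
as USER MAPPING options, per the assumption above.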

Is this proposal reasonable?  Any comments and questions are welcome.

Regards,
-- 
Shigeru HANADA

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-09 Thread Thom Brown
2012/4/9 Shigeru HANADA shigeru.han...@gmail.com:
    1) connect to the server at the beginning of the local query
    2) execute EXPLAIN for foreign table foo
    3) execute EXPLAIN for foreign table bar
    4) execute actual query for foreign table foo
    5) execute actual query for foreign table bar
    6) disconnect from the server at the end of the local query

 If the connection breaks between 4) and 5) and an immediate reconnect
 succeeds, the retrieved results for foo and bar might be inconsistent
 from the viewpoint of transaction isolation.

 In the current implementation, the next local query that references a
 foreign table on the failed server tries to reconnect.

How would this apply to the scenario where you haven't even begun a
transaction yet?  There's no risk of inconsistency if the connection
is lost before the first command can execute, so why fail in such a
case?  Isn't there a line in the sand we can draw where we say that if
we have passed it, we just die, otherwise we try to reconnect as
there's no risk of undesirable results?

 Also I'm not particularly keen on the message provided to the user in
 this event:

 ERROR:  could not execute EXPLAIN for cost estimation
 DETAIL:  FATAL:  terminating connection due to administrator command
 FATAL:  terminating connection due to administrator command

 There's no explanation of what the administrator command was, and I
 suspect this is really just an "I don't know what's happened here"
 condition.  I don't think we should reach that point.

 That FATAL message is returned by the remote backend's ProcessInterrupts()
 during some administrator commands, such as immediate shutdown or
 pg_terminate_backend().  If the remote backend died of a fast shutdown or
 SIGKILL, no error message is available (see the sample below).

 postgres=# select * From pgsql_branches ;
 ERROR:  could not execute EXPLAIN for cost estimation
 DETAIL:
 HINT:  SELECT bid, bbalance, filler FROM public.pgbench_branches

 I agree that the message is confusing.  How about showing a message like
 "pgsql_fdw connection failure on servername", together with the remote
 error message, for such cases?  It can be achieved by adding an extra
 check of the connection status right after PQexec()/PQexecParams().
 Although some word polishing would be required :)

 postgres=# select * from pgsql_branches ;
 ERROR:  pgsql_fdw connection failure on subaru_pgbench
 DETAIL:  FATAL:  terminating connection due to administrator command
 FATAL:  terminating connection due to administrator command

Yes, that would be an improvement.

-- 
Thom



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-08 Thread Shigeru HANADA
(2012/04/08 5:19), Thom Brown wrote:
 2012/4/7 Shigeru HANADAshigeru.han...@gmail.com:
 I've updated pgsql_fdw so that it can collect statistics from foreign
 data with new FDW API.
 
 I notice that if you restart the remote server, the connection is
 broken, but the client doesn't notice this until it goes to fire off
 another command.  Should there be an option to automatically
 re-establish the connection upon noticing the connection has dropped,
 and issue a NOTICE that it had done so?

Hm, I'd prefer reporting the connection failure and aborting the local
transaction, because reconnecting to the server would break consistency
among the results that come from multiple foreign tables.  A server
shutdown (or other trouble, e.g. a network failure) might happen at
various points in the sequence of remote queries (or during sampling in
ANALYZE).  For example, when we execute a local query which contains two
foreign tables, foo and bar, the sequence of libpq activity would look
like this.

1) connect to the server at the beginning of the local query
2) execute EXPLAIN for foreign table foo
3) execute EXPLAIN for foreign table bar
4) execute actual query for foreign table foo
5) execute actual query for foreign table bar
6) disconnect from the server at the end of the local query

If the connection breaks between 4) and 5) and an immediate reconnect
succeeds, the retrieved results for foo and bar might be inconsistent
from the viewpoint of transaction isolation.

In the current implementation, the next local query that references a
foreign table on the failed server tries to reconnect.

 Also I'm not particularly keen on the message provided to the user in
 this event:
 
 ERROR:  could not execute EXPLAIN for cost estimation
 DETAIL:  FATAL:  terminating connection due to administrator command
 FATAL:  terminating connection due to administrator command
 
 There's no explanation of what the administrator command was, and I
 suspect this is really just an "I don't know what's happened here"
 condition.  I don't think we should reach that point.

That FATAL message is returned by the remote backend's ProcessInterrupts()
during some administrator commands, such as immediate shutdown or
pg_terminate_backend().  If the remote backend died of a fast shutdown or
SIGKILL, no error message is available (see the sample below).

postgres=# select * From pgsql_branches ;
ERROR:  could not execute EXPLAIN for cost estimation
DETAIL:
HINT:  SELECT bid, bbalance, filler FROM public.pgbench_branches

I agree that the message is confusing.  How about showing a message like
"pgsql_fdw connection failure on servername", together with the remote
error message, for such cases?  It can be achieved by adding an extra
check of the connection status right after PQexec()/PQexecParams().
Although some word polishing would be required :)

postgres=# select * from pgsql_branches ;
ERROR:  pgsql_fdw connection failure on subaru_pgbench
DETAIL:  FATAL:  terminating connection due to administrator command
FATAL:  terminating connection due to administrator command

This should make it clear to users that the trouble is on the remote side.
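The extra check mentioned above could look roughly like this.  This is a
sketch only: servername, conn and sql stand for whatever pgsql_fdw
already has at hand at that point, and the ereport wording is merely
illustrative.

```c
/* Hypothetical sketch: after PQexec()/PQexecParams(), distinguish a
 * lost connection from an ordinary remote error so the user can see
 * which foreign server failed.  Not the actual patch. */
PGresult   *res = PQexec(conn, sql);

if (PQresultStatus(res) != PGRES_TUPLES_OK)
{
	char	   *remote_msg = pstrdup(PQerrorMessage(conn));

	PQclear(res);
	if (PQstatus(conn) == CONNECTION_BAD)
		ereport(ERROR,
				(errcode(ERRCODE_CONNECTION_FAILURE),
				 errmsg("pgsql_fdw connection failure on \"%s\"",
						servername),
				 errdetail("%s", remote_msg)));
	ereport(ERROR,
			(errmsg("could not execute EXPLAIN for cost estimation"),
			 errdetail("%s", remote_msg),
			 errhint("%s", sql)));
}
```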

Regards,
-- 
Shigeru HANADA



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-07 Thread Thom Brown
2012/4/7 Shigeru HANADA shigeru.han...@gmail.com:
 I've updated pgsql_fdw so that it can collect statistics from foreign
 data with new FDW API.

I notice that if you restart the remote server, the connection is
broken, but the client doesn't notice this until it goes to fire off
another command.  Should there be an option to automatically
re-establish the connection upon noticing the connection has dropped,
and issue a NOTICE that it had done so?

Also I'm not particularly keen on the message provided to the user in
this event:

ERROR:  could not execute EXPLAIN for cost estimation
DETAIL:  FATAL:  terminating connection due to administrator command
FATAL:  terminating connection due to administrator command

There's no explanation of what the administrator command was, and I
suspect this is really just an "I don't know what's happened here"
condition.  I don't think we should reach that point.

-- 
Thom



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-06 Thread Shigeru HANADA
(2012/04/06 1:29), Tom Lane wrote:
 Albe Laurenzlaurenz.a...@wien.gv.at  writes:
 Maybe the FDW API could be extended so that foreign data wrappers
 can provide a random sample to avoid a full table scan.
 
 The one thing that seems pretty clear from this discussion is that one
 size doesn't fit all.  I think we really ought to provide a hook so that
 the FDW can determine whether ANALYZE applies to its foreign tables at
 all, and how to obtain the sample rows if it does.

 Since we've already whacked the FDW API around in incompatible ways for
 9.2, now is probably a good time to add that.  I'm inclined to say this
 should happen whether or not we accept any of the currently proposed
 patches for 9.2, because if the hook is there it will provide a way for
 people to experiment with foreign-table ANALYZE operations outside of
 core.

To support foreign-table ANALYZE by adding a new hook, we would need a
mechanism (or at least documented guidelines) to manage the chain of
hook functions, because such a hook might be used by multiple FDWs (or
other plug-ins) at the same time.  A wrongly written plug-in can easily
break the hook chain.  We might need to provide a register/unregister
API for this hook point, like RegisterResourceReleaseCallback, and call
each registered function until one of them processes the request.  Is
there any other hook point with a similar issue?
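The register-and-call-until-handled pattern described above can be
sketched in a self-contained way.  All names below are invented for
illustration (modeled loosely on RegisterResourceReleaseCallback-style
APIs); this is not PostgreSQL source code.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef bool (*AnalyzeCallback) (const char *relname, void *arg);

typedef struct CallbackEntry
{
	AnalyzeCallback func;
	void	   *arg;
	struct CallbackEntry *next;
} CallbackEntry;

static CallbackEntry *callback_chain = NULL;

/* Prepend a callback.  The chain head is owned by the core, so one
 * plug-in cannot accidentally drop callbacks registered by another. */
static void
register_analyze_callback(AnalyzeCallback func, void *arg)
{
	CallbackEntry *e = malloc(sizeof(CallbackEntry));

	e->func = func;
	e->arg = arg;
	e->next = callback_chain;
	callback_chain = e;
}

/* Unlink a callback regardless of its position in the chain. */
static void
unregister_analyze_callback(AnalyzeCallback func, void *arg)
{
	for (CallbackEntry **p = &callback_chain; *p; p = &(*p)->next)
	{
		if ((*p)->func == func && (*p)->arg == arg)
		{
			CallbackEntry *dead = *p;

			*p = dead->next;
			free(dead);
			return;
		}
	}
}

/* Call each callback until one reports that it handled the request. */
static bool
run_analyze_callbacks(const char *relname)
{
	for (CallbackEntry *e = callback_chain; e; e = e->next)
	{
		if (e->func(relname, e->arg))
			return true;
	}
	return false;
}

/* Example FDW callback: claims tables whose name starts with arg. */
static bool
pgsql_fdw_analyze_cb(const char *relname, void *arg)
{
	const char *prefix = arg;

	return strncmp(relname, prefix, strlen(prefix)) == 0;
}
```

The cost of this flexibility is exactly the management burden described
above: every plug-in must unregister itself on unload, and ordering
between plug-ins is first-come-first-served.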

Another concern is the place where we hook into the process of ANALYZE.
IOW, how much of ANALYZE should be overridable?  Replacing analyze_rel
or do_analyze_rel wholly would require plug-ins to copy most of the code
of the original function in order to implement the hook function.  From
the perspective of an FDW author, the row-sampling function
(acquire_sample_rows) seems a handy place to hook.

Regards,
-- 
Shigeru HANADA



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-06 Thread Kyotaro HORIGUCHI
Excuse me for cutting in,

2012/4/6 Shigeru HANADA shigeru.han...@gmail.com:
 To support foreign-table ANALYZE by adding a new hook, we would need a
 mechanism (or at least documented guidelines) to manage the chain of
 hook functions, because such a hook might be used by multiple FDWs (or
 other plug-ins) at the same time.  A wrongly written plug-in can easily
 break the hook chain.  We might need to provide a register/unregister
 API for this hook point, like RegisterResourceReleaseCallback, and call
 each registered function until one of them processes the request.  Is
 there any other hook point with a similar issue?

+1

The plain hook mechanism in PostgreSQL is, I think, meant to hang a
bunch of faceless callbacks that are registered, unregistered and
called all together.  It is not a good fit for managing individual
callbacks which may be registered or unregistered in arbitrary order
and are preferably called separately.

Even if we provided RegisterResourceReleaseCallback-like machinery, it
would be far more complicated than an additional field in FdwRoutine
plus some analyze_rel() modifications on the core side, and checking
whether it is really its turn to act would be tedious work on the
plug-in side.

Of course, I don't expect many fdw-analyze callbacks to be registered
at once; even two seems unlikely.

The current modifications in analyze_rel() do not look definitive, but
they do not look so bad, and seem more stable than a simple hook point
that would be abandoned before long.

 Another concern is the place where we hook into the process of ANALYZE.
 IOW, how much of ANALYZE should be overridable?  Replacing analyze_rel
 or do_analyze_rel wholly would require plug-ins to copy most of the
 code of the original function in order to implement the hook function.
 From the perspective of an FDW author, the row-sampling function
 (acquire_sample_rows) seems a handy place to hook.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-06 Thread Tom Lane
Shigeru HANADA shigeru.han...@gmail.com writes:
 (2012/04/06 1:29), Tom Lane wrote:
 The one thing that seems pretty clear from this discussion is that one
 size doesn't fit all.  I think we really ought to provide a hook so that
 the FDW can determine whether ANALYZE applies to its foreign tables at
 all, and how to obtain the sample rows if it does.

 To support foreign-table ANALYZE by adding a new hook, we would need a
 mechanism (or at least documented guidelines) to manage the chain of
 hook functions, because such a hook might be used by multiple FDWs (or
 other plug-ins) at the same time.  A wrongly written plug-in can easily
 break the hook chain.

Sorry, I used the word "hook" loosely, not with the intention of meaning
a global function pointer.  We should of course implement this with
another per-FDW method pointer, as in the postgresql-analyze patch
series.

 Another concern is the place where we hook into the process of ANALYZE.
 IOW, how much of ANALYZE should be overridable?

Not much, IMO.  The FDW should be able to decide whether or not to
analyze a particular table, and it should be in charge of implementing
its own version of acquire_sample_rows, but no more than that.  In
particular I do not like the specific way it's done in the v7 patch
(I've not looked at v8 yet) because the interposed logic has a
hard-wired assumption that foreign tables do not have inheritance
children.  I think that assumption has a life expectancy measured in
months at most, and I don't want to have to try to fix every FDW when
it changes.  But I think we can easily revise the hook details to fix
that, and I'm hoping to get that done today.
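A per-FDW method along those lines could take roughly the following
shape.  This is a sketch of the direction only; the signature that was
actually committed for 9.2 may differ in detail.

```c
/* Sampling function supplied by the FDW; fills rows[] with up to
 * targrows sample rows and reports the totals, mirroring the contract
 * of the core acquire_sample_rows. */
typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
									  HeapTuple *rows, int targrows,
									  double *totalrows,
									  double *totaldeadrows);

/* New FdwRoutine method: return false if ANALYZE does not apply to
 * this foreign table at all; otherwise set *func to the FDW's own
 * sampler and *totalpages to a table-size estimate. */
typedef bool (*AnalyzeForeignTable_function) (Relation relation,
											  AcquireSampleRowsFunc *func,
											  BlockNumber *totalpages);
```

Because the method pointer lives in FdwRoutine, each FDW is consulted
only for its own tables, which avoids the chain-management problem a
global hook would have.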

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-06 Thread Shigeru Hanada
On Fri, Apr 6, 2012 at 11:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Another concern is the place where we hook into the process of ANALYZE.
 IOW, how much of ANALYZE should be overridable?

 Not much, IMO.  The FDW should be able to decide whether or not to
 analyze a particular table, and it should be in charge of implementing
 its own version of acquire_sample_rows, but no more than that.

ISTM that we have rough consensus about what an FDW should do for an
ANALYZE request.  The FDW should choose either of:
a) acquire sample rows and return them to the backend
b) tell the backend that the FDW has nothing to do for the request

 In
 particular I do not like the specific way it's done in the v7 patch
 (I've not looked at v8 yet) because the interposed logic has a
 hard-wired assumption that foreign tables do not have inheritance
 children.  I think that assumption has a life expectancy measured in
 months at most, and I don't want to have to try to fix every FDW when
 it changes.  But I think we can easily revise the hook details to fix
 that, and I'm hoping to get that done today.

I'll try implementing the design you suggested.

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-06 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 On Fri, Apr 6, 2012 at 11:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 In
 particular I do not like the specific way it's done in the v7 patch
 (I've not looked at v8 yet) because the interposed logic has a
 hard-wired assumption that foreign tables do not have inheritance
 children.  I think that assumption has a life expectancy measured in
 months at most, and I don't want to have to try to fix every FDW when
 it changes.  But I think we can easily revise the hook details to fix
 that, and I'm hoping to get that done today.

 I'll try implementing the design you suggested.

I've already got it fixed up ...

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-05 Thread Etsuro Fujita

At 22:11 12/03/28 +0900, Shigeru HANADA wrote:

ANALYZE support for foreign tables is proposed by Fujita-san in current
CF, so I'd like to push it.


I updated the patch to the latest HEAD.  Please find attached a patch.

Best regards,
Etsuro Fujita


postgresql-analyze-v7.patch
Description: Binary data



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-05 Thread Etsuro Fujita
Sorry, I sent this email without noticing Hanada-san's earlier email.
So, please look at Hanada-san's post.

Best regards,
Etsuro Fujita

-Original Message-
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Etsuro Fujita
Sent: Thursday, April 05, 2012 9:36 PM
To: Shigeru HANADA; Albe Laurenz
Cc: Tom Lane; Kevin Grittner; Robert Haas; PostgreSQL-development; Kohei
KaiGai; Martijn van Oosterhout; Hitoshi Harada
Subject: Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

At 22:11 12/03/28 +0900, Shigeru HANADA wrote:
ANALYZE support for foreign tables is proposed by Fujita-san in current 
CF, so I'd like to push it.

I updated the patch to the latest HEAD.  Please find attached a patch.

Best regards,
Etsuro Fujita




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-05 Thread Tom Lane
Albe Laurenz laurenz.a...@wien.gv.at writes:
 I think that the price of a remote table scan is something
 we should be willing to pay for good local statistics.
 And there is always the option not to analyze the foreign
 table if you are not willing to pay that price.

 Maybe the FDW API could be extended so that foreign data wrappers
 can provide a random sample to avoid a full table scan.

The one thing that seems pretty clear from this discussion is that one
size doesn't fit all.  I think we really ought to provide a hook so that
the FDW can determine whether ANALYZE applies to its foreign tables at
all, and how to obtain the sample rows if it does.

Since we've already whacked the FDW API around in incompatible ways for
9.2, now is probably a good time to add that.  I'm inclined to say this
should happen whether or not we accept any of the currently proposed
patches for 9.2, because if the hook is there it will provide a way for
people to experiment with foreign-table ANALYZE operations outside of
core.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-04 Thread Albe Laurenz
Shigeru HANADA wrote:
 During a foreign scan, type input functions are used to convert
 the text representation of values.  If a foreign table is
misconfigured,
 you can get error messages from these functions, like:

 ERROR:  invalid input syntax for type double precision: etwas
 or
 ERROR:  value too long for type character varying(3)

 It might be nice for finding problems if the message were
 something like:

 ERROR:  cannot convert data in foreign scan of tablename, column
col
 in row 42
 DETAIL:  ERROR:  value too long for type character varying(3)
 
 Agreed.  How about showing context information with errcontext() in
 addition to the main error message?  Of course, identifiers are quoted
 if necessary.  This way doesn't need an additional PG_TRY block, so the
 overhead would be relatively cheap.

 postgres=# SELECT * FROM ft1 WHERE c1 = 1;  -- ERROR
 ERROR:  invalid input syntax for integer: 1970-01-02 17:00:00+09
 CONTEXT:  column c4 of foreign table ft1
 
 Showing the index of the row seems overkill, because the most common
 cause of this kind of error is wrong configuration, as you say, and
 users would be able to address the issue without knowing which record
 caused the error.

Agreed.  I think that is a better approach than what I suggested.
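The errcontext() approach can be sketched with an error context
callback wrapped around the type input call.  The names and structure
below are illustrative, not the committed patch:

```c
/* Hypothetical sketch: while a type input function runs, an error
 * context callback adds a CONTEXT line naming the column and table. */
typedef struct ConversionLocation
{
	const char *relname;		/* foreign table name */
	const char *colname;		/* column being converted */
} ConversionLocation;

static void
conversion_error_callback(void *arg)
{
	ConversionLocation *loc = arg;

	errcontext("column \"%s\" of foreign table \"%s\"",
			   loc->colname, loc->relname);
}

/* ... around the conversion of one column value: */
ErrorContextCallback errcallback;
ConversionLocation loc = {relname, colname};

errcallback.callback = conversion_error_callback;
errcallback.arg = &loc;
errcallback.previous = error_context_stack;
error_context_stack = &errcallback;

value = InputFunctionCall(&flinfo, valstr, typioparam, typmod);

error_context_stack = errcallback.previous;
```

As noted above, this costs only a couple of pointer assignments per
value, with no extra PG_TRY block.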

 As stated previously, I don't think that using local stats on
 foreign tables is a win.  The other patches work fine for me, and
 I'd be happy if that could go into 9.2.
 
 I have the opposite opinion on this issue, because we need to do some
 of the filtering on the local side.  We can leave cost/row estimation
 for WHERE expressions which are pushed down to the remote side, but we
 need the selectivity of the extra filtering done on the local side.
 For that purpose, having local stats of foreign data seems reasonable
 and useful.
 
 Of course, it has the downside that we need to execute an explicit
 ANALYZE for foreign tables, which would cause a full sequential scan
 on the remote tables, in addition to the ANALYZE done on the remote
 side as usual maintenance work.

This approach is much better and does not suffer from the
limitations the original analyze patch had.

I think that the price of a remote table scan is something
we should be willing to pay for good local statistics.
And there is always the option not to analyze the foreign
table if you are not willing to pay that price.

Maybe the FDW API could be extended so that foreign data wrappers
can provide a random sample to avoid a full table scan.

 Attached patch contains changes below:
 
 pgsql_fdw_v19.patch
   - show context of data conversion error
   - move codes for fetch_count FDW option to option.c
 (refactoring)
 pgsql_fdw_pushdown_v12.patch
   - make deparseExpr function static (refactoring)
 
 I also attached pgsql_fdw_analyze for only testing the effect of local
 statistics.  It contains both backend's ANALYZE command support and
 pgsql_fdw's ANALYZE support.

I think the idea is promising.

I'll mark the patch as ready for committer.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-04-03 Thread Albe Laurenz
Shigeru HANADA wrote:
 Attached are the latest versions of the pgsql_fdw patches.  Note that
 pgsql_fdw_analyze.patch is only for testing the effect of local
 statistics.  Please apply the patches in the order below:
 
 (1) pgsql_fdw_v18.patch
 (2) pgsql_fdw_pushdown_v11.patch
 (3) pgsql_fdw_analyze.patch (if you want to try local stats)

Since Kohei KaiGai hasn't posted a review, I'll have a go.

The patch applies and compiles fine without warnings and passes
regression tests.

I found bugs in the analyze functions:

In pgsql_fdw_analyze:
nspname and relname are not initialized to NULL.
This causes failure if the corresponding option is not set
on the foreign table.

In store_remote_stats:
atttypmod is initialized to 0 and never changed.
This causes the following error for columns of type interval:
ERROR:  unrecognized interval typmod: 0


During a foreign scan, type input functions are used to convert
the text representation of values.  If a foreign table is misconfigured,
you can get error messages from these functions, like:

ERROR:  invalid input syntax for type double precision: etwas
or
ERROR:  value too long for type character varying(3)

It might be nice for finding problems if the message were
something like:

ERROR:  cannot convert data in foreign scan of tablename, column col
in row 42
DETAIL:  ERROR:  value too long for type character varying(3)


As stated previously, I don't think that using local stats on
foreign tables is a win.  The other patches work fine for me, and
I'd be happy if that could go into 9.2.


Once the two bugs above are fixed, should I mark it ready for
committer?

Yours,
Laurenz Albe





Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-30 Thread Albe Laurenz
I wrote:
 How about getting the row-count estimate by executing EXPLAIN for the
 fully-fledged remote query (IOW, containing the pushed-down WHERE
 clause), and estimating the selectivity of the local filter on the
 basis of the statistics generated by the FDW via do_analyze_rel() and
 an FDW-specific sampling function?  In this design we would get quite
 accurate row estimates, because we can consider the filtering done on
 each side separately, though it requires an expensive remote EXPLAIN
 for each possible path.
 
 That sounds nice.

... but it still suffers from the problems of local statistics
for remote tables I pointed out.

I think that these shortcomings are not justified by the gain of one
fewer client-server round trip during planning.  I'd prefer it if
pgsql_fdw were not dependent on remote statistics stored in the
local database.

Yours,
Laurenz Albe




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-28 Thread Thom Brown
2012/3/28 Shigeru HANADA shigeru.han...@gmail.com:
 (2012/03/27 20:32), Thom Brown wrote:
 2012/3/26 Shigeru HANADA shigeru.han...@gmail.com:
 * pgsql_fdw_v17.patch
     - Adds pgsql_fdw as contrib module
 * pgsql_fdw_pushdown_v10.patch
     - Adds WHERE push down capability to pgsql_fdw
 * pgsql_fdw_analyze_v1.patch
     - Adds pgsql_fdw_analyze function for updating local stats

 Hmm... I've applied them using the latest Git master, and in the order
 specified, but I can't build the contrib module.  What am I doing
 wrong?

 I'm sorry, but I couldn't reproduce the errors with this procedure.

 $ git checkout master
 $ git pull upstream master      # make master branch up-to-date
 $ git clean -fd                 # remove files for other branches
 $ make clean                    # just in case
 $ patch -p1 < /path/to/pgsql_fdw_v17.patch
 $ patch -p1 < /path/to/pgsql_fdw_pushdown_v10.patch
 $ patch -p1 < /path/to/pgsql_fdw_analyze_v1.patch
 $ make                          # make core first for libpq et al.
 $ cd contrib/pgsql_fdw
 $ make                          # pgsql_fdw

 Please try git clean and make clean, if you have not.
 FWIW, I'm using GNU Make 3.82 and gcc 4.6.0 on Fedora 15.

I had done a make clean, git stash and git clean -f, but I didn't try
git clean -fd.  For some reason it's working now.

Thanks

-- 
Thom



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-28 Thread Thom Brown
On 28 March 2012 08:13, Thom Brown t...@linux.com wrote:
 2012/3/28 Shigeru HANADA shigeru.han...@gmail.com:
 (2012/03/27 20:32), Thom Brown wrote:
 2012/3/26 Shigeru HANADA shigeru.han...@gmail.com:
 * pgsql_fdw_v17.patch
     - Adds pgsql_fdw as contrib module
 * pgsql_fdw_pushdown_v10.patch
     - Adds WHERE push down capability to pgsql_fdw
 * pgsql_fdw_analyze_v1.patch
     - Adds pgsql_fdw_analyze function for updating local stats

 Hmm... I've applied them using the latest Git master, and in the order
 specified, but I can't build the contrib module.  What am I doing
 wrong?

 I'm sorry, but I couldn't reproduce the errors with this procedure.

 $ git checkout master
 $ git pull upstream master      # make master branch up-to-date
 $ git clean -fd                 # remove files for other branches
 $ make clean                    # just in case
 $ patch -p1 < /path/to/pgsql_fdw_v17.patch
 $ patch -p1 < /path/to/pgsql_fdw_pushdown_v10.patch
 $ patch -p1 < /path/to/pgsql_fdw_analyze_v1.patch
 $ make                          # make core first for libpq et al.
 $ cd contrib/pgsql_fdw
 $ make                          # pgsql_fdw

 Please try git clean and make clean, if you have not.
 FWIW, I'm using GNU Make 3.82 and gcc 4.6.0 on Fedora 15.

 I had done a make clean, git stash and git clean -f, but I didn't try
 git clean -fd.  For some reason it's working now.

Hmm.. I'm getting some rather odd errors though:

thom@test=# select * from stuff limit 3 ;
DEBUG:  StartTransactionCommand
DEBUG:  StartTransaction
DEBUG:  name: unnamed; blockState:   DEFAULT; state: INPROGR,
xid/subid/cid: 0/1/0, nestlvl: 1, children:
LOG:  statement: select * from stuff limit 3 ;
DEBUG:  relid=16402 fetch_count=1
DEBUG:  Remote SQL: SELECT id, stuff, age FROM public.stuff
DEBUG:  starting remote transaction with START TRANSACTION ISOLATION
LEVEL REPEATABLE READ
ERROR:  could not declare cursor
DETAIL:  ERROR:  relation public.stuff does not exist
LINE 1: ...or_6 SCROLL CURSOR FOR SELECT id, stuff, age FROM public.stu...
 ^

HINT:  DECLARE pgsql_fdw_cursor_6 SCROLL CURSOR FOR SELECT id, stuff,
age FROM public.stuff

The table in question indeed doesn't exist, but I'm confused as to why
the user is being exposed to such messages.

And more troublesome:

(local select on foreign server):

test=# select * from stuff limit 3 ;
 id |  thing   | age
+--+-
  1 | STANDARD |  30
  2 | STANDARD |  29
  3 | STANDARD |  12
(3 rows)

(foreign select on foreign server):

thom@test=# select * from stuff limit 3 ;
 id |  stuff  | age
+-+-
  1 | (1,STANDARD,30) |  30
  2 | (2,STANDARD,29) |  29
  3 | (3,STANDARD,12) |  12
(3 rows)


The row expansion seems to incorrectly rewrite the column without a
table prefix if both column and table name are identical.

-- 
Thom



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-28 Thread Thom Brown
On 28 March 2012 08:39, Thom Brown t...@linux.com wrote:
 On 28 March 2012 08:13, Thom Brown t...@linux.com wrote:
 2012/3/28 Shigeru HANADA shigeru.han...@gmail.com:
 (2012/03/27 20:32), Thom Brown wrote:
 2012/3/26 Shigeru HANADA shigeru.han...@gmail.com:
 * pgsql_fdw_v17.patch
     - Adds pgsql_fdw as contrib module
 * pgsql_fdw_pushdown_v10.patch
     - Adds WHERE push down capability to pgsql_fdw
 * pgsql_fdw_analyze_v1.patch
     - Adds pgsql_fdw_analyze function for updating local stats

 Hmm... I've applied them using the latest Git master, and in the order
 specified, but I can't build the contrib module.  What am I doing
 wrong?

 I'm sorry, but I couldn't reproduce the errors with this procedure.

 $ git checkout master
 $ git pull upstream master      # make master branch up-to-date
 $ git clean -fd                 # remove files for other branches
 $ make clean                    # just in case
 $ patch -p1 < /path/to/pgsql_fdw_v17.patch
 $ patch -p1 < /path/to/pgsql_fdw_pushdown_v10.patch
 $ patch -p1 < /path/to/pgsql_fdw_analyze_v1.patch
 $ make                          # make core first for libpq et al.
 $ cd contrib/pgsql_fdw
 $ make                          # pgsql_fdw

 Please try git clean and make clean, if you have not.
 FWIW, I'm using GNU Make 3.82 and gcc 4.6.0 on Fedora 15.

 I had done a make clean, git stash and git clean -f, but I didn't try
 git clean -fd.  For some reason it's working now.

 Hmm.. I'm getting some rather odd errors though:

 thom@test=# select * from stuff limit 3 ;
 DEBUG:  StartTransactionCommand
 DEBUG:  StartTransaction
 DEBUG:  name: unnamed; blockState:       DEFAULT; state: INPROGR,
 xid/subid/cid: 0/1/0, nestlvl: 1, children:
 LOG:  statement: select * from stuff limit 3 ;
 DEBUG:  relid=16402 fetch_count=1
 DEBUG:  Remote SQL: SELECT id, stuff, age FROM public.stuff
 DEBUG:  starting remote transaction with START TRANSACTION ISOLATION
 LEVEL REPEATABLE READ
 ERROR:  could not declare cursor
 DETAIL:  ERROR:  relation public.stuff does not exist
 LINE 1: ...or_6 SCROLL CURSOR FOR SELECT id, stuff, age FROM public.stu...
                                                             ^

 HINT:  DECLARE pgsql_fdw_cursor_6 SCROLL CURSOR FOR SELECT id, stuff,
 age FROM public.stuff

 The table in question indeed doesn't exist, but I'm confused as to why
 the user is being exposed to such messages.

 And more troublesome:

 (local select on foreign server):

 test=# select * from stuff limit 3 ;
  id |  thing   | age
 +--+-
  1 | STANDARD |  30
  2 | STANDARD |  29
  3 | STANDARD |  12
 (3 rows)

 (foreign select on foreign server):

 thom@test=# select * from stuff limit 3 ;
  id |      stuff      | age
 +-+-
  1 | (1,STANDARD,30) |  30
  2 | (2,STANDARD,29) |  29
  3 | (3,STANDARD,12) |  12
 (3 rows)


 The row expansion seems to incorrectly rewrite the column without a
 table prefix if both column and table name are identical.

Actually, correction.  The foreign table definition names the column
the same as the table.  I accidentally omitted the 'thing' column and
instead substituted it with the table name in the definition.

Original table definition on foreign server:

create table stuff (id serial primary key, thing text, age int);

Foreign table definition:

create foreign table stuff (id int not null, stuff text, age int) server pgsql;

So it appears I'm allowed to use the table name as a column name in this
context.  Please disregard my complaint.

-- 
Thom



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-28 Thread Shigeru HANADA
(2012/03/28 16:18), Albe Laurenz wrote:
 I wrote:
 How about getting # of rows estimate by executing EXPLAIN for
 fully-fledged remote query (IOW, contains pushed-down WHERE clause),
 and
 estimate selectivity of local filter on the basis of the statistics
 which are generated by FDW via do_analyze_rel() and FDW-specific
 sampling function?  In this design, we would be able to use quite
 correct rows estimate because we can consider filtering stuffs done
 on
 each side separately, though it requires expensive remote EXPLAIN for
 each possible path.

 That sounds nice.
 
 ... but it still suffers from the problems of local statistics
 for remote tables I pointed out.

I guess you mean the issues you wrote about in an earlier post, so
I'd like to comment on them.

 - You have an additional maintenance task if you want to keep
   statistics for remote tables accurate.  I understand that this may
   get better in a future release.

I'm not sure that's what you meant, but we need to execute remote
ANALYZE before calling pgsql_fdw_analyze() to keep local statistics
accurate.  IMO DBAs are responsible for executing remote ANALYZE at
appropriate times, so pgsql_fdw_analyze (or a handler function for
ANALYZE) should just collect statistics from the remote side.

 - You depend on the structure of pg_statistic, which means a potential
   incompatibility between server versions.  You can add cases to
   pgsql_fdw_analyze to cater for changes, but that is cumbersome and
 will
   only help for later PostgreSQL versions connecting to earlier ones.

Indeed.  Like pg_dump, pgsql_fdw should be aware of different server
versions if we choose to copy statistics.  Differences in catalog
structure are easy to track and cope with, but if the meanings of values
or the way statistics are calculated change, pgsql_fdw would need
very complex code to convert values from a different version.
I don't know of such an example, but IMO we should assume that statistics
are valid only for the same version (at least the same major version).
After all, I'd prefer collecting sample data in pgsql_fdw and leaving
statistics generation to the local backend.

 - Planning and execution will change (improve, of course) between server
   versions.  The local planner may choose an inferior plan based on a
   wrong assumption of how a certain query can be handled on the remote.

Hmm, I don't worry much about the details of remote planning, because the
remote server would do its best for a query given by pgsql_fdw, and the
local planner would do its best with the given estimates (rows, width and
costs).  One concern is that remote cost factors might differ from local
ones, so an FDW option representing a cost conversion coefficient (1.0
meaning that remote cost has the same weight as local cost) might be useful.
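The cost conversion coefficient idea above can be sketched as follows.  This is an illustration only, not anything pgsql_fdw implements; the function name, the per-row transfer-cost term, and all default values are assumptions made up for the example:

```python
# Sketch: scale a remote cost estimate into local planner units using a
# per-server coefficient, plus a simple charge for shipping result rows.

def local_cost_estimate(remote_total_cost, rows, cost_factor=1.0,
                        per_row_transfer_cost=0.01):
    """cost_factor says how expensive one remote cost unit is relative to
    a local unit (1.0 = same weight); rows * per_row_transfer_cost
    roughly accounts for transferring the result set."""
    return remote_total_cost * cost_factor + rows * per_row_transfer_cost

# A remote server weighted at half the local cost per unit:
print(local_cost_estimate(1000.0, 500, cost_factor=0.5))
```

A DBA could then tune the coefficient per foreign server, much like the existing planner cost GUCs are tuned per installation.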

 - You have no statistics if the foreign table points to a view on the
   remote system.

ISTM that this alone would be reason enough to give up copying remote stats
to the local side.  We provide neither SELECT push-down nor GROUP BY
push-down at present, so users would want to create views which contain
function calls in their SELECT clauses.

 I think that these shortcomings are not justified by the gain of
 one client-server round trip less during planning.  I'd prefer
 if pgsql_fdw were not dependent on remote statistics stored in the
 local database.

I too would prefer that pgsql_fdw not depend fully on statistics of foreign
data, but IMO having statistics of foreign data which were calculated in
the same way as for local data still seems useful for estimating local
filtering.  Even if we have no statistics of foreign data on the local side,
we would still be able to create plans on the basis of the default
selectivity for each expression, the same as for regular tables.
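The default-selectivity fallback mentioned above can be sketched like this.  The shape (multiply a remote row estimate by a per-clause default selectivity) mirrors what the planner does for expressions it cannot analyze; the specific default values here are illustrative assumptions, not PostgreSQL's exact constants:

```python
# Sketch: estimate rows surviving local filtering without any local
# statistics, using default per-clause selectivities.

DEFAULT_SELECTIVITY = {
    "=": 0.005,        # equality: assumed highly selective
    "<": 1.0 / 3.0,    # single inequality
}

def estimate_local_rows(remote_rows, local_clauses):
    """Multiply the remote row estimate by a default selectivity for
    each clause that must be checked locally; clamp to at least 1 row."""
    rows = float(remote_rows)
    for clause_kind in local_clauses:
        rows *= DEFAULT_SELECTIVITY.get(clause_kind, 0.5)
    return max(1, int(rows))

print(estimate_local_rows(10000, ["="]))
```

The point is only that a plan can still be produced without remote statistics; with real statistics the per-clause selectivities would come from the histograms instead of this table.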

Regards,
-- 
Shigeru HANADA



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-28 Thread Shigeru HANADA
(2012/03/28 21:07), Albe Laurenz wrote:
 I found another limitation of this approach:
 pgsql_fdw_analyze() has to run as a user who can update
 pg_statistic, and this user needs a user mapping to a remote
 user who can read pg_statistic.
 
 This is not necessary for normal operation and needs
 to be configured specifically for getting remote statistics.
 This is cumbersome, and people might be unhappy to have to
 create user mappings for highly privileged remote users.

Agreed.  After all, supporting the ANALYZE command for foreign tables seems
the only way to obtain local statistics of foreign data without granting
too many privileges.  ANALYZE is allowed only to the owner of the table
or a superuser, so assuming that the invoker has a valid user mapping for a
remote user who can read the corresponding foreign data seems reasonable.

ANALYZE support for foreign tables is proposed by Fujita-san in the current
CF, so I'd like to push it.

Regards,
-- 
Shigeru HANADA



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-27 Thread Albe Laurenz
Shigeru HANADA wrote:
 I've implemented pgsql_fdw's own deparser and enhanced some features
 since last post.  Please apply attached patches in the order below:

 Changes from previous version
 =============================
 
 1) Don't use remote EXPLAIN for cost/rows estimation, so now the planner
 estimates result rows and costs on the basis of local statistics such as
 pg_class and pg_statistic.  To update local statistics, I added a
 pgsql_fdw_analyze() SQL function which updates the local statistics of a
 foreign table by retrieving remote statistics, such as pg_class and
 pg_statistic, via libpq.  This would make the planning of pgsql_fdw
 simple and fast.  This function can easily be modified to handle the
 ANALYZE command invoked for a foreign table (Fujita-san is proposing this
 as a common feature in another thread).

 2) Defer planning stuff as long as possible to clarify the role of each
 function.  Currently GetRelSize just estimates result rows from local
 statistics, and GetPaths adds only one path which represents a SeqScan on
 the remote side.  As a result of this change, the PgsqlFdwPlanState struct
 is obsolete.

I see the advantage of being able to do all this locally, but
I think there are a lot of downsides too:
- You have an additional maintenance task if you want to keep
  statistics for remote tables accurate.  I understand that this may
  get better in a future release.
- You depend on the structure of pg_statistic, which means a potential
  incompatibility between server versions.  You can add cases to
  pgsql_fdw_analyze to cater for changes, but that is cumbersome and will
  only help for later PostgreSQL versions connecting to earlier ones.
- Planning and execution will change (improve, of course) between server
  versions.  The local planner may choose an inferior plan based on a
  wrong assumption of how a certain query can be handled on the remote.
- You have no statistics if the foreign table points to a view on the
  remote system.

My gut feeling is that planning should be done by the server which
will execute the query.

 3) Implement pgsql_fdw's own deparser which pushes down collation-free
 and immutable expressions in the local WHERE clause.  This means that most
 numeric conditions can be pushed down, but conditions using character
 types cannot.

I understand that this is simple and practical, but it is a pity that
this excludes equality and inequality conditions on strings.
Correct me if I am wrong, but I thought that these work the same
regardless of the collation.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-27 Thread Shigeru HANADA
Thanks for the comments.

(2012/03/27 18:36), Albe Laurenz wrote:
 2) Defer planning stuffs as long as possible to clarify the role of
 each
 function.  Currently GetRelSize just estimates result rows from local
 statistics, and GetPaths adds only one path which represents SeqScan
 on
 remote side.  As result of this change, PgsqlFdwPlanState struct is
 obsolete.
 
 I see the advantage of being able to do all this locally, but
 I think there are a lot of downsides too:
 - You have an additional maintenance task if you want to keep
statistics for remote tables accurate.  I understand that this may
get better in a future release.
 - You depend on the structure of pg_statistic, which means a potential
incompatibility between server versions.  You can add cases to
pgsql_fdw_analyze to cater for changes, but that is cumbersome and
 will
only help for later PostgreSQL versions connecting to earlier ones.
 - Planning and execution will change (improve, of course) between server
versions.  The local planner may choose an inferior plan based on a
wrong assumption of how a certain query can be handled on the remote.
 - You have no statistics if the foreign table points to a view on the
remote system.

Especially given the 2nd and 4th points, generating pg_statistic records
without calling do_analyze_rel() seems impractical in a multi-version
environment.  As you pointed out, I had missed another
semantics-difference problem here.  We would have to use do_analyze_rel()
and a custom sampling function which returns sample rows from the remote
data source if we want to have statistics of foreign data locally.  This
method would be available to most FDWs, but requires some changes in
core.  [I'll comment on Fujita-san's ANALYZE patch about this issue soon.]

 My gut feeling is that planning should be done by the server which
 will execute the query.

Agreed; if the selectivity of both local filtering and remote filtering were
available, we could estimate result rows correctly and choose a better plan.

How about getting the row count estimate by executing EXPLAIN for the
fully-fledged remote query (IOW, containing the pushed-down WHERE clause),
and estimating the selectivity of the local filter on the basis of the
statistics which are generated by the FDW via do_analyze_rel() and an
FDW-specific sampling function?  In this design, we would be able to use
quite accurate row estimates because we can consider the filtering done on
each side separately, though it requires an expensive remote EXPLAIN for
each possible path.

 3) Implement pgsql_fdw's own deparser which pushes down collation-free
 and immutable expressions in local WHERE clause.  This means that most
 of numeric conditions can be pushed down, but conditions using
 character
 types are not.
 
 I understand that this is simple and practical, but it is a pity that
 this excludes equality and inequality conditions on strings.
 Correct me if I am wrong, but I thought that these work the same
 regardless of the collation.

You are right, built-in equality and inequality operators don't cause
collation problems.  Perhaps allowing them would cover significant cases
of string comparison, but I'm not sure how to determine whether an
operator is = or != in a generic way.  We might have to hold a list of OIDs
for collation-safe operators/functions until we support ROUTINE MAPPING
or something like that...  Anyway, I'll fix pgsql_fdw to allow = and !=
for character types.
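The whitelist idea discussed here can be sketched as a tiny decision function.  This is an illustration only, not pgsql_fdw's deparser: operator names stand in for the OID list mentioned above, and the rule shown (collation-free operands are always safe; collatable operands only for equality/inequality) is the one proposed in this thread:

```python
# Sketch: decide whether a comparison can be pushed down to the remote
# server without risking collation-dependent differences in results.

COLLATION_SAFE_OPERATORS = {"=", "<>"}  # stand-in for a list of OIDs

def is_pushdown_safe(op_name, operand_is_collatable):
    """Operands of collation-free types (e.g. numerics) are always safe;
    collatable operands (e.g. text) are safe only for operators whose
    result cannot depend on the collation."""
    if not operand_is_collatable:
        return True
    return op_name in COLLATION_SAFE_OPERATORS

print(is_pushdown_safe("<", operand_is_collatable=True))   # ordering: unsafe
print(is_pushdown_safe("=", operand_is_collatable=True))   # equality: safe
```

A real implementation would key on operator OIDs and check collation sensitivity from the catalogs rather than a hard-coded name set.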

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-27 Thread Thom Brown
2012/3/26 Shigeru HANADA shigeru.han...@gmail.com:
 (2012/03/15 23:06), Shigeru HANADA wrote:
 Although the patches are still WIP, especially in WHERE push-down part,
 but I'd like to post them so that I can get feedback of the design as
 soon as possible.

 I've implemented pgsql_fdw's own deparser and enhanced some features
 since last post.  Please apply attached patches in the order below:

 * pgsql_fdw_v17.patch
    - Adds pgsql_fdw as contrib module
 * pgsql_fdw_pushdown_v10.patch
    - Adds WHERE push down capability to pgsql_fdw
 * pgsql_fdw_analyze_v1.patch
    - Adds pgsql_fdw_analyze function for updating local stats

Hmm... I've applied them using the latest Git master, and in the order
specified, but I can't build the contrib module.  What am I doing
wrong?

It starts off with things like:

../../src/Makefile.global:420: warning: overriding commands for target
`submake-libpq'
../../src/Makefile.global:420: warning: ignoring old commands for
target `submake-libpq'

and later:

pgsql_fdw.c: In function ‘pgsqlGetForeignPlan’:
pgsql_fdw.c:321:2: warning: implicit declaration of function
‘extractRemoteExprs’ [-Wimplicit-function-declaration]
pgsql_fdw.c:321:12: warning: assignment makes pointer from integer
without a cast [enabled by default]
pgsql_fdw.c:362:3: warning: implicit declaration of function
‘deparseWhereClause’ [-Wimplicit-function-declaration]
pgsql_fdw.c: At top level:
pgsql_fdw.c:972:1: error: redefinition of ‘Pg_magic_func’
pgsql_fdw.c:39:1: note: previous definition of ‘Pg_magic_func’ was here

-- 
Thom



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-27 Thread Albe Laurenz
Shigeru HANADA wrote:
 My gut feeling is that planning should be done by the server which
 will execute the query.
 
 Agreed, if the selectivity of both local filtering and remote filtering
 were available, we could estimate result rows correctly and choose a
 better plan.

 How about getting the row count estimate by executing EXPLAIN for the
 fully-fledged remote query (IOW, containing the pushed-down WHERE clause),
 and estimating the selectivity of the local filter on the basis of the
 statistics which are generated by the FDW via do_analyze_rel() and an
 FDW-specific sampling function?  In this design, we would be able to use
 quite accurate row estimates because we can consider the filtering done
 on each side separately, though it requires an expensive remote EXPLAIN
 for each possible path.

That sounds nice.
How would that work with a query that has one condition that could be
pushed down and one that has to be filtered locally?
Would you use the (local) statistics for the full table or can you
somehow account for the fact that rows have already been filtered
out remotely, which might influence the distribution?

 You are right, built-in equality and inequality operators don't cause
 collation problems.  Perhaps allowing them would cover significant cases
 of string comparison, but I'm not sure how to determine whether an
 operator is = or != in a generic way.  We might have to hold a list of
 OIDs for collation-safe operators/functions until we support ROUTINE
 MAPPING or something like that...  Anyway, I'll fix pgsql_fdw to allow
 = and != for character types.

I believe that this covers a significant percentage of real-world cases.
I'd think that every built-in operator named = or <> could
be pushed down.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-27 Thread Shigeru HANADA
(2012/03/27 20:32), Thom Brown wrote:
 2012/3/26 Shigeru HANADA shigeru.han...@gmail.com:
 * pgsql_fdw_v17.patch
 - Adds pgsql_fdw as contrib module
 * pgsql_fdw_pushdown_v10.patch
 - Adds WHERE push down capability to pgsql_fdw
 * pgsql_fdw_analyze_v1.patch
 - Adds pgsql_fdw_analyze function for updating local stats
 
 Hmm... I've applied them using the latest Git master, and in the order
 specified, but I can't build the contrib module.  What am I doing
 wrong?

I'm sorry, but I couldn't reproduce the errors with this procedure.

$ git checkout master
$ git pull upstream master      # make master branch up-to-date
$ git clean -fd                 # remove files for other branches
$ make clean                    # just in case
$ patch -p1 < /path/to/pgsql_fdw_v17.patch
$ patch -p1 < /path/to/pgsql_fdw_pushdown_v10.patch
$ patch -p1 < /path/to/pgsql_fdw_analyze_v1.patch
$ make                          # make core first for libpq et al.
$ cd contrib/pgsql_fdw
$ make                          # pgsql_fdw

Please try git clean and make clean, if you have not.
FWIW, I'm using GNU Make 3.82 and gcc 4.6.0 on Fedora 15.

Regards,
-- 
Shigeru HANADA



Corrected: Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-12 Thread Etsuro Fujita
(2012/03/12 13:04), Etsuro Fujita wrote:
 (2012/03/09 23:48), Tom Lane wrote:
 Etsuro Fujitafujita.ets...@lab.ntt.co.jp   writes:

 2. IMHO RelOptInfo.fdw_private seems confusing.  How about renaming it
 to e.g., RelOptInfo.fdw_state?

 Why is that better?  It seems just as open to confusion with another
 field (ie, the execution-time fdw_state).
 
 I thought the risk.  However, I feel that the naming of
 RelOptInfo.fdw_state is not so bad because it is used only at the query
 planning time, not used along with the execution-time fdw_private.

I wrote the execution-time fdw_private by mistake.  I meant the
execution-time fdw_state.  I'm sorry about that.

Best regards,
Etsuro Fujita



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-12 Thread Albe Laurenz
Tom Lane wrote:
 Shigeru Hanada shigeru.han...@gmail.com writes:
 Thanks for the review.  Agreed to write our own deparser for pgsql_fdw
 which handles nodes which can be pushed down.  Every SQL-based FDW
 which constructs SQL statement for each local query would need such
 module inside.
 
 Yeah.  That's kind of annoying, and the first thing you think of is
that
 we ought to find a way to share that code somehow.  But I think it's
 folly to try to design a shared implementation until we have some
 concrete implementations to compare.  An Oracle FDW, for instance,
would
 need to emit SQL code with many differences in detail from pgsql_fdw.
 It's not clear to me whether a shared implementation is even
practical,
 but for sure I don't want to try to build it before we've done some
 prototype single-purpose implementations.

Having written something like that for Oracle, I tend to share that
opinion.  Anything general-purpose enough to cater for every whim and
oddity of the remote system would probably be so unwieldy that it
wouldn't be much easier to use it than to write the whole thing from
scratch.  To illustrate this, a few examples from the Oracle case:

- Empty strings have different semantics in Oracle (to wit, they mean NULL).
  So you can push down all string constants except empty strings.
- Oracle can only represent intervals with just year and month,
  or with just day of month and smaller fields.  So you can either
  punt on intervals or translate only the ones that fit the bill.
- You can push down the - operator for date arithmetic, except when both
  operands on the Oracle side are of type DATE, because that would
  result in a NUMERIC value (the number of days between them).
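The Oracle-specific rules above can be made concrete with a small sketch.  This is purely illustrative; the function names and type labels are assumptions for the example, not code from the Oracle FDW being described:

```python
# Sketch of two of the push-down rules listed above.

def can_push_down_string_constant(value):
    """Oracle treats the empty string as NULL, so pushing down '' would
    change the query's meaning; all other string constants are safe."""
    return value != ""

def can_push_down_date_minus(left_type, right_type):
    """On Oracle, DATE - DATE yields a NUMERIC day count rather than an
    interval, so only push the subtraction down when at least one operand
    is not a plain DATE."""
    return not (left_type == "DATE" and right_type == "DATE")

print(can_push_down_string_constant(""))              # differs: keep local
print(can_push_down_date_minus("DATE", "DATE"))       # NUMERIC result: keep local
print(can_push_down_date_minus("TIMESTAMP", "DATE"))  # interval result: push down
```

This is exactly the kind of per-remote-system quirk that makes a fully generic shared deparser hard to design.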

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-11 Thread Etsuro Fujita
(2012/03/09 23:48), Tom Lane wrote:
 Etsuro Fujitafujita.ets...@lab.ntt.co.jp  writes:

Thank you for your answer.

 1. FileFdwPlanState.pages and FileFdwPlanState.ntuples seem redundant.
Why not use RelOptInfo.pages and RelOptInfo.tuples?
 
 I intentionally avoided setting RelOptInfo.pages because that would have
 other effects on planning (cf total_table_pages or whatever it's
 called).  It's possible that that would actually be desirable, depending
 on whether you think the external file should be counted as part of the
 query's disk-access footprint; but it would take some investigation to
 conclude that, which I didn't feel like doing right now.  Likewise, I'm
 not sure offhand what side effects might occur from using
 RelOptInfo.tuples, and didn't want to change file_fdw's behavior without
 more checking.

OK

 2. IMHO RelOptInfo.fdw_private seems confusing.  How about renaming it
 to e.g., RelOptInfo.fdw_state?
 
 Why is that better?  It seems just as open to confusion with another
 field (ie, the execution-time fdw_state).

I considered that risk.  However, I feel that the naming
RelOptInfo.fdw_state is not so bad because it would be used only at query
planning time, not along with the execution-time fdw_private.  The
naming RelOptInfo.fdw_private seems as open to confusion to me
because it would have to be used along with Path.fdw_private or
Plan.fdw_private in the FDW's functions at planning time, while I guess
that the contents of RelOptInfo.fdw_private are relatively far from
those of the fdw_private of Path and Plan.

Best regards,
Etsuro Fujita



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-10 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 I've not read whole of the patch yet, but I have basic questions.

 1) IIUC, GetForeignRelSize should set baserel->rows to the number of
 rows the ForeignScan node returns to the upper node, but not the number
 of rows the FDW returns to the core executor, right?

It should be the number of rows estimated to pass the baserestrictinfo
restriction clauses, so yeah, not the same as what the FDW would return,
except in cases where all the restriction clauses are handled internally
by the FDW.

 BTW, once Fujita-san's ANALYZE support patch is merged, we will be
 able to get a rows estimate easily by calling clauselist_selectivity
 with baserel->tuples and baserestrictinfo.  Otherwise, pgsql_fdw
 would still need to execute EXPLAIN on the remote side to get a
 meaningful rows estimate.

Yeah, one of the issues for that patch is how we see it coexisting with
the option of doing a remote-side EXPLAIN.

 2) ISTM that pgsql_fdw needs to execute EXPLAIN on remote side for each
 possible remote query to get meaningful costs estimation, and it
 requires pgsql_fdw to generate SQL statements in GetForeignPaths.
 I worry that I've misunderstood intention of your design because
 you've mentioned postponing SQL deparsing to createplan time.

If you want to get the cost estimates that way, then yes, you'd be
needing to do some SQL-statement-construction earlier than final plan
generation.  But it's not apparent to me that those statements would
necessarily be the same as, or even very similar to, what the final
queries would be.  For instance, you'd probably try to reduce parameters
to constants for estimation purposes.
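The "reduce parameters to constants" idea can be sketched as a small query-rewriting step.  This is a hedged illustration only: the `$n` placeholder syntax and the simplistic quoting are assumptions for the example, not how pgsql_fdw actually builds its estimation queries:

```python
import re

# Sketch: before running a remote EXPLAIN for cost estimation, replace
# $1, $2, ... placeholders with constant values so the remote planner
# has something concrete to plan, then prepend EXPLAIN.

def explain_query_with_constants(sql, param_values):
    def substitute(match):
        index = int(match.group(1)) - 1
        value = param_values[index]
        if isinstance(value, str):
            # Minimal SQL string quoting: double embedded single quotes.
            return "'" + value.replace("'", "''") + "'"
        return str(value)
    return "EXPLAIN " + re.sub(r"\$(\d+)", substitute, sql)

print(explain_query_with_constants(
    "SELECT id FROM public.t WHERE age > $1 AND name = $2",
    [30, "O'Brien"]))
```

The query actually sent at execution time could then keep its parameters, since, as noted above, the estimation statement need not match the final one.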

 GetForeignPaths
 1) Repeat for each possible remote query:
 1-1) Generate remote query, such as with-WHERE and with-ORDER BY.
 1-2) Execute EXPLAIN of generated query, and get costs estimation
 (rows estimation is ignored because it's useless in planning).

Why do you say that?

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-10 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 Thanks for the review.  Agreed to write own deparser for pgsql_fdw
 which handles nodes which can be pushed down.  Every SQL-based FDW
 which constructs SQL statement for each local query would need such
 module inside.

Yeah.  That's kind of annoying, and the first thing you think of is that
we ought to find a way to share that code somehow.  But I think it's
folly to try to design a shared implementation until we have some
concrete implementations to compare.  An Oracle FDW, for instance, would
need to emit SQL code with many differences in detail from pgsql_fdw.
It's not clear to me whether a shared implementation is even practical,
but for sure I don't want to try to build it before we've done some
prototype single-purpose implementations.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-10 Thread Martijn van Oosterhout
On Sat, Mar 10, 2012 at 11:38:51AM -0500, Tom Lane wrote:
 Shigeru Hanada shigeru.han...@gmail.com writes:
  Thanks for the review.  Agreed to write own deparser for pgsql_fdw
  which handles nodes which can be pushed down.  Every SQL-based FDW
  which constructs SQL statement for each local query would need such
  module inside.
 
 Yeah.  That's kind of annoying, and the first thing you think of is that
 we ought to find a way to share that code somehow.  But I think it's
 folly to try to design a shared implementation until we have some
 concrete implementations to compare.  An Oracle FDW, for instance, would
 need to emit SQL code with many differences in detail from pgsql_fdw.
 It's not clear to me whether a shared implementation is even practical,
 but for sure I don't want to try to build it before we've done some
 prototype single-purpose implementations.

FWIW, this sounds like the compiler mechanism in SQLAlchemy for
turning SQL node trees into strings. The basic idea is you define
functions for converting nodes to strings. Stuff like And and Or
works for every database, but then dialects can override different
things.

So for Postgres you have Node(foo) = $1, but for other databases
perhaps :field1.  Most of the other code can be shared.

http://docs.sqlalchemy.org/en/latest/core/compiler.html

In my experience it works well for generating custom constructs. They
have compilers for 11 different database engines, so it seems flexible
enough.  Mind you, they also handle DDL mapping (where most of the
variation is) and datatype translations, which seems a lot further than
we need here.

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer


signature.asc
Description: Digital signature


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-09 Thread Etsuro Fujita

(2012/03/09 14:00), Tom Lane wrote:

I wrote:

There are a couple of other points that make me think we need to revisit
the PlanForeignScan API definition some more, too.  ...
So we need to break down what PlanForeignScan currently does into three
separate steps.  The first idea that comes to mind is to call them
GetForeignRelSize, GetForeignPaths, GetForeignPlan; but maybe somebody
has a better idea for names?


Attached is a draft patch for that.


1. FileFdwPlanState.pages and FileFdwPlanState.ntuples seem redundant.
 Why not use RelOptInfo.pages and RelOptInfo.tuples?


2. IMHO RelOptInfo.fdw_private seems confusing.  How about renaming it 
to e.g., RelOptInfo.fdw_state?


Attached is a patch for the draft patch.

Best regards,
Etsuro Fujita
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 74,87  static const struct FileFdwOption valid_options[] = {
  };
  
  /*
!  * FDW-specific information for RelOptInfo.fdw_private.
   */
  typedef struct FileFdwPlanState
  {
char   *filename;   /* file to read */
List   *options;    /* merged COPY options, excluding filename */
-   BlockNumber pages;  /* estimate of file's physical size */
-   double  ntuples;    /* estimate of number of rows in file */
  } FileFdwPlanState;
  
  /*
--- 74,85 
  };
  
  /*
!  * FDW-specific information for RelOptInfo.fdw_state.
   */
  typedef struct FileFdwPlanState
  {
char   *filename;   /* file to read */
List   *options;    /* merged COPY options, excluding filename */
  } FileFdwPlanState;
  
  /*
***
*** 132,140  static void fileGetOptions(Oid foreigntableid,
   char **filename, List **other_options);
  static List *get_file_fdw_attribute_options(Oid relid);
  static void estimate_size(PlannerInfo *root, RelOptInfo *baserel,
! FileFdwPlanState *fdw_private);
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
-  FileFdwPlanState *fdw_private,
   Cost *startup_cost, Cost *total_cost);
  
  
--- 130,137 
   char **filename, List **other_options);
  static List *get_file_fdw_attribute_options(Oid relid);
  static void estimate_size(PlannerInfo *root, RelOptInfo *baserel,
! FileFdwPlanState *fpstate);
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
   Cost *startup_cost, Cost *total_cost);
  
  
***
*** 415,433  fileGetForeignRelSize(PlannerInfo *root,
  RelOptInfo *baserel,
  Oid foreigntableid)
  {
!   FileFdwPlanState *fdw_private;
  
/*
 * Fetch options.  We only need filename at this point, but we might
 * as well get everything and not need to re-fetch it later in planning.
 */
!   fdw_private = (FileFdwPlanState *) palloc(sizeof(FileFdwPlanState));
!   fileGetOptions(foreigntableid,
!  fdw_private-filename, 
fdw_private-options);
!   baserel-fdw_private = (void *) fdw_private;
  
/* Estimate relation size */
!   estimate_size(root, baserel, fdw_private);
  }
  
  /*
--- 412,429 
  RelOptInfo *baserel,
  Oid foreigntableid)
  {
!   FileFdwPlanState *fpstate;
  
/*
 * Fetch options.  We only need filename at this point, but we might
 * as well get everything and not need to re-fetch it later in planning.
 */
!   fpstate = (FileFdwPlanState *) palloc(sizeof(FileFdwPlanState));
!   fileGetOptions(foreigntableid, &fpstate->filename, &fpstate->options);
!   baserel->fdw_state = (void *) fpstate;
  
/* Estimate relation size */
!   estimate_size(root, baserel, fpstate);
  }
  
  /*
***
*** 443,455  fileGetForeignPaths(PlannerInfo *root,
RelOptInfo *baserel,
Oid foreigntableid)
  {
-   FileFdwPlanState *fdw_private = (FileFdwPlanState *) baserel->fdw_private;
Coststartup_cost;
Costtotal_cost;
  
/* Estimate costs */
!   estimate_costs(root, baserel, fdw_private,
!  &startup_cost, &total_cost);
  
/* Create a ForeignPath node and add it as only possible path */
add_path(baserel, (Path *)
--- 439,449 
RelOptInfo *baserel,
Oid foreigntableid)
  {
Coststartup_cost;
Costtotal_cost;
  
/* Estimate costs */
!   estimate_costs(root, baserel, &startup_cost, &total_cost);
  
/* Create 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-09 Thread Tom Lane
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes:
 (2012/03/09 14:00), Tom Lane wrote:
 Attached is a draft patch for that.

 1. FileFdwPlanState.pages and FileFdwPlanState.ntuples seem redundant.
   Why not use RelOptInfo.pages and RelOptInfo.tuples?

I intentionally avoided setting RelOptInfo.pages because that would have
other effects on planning (cf total_table_pages or whatever it's
called).  It's possible that that would actually be desirable, depending
on whether you think the external file should be counted as part of the
query's disk-access footprint; but it would take some investigation to
conclude that, which I didn't feel like doing right now.  Likewise, I'm
not sure offhand what side effects might occur from using
RelOptInfo.tuples, and didn't want to change file_fdw's behavior without
more checking.

 2. IMHO RelOptInfo.fdw_private seems confusing.  How about renaming it 
 to e.g., RelOptInfo.fdw_state?

Why is that better?  It seems just as open to confusion with another
field (ie, the execution-time fdw_state).  I thought for a little bit
about trying to give different names to all four of the fdw private
fields (RelOptInfo, Path, Plan, PlanState) but it's not obvious what
naming rule to use, and at least the last two of those can't be changed
without breaking existing FDW code.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-09 Thread Shigeru Hanada
I've not read whole of the patch yet, but I have basic questions.

1) IIUC, GetForeignRelSize should set baserel->rows to the number of
rows the ForeignScan node returns to upper node, but not the number
of rows FDW returns to core executor, right?

BTW, once Fujita-san's ANALYZE support patch is merged, we will be
able to get rows estimation easily by calling clauselist_selectivity
with baserel->tuples and baserestrictinfo.  Otherwise, pgsql_fdw
would still need to execute EXPLAIN on remote side to get meaningful
rows estimation.

2) ISTM that pgsql_fdw needs to execute EXPLAIN on remote side for each
possible remote query to get meaningful costs estimation, and it
requires pgsql_fdw to generate SQL statements in GetForeignPaths.
I worry that I've misunderstood intention of your design because
you've mentioned postponing SQL deparsing to createplan time.

I'll read the document and patch, and fix pgsql_fdw so that it can
work with the new API.  For now, I think that pgsqlPlanForeignScan
should be separated like below:

GetForeignRelSize
1) Retrieve catalog information via GetForeignFoo functions.
2) Generate simple remote query which has no WHERE clause.
3) Execute EXPLAIN of simple query, and get rows and costs estimation.
4) Set baserel->rows.

All information above is stored in baserel->fdw_private to use it
in subsequent GetForeignPaths.

If ANALYZE of foreign tables is supported, we can postpone 2) and 3)
to GetForeignPaths.

GetForeignPaths
1) Repeat for each possible remote query:
1-1) Generate remote query, such as with-WHERE and with-ORDER BY.
1-2) Execute EXPLAIN of generated query, and get costs estimation
(rows estimation is ignored because it's useless in planning).
1-3) Call add_path and create_foreignscan_path for the query.

GetForeignPlan
1) Create fdw_exprs from baserestrictinfo, removing clauses
which are pushed down by the selected path.

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-09 Thread Shigeru Hanada
(2012/03/09 1:18), Tom Lane wrote:
 I've been looking at this patch a little bit over the past day or so.
 I'm pretty unhappy with deparse.c --- it seems like a real kluge,
 inefficient and full of corner-case bugs.  After some thought I believe
 that you're ultimately going to have to abandon depending on ruleutils.c
 for reverse-listing services, and it would be best to bite that bullet
 now and rewrite this code from scratch.

Thanks for the review.  Agreed to write own deparser for pgsql_fdw
which handles nodes which can be pushed down.  Every SQL-based FDW
which constructs SQL statement for each local query would need such
module inside.

BTW, pgsql_fdw pushes only built-in objects which have no collation
effect down to remote side, because user-defined objects might have
different semantics on remote end.  In future, such deparser will
need some mechanism to map local object (or expression?) to remote
one, like ROUTINE MAPPING, as discussed before.  But it seems ok to
assume that built-in objects have same name and semantics on remote
end.

 There are a couple of other points that make me think we need to revisit
 the PlanForeignScan API definition some more, too.  First, deparse.c is
 far from cheap.  While this doesn't matter greatly as long as there's
 only one possible path for a foreign table, as soon as you want to
 create more than one it's going to be annoying to do all that work N
 times and then throw away N-1 of the results.

Indeed deparse.c is not cheap, but I think that pgsql_fdw can avoid
redundant work by deparsing SQL statement parts separately, unless we need
to consider join push-down.  Possible parts are SELECT, FROM, WHERE
and ORDER BY clauses.  Simplest path uses SELECT and FROM, and other
paths can be built by copying necessary clauses into individual
buffers.

Comments on the rest are in my other reply to your recent post.

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-09 Thread Shigeru Hanada
(2012/03/06 23:47), Albe Laurenz wrote:
 Shigeru Hanada wrote:
 Connection should be closed only when the aborting transaction is a
 top-level transaction, but the isTopLevel flag was not
 checked.  I fixed the bug and added regression tests for such cases.

 I wondered about that - is it really necessary to close the remote
 connection? Wouldn't a ROLLBACK on the remote connection be good enough?

Rolling back the remote transaction seems enough when the error comes
from a local cause and the remote connection is still available.  However,
I'd rather disconnect always to keep error handling simple and
centralized in cleanup_connection.

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-08 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 [ pgsql_fdw_v15.patch ]

I've been looking at this patch a little bit over the past day or so.
I'm pretty unhappy with deparse.c --- it seems like a real kluge,
inefficient and full of corner-case bugs.  After some thought I believe
that you're ultimately going to have to abandon depending on ruleutils.c
for reverse-listing services, and it would be best to bite that bullet
now and rewrite this code from scratch.  ruleutils.c is serving two
masters already (rule-dumping and EXPLAIN) and it's not going to be
practical to tweak its behavior further for this usage; yet there are
all sorts of clear problems that you are going to run into, boiling down
to the fact that names on the remote end aren't necessarily the same as
names on the local end.  For instance, ruleutils.c is not going to be
helpful at schema-qualifying function names in a way that's correct for
the foreign server environment.  Another issue is that as soon as you
try to push down join clauses for parameterized paths, you are going to
want Vars of other relations to be printed as parameters ($n, most
likely) and ruleutils is not going to know to do that.  Seeing that
semantic constraints will greatly limit the set of node types that can
ever be pushed down anyway, I think it's likely to be easiest to just
write your own node-printing code and not even try to use ruleutils.c.

There are a couple of other points that make me think we need to revisit
the PlanForeignScan API definition some more, too.  First, deparse.c is
far from cheap.  While this doesn't matter greatly as long as there's
only one possible path for a foreign table, as soon as you want to
create more than one it's going to be annoying to do all that work N
times and then throw away N-1 of the results.  I also choked on the fact
that the pushdown patch thinks it can modify baserel->baserestrictinfo.
That might accidentally fail to malfunction right now, but it's never
going to scale to multiple paths with potentially different sets of
remotely-applied constraints.  So I'm thinking we really need to let
FDWs in on the Path versus Plan distinction --- that is, a Path just
needs to be a cheap summary of a way to do things, and then at
createplan.c time you convert the selected Path into a full-fledged
Plan.  Most of the work done in deparse.c could be postponed to
createplan time and done only once, even with multiple paths.
The baserestrictinfo hack would be unnecessary too if the FDW had more
direct control over generation of the ForeignScan plan node.

Another thing I'm thinking we should let FDWs in on is the distinction
between rowcount estimation and path generation.  When we did the first
API design last year it was okay to expect a single call to do both,
but as of a couple months ago allpaths.c does those steps in two
separate passes over the baserels, and it'd be handy if FDWs would
cooperate.

So we need to break down what PlanForeignScan currently does into three
separate steps.  The first idea that comes to mind is to call them
GetForeignRelSize, GetForeignPaths, GetForeignPlan; but maybe somebody
has a better idea for names?

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-08 Thread Tom Lane
I wrote:
 There are a couple of other points that make me think we need to revisit
 the PlanForeignScan API definition some more, too.  ...
 So we need to break down what PlanForeignScan currently does into three
 separate steps.  The first idea that comes to mind is to call them
 GetForeignRelSize, GetForeignPaths, GetForeignPlan; but maybe somebody
 has a better idea for names?

Attached is a draft patch for that.  While I was working on this
I realized that we were very far short of allowing FDWs to set up
expressions of their choice for execution; there was nothing for that in
setrefs.c, nor some other places that need to post-process expressions.
I had originally supposed that fdw_private could just contain some
expression trees, but that wasn't going to work without post-processing.
So this patch attempts to cover that too, by breaking what had been
fdw_private into a private part and an fdw_exprs list that will be
subject to expression post-processing.  (The alternative to this would
be to do post-processing on all of fdw_private, but that would
considerably restrict what can be in fdw_private, so it seemed better
to decree two separate fields.)

Working on this also helped me identify some other things that had been
subliminally bothering me about pgsql_fdw's qual pushdown code.  That
patch is set up with the idea of pushing entire quals (boolean
RestrictInfo expressions) across to the remote side, but I think that
is probably the wrong granularity, or at least not the only mechanism
we should have.  IMO it is more important to provide a structure similar
to index quals; that is, what you want to identify is RestrictInfo
expressions of the form
remote_variable operator local_expression
where the operator has to be one that the remote can execute with the
same semantics as we think it has, but the only real restriction on the
local_expression is that it be stable, because we'll execute it locally
and send only its result value across to the remote.  (The SQL sent to
the remote looks like remote_variable operator $1, or some such.)
Thus, to take an example that's said to be unsafe in the existing code
comments, there's no problem at all with
remote_timestamp_col = now()
as long as we execute now() locally.

There might be some value in pushing entire quals across too, for
clauses like remote_variable_1 = remote_variable_2, but I believe
that these are not nearly as important as variable = constant and
variable = join_variable cases.  Consider that when dealing with a
local table, only the latter two cases can be accelerated by indexes.

regards, tom lane

diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 29f203c6f10d194670aa37956a6190e8e7a1..e8907709bd90a6342384dfb6f10b00e55018d65d 100644
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 26,31 
--- 26,33 
  #include "nodes/makefuncs.h"
  #include "optimizer/cost.h"
  #include "optimizer/pathnode.h"
+ #include "optimizer/planmain.h"
+ #include "optimizer/restrictinfo.h"
  #include "utils/rel.h"
  
  PG_MODULE_MAGIC;
*** struct FileFdwOption
*** 48,54 
   * Note: If you are adding new option for user mapping, you need to modify
   * fileGetOptions(), which currently doesn't bother to look at user mappings.
   */
! static struct FileFdwOption valid_options[] = {
  	/* File options */
	{"filename", ForeignTableRelationId},
  
--- 50,56 
   * Note: If you are adding new option for user mapping, you need to modify
   * fileGetOptions(), which currently doesn't bother to look at user mappings.
   */
! static const struct FileFdwOption valid_options[] = {
  	/* File options */
	{"filename", ForeignTableRelationId},
  
*** static struct FileFdwOption valid_option
*** 72,77 
--- 74,90 
  };
  
  /*
+  * FDW-specific information for RelOptInfo.fdw_private.
+  */
+ typedef struct FileFdwPlanState
+ {
+ 	char	   *filename;		/* file to read */
+ 	List	   *options;		/* merged COPY options, excluding filename */
+ 	BlockNumber pages;			/* estimate of file's physical size */
+ 	double		ntuples;		/* estimate of number of rows in file */
+ } FileFdwPlanState;
+ 
+ /*
   * FDW-specific information for ForeignScanState.fdw_state.
   */
  typedef struct FileFdwExecutionState
*** PG_FUNCTION_INFO_V1(file_fdw_validator);
*** 93,101 
  /*
   * FDW callback routines
   */
! static void filePlanForeignScan(Oid foreigntableid,
! 	PlannerInfo *root,
! 	RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
  static void fileBeginForeignScan(ForeignScanState *node, int eflags);
  static TupleTableSlot *fileIterateForeignScan(ForeignScanState *node);
--- 106,123 
  /*
   * FDW callback routines
   */
! static void fileGetForeignRelSize(PlannerInfo *root,
!   RelOptInfo *baserel,
!   Oid foreigntableid);
! static void fileGetForeignPaths(PlannerInfo 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-07 Thread Peter Eisentraut
On tis, 2012-03-06 at 13:39 -0500, Tom Lane wrote:
 A bigger issue with postgresql_fdw_validator is that it supposes that
 the core backend is authoritative as to what options libpq supports,
 which is bad design on its face.  It would be much more sensible for
 dblink to be asking libpq what options libpq supports, say via
 PQconndefaults().

The validator for the proposed FDW suffers from the same problem.




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-07 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 Agreed.  Attached fdw_helper patch doesn't contain GetFdwOptionValue()
 any more, and pgsql_fdw patch accesses only necessary catalogs.

I've committed the fdw_helper part of this, with some very minor
improvements.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-06 Thread Albe Laurenz
Shigeru Hanada wrote:
[pgsql_fdw_v12.patch]

I know this is not the latest version, but I played around with it and
tickled a bug.
It seems to have a problem with rolled back subtransactions.

test=> \d+ remote
          Foreign table "laurenz.remote"
 Column |  Type   | Modifiers | FDW Options | Storage  | Description
--------+---------+-----------+-------------+----------+-------------
 id     | integer | not null  |             | plain    |
 val    | text    | not null  |             | extended |
Server: loopback
FDW Options: (nspname 'laurenz', relname 'local')
Has OIDs: no
test=> BEGIN;
test=> DECLARE x CURSOR FOR SELECT * FROM remote;
DEBUG:  Remote SQL: SELECT id, val FROM laurenz.local
DEBUG:  relid=16423 fetch_count=1
DEBUG:  starting remote transaction with "START TRANSACTION ISOLATION LEVEL REPEATABLE READ"
test=> FETCH x;
 id | val
----+-----
  1 | one
(1 row)
test=> SAVEPOINT z;
test=> ERROR OUT;
ERROR:  syntax error at or near "ERROR"
LINE 1: ERROR OUT;
test=> ROLLBACK TO SAVEPOINT z;
test=> FETCH x;
 id | val
----+-----
  2 | two
(1 row)
test=> COMMIT;
ERROR:  could not close cursor
DETAIL:  no connection to the server
HINT:  CLOSE pgsql_fdw_cursor_0

The error message reported is not consistent; in one attempt the backend
crashed.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-06 Thread Shigeru Hanada
(2012/03/06 6:19), Tom Lane wrote:
 I've committed the PlanForeignScan API change, with that change and
 some other minor editorialization.  The pgsql_fdw patch now needs an
 update, so I set it back to Waiting On Author state.

Thanks.

I've revised pgsql_fdw to catch up to this change, but I'll post those
patches after fixing the bug reported by Albe Laurenz.

BTW, what I did for this change is also needed for other existing FDWs
to make them available on 9.2.  So, I'd like to add instructions on updating
FDWs for 9.2 to the SQL/MED wiki page, which is probably where most FDW
authors check.

Regards,
-- 
Shigeru Hanada




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-06 Thread Albe Laurenz
Shigeru Hanada wrote:
Connection should be closed only when the aborting transaction is a
 top-level transaction, but the isTopLevel flag was not
 checked.  I fixed the bug and added regression tests for such cases.

I wondered about that - is it really necessary to close the remote
connection? Wouldn't a ROLLBACK on the remote connection be good enough?

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-06 Thread Tom Lane
Peter Eisentraut pete...@gmx.net writes:
 On tor, 2012-03-01 at 20:56 +0900, Shigeru Hanada wrote:
 How about moving postgresql_fdw_validator into dblink,

 That's probably a good move.  If this were C++, we might try to subclass
 this whole thing a bit, to avoid code duplication, but I don't see an
 easy way to do that here.

 with renaming to dblink_fdw_validator? 

 Well, it's not the validator of the dblink_fdw, so maybe something like
 basic_postgresql_fdw_validator.

I don't understand this objection.  If we move it into dblink, then it
*is* dblink's validator, and nobody else's.

A bigger issue with postgresql_fdw_validator is that it supposes that
the core backend is authoritative as to what options libpq supports,
which is bad design on its face.  It would be much more sensible for
dblink to be asking libpq what options libpq supports, say via
PQconndefaults().

We might find that we have to leave postgresql_fdw_validator as-is
for backwards compatibility reasons (in particular, being able to load
existing FDW definitions) but I think we should migrate away from using
it.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-06 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 Attached patches also contains changes to catch up to the redesign of
 PlanForeignScan.

I started to look at this patch, which soon led me back to the
prerequisite patch fdw_helper_v5.patch (that is the last version you
posted of that one, right?).  I can see the value of
GetForeignColumnOptions, but ISTM that GetFdwOptionValue is poorly
designed and will accomplish little except to encourage inefficient
searching.  The original version was even worse, but even in the current
version there is no way to avoid a useless scan of table-level options
when you are looking for a column-level option.  Also, it's inefficient
when looking for several option values, as in this extract from
pgsql_fdw_v13.patch,

+   nspname = GetFdwOptionValue(InvalidOid, InvalidOid, relid,
+                               InvalidAttrNumber, "nspname");
+   if (nspname == NULL)
+       nspname = get_namespace_name(get_rel_namespace(relid));
+   q_nspname = quote_identifier(nspname);
+ 
+   relname = GetFdwOptionValue(InvalidOid, InvalidOid, relid,
+                               InvalidAttrNumber, "relname");
+   if (relname == NULL)
+       relname = get_rel_name(relid);
+   q_relname = quote_identifier(relname);

where we are going to uselessly run GetForeignTable twice.

If we had a lot of options that could usefully be specified at multiple
levels of the foreign-objects hierarchy, it might be appropriate to have
a search function defined like this; but the existing samples of FDWs
don't seem to support the idea that that's going to be common.  It looks
to me like the vast majority of options make sense at exactly one level.

So I'm thinking we should forget GetFdwOptionValue and just expect the
callers to search the option lists for the appropriate object(s).  It
might be worth providing get_options_value() as an exported function,
though surely there's not that much to it.

Another issue is that get_options_value ignores defGetString(), which is
what it really ought to be using, instead of assuming strVal() is
appropriate.  ISTM that it would also encourage people to take shortcuts
where they should be using functions like defGetBoolean() etc.  Not
quite sure what we should do about that; maybe we need to provide
several variants of the function that are appropriate for different
option datatypes.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-05 Thread Shigeru Hanada
(2012/02/21 20:25), Etsuro Fujita wrote:
 Please find attached an updated version of the patch.

This v2 patch can be applied on HEAD cleanly.  Compilation completed with
only one expected warning (in scan.c), and all regression tests for both
core and contrib modules passed.

This patch allows FDWs to return multiple ForeignPath nodes per
PlanForeignScan call.  It also gets rid of FdwPlan, the FDW-private
information container, replacing it with a simple List.

I've reviewed the patch closely, and have some comments about its design.

Basically a create_foo_path is responsible for creating a node object
with a particular Path-derived type, but this patch changes
create_foreignscan_path to just call PlanForeignScan and return void.
This change seems to break the module design.  IMO create_foreignscan_path
should return just one ForeignPath node per call, so calling add_path
multiple times should be done somewhere else.  I think
set_foreign_pathlist suits this, because set_foo_pathlist functions
are responsible for building possible paths for a RangeTblEntry, as
comment of set_foreign_pathlist says.

/*
 * set_foreign_pathlist
 *  Build one or more access paths for a foreign table RTE
 */

In this design, FDW authors can implement PlanForeignScan by repeating
steps below for each possible scan path for a foreign table:

  (1) create a template ForeignPath node with create_foreignscan_path
  (2) customize the path as FDW wants, e.g. push down WHERE clause
  (3) store FDW-private info
  (4) estimate costs of the path
  (5) call add_path to add the path to RelOptInfo

The current design doesn't allow FDWs to provide multiple paths that
differ from each other in local filtering, because all paths share one
RelOptInfo and the baserestrictinfo in it.  I think this restriction
wouldn't be a serious problem.

Please find attached a patch implementing the design above.

-- 
Shigeru Hanada
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 46394a8..bb541e3 100644
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 25,30 
--- 25,31 
  #include "miscadmin.h"
  #include "nodes/makefuncs.h"
  #include "optimizer/cost.h"
+ #include "optimizer/pathnode.h"
  #include "utils/rel.h"
  #include "utils/syscache.h"
  
*** PG_FUNCTION_INFO_V1(file_fdw_validator);
*** 93,99 
  /*
   * FDW callback routines
   */
! static FdwPlan *filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
--- 94,100 
  /*
   * FDW callback routines
   */
! static void filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
*** get_file_fdw_attribute_options(Oid relid
*** 406,432 
  
  /*
   * filePlanForeignScan
!  *Create a FdwPlan for a scan on the foreign table
   */
! static FdwPlan *
  filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel)
  {
!   FdwPlan    *fdwplan;
char   *filename;
List   *options;
  
/* Fetch options --- we only need filename at this point */
	fileGetOptions(foreigntableid, &filename, &options);
  
!   /* Construct FdwPlan with cost estimates */
!   fdwplan = makeNode(FdwPlan);
	estimate_costs(root, baserel, filename,
!  &fdwplan->startup_cost, &fdwplan->total_cost);
!   fdwplan->fdw_private = NIL; /* not used */
  
!   return fdwplan;
  }
  
  /*
--- 407,443 
  
  /*
   * filePlanForeignScan
!  *Create possible access paths for a scan on the foreign table
!  *
!  *Currently we don't support any push-down feature, so there is only one
!  *possible access path, which simply returns all records in the order in
!  *the data file.
   */
! static void
  filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel)
  {
!   ForeignPath *pathnode = makeNode(ForeignPath);
char   *filename;
List   *options;
  
/* Fetch options --- we only need filename at this point */
	fileGetOptions(foreigntableid, &filename, &options);
  
!   /* Create a ForeignPath node and add it as only one possible path. */
!   pathnode = create_foreignscan_path(root, baserel);
!   pathnode->fdw_private = NIL;
	estimate_costs(root, baserel, filename,
!  &pathnode->path.startup_cost, &pathnode->path.total_cost);
!   pathnode->path.rows = baserel->rows;
!   

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-05 Thread Etsuro Fujita
(2012/03/05 18:21), Shigeru Hanada wrote:
 (2012/02/21 20:25), Etsuro Fujita wrote:
 Please find attached an updated version of the patch.
 
 This v2 patch can be applied on HEAD cleanly.  Compile completed with
 only one expected warning of scan.c, and all regression tests for both
 core and contrib modules passed.
 
 This patch allows FDWs to return multiple ForeignPath nodes per a
 PlanForeignScan call.  It also get rid of FdwPlan, FDW-private
 information container, by replacing with simple List.
 
 I've reviewed the patch closely, and have some comments about its design.

Thank you for your review.

 Basically a create_foo_path is responsible for creating a node object
 with a particular Path-derived type, but this patch changes
 create_foreignscan_path to just call PlanForeignScan and return void.
 This change seems breaking module design.

create_index_path builds multiple index paths for a plain relation.  How
about renaming the function to create_foreign_paths?

Best regards,
Etsuro Fujita



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-05 Thread Etsuro Fujita
(2012/03/05 21:00), Etsuro Fujita wrote:
 (2012/03/05 18:21), Shigeru Hanada wrote:
 (2012/02/21 20:25), Etsuro Fujita wrote:
 Please find attached an updated version of the patch.

 This v2 patch can be applied on HEAD cleanly.  Compile completed with
 only one expected warning of scan.c, and all regression tests for both
 core and contrib modules passed.

 This patch allows FDWs to return multiple ForeignPath nodes per a
 PlanForeignScan call.  It also get rid of FdwPlan, FDW-private
 information container, by replacing with simple List.

 I've reviewed the patch closely, and have some comments about its design.
 
 Thank you for your review.
 
 Basically a create_foo_path is responsible for creating a node object
 with a particular Path-derived type, but this patch changes
 create_foreignscan_path to just call PlanForeignScan and return void.
 This change seems breaking module design.
 
 create_index_path builds multiple index paths for a plain relation.  How
 about renaming the function to create_foreign_paths?

I meant create_foreignscan_paths.  I'm sorry about that.

Best regards,
Etsuro Fujita



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-05 Thread Shigeru Hanada
(2012/03/05 21:05), Etsuro Fujita wrote:
 (2012/03/05 21:00), Etsuro Fujita wrote:
 create_index_path builds multiple index paths for a plain relation.  How
 about renaming the function to create_foreign_paths?

 I meant create_foreignscan_paths.  I'm sorry about that.

Perhaps you are confusing create_index_path with create_index_paths.
The former creates an IndexScan path node (so it's similar to
create_foreignscan_path), and the latter builds multiple IndexScan paths
for a plain relation.

So, just renaming create_foreignscan_path to the plural form seems to
miss the point.

-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-05 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 So, just renaming create_foreignscan_path to plural form seems missing
 the point.

I agree that that wouldn't be an improvement.  What bothers me about the
patch's version of this function is that it just creates a content-free
Path node and leaves it to the caller to fill in everything.  That
doesn't accomplish much, and it leaves the caller very exposed to errors
of omission.  It's also unlike the other create_xxx_path functions,
which generally hand back a completed Path ready to pass to add_path.

I'm inclined to think that if we provide this function in core at all,
it should take a parameter list long enough to let it fill in the Path
completely.  That would imply that any future changes in Path structs
would result in a change in the parameter list, which would break
callers --- but it would break them in an obvious way that the C
compiler would complain about.  If we leave it as-is, those same callers
would be broken silently, because they'd just be failing to fill in
the new Path fields.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-05 Thread Tom Lane
I wrote:
 I'm inclined to think that if we provide this function in core at all,
 it should take a parameter list long enough to let it fill in the Path
 completely.  That would imply that any future changes in Path structs
 would result in a change in the parameter list, which would break
 callers --- but it would break them in an obvious way that the C
 compiler would complain about.  If we leave it as-is, those same callers
 would be broken silently, because they'd just be failing to fill in
 the new Path fields.

I've committed the PlanForeignScan API change, with that change and
some other minor editorialization.  The pgsql_fdw patch now needs an
update, so I set it back to Waiting On Author state.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-01 Thread Shigeru Hanada
(2012/03/01 0:33), Tom Lane wrote:
 I don't think that creating such a dependency is acceptable.
 Even if we didn't mind the dependency, you said yourself that
 contrib/postgresql_fdw's validator will accept stuff that's not
 appropriate for dblink.

Agreed.  I think that these two contrib modules (and all FDW modules)
should each have their own validator, to avoid undesirable dependencies
and naming conflicts, and such validator functions should live inside
each module, not in core.

How about moving postgresql_fdw_validator into dblink, renaming it to
dblink_fdw_validator?  The attached patch achieves these changes.  I've
left postgresql_fdw_validator in the foreign_data regression test
section, so that the foreign_data section can still check whether FDW
DDLs invoke the validator function.  I used the name
postgresql_fdw_validator for the test validator to keep the change as
small as possible.

This change requires dblink to have a new function, so its version
should be bumped to 1.1.

These changes have no direct relation to the PostgreSQL FDW, so this
patch can be applied by itself.  Once this patch is applied, I'll rename
pgsql_fdw to postgresql_fdw, which spells the product name out in full.

-- 
Shigeru Hanada
diff --git a/contrib/dblink/Makefile b/contrib/dblink/Makefile
index ac63748..a27db88 100644
*** a/contrib/dblink/Makefile
--- b/contrib/dblink/Makefile
*** SHLIB_LINK = $(libpq)
*** 7,13 
  SHLIB_PREREQS = submake-libpq
  
  EXTENSION = dblink
! DATA = dblink--1.0.sql dblink--unpackaged--1.0.sql
  
  REGRESS = dblink
  
--- 7,13 
  SHLIB_PREREQS = submake-libpq
  
  EXTENSION = dblink
! DATA = dblink--1.1.sql dblink--1.0--1.1.sql dblink--unpackaged--1.0.sql
  
  REGRESS = dblink
  
diff --git a/contrib/dblink/dblink--1.0--1.1.sql 
b/contrib/dblink/dblink--1.0--1.1.sql
index ...3dd02a0 .
*** a/contrib/dblink/dblink--1.0--1.1.sql
--- b/contrib/dblink/dblink--1.0--1.1.sql
***
*** 0 
--- 1,17 
+ /* contrib/dblink/dblink--1.0--1.1.sql */
+ 
+ -- complain if script is sourced in psql, rather than via ALTER EXTENSION
+ \echo Use "ALTER EXTENSION dblink UPDATE" to load this file. \quit
+ 
+ /* First we have to create validator function */
+ CREATE FUNCTION dblink_fdw_validator(
+ options text[],
+ catalog oid
+ )
+ RETURNS boolean
+ AS 'MODULE_PATHNAME', 'dblink_fdw_validator'
+ LANGUAGE C STRICT;
+ 
+ /* Then we add the validator */
+ ALTER EXTENSION dblink ADD FUNCTION dblink_fdw_validator(text[], oid);
+ 
diff --git a/contrib/dblink/dblink--1.1.sql b/contrib/dblink/dblink--1.1.sql
index ...ec02498 .
*** a/contrib/dblink/dblink--1.1.sql
--- b/contrib/dblink/dblink--1.1.sql
***
*** 0 
--- 1,231 
+ /* contrib/dblink/dblink--1.1.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION dblink" to load this file. \quit
+ 
+ -- dblink_connect now restricts non-superusers to password
+ -- authenticated connections
+ CREATE FUNCTION dblink_connect (text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_connect'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_connect (text, text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_connect'
+ LANGUAGE C STRICT;
+ 
+ -- dblink_connect_u allows non-superusers to use
+ -- non-password authenticated connections, but initially
+ -- privileges are revoked from public
+ CREATE FUNCTION dblink_connect_u (text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_connect'
+ LANGUAGE C STRICT SECURITY DEFINER;
+ 
+ CREATE FUNCTION dblink_connect_u (text, text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_connect'
+ LANGUAGE C STRICT SECURITY DEFINER;
+ 
+ REVOKE ALL ON FUNCTION dblink_connect_u (text) FROM public;
+ REVOKE ALL ON FUNCTION dblink_connect_u (text, text) FROM public;
+ 
+ CREATE FUNCTION dblink_disconnect ()
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_disconnect'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_disconnect (text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_disconnect'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_open (text, text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_open'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_open (text, text, boolean)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_open'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_open (text, text, text)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_open'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_open (text, text, text, boolean)
+ RETURNS text
+ AS 'MODULE_PATHNAME','dblink_open'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_fetch (text, int)
+ RETURNS setof record
+ AS 'MODULE_PATHNAME','dblink_fetch'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_fetch (text, int, boolean)
+ RETURNS setof record
+ AS 'MODULE_PATHNAME','dblink_fetch'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_fetch (text, text, int)
+ RETURNS setof record
+ AS 'MODULE_PATHNAME','dblink_fetch'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION dblink_fetch (text, text, int, boolean)
+ 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-01 Thread Peter Eisentraut
On tor, 2012-03-01 at 20:56 +0900, Shigeru Hanada wrote:
 How about moving postgresql_fdw_validator into dblink,

That's probably a good move.  If this were C++, we might try to subclass
this whole thing a bit, to avoid code duplication, but I don't see an
easy way to do that here.

  with renaming to dblink_fdw_validator? 

Well, it's not the validator of the dblink_fdw, so maybe something like
basic_postgresql_fdw_validator.




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-03-01 Thread Shigeru Hanada
2012/3/2 Peter Eisentraut pete...@gmx.net:
  with renaming to dblink_fdw_validator?

 Well, it's not the validator of the dblink_fdw, so maybe something like
 basic_postgresql_fdw_validator.

-1, for the same reason: it's not the validator of basic_postgresql_fdw.

Using "fdw" in the name of a validator which doesn't have an actual FDW
might confuse users.  Would dblink_validator or libpq_option_validator
be better?

Another possible idea is creating a dblink_fdw, which uses the
validator, during CREATE EXTENSION dblink, for users who store
connection information in FDW objects.

-- 
Shigeru Hanada




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-29 Thread Shigeru Hanada
(2012/02/29 4:07), Peter Eisentraut wrote:
 Let's at least be clear about the reasons here.  The fact that
 postgresql_fdw_validator exists means (a) there is a possible naming
 conflict that has not been discussed yet, and/or (b) the name is already
 settled and we need to think of a way to make postgresql_fdw_validator
 work with the new actual FDW.

We can avoid the name conflict by using postgres_fdw or pgsql_fdw, but
that doesn't solve the fundamental issue.  ISTM that maintaining two
similar validators is wasteful and confusing, and that there should be
just one FDW for PostgreSQL, at least in the core distribution.

The current pgsql_fdw_validator accepts every FDW option which is
accepted by postgresql_fdw_validator, and additionally accepts
FDW-specific options such as fetch_count.  So, if dblink can ignore
unknown FDW options, pgsql_fdw_validator can be used to create foreign
servers for dblink connections.

How about removing postgresql_fdw_validator from the backend binary, and
changing dblink to use contrib/postgresql_fdw's validator?  It breaks
some backward compatibility and requires contrib/postgresql_fdw to be
installed before contrib/dblink can be used with foreign servers, but
ISTM that wouldn't be so serious.

Of course, dblink is still usable by itself if the user specifies
connection information as a key = value string rather than a server name.

One concern is how to avoid a duplicated list of valid libpq options.
Adding a new libpq function like the one below, which returns 1 when the
given name is a valid libpq option, would help.

int PQisValidOption(const char *keyword);

-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-29 Thread Pavel Stehule
2012/2/29 Shigeru Hanada shigeru.han...@gmail.com:
 (2012/02/29 4:07), Peter Eisentraut wrote:
 Let's at least be clear about the reasons here.  The fact that
 postgresql_fdw_validator exists means (a) there is a possible naming
 conflict that has not been discussed yet, and/or (b) the name is already
 settled and we need to think of a way to make postgresql_fdw_validator
 work with the new actual FDW.

 We can avoid conflict of name by using postgres_fdw or pgsql_fdw, but it
 doesn't solve fundamental issue.  ISTM that maintaining two similar
 validators is wasteful and confusing, and FDW for PostgreSQL should be
 just one, at least in the context of core distribution.

 Current pgsql_fdw_validator accepts every FDW options which is accepted
 by postgresql_fdw_validator, and additionally accepts FDW specific
 options such as fetch_count.  So, if dblink can ignore unknown FDW
 options, pgsql_fdw_validator can be used to create foreign servers for
 dblink connection.

 How about removing postgresql_fdw_validator from backend binary, and
 changing dblink to use contrib/postgresql_fdw's validator?  It breaks
 some backward compatibility and requires contrib/postgresql_fdw to be
 installed before using contrib/dblink with foreign servers, but ISTM
 that it doesn't become so serious.


+1

pavel stehule

 Of course dblink is still available by itself if user specifies
 connection information with key = value string, not with server name.

 One concern is how to avoid duplicated list of valid libpq options.
 Adding new libpq function, like below, which returns 1 when given name
 is a valid libpq option would help.

    int PQisValidOption(const char *keyword);

 --
 Shigeru Hanada




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-29 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 How about removing postgresql_fdw_validator from backend binary, and
 changing dblink to use contrib/postgresql_fdw's validator?  It breaks
 some backward compatibility and requires contrib/postgresql_fdw to be
 installed before using contrib/dblink with foreign servers, but ISTM
 that it doesn't become so serious.

I don't think that creating such a dependency is acceptable.
Even if we didn't mind the dependency, you said yourself that
contrib/postgresql_fdw's validator will accept stuff that's not
appropriate for dblink.

If we don't think postgresql_fdw_validator belongs in core after all,
we should just move it to dblink.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-28 Thread Robert Haas
On Fri, Feb 24, 2012 at 5:31 PM, Peter Eisentraut pete...@gmx.net wrote:
 Could we name this postgresql_fdw instead?  We already have several
 ${productname}_fdw out there, and I don't want to get in the business of
 having to guess variant spellings.

If you don't like variant spellings, having anything to do with
PostgreSQL, aka Postgres, and usually discussed on the pgsql-* mailing
lists, is probably a bad idea.

Go Postgre!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-28 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Fri, Feb 24, 2012 at 5:31 PM, Peter Eisentraut pete...@gmx.net wrote:
 Could we name this postgresql_fdw instead?  We already have several
 ${productname}_fdw out there, and I don't want to get in the business of
 having to guess variant spellings.

 If you don't like variant spellings, having anything to do with
 PostgreSQL, aka Postgres, and usually discussed on the pgsql-* mailing
 lists, is probably a bad idea.

[ snicker ]  But still, Peter has a point: pgsql is not a name for the
product, it's at best an abbreviation.  We aren't calling the other
thing orcl_fdw or ora_fdw.

I think either postgres_fdw or postgresql_fdw would be fine.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-28 Thread Robert Haas
On Tue, Feb 28, 2012 at 11:02 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Fri, Feb 24, 2012 at 5:31 PM, Peter Eisentraut pete...@gmx.net wrote:
 Could we name this postgresql_fdw instead?  We already have several
 ${productname}_fdw out there, and I don't want to get in the business of
 having to guess variant spellings.

 If you don't like variant spellings, having anything to do with
 PostgreSQL, aka Postgres, and usually discussed on the pgsql-* mailing
 lists, is probably a bad idea.

 [ snicker ]  But still, Peter has a point: pgsql is not a name for the
 product, it's at best an abbreviation.  We aren't calling the other
 thing orcl_fdw or ora_fdw.

 I think either postgres_fdw or postgresql_fdw would be fine.

I liked the shorter name, myself, but I'm not going to make a big deal about it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-28 Thread Peter Eisentraut
On tis, 2012-02-28 at 11:20 -0500, Robert Haas wrote:
  [ snicker ]  But still, Peter has a point: pgsql is not a name for
 the
  product, it's at best an abbreviation.  We aren't calling the other
  thing orcl_fdw or ora_fdw.
 
  I think either postgres_fdw or postgresql_fdw would be fine.
 
 I liked the shorter name, myself, but I'm not going to make a big deal
 about it.

Let's at least be clear about the reasons here.  The fact that
postgresql_fdw_validator exists means (a) there is a possible naming
conflict that has not been discussed yet, and/or (b) the name is already
settled and we need to think of a way to make postgresql_fdw_validator
work with the new actual FDW.





Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-28 Thread David E. Wheeler
On Feb 28, 2012, at 8:20 AM, Robert Haas wrote:

 I liked the shorter name, myself, but I'm not going to make a big deal about 
 it.

pg_ is used quite a bit.  What about pg_fdw?

David


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-27 Thread Shigeru Hanada
(2012/02/25 7:31), Peter Eisentraut wrote:
 Could we name this postgresql_fdw instead?  We already have several
 ${productname}_fdw out there, and I don't want to get in the business of
 having to guess variant spellings.

I worry about a name conflict with the existing postgresql_fdw_validator,
which is implemented in the backend binary and used by contrib/dblink.
I thought that we should use another name for the PostgreSQL FDW unless
we can change the specification of dblink connection strings.

-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-24 Thread Peter Eisentraut
Could we name this postgresql_fdw instead?  We already have several
${productname}_fdw out there, and I don't want to get in the business of
having to guess variant spellings.





Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-21 Thread Etsuro Fujita
(2012/02/15 20:50), Etsuro Fujita wrote:
 (2012/02/14 23:50), Tom Lane wrote:
 (2012/02/14 17:40), Etsuro Fujita wrote:
 As discussed at
 that thread, it would have to change the PlanForeignScan API to let the
 FDW generate multiple paths and dump them all to add_path instead of
 returning a FdwPlan struct.

 I would really like to see that happen in 9.2, because the longer we let
 that mistake live, the harder it will be to change.  More and more FDWs
 are getting written.  I don't think it's that hard to do: we just have
 to agree that PlanForeignScan should return void and call add_path for
 itself, possibly more than once.
 
 Agreed.  I fixed the PlanForeignScan API.  Please find attached a patch.
 
 If we do that, I'm inclined to think
 we could get rid of the separate Node type FdwPlan, and just incorporate
 List *fdw_private into ForeignPath and ForeignScan.
 
 +1  While the patch retains the struct FdwPlan, I would like to get rid
 of it at next version of the patch.

Please find attached an updated version of the patch.

Best regards,
Etsuro Fujita
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 25,30 
--- 25,31 
  #include "miscadmin.h"
  #include "nodes/makefuncs.h"
  #include "optimizer/cost.h"
+ #include "optimizer/pathnode.h"
  #include "utils/rel.h"
  #include "utils/syscache.h"
  
***
*** 93,99  PG_FUNCTION_INFO_V1(file_fdw_validator);
  /*
   * FDW callback routines
   */
! static FdwPlan *filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
--- 94,100 
  /*
   * FDW callback routines
   */
! static void filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
***
*** 406,432  get_file_fdw_attribute_options(Oid relid)
  
  /*
   * filePlanForeignScan
!  *Create a FdwPlan for a scan on the foreign table
   */
! static FdwPlan *
  filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel)
  {
!   FdwPlan    *fdwplan;
char   *filename;
List   *options;
  
/* Fetch options --- we only need filename at this point */
	fileGetOptions(foreigntableid, &filename, &options);
  
!   /* Construct FdwPlan with cost estimates */
!   fdwplan = makeNode(FdwPlan);
	estimate_costs(root, baserel, filename,
!  &fdwplan->startup_cost, &fdwplan->total_cost);
!   fdwplan->fdw_private = NIL; /* not used */
  
!   return fdwplan;
  }
  
  /*
--- 407,439 
  
  /*
   * filePlanForeignScan
!  *Create the (single) path for a scan on the foreign table
   */
! static void
  filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel)
  {
!   ForeignPath *pathnode = makeNode(ForeignPath);
char   *filename;
List   *options;
  
+   pathnode->path.pathtype = T_ForeignScan;
+   pathnode->path.parent = baserel;
+   pathnode->path.pathkeys = NIL;  /* result is always unordered */
+   pathnode->path.required_outer = NULL;
+   pathnode->path.param_clauses = NIL;
+   pathnode->fdw_private = NIL;/* not used */
+ 
/* Fetch options --- we only need filename at this point */
	fileGetOptions(foreigntableid, &filename, &options);
  
!   /* Estimate costs of scanning the foreign table */
	estimate_costs(root, baserel, filename,
!  &pathnode->path.startup_cost, &pathnode->path.total_cost);
!   pathnode->path.rows = baserel->rows;
  
!   add_path(baserel, (Path *) pathnode);
  }
  
  /*
*** a/doc/src/sgml/fdwhandler.sgml
--- b/doc/src/sgml/fdwhandler.sgml
***
*** 88,108 
  
  <para>
<programlisting>
! FdwPlan *
  PlanForeignScan (Oid foreigntableid,
                   PlannerInfo *root,
                   RelOptInfo *baserel);
</programlisting>

!  Plan a scan on a foreign table. This is called when a query is planned.
   <literal>foreigntableid</> is the <structname>pg_class</> OID of the
   foreign table.  <literal>root</> is the planner's global information
   about the query, and <literal>baserel</> is the planner's information
   about this table.
!  The function must return a palloc'd struct that contains cost estimates
!  plus any FDW-private information that is needed to execute the foreign
!  scan at a later time.  (Note that the private information must be
!  represented in a form that <function>copyObject</> knows how to 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-20 Thread Albe Laurenz
I wrote:
 Shigeru Hanada wrote:
 - Since a rescan is done by rewinding the cursor, is it necessary
   to have any other remote isolation level than READ COMMITTED?
   There is only one query issued per transaction.

 If multiple foreign tables on a foreign server are used in a local query,
 multiple queries are executed in a remote transaction.  So IMO isolation
 levels are useful even if the remote query is executed only once.
 
 Oh, I see. You are right.

I thought some more about this and changed my mind.

If your query involves foreign scans on two foreign tables on the same
foreign server, these should always see the same snapshot, because
that's how it works with two scans in one query on local tables.

So I think it should be REPEATABLE READ in all cases - SERIALIZABLE
is not necessary as long as all you do is read.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-20 Thread Kevin Grittner
Albe Laurenz laurenz.a...@wien.gv.at wrote:
 
 If your query involves foreign scans on two foreign tables on the
 same foreign server, these should always see the same snapshot,
 because that's how it works with two scans in one query on local
 tables.
 
That makes sense.
 
 So I think it should be REPEATABLE READ in all cases -
 SERIALIZABLE is not necessary as long as all you do is read.
 
That depends on whether you only want to see states of the database
which are consistent with later states of the database and any
invariants enforced by triggers or other software.  See this example
of how a read-only transaction can see a bogus state at REPEATABLE
READ or less strict transaction isolation:
 
http://wiki.postgresql.org/wiki/SSI#Read_Only_Transactions
 
Perhaps if the transaction using pgsql_fdw is running at the
SERIALIZABLE transaction isolation level, it should run the remote
queries at that level, otherwise at REPEATABLE READ.
 
-Kevin



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-20 Thread Albe Laurenz
Kevin Grittner wrote:
  If your query involves foreign scans on two foreign tables on the
  same foreign server, these should always see the same snapshot,
  because that's how it works with two scans in one query on local
  tables.
 
 That makes sense.

  So I think it should be REPEATABLE READ in all cases -
  SERIALIZABLE is not necessary as long as all you do is read.
 
 That depends on whether you only want to see states of the database
 which are consistent with later states of the database and any
 invariants enforced by triggers or other software.  See this example
 of how a read-only transaction can see a bogus state at REPEATABLE
 READ or less strict transaction isolation:
 
 http://wiki.postgresql.org/wiki/SSI#Read_Only_Transactions
 
 Perhaps if the transaction using the pgsql_fdw is running at the
 SERIALIZABLE transaction isolation level, it should run the queries
  at that level, otherwise at REPEATABLE READ.

I read the example carefully, and it seems to me that it is necessary
for the read-only transaction (T3) to be SERIALIZABLE so that
T1 is aborted and the state that T3 saw remains valid.

If I understand right, I agree with your correction.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-20 Thread Kevin Grittner
Albe Laurenz laurenz.a...@wien.gv.at wrote:
 
 I read the example carefully, and it seems to me that it is
 necessary for the read-only transaction (T3) to be SERIALIZABLE so
 that T1 is aborted and the state that T3 saw remains valid.
 
Correct.
 
 If I understand right, I agree with your correction.
 
:-)
 
-Kevin



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-20 Thread Shigeru Hanada
2012/02/21 0:58 Kevin Grittner kevin.gritt...@wicourts.gov:

 Albe Laurenz laurenz.a...@wien.gv.at wrote:

  I read the example carefully, and it seems to me that it is
  necessary for the read-only transaction (T3) to be SERIALIZABLE so
  that T1 is aborted and the state that T3 saw remains valid.

 Correct.
Hm, agreed that isolation levels <= REPEATABLE READ are not sufficient for
pgsql_fdw's usage.  I'll examine the example and fix pgsql_fdw.

Thanks.


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-18 Thread Kohei KaiGai
2012/02/17 6:08 Shigeru Hanada shigeru.han...@gmail.com:
 (2012/02/17 2:02), Kohei KaiGai wrote:
 I found a strange behavior with v10. Can you reproduce it?
 snip
 I tried to raise an error on remote side.

postgres=# select * FROM ftbl WHERE 100 / (a - 3) > 0;
The connection to the server was lost. Attempting reset: Failed.
The connection to the server was lost. Attempting reset: Failed.
!  \q

 I could reproduce the error by omitting CFLAGS=-O0 from the configure
 options.  I usually use this in my development environment so that gdb
 debugging works correctly, so I hadn't noticed this issue.  I should
 test in an optimized environment too...

 Expected result in that case is:

 postgres=# select * from pgbench_accounts where 100 / (aid - 3) > 0;
 ERROR:  could not fetch rows from foreign server
 DETAIL:  ERROR:  division by zero

 HINT:  FETCH 1 FROM pgsql_fdw_cursor_0
 postgres=#

 This is the PG_CATCH block at execute_query(). fetch_result() raises
 an error, which shall be caught to release the PGresult.
 Although res should be NULL at this point, PQclear was called with
 a non-zero value according to the call trace.

 More strangely, I tried to inject elog(INFO, ...) to show the value of res
 at this point. Then, it became impossible to reproduce when I tried to
 show the pointer of res with elog(INFO, "res = %p", res);

 Why does res have a non-zero value, even though it was cleared prior
 to fetch_result() and an error was raised within this function?

 I've found that the problem is uninitialized PGresult variables.
 An uninitialized PGresult pointer is used in some places, so its value is
 garbage in the PG_CATCH block when the assignment code has been interrupted
 by longjmp.

 The recommended style would probably be like this:

 <pseudo_code>
PGresult *res = NULL;    /* must be NULL in PG_CATCH */

PG_TRY();
{
res = func_might_throw_exception();
        if (PQresultStatus(res) != PGRES_xxx_OK)
{
/* error handling, pass message to caller */
ereport(ERROR, ...);
}

/* success case, use result of query and release it */
...
PQclear(res);
}
PG_CATCH();
{
PQclear(res);
PG_RE_THROW();
/* caller should catch this exception. */
}
 </pseudo_code>

 I had written the assignment of the PGresult pointer before the PG_TRY
 block, so I misunderstood that the pointer always has a valid value after
 that line.  Fixes for this issue are:

 (1) Initialize the PGresult pointer with NULL, if it is used in PG_CATCH.
 (2) Move the PGresult assignment into the PG_TRY block so that we can get
 a compiler warning about uninitialized variables, just in case.

 Please find attached a patch including fixes for this issue.

I marked this patch as Ready for Committer, since I have nothing more
to comment on.

I'd like a committer to review this patch and get it merged.

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-17 Thread Albe Laurenz
Shigeru Hanada wrote:
 - Since a rescan is done by rewinding the cursor, is it necessary
   to have any other remote isolation level than READ COMMITTED?
   There is only one query issued per transaction.
 
 If multiple foreign tables on a foreign server are used in a local
 query, multiple queries are executed in a remote transaction.  So IMO
 isolation levels are useful even if the remote query is executed only
 once.

Oh, I see. You are right.

Yours,
Laurenz Albe



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-16 Thread Albe Laurenz
Shigeru Hanada wrote:
 Thanks for the review.  Attached patches are the revised versions, though
 only fdw_helper_v5.patch is unchanged.

Two questions:
- Is it on purpose that you can specify all SSL client options
  except sslcompression?
- Since a rescan is done by rewinding the cursor, is it necessary
  to have any other remote isolation level than READ COMMITTED?
  There is only one query issued per transaction.

Yours,
Laurenz Albe




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-16 Thread Kohei KaiGai
I found a strange behavior with v10. Can you reproduce it?

In case of ftbl is declared as follows:
  postgres=# select * FROM ftbl;
   a |  b
  ---+-
   1 | aaa
   2 | bbb
   3 | ccc
   4 | ddd
   5 | eee
  (5 rows)

I tried to raise an error on remote side.

  postgres=# select * FROM ftbl WHERE 100 / (a - 3) > 0;
  The connection to the server was lost. Attempting reset: Failed.
  The connection to the server was lost. Attempting reset: Failed.
  ! \q

Its call-trace was:

(gdb) bt
#0  0x0031030810a4 in free () from /lib64/libc.so.6
#1  0x7f2caa620bd9 in PQclear (res=0x2102500) at fe-exec.c:679
#2  0x7f2caa83c4db in execute_query (node=0x20f20a0) at pgsql_fdw.c:722
#3  0x7f2caa83c64a in pgsqlIterateForeignScan (node=0x20f20a0)
at pgsql_fdw.c:402
#4  0x005c120f in ForeignNext (node=0x20f20a0) at nodeForeignscan.c:50
#5  0x005a9b37 in ExecScanFetch (recheckMtd=0x5c11c0 ForeignRecheck,
accessMtd=0x5c11d0 ForeignNext, node=0x20f20a0) at execScan.c:82
#6  ExecScan (node=0x20f20a0, accessMtd=0x5c11d0 ForeignNext,
recheckMtd=0x5c11c0 ForeignRecheck) at execScan.c:132
#7  0x005a2128 in ExecProcNode (node=0x20f20a0) at execProcnode.c:441
#8  0x0059edc2 in ExecutePlan (dest=0x210f280,
direction=<optimized out>, numberTuples=0, sendTuples=1 '\001',
operation=CMD_SELECT, planstate=0x20f20a0, estate=0x20f1f88)
at execMain.c:1449

This is the PG_CATCH block at execute_query(). fetch_result() raises
an error, which shall be caught to release the PGresult.
Although res should be NULL at this point, PQclear was called with
a non-zero value according to the call trace.

More strangely, I tried to inject elog(INFO, ...) to show the value of res
at this point. Then, it became impossible to reproduce when I tried to
show the pointer of res with elog(INFO, "res = %p", res);

Why does res have a non-zero value, even though it was cleared prior
to fetch_result() and an error was raised within this function?

Thanks,

2012/02/16 13:41 Shigeru Hanada shigeru.han...@gmail.com:
 Kaigai-san,

 Thanks for the review.  Attached patches are the revised versions, though
 only fdw_helper_v5.patch is unchanged.

 (2012/02/16 0:09), Kohei KaiGai wrote:
 [memory context of tuple store]
 It calls tuplestore_begin_heap() under the memory context of
 festate->scan_cxt at pgsqlBeginForeignScan.

 Yes, it's because tuplestore uses a context which was current when
 tuplestore_begin_heap was called.  I want to use per-scan context for
 tuplestore, to keep its content tuples alive through the scan.

 On the other hand, tuplestore_gettupleslot() is called under the
 memory context of festate->tuples.

 Yes, result tuples to be returned to executor should be allocated in
 per-scan context and live until next IterateForeignScan (or
 EndForeignScan),  because such tuple will be released via ExecClearTuple
 in next IterateForeignScan call.  If we don't switch context to per-scan
 context, result tuple is allocated in per-tuple context and cause
 double-free and server crash.

 I could not find any callback functions being invoked on errors,
 so I suspect the memory objects acquired within tuplestore_begin_heap()
 would be leaked, even though it was my suggestion to create a sub-context
 under the existing one.

 How did you confirm that no callback function is invoked on errors?  I
 think that memory objects acquired within tuplestore_begin_heap (I guess
 you mean excluding stored tuples, right?) are released during cleanup of
 the aborted transaction.  I tested that by adding elog(ERROR) to the tail
 of store_result() for an intentional error, and executing a large query
 100 times in a session.  I saw the VIRT value (via the top command) come
 down to a constant level after every query.

 In my opinion, it is a good choice to use the es_query_cxt of the supplied
 EState.
 What prevents applying this per-query memory context?

 Ah, I had confused the context management of pgsql_fdw...  I fixed
 pgsql_fdw to create a per-scan context as a child of es_query_cxt in
 BeginForeignScan, and use it for the tuplestore of the scan.  So, the
 tuplestore and its contents are released correctly at EndForeignScan, or
 at cleanup of the aborted transaction in the error case.

 You mention PGresult being malloc()'ed. However, it seems to me
 fetch_result() and store_result() copy the contents of the malloc()'ed
 area to the palloc()'ed area, and the PGresult is released on an error
 using a PG_TRY() ... PG_CATCH() block.

 While thinking about this comment, I found a double-free bug of PGresult
 in execute_query, thanks :)

 But, sorry, I'm not sure what the concern you show here is.  The reason
 for copying tuples from the malloc'ed area to the palloc'ed area is to
 release the PGresult before returning from the IterateForeignScan call.
 The reason for using the PG_TRY block is to ensure that the PGresult is
 released before jumping back upstream in the error case.

 [Minor comments]
 Please set the sql variable to NULL at begin_remote_tx().
 The compiler raises a warning due to references of an uninitialized
 variable, even though the code path never runs.

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-16 Thread Kohei KaiGai
2012/02/16 13:41 Shigeru Hanada shigeru.han...@gmail.com:
 Kaigai-san,

 Thanks for the review.  Attached patches are the revised versions, though
 only fdw_helper_v5.patch is unchanged.

 (2012/02/16 0:09), Kohei KaiGai wrote:
 [memory context of tuple store]
 It calls tuplestore_begin_heap() under the memory context of
 festate->scan_cxt at pgsqlBeginForeignScan.

 Yes, it's because tuplestore uses a context which was current when
 tuplestore_begin_heap was called.  I want to use per-scan context for
 tuplestore, to keep its content tuples alive through the scan.

 On the other hand, tuplestore_gettupleslot() is called under the
 memory context of festate->tuples.

 Yes, result tuples to be returned to executor should be allocated in
 per-scan context and live until next IterateForeignScan (or
 EndForeignScan),  because such tuple will be released via ExecClearTuple
 in next IterateForeignScan call.  If we don't switch context to per-scan
 context, result tuple is allocated in per-tuple context and cause
 double-free and server crash.

 I could not find any callback functions being invoked on errors,
 so I suspect the memory objects acquired within tuplestore_begin_heap()
 would be leaked, even though it was my suggestion to create a sub-context
 under the existing one.

 How did you confirm that no callback function is invoked on errors?  I
 think that memory objects acquired within tuplestore_begin_heap (I guess
 you mean excluding stored tuples, right?) are released during cleanup of
 the aborted transaction.  I tested that by adding elog(ERROR) to the tail
 of store_result() for an intentional error, and executing a large query
 100 times in a session.  I saw the VIRT value (via the top command) come
 down to a constant level after every query.

Oops, I overlooked the point where MessageContext and its children get
reset. However, given its name, I don't believe it was the right usage of
that memory context.
As the latest version does, es_query_cxt is the right way to acquire
memory objects with per-query duration.

 In my opinion, it is a good choice to use the es_query_cxt of the supplied
 EState.
 What prevents applying this per-query memory context?

 Ah, I had confused the context management of pgsql_fdw...  I fixed
 pgsql_fdw to create a per-scan context as a child of es_query_cxt in
 BeginForeignScan, and use it for the tuplestore of the scan.  So, the
 tuplestore and its contents are released correctly at EndForeignScan, or
 at cleanup of the aborted transaction in the error case.

I believe it is right direction.

 You mention PGresult being malloc()'ed. However, it seems to me
 fetch_result() and store_result() copy the contents of the malloc()'ed
 area to the palloc()'ed area, and the PGresult is released on an error
 using a PG_TRY() ... PG_CATCH() block.

 While thinking about this comment, I found a double-free bug of PGresult
 in execute_query, thanks :)

Unfortunately, I found the strange behavior around this code.
I suspect an interaction between longjmp and compiler optimization,
but it is not certain right now.

I'd like to push this patch to committer review after this problem gets
closed.

Right now, I don't have any more comments on this patch.

 But, sorry, I'm not sure what the concern you show here is.  The reason
 for copying tuples from the malloc'ed area to the palloc'ed area is to
 release the PGresult before returning from the IterateForeignScan call.
 The reason for using the PG_TRY block is to ensure that the PGresult is
 released before jumping back upstream in the error case.

 [Minor comments]
 Please set the sql variable to NULL at begin_remote_tx().
 The compiler raises a warning due to references of an uninitialized
 variable, even though the code path never runs.

 Fixed.  BTW, just out of curiosity, which compiler do you use?  My
 compiler, gcc (GCC) 4.6.0 20110603 (Red Hat 4.6.0-10) on Fedora 15,
 doesn't warn about it.

I use Fedora 16, and GCC 4.6.2.

[kaigai@iwashi pgsql_fdw]$ gcc --version
gcc (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1)

It is not a matter related to the compiler version, but a common
convention in PostgreSQL code. You will likely find source code comments
like /* keep compiler quiet */

Thanks,
-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-16 Thread Shigeru Hanada
(2012/02/17 0:15), Albe Laurenz wrote:
 Shigeru Hanada wrote:
  Thanks for the review.  Attached patches are the revised versions, though
  only fdw_helper_v5.patch is unchanged.
 
 Two questions:
 - Is it on purpose that you can specify all SSL client options
except sslcompression?

No, just an oversight.  Good catch.

  - Since a rescan is done by rewinding the cursor, is it necessary
 to have any other remote isolation level than READ COMMITTED?
 There is only one query issued per transaction.
 
 If multiple foreign tables on a foreign server are used in a local query,
 multiple queries are executed in a remote transaction.  So IMO isolation
 levels are useful even if the remote query is executed only once.

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-15 Thread Etsuro Fujita
(2012/02/14 23:50), Tom Lane wrote:
 Shigeru Hanadashigeru.han...@gmail.com  writes:
 (2012/02/14 17:40), Etsuro Fujita wrote:
 As discussed at
 that thread, it would have to change the PlanForeignScan API to let the
 FDW generate multiple paths and dump them all to add_path instead of
 returning a FdwPlan struct.
 
  Multiple valuable Paths for a scan of a foreign table by an FDW would be
  useful, but changing PlanForeignScan to return a list of FdwPlans in 9.2
  seems too hasty.
 
 I would really like to see that happen in 9.2, because the longer we let
 that mistake live, the harder it will be to change.  More and more FDWs
 are getting written.  I don't think it's that hard to do: we just have
 to agree that PlanForeignScan should return void and call add_path for
 itself, possibly more than once.

Agreed.  I fixed the PlanForeignScan API.  Please find attached a patch.

 If we do that, I'm inclined to think
  we could get rid of the separate Node type FdwPlan, and just incorporate
 List *fdw_private into ForeignPath and ForeignScan.

+1  While the patch retains the struct FdwPlan, I would like to get rid
of it in the next version of the patch.

Best regards,
Etsuro Fujita
*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***
*** 25,30 
--- 25,31 
  #include "miscadmin.h"
  #include "nodes/makefuncs.h"
  #include "optimizer/cost.h"
+ #include "optimizer/pathnode.h"
  #include "utils/rel.h"
  #include "utils/syscache.h"
  
***
*** 93,99  PG_FUNCTION_INFO_V1(file_fdw_validator);
  /*
   * FDW callback routines
   */
! static FdwPlan *filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
--- 94,100 
  /*
   * FDW callback routines
   */
! static void filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel);
  static void fileExplainForeignScan(ForeignScanState *node, ExplainState *es);
***
*** 406,432  get_file_fdw_attribute_options(Oid relid)
  
  /*
   * filePlanForeignScan
!  *Create a FdwPlan for a scan on the foreign table
   */
! static FdwPlan *
  filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel)
  {
	FdwPlan    *fdwplan;
	char       *filename;
	List       *options;
  
/* Fetch options --- we only need filename at this point */
fileGetOptions(foreigntableid, filename, options);
  
/* Construct FdwPlan with cost estimates */
fdwplan = makeNode(FdwPlan);
estimate_costs(root, baserel, filename,
   fdwplan->startup_cost, 
fdwplan->total_cost);
-   fdwplan->fdw_private = NIL; /* not used */
  
!   return fdwplan;
  }
  
  /*
--- 407,447 
  
  /*
   * filePlanForeignScan
!  *Create the (single) path for a scan on the foreign table
   */
! static void
  filePlanForeignScan(Oid foreigntableid,
PlannerInfo *root,
RelOptInfo *baserel)
  {
+   ForeignPath *pathnode = makeNode(ForeignPath);
	FdwPlan    *fdwplan;
	char       *filename;
	List       *options;
  
+   pathnode->path.pathtype = T_ForeignScan;
+   pathnode->path.parent = baserel;
+   pathnode->path.pathkeys = NIL;  /* result is always unordered */
+   pathnode->path.required_outer = NULL;
+   pathnode->path.param_clauses = NIL;
+ 
/* Fetch options --- we only need filename at this point */
fileGetOptions(foreigntableid, filename, options);
  
/* Construct FdwPlan with cost estimates */
fdwplan = makeNode(FdwPlan);
+   fdwplan->fdw_private = NIL; /* not used */
estimate_costs(root, baserel, filename,
   fdwplan->startup_cost, 
fdwplan->total_cost);
  
!   pathnode->fdwplan = fdwplan;
! 
!   /* Use costs estimated by FDW */
!   pathnode->path.rows = baserel->rows;
!   pathnode->path.startup_cost = fdwplan->startup_cost;
!   pathnode->path.total_cost = fdwplan->total_cost;
! 
!   add_path(baserel, (Path *) pathnode);
  }
  
  /*
*** a/doc/src/sgml/fdwhandler.sgml
--- b/doc/src/sgml/fdwhandler.sgml
***
*** 88,108 
  
  <para>
  <programlisting>
! FdwPlan *
  PlanForeignScan (Oid foreigntableid,
   PlannerInfo *root,
   RelOptInfo *baserel);
  </programlisting>
  
!  Plan a scan on a foreign table. This is called when a query is planned.
   <literal>foreigntableid</> is the <structname>pg_class</> OID of the
   foreign table.  <literal>root</> is the planner's global information
   about the query, and <literal>baserel</> is 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-15 Thread Kohei KaiGai
Hanada-san,

I checked the v9 patch; however, it still has some uncertain
implementation details.

[memory context of tuple store]
It calls tuplestore_begin_heap() under the memory context of
festate->scan_cxt at pgsqlBeginForeignScan.
On the other hand, tuplestore_gettupleslot() is called under the
memory context of festate->tuples.
I could not find any callback functions being invoked on errors,
so I suspect the memory objects acquired within tuplestore_begin_heap()
would be leaked, even though it was my suggestion to create a sub-context
under the existing one.

In my opinion, it is a good choice to use the es_query_cxt of the supplied
EState.
What prevents applying this per-query memory context?

You mention PGresult being malloc()'ed. However, it seems to me
fetch_result() and store_result() copy the contents of the malloc()'ed
area to the palloc()'ed area, and the PGresult is released on an error
using a PG_TRY() ... PG_CATCH() block.

[Minor comments]
Please set the sql variable to NULL at begin_remote_tx().
The compiler raises a warning due to references of an uninitialized
variable, even though the code path never runs.

It potentially causes a problem in the case when fetch_result() raises an
error because of an unexpected status (!= PGRES_TUPLES_OK).
One code path is not protected with PG_TRY(), and the other code path
will call PQclear on an already-released PGresult.

Although it is just a preference of mine, is the exprFunction necessary?
It seems to me the point of the push-down check is whether the supplied
node is a built-in object or not. So, a sufficient check is is_builtin()
on FuncExpr->funcid, OpExpr->opno, ScalarArrayOpExpr->opno, and so on.
It does not depend on whether the functions implementing these nodes
are built-in or not.

Thanks,

2012/02/14 9:09 Shigeru Hanada shigeru.han...@gmail.com:
 (2012/02/14 15:15), Shigeru Hanada wrote:
  Good catch, thanks.  I'll revise the pgsql_fdw tests a little more.

 Here are the updated patches.  In addition to Fujita-san's comment, I
 moved DROP OPERATOR statements to clean up section of test script.

 Regards,
 --
 Shigeru Hanada



-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-14 Thread Etsuro Fujita
(2012/02/14 15:15), Shigeru Hanada wrote:
 (2012/02/13 20:50), Etsuro Fujita wrote:
 The patches have been applied, but role-related regression tests failed
 in my environment.  I fixed it in a fashion similar to
 /src/test/regress/sql/foreign_data.sql.  Please find attached an updated
 patch for the regression tests.
 
 Good catch, thanks.  I'll revise the pgsql_fdw tests a little more.
 
 BTW, What do you think about this?

 http://archives.postgresql.org/pgsql-hackers/2012-01/msg00229.php
 
 I'm sorry that I've left the thread unfinished...  I've given up on
 proposing join push-down of foreign tables for 9.2, because it will take
 a while to achieve general semantics mapping for join push-down and WHERE
 clause push-down.  For 9.2, I'm proposing pgsql_fdw, which has WHERE
 clause push-down for built-in elements which are free from collation.
 I'd like to go back to that item after 9.2 development enters beta or
 RC, hopefully :)

OK.  But my question was about the PlanForeignScan API.  As discussed in
that thread, it would have to change the PlanForeignScan API to let the
FDW generate multiple paths and dump them all to add_path instead of
returning a FdwPlan struct.  With this change, I think it would also
have to add a new FDW API that is called from create_foreignscan_plan()
and lets the FDW generate a foreignscan plan for the base relation scanned
by the best path chosen by the postgres optimizer for itself.  What do you
think about it?

Best regards,
Etsuro Fujita




Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-14 Thread Shigeru Hanada
(2012/02/14 17:40), Etsuro Fujita wrote:
 OK.  But my question was about the PlanForeignScan API.

Sorry for misunderstanding. :(

  As discussed at
 that thread, it would have to change the PlanForeignScan API to let the
 FDW generate multiple paths and dump them all to add_path instead of
 returning a FdwPlan struct.  With this change, I think it would also
 have to add a new FDW API that is called from create_foreignscan_plan()
 and lets the FDW generate foreignscan plan for the base relation scanned
 by the best path choosed by postgres optimizer for itself.  What do you
 think about it?

Though I have only random thoughts about this issue at the moment...

Multiple valuable Paths for a scan of a foreign table by an FDW would be
useful, but changing PlanForeignScan to return a list of FdwPlans in 9.2
seems too hasty.  It would need more consideration about a general
interface for possible results such as:

* Full output (no WHERE push-down) is expensive both on the remote side
and in transfer.
* Filtered output (WHERE push-down) has cheap total costs when only a few
rows come through the filter.
* Ordered output (ORDER BY push-down) is expensive on the remote side, but
has a chance to omit the upper Sort node.
* Aggregated output (GROUP BY push-down) is expensive on the remote side,
but has a chance to omit the upper Agg node, and reduces data transfer.
* Limited output (LIMIT/OFFSET push-down) can reduce data transfer, and
has a chance to omit the upper Limit node.

Currently FDWs can consider only the first two, AFAIK.  If an FDW
generates multiple FdwPlans (Full and Filtered) and sets different
start-up costs and total costs on them (maybe the former has higher
start-up and lower total cost than the latter), the planner would choose
the better one for the whole plan.

In addition to changing FdwRoutine, it seems worth changing FdwPlan too
so that FDWs can return more information to the planner, such as pathkeys
and rows, for each possible path.

In short, I have some ideas to enhance foreign table scans, but IMO they
are half-baked and we don't have enough time to achieve them for 9.2.

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-14 Thread Etsuro Fujita
(2012/02/14 19:42), Shigeru Hanada wrote:
 (2012/02/14 17:40), Etsuro Fujita wrote:
   As discussed at
 that thread, it would have to change the PlanForeignScan API to let the
 FDW generate multiple paths and dump them all to add_path instead of
 returning a FdwPlan struct.  With this change, I think it would also
 have to add a new FDW API that is called from create_foreignscan_plan()
 and lets the FDW generate foreignscan plan for the base relation scanned
  by the best path chosen by the postgres optimizer for itself.  What do you
 think about it?

 In short, I have some ideas to enhance foreign table scans, but IMO they
 are half-baked and we don't have enough time to achieve them for 9.2.

OK.  Thank you for your answer.

Best regards,
Etsuro Fujita



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-14 Thread Tom Lane
Shigeru Hanada shigeru.han...@gmail.com writes:
 (2012/02/14 17:40), Etsuro Fujita wrote:
 As discussed at
 that thread, it would have to change the PlanForeignScan API to let the
 FDW generate multiple paths and dump them all to add_path instead of
 returning a FdwPlan struct.

  Multiple valuable Paths for a scan of a foreign table by an FDW would be
  useful, but changing PlanForeignScan to return a list of FdwPlans in 9.2
  seems too hasty.

I would really like to see that happen in 9.2, because the longer we let
that mistake live, the harder it will be to change.  More and more FDWs
are getting written.  I don't think it's that hard to do: we just have
to agree that PlanForeignScan should return void and call add_path for
itself, possibly more than once.  If we do that, I'm inclined to think
we could get rid of the separate Node type FdwPlan, and just incorporate
List *fdw_private into ForeignPath and ForeignScan.

This does mean that FDWs will be a bit more tightly coupled to the
planner, because they'll have to change whenever we add new fields to
struct Path; but that is not really something that happens often.

regards, tom lane



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-13 Thread Etsuro Fujita
(2012/02/10 20:39), Shigeru Hanada wrote:
 (2012/02/08 20:51), Shigeru Hanada wrote:
 Attached revised patches.  Changes from last version are below.
 snip
 
 I've found and fixed a bug which generates a wrong remote query when any
 column of a foreign table has been dropped.  Also a regression test for
 this case is added.
 
 I attached only pgsql_fdw_v8.patch, because pgsql_fdw_pushdown_v3.patch
 in the last post can still be applied onto the v8 patch.
 
 Regards,

The patches have been applied, but role-related regression tests failed
in my environment.  I fixed it in a fashion similar to
/src/test/regress/sql/foreign_data.sql.  Please find attached an updated
patch for the regression tests.

BTW, What do you think about this?

http://archives.postgresql.org/pgsql-hackers/2012-01/msg00229.php

Best regards,
Etsuro Fujita
*** sql/pgsql_fdw.sql.orig  2012-02-13 19:52:08.0 +0900
--- sql/pgsql_fdw.sql   2012-02-13 19:44:17.0 +0900
***
*** 2,7 
--- 2,19 
  -- create FDW objects
  -- ===
  
+ -- Clean up in case a prior regression run failed
+ 
+ -- Suppress NOTICE messages when roles don't exist
+ SET client_min_messages TO 'error';
+ 
+ DROP ROLE IF EXISTS pgsql_fdw_user;
+ 
+ RESET client_min_messages;
+ 
+ CREATE ROLE pgsql_fdw_user LOGIN SUPERUSER;
+ SET SESSION AUTHORIZATION 'pgsql_fdw_user';
+ 
  CREATE EXTENSION pgsql_fdw;
  
  CREATE SERVER loopback1 FOREIGN DATA WRAPPER pgsql_fdw;
***
*** 168,173 
--- 180,186 
  EXPLAIN (COSTS false) SELECT * FROM ft1 t1 WHERE t1.c1 = abs(t1.c2);
  EXPLAIN (COSTS false) SELECT * FROM ft1 t1 WHERE t1.c1 = t1.c2;
  DROP OPERATOR === (int, int) CASCADE;
+ DROP OPERATOR !== (int, int) CASCADE;
  DROP FUNCTION pgsql_fdw_abs(int);
  
  -- ===
***
*** 212,216 
  -- ===
  -- cleanup
  -- ===
  DROP EXTENSION pgsql_fdw CASCADE;
! 
--- 225,231 
  -- ===
  -- cleanup
  -- ===
+ DROP SCHEMA "S 1" CASCADE;
  DROP EXTENSION pgsql_fdw CASCADE;
! \c
! DROP ROLE pgsql_fdw_user;
*** expected/pgsql_fdw.out.orig 2012-02-13 19:52:03.0 +0900
--- expected/pgsql_fdw.out  2012-02-13 19:51:49.0 +0900
***
*** 1,6 
--- 1,13 
  -- ===
  -- create FDW objects
  -- ===
+ -- Clean up in case a prior regression run failed
+ -- Suppress NOTICE messages when roles don't exist
+ SET client_min_messages TO 'error';
+ DROP ROLE IF EXISTS pgsql_fdw_user;
+ RESET client_min_messages;
+ CREATE ROLE pgsql_fdw_user LOGIN SUPERUSER;
+ SET SESSION AUTHORIZATION 'pgsql_fdw_user';
  CREATE EXTENSION pgsql_fdw;
  CREATE SERVER loopback1 FOREIGN DATA WRAPPER pgsql_fdw;
  CREATE SERVER loopback2 FOREIGN DATA WRAPPER pgsql_fdw
***
*** 130,147 
  ALTER FOREIGN TABLE ft1 ALTER COLUMN c1 OPTIONS (colname 'C 1');
  ALTER FOREIGN TABLE ft2 ALTER COLUMN c1 OPTIONS (colname 'C 1');
  \dew+
!                                    List of foreign-data wrappers
!    Name    |  Owner   |      Handler      |      Validator      | Access privileges | FDW Options | Description 
! -----------+----------+-------------------+---------------------+-------------------+-------------+-------------
!  pgsql_fdw | postgres | pgsql_fdw_handler | pgsql_fdw_validator |                   |             | 
  (1 row)
  
  \des+
!                                         List of foreign servers
!    Name    |  Owner   | Foreign-data wrapper | Access privileges | Type | Version | FDW Options | Description 
! -----------+----------+----------------------+-------------------+------+---------+-------------+-------------
!  loopback1 | postgres | pgsql_fdw            |                   |      |         |             | 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-13 Thread Shigeru Hanada
(2012/02/13 20:50), Etsuro Fujita wrote:
 The patches have been applied, but role-related regression tests failed
 in my environment.  I fixed them in a similar fashion to
 /src/test/regress/sql/foreign_data.sql.  Please find attached an updated
 patch for the regression tests.

Good catch, thanks.  I'll revise the pgsql_fdw tests a little more.

 BTW, what do you think about this?
 
 http://archives.postgresql.org/pgsql-hackers/2012-01/msg00229.php

I'm sorry that I've left the thread unfinished...  I've given up on
proposing join push-down for foreign tables in 9.2, because it will take a
while to achieve general semantics mapping for join push-down and WHERE
clause push-down.  For 9.2, I'm proposing pgsql_fdw with WHERE clause
push-down for built-in elements which are free from collation issues.
I'd like to go back to that item after 9.2 development enters beta or
RC, hopefully :)

-- 
Shigeru Hanada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-08 Thread Shigeru Hanada
(2012/02/02 18:24), Marko Kreen wrote:
 I think you want this instead:
 
https://commitfest.postgresql.org/action/patch_view?id=769

With a modified version of pgsql_fdw which uses the row processor to
retrieve result tuples, I found a significant performance gain on simple
read-only pgbench, though the scale factor was very small (-s 3).  The
executed command was pgbench -S -n -c 5 -T 30.

Average tps (excluding connections establishing) over 3 measurements:

pgsql_fdw with SQL-cursor    :  622
pgsql_fdw with Row Processor : 1219 - 2.0x faster than SQL-cursor
w/o pgsql_fdw (direct access): 7719 - 6.3x faster than Row Processor

Row processor was almost 2x faster than SQL-cursor!  I'm looking forward
to this feature.

In addition to the performance gain, memory usage was of course kept at a
very low level. :)

Regards,
-- 
Shigeru Hanada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-06 Thread Shigeru Hanada
Thanks for the comments!

(2012/02/06 5:08), Kohei KaiGai wrote:
 Yes. In my opinion, one significant benefit of pgsql_fdw is to execute
 qualifiers on the distributed nodes; that enables utilizing multiple
 CPU resources efficiently.
 Duplicate checks are a reliable way to keep invisible tuples filtered
 out, indeed. But it would degrade one competitive characteristic of
 pgsql_fdw.
 
 https://github.com/kaigai/pg_strom/blob/master/plan.c#L693
 In my module, qualifiers being executable on device side are detached
 from the baserel->baserestrictinfo, and remaining qualifiers are chained
 to the list.
 The is_device_executable_qual() is equivalent to is_foreign_expr() in
 the pgsql_fdw module.

Agreed, I too think that pushed-down qualifiers should not be evaluated
on local side again from the viewpoint of performance.

 Of course, it is your decision, and I might miss something.
 
 BTW, what is the undesirable behavior on your previous implementation?

In early development, maybe while testing PREPARE/EXECUTE or DECLARE, I
saw that iterated execution of a foreign scan produced wrong results which
included rows that do NOT match the pushed-down qualifiers.  And, at last,
I could recall what happened at that time.  It was just a trivial bug I
made: I had removed pushed-down qualifiers in the Path generation
phase, so the generated plan node lost the qualifiers permanently.

In short, I'll remove remote qualifiers from baserestrictinfo, like
pg_strom does.
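The intended split can be sketched outside the backend as a simple partition step: clauses that a stand-in for is_foreign_expr() accepts go into a remote list (to be deparsed into the remote WHERE clause), and the rest stay local for the plan's Filter. The Qual type and the flag-based is_foreign_expr() below are hypothetical stand-ins for illustration, not the actual pgsql_fdw data structures.

```c
#include <stdbool.h>

/* Hypothetical stand-in for a restriction clause (RestrictInfo) */
typedef struct Qual {
    const char *expr;     /* deparsed expression text */
    bool        foreign;  /* precomputed: safe to evaluate remotely? */
} Qual;

/* Stand-in for is_foreign_expr(): here it just reads the flag */
static bool is_foreign_expr(const Qual *q)
{
    return q->foreign;
}

/*
 * Partition quals: remote ones would be deparsed into the remote query,
 * local ones remain in the local Filter, so nothing is evaluated twice.
 */
void classify_quals(const Qual *quals, int n,
                    const Qual **remote, int *nremote,
                    const Qual **local, int *nlocal)
{
    *nremote = *nlocal = 0;
    for (int i = 0; i < n; i++) {
        if (is_foreign_expr(&quals[i]))
            remote[(*nremote)++] = &quals[i];
        else
            local[(*nlocal)++] = &quals[i];
    }
}
```

With the thread's example, `a > 2` would land in the remote list and `f_leak(b)` in the local list, matching the split KaiGai expected in the EXPLAIN output.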

 [Design comment]
 I'm not sure of the reason why store_result() uses MessageContext to save
 the Tuplestorestate within PgsqlFdwExecutionState.
 The source code comment says it is used to avoid memory leaks in error
 cases. I have a similar experience from implementing my own fdw module,
 so I can understand that the per-scan context is already cleared at the timing
 of the resource-release callback; thus, handles to external resources have to
 be saved in a separate memory context.
 In my personal opinion, the right design is to declare a memory context for
 pgsql_fdw itself, instead of abusing an existing memory context.
 (A wiser design is to define a sub-memory-context for each foreign scan,
 then remove the sub-memory-context after releasing the handles.)

 I simply chose a built-in context which has a long enough lifespan, but now I
 think that using MessageContext directly is not a recommended way.  As you
 say, creating a new context as a child of MessageContext for each scan in
 BeginForeignScan (or the first IterateForeignScan) would be better.  Please
 see the attached patch.
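The lifetime discipline being discussed can be illustrated with a toy, self-contained arena: contexts form a parent/child tree, and deleting the per-scan child releases everything it owns while the parent (playing the role of MessageContext) survives. This is only a sketch of the idea under simplified assumptions; the real backend code would use AllocSetContextCreate() and MemoryContextDelete().

```c
#include <stdlib.h>

/* Toy memory context: owns a list of chunks and a list of child contexts */
typedef struct Block { struct Block *next; void *mem; } Block;
typedef struct Context {
    struct Context *parent;
    struct Context *child_head;   /* children die with their parent */
    struct Context *sibling;
    Block          *blocks;       /* chunks owned by this context */
} Context;

static int live_allocs = 0;       /* outstanding chunks, for observation */
int get_live_allocs(void) { return live_allocs; }

Context *ctx_create(Context *parent)
{
    Context *c = calloc(1, sizeof(Context));
    c->parent = parent;
    if (parent) {
        c->sibling = parent->child_head;
        parent->child_head = c;
    }
    return c;
}

void *ctx_alloc(Context *c, size_t n)
{
    Block *b = malloc(sizeof(Block));
    b->mem = malloc(n);
    b->next = c->blocks;
    c->blocks = b;
    live_allocs++;
    return b->mem;
}

/* Deleting a context frees its chunks and, recursively, its children */
void ctx_delete(Context *c)
{
    /* unlink from the parent's child list first */
    if (c->parent) {
        Context **p = &c->parent->child_head;
        while (*p != c)
            p = &(*p)->sibling;
        *p = c->sibling;
    }
    while (c->child_head)
        ctx_delete(c->child_head);
    while (c->blocks) {
        Block *b = c->blocks;
        c->blocks = b->next;
        free(b->mem);
        free(b);
        live_allocs--;
    }
    free(c);
}
```

The point of the design choice: per-scan state created in the child context cannot leak past the scan, and anything still alive at end of message is reclaimed when the parent goes away.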

 One other option is getting rid of tuplestore by holding result rows as
 PGresult, and track it for error cases which might happen.
 ResourceOwner callback can be used to release PGresult on error, similar
 to PGconn.

 If we could have the set of results in a per-query memory context (thus
 no need for explicit release on error), it would be a more ideal design.
 Is it possible to implement that based on the libpq APIs?

Currently no, so I used a tuplestore even though it requires copying the
results.  However, Kyotaro Horiguchi's patch might make it possible.  I'm
reading his patch to determine whether it suits pgsql_fdw.

http://archives.postgresql.org/message-id/20120202143057.ga12...@gmail.com

 Please note that the per-query memory context has already been released
 when the ResourceOwner callback is launched, so it is unavailable
 if libpq requires releasing some resource.

I see.  We need to use a context which has a longer lifetime if we want to
track malloc'ed PGresults.  I already use CacheContext for connection
pooling, so linking PGresults to their source connection would be a solution.

 [Design comment]
 When BEGIN should be issued on the remote-side?
 The connect_pg_server() is the only chance to issue a BEGIN command
 at the remote side when the connection is opened. However, the connection
 shall be kept unless an error is raised. Thus, the remote side will
 continue to work within a single transaction block, even if local-side 
 iterates
 a pair of BEGIN and COMMIT.
 I'd like to suggest to close the transaction block at the timing of either
 end of the scan, transaction or sub-transaction.

 Indeed, remote transactions should be terminated at some timing.
 Terminating at the end of a scan seems troublesome because a connection
 might be shared by multiple scans in a query.  I'd prefer aborting
 remote transaction at the end of local query.  Please see
 abort_remote_tx in attached patch.

 It seems to me abort_remote_tx in ReleaseConnection() is reasonable.
 However, isn't it necessary to have an ABORT in GetConnection() the first time?

Hm, forcing the overhead of aborting a transaction onto all local queries is
unreasonable.  A redundant BEGIN doesn't cause an error but just generates a
WARNING, so I'll remove the abort_remote_tx preceding begin_remote_tx.

 [Comment to improve]
 It seems to me the design of exprFunction is not future-proof, if we add
 a new node type that contains two or more function calls, because it can
 return an oid of 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-05 Thread Kohei KaiGai
On February 1, 2012 12:15, Shigeru Hanada shigeru.han...@gmail.com wrote:
 (2012/01/30 4:39), Kohei KaiGai wrote:
 I checked the fdw_helper_funcs_v3.patch, pgsql_fdw_v5.patch and
 pgsql_fdw_pushdown_v1.patch. My comments are below.

 Thanks for the review!

 [BUG]
 Even though pgsql_fdw tries to push down qualifiers that are executable
 on the remote side at deparseSql(), it does not remove the qualifiers
 being pushed down from baserel->baserestrictinfo; thus, these
 qualifiers are eventually executed twice.

 See the result of this EXPLAIN.
postgres=# EXPLAIN SELECT * FROM ft1 WHERE a > 2 AND f_leak(b);
                             QUERY PLAN
 ------------------------------------------------------------------
  Foreign Scan on ft1  (cost=107.43..122.55 rows=410 width=36)
    Filter: (f_leak(b) AND (a > 2))
    Remote SQL: DECLARE pgsql_fdw_cursor_0 SCROLL CURSOR FOR SELECT
 a, b FROM public.t1 WHERE (a > 2)
 (3 rows)

 My expectation is (a > 2) being executed on the remote side and f_leak(b)
 being executed on the local side. But the filter of the foreign scan on ft1
 has both qualifiers. It has to be removed if a RestrictInfo gets pushed down.

 It's intentional that pgsql_fdw keeps pushed-down qualifiers in
 baserestrictinfo, because I saw some undesirable behavior when I
 implemented such optimization and a plan was reused, but the details are
 not clear to me now.  I'll try to recall what I saw...

 BTW, I think evaluating pushed-down qualifiers again on the local side is
 safe and has no semantic problem, though we must pay a little for the
 overhead.  Do you have concerns about performance?

Yes. In my opinion, one significant benefit of pgsql_fdw is to execute
qualifiers on the distributed nodes; that enables utilizing multiple
CPU resources efficiently.
Duplicate checks are a reliable way to keep invisible tuples filtered
out, indeed. But it would degrade one competitive characteristic of
pgsql_fdw.

https://github.com/kaigai/pg_strom/blob/master/plan.c#L693
In my module, qualifiers being executable on device side are detached
from the baserel->baserestrictinfo, and remaining qualifiers are chained
to the list.
The is_device_executable_qual() is equivalent to is_foreign_expr() in
the pgsql_fdw module.

Of course, it is your decision, and I might miss something.

BTW, what is the undesirable behavior on your previous implementation?

 [Design comment]
 I'm not sure of the reason why store_result() uses MessageContext to save
 the Tuplestorestate within PgsqlFdwExecutionState.
 The source code comment says it is used to avoid memory leaks in error
 cases. I have a similar experience from implementing my own fdw module,
 so I can understand that the per-scan context is already cleared at the timing
 of the resource-release callback; thus, handles to external resources have to
 be saved in a separate memory context.
 In my personal opinion, the right design is to declare a memory context for
 pgsql_fdw itself, instead of abusing an existing memory context.
 (A wiser design is to define a sub-memory-context for each foreign scan,
 then remove the sub-memory-context after releasing the handles.)

 I simply chose a built-in context which has a long enough lifespan, but now I
 think that using MessageContext directly is not a recommended way.  As you
 say, creating a new context as a child of MessageContext for each scan in
 BeginForeignScan (or the first IterateForeignScan) would be better.  Please
 see the attached patch.

 One other option is getting rid of tuplestore by holding result rows as
 PGresult, and track it for error cases which might happen.
 ResourceOwner callback can be used to release PGresult on error, similar
 to PGconn.

If we could have the set of results in a per-query memory context (thus
no need for explicit release on error), it would be a more ideal design.
Is it possible to implement that based on the libpq APIs?

Please note that the per-query memory context has already been released
when the ResourceOwner callback is launched, so it is unavailable
if libpq requires releasing some resource.

 [Design comment]
 When BEGIN should be issued on the remote-side?
 The connect_pg_server() is the only chance to issue a BEGIN command
 at the remote side when the connection is opened. However, the connection
 shall be kept unless an error is raised. Thus, the remote side will
 continue to work within a single transaction block, even if local-side 
 iterates
 a pair of BEGIN and COMMIT.
 I'd like to suggest to close the transaction block at the timing of either
 end of the scan, transaction or sub-transaction.

 Indeed, remote transactions should be terminated at some timing.
 Terminating at the end of a scan seems troublesome because a connection
 might be shared by multiple scans in a query.  I'd prefer aborting
 remote transaction at the end of local query.  Please see
 abort_remote_tx in attached patch.

It seems to me abort_remote_tx in ReleaseConnection() is reasonable.

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-05 Thread Shigeru Hanada

2012/2/2 Marko Kreen mark...@gmail.com:

I think you want this instead:

 https://commitfest.postgresql.org/action/patch_view?id=769


Somehow I've missed this cool feature.  Thanks for the suggestion!

--
Shigeru Hanada


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-02 Thread Marko Kreen
On Tue, Jan 31, 2012 at 8:56 PM, Robert Haas robertmh...@gmail.com wrote:
 2012/1/29 Kohei KaiGai kai...@kaigai.gr.jp:
     Remote SQL: DECLARE pgsql_fdw_cursor_0 SCROLL CURSOR FOR SELECT
 a, b FROM public.t1 WHERE (a > 2)
  (3 rows)

 Shouldn't we be using protocol-level cursors rather than SQL-level cursors?

I think you want this instead:

  https://commitfest.postgresql.org/action/patch_view?id=769

-- 
marko

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-02-01 Thread Shigeru Hanada
(2012/02/01 3:56), Robert Haas wrote:
 2012/1/29 Kohei KaiGai kai...@kaigai.gr.jp:
  Remote SQL: DECLARE pgsql_fdw_cursor_0 SCROLL CURSOR FOR SELECT
 a, b FROM public.t1 WHERE (a > 2)
   (3 rows)
 
 Shouldn't we be using protocol-level cursors rather than SQL-level cursors?

Yes, we should, if we had protocol-level cursors :)
I checked the libpq interface but couldn't find any function for
protocol-level cursors.

 [Design comment]
 When BEGIN should be issued on the remote-side?
 The connect_pg_server() is the only chance to issue a BEGIN command
 at the remote side when the connection is opened. However, the connection
 shall be kept unless an error is raised. Thus, the remote side will
 continue to work within a single transaction block, even if local-side 
 iterates
 a pair of BEGIN and COMMIT.
 I'd like to suggest to close the transaction block at the timing of either
 end of the scan, transaction or sub-transaction.
 
 I suspect this is ultimately going to need to be configurable.  Some
 people might want to close the transaction on the remote side ASAP,
 while other people might want to hold it open until commit.  For a
 first version I think it's most likely best to do whatever seems
 simplest to code, planning to add more options later.

I fixed pgsql_fdw to abort the remote transaction at the end of each local
query.  I chose this timing because a local query might include multiple
scans on the same foreign server.  I think this is the ASAP timing in
your comment.

It would be useful to make the length of the remote transaction match the
local one's.  I'll try RegisterXactCallback for this purpose, though we
need to preload the FDW module to catch a BEGIN preceding a query using
foreign tables.
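The RegisterXactCallback approach can be sketched with a standalone stand-in for the backend's callback machinery: the FDW registers a callback once, and when the local transaction ends the callback closes the remote transaction too. The names RegisterXactCallback and XactEvent mirror the real backend API, but everything below (including begin_remote_tx and setup_fdw_callbacks) is a self-contained toy for illustration, not actual backend or pgsql_fdw code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy version of the backend's transaction-callback machinery */
typedef enum { XACT_EVENT_COMMIT, XACT_EVENT_ABORT } XactEvent;
typedef void (*XactCallback)(XactEvent event, void *arg);

#define MAX_XACT_CALLBACKS 8
static struct { XactCallback func; void *arg; } callbacks[MAX_XACT_CALLBACKS];
static int ncallbacks = 0;

void RegisterXactCallback(XactCallback func, void *arg)
{
    callbacks[ncallbacks].func = func;
    callbacks[ncallbacks].arg = arg;
    ncallbacks++;
}

/* Invoked by the (toy) transaction machinery at local COMMIT/ABORT */
void CallXactCallbacks(XactEvent event)
{
    for (int i = 0; i < ncallbacks; i++)
        callbacks[i].func(event, callbacks[i].arg);
}

/* FDW side: keep the remote transaction open until the local one ends */
static bool remote_tx_open = false;

void begin_remote_tx(void)
{
    remote_tx_open = true;       /* real code would send BEGIN over libpq */
}

static void pgsql_fdw_xact_callback(XactEvent event, void *arg)
{
    (void) event;
    (void) arg;
    if (remote_tx_open)
        remote_tx_open = false;  /* would send COMMIT or ABORT remotely */
}

bool remote_tx_is_open(void) { return remote_tx_open; }

void setup_fdw_callbacks(void)
{
    RegisterXactCallback(pgsql_fdw_xact_callback, NULL);
}
```

This makes the remote transaction's length track the local one's, at the cost that the module must be loaded early enough to see the callback fire for the first transaction.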

-- 
Shigeru Hanada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-01-31 Thread Robert Haas
2012/1/29 Kohei KaiGai kai...@kaigai.gr.jp:
     Remote SQL: DECLARE pgsql_fdw_cursor_0 SCROLL CURSOR FOR SELECT
 a, b FROM public.t1 WHERE (a > 2)
  (3 rows)

Shouldn't we be using protocol-level cursors rather than SQL-level cursors?

 [Design comment]
 When BEGIN should be issued on the remote-side?
 The connect_pg_server() is the only chance to issue a BEGIN command
 at the remote side when the connection is opened. However, the connection
 shall be kept unless an error is raised. Thus, the remote side will
 continue to work within a single transaction block, even if local-side 
 iterates
 a pair of BEGIN and COMMIT.
 I'd like to suggest to close the transaction block at the timing of either
 end of the scan, transaction or sub-transaction.

I suspect this is ultimately going to need to be configurable.  Some
people might want to close the transaction on the remote side ASAP,
while other people might want to hold it open until commit.  For a
first version I think it's most likely best to do whatever seems
simplest to code, planning to add more options later.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2012-01-29 Thread Kohei KaiGai
Hi Harada-san,

I checked the fdw_helper_funcs_v3.patch, pgsql_fdw_v5.patch and
pgsql_fdw_pushdown_v1.patch. My comments are below.

[BUG]
Even though pgsql_fdw tries to push down qualifiers that are executable
on the remote side at deparseSql(), it does not remove the qualifiers
being pushed down from baserel->baserestrictinfo; thus, these
qualifiers are eventually executed twice.

See the result of this EXPLAIN.
  postgres=# EXPLAIN SELECT * FROM ft1 WHERE a > 2 AND f_leak(b);
                              QUERY PLAN
  ------------------------------------------------------------------
   Foreign Scan on ft1  (cost=107.43..122.55 rows=410 width=36)
     Filter: (f_leak(b) AND (a > 2))
     Remote SQL: DECLARE pgsql_fdw_cursor_0 SCROLL CURSOR FOR SELECT
  a, b FROM public.t1 WHERE (a > 2)
  (3 rows)

My expectation is (a > 2) being executed on the remote side and f_leak(b)
being executed on the local side. But the filter of the foreign scan on ft1
has both qualifiers. It has to be removed if a RestrictInfo gets pushed down.

[Design comment]
I'm not sure of the reason why store_result() uses MessageContext to save
the Tuplestorestate within PgsqlFdwExecutionState.
The source code comment says it is used to avoid memory leaks in error
cases. I have a similar experience from implementing my own fdw module,
so I can understand that the per-scan context is already cleared at the timing
of the resource-release callback; thus, handles to external resources have to
be saved in a separate memory context.
In my personal opinion, the right design is to declare a memory context for
pgsql_fdw itself, instead of abusing an existing memory context.
(A wiser design is to define a sub-memory-context for each foreign scan,
then remove the sub-memory-context after releasing the handles.)

[Design comment]
When BEGIN should be issued on the remote-side?
The connect_pg_server() is the only chance to issue a BEGIN command
at the remote side when the connection is opened. However, the connection
shall be kept unless an error is raised. Thus, the remote side will
continue to work within a single transaction block, even if local-side iterates
a pair of BEGIN and COMMIT.
I'd like to suggest to close the transaction block at the timing of either
end of the scan, transaction or sub-transaction.

[Comment to Improve]
Also, which transaction isolation level should be specified in this case?
An idea is to choose the isolation level according to the current isolation
level on the local side.
(Of course, it is your choice if it is not supported right now.)

[Comment to improve]
It seems to me the design of exprFunction is not future-proof: if we add
a new node type that contains two or more function calls, it can still
return only one function oid.
I think the right design is to handle individual node types within the
large switch statement in foreign_expr_walker().
Of course, it is just my sense.

[Comment to improve]
The pgsql_fdw_handler() allocates an FdwRoutine using makeNode(),
then sets function pointers on each of its fields.
Why don't you return a pointer to a statically declared FdwRoutine
variable initialized at compile time, like:

  static FdwRoutine pgsql_fdw_handler = {
  .type   = T_FdwRoutine,
  .PlanForeignScan= pgsqlPlanForeignScan,
  .ExplainForeignScan = pgsqlExplainForeignScan,
  .BeginForeignScan   = pgsqlBeginForeignScan,
  .IterateForeignScan = pgsqlIterateForeignScan,
  .ReScanForeignScan  = pgsqlReScanForeignScan,
  .EndForeignScan = pgsqlEndForeignScan,
  };

  Datum
  pgsql_fdw_handler(PG_FUNCTION_ARGS)
  {
PG_RETURN_POINTER(pgsql_fdw_handler);
  }

[Question to implementation]
At pgsqlIterateForeignScan(), it applies a null check on festate->tuples
and a bool check on festate->cursor_opened.
Do we have a possible scenario where festate->tuples is not null but
festate->cursor_opened is false, or the opposite combination?
If the null check on festate->tuples is enough to detect the first call of
the iterate callback, having a redundant flag is not my preference.

Thanks,

On December 14, 2011 15:02, Shigeru Hanada shigeru.han...@gmail.com wrote:
 (2011/12/13 14:46), Tom Lane wrote:
 Shigeru Hanada shigeru.han...@gmail.com writes:
 Agreed.  How about to add a per-column boolean FDW option, say
 pushdown, to pgsql_fdw?  Users can tell pgsql_fdw that the column can
 be pushed down safely by setting this option to true.

 [ itch... ] That doesn't seem like the right level of granularity.
 ISTM the problem is with whether specific operators have the same
 meaning at the far end as they do locally.  If you try to attach the
 flag to columns, you have to promise that *every* operator on that
 column means what it does locally, which is likely to not be the
 case ever if you look hard enough.  Plus, having to set the flag on
 each individual column of the same datatype seems pretty tedious.

 Indeed, I too think that labeling on each columns is not the best way,
 but 

Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-19 Thread Greg Smith
On 12/14/2011 09:02 AM, Shigeru Hanada wrote:
 Here I'd like to propose three incremental patches:

 1) fdw_helper_funcs_v3.patch...:  This is not specific to pgsql_fdw, but
 probably useful for every FDWs which use FDW options...
 2) pgsql_fdw_v5.patch:  This patch provides simple pgsql_fdw
 which does *NOT* support any push-down...
 3) pgsql_fdw_pushdown_v1.patch:  This patch adds limited push-down
 capability to pgsql_fdw which is implemented by previous patch...
 ...
 To implement [an expression which uses a user-defined function], I added
 exprFunction to nodeFuncs.c, which returns the Oid
 of the function used in the expression node, but I'm not sure that
 it should be there.  Should we have it inside pgsql_fdw?

After failing to bring some light onto this during my general update, I
will try again here. We now have 3 updated patches that refactor things
from how this was originally presented, with one open implementation
question. There's also a spawned-off Join push-down for foreign tables
patch in another thread.

I don't think it's really clear to everyone what state this feature
proposal is in. We've gotten bits of review here from KaiGai and Heikki,
and big-picture comments from Robert and Tom. Given how these are
structured, is fdw_helper_funcs_v3.patch at the point where it should be
considered for committer review? Maybe pgsql_fdw_v5.patch too?

The others seem to be more in flux to me, due to all the recent pushdown
changes.

-- 
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-13 Thread Albe Laurenz
Tom Lane wrote:
 Shigeru Hanada shigeru.han...@gmail.com writes:
 (2011/12/12 22:59), Robert Haas wrote:
 ... I feel like we might need a system here that
 allows for more explicit user control about what to push down vs. not,
 rather than assuming we'll be able to figure it out behind the scenes.

 Agreed.  How about adding a per-column boolean FDW option, say
 pushdown, to pgsql_fdw?  Users can tell pgsql_fdw that the column can
 be pushed down safely by setting this option to true.

 [ itch... ] That doesn't seem like the right level of granularity.
 ISTM the problem is with whether specific operators have the same
 meaning at the far end as they do locally.  If you try to attach the
 flag to columns, you have to promise that *every* operator on that
 column means what it does locally, which is likely to not be the
 case ever if you look hard enough.  Plus, having to set the flag on
 each individual column of the same datatype seems pretty tedious.
 
 I don't have a better idea to offer at the moment though.  Trying
 to attach such a property to operators seems impossibly messy too.
 If it weren't for the collations issue, I might think that labeling
 datatypes as being compatible would be a workable approximation.

Maybe I'm missing something, but if pushdown worked as follows:

- Push down only system functions and operators on system types.
- Only push down what is guaranteed to work.

then the only things we would miss out on are encoding- or
collation-sensitive string operations.

Is that loss so big that it warrants a lot of effort?

Yours,
Laurenz Albe

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-13 Thread Heikki Linnakangas

On 13.12.2011 11:57, Albe Laurenz wrote:

Tom Lane wrote:

Shigeru Hanada shigeru.han...@gmail.com writes:

(2011/12/12 22:59), Robert Haas wrote:

... I feel like we might need a system here that
allows for more explicit user control about what to push down vs. not,
rather than assuming we'll be able to figure it out behind the scenes.

Agreed.  How about adding a per-column boolean FDW option, say
pushdown, to pgsql_fdw?  Users can tell pgsql_fdw that the column can
be pushed down safely by setting this option to true.



[ itch... ] That doesn't seem like the right level of granularity.
ISTM the problem is with whether specific operators have the same
meaning at the far end as they do locally.  If you try to attach the
flag to columns, you have to promise that *every* operator on that
column means what it does locally, which is likely to not be the
case ever if you look hard enough.  Plus, having to set the flag on
each individual column of the same datatype seems pretty tedious.

I don't have a better idea to offer at the moment though.  Trying
to attach such a property to operators seems impossibly messy too.
If it weren't for the collations issue, I might think that labeling
datatypes as being compatible would be a workable approximation.


Maybe I'm missing something, but if pushdown worked as follows:

- Push down only system functions and operators on system types.
- Only push down what is guaranteed to work.

then the only things we would miss out on are encoding- or
collation-sensitive string operations.

Is that loss so big that it warrants a lot of effort?


The SQL/MED spec handles this with the concept of routine mappings.
There is syntax for defining which remote routines, meaning functions,
correspond to local functions:

CREATE ROUTINE MAPPING <routine mapping name> FOR <specific routine designator>
SERVER <foreign server name> [ <generic options> ]

<generic options> is FDW-specific; I'd imagine the idea is to give the
name of the corresponding function in the remote server. It doesn't say
anything about collations, but you could have extra options to specify
that a function can only be mapped under C collation, or whatever.


It seems tedious to specify that per-server, though, so we'll probably 
still want to have some smarts in the pgsql_fdw to handle the built-in 
functions and types that we know to be safe.


I've been talking about functions here, not operators, on the assumption 
that we can look up the function underlying the operator and make the 
decisions based on that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-13 Thread Shigeru Hanada
(2011/12/13 20:04), Heikki Linnakangas wrote:
 The SQL/MED spec handles this with the concept of routine mappings. 
 There is syntax for defining which remote routines, meaning functions, 
 correspond local functions:
 
 CREATE ROUTINE MAPPING <routine mapping name> FOR <specific routine designator>
 SERVER <foreign server name> [ <generic options> ]
 
 <generic options> is FDW-specific, I'd imagine the idea is to give the 
 name of the corresponding function in the remote server. It doesn't say 
 anything about collations, but you could have extra options to specify 
 that a function can only be mapped under C collation, or whatever.

I considered ROUTINE MAPPING for other RDBMSs before, and thought that
having the order of parameters in the generic options would be necessary.
It's also useful for pgsql_fdw to support pushing down user-defined
functions.  Maybe the built-in format() function suits this purpose?

 It seems tedious to specify that per-server, though, so we'll probably 
 still want to have some smarts in the pgsql_fdw to handle the built-in 
 functions and types that we know to be safe.

One possible idea is having default mappings with serverid = InvalidOid,
and overriding them with entries which have a valid server oid.  Such
default mappings could be loaded during CREATE EXTENSION.
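That override scheme amounts to a two-tier lookup, which can be sketched standalone: a server-specific entry wins, and otherwise the serverid = InvalidOid default applies. The RoutineMapping struct and lookup_routine_mapping() below are hypothetical illustrations of the idea, not proposed catalog structures.

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

/* Hypothetical routine-mapping entry; serverid == InvalidOid is a default */
typedef struct RoutineMapping {
    Oid         serverid;
    Oid         funcid;
    const char *remote_name;   /* name to use in the deparsed remote query */
} RoutineMapping;

/*
 * Look up the remote name for a function on a given server.
 * A server-specific entry overrides the InvalidOid default;
 * NULL means the function has no mapping and cannot be pushed down.
 */
const char *lookup_routine_mapping(const RoutineMapping *maps, int n,
                                   Oid serverid, Oid funcid)
{
    const char *fallback = NULL;

    for (int i = 0; i < n; i++) {
        if (maps[i].funcid != funcid)
            continue;
        if (maps[i].serverid == serverid)
            return maps[i].remote_name;      /* exact, server-specific */
        if (maps[i].serverid == InvalidOid)
            fallback = maps[i].remote_name;  /* default mapping */
    }
    return fallback;
}
```

The design choice mirrors the proposal: defaults shipped at CREATE EXTENSION time cover built-ins, while per-server entries let users override or extend them.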

 I've been talking about functions here, not operators, on the assumption 
 that we can look up the function underlying the operator and make the 
 decisions based on that.

It's an interesting viewpoint to think of operator notation as syntax sugar
for function notation, e.g. A = B -> int4eq(A, B).  Routine mappings
seem to work for operators too.

Regards,
-- 
Shigeru Hanada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-13 Thread Shigeru Hanada
(2011/12/13 18:57), Albe Laurenz wrote:
 Maybe I'm missing something, but if pushdown worked as follows:
 
 - Push down only system functions and operators on system types.
 - Only push down what is guaranteed to work.

Oh, I didn't consider restricting this to system data types.  Indeed,
user-defined types would not be safe to push down.

 then the only things we would miss out on are encoding- or
 collation-sensitive string operations.
 
 Is that loss so big that it warrants a lot of effort?

It depends on the definition of "collation-sensitive".  If we define it
as all operations which might handle any collation-sensitive element,
then no function or operator which takes any character data type (text,
varchar, bpchar, sql_identifier, etc.) can be pushed down.
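
To get a feel for how broad that exclusion would be, the affected
built-in functions can be counted from pg_proc (a rough sketch;
sql_identifier is an information_schema domain and is omitted here):

```sql
-- Built-in functions taking any character-type argument; under the strict
-- definition every one of these would be barred from push-down.
SELECT count(*)
  FROM pg_proc
 WHERE 'text'::regtype::oid    = ANY (proargtypes)
    OR 'varchar'::regtype::oid = ANY (proargtypes)
    OR 'bpchar'::regtype::oid  = ANY (proargtypes);
```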

Regards,
-- 
Shigeru Hanada



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-12 Thread Robert Haas
On Wed, Dec 7, 2011 at 2:34 AM, Shigeru Hanada shigeru.han...@gmail.com wrote:
 Sorry for delayed response.

 2011/11/29 Albe Laurenz laurenz.a...@wien.gv.at:
 I think that this is not always safe even from PostgreSQL to PostgreSQL.
 If two databases have different collations, comparisons on strings will behave
 differently.

 Indeed.  I think that only the owner of foreign table can keep collation
 consistent between foreign and local, like data type of column.

+1.

 We need to
 support per-column-collation on foreign tables too, or should deny pushing
 down condition which is collation-sensitive...

It seems that we already do:

rhaas=# create foreign table ft1 (a text collate "de_DE") server s1;
CREATE FOREIGN TABLE

It does seem like this might not be enough information for the FDW to
make good decisions about pushdown.  Even supposing the server on the
other end is also PostgreSQL, the collation names might not match
(if, say, one is running Windows, and the other, Linux).  And even if
they do, there is no guarantee that two collations with the same name
have the same behavior on two different machines; they probably
should, but who knows?  And if we're using an FDW to talk to some
other database server, the problem is much worse; it's not clear that
we'll even begin to be able to guess whether the remote side has
compatible semantics.  I feel like we might need a system here that
allows for more explicit user control about what to push down vs. not,
rather than assuming we'll be able to figure it out behind the scenes.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server

2011-12-12 Thread Shigeru Hanada
(2011/12/12 22:59), Robert Haas wrote:
 It does seem like this might not be enough information for the FDW to
 make good decisions about pushdown.  Even supposing the server on the
 other end is also PostgreSQL, the collation names might not match 
 (if, say, one is running Windows, and the other, Linux).  And even if
 they do, there is no guarantee that two collations with the same name
 have the same behavior on two different machines; they probably
 should, but who knows?  And if we're using an FDW to talk to some
 other database server, the problem is much worse; it's not clear that
 we'll even begin to be able to guess whether the remote side has
 compatible semantics.  I feel like we might need a system here that
 allows for more explicit user control about what to push down vs. not,
 rather than assuming we'll be able to figure it out behind the scenes.

Agreed.  How about adding a per-column boolean FDW option, say
"pushdown", to pgsql_fdw?  Users could tell pgsql_fdw that a column can
be pushed down safely by setting this option to true.  IMO the default
should be false (no push-down).

In most cases, columns with numeric or time-related types are safe to
push down because they are free from collation issues, so users would
want to set the option to true.  OTOH, columns with string types need
more consideration.  Once users have ensured that a column has
compatible semantics, they can set pushdown=true for efficiency.

If a condition references any column with pushdown=false, that condition
should NOT be pushed down.
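
Under this proposal a foreign table definition might look like the
following sketch (the per-column "pushdown" option is the one proposed
above, not an existing pgsql_fdw option):

```sql
-- Hypothetical: "pushdown" is the proposed per-column option.
CREATE FOREIGN TABLE ft1 (
    id   integer OPTIONS (pushdown 'true'),  -- collation-free, safe remotely
    name text                                -- default false: kept local
) SERVER s1;
-- WHERE id = 10      -> condition could be sent to the remote server
-- WHERE name = 'foo' -> condition evaluated locally on fetched rows
```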

This idea is only for pgsql_fdw now, but it could be used by other FDWs
which support push-down, and it would also be useful for future ORDER BY
push-down support, which clearly involves collation issues.

Regards,
-- 
Shigeru Hanada

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

