Re: [HACKERS] pl/perl extension fails on Windows
Christoph Berg <m...@debian.org> writes:
>>> The only interesting line in log/postmaster.log is a log_line_prefix-less
>>> Util.c: loadable library and perl binaries are mismatched (got handshake
>>> key 0xd500080, needed 0xd600080)

Can we see the Perl-related output from configure, particularly the new
lines about CFLAGS?

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pl/perl extension fails on Windows
Christoph Berg <m...@debian.org> writes:
> Re: Tom Lane 2017-07-28 <3254.1501276...@sss.pgh.pa.us>
>> Christoph Berg <m...@debian.org> writes:
>>> The plperl segfault on Debian's kfreebsd port I reported back in 2013
>>> is also still present:

>> So it'd be interesting to know if it's any better with HEAD ...

> Unfortunately not:
> The only interesting line in log/postmaster.log is a log_line_prefix-less
> Util.c: loadable library and perl binaries are mismatched (got handshake key
> 0xd500080, needed 0xd600080)
> ... which is unchanged from the beta2 output.

Well, that's quite interesting, because it implies that this is indeed
the same type of problem.  I wonder why the patch didn't fix it?

			regards, tom lane
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
"Tels" <nospam-pg-ab...@bloodgate.com> writes: > On Sun, July 30, 2017 12:22 pm, Tom Lane wrote: >> Yeah, I looked into that. The closest candidate I can find is that >> perl 5.10.1 contains Test::More 0.92. However, it's not real clear >> to me exactly which files I'd need to pull out of 5.10.1 and inject into >> an older tarball --- the layout seems a lot different from a standalone >> package. > So basically the two files: > http://search.cpan.org/src/EXODIST/Test-Simple-1.302086/lib/Test/More.pm > http://search.cpan.org/src/EXODIST/Test-Simple-1.302086/lib/Test/Builder/Module.pm > might do the trick. Thanks for the hint. I transplanted these files out of a 5.10.1 tarball into 5.8.3, then built as usual: lib/Test/Simple.pm lib/Test/More.pm lib/Test/Builder.pm lib/Test/Builder/Module.pm The result seems to work, although it fails a few of 5.8.3's tests, probably because I didn't copy over the relevant test scripts. It's good enough to run PG's tests though. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PostgreSQL not setting OpenSSL session id context?
I wrote:
> I think what you need to do is tell SslStream not to expect that PG
> servers will do session resumption.  (I'm a bit astonished that that
> would be its default assumption in the first place, but whatever.)

Actually, after a bit of further googling, it seems that the brain damage
here may be on the server side.  It seems that OpenSSL will send a session
ticket if requested, even though the surrounding application has given it
no means to identify the session (!?).  Apparently we need to pass
SSL_OP_NO_TICKET to SSL_CTX_set_options to prevent that from happening.

			regards, tom lane
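For anyone wanting to verify this behavior without a C toolchain, Python's
ssl module exposes the same OpenSSL flag; a minimal sketch (the context
setup here is illustrative, not what the backend actually does):

```python
import ssl

# Build a server-side TLS context.  PROTOCOL_TLS_SERVER is just the
# generic "be a TLS server" method; certificate loading is omitted.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# ssl.OP_NO_TICKET corresponds to OpenSSL's SSL_OP_NO_TICKET: with it
# set, the server never sends a session ticket, so clients (such as
# .NET's SslStream) have nothing to present for session resumption.
ctx.options |= ssl.OP_NO_TICKET
```

With the option set, a resumption-capable client simply gets no ticket
and falls back to full handshakes, which sidesteps the "session id
context uninitialized" failure described in this thread.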
Re: [HACKERS] PostgreSQL not setting OpenSSL session id context?
Shay Rojansky <r...@roji.org> writes:
> When trying to connect with Npgsql to PostgreSQL with client authentication
> (PG has ssl_ca_file set), the first connection works just fine.  The second
> connection, however, fails and the PostgreSQL logs contain the message
> "session id context uninitialized".  This occurs when using .NET's default
> SSL implementation, SslStream, which supports session resumption - the
> second connection's ClientHello message contains a session ticket from the
> first session, triggering the issue.

AFAIK Postgres doesn't support session resumption.  If I am correctly
understanding what that is supposed to provide, it would require saving
all of a backend's internal state on the off chance that somebody would
request resuming the session later.  I do not think we are going there.
The idea makes sense for servers with relatively lightweight per-session
state, but that ain't us.

I think what you need to do is tell SslStream not to expect that PG
servers will do session resumption.  (I'm a bit astonished that that
would be its default assumption in the first place, but whatever.)

			regards, tom lane
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
Noah Misch <n...@leadboat.com> writes:
> On Sun, Jul 30, 2017 at 12:05:10PM -0400, Tom Lane wrote:
>> Well, OK, but I'd still like to tweak configure so that it records
>> an absolute path for prove rather than just setting PROVE=prove.
>> That way you'd at least be able to tell from the configure log
>> whether you are possibly at risk.

> That's an improvement.

The reason it does that seems to be that we use AC_CHECK_PROGS rather
than AC_PATH_PROGS for locating "prove".  I can see no particular
consistency to the decisions made in configure.in about which to use:

AC_CHECK_PROGS(GCOV, gcov)
AC_CHECK_PROGS(LCOV, lcov)
AC_CHECK_PROGS(GENHTML, genhtml)
AC_CHECK_PROGS(DTRACE, dtrace)
AC_CHECK_PROGS(XML2_CONFIG, xml2-config)
AC_CHECK_PROGS(DBTOEPUB, dbtoepub)
AC_CHECK_PROGS(XMLLINT, xmllint)
AC_CHECK_PROGS(XSLTPROC, xsltproc)
AC_CHECK_PROGS(OSX, [osx sgml2xml sx])
AC_CHECK_PROGS(FOP, fop)
AC_CHECK_PROGS(PROVE, prove)

versus

AC_PATH_PROG(TAR, tar)
PGAC_PATH_BISON
PGAC_PATH_FLEX
PGAC_PATH_PERL
PGAC_PATH_PYTHON
AC_PATH_PROG(ZIC, zic)
PGAC_PATH_TCLCONFIGSH([$with_tclconfig])

I'm tempted to propose that we switch *all* of these uses of
AC_CHECK_PROGS to AC_PATH_PROGS.

			regards, tom lane
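The practical difference between the two macros can be mimicked outside
autoconf; a small Python sketch of the AC_PATH_PROGS behavior ("sh"
stands in for "prove" so the example is portable):

```python
import os
import shutil

# AC_CHECK_PROGS records the bare name ("prove") once it's found on PATH;
# AC_PATH_PROGS records the fully resolved path, which is what makes the
# configure log useful for spotting a perl/prove mismatch after the fact.
prog = shutil.which("sh")  # "sh" stands in for "prove" here

# The resolved value is absolute, so the log alone shows which binary
# (and hence which installation) was picked up.
assert prog is not None and os.path.isabs(prog)
```

An absolute path in the configure output is exactly what lets you notice
that prove came from a different directory than perl did.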
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
"Tels" <nospam-pg-ab...@bloodgate.com> writes: > On Sun, July 30, 2017 1:21 am, Tom Lane wrote: >>> So the question is, does anyone care? I wouldn't except that our >>> documentation appears to claim that we work with Perl "5.8 or later". > Not sure how often People use old Perl versions out in the field. I'd > venture this is either happens with "ancient" stuff, e.g. where people > just can't or want upgrade. > Otherwise, an up-to-date OS is just necessary for security, anyway, and > that would contain a Perl from this decade, wouldn't it? Well, that's not really the point, IMO. The reason I'm interested in this is the same reason I run some buildfarm critters on ancient platforms: if we do something that breaks backwards compatibility with old software, we should know it and make a deliberate decision that it's okay. (And update the relevant compatibility claims in our docs.) Moving the compatibility goalposts without knowing it isn't good, especially if it happens in supposedly-stable release branches. >> I am unable to confirm our claim that we work with Test::More 0.82, >> because CPAN has only versions from a year or three back. (Anyone >> know of a more, er, comprehensive archive than CPAN?) It's also >> interesting to speculate about how old a version of IPC::Run is new >> enough, but I see no easy way to get much data on that either. > Test::More has been bundled with Perl since 5.6.2 (you can use "corelist" > to check for these things), so if all fails, it might be possible to > extract a version from a Perl distribution [4]. Yeah, I looked into that. The closest candidate I can find is that perl 5.10.1 contains Test::More 0.92. However, it's not real clear to me exactly which files I'd need to pull out of 5.10.1 and inject into an older tarball --- the layout seems a lot different from a standalone package. 
			regards, tom lane
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
Noah Misch <n...@leadboat.com> writes:
> On Sun, Jul 30, 2017 at 01:21:28AM -0400, Tom Lane wrote:
>> I think it'd be a good idea to insist that "prove" be in
>> the same directory we found "perl" in.

> Nah; on my machines, I use /usr/bin/perl and ~/sw/cpan/bin/prove.  The latter
> is built against the former, so there's no particular hazard.

Well, OK, but I'd still like to tweak configure so that it records
an absolute path for prove rather than just setting PROVE=prove.
That way you'd at least be able to tell from the configure log
whether you are possibly at risk.

			regards, tom lane
Re: [HACKERS] GSoC 2017: Foreign Key Arrays
Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> ... However, when you create an index, you can
> indicate which operator class to use, and it may not be the default one.
> If a different one is chosen at index creation time, then a query using
> COUNT(distinct) will do the wrong thing, because DISTINCT will select
> an equality type using the type's default operator class, not the
> equality that belongs to the operator class used to create the index.
> That's wrong: DISTINCT should use the equality operator that corresponds
> to the index' operator class instead, not the default one.

Uh, what?  Surely the semantics of count(distinct x) *must not* vary
depending on what indexes happen to be available.

I think what you meant to say is that the planner may only choose an
optimization of this sort when the index's opclass matches the one
DISTINCT will use, ie the default for the data type.

			regards, tom lane
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
I wrote:
> So the question is, does anyone care?  I wouldn't except that our
> documentation appears to claim that we work with Perl "5.8 or later".
> And the lack of field complaints suggests strongly that nobody else
> cares.  So I'm inclined to think we just need to be more specific
> about the minimum Perl version --- but what exactly?

I've done some more research on this.  It seems to me there are four
distinct components to any claim about whether we work with a particular
Perl version:

1. Can it run the build-related Perl scripts needed to build PG from
a bare git checkout, on non-Windows platforms?

2. Can it run our MSVC build scripts?

3. Does pl/perl compile (and pass its regression tests) against it?

4. Can it run our TAP tests?

I have no info to offer about point #2, but I did some tests with ancient
5.8.x Perl versions on my buildfarm hosts prairiedog and gaur.  (I would
have liked to use something faster, but these Perl versions failed to
build at all on more recent platforms, and I thought it rather pointless
to try to hack them enough to build.)

I find that perl 5.8.0 and later seem to succeed at point #1, but you
need at least 5.8.1 to compile plperl.  Also, on prairiedog 5.8.1 fails
plperl's regression tests because of some seemingly utf8-related bug.
That may be an ancient-macOS problem more than anything else, so I
didn't poke at it too hard.

The really surprising thing I found out is that you can't run the TAP
tests on anything older than 5.8.3, because that's when they added
"prove".  I'm bemused by the idea that such a fundamental facility would
get added in a third-digit minor release, but there you have it.  (Now,
in fairness, this amounted to importing a feature that already existed
on CPAN, but still...)

5.8.3 does appear to succeed at points 1,3,4.  Now, to get through the
TAP tests you need to install IPC::Run, and you also need to update
Test::More because the version shipped with 5.8.3 is too old.  But at
least the failures that you get from lacking these are pretty clear.

I am unable to confirm our claim that we work with Test::More 0.82,
because CPAN has only versions from a year or three back.  (Anyone
know of a more, er, comprehensive archive than CPAN?)  It's also
interesting to speculate about how old a version of IPC::Run is new
enough, but I see no easy way to get much data on that either.

Anyway, pending some news about compatibility of the MSVC scripts,
I think we ought to adjust our docs to state that 5.8.3 is the minimum
supported Perl version.

Also, I got seriously confused at one point during these tests because,
while our configure script carefully sets PERL to an absolute path name,
it's content to set PROVE to "prove", paying no attention to whether that
version of "prove" matches "perl".  Is it really okay to run the TAP
tests with a different perl version than we selected for other purposes?
I think it'd be a good idea to insist that "prove" be in the same
directory we found "perl" in.

			regards, tom lane
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes:
> [ 0002-Move-ExecProcNode-from-dispatch-to-function-pointer-.patch ]

Here's a reviewed version of this patch.

I added dummy ExecProcNodeMtd functions to the various node types that
lacked them because they expect to be called through MultiExecProcNode
instead.  In the old coding, trying to call ExecProcNode on one of those
node types would have led to a useful error message; as you had it, it'd
have dumped core, which is not an improvement.

Also, I removed the ExecReScan stanza from ExecProcNodeFirst; that should
surely be redundant, because we should only get to that function through
ExecProcNode().  If somehow it's not redundant, please add a comment
explaining why not.

Some other minor cosmetic changes, mostly comment wordsmithing.

I think this (and the previous one) are committable.

			regards, tom lane

diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 20fd9f8..d338cfe 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -17,15 +17,10 @@
  *-
  */
 /*
- *	 INTERFACE ROUTINES
- *		ExecInitNode	-	initialize a plan node and its subplans
- *		ExecProcNode	-	get a tuple by executing the plan node
- *		ExecEndNode		-	shut down a plan node and its subplans
- *
  *	 NOTES
  *		This used to be three files.  It is now all combined into
- *		one file so that it is easier to keep ExecInitNode, ExecProcNode,
- *		and ExecEndNode in sync when new nodes are added.
+ *		one file so that it is easier to keep the dispatch routines
+ *		in sync when new nodes are added.
  *
  * EXAMPLE
  *	Suppose we want the age of the manager of the shoe department and
@@ -122,6 +117,10 @@
 #include "miscadmin.h"
 
 
+static TupleTableSlot *ExecProcNodeFirst(PlanState *node);
+static TupleTableSlot *ExecProcNodeInstr(PlanState *node);
+
+
 /*
  *		ExecInitNode
  *
@@ -149,6 +148,13 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 	if (node == NULL)
 		return NULL;
 
+	/*
+	 * Make sure there's enough stack available.  Need to check here, in
+	 * addition to ExecProcNode() (via ExecProcNodeFirst()), to ensure the
+	 * stack isn't overrun while initializing the node tree.
+	 */
+	check_stack_depth();
+
 	switch (nodeTag(node))
 	{
 		/*
@@ -365,6 +371,13 @@ ExecInitNode(Plan *node, EState *estate, int eflags)
 	}
 
 	/*
+	 * Add a wrapper around the ExecProcNode callback that checks stack depth
+	 * during the first execution.
+	 */
+	result->ExecProcNodeReal = result->ExecProcNode;
+	result->ExecProcNode = ExecProcNodeFirst;
+
+	/*
 	 * Initialize any initPlans present in this node.  The planner put them in
 	 * a separate list for us.
 	 */
@@ -388,195 +401,51 @@
 }
 
-/*
- *		ExecProcNode
- *
- *		Execute the given node to return a(nother) tuple.
- *
+/*
+ * ExecProcNode wrapper that performs some one-time checks, before calling
+ * the relevant node method (possibly via an instrumentation wrapper).
  */
-TupleTableSlot *
-ExecProcNode(PlanState *node)
+static TupleTableSlot *
+ExecProcNodeFirst(PlanState *node)
 {
-	TupleTableSlot *result;
-
-	if (node->chgParam != NULL) /* something changed */
-		ExecReScan(node);		/* let ReScan handle this */
+	/*
+	 * Perform stack depth check during the first execution of the node.  We
+	 * only do so the first time round because it turns out to not be cheap on
+	 * some common architectures (eg. x86).  This relies on an assumption that
+	 * ExecProcNode calls for a given plan node will always be made at roughly
+	 * the same stack depth.
+	 */
+	check_stack_depth();
+
+	/*
+	 * If instrumentation is required, change the wrapper to one that just
+	 * does instrumentation.  Otherwise we can dispense with all wrappers and
+	 * have ExecProcNode() directly call the relevant function from now on.
+	 */
 	if (node->instrument)
-		InstrStartNode(node->instrument);
-
-	switch (nodeTag(node))
-	{
-			/*
-			 * control nodes
-			 */
-		case T_ResultState:
-			result = ExecResult((ResultState *) node);
-			break;
-
-		case T_ProjectSetState:
-			result = ExecProjectSet((ProjectSetState *) node);
-			break;
-
-		case T_ModifyTableState:
-			result = ExecModifyTable((ModifyTableState *) node);
-			break;
-
-		case T_AppendState:
-			result = ExecAppend((AppendState *) node);
-			break;
-
-		case T_MergeAppendState:
-			result = ExecMergeAppend((MergeAppendState *) node);
-			break;
-
-		case T_RecursiveUnionState:
-			result = ExecRecursiveUnion((RecursiveUnionState *) node);
-			break;
-
-			/* BitmapAndState does not yield tuples */
-
-			/* BitmapOrState does not yield
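The core trick in the patch above — paying for check_stack_depth() only on
a node's first execution, by routing the first call through a wrapper that
then rewires the function pointer to the real method — can be sketched in
Python (names are illustrative, not the executor's actual API):

```python
def check_stack_depth():
    """Stand-in for the C stack-depth check; assume it may raise on overflow."""
    pass

class PlanState:
    def __init__(self, real_proc):
        self.ExecProcNodeReal = real_proc     # the node's actual method
        self.ExecProcNode = self._exec_first  # first call goes via wrapper

    def _exec_first(self):
        # One-time check, then rewire the pointer so every later call
        # dispatches straight to the real method with zero overhead.
        check_stack_depth()
        self.ExecProcNode = self.ExecProcNodeReal
        return self.ExecProcNodeReal()

node = PlanState(lambda: "a tuple")
first = node.ExecProcNode()   # checked, then rewired
later = node.ExecProcNode()   # direct dispatch, no check
```

As the patch's comment notes, checking only once is safe because calls for
a given plan node are assumed to happen at roughly the same stack depth.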
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes:
> On 2017-07-26 16:28:38 -0400, Tom Lane wrote:
>> It's certainly possible that there are long-running loops not involving
>> any ExecProcNode recursion at all, but that would be a bug independent
>> of this issue.  The CFI in ExecProcNode itself can be replaced exactly
>> either by asking all callers to do it, or by asking all callees to do it.
>> I think the latter is going to be more uniform and harder to screw up.

> Looks a bit better. Still a lot of judgement-y calls tho, e.g. when one
> node function just calls the next, or when there's loops etc.  I found
> a good number of missing CFIs...
> What do you think?

Here's a reviewed version of this patch.  Differences from yours:

* I think you put ExecScan's CFI in the wrong place; AFAICT yours only
covers its fast path.

* I think ExecAgg needs a CFI at the head, just to be sure it's hit in
any path through that.

* I agree that putting CFI inside ExecHashJoin's state machine loop is a
good idea, because it might have to trawl through quite a lot of a batch
file before finding a returnable tuple.  But I think in merge and
nestloop joins it's sufficient to put one CFI at the head.  Neither of
those cases can do very much processing without invoking a child node,
where a CFI will happen.

* You missed ExecLockRows altogether.

* I added some comments and cosmetic tweaks.
			regards, tom lane

diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c
index 294ad2c..20fd9f8 100644
--- a/src/backend/executor/execProcnode.c
+++ b/src/backend/executor/execProcnode.c
@@ -399,8 +399,6 @@ ExecProcNode(PlanState *node)
 {
 	TupleTableSlot *result;
 
-	CHECK_FOR_INTERRUPTS();
-
 	if (node->chgParam != NULL) /* something changed */
 		ExecReScan(node);		/* let ReScan handle this */
diff --git a/src/backend/executor/execScan.c b/src/backend/executor/execScan.c
index 4f131b3..6fde7cd 100644
--- a/src/backend/executor/execScan.c
+++ b/src/backend/executor/execScan.c
@@ -126,6 +126,8 @@ ExecScan(ScanState *node,
 	ExprState  *qual;
 	ProjectionInfo *projInfo;
 
+	CHECK_FOR_INTERRUPTS();
+
 	/*
 	 * Fetch data from node
 	 */
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index de9a18e..377916d 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -677,6 +677,8 @@ fetch_input_tuple(AggState *aggstate)
 
 	if (aggstate->sort_in)
 	{
+		/* make sure we check for interrupts in either path through here */
+		CHECK_FOR_INTERRUPTS();
 		if (!tuplesort_gettupleslot(aggstate->sort_in, true, false,
 									aggstate->sort_slot, NULL))
 			return NULL;
@@ -1414,6 +1416,8 @@ process_ordered_aggregate_multi(AggState *aggstate,
 	while (tuplesort_gettupleslot(pertrans->sortstates[aggstate->current_set],
 								  true, true, slot1, ))
 	{
+		CHECK_FOR_INTERRUPTS();
+
 		/*
 		 * Extract the first numTransInputs columns as datums to pass to the
 		 * transfn.  (This will help execTuplesMatch too, so we do it
@@ -2100,6 +2104,8 @@ ExecAgg(AggState *node)
 {
 	TupleTableSlot *result = NULL;
 
+	CHECK_FOR_INTERRUPTS();
+
 	if (!node->agg_done)
 	{
 		/* Dispatch based on strategy */
@@ -2563,6 +2569,8 @@ agg_retrieve_hash_table(AggState *aggstate)
 		TupleTableSlot *hashslot = perhash->hashslot;
 		int			i;
 
+		CHECK_FOR_INTERRUPTS();
+
 		/*
 		 * Find the next entry in the hash table
 		 */
diff --git a/src/backend/executor/nodeAppend.c b/src/backend/executor/nodeAppend.c
index aae5e3f..58045e0 100644
--- a/src/backend/executor/nodeAppend.c
+++ b/src/backend/executor/nodeAppend.c
@@ -59,6 +59,7 @@
 
 #include "executor/execdebug.h"
 #include "executor/nodeAppend.h"
+#include "miscadmin.h"
 
 static bool exec_append_initialize_next(AppendState *appendstate);
@@ -204,6 +205,8 @@ ExecAppend(AppendState *node)
 	PlanState  *subnode;
 	TupleTableSlot *result;
 
+	CHECK_FOR_INTERRUPTS();
+
 	/*
 	 * figure out which subplan we are currently processing
 	 */
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 7e0ba03..cf109d5 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -41,6 +41,7 @@
 #include "access/transam.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
+#include "miscadmin.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/predicate.h"
@@ -192,6 +193,8 @@ BitmapHeapNext(BitmapHeapScanState *node)
 	Page		dp;
 	ItemId		lp;
 
+	CHECK_FOR_INTERRUPTS();
+
 	/*
 	 * Get next page of results if needed
 	 */
diff --git a/src/backend/executor/nodeCustom.c b/src/backend/executor/nodeCustom.c
index 69e2704..fc15974 100644
--- a/src/backend/executor/nodeCustom.c
+++ b/src/backend/
Re: [HACKERS] Memory leak
fan yang <yangfang...@gmail.com> writes:
> - src/port/quotes.c
>   At line 38, at function "escape_single_quotes_ascii",
>   here used "malloc" to get some memory and return the
>   pointer returned by the "malloc".
>   So, any caller using this function should free this memory.
> - /src/bin/initdb/initdb.c
>   At line 327, at function "escape_quotes",
>   which uses function "escape_single_quotes_ascii"
>   from the above file.
>   But in this file (/src/bin/initdb/initdb.c), there are many places
>   that use function "escape_quotes" but have not freed the pointer
>   returned by the function, thus causing a memory leak.

By and large, we intentionally don't worry about memory leaks in initdb
(or any other program with a limited runtime).  It's not worth the
maintenance effort to save a few bytes, at least not where it requires
code contortions like these.

Doing a quick check with valgrind says that a run of initdb, in HEAD,
leaks about 560KB over its lifespan.  That's well below the threshold of
pain on any machine capable of running modern Postgres reasonably.

For fun, I tried to see whether your patch moved that number appreciably,
but patch(1) couldn't make any sense of it at all.  I think that probably
your mail program munged the whitespace in the patch.  Many of us have
found that the most reliable way to forward patches through email is to
add them as attachments rather than pasting them into the text in-line.

Poking at it a bit harder with valgrind, it seems that the vast majority
of what it reports as leaks is caused by replace_token().  If we wanted
to reduce memory wastage during initdb, that would be the place to work
on.  But I'm dubious that it's worth any effort.

			regards, tom lane
Re: [HACKERS] psql's \d and \dt are sending their complaints to different output files
Daniel Gustafsson <dan...@yesql.se> writes:
> On 27 Jul 2017, at 19:40, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> listTables and listDbRoleSettings print a custom message rather than
>> an empty table for no matches (but in QUIET mode they just do the
>> latter).  I think that's actually a good decision for listDbRoleSettings,
>> because the user might be confused about which pattern is which, and
>> we can straighten him out with a custom error message.  But the only
>> good argument for listTables behaving that way is that people are used
>> to it.

> Which is a pretty good argument for not changing it.

Yeah, not hearing any votes for changing it, I'll leave it be.

>> I'm not sure about this one.  It definitely seems bogus that \dRp+ is
>> omitting the owner column, but I'm less excited about duplicating the
>> pubname.

> It’s indeed a bit silly to duplicate the information like that.

The minimum change here seems to be to add the owner column, so I did
that.

			regards, tom lane
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
>> On 07/27/2017 11:58 PM, Tom Lane wrote:
>>> I wonder if it'd be worth getting the buildfarm
>>> to log the output of "perl -V" so we could get a clearer picture
>>> of what's being tested.

> Looks like this, but it's rather tedious.  I think I might back it out.
> I guess I could also write it to a log file, if we really think we need it.
> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2017-07-28%2018%3A37%3A19>

Yeah, that's awfully verbose :-(.  I suspect most vendors wouldn't
bother with enumerating quite so many patches.

I fixed configure so that it will report what it saw in
$Config{ccflags}; that may be sufficient.

			regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Ashutosh Sharma <ashu.coe...@gmail.com> writes:
> I quickly ran some basic test-cases on my Windows machine (the machine
> where I have been reproducing the issue) and they are all passing.
> Also, all the automated test-cases for contrib module hstore_plperl
> are passing.

Cool, thanks.  I hope people will set up some buildfarm machines with
the formerly-broken configurations.

			regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Christoph Berg <m...@debian.org> writes:
> The plperl segfault on Debian's kfreebsd port I reported back in 2013
> is also still present:
> https://www.postgresql.org/message-id/20130515064201.GC704%40msgid.df7cb.de
> https://buildd.debian.org/status/fetch.php?pkg=postgresql-10&arch=kfreebsd-amd64&ver=10~beta2-1&stamp=1499947011&raw=0

So it'd be interesting to know if it's any better with HEAD ...

			regards, tom lane
Re: [HACKERS] map_partition_varattnos() and whole-row vars
Robert Haas <robertmh...@gmail.com> writes:
> If we're remapping the varattnos, I don't see how we can just pass
> whole-row references through.  I mean, if the partition and the parent
> have different varattnos, then a whole-row attribute for one is a
> different thing from a whole-row attribute for the other; the
> HeapTuple you would need to build in each case is different, based on
> the column order for the relation you're worrying about.

There is longstanding code in the planner to handle this for traditional-
inheritance cases; IIRC what it does is build a ROW() expression that
emits the proper output rowtype regardless of the discrepancies.  Not
sure why that's apparently not getting applied for partitioning.

> (Boy, our implementation of DROP COLUMN is painful!  If we really got
> rid of columns when they were dropped we could've avoided this whole
> mess.)

I think the pain arises mostly from the decision to allow partitions to
not all have identical rowtype.  I would have lobbied against that
choice if I'd been paying more attention at the start ... but I wasn't.

			regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Ashutosh Sharma <ashu.coe...@gmail.com> writes:
> On Fri, Jul 28, 2017 at 10:05 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Uh-huh.  So the issue is indeed that they're injecting PERL_IMPLICIT_SYS
>> via a command-line #define rather than putting it into perl's config.h,
>> and that results in a change in the apparent size of the PerlInterp
>> struct (because of IMem and friends).

> Yes, that's right.  We would have seen a different result if
> PERL_IMPLICIT_SYS had not been defined globally.

I've pushed the changes to the build scripts now.  I had to revise the
Mkvcbuild.pm part a bit, because you'd forgotten to propagate the extra
defines into hstore_plperl.  So that code is untested; please confirm
that HEAD works in your problem environment now.

I notice that Mkvcbuild.pm is artificially injecting a define for
PLPERL_HAVE_UID_GID.  I strongly suspect that that was a hack meant to
work around the lack of this mechanism, and that we could now get rid
of it or clean it up.  But there's no evidence in the commit log about
the reason for it --- it seems to go back to the original addition of
MSVC support.  Anybody remember?

			regards, tom lane
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes:
> Anyway, I'll commit it after another pass in ~1 week if it doesn't get a
> review till then, but I assume it'll.

FWIW, I intend to review it today, or tomorrow at the very latest.
(Right now I'm buried in perl droppings.)

			regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Ashutosh Sharma <ashu.coe...@gmail.com> writes:
> On Fri, Jul 28, 2017 at 7:22 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Assuming that the Perl crew know what they're doing and this list is
>> complete, I notice that not one of the symbols they show as relevant
>> starts with an underscore.  So I'm thinking that my previous semi-
>> joking idea of absorbing only -D switches for names that *don't*
>> start with an underscore was actually a good solution.

> Okay, as per your suggestion, I have modified my earlier patches that
> import the -D switches used by perl into plperl accordingly, i.e. they
> now ignore the switches whose names start with an underscore.  Please
> find the plperl_win_v3 and plperl_linux_v2 patches attached with this
> email.

OK, thanks.  I've pushed the XSUB/dTHX patch after another round of
code-reading and some minor comment improvements; we'll soon see what
the buildfarm thinks of it.  In the meantime I'll work on these two
patches.

>> (BTW, you never did tell us exactly what defines you're getting
>> out of Perl's flags on the problem installation.)

> I am really sorry about that.  I just missed that in my earlier email.
> Here are the defines used in the perl where I could reproduce the
> issue:

> C:\Users\ashu>perl -MConfig -e "print $Config{ccflags}"
> -nologo -GF -W3 -O1 -MD -Zi -DNDEBUG -GL -fp:precise -DWIN32
> -D_CONSOLE -DNO_STRICT -DWIN64 -DCONSERVATIVE -DUSE_SITECUSTOMIZE
> -DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS

Uh-huh.  So the issue is indeed that they're injecting PERL_IMPLICIT_SYS
via a command-line #define rather than putting it into perl's config.h,
and that results in a change in the apparent size of the PerlInterp
struct (because of IMem and friends).  We'd expected as much, but it's
good to have clear confirmation.

			regards, tom lane
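The filtering rule settled on here — absorb perl's -D switches, but skip
any whose name starts with an underscore — is easy to express in code; a
sketch using the ccflags string quoted above (the function name is made
up for illustration):

```python
ccflags = ("-nologo -GF -W3 -O1 -MD -Zi -DNDEBUG -GL -fp:precise -DWIN32 "
           "-D_CONSOLE -DNO_STRICT -DWIN64 -DCONSERVATIVE -DUSE_SITECUSTOMIZE "
           "-DPERL_TEXTMODE_SCRIPTS -DPERL_IMPLICIT_CONTEXT -DPERL_IMPLICIT_SYS")

def absorbable_defines(flags):
    # Keep only -D switches whose macro name does not start with an
    # underscore: underscore-prefixed names (_CONSOLE, _LARGEFILE_SOURCE,
    # ...) are platform/libc feature macros that must stay consistent
    # with how core Postgres itself was built, so they are not absorbed.
    return [f for f in flags.split()
            if f.startswith("-D") and not f[2:].startswith("_")]

defines = absorbable_defines(ccflags)
```

With the flags above, PERL_IMPLICIT_SYS and PERL_IMPLICIT_CONTEXT survive
the filter while -D_CONSOLE is dropped — which is what lets plperl see
the same interpreter layout as perl itself without overriding platform
macros.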
Re: [HACKERS] pl/perl extension fails on Windows
Ashutosh Sharma <ashu.coe...@gmail.com> writes: > Thanks for the patch. I am seeing some compilation errors on Windows > with the patch. Below are the errors observed, > ... > I did spent some time to find reason for these compilation errors and > could eventually find that some of the Windows specific functions > inside plperl module are calling Perl APIs without fetching the perl > interpreter's context using dTHX. Ah, thanks. I just stuck that in where compiler errors were telling me to, so I didn't realize there were Windows-specific functions to worry about. > Moreover, I have also tested this patch along with the patch to import > switches used by perl into plperl and together it fixes the server > crash issue. Also, now, the interpreter size in both perl and plperl > module are the same thereby generating the same key on both plperl and > perl module. Thanks. Excellent. So this looks like the avenue to pursue. As far as the question of which symbols to import goes: on my own machines I'm seeing a lot of things like $ perl -MConfig -e 'print $Config{ccflags}' -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 $ perl -MConfig -e 'print $Config{ccflags}' -Ae -D_HPUX_SOURCE -Wl,+vnocompatwarnings -DDEBUGGING -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 I'm really quite afraid to import symbols like _LARGEFILE_SOURCE and _FILE_OFFSET_BITS into plperl; those *will* break things if they are different from what core Postgres is built with. Moreover, I came across a relevant data structure in perl.h: /* These are all the compile time options that affect binary compatibility. Other compile time options that are binary compatible are in perl.c Both are combined for the output of perl -V However, this string will be embedded in any shared perl library, which will allow us add a comparison check in perlmain.c in the near future. 
*/ #ifdef DOINIT EXTCONST char PL_bincompat_options[] = # ifdef DEBUG_LEAKING_SCALARS " DEBUG_LEAKING_SCALARS" # endif # ifdef DEBUG_LEAKING_SCALARS_FORK_DUMP " DEBUG_LEAKING_SCALARS_FORK_DUMP" # endif # ifdef FAKE_THREADS " FAKE_THREADS" # endif # ifdef MULTIPLICITY " MULTIPLICITY" # endif ... lots more ... Assuming that the Perl crew know what they're doing and this list is complete, I notice that not one of the symbols they show as relevant starts with an underscore. So I'm thinking that my previous semi- joking idea of absorbing only -D switches for names that *don't* start with an underscore was actually a good solution. If that turns out to not be enough of a filter, we could consider looking into perl.h to extract the set of symbols tested in building PL_bincompat_options and then intersecting that with what we get from Perl's ccflags. But that would be a lot more complex, so I'd rather go with the simpler filter rule for now. (BTW, you never did tell us exactly what defines you're getting out of Perl's flags on the problem installation.) regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PL_stashcache, or, what's our minimum Perl version?
I wrote: > So the question is, does anyone care? I wouldn't except that our > documentation appears to claim that we work with Perl "5.8 or later". BTW, what actually says that is installation.sgml: Perl 5.8 or later is needed to build from a Git checkout, or if you changed the input files for any of the build steps that use Perl scripts. If building on Windows you will need Perl in any case. Perl is also required to run some test suites. Strictly speaking, there is no statement anywhere (AFAICT) about what Perl versions PL/Perl works with. As an experiment, I built from a "make maintainer-clean" state using that 5.8.0 install, and it worked. So indeed installation.sgml's statement is correct as far as it goes. But I dunno what the situation is for the MSVC build scripts. I did not try the TAP tests either. A look in the buildfarm logs says that the oldest Perl version we're actually testing is 5.8.6 on prairiedog. (castoroides/protosciurus have 5.8.4 but they are not building --with-perl, so that only exercises the build scripts not plperl; they're not doing TAP either.) I only looked into configure.log results though, so I'm not sure about the Windows critters. I kinda suspect we're not actively testing non-MULTIPLICITY builds either. The 5.8.7 test I just ran was with a non-MULTIPLICITY build, so the case doesn't seem actively broken, but I doubt there is any buildfarm coverage. I wonder if it'd be worth getting the buildfarm to log the output of "perl -V" so we could get a clearer picture of what's being tested. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] PL_stashcache, or, what's our minimum Perl version?
I wanted to do some portability testing on the recently-proposed plperl changes, so I tried to build against a 5.8.0 Perl that I had laying about. It blew up real good: plperl.c: In function `plperl_trusted_init': plperl.c:1017: `PL_stashcache' undeclared (first use in this function) plperl.c:1017: (Each undeclared identifier is reported only once plperl.c:1017: for each function it appears in.) make: *** [plperl.o] Error 1 It's complaining about this: /* invalidate assorted caches */ ++PL_sub_generation; hv_clear(PL_stashcache); which was introduced by you in commit 1f474d299 (2010-05-13). Some digging suggests that PL_stashcache was added in Perl 5.8.1 circa 2003, although this is impressively underdocumented in any Perl changelog that I could find. So the question is, does anyone care? I wouldn't except that our documentation appears to claim that we work with Perl "5.8 or later". And the lack of field complaints suggests strongly that nobody else cares. So I'm inclined to think we just need to be more specific about the minimum Perl version --- but what exactly? Alternatively, can we make this work with 5.8.0? It looks like PL_stashcache is a macro, so we could make it compile with an "#ifdef PL_stashcache", but I'm pretty nervous about whether that would be breaking needed cleanup behavior. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] On Complex Source Code Reading Strategy
Peter Geoghegan <p...@bowt.ie> writes: > 2. Start somewhere. I have no idea where that should be, but it has to > be some particular place that seems interesting to you. Don't forget to start with the available documentation, ie https://www.postgresql.org/docs/devel/static/internals.html You should certainly read https://www.postgresql.org/docs/devel/static/overview.html and depending on what your interests are, there are probably other chapters of Part VII that are worth your time. Also keep an eye out for README files in the part of the source tree you're browsing in. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pl/perl extension fails on Windows
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes: > On 07/27/2017 04:33 PM, Tom Lane wrote: >> So I was trying to figure a way to not include XSUB.h except in a very >> limited part of plperl, like ideally just the .xs files. It's looking >> like that would take nontrivial refactoring though :-(. Another problem >> is that this critical bit of the library API is in XSUB.h: > That's the sort of thing that prompted me to ask what was the minimal > set of defines required to fix the original problem (assuming such a > thing exists) > We haven't used PERL_IMPLICIT_CONTEXT to date, and without ill effect. > For example. it's in the ExtUtils::Embed::ccopts for the perl that > jacana and bowerbird happily build and test against. Well, actually, PERL_IMPLICIT_CONTEXT is turned on automatically in any MULTIPLICITY build, and since it changes all the Perl ABIs, we *are* relying on it. However, after further study of the Perl docs I noticed that we could dispense with XSUB.h if we defined PERL_NO_GET_CONTEXT (which turns the quoted stanza into a no-op). That results in needing to sprinkle plperl.c with "dTHX" declarations, as explained in perlguts.pod. They're slightly tricky to place correctly, because they load up a pointer to the current Perl interpreter, so you have to be wary of where to put them in functions that change interpreters. But I seem to have it working in the attached patch. (One benefit of doing this extra work is that it should be a bit more efficient, in that we load up a Perl interpreter pointer only once per function called, not once per usage therein. We could remove many of those fetches too, if we wanted to sprinkle the plperl code with yet more THX droppings; but I left that for another day.) Armed with that, I got rid of XSUB.h in plperl.c and moved the PG_TRY-using functions in the .xs files to plperl.c. I think this would fix Ashutosh's problem, though I am not in a position to try it with a PERL_IMPLICIT_SYS build here. 
regards, tom lane diff --git a/src/pl/plperl/SPI.xs b/src/pl/plperl/SPI.xs index 0447c50..7651b62 100644 *** a/src/pl/plperl/SPI.xs --- b/src/pl/plperl/SPI.xs *** *** 15,52 #undef _ /* perl stuff */ #include "plperl.h" #include "plperl_helpers.h" - /* - * Interface routine to catch ereports and punt them to Perl - */ - static void - do_plperl_return_next(SV *sv) - { - MemoryContext oldcontext = CurrentMemoryContext; - - PG_TRY(); - { - plperl_return_next(sv); - } - PG_CATCH(); - { - ErrorData *edata; - - /* Must reset elog.c's state */ - MemoryContextSwitchTo(oldcontext); - edata = CopyErrorData(); - FlushErrorState(); - - /* Punt the error to Perl */ - croak_cstr(edata->message); - } - PG_END_TRY(); - } - - MODULE = PostgreSQL::InServer::SPI PREFIX = spi_ PROTOTYPES: ENABLE --- 15,25 #undef _ /* perl stuff */ + #define PG_NEED_PERL_XSUB_H #include "plperl.h" #include "plperl_helpers.h" MODULE = PostgreSQL::InServer::SPI PREFIX = spi_ PROTOTYPES: ENABLE *** void *** 76,82 spi_return_next(rv) SV *rv; CODE: ! do_plperl_return_next(rv); SV * spi_spi_query(sv) --- 49,55 spi_return_next(rv) SV *rv; CODE: ! plperl_return_next(rv); SV * spi_spi_query(sv) diff --git a/src/pl/plperl/Util.xs b/src/pl/plperl/Util.xs index dbba0d7..862fec4 100644 *** a/src/pl/plperl/Util.xs --- b/src/pl/plperl/Util.xs *** *** 20,67 #undef _ /* perl stuff */ #include "plperl.h" #include "plperl_helpers.h" - /* - * Implementation of plperl's elog() function - * - * If the error level is less than ERROR, we'll just emit the message and - * return. When it is ERROR, elog() will longjmp, which we catch and - * turn into a Perl croak(). Note we are assuming that elog() can't have - * any internal failures that are so bad as to require a transaction abort. - * - * This is out-of-line to suppress "might be clobbered by longjmp" warnings. 
- */ - static void - do_util_elog(int level, SV *msg) - { - MemoryContext oldcontext = CurrentMemoryContext; - char * volatile cmsg = NULL; - - PG_TRY(); - { - cmsg = sv2cstr(msg); - elog(level, "%s", cmsg); - pfree(cmsg); - } - PG_CATCH(); - { - ErrorData *edata; - - /* Must reset elog.c's state */ - MemoryContextSwitchTo(oldcontext); - edata = CopyErrorData(); - FlushErrorState(); - - if (cmsg) - pfree(cmsg); - - /* Punt the error to Perl */ - croak_cstr(edata->message); - } - PG_END_TRY(); - } static text * sv2text(SV *sv) --- 20,29 #undef _ /* perl stuff */ + #define PG_NEED_PERL_XSUB_H #include "plperl.h" #include "plperl_helpers.h" static text * sv2text(SV *sv) *** util
Re: [HACKERS] pl/perl extension fails on Windows
Robert Haas <robertmh...@gmail.com> writes: > How about we fix it like this? That seems pretty invasive; I'm not excited about breaking a lot of unrelated code (particularly third-party extensions) for plperl's benefit. Even if we wanted to do that in HEAD, it seems like a nonstarter for released branches. An even bigger issue is that if Perl feels free to redefine sigsetjmp, what other libc calls might they decide to horn in on? So I was trying to figure a way to not include XSUB.h except in a very limited part of plperl, like ideally just the .xs files. It's looking like that would take nontrivial refactoring though :-(. Another problem is that this critical bit of the library API is in XSUB.h: #if defined(PERL_IMPLICIT_CONTEXT) && !defined(PERL_NO_GET_CONTEXT) && !defined(PERL_CORE) # undef aTHX # undef aTHX_ # define aTHX PERL_GET_THX # define aTHX_ aTHX, #endif As best I can tell, that's absolute brain death on the part of the Perl crew; it means you can't write working calling code at all without including XSUB.h, or at least copying-and-pasting this bit out of it. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Change in "policy" on dump ordering?
Robert Haas <robertmh...@gmail.com> writes: > On Wed, Jul 26, 2017 at 2:41 PM, Tom Lane <t...@sss.pgh.pa.us> wrote: >> The bigger issue is whether there's some failure case this would cause >> that I'm missing altogether. Thoughts? > I think dependencies are fundamentally the right model for this sort > of problem. I can't imagine what could go wrong that wouldn't amount > to a failure to insert all of the right dependencies, and thus be > fixable by inserting whatever was missing. I tend to agree. One fairly obvious risk factor is that we end up with circular dependencies, but in that case we have conflicting requirements and we're gonna have to decide what to do about them no matter what. Another potential issue is pg_dump performance; this could result in a large increase in the number of DumpableObjects and dependency links. The topological sort is O(N log N), so we shouldn't hit any serious big-O problems, but even a more-or-less-linear slowdown might be problematic for some people. On the whole though, I believe pg_dump is mostly limited by its server queries, and that would probably still be true. But the big point: even if we had the code for this ready-to-go, I'd be hesitant to inject it into v10 at this point, let alone back-patch. If we go down this path we won't be seeing a fix for the matview ordering problem before v11 (at best). Maybe that's acceptable considering it's been there for several years already, but it's annoying. So I'm thinking we should consider the multi-pass patch I posted as a short-term, backpatchable workaround, which we could hope to replace with dependency logic later. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pl/perl extension fails on Windows
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes: > What is the minimal set of extra defines required to sort out the > handshake fingerprint issue? Also, exactly what defines do you end up importing in your test build? regards, tom lane
Re: [HACKERS] tab completion for "create user mapping for"
Jeff Janes <jeff.ja...@gmail.com> writes: > If you have "CREATE USE" tab completion will recommend both USER and USER > MAPPING FOR. > But once you have the full "CREATE USER " or "CREATE USER M" it will not > complete the rest of the "MAPPING FOR". > Attached patch fixes that. Pushed, thanks. regards, tom lane
Re: [HACKERS] psql's \d and \dt are sending their complaints to different output files
Daniel Gustafsson <dan...@yesql.se> writes: >> On 19 Jun 2017, at 17:32, Tom Lane <t...@sss.pgh.pa.us> wrote: >> So, if we're getting into enforcing consistency in describe.c, there's >> lots to do. > Addressed in attached patch, see list of patches below. I've pushed most of this. There are a couple of remaining inconsistencies: listTables and listDbRoleSettings print a custom message rather than an empty table for no matches (but in QUIET mode they just do the latter). I think that's actually a good decision for listDbRoleSettings, because the user might be confused about which pattern is which, and we can straighten him out with a custom error message. But the only good argument for listTables behaving that way is that people are used to it. Should we override backwards compatibility to the extent of dropping the custom messages in listTables, and just printing an empty table-of-tables? > 0004 - Most verbose metacommands include additional information on top of what > is in the normal output, while \dRp+ dropped two columns (moving one to the > title). This include the information from \dRp in \dRp+. Having the pubname > in the title as well as the table is perhaps superfuous, but consistent with > other commands so opted for it. I'm not sure about this one. It definitely seems bogus that \dRp+ is omitting the owner column, but I'm less excited about duplicating the pubname. We'd better make up our minds before v10 freezes, though. Anybody else have an opinion? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pl/perl extension fails on Windows
Ashutosh Sharma <ashu.coe...@gmail.com> writes: > Anyways, attached is the patch that corrects this issue. The patch now > imports all the switches used by perl into plperl module but, after > doing so, i am seeing some compilation errors on Windows. Following is > the error observed, > SPI.obj : error LNK2019: unresolved external symbol PerlProc_setjmp > referenced in function do_plperl_return_next That's certainly a mess, but how come that wasn't happening before? regards, tom lane
Re: [HACKERS] expand_dbname in postgres_fdw
Arseny Sher <a.s...@postgrespro.ru> writes: > Attached patch allows dbname expansion and makes sure that it doesn't > contain any invalid options. I'm pretty much against this in principle. It complicates both the code and the conceptual API, for no serious gain, even if you take it on faith that it doesn't and never will produce any security issues. > Whether you decide to commit it or not > (at the moment I don't see any security implications, at least not more than > in usual dbname expansion usage, e.g. in psql, but who knows), it seems > to me that the documentation should be updated since currently it is not > clear on the subject, as the beginning of this thread proves. I really don't see anything wrong with the FDW's documentation. To claim that it's not clear, you have to suppose that a connstring's dbname field is allowed to recursively contain a connstring. However, if you've got a concrete suggestion about improving the wording, let's see it. Now on the other hand, libpq's documentation seems a little confusing on this point independently of the FDW: so far as I can see, what "certain contexts" means is not clearly defined anywhere, and for that matter "checked for extended formats" is a masterpiece of unhelpful obfuscation. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] asynchronous execution
Robert Haas <robertmh...@gmail.com> writes: > Ostensibly, the advantage of this framework over my previous proposal > is that it avoids inserting anything into ExecProcNode(), which is > probably a good thing to avoid given how frequently ExecProcNode() is > called. Unless the parent and the child both know about asynchronous > execution and choose to use it, everything runs exactly as it does > today and so there is no possibility of a complaint about a > performance hit. As far as it goes, that is good. > However, at a deeper level, I fear we haven't really solved the > problem. If an Append is directly on top of a ForeignScan node, then > this will work. But if an Append is indirectly on top of a > ForeignScan node with some other stuff in the middle, then it won't - > unless we make whichever nodes appear between the Append and the > ForeignScan async-capable. I have not been paying any attention to this thread whatsoever, but I wonder if you can address your problem by building on top of the ExecProcNode replacement that Andres is working on, https://www.postgresql.org/message-id/20170726012641.bmhfcp5ajpudi...@alap3.anarazel.de The scheme he has allows $extra_stuff to be injected into ExecProcNode at no cost when $extra_stuff is not needed, because you simply don't insert the wrapper function when it's not needed. I'm not sure that it will scale well to several different kinds of insertions though, for instance if you wanted both instrumentation and async support on the same node. But maybe those two couldn't be arms-length from each other anyway, in which case it might be fine as-is. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_dump does not handle indirectly-granted permissions properly
Stephen Frost <sfr...@snowman.net> writes: > * Tom Lane (t...@sss.pgh.pa.us) wrote: >> AFAICT, pg_dump has no notion that it needs to be careful about the order >> in which permissions are granted. I did > I'm afraid that's correct, though I believe that's always been the case. > I spent some time looking into this today and from what I've gathered so > far, there's essentially an implicit (or at least, I couldn't find any > explicit reference to it) ordering in ACLs whereby a role which was > given a GRANT OPTION always appears in the ACL list before an ACL entry > where that role is GRANT'ing that option to another role, and this is > what pg_dump was (again, implicitly, it seems) depending on to get this > correct in prior versions. Yeah, I suspected that was what made it work before. I think the ordering just comes from the fact that new ACLs are added to the end, and you can't add an entry as a non-owner unless there's a grant WGO there already. > Pulling apart the ACL list and rebuilding it to handle adding/revoking > of default options on objects ends up, in some cases, changing the > ordering around for the ACLs and that's how pg_dump comes to emit the > GRANT commands in the wrong order. Right. > I don't, at the moment, think we actually need to do any checks in the > backend code to make sure that the implicit ordering is always held, > assuming we make this change in pg_dump. I do wonder if it might be > possible, with the correct set of GRANTs (perhaps having role > memberships coming into play also, as discussed in the header of > select_best_grantor()) to generate an ACL list with an "incorrect" > ordering which would end up causing issues in older releases with > pg_dump. We've had precious little complaints from the field about this > and so, even if we were to generate such a case, I'm not sure that we'd > want to add all the code necessary to avoid it and then back-patch it. 
I suspect it would be easier to modify the backend code that munges ACL lists so that it takes care to preserve that property, if we ever find there are cases where it does not. It seems plausible to me that pg_dump is not the only code that depends on that ordering. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes: > On 2017-07-26 15:03:37 -0400, Tom Lane wrote: >> Hm, that seems kinda backwards to me; I was envisioning the checks >> moving to the callees not the callers. I think it'd end up being >> about the same number of occurrences of CHECK_FOR_INTERRUPTS(), >> and there would be less of a judgment call about where to put them. > Hm, that seems a bit riskier - easy to forget one of the places where we > might need a CFI(). I would argue the contrary. If we put a CFI at the head of each node execution function, then it's just boilerplate that you copy-and-paste when you invent a new node type. The way you've coded it here, it seems to involve a lot of judgment calls. That's very far from being copy and paste, and the more different it looks from one node type to another, the easier it will be to forget it. > We certainly are missing a bunch of them in various nodes It's certainly possible that there are long-running loops not involving any ExecProcNode recursion at all, but that would be a bug independent of this issue. The CFI in ExecProcNode itself can be replaced exactly either by asking all callers to do it, or by asking all callees to do it. I think the latter is going to be more uniform and harder to screw up. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes: > I've moved the CHECK_FOR_INTERRUPTS() to the callsites. That > unsurprisingly ends up being somewhat verbose, and there's a bunch of > minor judgement calls where exactly to place them. While doing so I've > also added a few extra ones. Did this in a separate patch to make it > easier to review. Hm, that seems kinda backwards to me; I was envisioning the checks moving to the callees not the callers. I think it'd end up being about the same number of occurrences of CHECK_FOR_INTERRUPTS(), and there would be less of a judgment call about where to put them. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] expand_dbname in postgres_fdw
Robert Haas <robertmh...@gmail.com> writes: > On Wed, Jul 26, 2017 at 5:38 AM, Ashutosh Bapat > <ashutosh.ba...@enterprisedb.com> wrote: >> According to F.34.1.1 at [1] passing connection string as dbname >> option should work, so your question is valid. I am not aware of any >> discussion around this on hackers. > I kind of wonder if this had some security aspect to it? But not sure. The main problem to my mind is that a connection string could possibly override items meant to be specified elsewhere. In particular it ought not be allowed to specify the remote username or password, because those are supposed to come from the user mapping object not the server object. I suspect you could break things by trying to specify client_encoding there, as well. In any case, I entirely reject the argument that the existing documentation says this should work. It says that you can specify (most of) the same fields that are allowed in a connection string, not that one of those fields might be taken to *be* a connection string. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Change in "policy" on dump ordering?
I thought of a quite different way that we might attack this problem, but I'm not sure if it's worth pursuing or not. The idea is basically that we should get rid of the existing kluge for pushing ACLs to the end altogether, and instead rely on dependency sorting to make things work. This'd require some significant surgery on pg_dump, but it seems doable: * Make ACLs have actual DumpableObject representations that participate in the topological sort. * Add more explicit dependencies between these objects and other ones. For example, we'd fix the matview-permissions problem by adding dependencies not only on the tables a matview references, but on their ACLs. Another case is that we'd want to ensure that a table's ACL comes out after the table data, so as to solve the original problem that the current behavior was meant to deal with, ie allowing restore of tables even if they've been made read-only by revoking the owner's INSERT privilege. But I think that case would best be dealt with by forcing table ACLs to be post-data objects. (Otherwise they might come out in the middle "data" section of a restore, which is likely to break some client assumptions somewhere.) Another variant of that is that table ACLs probably need to come out after CREATE TRIGGER and foreign-key constraints, in case an owner has revoked their own TRIGGER or REFERENCES privilege. This seems potentially doable, but I sure don't see it coming out small enough to be a back-patchable fix; nor would it make things work for existing archive files, only new dumps. In fact, if we don't want to break parallel restore for existing dump files, we'd probably still have to implement the postpone-all-ACLs rule when dealing with an old dump file. (But maybe we could blow off that case, and just say you might have to not use parallel restore with old dumps that contain self-revoke ACLs?) The bigger issue is whether there's some failure case this would cause that I'm missing altogether. Thoughts? 
regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_dump does not handle indirectly-granted permissions properly
... btw, while you're working on this, it'd be nice if you fixed the header comment for dumpACL(). It is unintelligible as to what racls is, and apparently feels that it need not discuss initacls or initracls at all. I can't say that the reference to "fooacl" is really obvious either. regards, tom lane
Re: [HACKERS] Change in "policy" on dump ordering?
Jordan Gigov <colad...@gmail.com> writes: > But why should a superuser need the ACL to be applied before being allowed > access? If you make the permission-checking function check if the user is a > superuser before looking for per-user grants, wouldn't that solve the issue? The superuser's permissions are not relevant, because the materialized view is run with the permissions of its owner, not the superuser. We are not going to consider changing that, either, because it would open trivial-to-exploit security holes (any user could set up a trojan horse matview and just wait for the next pg_upgrade or dump/restore). regards, tom lane
Re: [HACKERS] AlterUserStmt and RoleSpec rules in grammar.y
Pavel Golub <pa...@microolap.com> writes: > I need someone to throw some light on grammar (gram.y). > I'm investigating beta2 regression tests, and found new statement > `ALTER USER ALL SET application_name to 'SLAP';` > ^^^ You'll notice that that statement fails in the regression tests: ALTER USER ALL SET application_name to 'SLAP'; ERROR: syntax error at or near "ALL" The one that works is ALTER ROLE ALL SET application_name to 'SLAP'; and the reason is that AlterRoleSetStmt has a separate production for ALL, but AlterUserSetStmt doesn't. This seems a tad bizarre though. Peter, you added that production (in commit 9475db3a4); is this difference intentional or just an oversight? If it's intentional, what's the reasoning? BTW, I'm quite confused as to why these test cases (in rolenames.sql) seem to predate that commit, and yet it did not change their results. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Testlib.pm vs msys
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> On 07/26/2017 09:33 AM, Tom Lane wrote:
>> It might be interesting to look into its copy of IPC::Run and see if
>> the wait technology is different from later versions.
> It's using 0.94 just like my Linux box.

Oh well, I hoped maybe we could update that.

>> I still find this pretty ugly :-(.
> I'm open to alternative suggestions. But I don't want to spend a heck of
> a lot of time on this.

Well, let's at least do the temp files a little more safely. Maybe like this?

regards, tom lane

diff --git a/src/bin/pg_ctl/t/001_start_stop.pl b/src/bin/pg_ctl/t/001_start_stop.pl
index 9c3551f..3acc80e 100644
--- a/src/bin/pg_ctl/t/001_start_stop.pl
+++ b/src/bin/pg_ctl/t/001_start_stop.pl
@@ -32,9 +32,17 @@ else
 	print $conf "listen_addresses = '127.0.0.1'\n";
 }
 close $conf;
-command_like([ 'pg_ctl', 'start', '-D', "$tempdir/data",
-		'-l', "$TestLib::log_path/001_start_stop_server.log" ],
-	qr/done.*server started/s, 'pg_ctl start');
+my $ctlcmd = [ 'pg_ctl', 'start', '-D', "$tempdir/data",
+	'-l', "$TestLib::log_path/001_start_stop_server.log" ];
+if ($Config{osname} ne 'msys')
+{
+	command_like($ctlcmd, qr/done.*server started/s, 'pg_ctl start');
+}
+else
+{
+	# use the version of command_like that doesn't hang on Msys here
+	command_like_safe($ctlcmd, qr/done.*server started/s, 'pg_ctl start');
+}
 
 # sleep here is because Windows builds can't check postmaster.pid exactly,
 # so they may mistake a pre-existing postmaster.pid for one created by the
diff --git a/src/test/perl/TestLib.pm b/src/test/perl/TestLib.pm
index fe09689..758c920 100644
--- a/src/test/perl/TestLib.pm
+++ b/src/test/perl/TestLib.pm
@@ -37,6 +37,7 @@ our @EXPORT = qw(
 	program_version_ok
 	program_options_handling_ok
 	command_like
+	command_like_safe
 	command_fails_like
 
 	$windows_os
@@ -300,6 +301,24 @@ sub command_like
 	like($stdout, $expected_stdout, "$test_name: matches");
 }
 
+sub command_like_safe
+{
+	# Doesn't rely on detecting end of file on the file descriptors,
+	# which can fail, causing the process to hang, notably on Msys
+	# when used with 'pg_ctl start'
+	my ($cmd, $expected_stdout, $test_name) = @_;
+	my ($stdout, $stderr);
+	my $stdoutfile = File::Temp->new();
+	my $stderrfile = File::Temp->new();
+	print("# Running: " . join(" ", @{$cmd}) . "\n");
+	my $result = IPC::Run::run $cmd, '>', $stdoutfile, '2>', $stderrfile;
+	$stdout = slurp_file($stdoutfile);
+	$stderr = slurp_file($stderrfile);
+	ok($result, "$test_name: exit code 0");
+	is($stderr, '', "$test_name: no stderr");
+	like($stdout, $expected_stdout, "$test_name: matches");
+}
+
 sub command_fails_like
 {
 	my ($cmd, $expected_stderr, $test_name) = @_;
Re: [HACKERS] pg_upgrade failed with error - ERROR: column "a" in child table must be marked NOT NULL
Michael Paquier <michael.paqu...@gmail.com> writes:
> The documentation says the following:
> https://www.postgresql.org/docs/9.3/static/ddl-inherit.html
> All check constraints and not-null constraints on a parent table are
> automatically inherited by its children. Other types of constraints
> (unique, primary key, and foreign key constraints) are not inherited.
> When adding a primary key, the system does first SET NOT NULL on each
> column under the covers, but this does not get spread to the child tables
> as this is a PRIMARY KEY. The question is, do we want to spread the
> NOT NULL constraint created by the PRIMARY KEY command to the child
> tables, or not?

Yeah, I think we really ought to for consistency. Quite aside from pg_dump hazards, this allows situations where selecting from an apparently not-null column can produce nulls (coming from unconstrained child tables). That's exactly what the automatic inheritance rules are intended to prevent.

If we had a way to mark NOT NULL constraints as not-inherited, it'd be legitimate for ADD PRIMARY KEY to paste on one of those instead of a regular one ... but we lack that feature, so it's moot.

Not sure what's involved there code-wise, though, nor whether we'd want to back-patch.

regards, tom lane
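A minimal sketch of the hazard described above (hypothetical table names; this reflects the behavior under discussion, where the NOT NULL implied by ADD PRIMARY KEY is not propagated to children):

```sql
CREATE TABLE parent (a int);
CREATE TABLE child () INHERITS (parent);

-- Implicitly does SET NOT NULL on parent.a only; child.a is unaffected:
ALTER TABLE parent ADD PRIMARY KEY (a);

INSERT INTO child VALUES (NULL);   -- allowed

-- Scanning the apparently not-null column through the parent yields a NULL:
SELECT a FROM parent;
```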
Re: [HACKERS] Testlib.pm vs msys
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> This made no difference. And I'm not really surprised, since the test is
> not hanging when run from an MSVC build.

Oh well.

> The latter fact makes me
> theorize that the problem arises from the fairly ancient perl that Msys
> provides, and which we need to use to run prove so the TAP tests
> understand the environment's virtualized file paths.

It might be interesting to look into its copy of IPC::Run and see if the wait technology is different from later versions.

> The attached patch should get around the problem without upsetting the
> good work you've been doing in reducing TAP test times generally.

I still find this pretty ugly :-(.

regards, tom lane
Re: [HACKERS] Change in "policy" on dump ordering?
Robert Haas <robertmh...@gmail.com> writes:
> On Tue, Jul 25, 2017 at 10:45 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Instead, I've prepared the attached draft patch, which addresses the
>> problem by teaching pg_backup_archiver.c to process TOC entries in
>> three separate passes, "main" then ACLs then matview refreshes.
> What worries me a bit is the possibility that something might depend
> on a matview having already been refreshed. I cannot immediately
> think of a case where such a thing happens that you won't dismiss as
> wrong-headed, but how sure are we that none such will arise?

Um, there are already precisely zero guarantees about that. pg_dump has always been free to order these actions in any way that satisfies the dependencies it knows about. It's certainly possible to break it by introducing hidden dependencies within CHECK conditions. But that's always been true, with or without materialized views, and we've always dismissed complaints about it with "sorry, we won't support that". (I don't think the trigger case is such a problem, because we restore data before creating any triggers.)

Meanwhile, we have problems that bite people who aren't doing anything stranger than having a matview owned by a non-superuser. How do you propose to fix that without reordering pg_dump's actions?

Lastly, the proposed patch has the advantage that it acts at the restore stage, not when a dump is being prepared. Since it isn't affecting what goes into dump archives, it doesn't tie our hands if we think of some better idea later.

regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Robert Haas <robertmh...@gmail.com> writes:
> Based on discussion downthread, it seems like what we actually need to
> do is update perl.m4 to extract CCFLAGS. Turns out somebody proposed
> a patch for that back in 2002:
> https://www.postgresql.org/message-id/Pine.LNX.4.44.0211051045070.16317-20%40wotan.suse.de
> It seems to need a rebase. :-)

Ah-hah, I *thought* we had considered the question once upon a time. There were some pretty substantial compatibility concerns raised in that thread, which is doubtless why it's still like that.

My beef about inter-compiler compatibility (if building PG with a different compiler from that used for Perl) could probably be addressed by absorbing only -D switches from the Perl flags. But Peter seemed to feel that even that could break things, and I worry that he's right for cases like -D_FILE_OFFSET_BITS which affect libc APIs. Usually we'd have made the same decisions as Perl for that sort of thing, but if we didn't, it's a mess.

I wonder whether we could adopt some rule like "absorb -D switches for macros whose names do not begin with an underscore". That's surely a hack and three-quarters, but it seems safer than just absorbing everything willy-nilly.

regards, tom lane
Re: [HACKERS] Change in "policy" on dump ordering?
I wrote:
> The main problem with Kevin's fix, after thinking about it more, is that
> it shoves matview refresh commands into the same final processing phase
> where ACLs are done, which means that in a parallel restore they will not
> be done in parallel. That seems like a pretty serious objection, although
> maybe not so serious that we'd be willing to accept a major rewrite in the
> back branches to avoid it.

> I'm wondering at this point about having restore create a fake DO_ACLS
> object (fake in the sense that it isn't in the dump file) that would
> participate normally in the dependency sort, and which we'd give a
> priority before matview refreshes but after everything else. "Restore"
> of that object would perform the same operation we do now of running
> through the whole TOC and emitting grants/revokes. So it couldn't be
> parallelized in itself (at least not without an additional batch of work)
> but it could be treated as an indivisible parallelized task, and then the
> matview refreshes could be parallelizable tasks after that.

> There's also Peter's proposal of splitting up GRANTs from REVOKEs and
> putting only the latter at the end. I'm not quite convinced that that's
> a good idea but it certainly merits consideration.

After studying things for awhile, I've concluded that that last option is probably not workable. ACL items contain a blob of SQL that would be tricky to pull apart, and is both version and options dependent, and contains ordering dependencies that seem likely to defeat any desire to put the REVOKEs last anyway.

Instead, I've prepared the attached draft patch, which addresses the problem by teaching pg_backup_archiver.c to process TOC entries in three separate passes, "main" then ACLs then matview refreshes. It's survived light testing but could doubtless use further review.
Another way we could attack this is to adopt something similar to the PRE_DATA_BOUNDARY/POST_DATA_BOUNDARY mechanism; that is, invent more dummy section boundary objects, add dependencies sufficient to constrain all TOC objects to be before or after the appropriate boundaries, and then let the dependency sort go at it. But I think that way is probably more expensive than this one, and it doesn't have any real advantage if there's not a potential for circular dependencies that need to be broken. If somebody else wants to try drafting a patch like that, I won't stand in the way, but I don't wanna do so. Not clear where we want to go from here. Should we try to get this into next month's minor releases, or review it in September's commitfest and back-patch after that? regards, tom lane diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h index 6123859..3687687 100644 *** a/src/bin/pg_dump/pg_backup_archiver.h --- b/src/bin/pg_dump/pg_backup_archiver.h *** typedef enum *** 203,208 --- 203,230 OUTPUT_OTHERDATA /* writing data as INSERT commands */ } ArchiverOutput; + /* + * For historical reasons, ACL items are interspersed with everything else in + * a dump file's TOC; typically they're right after the object they're for. + * However, we need to restore data before ACLs, as otherwise a read-only + * table (ie one where the owner has revoked her own INSERT privilege) causes + * data restore failures. On the other hand, matview REFRESH commands should + * come out after ACLs, as otherwise non-superuser-owned matviews might not + * be able to execute. (If the permissions at the time of dumping would not + * allow a REFRESH, too bad; we won't fix that for you.) These considerations + * force us to make three passes over the TOC, restoring the appropriate + * subset of items in each pass. We assume that the dependency sort resulted + * in an appropriate ordering of items within each subset. 
+ */ + typedef enum + { + RESTORE_PASS_MAIN = 0, /* Main pass (most TOC item types) */ + RESTORE_PASS_ACL, /* ACL item types */ + RESTORE_PASS_REFRESH /* Matview REFRESH items */ + + #define RESTORE_PASS_LAST RESTORE_PASS_REFRESH + } RestorePass; + typedef enum { REQ_SCHEMA = 0x01, /* want schema */ *** struct _archiveHandle *** 329,334 --- 351,357 int noTocComments; ArchiverStage stage; ArchiverStage lastErrorStage; + RestorePass restorePass; /* used only during parallel restore */ struct _tocEntry *currentTE; struct _tocEntry *lastErrorTE; }; diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c index f461692..4cfb71c 100644 *** a/src/bin/pg_dump/pg_backup_archiver.c --- b/src/bin/pg_dump/pg_backup_archiver.c *** static ArchiveHandle *_allocAH(const cha *** 58,64 SetupWorkerPtrType setupWorkerPtr); static void _getObjectDescription(PQExpBuffer buf, TocEntry *te, ArchiveHandle *AH); ! static void
Re: [HACKERS] pg_dump does not handle indirectly-granted permissions properly
Stephen Frost <sfr...@snowman.net> writes:
> On Tue, Jul 25, 2017 at 20:29 Thom Brown <t...@linux.com> wrote:
>> I should point out that this commit was made during the 9.6 cycle, and
>> I get the same issue with 9.6.
> Interesting that Tom didn't. Still, that does make more sense to me.

Yeah, it makes more sense to me too, but nonetheless that's the result I get. I suspect there is an element of indeterminacy here.

regards, tom lane
Re: [HACKERS] PostgreSQL 10 (latest beta) and older ICU
Victor Wagner <vi...@wagner.pp.ru> writes:
> PostgreSQL 10 when built with --with-icu requires ICU 4.6 and above.
> (because it searches for libicu using pkgconfig, and pkgconfig support
> significantly changed in ICU version 4.6)
> However, there are some reasons to build PostgreSQL with older versions
> of ICU.

No doubt, but making that happen seems like a feature, and IMO we're too late in the v10 beta test cycle to be taking new features on board, especially ones with inherently-large portability risks. We could maybe consider patches for this for v11 ... but will anyone still care by then?

(Concretely, my concern is that the alpha and beta testing that's happened so far hasn't covered pre-4.6 ICU at all. I'd be surprised if the incompatibility you found so far is the only one.)

regards, tom lane
[HACKERS] pg_dump does not handle indirectly-granted permissions properly
AFAICT, pg_dump has no notion that it needs to be careful about the order in which permissions are granted. I did

regression=# create user joe;
CREATE ROLE
regression=# create user bob;
CREATE ROLE
regression=# create user alice;
CREATE ROLE
regression=# \c - joe
You are now connected to database "regression" as user "joe".
regression=> create table joestable(f1 int);
CREATE TABLE
regression=> grant select on joestable to alice with grant option;
GRANT
regression=> \c - alice
You are now connected to database "regression" as user "alice".
regression=> grant select on joestable to bob;
GRANT

and then pg_dump'd that. The ACL entry for joestable looks like this:

--
-- TOC entry 5642 (class 0 OID 0)
-- Dependencies: 606
-- Name: joestable; Type: ACL; Schema: public; Owner: joe
--

SET SESSION AUTHORIZATION alice;
GRANT SELECT ON TABLE joestable TO bob;
RESET SESSION AUTHORIZATION;

GRANT SELECT ON TABLE joestable TO alice WITH GRANT OPTION;

Unsurprisingly, that fails to restore:

db2=# SET SESSION AUTHORIZATION alice;
SET
db2=> GRANT SELECT ON TABLE joestable TO bob;
ERROR:  permission denied for relation joestable

I am not certain whether this is a newly introduced bug or not. However, I tried it in 9.2-9.6, and all of them produce the GRANTs in the right order:

GRANT SELECT ON TABLE joestable TO alice WITH GRANT OPTION;
SET SESSION AUTHORIZATION alice;
GRANT SELECT ON TABLE joestable TO bob;
RESET SESSION AUTHORIZATION;

That might be just chance, but my current bet is that Stephen broke it sometime in the v10 cycle --- especially since we haven't heard any complaints like this from the field.

regards, tom lane
Re: [HACKERS] Create language syntax is not proper in pg_dumpall and not working using pg_upgrade
"Joshua D. Drake" <j...@commandprompt.com> writes: > Isn't the simplest solution just to actually document this? Given that it's behaved this way since 8.1 (maybe earlier, I'm not sure), and nobody has complained before, I'm not sure it's worth documenting. There are lots of undocumented behaviors in PG; if we tried to explain every one of them, the docs would become unusably bloated. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Create language syntax is not proper in pg_dumpall and not working using pg_upgrade
Andres Freund <and...@anarazel.de> writes:
> I'm not sure what you're arguing for here.

Robert wants perfection, of course ;-). As do we all. But there are only so many hours in the day, and rejiggering pg_dump's rules about how to dump PLs is just way down the to-do list. I'm going to go do something with more tangible benefit, like see if we can make its REFRESH MATERIALIZED VIEW commands come out at the right time.

regards, tom lane
Re: [HACKERS] Testlib.pm vs msys
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> On 07/25/2017 11:25 AM, Tom Lane wrote:
>> Oh. So when was the last time it worked? And why do the buildfarm
>> archives contain apparently-successful log_stage entries for pg_ctl-check
>> on jacana, up through yesterday when I looked?
> There was a buildfarm bug, since corrected, which was not properly
> picking up errors in a corner case. You will see the errors if you look
> at all the logs for pg_ctl-check from 12 days back if they exist. The
> last known actual good run of this test was 33 days ago, i.e. 2017-06-22

Hm. This failure presumably started with commit f13ea95f9, which introduced use of command_like() in the pg_ctl TAP test; but that didn't go in until 2017-06-28. Was jacana not running in the interim?

Anyway, if we believe that it broke with f13ea95f9, then it's plausible that CMD.EXE has been sharing pg_ctl's stdout/stderr all along, and we just had not noticed before. Could you check what happens if we change the bInheritHandles parameter as I suggested upthread?

regards, tom lane
Re: [HACKERS] Create language syntax is not proper in pg_dumpall and not working using pg_upgrade
Robert Haas <robertmh...@gmail.com> writes:
> On Tue, Jul 25, 2017 at 10:31 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> pg_dump doesn't really support that scenario, and I don't feel any
>> great need to make it do so. Per the comment in dumpProcLang:
> Is this assumption, like, documented someplace?

Uh, right there?

> I would be on board with the idea that you (or anyone, really) doesn't
> want to fix this because it's a fairly unimportant issue, but I balk
> at the notion that nothing is wrong here, because to me that looks
> busted.

Well, it's not just unimportant but smack in the middle of code that is treading a very narrow path to avoid assorted version dependencies. I don't want to risk breaking cases that are actually important in the field to support something that's obviously a toy test case.

We might be able to make some simplification/rationalization here whenever we desupport pg_dump from < 8.1 servers (ie pre pg_pltemplate). But right now I'm afraid to touch it.

regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Robert Haas <robertmh...@gmail.com> writes:
> On Tue, Jul 25, 2017 at 11:00 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Hm, I had the idea that we were already asking ExtUtils::Embed for that,
>> but now I see we only inquire about LDFLAGS not CCFLAGS. Yes, this sounds
>> like a promising avenue to pursue.
>>
>> It would be useful to see the results of
>> perl -MExtUtils::Embed -e ccopts
>> on one of the affected installations, and compare that to the problematic
>> field(s).
> Why ccopts rather than ccflags?

I was looking at the current code which fetches ldopts, and analogizing. Don't know the difference between ccflags and ccopts.

regards, tom lane
Re: [HACKERS] Testlib.pm vs msys
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> On 07/25/2017 11:11 AM, Tom Lane wrote:
>> What I'm on about is that jacana still succeeds entirely, more than half
>> the time. If this is a handle-duplication problem, how could it ever
>> succeed?
> No, it hangs 100% of the time. The only time you see a result at all is
> if I manually intervene. The pg_ctl test is thus currently disabled on
> jacana.

Oh. So when was the last time it worked? And why do the buildfarm archives contain apparently-successful log_stage entries for pg_ctl-check on jacana, up through yesterday when I looked?

regards, tom lane
Re: [HACKERS] Testlib.pm vs msys
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> On 07/24/2017 09:33 PM, Tom Lane wrote:
>> Seems like the TAP test should fail every time, if this is a full
>> description. But maybe it's part of the answer, so it seems worth
>> experimenting in this direction.
> The test that hangs is the only case where we call pg_ctl via
> command_like. If we use other mechanisms such as command_ok that don't
> try to read the output there is no problem.

What I'm on about is that jacana still succeeds entirely, more than half the time. If this is a handle-duplication problem, how could it ever succeed?

regards, tom lane
Re: [HACKERS] VACUUM and ANALYZE disagreeing on what reltuples means
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
> On 7/25/17 12:55 AM, Tom Lane wrote:
>> I think the planner basically assumes that reltuples is the live
>> tuple count, so maybe we'd better change VACUUM to get in step.
> Attached is a patch that (I think) does just that. The disagreement was
> caused by VACUUM treating recently dead tuples as live, while ANALYZE
> treats both of those as dead.
> At first I was worried that this will negatively affect plans in the
> long-running transaction, as it will get underestimates (due to
> reltuples not including rows it can see). But that's a problem we
> already have anyway, you just need to run ANALYZE in the other session.

This definitely will have some impact on plans, at least in cases where there's a significant number of unvacuumable dead tuples. So I think it's a bit late for v10, and I wouldn't want to back-patch at all. Please add to the next commitfest.

regards, tom lane
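One rough way to observe the disagreement being discussed (a sketch, not from the thread: it assumes a second session holds an open transaction so the deleted rows are dead but not yet removable by VACUUM, and the counts are approximate):

```sql
CREATE TABLE t AS SELECT g FROM generate_series(1, 100000) g;
DELETE FROM t WHERE g <= 50000;

-- With another session's transaction still open:
VACUUM t;
SELECT reltuples FROM pg_class WHERE relname = 't';  -- counts the dead rows too
ANALYZE t;
SELECT reltuples FROM pg_class WHERE relname = 't';  -- live rows only
```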
Re: [HACKERS] pl/perl extension fails on Windows
Robert Haas <robertmh...@gmail.com> writes:
> Perl also has a mechanism for flags added to Configure to be passed
> along when building loadable modules; if it didn't, not just plperl
> but every Perl module written in C would have this issue if any such
> flags were used.
> ...
> While I'm not sure of the details, I suspect that we need to use one
> of those methods to get the CCFLAGS used to build perl, and include
> those when building SPI.o, Util.o, and plperl.o in src/pl/plperl. Or at least
> the -D switches from those CCFLAGS.

Hm, I had the idea that we were already asking ExtUtils::Embed for that, but now I see we only inquire about LDFLAGS not CCFLAGS. Yes, this sounds like a promising avenue to pursue.

It would be useful to see the results of

perl -MExtUtils::Embed -e ccopts

on one of the affected installations, and compare that to the problematic field(s).

regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> No amount of checking with the Perl community is likely to resolve this
> quickly w.r.t. existing releases of Perl.

Yes, but if they are shipping broken perl builds that cannot support building of extension modules, they need to be made aware of that. If that *isn't* the explanation, then we need to find out what is.

regards, tom lane
Re: [HACKERS] estimation for join results cardinality is sometimes more than the product of the downstream nodes'
Alexey Bashtanov <bashta...@imap.cc> writes:
> Is there a reason that join cardinality estimates are not limited by the
> product of the joined parts' cardinalities like in the
> join-card-est.patch attached?

Because that would be giving an unfair advantage to some paths over others based on nothing except estimation errors. I do not think we'd get a net benefit in plan quality.

If we could do this earlier and adjust the join relation's overall cardinality estimate, it might be something to consider.

regards, tom lane
Re: [HACKERS] pl/perl extension fails on Windows
Amit Kapila <amit.kapil...@gmail.com> writes:
> I think the real question is where do we go from here. Ashutosh has
> proposed a patch up-thread based on a suggestion from Andrew, but it
> is not clear if we want to go there as that seems to be bypassing
> the handshake mechanism.

That definitely seems like the wrong route to me. If the resulting code works, it would at best be accidental.

> The other tests and analysis seem to indicate
> that the new version Perl binaries on Windows are getting built with
> flags that are incompatible with what we use for plperl and it is not
> clear why perl is using those flags. Do you think we can do something
> at our end to make it work or should someone check with the Perl community
> about it?

It would be a good idea to find somebody who knows more about how these Perl distros are built.

regards, tom lane
Re: [HACKERS] Create language syntax is not proper in pg_dumpall and not working using pg_upgrade
tushar <tushar.ah...@enterprisedb.com> writes:
> postgres=# CREATE LANGUAGE alt_lang1 HANDLER plpgsql_call_handler;
> CREATE LANGUAGE

pg_dump doesn't really support that scenario, and I don't feel any great need to make it do so. Per the comment in dumpProcLang:

 * Try to find the support function(s).  It is not an error if we don't
 * find them --- if the functions are in the pg_catalog schema, as is
 * standard in 8.1 and up, then we won't have loaded them. (In this case
 * we will emit a parameterless CREATE LANGUAGE command, which will
 * require PL template knowledge in the backend to reload.)

So the assumption is basically that PLs that don't have pg_pltemplate entries will not keep their support functions in pg_catalog. I think there are exceptions to handle languages+support functions that are wrapped into extensions ... but you didn't do that either.

regards, tom lane
Re: [HACKERS] [SQL] Postgresql “alter column type” creates an event which contains “temp_table_xxx”
Zehra Gül Çabuk <zgul.ca...@gmail.com> writes:
> => ALTER TABLE test ALTER COLUMN x TYPE integer USING (trim(x)::integer);
> ALTER TABLE
> Last command I've executed to alter column data type creates an event like
> this:
> BEGIN 500913
> table public.pg_temp_1077668: INSERT: x[integer]:14
> table public.pg_temp_1077668: INSERT: x[integer]:42
> COMMIT 500913
> How could I find the "real" table name using this record? Is there any way to
> see the real table name in the fetched record?

That is the real name --- table rewrites create a table with a temporary name and the desired new column layout, then fill it with data, then exchange the data area with the old table, then drop the temp table. Evidently logical decoding is exposing some of this infrastructure to you.

I bet it isn't exposing the critical "swap data" step though, so I wonder how exactly a logical decoding plugin is supposed to make sense of what it can see here.

regards, tom lane
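For what it's worth, the transient table's name is built in the backend (make_new_heap) as pg_temp_<OID of the table being rewritten>, so if the table still exists the numeric suffix can be mapped back to the real name. A hedged sketch, using the suffix from the stream above:

```sql
-- 1077668 is the numeric suffix seen in the decoded change stream;
-- it is the OID of the table that was rewritten:
SELECT 1077668::regclass;
```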
Re: [HACKERS] cache lookup failed error for partition key with custom opclass
Rushabh Lathia <rushabh.lat...@gmail.com> writes:
> On Mon, Jul 24, 2017 at 7:23 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Some looking around says that this *isn't* the only place that just
>> blithely assumes that it will find an opfamily entry. But I agree
>> that not checking for that isn't up to project standards.
> I went through the get_opfamily_proc() calls in the system and added the
> check for InvalidOid.

Think I did that already, please compare your results with 278cb4341103e967189997985b09981a73e23a34

regards, tom lane
Re: [HACKERS] Testlib.pm vs msys
I wrote:
> Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
>> The problem is command_like's use of redirection to strings. Why this
>> should be a problem for this particular use is a matter of speculation.
> I looked at jacana's two recent pg_ctlCheck failures, and they both
> seem to have failed on this:
> command_like([ 'pg_ctl', 'start', '-D', "$tempdir/data",
>     '-l', "$TestLib::log_path/001_start_stop_server.log" ],
>     qr/done.*server started/s, 'pg_ctl start');
> That is redirecting the postmaster's stdout/stderr into a file,
> for sure, so the child processes shouldn't impact EOF detection AFAICS.
> It's also hard to explain this way why it only fails some of the time.

I reflected a bit more on this issue, and realized that there's a pretty obvious mechanism for this to happen. While on Unix, we have pg_ctl fork-and-exec the postmaster so that no extra processes are laying about, that's not the case on Windows. The Windows code in pg_ctl.c creates a subprocess that runs CMD.EXE, which in turn runs the postmaster as a subprocess. The CMD.EXE process doesn't disappear until the postmaster exits.

Now, we tell CMD to redirect the postmaster's stdout and stderr into files, but there's no way to tell CMD to redirect its own handles. So if the CMD process inherits pg_ctl's stdout and stderr, then the prove process would not see EOF on those pipes after pg_ctl exits.

Now, this theory still has a Mack-truck-sized hole in it, which is if that's the failure mechanism then why isn't it 100% effective? Seems like the TAP test should fail every time, if this is a full description. But maybe it's part of the answer, so it seems worth experimenting in this direction.

A bit of googling in Microsoft's documentation suggests that maybe all we have to do is pass CreateProcessAsUser's bInheritHandles parameter as FALSE not TRUE in pg_ctl.c. It's not apparent to me that there are any handles we do need the CMD process to inherit.
regards, tom lane
Re: [HACKERS] VACUUM and ANALYZE disagreeing on what reltuples means
Tomas Vondra <tomas.von...@2ndquadrant.com> writes: > It seems to me that VACUUM and ANALYZE somewhat disagree on what exactly > reltuples means. VACUUM seems to be thinking that > reltuples = live + dead > while ANALYZE apparently believes that > reltuples = live > The question is - which of the reltuples definitions is the right one? > I've always assumed that "reltuples = live + dead" but perhaps not? I think the planner basically assumes that reltuples is the live tuple count, so maybe we'd better change VACUUM to get in step. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: Fwd: [HACKERS] Syncing sql extension versions with shared library versions
Mat Arye <m...@timescaledb.com> writes: > On Mon, Jul 24, 2017 at 1:38 PM, Tom Lane <t...@sss.pgh.pa.us> wrote: >> I'm not really sure why planner hooks would have anything to do with your >> exposed SQL API? > Sorry what I meant was i'd like to package different versions of my > extension -- both .sql and .c -- > and have the extension act consistently for any version until I do a ALTER > EXTENSION UPDATE. > So, I'd prefer a DB with an older extension to have the logic/code in the > hook not change even if I install a newer version's .so for use in another > database > (but don't update the extension to the newer version). Does that make any > sense? The newer version's .so simply is not going to load into the older version; we intentionally prevent that from happening. It's not necessary anyway because versions do not share library directories. Therefore, you can have foo.so for 9.5 in your 9.5 version's library directory, and foo.so for 9.6 in your 9.6 version's library directory, and the filesystem will keep them straight for you. It is not necessary to call them foo-9.5.so and foo-9.6.so. As for the other point, the usual idea is that if you have a SQL-accessible C function xyz() that needs to behave differently after an extension version update, then you make the extension update script point the SQL function to a different library entry point. If your 1.0 extension version originally had CREATE FUNCTION xyz(...) RETURNS ... LANGUAGE C AS 'MODULE_PATHNAME', 'xyz'; (note that the second part of the AS clause might have been implicit; no matter), then your update script for version 1.1 could do CREATE OR REPLACE FUNCTION xyz(...) RETURNS ... LANGUAGE C AS 'MODULE_PATHNAME', 'xyz_1_1'; Then in the 1.1 version of the C code, the xyz_1_1() C function provides the new behavior, while the xyz() C function provides the old behavior, or maybe just throws an error if you conclude it's impractical to emulate the old behavior exactly. 
As I mentioned earlier, you can usually set things up so that you can share much of the code between two such functions. The pgstattuple C function in contrib/pgstattuple is one example of having changed a C function's behavior in this way over multiple versions. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Buildfarm failure and dubious coding in predicate.c
Thomas Munro <thomas.mu...@enterprisedb.com> writes: > Ahh, I think I see it. This is an EXEC_BACKEND build farm animal. > Theory: After the backend we see had removed the scratch entry and > before it had restored it, another backend started up and ran > InitPredicateLocks(), which inserted a new scratch entry without > interlocking. Ouch. Yes, I think you're probably right. It needs to skip that if IsUnderPostmaster. Seems like there ought to be an Assert(!found) there, too. And I don't think I entirely like the fact that there's no assertions about the found/not found cases below, either. Will fix, unless you're already on it? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Issue with circular references in VIEW
Gilles Darold <gilles.dar...@dalibo.com> writes: > Le 24/07/2017 à 19:19, Tom Lane a écrit : >> ... I'm inclined to think in terms of fixing it at that level >> rather than in pg_dump. It doesn't look like it would be hard to fix: >> both functions ultimately call get_query_def(), it's just that one passes >> down a tuple descriptor for the view while the other currently doesn't. > I was thinking that this was intentional that pg_get_ruledef() returns > the raw code typed by the user. I will fix it and send a patch following > your explanation. Oh, I just committed a patch. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes: > On 2017-07-21 20:17:54 -0400, Tom Lane wrote: >>> I dislike having the miscadmin.h include in executor.h - but I don't >>> quite see a better alternative. >> I seriously, seriously, seriously dislike that. You practically might as >> well put miscadmin.h into postgres.h. Instead, what do you think of >> requiring the individual ExecProcNode functions to perform >> CHECK_FOR_INTERRUPTS? Since they're already responsible for doing that >> if they have any long-running internal loops, this doesn't seem like a >> modularity violation. It is a risk for bugs-of-omission, sure, but so >> are a lot of other things that the per-node code has to do. > That'd work. Another alternative would be to move the inline definition > of ExecProcNode() (and probably a bunch of other related functions) into > a more internals oriented header. It seems likely that we're going to > add more inline functions to the executor, and that'd reduce the > coupling of external and internal users a bit. Well, it still ends up that the callers of ExecProcNode need to include miscadmin.h, whereas if we move it into the per-node functions, then the per-node files need to include miscadmin.h. I think the latter is better because those files may need to have other CHECK_FOR_INTERRUPTS calls anyway. It's less clear from a modularity standpoint that executor callers should need miscadmin.h. (Or in short, I'm not really okay with *any* header file including miscadmin.h.) >> * I think the comments need more work. Am willing to make a pass over >> that if you want. > That'd be good, but let's wait till we have something more final. Agreed, I'll wait till you produce another version. >> * Can we redefine the ExecCustomScan function pointer as type >> ExecProcNodeCB, eliminating the unsightly cast in nodeCustom.c? > That'd change an "extension API", which is why I skipped it at this > point of the release cycle. 
It's not like we didn't have this type of > cast all over before. Ok with changing it, but that's where I came > down. Is this patch really not changing anything else that a custom-scan extension would touch? If not, I'm okay with postponing this bit of cleanup to v11. regards, tom lane
Re: [HACKERS] Issue with circular references in VIEW
Gilles Darold <gilles.dar...@dalibo.com> writes: > There is an issue with versions prior to 10 when dumping views with circular > references. I know that these views are now exported as views in 10 but they > are still exported as TABLE + RULE in prior versions. This leads to the > following error when the columns of sub-queries don't have the same alias > names: The core of this issue, I think, is that pg_get_viewdef() knows that it should make what it prints have output column names that match the view, whereas pg_get_ruledef() does not, even when it is printing an ON SELECT rule. This is a little bit surprising --- you'd really expect those functions to produce identical SELECT statements --- and I think it's likely to break other tools even if pg_dump has managed to skirt the issue. So I'm inclined to think in terms of fixing it at that level rather than in pg_dump. It doesn't look like it would be hard to fix: both functions ultimately call get_query_def(), it's just that one passes down a tuple descriptor for the view while the other currently doesn't. regards, tom lane
Re: [HACKERS] Change in "policy" on dump ordering?
Stephen Frost <sfr...@snowman.net> writes: > * Peter Eisentraut (peter.eisentr...@2ndquadrant.com) wrote: >> On 3/6/17 03:33, Michael Banck wrote: >>> Would this be a candidate for backpatching, or is the behaviour change >>> in pg_dump trumping the issues it solves? >> Unless someone literally has a materialized view on pg_policy, it >> wouldn't make a difference, so I'm not very keen on bothering to >> backpatch this. > Agreed. So actually, the problem with Jim's patch is that it doesn't fix the problem. pg_dump's attempts to REFRESH matviews will still fail in common cases, because they still come out before GRANTs, because pg_dump treats ACLs as a completely independent thing to be done last. This was noted as far back as 2015 (in a thread previously linked from this thread), and it's also the cause of Jordan Gigov's current complaint at https://www.postgresql.org/message-id/CA%2BnBocAmQ%2BOPNSKUzaaLa-6eGYVw5KqexWJaRoGvrvLyDir9gg%40mail.gmail.com Digging around in the archives, I find that Kevin had already proposed a fix in https://www.postgresql.org/message-id/flat/20160202161407.2778.24659%40wrigleys.postgresql.org which I didn't particularly care for, and apparently nobody else did either. But we really oughta do *something*. The main problem with Kevin's fix, after thinking about it more, is that it shoves matview refresh commands into the same final processing phase where ACLs are done, which means that in a parallel restore they will not be done in parallel. That seems like a pretty serious objection, although maybe not so serious that we'd be willing to accept a major rewrite in the back branches to avoid it. I'm wondering at this point about having restore create a fake DO_ACLS object (fake in the sense that it isn't in the dump file) that would participate normally in the dependency sort, and which we'd give a priority before matview refreshes but after everything else. 
"Restore" of that object would perform the same operation we do now of running through the whole TOC and emitting grants/revokes. So it couldn't be parallelized in itself (at least not without an additional batch of work) but it could be treated as an indivisible parallelized task, and then the matview refreshes could be parallelizable tasks after that. There's also Peter's proposal of splitting up GRANTs from REVOKEs and putting only the latter at the end. I'm not quite convinced that that's a good idea but it certainly merits consideration. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] cache lookup failed error for partition key with custom opclass
Rushabh Lathia <rushabh.lat...@gmail.com> writes: > PFA patch, where added elog() to add the error message same as all other > places. Some looking around says that this *isn't* the only place that just blithely assumes that it will find an opfamily entry. But I agree that not checking for that isn't up to project standards. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Improve perfomance for index search ANY(ARRAY[]) condition with single item
I wrote: > Dima Pavlov <imyf...@gmail.com> writes: >> That's my first patch so I will be grateful for constructive criticism. > I think it would be better to do this in the planner, see specifically > eval_const_expressions. BTW, currently eval_const_expressions doesn't know anything about ScalarArrayOpExpr, but there's a patch pending to improve that: https://commitfest.postgresql.org/14/1136/ You should probably build your revised patch as a follow-on to that one, else we're going to have merge conflicts. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Improve perfomance for index search ANY(ARRAY[]) condition with single item
Dima Pavlov <imyf...@gmail.com> writes: > The problem was discussed on stackoverflow: > https://stackoverflow.com/questions/45061966/index-usage-with-single-item-anyarray Usually, we're not very happy with submissions that reference external pages for justification; we like to have all the relevant material in our mail archives. > That's my first patch so I will be grateful for constructive criticism. I think it would be better to do this in the planner, see specifically eval_const_expressions. That would allow the transformation to succeed in more cases, like where the array is coming from a parameter rather than an ARRAY[] construct. That is, you should be able to do this not only for ARRAY[x] but for any single-element constant array. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Testlib.pm vs msys
Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes: > It turns out I was wrong about the problem jacana has been having with > the pg_ctl tests hanging. The problem was not the use of select as a > timeout mechanism, although I think the change to using > Time::Hires::usleep() is correct and shouldn't be reverted. > The problem is command_like's use of redirection to strings. Why this > should be a problem for this particular use is a matter of speculation. > I suspect it's to do with the fact that in this instance pg_ctl is > leaving behind some child processes (i.e. postmaster and children) after > it exits, and so on this particular path IPC::Run isn't detecting the > exit properly. The workaround I have found to work is to redirect > command_like's output instead to a couple of files and then slurp in > those files and delete them. A bit hacky, I know, so I'm open to other > suggestions. Yeah, I'd been eyeing that behavior of IPC::Run a month or so back, though from the opposite direction. If you are reading either stdout or stderr of the executed command into Perl, then it detects command completion by waiting till it gets EOF on those stream(s). If you are reading neither, then it goes into this wonky backoff behavior where it sleeps a bit and then checks waitpid(WNOHANG), with the value of "a bit" continually increasing until it reaches a fairly large value, half a second or a second (I forget). So you have potentially some sizable fraction of a second that's just wasted after command termination. I'd been able to make a small but noticeable improvement in the runtime of some of our TAP test suites by forcing the first behavior, ie reading stdout even if we were going to throw it away. So I'm not really that excited about going in the other direction ;-). It shouldn't matter much time-wise for short-lived commands, but it's disturbing if the EOF technique fails entirely for some cases. 
I looked at jacana's two recent pg_ctlCheck failures, and they both seem to have failed on this: command_like([ 'pg_ctl', 'start', '-D', "$tempdir/data", '-l', "$TestLib::log_path/001_start_stop_server.log" ], qr/done.*server started/s, 'pg_ctl start'); That is redirecting the postmaster's stdout/stderr into a file, for sure, so the child processes shouldn't impact EOF detection AFAICS. It's also hard to explain this way why it only fails some of the time. I think we need to look at what the recent changes were in this area and try to form a better theory of why it's started to fail here. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Buildfarm failure and dubious coding in predicate.c
I wrote: > And, while I'm looking at this ... isn't this "scratch target" logic > just an ineffective attempt at waving a dead chicken? It's assuming > that freeing an entry in a shared hash table guarantees that it can > insert another entry. But that hash table is partitioned, meaning it has > a separate freelist per partition. So the extra entry only provides a > guarantee that you can insert something into the same partition it's in, > making it useless for this purpose AFAICS. After further study I found the bit in get_hash_entry() about "borrowing" entries from other freelists. That makes all of this safe after all, which is a good thing because I'd also found other assumptions that would have been broken by the behavior I posited above. But I think that that code is desperately undercommented -- the reader could be forgiven for thinking that it was an optional optimization, and not something that is critical for correctness of several different callers. I'm going to go improve the comments. Meanwhile, it's still pretty unclear what happened yesterday on culicidae. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Syncing sql extension versions with shared library versions
Robert Haas <robertmh...@gmail.com> writes: > On Fri, Jul 21, 2017 at 4:17 PM, Mat Arye <m...@timescaledb.com> wrote: >> (I >> want to avoid having to keep backwards-compatibility for all functions in >> future shared-libraries). > Are you sure that's a good idea? It seems like swimming upstream > against the design. I mean, instead of creating a dispatcher library > that loads either v1 or v2, maybe you could just put it all in one > library, add a "v1" or "v2" suffix to the actual function names where > appropriate, and then set up the SQL definitions to call the correct > one. I mean, it's the same thing, but with less chance of the dynamic > loader ruining your day. Worth noting also is that we have a fair amount of experience now with handling API changes in contrib modules. It's worth looking through the update histories of the contrib modules that have shipped multiple versions to see how they dealt with such issues. As Robert suggests, it's just not that hard; usually a few shim functions in the C code will do the trick. I'd also point out that while you may think you don't need to keep backwards compatibility across versions, your users are probably going to think differently. The amount of practical freedom you'd gain here is probably not so much as you're hoping. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] segfault in HEAD when too many nested functions call
Andres Freund <and...@anarazel.de> writes: > On 2017-07-18 16:53:43 -0400, Tom Lane wrote: >> BTW, I don't see why you really need to mess with anything except >> ExecProcNode? Surely the other cases such as MultiExecProcNode are >> not called often enough to justify changing them away from the >> switch technology. Yeah, maybe it would be a bit cleaner if they >> all looked alike ... but if you're trying to make a patch that's >> as little invasive as possible for v10, I'd suggest converting just >> ExecProcNode to this style. > Yea, I was making that statement when not aiming for v10. Attached is a > patch that converts just ExecProcNode. I read through this patch (didn't test it at all yet). > I dislike having the miscadmin.h include in executor.h - but I don't > quite see a better alternative. I seriously, seriously, seriously dislike that. You practically might as well put miscadmin.h into postgres.h. Instead, what do you think of requiring the individual ExecProcNode functions to perform CHECK_FOR_INTERRUPTS? Since they're already responsible for doing that if they have any long-running internal loops, this doesn't seem like a modularity violation. It is a risk for bugs-of-omission, sure, but so are a lot of other things that the per-node code has to do. There might be something to be said for handling the chgParam/rescan tests similarly. That would reduce the ExecProcNode macro to a triviality, which would be a good thing for understandability of the code I think. Some other thoughts: * I think the comments need more work. Am willing to make a pass over that if you want. * In most places, if there's an immediate return-if-trivial-case test, we check stack depth only after that. There's no point in checking and then returning; either you already crashed, or you're at peak stack so far as this code path is concerned. * Can we redefine the ExecCustomScan function pointer as type ExecProcNodeCB, eliminating the unsightly cast in nodeCustom.c? 
* The various callers of ExecScan() are pretty inconsistently coded. I don't care that much whether they use castNode() or just forcibly cast to ScanState*, but let's make them all look the same. * I believe the usual term for what these function pointers are is "methods", not "callbacks". Certainly we'd call them that if we were working in C++. > I still think we should backpatch at least the check_stack_depth() calls > in ExecInitNode(), ExecEndNode(). No big objection, although I'm not sure it's necessary. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Locale dependency in new postgres_fdw test
Alvaro Herrera <alvhe...@2ndquadrant.com> writes: > Tom Lane wrote: >> We could stabilize this test result by forcing lc_messages = C in >> the foreign server options. However, that would lose regression >> coverage of situations where the remote locale is different, which >> doesn't seem like a terribly good thing. > I don't understand what is the benefit of having a different locale > setting if there's no way that the test would pass except with a very > specific locale. In other words, what coverage are we really losing by > going this route? What the current situation proves (or at least gives evidence for) is that postgres_fdw doesn't have hidden dependencies on the remote server running with the same lc_messages that prevails locally. As an example --- admittedly a bit far-fetched, because this shouldn't ever get past code review --- if postgres_fdw were checking for the presence of specific strings in error messages from the remote, we could hope that the buildfarm would catch that. But if we force the remote lc_messages value throughout the test, we lose that type of coverage. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Buildfarm failure and dubious coding in predicate.c
partitioned. Either one is pretty annoying, considering the very low probability of running out of shared memory right here; but what we've got is not up to project standards IMO. I have some ideas about fixing this by enlisting the help of dynahash.c explicitly, rather than fooling with "scratch entries". But I haven't been able yet to write a design for that that doesn't have obvious bugs. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] More optimization effort?
Robert Haas <robertmh...@gmail.com> writes: > Another very similar (but possibly easier) case is: > select * from pgbench_accounts where aid = 1.0; > This will use a sequential scan rather than an index scan, because the > query optimizer doesn't know that the only integer for which =(int4, > numeric) will return true is 1. Therefore it has to scan the whole > table one row at a time and check, for each one, whether the = > operator returns true. It can't cast the constant to an integer > because the user might have written 1.1 rather than 1.0, in which case > the cast would fail; but the query should return 0 rows, not ERROR. > You can imagine fixing this by having some kind of datatype-specific > knowledge that would replace "aid = 1.0" with "aid = 1" and "aid = > 1.1" with "false"; it would also have to know that "aid = 9999999999" > should be changed to "false" because 9999999999 isn't representable as > int4. Couple thoughts about that: * Another conceivable route to a solution is to add "int = numeric" and friends to the btree opclasses for integers. I'm not sure if there is any fundamental reason we've not done that (it's possible it would fall foul of the requirements about transitivity, not sure). However, this would only fix things to the extent of allowing an index scan to occur; it wouldn't help in non-indexed cases, nor would it know anything about simplifying say "aid = 1.1" to "false". * The already-existing protransform infrastructure could be used to address this type of problem; that is, you could imagine attaching a transform function to numeric_eq that would look for cases like "integer::numeric = nonintegralconstant" and simplify them accordingly. When that feature went in, there was talk of using transforms to simplify e.g. "variable + 0" or "variable * 1", but nobody's got round to anything of the sort yet. * On the other hand, protransform doesn't offer any easy way to apply similar optimizations to a bunch of different functions/operators. 
For instance, if your goal is to simplify "variable + 0", there are a depressingly large number of "+" operators to write transform functions for. Maybe there's no way around duplicate coding for that, but it looks tedious and bulky. > I have, however, decided not to volunteer to be the one who works on > that project. Me either. Any one of these things would require a *lot* of work in order to have a coherent feature that provided useful behavior across a bunch of different datatypes. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Locale dependency in new postgres_fdw test
I wrote: > I had not realized (or forgot) that postgres_fdw allows the remote > end to run in whatever lc_messages locale is default for the remote > installation. It's a bit surprising that none of the existing test > cases expose any remote-side messages directly, but evidently not. > We could stabilize this test result by forcing lc_messages = C in > the foreign server options. However, that would lose regression > coverage of situations where the remote locale is different, which > doesn't seem like a terribly good thing. It turns out that that way doesn't fix the problem, anyway, because an lc_messages setting in the startup packet is applied too late to change the output for the errors we're interested in. There have been past discussions about maybe fixing that, but it's certainly not happening now, much less in back branches. BTW, I found out while trying to do that that ALTER SERVER does not accept "options '-c lc_messages=C'" anyway, which surprised me quite a bit. The reason turns out to be that libpq labels "options" as a debug option which causes postgres_fdw to ignore it. Should we think about changing that? Being able to set GUC variables for the remote session seems useful. > Another option is to temporarily set VERBOSITY to "terse" so that > the DETAIL is suppressed from the test output. But then we don't > really know why the connection failed, so that could mask issues > that the test case ought to find, too. So we're stuck with that solution. The disadvantage of not being entirely sure why the connection failed could be solved if psql had some way to report just the SQLSTATE of the last failure. I think there's been some discussions about saving error SQLSTATEs into a special variable, as we do for LASTOID for instance; but it doesn't look like that's actually been done yet. We should revisit this if that feature ever materializes. 
regards, tom lane
[HACKERS] Locale dependency in new postgres_fdw test
So 8bf58c0d9 immediately blew up in the buildfarm, with eg this on jaguarundi:

***************
*** 130,136 ****
  ALTER SERVER loopback OPTIONS (SET dbname 'no such database');
  SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1;  -- should fail
  ERROR:  could not connect to server "loopback"
! DETAIL:  FATAL:  database "no such database" does not exist
  DO $d$
      BEGIN
          EXECUTE $$ALTER SERVER loopback
--- 130,136 ----
  ALTER SERVER loopback OPTIONS (SET dbname 'no such database');
  SELECT c3, c4 FROM ft1 ORDER BY c3, c1 LIMIT 1;  -- should fail
  ERROR:  could not connect to server "loopback"
! DETAIL:  FATAL:  Datenbank »no such database« existiert nicht
  DO $d$
      BEGIN
          EXECUTE $$ALTER SERVER loopback
***************

I had not realized (or forgot) that postgres_fdw allows the remote end to run in whatever lc_messages locale is default for the remote installation. It's a bit surprising that none of the existing test cases expose any remote-side messages directly, but evidently not. We could stabilize this test result by forcing lc_messages = C in the foreign server options. However, that would lose regression coverage of situations where the remote locale is different, which doesn't seem like a terribly good thing. Another option is to temporarily set VERBOSITY to "terse" so that the DETAIL is suppressed from the test output. But then we don't really know why the connection failed, so that could mask issues that the test case ought to find, too. Maybe set lc_messages = C in the options only for the duration of these new test cases? regards, tom lane
Re: [HACKERS] [COMMITTERS] pgsql: Add a Gather executor node.
Alvaro Herrera <alvhe...@2ndquadrant.com> writes: > Robert Haas wrote: >> I think those top-of-file lists are just annoying, and would prefer to >> rip them out entirely rather than spend time maintaining them. > +1 To the extent that they're just lists of function names, +1. Some of them have some documentation in them, which you'd need to make sure is duplicative or else copy it to an appropriate place. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PgFDW connection invalidation by ALTER SERVER/ALTER USER MAPPING
Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> writes: > On Fri, Jul 21, 2017 at 10:55 AM, Kyotaro HORIGUCHI > <horiguchi.kyot...@lab.ntt.co.jp> wrote: >> The attached patch differs only in this point. > +1. The patch looks good to me. Pushed with a couple additional changes: we'd all missed that the header comment for GetConnection was obsoleted by this change, and the arguments for GetSysCacheHashValue really need to be coerced to Datum. (I think OID to Datum is the same as what the compiler would do anyway, but best not to assume that.) Back-patching was more exciting than I could wish. It seems that before 9.6, we did not have struct UserMapping storing the OID of the pg_user_mapping row it had been made from. I changed GetConnection to re-look-up that row and get the OID. But that's ugly, and there's a race condition: if user mappings are being added or deleted meanwhile, we might locate a per-user mapping when we're really using a PUBLIC mapping or vice versa, causing the ConnCacheEntry to be labeled with the wrong hash value so that it might not get invalidated properly later. Still, it's significantly better than it was, and that corner case seems unlikely to get hit in practice --- for one thing, you'd have to then revert the mapping addition/deletion before the ConnCacheEntry would be found and used again. I don't want to take the risk of modifying struct UserMapping in stable branches, so it's hard to see a way to make that completely bulletproof before 9.6. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] More optimization effort?
Tatsuo Ishii <is...@sraoss.co.jp> writes: > Currently following query does not use an index: > test=# explain select * from pgbench_accounts where aid*100 < 1; > While following one does use the index. > test=# explain select * from pgbench_accounts where aid < 1/100; > Is it worth to make our optimizer a little bit smarter to convert the > the first query into the second form? That's a rather ambitious definition of "little bit" smarter. For starters, I'm not sure those queries are even fully equivalent w.r.t. issues like overflow and roundoff. While a person might decide that the transformation is OK anyway, I'm not sure that the optimizer should be allowed to take such liberties. But the bigger picture is that doing something that helps to any useful extent would require a really substantial amount of new, datatype- and operator-specific knowledge that doesn't exist in the system today. And as Craig noted, applying that knowledge would be expensive, even in cases where it failed to help. So, color me skeptical ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PgFDW connection invalidation by ALTER SERVER/ALTER USER MAPPING
Kyotaro HORIGUCHI <horiguchi.kyot...@lab.ntt.co.jp> writes: > Here it is. First I tried this using ordinary regression > framework but the effect of this patch is shown only in log and > it contains variable parts so I gave up it before trying more > complex way. > Next I tried existing TAP test but this test needs continuous > session to achieve alternating operation on two sessions but > PostgresNode::psql doesn't offer such a functionality. > Finally, I added a new TAP test library PsqlSession. It offers > interactive psql sessions. Then added a simple test to > postgres_fdw using it. This seems like overkill. We can test it reasonably easily within the existing framework, as shown in the attached patch. I'm also fairly concerned that what you're showing here would be unstable in the buildfarm as a result of race conditions between the multiple sessions. I made some cosmetic updates to the code patch, as well. I think this is actually a bug fix, and should not wait for the next commitfest. regards, tom lane

diff --git a/contrib/postgres_fdw/connection.c b/contrib/postgres_fdw/connection.c
index 8c33dea..8eb477b 100644
*** a/contrib/postgres_fdw/connection.c
--- b/contrib/postgres_fdw/connection.c
***************
*** 22,27 ****
--- 22,28 ----
  #include "pgstat.h"
  #include "storage/latch.h"
  #include "utils/hsearch.h"
+ #include "utils/inval.h"
  #include "utils/memutils.h"
  #include "utils/syscache.h"
*************** typedef struct ConnCacheEntry
*** 48,58 ****
--- 49,63 ----
  {
  	ConnCacheKey key;			/* hash key (must be first) */
  	PGconn	   *conn;			/* connection to foreign server, or NULL */
+ 	/* Remaining fields are invalid when conn is NULL: */
  	int			xact_depth;		/* 0 = no xact open, 1 = main xact open, 2 =
  								 * one level of subxact open, etc */
  	bool		have_prep_stmt; /* have we prepared any stmts in this xact? */
  	bool		have_error;		/* have any subxacts aborted in this xact? */
  	bool		changing_xact_state;	/* xact state change in process */
+ 	bool		invalidated;	/* true if reconnect is pending */
+ 	uint32		server_hashvalue;	/* hash value of foreign server OID */
+ 	uint32		mapping_hashvalue;	/* hash value of user mapping OID */
  } ConnCacheEntry;

  /*
*************** static bool xact_got_connection = false;
*** 69,74 ****
--- 74,80 ----
  /* prototypes of private functions */
  static PGconn *connect_pg_server(ForeignServer *server, UserMapping *user);
+ static void disconnect_pg_server(ConnCacheEntry *entry);
  static void check_conn_params(const char **keywords, const char **values);
  static void configure_remote_session(PGconn *conn);
  static void do_sql_command(PGconn *conn, const char *sql);
*************** static void pgfdw_subxact_callback(SubXa
*** 78,83 ****
--- 84,90 ----
  			   SubTransactionId mySubid,
  			   SubTransactionId parentSubid,
  			   void *arg);
+ static void pgfdw_inval_callback(Datum arg, int cacheid, uint32 hashvalue);
  static void pgfdw_reject_incomplete_xact_state_change(ConnCacheEntry *entry);
  static bool pgfdw_cancel_query(PGconn *conn);
  static bool pgfdw_exec_cleanup_query(PGconn *conn, const char *query,
*************** GetConnection(UserMapping *user, bool wi
*** 130,135 ****
--- 137,146 ----
  		 */
  		RegisterXactCallback(pgfdw_xact_callback, NULL);
  		RegisterSubXactCallback(pgfdw_subxact_callback, NULL);
+ 		CacheRegisterSyscacheCallback(FOREIGNSERVEROID,
+ 									  pgfdw_inval_callback, (Datum) 0);
+ 		CacheRegisterSyscacheCallback(USERMAPPINGOID,
+ 									  pgfdw_inval_callback, (Datum) 0);
  	}

  	/* Set flag that we did GetConnection during the current transaction */
*************** GetConnection(UserMapping *user, bool wi
*** 144,161 ****
  	entry = hash_search(ConnectionHash, &key, HASH_ENTER, &found);
  	if (!found)
  	{
! 		/* initialize new hashtable entry (key is already filled in) */
  		entry->conn = NULL;
- 		entry->xact_depth = 0;
- 		entry->have_prep_stmt = false;
- 		entry->have_error = false;
- 		entry->changing_xact_state = false;
  	}

  	/* Reject further use of connections which failed abort cleanup. */
  	pgfdw_reject_incomplete_xact_state_change(entry);

  	/*
  	 * We don't check the health of cached connection here, because it would
  	 * require some overhead. Broken connection will be detected when the
  	 * connection is actually used.
--- 155,182 ----
  	entry = hash_search(ConnectionHash, &key, HASH_ENTER, &found);
  	if (!found)
  	{
! 		/*
! 		 * We need only clear "conn" here; remaining fields will be filled
! 		 * later when "conn" is set.
! 		 */
  		entry->conn = NULL;
  	}

  	/* Reject further use of connections which failed abort cleanup. */
  	pgfdw_reject_incomplete_xact_state_change(entry);

  	/*
+ 	 * If the connection needs to be remade due to invalidation, disconnect as
+ 	 * soon as we're out
Re: [HACKERS] <> join selectivity estimate question
Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> writes: > On Thu, Jul 20, 2017 at 5:30 PM, Thomas Munro > <thomas.mu...@enterprisedb.com> wrote: >> Does anyone know how to test a situation where the join is reversed >> according to >> get_join_variables, or "complicated cases where we can't tell for sure"? > explain select * from pg_class c right join pg_type t on (c.reltype = > t.oid); would end up with *join_is_reversed = true; Is that what you > want? For a semi-join however I don't know how to induce that. AFAIU, > in a semi-join there is only one direction in which join can be > specified. You just have to flip the <> clause around, eg instead of explain analyze select * from tenk1 t where exists (select 1 from int4_tbl i where t.ten <> i.f1); do explain analyze select * from tenk1 t where exists (select 1 from int4_tbl i where i.f1 <> t.ten); No matter what the surrounding query is like exactly, one or the other of those should end up with join_is_reversed set. This would be a bit harder to trigger for equality clauses, where you'd have to somehow defeat the EquivalenceClass logic's tendency to rip the clauses apart and reassemble them according to its own whims. But for neqjoinsel that's not a problem. > I didn't get the part about "complicated cases where we can't tell for sure". You could force that with mixed relation membership on one or both sides of the <>, for instance "(a.b + b.y) <> a.c". I don't think it's especially interesting for the present purpose though, since we're going to end up with 1.0 selectivity in any case where examine_variable can't find stats. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Increase Vacuum ring buffer.
Robert Haas <robertmh...@gmail.com> writes: > I think that's a valid point. There are also other concerns here - > e.g. whether instead of adopting the patch as proposed we ought to (a) > use some smaller size, or (b) keep the size as-is but reduce the > maximum fraction of shared_buffers that can be consumed, or (c) divide > the ring buffer size through by autovacuum_max_workers. Personally, > of those approaches, I favor (b). I think a 16MB ring buffer is > probably just fine if you've got 8GB of shared_buffers but I'm > skeptical about it when you've got 128MB of shared_buffers. WFM. I agree with *not* dividing the basic ring buffer size by autovacuum_max_workers. If you have allocated more AV workers, I think you expect AV to go faster, not for the workers to start fighting among themselves. It might, however, be reasonable for the fraction-of-shared-buffers limitation to have something to do with autovacuum_max_workers, so that you can't squeeze yourself out of shared_buffers if you set that number really high. IOW, I think the upthread suggestion of min(shared_buffers/8/autovacuum_workers, 16MB) is basically the right idea, though we could debate the exact constants. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_upgrade failed if view contain natural left join condition
"David G. Johnston" <david.g.johns...@gmail.com> writes: > Per the docs: > "If there are no common column names, NATURAL behaves like CROSS JOIN." > I'm being a bit pedantic here but since NATURAL is a replacement for > "ON/USING" it would seem more consistent to describe it, when no matching > columns are found, as "behaves like specifying ON TRUE" instead. Yeah, the analogy to CROSS JOIN falls down if it's an outer join. I'll go fix that. > I find it a bit strange, though not surprising, that it doesn't devolve to > "ON FALSE". No, it's normal that an AND of no conditions degenerates to TRUE. It's like omitting a WHERE clause. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Increase Vacuum ring buffer.
Claudio Freire <klaussfre...@gmail.com> writes: > On Thu, Jul 20, 2017 at 11:59 AM, Robert Haas <robertmh...@gmail.com> wrote: >> I think the question for this patch is "so, why didn't we do it this >> way originally?". >> >> It's no secret that making the ring buffer larger will improve >> performance -- in fact, not having a ring buffer at all would improve >> performance even more. But it would also increase the likelihood that >> the background work of vacuum would impact the performance of >> foreground operations, which is already a pretty serious problem that >> we probably don't want to make worse. I'm not certain what the right >> decision is here, but I think that a careful analysis of possible >> downsides is needed. > IIRC, originally, the default shared_buffers settings was tiny. At the time we set the ring buffer size to 256K, the maximum shared_buffers that initdb would configure was 32MB; and you often didn't get that much due to SHMMAX. Now of course it's 128MB, and you'll pretty much always get that. So there's certainly room to argue that it's time to increase vacuum's ring buffer size, but that line of argument doesn't justify more than ~10X increase at most. Like Robert, I'm afraid of changing this number in a vacuum (ahem). If you've got the default number of autovacuum workers going (3), you'd have them thrashing a total of 3/8ths of shared memory by default, which seems like a lot. We do need to look at the impact on foreground processing, and not just at the speed of vacuum itself. One idea for addressing this would be to raise the max values in the switch, but tighten the fraction-of-shared-buffers limit just below. I wouldn't have any objection to a 16MB ring buffer for vacuum when it is coming out of a 1GB arena ... it just seems like a rather large fraction of 128MB to give to a background process, especially to each of several background processes. 
Maybe the fraction-of-shared-buffers shouldn't be one size fits all, but a different limit for each case? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Definitional questions for pg_sequences view
What exactly is the point of the new pg_sequences view? It seems like it's intended to ease conversion of applications that formerly did "select * from sequencename", but if so, there are some fairly annoying discrepancies. The old way got you these columns:

regression=# \d s1
           Sequence "public.s1"
    Column     |  Type   |        Value
---------------+---------+---------------------
 sequence_name | name    | s1
 last_value    | bigint  | 1
 start_value   | bigint  | 1
 increment_by  | bigint  | 1
 max_value     | bigint  | 9223372036854775807
 min_value     | bigint  | 1
 cache_value   | bigint  | 1
 log_cnt       | bigint  | 0
 is_cycled     | boolean | f
 is_called     | boolean | f

but now we offer

regression=# \d pg_sequences
              View "pg_catalog.pg_sequences"
    Column     |  Type   | Collation | Nullable | Default
---------------+---------+-----------+----------+---------
 schemaname    | name    |           |          |
 sequencename  | name    |           |          |
 sequenceowner | name    |           |          |
 data_type     | regtype |           |          |
 start_value   | bigint  |           |          |
 min_value     | bigint  |           |          |
 max_value     | bigint  |           |          |
 increment_by  | bigint  |           |          |
 cycle         | boolean |           |          |
 cache_size    | bigint  |           |          |
 last_value    | bigint  |           |          |

Why aren't sequencename, cache_size, and cycle spelled consistently with past practice? And is there a really good reason to order the columns randomly differently from before?

The big problem, though, is that there's no convenient way to use this view in a schema-safe manner. If you try to translate

select * from my_seq;

into

select * from pg_sequences where sequencename = 'my_seq';

then you're going to get burnt if there's more than one my_seq in different schemas. There's no easy way to get your search path incorporated into the result. Maybe people will always know how to constrain the schemaname too, but I wouldn't count on it.

This could be fixed if it were possible to translate to

select * from pg_sequences where seqoid = 'my_seq'::regclass;

but the view isn't exposing the sequence OID. Should it?
As things stand, it's actually considerably easier and safer to use the pg_sequence catalog directly, because then you *can* do select * from pg_sequence where seqrelid = 'my_seq'::regclass; and you only have to deal with the different-from-before column names. Which pretty much begs the question why we bothered to provide the view. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_upgrade failed if view contain natural left join condition
tushar <tushar.ah...@enterprisedb.com> writes: > postgres=# create table t(n int); > CREATE TABLE > postgres=# create table t1(a int); > CREATE TABLE > postgres=# create view ttt1 as SELECT e.n FROM t e NATURAL LEFT JOIN t1 d; > CREATE VIEW You realize of course that that's a pretty useless join definition. Still, yes, we do need to reverse-list the view with correct syntax. Probably t LEFT JOIN t1 ON TRUE would do it. > I think -this issue should be there in the older branches as well but > not checked that. [experiments] Seems to be wrong back to 9.3. Although I have a feeling this might be a mistake in a back-patched bug fix, so that it'd depend on which 9.3.x you looked at. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_upgrade failed if view is based on sequence
Thom Brown <t...@linux.com> writes: > On 20 July 2017 at 13:23, tushar <tushar.ah...@enterprisedb.com> wrote: >> postgres=# create sequence seq_9166 start 1 increment 1; >> CREATE SEQUENCE >> postgres=# create or replace view v3_9166 as select * from seq_9166; >> CREATE VIEW > This is because sequence_name, start_value, increment_by, max_value, > min_value, cache_value and is_cycled are no longer output when > selecting from sequences. Yes. This will not be "fixed"; you'll have to adjust the view before you can update it to v10. (If you want those values, you should now get them out of the pg_sequence catalog.) This should have been called out as a significant incompatibility in the v10 release notes, but I see that it's not listed in the right section. Will fix that ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [GENERAL] huge RAM use in multi-command ALTER of table heirarchy
Greg Stark <st...@mit.edu> writes: > On 19 July 2017 at 00:26, Tom Lane <t...@sss.pgh.pa.us> wrote: >> It's probably a bit late in the v10 cycle to be taking any risks in >> this area, but I'd vote for ripping out RememberToFreeTupleDescAtEOX >> as soon as the v11 cycle opens, unless someone can show an example >> of non-broken coding that requires it. (And if so, there ought to >> be a regression test incorporating that.) > Would it be useful to keep in one of the memory checking assertion builds? Why? Code that expects to continue accessing a relcache entry's tupdesc after closing the relcache entry is broken, independently of whether it's in a debug build or not. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] autovacuum can't keep up, bloat just continues to rise
Peter Geoghegan <p...@bowt.ie> writes: > My argument for the importance of index bloat to the more general > bloat problem is simple: any bloat that accumulates, that cannot be > cleaned up, will probably accumulate until it impacts performance > quite noticeably. But that just begs the question: *does* it accumulate indefinitely, or does it eventually reach a more-or-less steady state? The traditional wisdom about btrees, for instance, is that no matter how full you pack them to start with, the steady state is going to involve something like 1/3rd free space. You can call that bloat if you want, but it's not likely that you'll be able to reduce the number significantly without paying exorbitant costs. I'm not claiming that we don't have any problems, but I do think it's important to draw a distinction between bloat and normal operating overhead. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] psql's \r broken since e984ef5861d
I wrote: > Ah. I don't feel like trawling the archives for the discussion right now, > but I believe this was an intentional change to make the behavior more > consistent. Oh ... a quick look in the commit log finds the relevant discussion: https://www.postgresql.org/message-id/flat/9b4ea968-753f-4b5f-b46c-d7d3bf7c8f90%40manitou-mail.org regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers