Re: [HACKERS] More efficient truncation of pg_stat_activity query strings
Hi,

On 2017-09-15 17:43:35 +0530, Kuntal Ghosh wrote:
> The patch looks good to me. I've done some regression testing with a
> custom script on my local system. The script contains the following
> statement:
>
>     SELECT 'aaa..' as col;
>
> Test 1
> ---
> duration: 300 seconds
> clients/threads: 1
> With the patch TPS: 13628 (+3.39%)
> +0.36%  0.21%  postgres  postgres  [.] pgstat_report_activity
>
> Test 2
> ---
> duration: 300 seconds
> clients/threads: 8
> With the patch TPS: 63949 (+20.4%)
> +0.40%  0.25%  postgres  postgres  [.] pgstat_report_activity
>
> This shows the significance of this patch in terms of performance
> improvement of pgstat_report_activity. Is there any other tests I
> should do for the same?

Thanks for the test! I think this looks good, no further tests necessary.

Greetings,

Andres Freund

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan
2017-09-19 20:37 GMT+02:00 Robert Haas:
> On Tue, Sep 19, 2017 at 12:45 PM, Pavel Stehule wrote:
> >> You can already set a GUC with function scope.  I'm not getting your
> >> point.
> >
> > yes, it is true. But the implementation of #option is limited to
> > PLpgSQL - so there are not too many questions there - a GUC is global -
> > there are lots of questions:
> >
> > * what is the correct impact on PREPARE
> > * what is the correct impact on EXECUTE
> > * what should be done if this GUC is changed ..
>
> For better or for worse, as a project we've settled on GUCs as a way
> to control behavior.  I think it makes more sense to try to apply that
> option to new behaviors we want to control than to invent some new
> system.

I have nothing against GUCs generally - just in this case, I know what the
expected behavior is in the plpgsql environment, and I miss that knowledge
elsewhere, so I think a good start is plpgsql only (where this issue is
mentioned often). A limited plpgsql implementation is not a barrier to any
other implementation.

> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module
On 09/19/2017 11:11 AM, Tom Lane wrote:
> Andrew Dunstan writes:
>> This seems to have upset a number of animals in the buildfarm.
>
> Actually, after looking closer, my advice is just to drop the new
> test cases involving accented letters.  It surprises me not in the
> least that those would have nonportable behavior: probably, some
> locales will case-fold them and some won't.
>
> (This does open up some questions about whether the opclass will
> ever be of use for Alexey's original purpose :-()

Actually, it now looks to me like the problem is we forgot to tell
postgres that this data is in utf8. The problem appears to resolve (at
least on my CentOS test box, where I reproduced the buildfarm error) if I
add

    set client_encoding = 'utf8';

to the sql file. Of course UTF8 bytes interpreted as LATIN1 will give
results that are ... interesting. So let's try that first and see if it
makes the buildfarm go green. Maybe there's hope for Alexey's purpose
after all.

cheers

andrew

--
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] pgjdbc logical replication client throwing exception
Hi Andres,

I also checked the server log. Nothing unusual is recorded there. Do you
have any other suggestions?

Thank you.

Best regards,
Dipesh Dangol

On Fri, Sep 15, 2017 at 11:32 PM, Dipesh Dangol wrote:
> Hi Vladimir,
>
> Initially I was trying with withStatusInterval(20, TimeUnit.SECONDS);
> that didn't work, so only then I switched to withStatusInterval(20,
> TimeUnit.MILLISECONDS), but it is not working either. I am not aware of
> the type of test cases that you are pointing at. Could you please send
> me a link for that?
>
> For generating the load, I am using BenchmarkSQL, which will generate
> around 9000 transactions per second. I am trying to run the stream API
> at the same time BenchmarkSQL is generating load. If I don't run
> BenchmarkSQL it works fine, I mean when there are only a few
> transactions to replicate at a time. But when I run it with BenchmarkSQL
> and try to add some logic like some conditions, then it breaks down in
> between, most of the time within a few seconds.
>
> Hi Andres,
>
> I haven't checked the server log yet. I don't have access to my working
> environment now; I will be able to check that only on Monday. If I find
> anything suspicious in the log, I will let you know.
>
> Thank you guys.
>
> On Fri, Sep 15, 2017 at 10:05 PM, Andres Freund wrote:
>> On 2017-09-15 20:00:34 +, Vladimir Sitnikov wrote:
>>> ++pgjdbc dev list.
>>>
>>> > I am facing unusual connection breakdown problem. Here is the
>>> > simple code that I am using to read WAL file:
>>>
>>> Does it always fail?
>>> Can you create a test case? For instance, if you file a pull request
>>> with the test, it will get automatically tested across various PG
>>> versions, so it would be easier to reason about.
>>>
>>> Have you tried "withStatusInterval(20, TimeUnit.SECONDS)" instead of
>>> 20 millis? I don't think it matters much, however 20ms seems to be
>>> overkill.
>>
>> Also, have you checked the server log?
>>
>> - Andres
Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan
On Tue, Sep 19, 2017 at 12:45 PM, Pavel Stehule wrote:
>> You can already set a GUC with function scope.  I'm not getting your
>> point.
>
> yes, it is true. But the implementation of #option is limited to PLpgSQL
> - so there are not too many questions there - a GUC is global - there
> are lots of questions:
>
> * what is the correct impact on PREPARE
> * what is the correct impact on EXECUTE
> * what should be done if this GUC is changed ..

For better or for worse, as a project we've settled on GUCs as a way
to control behavior.  I think it makes more sense to try to apply that
option to new behaviors we want to control than to invent some new
system.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] UPDATE of partition key
On Fri, Sep 15, 2017 at 7:25 AM, Amit Khandekar wrote:
> [ new patch ]

This already fails to apply again.

In general, I think it would be a good idea to break this up into a patch
series rather than have it as a single patch.  That would allow some bits
to be applied earlier.  The main patch will probably still be pretty big,
but at least we can make things a little easier by getting some of the
cleanup out of the way first.  Specific suggestions on what to break out
below.

If the changes to rewriteManip.c are a marginal efficiency hack and
nothing more, then let's commit this part separately before the main
patch.  If they're necessary for correctness, then please add a comment
explaining why they are necessary.

There appears to be no reason why the definitions of GetInsertedColumns()
and GetUpdatedColumns() need to be moved to a header file as a result of
this patch.  GetUpdatedColumns() was previously defined in trigger.c and
execMain.c and, post-patch, is still called from only those files.
GetInsertedColumns() was, and remains, called only from execMain.c.  If
this were needed I'd suggest doing it as a preparatory patch before the
main patch, but it seems we don't need it at all.

If I understand correctly, the reason for changing mt_partitions from
ResultRelInfo * to ResultRelInfo ** is that, currently, all of the
ResultRelInfos for a partitioning hierarchy are allocated as a single
chunk, but we can't do that and also reuse the ResultRelInfos created
during InitPlan.  I suggest that we do this as a preparatory patch.
Someone could argue that this is going the wrong way and that we ought to
instead make InitPlan() create all of the necessary ResultRelInfos, but
it seems to me that eventually we probably want to allow setting up
ResultRelInfos on the fly for only those partitions for which we end up
needing them.  The code already has some provision for creating
ResultRelInfos on the fly - see ExecGetTriggerResultRel.
I don't think it's this patch's job to try to apply that kind of thing to
tuple routing, but it seems like in the long run if we're inserting 1
tuple into a table with 1000 partitions, or performing 1 update that
touches the partition key, it would be best not to create ResultRelInfos
for all 1000 partitions just for fun.  But this sort of thing seems much
easier if mt_partitions is ResultRelInfo ** rather than ResultRelInfo *,
so I think what you have is going in the right direction.

+ * mtstate->resultRelInfo[].  Note: We assume that if the resultRelInfo
+ * does not belong to subplans, then it already matches the root tuple
+ * descriptor; although there is no such known scenario where this
+ * could happen.
+ */
+        if (rootResultRelInfo != resultRelInfo &&
+            mtstate->mt_persubplan_childparent_maps != NULL &&
+            resultRelInfo >= mtstate->resultRelInfo &&
+            resultRelInfo <= mtstate->resultRelInfo + mtstate->mt_nplans - 1)
+        {
+            int         map_index = resultRelInfo - mtstate->resultRelInfo;

I think you should Assert() that it doesn't happen instead of assuming
that it doesn't happen.  IOW, remove the last two branches of the
if-condition, and then add an Assert() that map_index is sane.

It is not clear to me why we need both mt_perleaf_childparent_maps and
mt_persubplan_childparent_maps.

+ * Note: if the UPDATE is converted into a DELETE+INSERT as part of
+ * update-partition-key operation, then this function is also called
+ * separately for DELETE and INSERT to capture transition table rows.
+ * In such case, either old tuple or new tuple can be NULL.

That seems pretty strange.  I don't quite see how that's going to work
correctly.  I'm skeptical about the idea that the old tuple capture and
new tuple capture can safely happen at different times.

I wonder if we should have a reloption controlling whether update-tuple
routing is enabled.
I wonder how much more expensive it is to execute UPDATE root SET a = a +
1 WHERE a = 1 on a table with 1000 subpartitions with this patch than
without, assuming the update succeeds in both cases.

I also wonder how efficient this implementation is in general.  For
example, suppose you make a table with 1000 partitions each containing
10,000 tuples and update them all, and consider three scenarios: (1)
partition key not updated but all tuples subject to non-HOT updates
because the updated column is indexed, (2) partition key updated but no
tuple movement required as a result, (3) partition key updated and all
tuples move to a different partition.  It would be useful to compare the
times, and also to look at perf profiles and see if there are any obvious
sources of inefficiency that can be squeezed out.  It wouldn't surprise
me if tuple movement is a bit slower than the other scenarios, but it
would be nice to know how much slower and whether the bottlenecks are
anything that we can
Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list
Pavel Stehule writes:
> 2017-09-14 12:33 GMT+02:00 Anthony Bykov:
>> As far as I understand, this patch adds functionality (correct me if
>> I'm wrong) for users. Shouldn't there be any changes in doc/src/sgml/
>> with the description of new functionality?

> It removes an undocumented limit. I rechecked the plpgsql documentation
> and there are no rows about prohibited combinations of data types.

I remain of the opinion that this patch is a fundamentally bad idea.
It creates an inconsistency between what happens if the INTO target list
is a single record/row variable (it absorbs all the columns of the query
output) and what happens if a record/row variable is part of a
multi-variable target list (now it just absorbs one column, which had
better be composite).  That's going to confuse people, especially since
you haven't documented it.  But even with documentation, it doesn't seem
like good design.

Aside from being inconsistent, it doesn't cover all the cases --- what if
you have just one query output column, that is composite, and you'd like
it to go into a composite variable?  That doesn't work today, and this
patch doesn't fix it, but it does create enough confusion that we never
would be able to fix it.

I'd be much happier if there were some notational difference between
I-want-the-composite-variable-to-absorb-a-composite-column and
I-want-the-composite-variable-to-absorb-N-scalar-columns.  For backwards
compatibility with what happens now, the latter would have to be the
default.  I'm wondering about "var.*" or "(var)" as the notation
signaling that you want the former, though neither of those seem like
they're very intuitive.

If we had a notation like that, it'd be possible to ask for either
behavior within a larger target list, except that we'd still need to say
that I-want-a-RECORD-variable-to-absorb-N-scalar-columns has to be the
only thing in its target list.  Otherwise it's not very clear what N
ought to be.  (In some cases maybe you could reverse-engineer N from
context, but I think that'd be overly complicated and bug prone.)

                        regards, tom lane
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
On 2017-09-19 13:15:28 -0400, Tom Lane wrote:
> Andres Freund writes:
>> On 2017-09-19 13:00:33 -0400, Robert Haas wrote:
>>> You mean, in the postmaster?
>
>> Yes. We try to avoid touching shmem there, but it's not like we're
>> succeeding fully. See e.g. the pgstat_get_crashed_backend_activity()
>> calls (which do rely on shmem being ok to some extent), pmsignal,
>> BackgroundWorkerStateChange(), ...
>
> Well, the point is to avoid touching data structures that could be
> corrupted enough to confuse the postmaster.  I don't have any problem
> with adding some more functionality to pmsignal, say.

Given that we're ok with reading pgstat shared memory entries, I think
adding a carefully coded variant of SendProcSignal() should be doable in
a safe manner. Something roughly like

    int
    PostmasterSendProcSignal(pid_t pid, ProcSignalReason reason)
    {
        volatile ProcSignalSlot *slot;
        int         i;

        /*
         * As this is running in postmaster, be careful not to
         * dereference any pointers from shared memory that could be
         * corrupted, and not to throw errors.
         */
        for (i = 0; i < NumProcSignalSlots; i++)
        {
            slot = &ProcSignalSlots[i];

            if (slot->pss_pid == pid)
            {
                /*
                 * The note about race conditions in SendProcSignal
                 * applies here, too.
                 */

                /* Atomically set the proper flag */
                slot->pss_signalFlags[reason] = true;
                /* Send signal */
                return kill(pid, SIGUSR1);
            }
        }

        errno = ESRCH;
        return -1;
    }

As all the memory offsets are computed based on postmaster process-local
variables, this should be safe. I'd rather like to avoid a copy of the
procsignal infrastructure if we don't need it...

Greetings,

Andres Freund
Re: [HACKERS] Running some tests with different segment sizes
On 2017-09-19 14:05:44 -0400, Tom Lane wrote:
> Andres Freund writes:
>> I'm working on merging the customizable segment size patch [1].  I'd
>> like to run some of the regression tests using it, to guarantee
>> non-standard settings have test coverage.  The reason I'd like to
>> adapt an existing test, rather than add a new run of the standard
>> regression tests, is to avoid bloating the regression time
>> unnecessarily.
>
> Maybe there's a way to set up a buildfarm animal or two that run all
> the tests with a different segsize?

Hm, that'd work too. We could make initdb look at
getenv("PG_DEFAULT_SEGSIZE") or such? Otherwise it's probably not easy
to have all tests respect that.

I still would like to have at least one test that explicitly specifies a
different size so people can see if they outright break something, but
one would be enough if we had such an animal.

Greetings,

Andres Freund
Re: [HACKERS] Running some tests with different segment sizes
Andres Freund writes:
> I'm working on merging the customizable segment size patch [1].  I'd
> like to run some of the regression tests using it, to guarantee
> non-standard settings have test coverage.  The reason I'd like to adapt
> an existing test, rather than add a new run of the standard regression
> tests, is to avoid bloating the regression time unnecessarily.

Maybe there's a way to set up a buildfarm animal or two that run all
the tests with a different segsize?

                        regards, tom lane
[HACKERS] Running some tests with different segment sizes
Hi,

I'm working on merging the customizable segment size patch [1]. I'd like
to run some of the regression tests using it, to guarantee non-standard
settings have test coverage. The reason I'd like to adapt an existing
test, rather than add a new run of the standard regression tests, is to
avoid bloating the regression time unnecessarily.

I thought about starting with just changing pg_upgrade's rerun of the
standard test. Then maybe change one or two tests in src/test/recovery/?

Complaints, better ideas?

Andres Freund

[1] http://archives.postgresql.org/message-id/CAOG9ApHUj10U6ryyTBMqc4pu_yoghULF1YCBwefp4g%2BMovSJQA%40mail.gmail.com
Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL
On Tue, Sep 19, 2017 at 1:28 PM, Stephen Frost wrote:
> Tom,
>
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> chiru r writes:
>>> We are looking for User profiles in open source PostgreSQL.
>>> For example, if a user password failed n+ times while logging in, the
>>> user access has to be blocked for a few seconds.
>>> Please let us know, is there any plan to implement user profiles in
>>> future releases?
>>
>> Not particularly.  You can do that sort of thing already via PAM,
>> for example.
>
> Ugh, hardly, and it's hokey and a huge pain to do, and only works on
> platforms that have PAM.
>
> Better is to use an external authentication system (Kerberos, for
> example) which can deal with this, but I do think this is also
> something we should be considering for core, especially now that we've
> got a reasonable password-based authentication method with SCRAM.
>
> Thanks!
>
> Stephen

Perhaps, as an alternative, although not currently supported, connection
attempts can be added in the future to "Event Triggers"? Users could then
create a trigger function to enable/disable logins.

--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
[HACKERS] Show backtrace when tap tests fail
Hi,

I've had a couple cases where tap tests died, and I couldn't easily see
where / why. For development of a new test I found it useful to show
backtraces in that case - just adding a

    use Carp::Always;

at the start of the relevant module did the trick.

I'm wondering if we shouldn't always do so if the module is installed.
I.e. have PostgresNode or something do something like

    # Include module showing backtraces upon failures. As it's a
    # non-standard module, don't fail if not installed.  (Note: this has
    # to be require, not use, so a missing module fails inside the eval
    # at run time rather than at compile time.)
    eval { require Carp::Always; };

Comments?

- Andres
Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> chiru r writes:
>> We are looking for User profiles in open source PostgreSQL.
>> For example, if a user password failed n+ times while logging in, the
>> user access has to be blocked for a few seconds.
>> Please let us know, is there any plan to implement user profiles in
>> future releases?
>
> Not particularly.  You can do that sort of thing already via PAM,
> for example.

Ugh, hardly, and it's hokey and a huge pain to do, and only works on
platforms that have PAM.

Better is to use an external authentication system (Kerberos, for
example) which can deal with this, but I do think this is also something
we should be considering for core, especially now that we've got a
reasonable password-based authentication method with SCRAM.

Thanks!

Stephen
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
Andres Freund writes:
> On 2017-09-19 13:00:33 -0400, Robert Haas wrote:
>> You mean, in the postmaster?

> Yes. We try to avoid touching shmem there, but it's not like we're
> succeeding fully. See e.g. the pgstat_get_crashed_backend_activity()
> calls (which do rely on shmem being ok to some extent), pmsignal,
> BackgroundWorkerStateChange(), ...

Well, the point is to avoid touching data structures that could be
corrupted enough to confuse the postmaster.  I don't have any problem
with adding some more functionality to pmsignal, say.

                        regards, tom lane
Re: [HACKERS] USER Profiles for PostgreSQL
chiru r writes:
> We are looking for User profiles in open source PostgreSQL.
> For example, if a user password failed n+ times while logging in, the
> user access has to be blocked for a few seconds.
> Please let us know, is there any plan to implement user profiles in
> future releases?

Not particularly.  You can do that sort of thing already via PAM,
for example.

                        regards, tom lane
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
On 2017-09-19 13:00:33 -0400, Robert Haas wrote:
> On Tue, Sep 19, 2017 at 12:51 PM, Andres Freund wrote:
>> That'd not be a crazy amount of shared memory that'd need to be
>> touched, ...
>
> You mean, in the postmaster?

Yes. We try to avoid touching shmem there, but it's not like we're
succeeding fully. See e.g. the pgstat_get_crashed_backend_activity()
calls (which do rely on shmem being ok to some extent), pmsignal,
BackgroundWorkerStateChange(), ...

Greetings,

Andres Freund
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
On Tue, Sep 19, 2017 at 12:51 PM, Andres Freund wrote:
> That'd not be a crazy amount of shared memory that'd need to be
> touched, ...

You mean, in the postmaster?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[HACKERS] USER Profiles for PostgreSQL
Hi All,

Good morning.

We are looking for User profiles in open source PostgreSQL. For example,
if a user password failed n+ times while logging in, the user access has
to be blocked for a few seconds. Please let us know, is there any plan to
implement user profiles in future releases?

Thanks,
Chiranjeevi
Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed
Hi

2017-09-19 16:14 GMT+02:00 Alexander Korotkov:
> On Fri, Sep 8, 2017 at 7:13 AM, Pavel Stehule wrote:
>> 2017-08-16 14:06 GMT+02:00 Pavel Stehule:
>>> 2017-08-15 4:37 GMT+02:00 Peter Eisentraut
>>> <peter.eisentr...@2ndquadrant.com>:
>>>> On 3/11/17 07:06, Pavel Stehule wrote:
>>>>> I am sending an updated version with separated sort direction in a
>>>>> special variable
>>>>
>>>> This patch also needs a rebase.
>>>
>>> I am sending a rebased patch
>>
>> rebased again + fix obsolete help
>
> For me, patch applies cleanly, builds and passed regression tests.
> However, patch misses regression tests covering added functionality.

I am not sure there are any tests related to the output of \dt+ commands
- their result is platform-dependent.

> Patch is definitely harmless, i.e. it doesn't affect anybody who
> doesn't use new functionality.
> But I still would prefer ordering to be options of \d* commands while
> psql variables be defaults for those options...

I understand.

a) I don't think commands like \dt++ (or similar) are a good idea - these
commands should be as simple as possible

b) this patch doesn't block any other design - moreover, it opens the
door, because the executive part will be implemented and users can gain
experience with different output sorts - so if people need a quicker way
to change the result sort, then the work in this area will continue.

Regards

Pavel

> --
> Alexander Korotkov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
On 2017-09-19 12:24:00 -0400, Tom Lane wrote:
> Andres Freund writes:
>> Unfortunately the backends themselves also react with inaccurate error
>> messages to things like immediate shutdowns...
>
> Yeah, those signals are kind of overloaded these days.  Not sure if
> there's any good way to improve that.

We could use the procsignal infrastructure (or some pared down equivalent
with just one 'server-status' byte in shmem) for the non-crash immediate
shutdown. That'd not be a crazy amount of shared memory that'd need to be
touched, and we're not in an actual crash in that case, so no corrupted
shmem.

OTOH, that'd still leave us with some processes that aren't connected to
shmem receiving the bare SIGQUIT - but I think they mostly already have
more terse error messages anyway.

Greetings,

Andres Freund
Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan
2017-09-19 18:33 GMT+02:00 Robert Haas:
> On Mon, Sep 18, 2017 at 11:46 PM, Pavel Stehule wrote:
>> There is the possibility to introduce a new compile option #option to
>> disable the plan cache at function scope. Do you think that is an
>> acceptable solution? It is a step forward.
>
> You can already set a GUC with function scope.  I'm not getting your
> point.

yes, it is true. But the implementation of #option is limited to PLpgSQL
- so there are not too many questions there - a GUC is global - there are
lots of questions:

* what is the correct impact on PREPARE
* what is the correct impact on EXECUTE
* what should be done if this GUC is changed ..

Regards

Pavel

> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
Re: [HACKERS] PG 10 release notes
"'Bruce Momjian'" writes:
> On Tue, Sep 19, 2017 at 12:30:01PM -0400, Tom Lane wrote:
>> Well, if the intent of the note was to encourage people to raise
>> shared_buffers, it didn't do a very good job of that as written,
>> because I sure didn't understand it that way.

> Do you have any suggestions since it is not a code change that I can
> point to?  My guess is that the limitation was removed years ago, but
> we only found out recently.

TBH, I think that's another reason for not release-noting it.  There's
no concrete change to point to, and so it's hard to figure out what to
say.  I'm not even very sure that we should be encouraging people to
change existing shared_buffers settings; the experiments that
Takayuki-san did were only on Windows 10, so we don't really know that
changing that would be a good idea on older Windows versions.

                        regards, tom lane
Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan
On Mon, Sep 18, 2017 at 11:46 PM, Pavel Stehule wrote:
> There is the possibility to introduce a new compile option #option to
> disable the plan cache at function scope. Do you think that is an
> acceptable solution? It is a step forward.

You can already set a GUC with function scope.  I'm not getting your
point.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] PG 10 release notes
On Tue, Sep 19, 2017 at 12:30:01PM -0400, Tom Lane wrote:
> "'Bruce Momjian'" writes:
>> On Tue, Sep 19, 2017 at 12:22:39PM -0400, Tom Lane wrote:
>>> We don't normally release-note documentation changes.  If this
>>> wasn't purely a documentation change, then I was probably in error
>>> to decide it didn't need to be in the notes.
>
>> It was purely a documentation change, but it was a documented change
>> in a long-standing and popular practice of not using too many shared
>> buffers on Windows, so I thought it wise to add it.
>
> Well, if the intent of the note was to encourage people to raise
> shared_buffers, it didn't do a very good job of that as written,
> because I sure didn't understand it that way.

Do you have any suggestions since it is not a code change that I can
point to?  My guess is that the limitation was removed years ago, but we
only found out recently.

--
  Bruce Momjian          http://momjian.us
  EnterpriseDB           http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
Re: [HACKERS] PG 10 release notes
"'Bruce Momjian'" writes:
> On Tue, Sep 19, 2017 at 12:22:39PM -0400, Tom Lane wrote:
>> We don't normally release-note documentation changes.  If this
>> wasn't purely a documentation change, then I was probably in error
>> to decide it didn't need to be in the notes.

> It was purely a documentation change, but it was a documented change in
> a long-standing and popular practice of not using too many shared
> buffers on Windows, so I thought it wise to add it.

Well, if the intent of the note was to encourage people to raise
shared_buffers, it didn't do a very good job of that as written,
because I sure didn't understand it that way.

                        regards, tom lane
Re: [HACKERS] PG 10 release notes
On Tue, Sep 19, 2017 at 12:22:39PM -0400, Tom Lane wrote:
> "'Bruce Momjian'" writes:
>> I am sure Tom can explain his reasoning.
>
> We don't normally release-note documentation changes.  If this
> wasn't purely a documentation change, then I was probably in error
> to decide it didn't need to be in the notes.

It was purely a documentation change, but it was a documented change in a
long-standing and popular practice of not using too many shared buffers
on Windows, so I thought it wise to add it.

--
  Bruce Momjian          http://momjian.us
  EnterpriseDB           http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
Andres Freund writes: > Unfortunately the backends themselves also react with inaccurate error > messages to things like immediate shutdowns... Yeah, those signals are kind of overloaded these days. Not sure if there's any good way to improve that. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PG 10 release notes
"Bruce Momjian" writes: > I am sure Tom can explain his reasoning. We don't normally release-note documentation changes. If this wasn't purely a documentation change, then I was probably in error to decide it didn't need to be in the notes. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Page Scan Mode in Hash Index
On Thu, Aug 24, 2017 at 11:26 AM, Jesper Pedersen wrote: > Based on the feedback in this thread, I have moved the patch to "Ready for > Committer". Reviewing 0001: _hash_readpage gets the page LSN to see if we can apply LP_DEAD hints, but if the table is unlogged or temporary, the LSN will never change, so the test in _hash_kill_items will always think that it's OK to apply the hints. (This seems like it might be a pretty serious problem, because I'm not sure what would be a viable workaround.) The logic that tries to ensure that so->currPos.{buf,currPage,lsn} get updated is not, to my eyes, obviously correct. Generally, the logic for this stuff seems unnaturally spread out to me. For example, _hash_next() updates currPos.buf, but leaves it to _hash_readpage to set currPage and lsn. That function also sets all three fields when it advances to the next page by calling _hash_readnext(), but when it tries to advance to the next page and doesn't find one it sets buf but not currPage or lsn. It seems to me that this logic should be more centralized somehow. Can we arrange things so that at least buf, currPage, and lsn, and maybe also nextPage and prevPage, get updated at the same time and as soon after reading the buffer as possible? It would be bad if a primary bucket page's hasho_prevblkno field got copied into so->currPos.prevpage, because the value that appears for the primary bucket is not a real block number. But _hash_readpage seems like it can bring about this exact situation, because of this code: +if (!BlockNumberIsValid(opaque->hasho_nextblkno)) +prev_blkno = opaque->hasho_prevblkno; ... +so->currPos.prevPage = prev_blkno; If we're reading the primary bucket page and there are no overflow pages, hasho_nextblkno will not be valid and hasho_prevblkno won't be a real block number.
Incidentally, the "if" statement in the above block of code is probably not saving anything; I would suggest for clarity that you do the assignment unconditionally (but this is just a matter of style, so I don't feel super-strongly about it). +return (so->currPos.firstItem <= so->currPos.lastItem); Can this ever return false? It looks to me like we should never reach this code unless that condition holds, and that it would be a bug if we did. If that's correct, maybe this should be an Assert() and the return statement should just return true. +buf = _hash_getbuf(rel, blkno, HASH_READ, LH_OVERFLOW_PAGE); + +/* It might not exist anymore; in which case we can't hint it. */ +if (!BufferIsValid(buf)) +return; This is dead code, because _hash_getbuf always returns a valid buffer. If there's actually a risk of the buffer disappearing, then some other handling is needed for this case. But I suspect that, since a scan always holds a pin on the primary bucket, there's actually no way for this to happen and this is just dead code. The comment in hashgetbitmap claims that _hash_first() or _hash_next() never returns dead tuples. If that were true, it would be a bug, because then scans started during recovery would return wrong answers. A more accurate statement would be something like: _hash_first and _hash_next eliminate dead index entries whenever scan->ignore_killed_tuples is true. Therefore, there's nothing to do here except add the results to the TIDBitmap. _hash_readpage contains unnecessary "continue" statements inside the loops. The reason that they're unnecessary is that there's no code below that in the loop anyway, so the loop is already going to go back around to the top. Whether to change this is a matter of style, so I won't complain too much if you want to leave it the way it is, but personally I'd favor removing them.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
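The first review point above — that an unchanging page LSN defeats the LP_DEAD safety check on unlogged and temporary tables — can be illustrated with a toy model. This is a standalone Python sketch with invented names, not the actual PostgreSQL C code: a scan remembers the page LSN at read time, and later applies its kill hints only if the LSN is unchanged; since pages of unlogged relations never generate WAL, their LSN never advances and the check passes even after the page has been modified.

```python
class Page:
    """Toy buffer page; only WAL-logged pages advance their LSN on change."""
    def __init__(self, logged):
        self.logged = logged
        self.lsn = 0
        self.items = {"tid1", "tid2"}

    def modify(self):
        self.items.discard("tid1")      # e.g. vacuum removed an item
        if self.logged:
            self.lsn += 1               # a WAL record bumps the page LSN
        # unlogged/temp pages generate no WAL, so self.lsn stays 0

def safe_to_apply_hints(page, lsn_at_read):
    # mimics the idea behind _hash_kill_items: trust our saved item
    # offsets only if the page provably has not changed since we read it
    return page.lsn == lsn_at_read

logged, unlogged = Page(logged=True), Page(logged=False)
saved_logged, saved_unlogged = logged.lsn, unlogged.lsn
logged.modify()
unlogged.modify()
```

With the logged page the stale hints are correctly rejected; with the unlogged page the check wrongly succeeds, which is the hazard the review describes.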
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On 19/09/17 16:30, Amit Kapila wrote: > On Tue, Sep 19, 2017 at 3:34 PM, Petr Jelinek > wrote: >> On 18/09/17 18:42, Tom Lane wrote: >>> Amit Kapila writes: On Mon, Sep 18, 2017 at 7:46 PM, Tom Lane wrote: >>> So, frankly, I think we would be best off losing the "logical rep >>> worker slot" business altogether, and making do with just bgworker >>> slots. The alternative is getting the postmaster involved in cleaning >>> up lrep slots as well as bgworker slots, and I'm going to resist >>> any such idea strongly. That way madness lies, or at least an >>> unreliable postmaster --- if we need lrep slots, what other forty-seven >>> data structures are we going to need the postmaster to clean up >>> in the future? >>> >>> I haven't studied the code enough to know why it grew lrep worker >>> slots in the first place. Maybe someone could explain? >>> >> >> I am not quite sure I understand this question, we need to store >> additional info about workers in shared memory so we need slots for that. >> >> If you are asking why they are not identified by the >> BackgroundWorkerHandle, then it's because it's a private struct and can't >> be shared with other processes so there is no way to link the logical >> worker info with bgworker directly. >> > > Not sure, but can't we identify the actual worker slot if we just have > pid of background worker? IIUC, you need access to > BackgroundWorkerHandle by other processes, so that you can stop them > like in case of drop subscription command. If so, then, might be > storing pid can serve the purpose. > I don't think that pid can be trusted to belong to same process between the calls, if we could we would not need BackgroundWorkerHandle. -- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list
2017-09-19 11:43 GMT+02:00 Anthony Bykov: > The following review has been posted through the commitfest application: > make installcheck-world: tested, passed > Implements feature: tested, passed > Spec compliant: tested, passed > Documentation: tested, passed > > Hello, > I've tested it (make check-world) and as far as I understand, it works > fine. > > The new status of this patch is: Ready for Committer > Thank you very much Pavel > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers >
Re: [HACKERS] type cache for concat functions
2017-09-19 12:18 GMT+02:00 Alexander Kuzmenkov: > The following review has been posted through the commitfest application: > make installcheck-world: tested, passed > Implements feature: tested, passed > Spec compliant: tested, passed > Documentation: tested, passed > > Looks good to me. > > The new status of this patch is: Ready for Committer > Thank you very much Pavel > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers >
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
On 2017-09-19 07:48:42 -0400, Robert Haas wrote: > Oh, I've not seen that. Mostly, what I think we should fix is the > fact that the libpq messages tend to report that the server crashed > even if it was an orderly shutdown. > > [rhaas ~]$ psql > psql (11devel) > Type "help" for help. > > rhaas=# select 1; > FATAL: terminating connection due to administrator command > server closed the connection unexpectedly > This probably means the server terminated abnormally > before or while processing the request. > The connection to the server was lost. Attempting reset: Failed. > > It's sort of funny (but also sort of sad) that we've got libpq talking > down the server's reliability (and even in the face of evidence which > manifestly contradicts it). Yea, I'm not very happy with that either. I really think we should stop emitting that if we got an actual message from the server. Unfortunately the backends themselves also react with inaccurate error messages to things like immediate shutdowns... Greetings, Andres Freund -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Bloom filters for hash joins (was: A design for amcheck heapam verification)
On Tue, Sep 19, 2017 at 6:28 AM, Tomas Vondra wrote: > The patch is fairly simple, and did not try to push the bloom filters to > scan nodes or anything like that. It might be a meaningful first step, > though, particularly for selective joins (where only small number of > rows from the outer relation has a match in the hash table). I believe that parallelism makes the use of Bloom filter a lot more compelling, too. Obviously this is something that wasn't taken into consideration in 2015. -- Peter Geoghegan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PG 10 release notes
On Thu, Sep 14, 2017 at 03:13:50AM +, Tsunakawa, Takayuki wrote: > It's embarrassing to ask about such a trivial thing, but I noticed > the following line was missing in the latest release note, which was > originally in Bruce's website: > > Remove documented restriction about using large shared buffers on > Windows (Takayuki Tsunakawa) > > Is this intended? I don't know. The original text was: Remove documented restriction about using large shared buffers on Windows (Takayuki Tsunakawa) and was removed in this commit: commit 749eceff4a1f9740391b126c81af9fd4bf3b1eaa Author: Tom Lane Date: Sun Jul 9 20:11:21 2017 -0400 Doc: desultory copy-editing for v10 release notes. Improve many item descriptions, improve markup, relocate some items that seemed to be in the wrong section. I am sure Tom can explain his reasoning. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Setting pd_lower in GIN metapage
Amit Kapila writes: > On Tue, Sep 19, 2017 at 9:27 AM, Michael Paquier > wrote: >> I am not saying that no index AMs take advantage of FPW compressibility >> for their meta pages. There are cases like this one, as well as one >> code path in BRIN where this is useful, and it is useful as well when >> logging FPWs of the init forks for unlogged relations. > Hmm, why is it useful for logging FPWs of the init forks for unlogged > relations? We don't use REGBUF_STANDARD in those cases. But if we started to do so, that would be a concrete benefit of this patch ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] src/test/subscription/t/005_encoding.pl is broken
Michael Paquier writes: > Now, I just had a look at the logs for a failure and a success, and > one difference can be seen in the subscriber's logs as follows: > -LOG: logical replication table synchronization worker for > subscription "mysub", table "test1" has started > -LOG: logical replication table synchronization worker for > subscription "mysub", table "test1" has finished > +WARNING: out of background worker slots > +HINT: You might need to increase max_worker_processes. > The "+" portion is for a failure, and I think that this causes the > subscription to not consume the changes from the publisher which > explains the failure in the test as the logical worker applying the > changes on the subscriber-side is not here. That would indicate that something isn't ever retrying the worker start; but if that's the case, how is it that we get through the other subscription tests with my random-failure patch in place? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module
Andrew Dunstan writes: > This seems to have upset a number of animals in the buildfarm. Actually, after looking closer, my advice is just to drop the new test cases involving accented letters. It surprises me not in the least that those would have nonportable behavior: probably, some locales will case-fold them and some won't. (This does open up some questions about whether the opclass will ever be of use for Alexey's original purpose :-() regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Bug with pg_basebackup and 'shared' tablespace
On Tuesday, September 19, 2017 12:52:37 PM CEST you wrote: > On Thu, Sep 14, 2017 at 2:17 AM, Pierre Ducroquet wrote: > > All my apologies for the shockingly long time with no answer on this > > topic. > No problem. That's the concept called life I suppose. > > > I will do my best to help review some patches in the current CF. > > Thanks for the potential help! > > > Attached to this email is the new version of the patch, checked against > > HEAD and REL_10_STABLE, with the small change required by Michael (affect > > true/false to the boolean instead of 1/0) applied. > > Thanks for the new version. > > So I have been pondering about this patch, and allowing multiple > versions of Postgres to run in the same tablespace is mainly here for > upgrade purposes, so I don't think that we should encourage such > practices for performance reasons. There is as well > --tablespace-mapping which is here to cover a similar need and we know > that this solution works, at least people have spent time to make sure > it does. > > Another problem that I have with this patch is that on HEAD, > permission checks are done before receiving any data. I think that > this is really saner to do any checks like that first, so as you can > know if the tablespace is available for write access before writing > any data to it. With this patch, you may finish by writing a bunch of > data to a tablespace path, but then fail on another tablespace because > of permission issues so the backup becomes useless after transferring > and writing potentially hundreds of gigabytes. This is no fun to > detect that potentially too late, and can impact the responsiveness of > instances to get backups and restore from them. > > All in all, this patch gets a -1 from me in its current shape. If > Horiguchi-san or yourself want to process further with this patch, of > course feel free to do so, but I don't feel that we are actually > trying to solve a new problem here.
> I am switching the patch to "waiting on author". Hi The point of this patch has never been to 'promote' sharing tablespaces between PostgreSQL versions. This is not a documented feature, and it would be imho a bad idea to promote it because of bugs like this one. But that bug is a problem for people who are migrating between postgresql releases one database at a time on the same server (maybe not a best practice, but a real use case nevertheless). That's how I met this bug, and that's why I wanted to patch it. I know there is a workaround, but to me it seems counterintuitive that with no warning I can create a postgresql cluster that can not be restored without additional pg_basebackup settings. If it is indeed forbidden to share a tablespace between postgresql releases, why don't we enforce it ? Or at least show a warning when CREATE TABLESPACE encounters a non-empty folder ? Regarding the permission check, I don't really see how this is a real problem with the patch. I have to check on master, but it seems to me that the permission check can still be done before starting writing data on disc. We could just delay the 'empty' folder check, and keep checking the folder permissions. And I will do the pgindent run, thanks for the nitpick :) Pierre -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module
Andrew Dunstan writes: > This seems to have upset a number of animals in the buildfarm. Looks like all the ones that are testing in en_US locale. > I could create a third output file, but I am seriously questioning the > point of all this, What locale did you use to create citext_1.out? We've never before needed more than one output file for non-C locales. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On Tue, Sep 19, 2017 at 6:51 PM, Petr Jelinek wrote: > On 19/09/17 15:08, Amit Kapila wrote: >> >> I am not much aware of this area. Can you explain what other usages >> it has apart from in the process that has launched the worker and in >> worker itself? >> > > The subscription "DDL" commands and monitoring functions need access to > that info. > Got your point, but still not sure if we need to maintain additional information to replicate something similar to bgworker machinery. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On Tue, Sep 19, 2017 at 3:34 PM, Petr Jelinek wrote: > On 18/09/17 18:42, Tom Lane wrote: >> Amit Kapila writes: >>> On Mon, Sep 18, 2017 at 7:46 PM, Tom Lane wrote: The subscriber log includes 2017-09-18 08:43:08.240 UTC [15672] WARNING: out of background worker slots Maybe that's harmless, but I'm suspicious that it's a smoking gun. >> >>> I have noticed while fixing the issue you are talking that this path >>> is also susceptible to such problems. In >>> WaitForReplicationWorkerAttach(), it relies on >>> GetBackgroundWorkerPid() to know the status of the worker which won't >>> give the correct status in case of fork failure. The patch I have >>> posted has a fix for the issue, >> >> Your GetBackgroundWorkerPid fix looks good as far as it goes, but >> I feel that WaitForReplicationWorkerAttach's problems go way deeper >> than that --- in fact, that function should not exist at all IMO. >> >> The way that launcher.c is set up, it's relying on either the calling >> process or the called process to free the logicalrep slot when done. >> The called process might never exist at all, so the second half of >> that is obviously unreliable; but WaitForReplicationWorkerAttach >> can hardly be relied on either, seeing it's got a big fat exit-on-SIGINT in it. I thought about putting a PG_TRY, or more likely >> PG_ENSURE_ERROR_CLEANUP, into it, but that doesn't fix the basic >> problem: you can't assume that the worker isn't ever going to start, >> just because you got a signal that means you shouldn't wait anymore. >> >> Now, this rickety business might be fine if it were only about the >> states of the caller and called processes, but we've got long-lived >> shared data involved (ie the worker slots); failing to clean it up >> is not an acceptable outcome. >> > > We'll clean it up eventually if somebody requests creation of new > logical replication worker and that slot is needed. See the "garbage > collection" part in logicalrep_worker_launch().
I know it's not an ideal solution, but it's the best one I could think of given the below. > >> So, frankly, I think we would be best off losing the "logical rep >> worker slot" business altogether, and making do with just bgworker >> slots. The alternative is getting the postmaster involved in cleaning >> up lrep slots as well as bgworker slots, and I'm going to resist >> any such idea strongly. That way madness lies, or at least an >> unreliable postmaster --- if we need lrep slots, what other forty-seven >> data structures are we going to need the postmaster to clean up >> in the future? >> >> I haven't studied the code enough to know why it grew lrep worker >> slots in the first place. Maybe someone could explain? >> > > I am not quite sure I understand this question, we need to store > additional info about workers in shared memory so we need slots for that. > > If you are asking why they are not identified by the > BackgroundWorkerHandle, then it's because it's a private struct and can't > be shared with other processes so there is no way to link the logical > worker info with bgworker directly. > Not sure, but can't we identify the actual worker slot if we just have pid of background worker? IIUC, you need access to BackgroundWorkerHandle by other processes, so that you can stop them like in case of drop subscription command. If so, then, might be storing pid can serve the purpose. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
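Petr's point in this subthread — that a bare pid cannot be trusted between calls — is the usual motivation for handle schemes like BackgroundWorkerHandle, which pair a slot index with a generation counter. A toy Python sketch (invented names, not the PostgreSQL implementation): even if the operating system reuses the same pid for a later, unrelated worker, a stale handle is detected because the slot's generation has moved on.

```python
class Slot:
    def __init__(self):
        self.in_use = False
        self.generation = 0
        self.pid = None

def start_worker(slots, pid):
    for idx, slot in enumerate(slots):
        if not slot.in_use:
            slot.in_use = True
            slot.generation += 1           # invalidates handles to prior users
            slot.pid = pid
            return (idx, slot.generation)  # this pair acts as the "handle"
    raise RuntimeError("out of background worker slots")

def stop_worker(slots, handle):
    idx, _gen = handle
    slots[idx].in_use = False

def pid_from_handle(slots, handle):
    idx, gen = handle
    slot = slots[idx]
    # a generation mismatch means the slot was recycled: handle is stale
    return slot.pid if slot.in_use and slot.generation == gen else None

slots = [Slot() for _ in range(4)]
old = start_worker(slots, pid=4242)
stop_worker(slots, old)
new = start_worker(slots, pid=4242)        # OS happened to reuse the pid
```

A pid-only lookup would wrongly match the new worker here; the generation check makes the old handle return "gone" instead.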
Re: [HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module
On 19 September 2017 at 15:22, Andrew Dunstan wrote: > > > On 09/19/2017 08:35 AM, Andrew Dunstan wrote: >> Add citext_pattern_ops for citext contrib module >> >> This is similar to text_pattern_ops. >> > > This seems to have upset a number of animals in the buildfarm. > > I could create a third output file, but I am seriously questioning the > point of all this, if we are prepared to accept any result from these > functions and operators, depending on locale. Maybe it would be better > simply to test that the result is not null and have done with it. Thoughts? It makes sense to have a fully detailed output in at least one parameter setting. How about using the new psql feature for \if to skip tests if the locale is different to the one for which we have detailed test results? -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pgsql: Add citext_pattern_ops for citext contrib module
On 09/19/2017 08:35 AM, Andrew Dunstan wrote: > Add citext_pattern_ops for citext contrib module > > This is similar to text_pattern_ops. > This seems to have upset a number of animals in the buildfarm. I could create a third output file, but I am seriously questioning the point of all this, if we are prepared to accept any result from these functions and operators, depending on locale. Maybe it would be better simply to test that the result is not null and have done with it. Thoughts? cheers andrew -- Andrew Dunstan https://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed
On Fri, Sep 8, 2017 at 7:13 AM, Pavel Stehule wrote: > 2017-08-16 14:06 GMT+02:00 Pavel Stehule: > >> Hi >> >> 2017-08-15 4:37 GMT+02:00 Peter Eisentraut: >> >>> On 3/11/17 07:06, Pavel Stehule wrote: >>> > I am sending a updated version with separated sort direction in special >>> > variable >>> >>> This patch also needs a rebase. >>> >> >> I am sending rebased patch >> > > rebased again + fix obsolete help > For me, patch applies cleanly, builds and passed regression tests. However, patch misses regression tests covering added functionality. Patch is definitely harmless, i.e. it doesn't affect anybody who doesn't use new functionality. But I still would prefer ordering to be options of \d* commands while psql variables be defaults for those options... -- Alexander Korotkov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] psql - add ability to test whether a variable exists
The following review has been posted through the commitfest application: make installcheck-world: not tested Implements feature: tested, failed Spec compliant: not tested Documentation: tested, failed The patch applies cleanly and compiles + installs fine (although I am unable to do installcheck-world on my Cygwin setup). This is how the patch works on my setup. $ /opt/postgres/reviewpatch/bin/psql -U postgres -h localhost psql (11devel, server 9.6.1) Type "help" for help. postgres=# \set i 1 postgres=# \if :{?i} postgres=# \echo 'testing' testing postgres=# \endif postgres=# \if :{?id} postgres@# \echo 'testing' \echo command ignored; use \endif or Ctrl-C to exit current \if block postgres@# \endif postgres=# -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Bloom filters for hash joins (was: A design for amcheck heapam verification)
Hi, On 09/19/2017 02:55 AM, Robert Haas wrote: > On Mon, Sep 18, 2017 at 5:13 PM, Peter Geoghegan wrote: >> On Mon, Sep 18, 2017 at 2:07 PM, Robert Haas wrote: >>> On Mon, Sep 18, 2017 at 1:29 PM, Tom Lane wrote: Uh, why does the planner need to be involved at all? >>> >>> Because it loses if the Bloom filter fails to filter anything. That's >>> not at all far-fetched; consider SELECT * FROM a.x, b.x WHERE a.x = >>> b.x given a foreign key on a.x referencing b(x). >> >> Wouldn't a merge join be a lot more likely in this case anyway? Low >> selectivity hash joins with multiple batches are inherently slow; the >> wasted overhead of using a bloom filter may not matter. >> >> Obviously this is all pretty speculative. I suspect that this could be >> true, and it seems worth investigating that framing of the problem >> first. > > ISTR Tomas Vondra doing some experiments with this a few years ago and > finding that it was, in fact, a problem. > You seem to have better memory than me, but you're right - I did some experiments with this in 2015, the WIP patch and discussion is here: https://www.postgresql.org/message-id/5670946e.8070...@2ndquadrant.com The whole idea was that with a bloom filter we can reduce the amount of tuples (from the outer relation) written to batches. The patch is fairly simple, and did not try to push the bloom filters to scan nodes or anything like that. It might be a meaningful first step, though, particularly for selective joins (where only small number of rows from the outer relation has a match in the hash table). regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
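The idea Tomas describes — build a Bloom filter over the inner side's join keys, then use it to discard outer-relation tuples before they are ever written to batch files — can be sketched in a few lines. This is a self-contained Python toy with arbitrary sizing, not the patch's C code:

```python
import hashlib

NBITS = 1 << 12        # filter size in bits (arbitrary for this sketch)
NHASH = 3              # number of hash probes per key

def _positions(key):
    # derive NHASH bit positions from one cryptographic digest of the key
    digest = hashlib.sha256(repr(key).encode()).digest()
    for i in range(NHASH):
        yield int.from_bytes(digest[2 * i:2 * i + 2], "big") % NBITS

def build_filter(inner_keys):
    bits = 0
    for key in inner_keys:
        for pos in _positions(key):
            bits |= 1 << pos
    return bits

def might_match(bits, key):
    # False means "definitely not in the hash table": the outer tuple can
    # be dropped instead of being spilled to a batch file
    return all(bits >> pos & 1 for pos in _positions(key))

inner = list(range(10))                 # keys that went into the hash table
bits = build_filter(inner)
# a selective join: very few of the 10,000 outer keys have a match
survivors = [k for k in range(10_000) if might_match(bits, k)]
```

All true matches always survive (Bloom filters have no false negatives); nearly all non-matching outer tuples are filtered out, modulo a small false-positive rate governed by NBITS and NHASH.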
Re: [HACKERS] Setting pd_lower in GIN metapage
On Tue, Sep 19, 2017 at 9:33 AM, Michael Paquier wrote: > On Tue, Sep 19, 2017 at 12:57 PM, Michael Paquier > wrote: >> I'd think about adjusting the comments the proper way for each AM so >> as one can read those comments and catch any limitation immediately. >> The fact this has not been pointed out on this thread before any >> checks and the many mails exchanged on the matter on this thread make >> me think that the current code does not outline the current code >> properties appropriately. > > Another thing that we could consider as well is adding an assertion in > XLogRegisterBuffer & friends so as the combination of REGBUF_STANDARD > and REGBUF_NO_IMAGE is forbidden. That's bugging me as well. > Is that related to this patch? If not, then maybe we can discuss it in a separate thread. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Setting pd_lower in GIN metapage
On Tue, Sep 19, 2017 at 9:27 AM, Michael Paquier wrote: > On Tue, Sep 19, 2017 at 12:40 PM, Amit Kapila wrote: >> On Mon, Sep 18, 2017 at 4:03 PM, Michael Paquier >> wrote: >>> On Mon, Sep 18, 2017 at 11:16 AM, Amit Kapila >>> wrote: On Sat, Sep 16, 2017 at 7:15 PM, Michael Paquier wrote: > You have already noticed above that it will help when wal_consistency_checking is used and that was the main motivation to pass REGBUF_STANDARD apart from maintaining consistency. It is not clear to me what is bothering you. If your only worry about these patches is that you want this sentence to be removed from the comment because you think it is obvious or doesn't make much sense, then I think we can leave this decision to committer. I have added it based on Tom's suggestion above thread about explaining why it is inessential or essential to set pd_lower. I think Amit Langote just tried to mimic what I have done in hash and btree patches to maintain consistency. I am also not very sure if we should write some detailed comment or leave the existing comment as it is. I think it is just a matter of different perspective. >>> >>> What is disturbing me a bit is that the existing comments mention >>> something that could be supported (the compression of pages), but >>> that's actually not done and this is unlikely to happen because the >>> number of bytes associated to a meta page is going to be always >>> cheaper than a FPW, which would cost in CPU to store it for >>> compression is enabled. So I think that we should switch comments to >>> mention that pd_lower is set so as this helps with page masking, but >>> we don't take advantage of XLOG compression in the code. >>> >> >> I think that is not true because we do need FPW for certain usages of >> metapage. Consider a case in _hash_doinsert where register metabuf >> with just >> REGBUF_STANDARD, it can take advantage of removing the hole if >> pd_lower is set to its correct position.
> I am not saying that no index AMs take advantage of FPW compressibility > for their meta pages. There are cases like this one, as well as one > code path in BRIN where this is useful, and it is useful as well when > logging FPWs of the init forks for unlogged relations. > Hmm, why is it useful for logging FPWs of the init forks for unlogged relations? We don't use REGBUF_STANDARD in those cases. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
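What "removing the hole" buys for a metapage can be shown with a toy calculation. This is a Python sketch with invented names — the real hole computation lives in PostgreSQL's xloginsert.c — but the arithmetic is the point: for a page registered as "standard", the bytes between pd_lower and pd_upper are free space and can be omitted from the full-page image, so a metapage that sets pd_lower past its metadata produces a much smaller FPW.

```python
PAGE_SIZE = 8192

def fpw_image(page, pd_lower, pd_upper, standard):
    # For a standard page, the [pd_lower, pd_upper) range is the free-space
    # "hole" and may be left out of the WAL image; for a non-standard page
    # the whole page must be logged.
    if standard and 0 < pd_lower <= pd_upper <= PAGE_SIZE:
        return page[:pd_lower] + page[pd_upper:]
    return page

page = bytes(range(256)) * (PAGE_SIZE // 256)   # dummy 8 kB page
# metapage: say 512 bytes of header + metadata, no line pointers or tuples,
# so pd_lower = 512 and pd_upper = PAGE_SIZE (values invented for the sketch)
with_hint = fpw_image(page, pd_lower=512, pd_upper=PAGE_SIZE, standard=True)
without_hint = fpw_image(page, pd_lower=512, pd_upper=PAGE_SIZE, standard=False)
```

The standard-page image keeps every byte up to pd_lower (the metadata survives intact) while the 7.5 kB hole is dropped — which is also why pd_lower must be set correctly before claiming REGBUF_STANDARD, or valid metadata would be treated as hole.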
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On 19/09/17 15:08, Amit Kapila wrote:
> On Tue, Sep 19, 2017 at 6:29 PM, Petr Jelinek wrote:
>> On 19/09/17 14:33, Amit Kapila wrote:
>>> On Tue, Sep 19, 2017 at 3:34 PM, Petr Jelinek wrote:
>>>> On 18/09/17 18:42, Tom Lane wrote:
>>>>
>>>>> So, frankly, I think we would be best off losing the "logical rep
>>>>> worker slot" business altogether, and making do with just bgworker
>>>>> slots.
>>>
>>> I think that would be cleaner as compared to what we have now.
>>>
>>>>> The alternative is getting the postmaster involved in cleaning
>>>>> up lrep slots as well as bgworker slots, and I'm going to resist
>>>>> any such idea strongly. That way madness lies, or at least an
>>>>> unreliable postmaster --- if we need lrep slots, what other forty-seven
>>>>> data structures are we going to need the postmaster to clean up
>>>>> in the future?
>>>>>
>>>>> I haven't studied the code enough to know why it grew lrep worker
>>>>> slots in the first place. Maybe someone could explain?
>>>>
>>>> I am not quite sure I understand this question, we need to store
>>>> additional info about workers in shared memory so we need slots for that.
>>>
>>> Yeah, but you could have used the way we do for parallel query, where
>>> we set up a DSM and share all such information. You can check the logic
>>> of execparallel.c and parallel.c to see how we do all such stuff for
>>> parallel query.
>>
>> I don't understand how DSM helps in this case (except perhaps the memory
>> allocation being dynamic rather than static). We still need this shared
>> memory area to be accessible from other places than the launcher (the
>> parallel query has one leader which manages everything; that's not true
>> for logical replication)
>
> I am not much aware of this area. Can you explain what other usages
> it has apart from in the process that has launched the worker and in
> the worker itself?

The subscription "DDL" commands and monitoring functions need access to that info.
Note that the synchronization workers are not even started by the launcher (I experimented with doing that, but it slows the process of synchronization quite considerably), so it can't manage them unless the handle is accessible via IPC.

>> and we need it to survive restart of launcher
>> for example, so all the current issues will stay.
>
> Do you mean to say that you want to save this part of shared memory
> across restart of launcher?

Yes. There is no reason why replication should stop because of a launcher restart.

--
Petr Jelinek
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] PoC: full merge join on comparison clause
Hi Alexander,

On Fri, Aug 25, 2017 at 10:11 PM, Alexander Kuzmenkov wrote:
> Here is a new version of the patch, rebased to 749c7c41 and with some
> cosmetic changes.

I looked at this patch briefly. This is a useful feature. This isn't a design-level review of the patch; I may get back to that later. But here are some assorted comments.

The patch applies cleanly, but there are some whitespace errors:

src/backend/executor/nodeMergejoin.c:231: trailing whitespace.
+ /*
src/backend/executor/nodeMergejoin.c:232: trailing whitespace.
+* Determine whether we accept lesser and/or equal tuples
src/backend/optimizer/path/joinpath.c:499: trailing whitespace.
+ * a merge join. A merge join executor can only choose inner values that are
src/backend/optimizer/path/joinpath.c:501: trailing whitespace.
+ * have at most one non-equality clause.

The implementation may change, so fixing the whitespace errors may not be a priority now. The patch compiles cleanly.

You have renamed RestrictInfo member mergeopfamilies as equivopfamilies. I don't think that's a good name; it doesn't convey that these are opfamilies containing merge operators. The changes in check_mergejoinable() suggest that an operator may act as an equality operator in a few operator families and as a comparison operator in others. That looks odd. Actually, an operator family contains operators other than equality operators, so you may want to retain this member and add a new member to specify whether the clause is an equality clause or not.

In mergejoinscansel() you have just removed
Assert(op_strategy == BTEqualStrategyNumber);
Probably this function was written considering only equality operators. But now that we are using it for all the other operators as well, we will need more changes to this function. That may be the reason why the INNER join in your earlier example doesn't choose the right costing.

The comment change in final_cost_mergejoin() needs more work. n1, n2, n3 are the numbers of rows on the inner side with values 1, 2, 3 resp.
So n1 + n2 + n3 + ... = size of inner relation is correct. In that context I am not able to understand your change:

+* If the merge clauses contain inequality, (n1 + n2 + ...) ~=
+* (size of inner relation)^2.

Some stylistic comments:

+ switch (join_op_strategy)
+ {
+ case BTEqualStrategyNumber:
+ parent->mj_UseEqual[iClause] = true;
+ break;
+ case BTLessEqualStrategyNumber:
+ parent->mj_UseEqual[iClause] = true;
+ /* fall through */
+ case BTLessStrategyNumber:
+ parent->mj_UseLesser[iClause] = true;
+ break;
+ case BTGreaterEqualStrategyNumber:
+ parent->mj_UseEqual[iClause] = true;
+ /* fall through */
+ case BTGreaterStrategyNumber:
+ parent->mj_UseLesser[iClause] = true;
+ break;
+ default:
+ Assert(false);

Add blank lines between different cases, and you may want to replace the Assert in the default case with an elog(). See for example exprType() or get_jointype_name().

+ if (sort_result < 0)
+ {
+ result = MJCR_NextOuter;
+ }

We usually do not add {} around a single-statement block.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On Tue, Sep 19, 2017 at 6:29 PM, Petr Jelinek wrote:
> On 19/09/17 14:33, Amit Kapila wrote:
>> On Tue, Sep 19, 2017 at 3:34 PM, Petr Jelinek wrote:
>>> On 18/09/17 18:42, Tom Lane wrote:
>>>
>>>> So, frankly, I think we would be best off losing the "logical rep
>>>> worker slot" business altogether, and making do with just bgworker
>>>> slots.
>>
>> I think that would be cleaner as compared to what we have now.
>>
>>>> The alternative is getting the postmaster involved in cleaning
>>>> up lrep slots as well as bgworker slots, and I'm going to resist
>>>> any such idea strongly. That way madness lies, or at least an
>>>> unreliable postmaster --- if we need lrep slots, what other forty-seven
>>>> data structures are we going to need the postmaster to clean up
>>>> in the future?
>>>>
>>>> I haven't studied the code enough to know why it grew lrep worker
>>>> slots in the first place. Maybe someone could explain?
>>>
>>> I am not quite sure I understand this question, we need to store
>>> additional info about workers in shared memory so we need slots for that.
>>
>> Yeah, but you could have used the way we do for parallel query, where
>> we set up a DSM and share all such information. You can check the logic
>> of execparallel.c and parallel.c to see how we do all such stuff for
>> parallel query.
>
> I don't understand how DSM helps in this case (except perhaps the memory
> allocation being dynamic rather than static). We still need this shared
> memory area to be accessible from other places than the launcher (the
> parallel query has one leader which manages everything; that's not true
> for logical replication)

I am not much aware of this area. Can you explain what other usages it has apart from in the process that has launched the worker and in the worker itself?

> and we need it to survive restart of launcher
> for example, so all the current issues will stay.

Do you mean to say that you want to save this part of shared memory across a restart of the launcher?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
[HACKERS] user-defined numeric data types triggering ERROR: unsupported type
Hi,

while testing a custom data type FIXEDDECIMAL [1], implementing a numeric-like data type with limited range, I ran into several issues that I suspect may not be entirely intentional / expected behavior.

[1] https://github.com/2ndQuadrant/fixeddecimal

Attached is a minimal subset of the extension SQL definition, which may be more convenient when looking into the issue.

The most important issue is that when planning a simple query, the estimation of range queries on a column with the custom data type fails like this:

test=# create table t (a fixeddecimal);
CREATE TABLE
test=# insert into t select random() from generate_series(1,10);
INSERT 0 10
test=# analyze t;
ANALYZE
test=# select * from t where a > 0.9;
ERROR: unsupported type: 16385

The error message here comes from convert_numeric_to_scalar, which gets called during histogram processing (ineq_histogram_selectivity) when approximating the histogram. convert_to_scalar does this:

switch (valuetypid)
{
...
case NUMERICOID:
...
*scaledvalue = convert_numeric_to_scalar(value, valuetypid);
*scaledlobound = convert_numeric_to_scalar(lobound, boundstypid);
*scaledhibound = convert_numeric_to_scalar(hibound, boundstypid);
return true;
...
}

The first call works fine, as the constant really is numeric (valuetypid=1700). But the histogram boundaries are using the custom data type, causing the error, as convert_numeric_to_scalar expects only a bunch of hard-coded data types. So it's pretty much guaranteed to fail with any user-defined data type. This seems a bit unfortunate :-(

One solution would be to implement a custom estimation function, replacing scalarltsel/scalargtsel. But that seems rather unnecessary, especially considering there is an implicit cast from fixeddecimal to numeric. Another thing is that when there's just an MCV, the estimation works just fine.
So I see two basic ways to fix this:

* Make convert_numeric_to_scalar smarter, so that it checks if there is an implicit cast to numeric, and fails only if it does not find one.

* Make convert_to_scalar smarter, so that it returns false for unexpected data types, so that ineq_histogram_selectivity uses the default estimate of 0.5 (for that one bucket).

Both options seem more favorable than what's happening currently. Is there anything I missed, making those fixes unacceptable? If anything, the fact that MCV estimates work while histogram estimates do not makes this somewhat unpredictable - a change in the data distribution (or perhaps even just a different sample in ANALYZE) may result in sudden failures.

I ran into one additional strange thing while investigating this. The attached SQL script defines two operator classes - fixeddecimal_ops and fixeddecimal_numeric_ops, defining (fixeddecimal,fixeddecimal) and (fixeddecimal,numeric) operators. Dropping one of those operator classes changes the estimates in somewhat suspicious ways.

When only fixeddecimal_ops is defined, we get this:

test=# explain select * from t where a > 0.1;
QUERY PLAN
Seq Scan on t (cost=0.00..1943.00 rows=3 width=8)
Filter: ((a)::numeric > 0.1)
(2 rows)

That is, we get the default estimate for inequality clauses, 33%. But when only fixeddecimal_numeric_ops is defined, we get this:

test=# explain select * from t where a > 0.1;
QUERY PLAN
Seq Scan on t (cost=0.00..1693.00 rows=5 width=8)
Filter: (a > 0.1)
(2 rows)

That is, we get a 50% estimate, because that's what scalarineqsel uses when ineq_histogram_selectivity can't compute selectivity from a histogram for some reason. Wouldn't it make more sense to use the default 33% estimate here?
regards

--
Tomas Vondra
http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

fixeddecimal-minimal.sql
Description: application/sql
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On 19/09/17 14:33, Amit Kapila wrote:
> On Tue, Sep 19, 2017 at 3:34 PM, Petr Jelinek wrote:
>> On 18/09/17 18:42, Tom Lane wrote:
>>
>>> So, frankly, I think we would be best off losing the "logical rep
>>> worker slot" business altogether, and making do with just bgworker
>>> slots.
>
> I think that would be cleaner as compared to what we have now.
>
>>> The alternative is getting the postmaster involved in cleaning
>>> up lrep slots as well as bgworker slots, and I'm going to resist
>>> any such idea strongly. That way madness lies, or at least an
>>> unreliable postmaster --- if we need lrep slots, what other forty-seven
>>> data structures are we going to need the postmaster to clean up
>>> in the future?
>>>
>>> I haven't studied the code enough to know why it grew lrep worker
>>> slots in the first place. Maybe someone could explain?
>>
>> I am not quite sure I understand this question, we need to store
>> additional info about workers in shared memory so we need slots for that.
>
> Yeah, but you could have used the way we do for parallel query, where
> we set up a DSM and share all such information. You can check the logic
> of execparallel.c and parallel.c to see how we do all such stuff for
> parallel query.

I don't understand how DSM helps in this case (except perhaps the memory allocation being dynamic rather than static). We still need this shared memory area to be accessible from other places than the launcher (parallel query has one leader which manages everything; that's not true for logical replication), and we need it to survive a restart of the launcher, for example, so all the current issues will stay.

--
Petr Jelinek
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] subscription worker signalling wal writer too much
At Sat, 26 Aug 2017 14:45:20 -0700, Jeff Janes wrote in
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On Tue, Sep 19, 2017 at 3:34 PM, Petr Jelinek wrote:
> On 18/09/17 18:42, Tom Lane wrote:
>
>> So, frankly, I think we would be best off losing the "logical rep
>> worker slot" business altogether, and making do with just bgworker
>> slots.

I think that would be cleaner as compared to what we have now.

>> The alternative is getting the postmaster involved in cleaning
>> up lrep slots as well as bgworker slots, and I'm going to resist
>> any such idea strongly. That way madness lies, or at least an
>> unreliable postmaster --- if we need lrep slots, what other forty-seven
>> data structures are we going to need the postmaster to clean up
>> in the future?
>>
>> I haven't studied the code enough to know why it grew lrep worker
>> slots in the first place. Maybe someone could explain?
>
> I am not quite sure I understand this question, we need to store
> additional info about workers in shared memory so we need slots for that.

Yeah, but you could have used the way we do for parallel query, where we set up a DSM and share all such information. You can check the logic of execparallel.c and parallel.c to see how we do all such stuff for parallel query.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] postgres_fdw bug in 9.6
On Tue, Sep 5, 2017 at 1:10 PM, Etsuro Fujita wrote:
>
> I think Tom is reviewing this patch [1].

I am marking this as ready for committer, as I don't have any new comments and other reviewers have possibly finished reviewing it as well.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
Re: [HACKERS] [PATCH] Generic type subscripting
On 19 September 2017 at 10:21, Arthur Zakirov wrote:
> On Mon, Sep 18, 2017 at 12:25:04PM +0200, Dmitry Dolgov wrote:
>> > I think it would be good to add a new catalog table. It may be named
>> > pg_type_sbs or pg_subscripting (the second is better, I think).
>> > This table may have the fields:
>> > - oid
>> > - sbstype
>> > - sbsinit
>> > - sbsfetch
>> > - sbsassign
>>
>> What is `sbstype`?
>
> 'sbstype' is the oid of the type from pg_type for which subscripting is
> created. I.e. pg_type may not have the 'typsubsparse' field.

I'm confused, why do we need it? I mean, isn't it enough to have a subscripting oid in a pg_type record?

> On 18 September 2017 at 12:25, Dmitry Dolgov <9erthali...@gmail.com> wrote:
>
> Overall I have only one concern about this suggestion - basically it changes
> nothing from the perspective of functionality or implementation quality.

Few more thoughts about this point. Basically, if we're going this way (i.e. having `pg_subscripting`), there will be one possible change of functionality - in this case, since we store the oids of all the required functions, we can pass them to a `parse` function (so that a custom extension does not need to resolve them every time). At the same time there are consequences of storing `pg_subscripting`, e.g.:

* I assume the performance would be worse, because we have to do more actions to actually call a proper function.
* The implementation itself will be a little bit more complex, I think.
* Should we think about other functionality besides `CREATE` and `DROP`, for example `ALTER` (as far as I see, aggregates support that)?

and maybe something else that I don't see now.
Re: [HACKERS] src/test/subscription/t/005_encoding.pl is broken
On Tue, Sep 19, 2017 at 8:51 PM, Robert Haas wrote:
> On Mon, Sep 18, 2017 at 1:58 PM, Andres Freund wrote:
>> To my knowledge there's not really any difference between the two in
>> logical replication. Received changes are immediately applied; there's
>> no equivalent to a walreceiver queuing up "logical wal" onto disk.
>
> Huh? Decoding and applying the changes has to take some finite time
> greater than 0.

FWIW, the wait method looks fine to me. Now, I just had a look at the logs for a failure and a success, and one difference can be seen in the subscriber's logs as follows:

-LOG: logical replication table synchronization worker for subscription "mysub", table "test1" has started
-LOG: logical replication table synchronization worker for subscription "mysub", table "test1" has finished
+WARNING: out of background worker slots
+HINT: You might need to increase max_worker_processes.

The "+" portion is for a failure, and I think that this causes the subscription to not consume the changes from the publisher, which explains the failure in the test, as the logical worker applying the changes on the subscriber side is not there.

--
Michael
Re: [HACKERS] src/test/subscription/t/005_encoding.pl is broken
On Mon, Sep 18, 2017 at 1:58 PM, Andres Freund wrote:
> To my knowledge there's not really any difference between the two in
> logical replication. Received changes are immediately applied; there's
> no equivalent to a walreceiver queuing up "logical wal" onto disk.

Huh? Decoding and applying the changes has to take some finite time greater than 0.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Re: [COMMITTERS] pgsql: Perform only one ReadControlFile() during startup.
On Mon, Sep 18, 2017 at 2:04 PM, Andres Freund wrote:
> On 2017-09-18 12:16:42 -0400, Robert Haas wrote:
>> On Mon, Sep 18, 2017 at 6:32 AM, Andres Freund wrote:
>>> One thing that I've noticed for a while, but that I was reminded of
>>> again here. We very frequently allow psql to reconnect in case of a crash,
>>> just for the postmaster to notice a child has gone and kill that session. I
>>> don't recall that frequently happening, but these days it happens nearly
>>> every time.
>>
>> I don't understand what you're talking about here.
>
> I often see a backend crash, psql reacting to that crash by
> reconnecting, successfully establishing a new connection, just to be
> kicked off by the postmaster as it does the crash-restart cycle. I've
> not yet figured out when exactly this happens and when not.

Oh, I've not seen that. Mostly, what I think we should fix is the fact that the libpq messages tend to report that the server crashed even if it was an orderly shutdown.

[rhaas ~]$ psql
psql (11devel)
Type "help" for help.

rhaas=# select 1;
FATAL: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

It's sort of funny (but also sort of sad) that we've got libpq talking down the server's reliability (and even in the face of evidence which manifestly contradicts it).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Rewriting the test of pg_upgrade as a TAP test
On Tue, Sep 19, 2017 at 6:15 PM, Alvaro Herrera wrote:
> Michael Paquier wrote:
>> On Thu, Sep 7, 2017 at 11:14 AM, Michael Paquier wrote:
>>> Or we could make upgradecheck a noop, then remove it once all the MSVC
>>> animals have upgraded to a newer version of the buildfarm client which
>>> does not use upgradecheck anymore (I am fine to send a patch or a pull
>>> request to Andrew for that).
>>
>> This patch is logged as "waiting on author" in the current commit
>> fest, but any new patch will depend on the feedback that any other
>> hacker has to offer based on the set of ideas I have posted upthread.
>> Hence I am yet unsure what is the correct way to move things forward.
>> So, any opinions? Peter or others?
>
> I think the first step is to send the rebased version of the patch. It
> was last posted in April ...

Here you go. I have not done anything fancy for cross-version tests yet.

--
Michael

pgupgrade-tap-test-v4.patch
Description: Binary data
Re: [HACKERS] path toward faster partition pruning
I have done some refactoring of the code: I have moved the code for getting the matching clauses into a separate function, so that it can fetch the matching clauses from any given restriction list. It can be applied on top of 0002-WIP: planner-side-changes-for-partition-pruning.patch.

On Sat, Sep 16, 2017 at 3:13 PM, Dilip Kumar wrote:
> On Fri, Sep 15, 2017 at 2:20 PM, Amit Langote wrote:
>> On 2017/09/15 11:16, Amit Langote wrote:
>
> Thanks for the updated patch. I was going through the logic of
> get_rel_partitions in 0002, as almost similar functionality will be
> required by runtime partition pruning, on which Beena is working. The
> only difference is that here we are processing
> "rel->baserestrictinfo", and in case of runtime pruning, we also need
> to process join clauses which are pushed down to the appendrel.
>
> So can we make some generic logic which can be used for both patches?
>
> Basically, we need to make two changes:
>
> 1. In get_rel_partitions, instead of processing
> "rel->baserestrictinfo", we can take a clause list as input; that way
> we can pass any clause list to this function.
>
> 2. Don't call the "get_partitions_for_keys" routine from
> "get_rel_partitions"; instead, get_rel_partitions can just prepare
> minkey and maxkey, and the caller of get_rel_partitions can call
> get_partitions_for_keys, because for runtime pruning we need to call
> get_partitions_for_keys at runtime.
>
> After these changes there will still be one problem:
> get_partitions_for_keys is directly fetching "rightop->constvalue",
> whereas, for runtime pruning, we need to store the rightop itself and
> calculate the value at runtime by param evaluation. I haven't yet
> thought about how we can make this last part generic.
>
> --
> Regards,
> Dilip Kumar
> EnterpriseDB: http://www.enterprisedb.com

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

0002-refactor_get_rel_partition.patch
Description: Binary data
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
Rafia Sabih wrote:
> On completing the benchmark for all queries for the above mentioned
> setup, the following performance improvement can be seen:
>
> Query | Patch | Head
> 3     | 1455  | 1631
> 4     | 499   | 4344
> 5     | 1464  | 1606
> 10    | 1475  | 1599
> 12    | 1465  | 1790
>
> Note that all values of execution time are in seconds.
> To summarise, apart from Q4, all other queries are showing somewhere
> around 10-20% improvement.

Saving 90% of the time on the slowest query looks like a worthy improvement in its own right. However, you're reporting execution time only, right? What happens to planning time? In a quick look,

$ grep 'Planning time' pg_part_*/4*
pg_part_head/4_1.out: Planning time: 3390.699 ms
pg_part_head/4_2.out: Planning time: 194.211 ms
pg_part_head/4_3.out: Planning time: 210.964 ms
pg_part_head/4_4.out: Planning time: 4150.647 ms
pg_part_patch/4_1.out: Planning time: 7577.247 ms
pg_part_patch/4_2.out: Planning time: 312.421 ms
pg_part_patch/4_3.out: Planning time: 304.697 ms
pg_part_patch/4_4.out: Planning time: 269.778 ms

I think the noise in these few results is too large to draw any conclusions. Maybe a few dozen runs of EXPLAIN (w/o ANALYZE) would tell something significant?

--
Álvaro Herrera
https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] type cache for concat functions
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

Looks good to me.

The new status of this patch is: Ready for Committer
Re: [HACKERS] src/test/subscription/t/002_types.pl hanging on particular environment
On 18/09/17 18:42, Tom Lane wrote:
> Amit Kapila writes:
>> On Mon, Sep 18, 2017 at 7:46 PM, Tom Lane wrote:
>>> The subscriber log includes
>>> 2017-09-18 08:43:08.240 UTC [15672] WARNING: out of background worker slots
>>> Maybe that's harmless, but I'm suspicious that it's a smoking gun.
>>
>> I have noticed while fixing the issue you are talking about that this path
>> is also susceptible to such problems. In
>> WaitForReplicationWorkerAttach(), it relies on
>> GetBackgroundWorkerPid() to know the status of the worker, which won't
>> give the correct status in case of fork failure. The patch I have
>> posted has a fix for the issue,
>
> Your GetBackgroundWorkerPid fix looks good as far as it goes, but
> I feel that WaitForReplicationWorkerAttach's problems go way deeper
> than that --- in fact, that function should not exist at all IMO.
>
> The way that launcher.c is set up, it's relying on either the calling
> process or the called process to free the logicalrep slot when done.
> The called process might never exist at all, so the second half of
> that is obviously unreliable; but WaitForReplicationWorkerAttach
> can hardly be relied on either, seeing it's got a big fat exit-on-
> SIGINT in it. I thought about putting a PG_TRY, or more likely
> PG_ENSURE_ERROR_CLEANUP, into it, but that doesn't fix the basic
> problem: you can't assume that the worker isn't ever going to start,
> just because you got a signal that means you shouldn't wait anymore.
>
> Now, this rickety business might be fine if it were only about the
> states of the caller and called processes, but we've got long-lived
> shared data involved (ie the worker slots); failing to clean it up
> is not an acceptable outcome.

We'll clean it up eventually, if somebody requests creation of a new logical replication worker and that slot is needed. See the "garbage collection" part in logicalrep_worker_launch(). I know it's not an ideal solution, but it's the best one I could think of given the below.
> So, frankly, I think we would be best off losing the "logical rep
> worker slot" business altogether, and making do with just bgworker
> slots. The alternative is getting the postmaster involved in cleaning
> up lrep slots as well as bgworker slots, and I'm going to resist
> any such idea strongly. That way madness lies, or at least an
> unreliable postmaster --- if we need lrep slots, what other forty-seven
> data structures are we going to need the postmaster to clean up
> in the future?
>
> I haven't studied the code enough to know why it grew lrep worker
> slots in the first place. Maybe someone could explain?

I am not quite sure I understand this question; we need to store additional info about workers in shared memory, so we need slots for that.

If you are asking why they are not identified by the BackgroundWorkerHandle, then it's because it's a private struct and can't be shared with other processes, so there is no way to link the logical worker info with the bgworker directly. I mentioned that I was not happy about this during the original development, but it didn't gather any attention, which I took to mean that nobody minds.

--
Petr Jelinek
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] JIT compiling expressions/deform + inlining prototype v2.0
Hi,

On 04.09.2017 23:52, Andres Freund wrote:
> That piece of code isn't particularly clear (and has a bug in the
> submitted version), I'm revising it.
> ...
> Yea, I've changed that already, although it's currently added earlier,
> because the alignment is needed before, to access the column correctly.
> I've also made a number of efficiency improvements, primarily to access
> columns with an absolute offset if all preceding ones are fixed-width
> not-null columns - that is quite noticeable performance-wise.

Should I wait for the new version of your patch, or continue reviewing this code?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
[HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed

Hello,

I've tested it (make check-world) and, as far as I understand, it works fine.

The new status of this patch is: Ready for Committer
Re: [HACKERS] Rewriting the test of pg_upgrade as a TAP test
Michael Paquier wrote:
> On Thu, Sep 7, 2017 at 11:14 AM, Michael Paquier wrote:
> > Or we could make upgradecheck a noop, then remove it once all the MSVC
> > animals have upgraded to a newer version of the buildfarm client which
> > does not use upgradecheck anymore (I am fine to send a patch or a pull
> > request to Andrew for that).
>
> This patch is logged as "waiting on author" in the current commit
> fest, but any new patch will depend on the feedback that any other
> hacker has to offer based on the set of ideas I have posted upthread.
> Hence I am yet unsure what is the correct way to move things forward.
> So, any opinions? Peter or others?

I think the first step is to send the rebased version of the patch. It was last posted in April ...

--
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] UPDATE of partition key
On Tue, Sep 19, 2017 at 1:15 PM, Amit Khandekar wrote:
> On 18 September 2017 at 20:45, Dilip Kumar wrote:
>> Please find few more comments.
>>
>> + * in which they appear in the PartitionDesc. Also, extract the
>> + * partition key columns of the root partitioned table. Those of the
>> + * child partitions would be collected during recursive expansion.
>> */
>> + pull_child_partition_columns(_part_cols, oldrelation, oldrelation);
>> expand_partitioned_rtentry(root, rte, rti, oldrelation, oldrc,
>> lockmode, >append_rel_list,
>> + _part_cols,
>>
>> pcinfo->all_part_cols is only used in case of update. I think we can
>> call pull_child_partition_columns only if the rte has updateCols?
>>
>> @@ -2029,6 +2034,7 @@ typedef struct PartitionedChildRelInfo
>>
>> Index parent_relid;
>> List *child_rels;
>> + Bitmapset *all_part_cols;
>> } PartitionedChildRelInfo;
>>
>> I might be missing something, but do we really need to store
>> all_part_cols inside the PartitionedChildRelInfo? Can't we call
>> pull_child_partition_columns directly inside inheritance_planner
>> whenever we realize that the RTE has some updateCols and we want to
>> check the overlap?
>
> One thing we will have to do extra is: open and close the
> partitioned rels again. The idea was that we collect the bitmap
> *while* we are already expanding through the tree and the rel is open.
> Will check if this is feasible.

Oh, I see.

>> +extern void partition_walker_init(PartitionWalker *walker, Relation rel);
>> +extern Relation partition_walker_next(PartitionWalker *walker,
>> + Relation *parent);
>> +
>>
>> I don't see these functions used anywhere?
>>
>> +typedef struct PartitionWalker
>> +{
>> + List *rels_list;
>> + ListCell *cur_cell;
>> +} PartitionWalker;
>> +
>>
>> Same as above
>
> Yes, this was left out from the earlier implementation. Will have this
> removed in the next updated patch.

Ok, I will continue my review. Thanks.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] Pre-existing bug in trigger.c
Tom Lane wrote:
> After studying this awhile, I've concluded that neither of those
> ideas leads to a fix simple enough that I'd be comfortable with
> back-patching it. What seems like the best answer is to not pass
> delete_ok = true to afterTriggerInvokeEvents in AfterTriggerEndQuery.
> Then, afterTriggerInvokeEvents will examine the passed event list
> pointer only during its loop initialization, at which time it's
> surely still valid.

For reference: this fix was committed as 27c6619e9c8f.

--
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] [PATCH] Generic type subscripting
On Mon, Sep 18, 2017 at 12:25:04PM +0200, Dmitry Dolgov wrote:
> > I think it would be good to add a new catalog table. It may be named
> > pg_type_sbs or pg_subscripting (the second is better, I think).
> > This table may have the fields:
> > - oid
> > - sbstype
> > - sbsinit
> > - sbsfetch
> > - sbsassign
>
> What is `sbstype`?

'sbstype' is the OID of the type from pg_type for which the subscripting is created. That is, pg_type then need not have the 'typsubsparse' field.

> > And the command 'CREATE SUBSCRIPTING' should add an entry to the
> > pg_subscripting table. It also adds entries to the pg_depend table:
> > a dependency between pg_type and pg_subscripting, and a dependency
> > between pg_type and pg_proc.
> > 'DROP SUBSCRIPTING' should drop these entries; it should not drop the
> > init function.
>
> Why should we keep those subscripting functions? From my understanding
> they're totally internal and useless without a subscripting context.

Other similar commands do not drop related functions. For example, 'DROP CAST' does not drop the function used to perform the cast.

> It also adds entries to the pg_depend table: dependency between pg_type and
> pg_subscripting, dependency between pg_type and pg_proc.

There was a typo on my part: the last entry is a dependency between pg_subscripting and pg_proc.

--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company
Re: [HACKERS] Repetitive code in RI triggers
> On 11 Apr 2017, at 03:41, Peter Eisentraut wrote:
>
> On 4/10/17 11:55, Ildar Musin wrote:
>> I was looking through the RI triggers code recently and noticed a few
>> almost identical functions, e.g. ri_restrict_upd() and
>> ri_restrict_del(). The following patch is an attempt to reduce some of
>> repetitive code.
>
> That looks like something worth pursuing. Please add it to the next
> commit fest.

Removing reviewer Maksim Milyutin from patch entry due to inactivity and community account email bouncing. Maksim: if you are indeed reviewing this patch, then please update the community account and re-add to the patch entry.

cheers ./daniel
Re: [HACKERS] UPDATE of partition key
On 18 September 2017 at 20:45, Dilip Kumar wrote:
> Please find few more comments.
>
> + * in which they appear in the PartitionDesc. Also, extract the
> + * partition key columns of the root partitioned table. Those of the
> + * child partitions would be collected during recursive expansion.
> */
> + pull_child_partition_columns(_part_cols, oldrelation, oldrelation);
> expand_partitioned_rtentry(root, rte, rti, oldrelation, oldrc,
> lockmode, >append_rel_list,
> + _part_cols,
>
> pcinfo->all_part_cols is only used in case of update. I think we can
> call pull_child_partition_columns only if the rte has updateCols?
>
> @@ -2029,6 +2034,7 @@ typedef struct PartitionedChildRelInfo
>
> Index parent_relid;
> List *child_rels;
> + Bitmapset *all_part_cols;
> } PartitionedChildRelInfo;
>
> I might be missing something, but do we really need to store
> all_part_cols inside the PartitionedChildRelInfo? Can't we call
> pull_child_partition_columns directly inside inheritance_planner
> whenever we realize that the RTE has some updateCols and we want to
> check the overlap?

One thing we will have to do extra is: open and close the partitioned rels again. The idea was that we collect the bitmap *while* we are already expanding through the tree and the rel is open. Will check if this is feasible.

> +extern void partition_walker_init(PartitionWalker *walker, Relation rel);
> +extern Relation partition_walker_next(PartitionWalker *walker,
> + Relation *parent);
> +
>
> I don't see these functions used anywhere?
>
> +typedef struct PartitionWalker
> +{
> + List *rels_list;
> + ListCell *cur_cell;
> +} PartitionWalker;
> +
>
> Same as above

Yes, this was left out from the earlier implementation. Will have it removed in the next updated patch.

--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
Re: [HACKERS] Block level parallel vacuum WIP
On Tue, Sep 19, 2017 at 3:33 PM, Thomas Munro wrote:
> On Fri, Sep 8, 2017 at 10:37 PM, Masahiko Sawada wrote:
>> Since v4 patch conflicts with current HEAD I attached the latest version
>> patch.
>
> Hi Sawada-san,
>
> Here is an interesting failure with this patch:
>
> test rowsecurity ... FAILED
> test rules       ... FAILED
>
> Down at the bottom of the build log in the regression diffs file you can see:
>
> ! ERROR: cache lookup failed for relation 32893
>
> https://travis-ci.org/postgresql-cfbot/postgresql/builds/277165907

Thank you for letting me know. Hmm, it's an interesting failure. I'll investigate it and post the new patch.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
Re: [HACKERS] Simplify ACL handling for large objects and removal of superuser() checks
On Tue, Sep 19, 2017 at 1:24 PM, Vaishnavi Prabakaran wrote:
> On Mon, Aug 14, 2017, Michael Paquier wrote:
>> Attached is a set of 3 patches:
>
> I tried to review the patch, and firstly the patch applies cleanly without
> any noise.

Thanks for the review, Vaishnavi.

>> - 0001 removes ALLOW_DANGEROUS_LO_FUNCTIONS
>
> I think it is better to remove "ALLOW_DANGEROUS_LO_FUNCTIONS" from
> pg_config_manual.h as well.

Indeed. Fixed.

>> @@ -163,22 +150,16 @@ lo_read(int fd, char *buf, int len)
>>
>> +  if ((lobj->flags & IFS_RDLOCK) == 0)
>> +    ereport(ERROR,
>> +        (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
>> +         errmsg("large object descriptor %d was not opened for reading",
>> +                fd)));
>
> Do we ever reach this error? Because per my understanding ...

This error can be reached, and it is part of the regression tests. One query which passed previously is now failing:

+SELECT loread(lo_open(1001, x'2'::int), 32); -- fail, wrong mode
+ERROR: large object descriptor 0 was not opened for reading

> I think the documented behavior is more appropriate, and post ACL
> checks for select and update permissions in inv_open(), the IFS_RDLOCK
> flag should be set regardless of the mode specified.

OK, so that's one vote for keeping the existing API behavior with write implying read. Thanks for the input. At least I can see no complaints about patches 1 and 2 :)

--
Michael

0001-Remove-ALLOW_DANGEROUS_LO_FUNCTIONS-for-LO-related-s.patch
Description: Binary data
0002-Replace-superuser-checks-of-large-object-import-expo.patch
Description: Binary data
0003-Move-ACL-checks-for-large-objects-when-opening-them.patch
Description: Binary data
Re: [HACKERS] Block level parallel vacuum WIP
On Fri, Sep 8, 2017 at 10:37 PM, Masahiko Sawada wrote:
> Since v4 patch conflicts with current HEAD I attached the latest version
> patch.

Hi Sawada-san,

Here is an interesting failure with this patch:

test rowsecurity ... FAILED
test rules       ... FAILED

Down at the bottom of the build log in the regression diffs file you can see:

! ERROR: cache lookup failed for relation 32893

https://travis-ci.org/postgresql-cfbot/postgresql/builds/277165907

--
Thomas Munro
http://www.enterprisedb.com
Re: [HACKERS] GUC for cleanup indexes threshold.
I was just looking at this thread since it has been left alone for a long time in the CF app.

At Mon, 18 Sep 2017 16:35:58 -0700, Peter Geoghegan wrote in
> On Wed, Apr 5, 2017 at 3:50 PM, Andres Freund wrote:
> > Hi,
> >
> > On 2017-04-01 03:05:07 +0900, Masahiko Sawada wrote:
> >> On Fri, Mar 31, 2017 at 11:44 PM, Robert Haas
> >> wrote:
> >> [ lots of valuable discussion ]
> >
> > I think this patch clearly still is in the design stage, and has
> > received plenty feedback this CF. I'll therefore move this to the next
> > commitfest.
>
> Does anyone have ideas on a way forward here? I don't, but then I
> haven't thought about it in detail in several months.

Is additional storage in the metapage to store the current status of vacuum still unacceptable, even if it can avoid a useless full scan on indexes, especially for stable tables?

Or, how about an additional bit in pg_stat_*_index to indicate that the index *doesn't* require the vacuum cleanup stage (the default value causes cleanup)? index_bulk_delete (or ambulkdelete) returns the flag in IndexBulkDeleteResult, then lazy_scan_heap stores the flag in stats, and in the next cycle it is looked up to decide the necessity of index cleanup.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
Re: [HACKERS] Back-branch release notes up for review
On Thu, Aug 31, 2017 at 02:53:45AM +, Noah Misch wrote: > On Sat, Aug 26, 2017 at 03:31:12PM -0400, Tom Lane wrote: > > + > > + > > + > > + Show foreign tables > > + in information_schema.table_privileges > > + view (Peter Eisentraut) > > + > > + > > + > > + All other relevant information_schema views include > > + foreign tables, but this one ignored them. > > + > > + > > + > > + Since this view definition is installed by initdb, > > + merely upgrading will not fix the problem. If you need to fix this > > + in an existing installation, you can, as a superuser, do this > > + in psql: > > + > > +BEGIN; > > +DROP SCHEMA information_schema CASCADE; > > +\i SHAREDIR/information_schema.sql > > +COMMIT; > > + > > + (Run pg_config --sharedir if you're uncertain > > + where SHAREDIR is.) This must be repeated in each > > + database to be fixed. > > + > > + > > "DROP SCHEMA information_schema CASCADE;" will drop objects outside > information_schema that depend on objects inside information_schema. For > example, this will drop user-defined views if the view query refers to > information_schema. That's improper in a release-noted update procedure. > This could instead offer a CREATE OR REPLACE VIEW or just hand-wave about the > repaired definition being available in information_schema.sql. I lean toward the former, attached. Conveniently, every released branch has the same definition for this view. 
diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml
index faa7ae4..6fa21e3 100644
--- a/doc/src/sgml/release-9.2.sgml
+++ b/doc/src/sgml/release-9.2.sgml
@@ -58,14 +58,44 @@
 in an existing installation, you can, as a superuser, do this
 in psql:
-BEGIN;
-DROP SCHEMA information_schema CASCADE;
-\i SHAREDIR/information_schema.sql
-COMMIT;
+SET search_path TO information_schema;
+CREATE OR REPLACE VIEW table_privileges AS
+    SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor,
+           CAST(grantee.rolname AS sql_identifier) AS grantee,
+           CAST(current_database() AS sql_identifier) AS table_catalog,
+           CAST(nc.nspname AS sql_identifier) AS table_schema,
+           CAST(c.relname AS sql_identifier) AS table_name,
+           CAST(c.prtype AS character_data) AS privilege_type,
+           CAST(
+             CASE WHEN
+                  -- object owner always has grant options
+                  pg_has_role(grantee.oid, c.relowner, 'USAGE')
+                  OR c.grantable
+                  THEN 'YES' ELSE 'NO' END AS yes_or_no) AS is_grantable,
+           CAST(CASE WHEN c.prtype = 'SELECT' THEN 'YES' ELSE 'NO' END AS yes_or_no) AS with_hierarchy
+
+    FROM (
+            SELECT oid, relname, relnamespace, relkind, relowner, (aclexplode(coalesce(relacl, acldefault('r', relowner)))).* FROM pg_class
+         ) AS c (oid, relname, relnamespace, relkind, relowner, grantor, grantee, prtype, grantable),
+         pg_namespace nc,
+         pg_authid u_grantor,
+         (
+           SELECT oid, rolname FROM pg_authid
+           UNION ALL
+           SELECT 0::oid, 'PUBLIC'
+         ) AS grantee (oid, rolname)
+
+    WHERE c.relnamespace = nc.oid
+          AND c.relkind IN ('r', 'v', 'f')
+          AND c.grantee = grantee.oid
+          AND c.grantor = u_grantor.oid
+          AND c.prtype IN ('INSERT', 'SELECT', 'UPDATE', 'DELETE', 'TRUNCATE', 'REFERENCES', 'TRIGGER')
+          AND (pg_has_role(u_grantor.oid, 'USAGE')
+               OR pg_has_role(grantee.oid, 'USAGE')
+               OR grantee.rolname = 'PUBLIC');
- (Run pg_config --sharedir if you're uncertain
- where SHAREDIR is.) This must be repeated in each
- database to be fixed.
+ This must be repeated in each database to be fixed,
+ including template0.
diff --git a/doc/src/sgml/release-9.3.sgml b/doc/src/sgml/release-9.3.sgml
index f3b00a7..91fbb34 100644
--- a/doc/src/sgml/release-9.3.sgml
+++ b/doc/src/sgml/release-9.3.sgml
@@ -52,14 +52,44 @@
 in an existing installation, you can, as a superuser, do this
 in psql:
-BEGIN;
-DROP SCHEMA information_schema CASCADE;
-\i SHAREDIR/information_schema.sql
-COMMIT;
+SET search_path TO information_schema;
+CREATE OR REPLACE VIEW table_privileges AS
+    SELECT CAST(u_grantor.rolname AS sql_identifier) AS grantor,
+           CAST(grantee.rolname AS sql_identifier) AS grantee,
+           CAST(current_database() AS sql_identifier) AS table_catalog,
+           CAST(nc.nspname AS sql_identifier) AS table_schema,
+           CAST(c.relname AS sql_identifier) AS table_name,
+           CAST(c.prtype AS character_data) AS privilege_type,
+           CAST(
+             CASE WHEN
+                  -- object owner always has grant options
+                  pg_has_role(grantee.oid, c.relowner, 'USAGE')
+                  OR c.grantable
+                  THEN 'YES' ELSE 'NO' END AS
Re: [HACKERS] [PATCH] Improve geometric types
At Fri, 15 Sep 2017 11:25:30 -0400, Robert Haas wrote in
> On Fri, Sep 15, 2017 at 4:23 AM, Kyotaro HORIGUCHI
> wrote:
> > /* don't merge the following same functions with different types
> >    into single macros so that double evaluation won't happen */
> >
> > Is it still too verbose?
>
> Personally, I don't think such a comment has much value, but I am not
> going to spend a lot of time arguing about it. It's really up to the
> eventual committer to decide, and that probably won't be me in this
> case. My knowledge of the geometric types isn't great, and I am a tad
> busy breaking^Wimproving things I do understand.

OK, I agree with you.

# Though it is not an issue of the geometric types :p

I'll withdraw the comment. Maybe someone will notice it when reviewing such a patch.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center
Re: [HACKERS] Race between SELECT and ALTER TABLE NO INHERIT
At Fri, 15 Sep 2017 15:36:26 +0900, Amit Langote wrote in
> Hi.
>
> On 2017/08/28 18:28, Kyotaro HORIGUCHI wrote:
> > << the following is another topic >>
> >
> >>>> BTW, in the partitioned table case, the parent is always locked first
> >>>> using an AccessExclusiveLock. There are other considerations in that case
> >>>> such as needing to recreate the partition descriptor upon termination of
> >>>> inheritance (both the DETACH PARTITION and also DROP TABLE child cases).
> >>>
> >>> Apart from the degree of concurrency, if we keep parent->children
> >>> order of locking, such recreation does not seem to be
> >>> needed. Maybe I'm missing something.
> >>
> >> Sorry to have introduced that topic in this thread, but I will try to
> >> explain anyway why things are the way they are currently:
> >>
> >> Once a table is no longer a partition of the parent (detached or dropped),
> >> we must make sure that the next commands in the transaction don't see it
> >> as one. That information is currently present in the relcache
> >> (rd_partdesc), which is used by a few callers, most notably the
> >> tuple-routing code. Next commands must recreate the entry so that the
> >> correct thing happens based on the updated information. More precisely,
> >> we must invalidate the current entry. RelationClearRelation() will either
> >> delete the entry or rebuild it. If it's being referenced somewhere, it
> >> will be rebuilt. The place holding the reference may also be looking at
> >> the content of rd_partdesc, which we don't require them to make a copy of,
> >> so we must preserve its content while other fields of RelationData are
> >> being read anew from the catalog. We don't have to preserve it if there
> >> has been any change (partition added/dropped), but to make such a change
> >> one would need to take a strong enough lock on the relation (parent).
We > >> assume here that anyone who wants to reference rd_partdesc takes at least > >> AccessShareLock lock on the relation, and anyone who wants to change its > >> content must take a lock that will conflict with it, so > >> AccessExclusiveLock. Note that in all of this, we are only talking about > >> one relation, that is the parent, so parent -> child ordering of taking > >> locks may be irrelevant. > > > > I think I understand this, anyway DropInherit and DropPartition > > is different-but-at-the-same-level operations so surely needs > > amendment for drop/detach cases. Is there already a solution? Or > > reproducing steps? > > Sorry, I think I forgot to reply to this. Since you seem to have chosen > the other solution (checking that child is still a child), maybe this > reply is a bit too late, but anyway. I choosed it at that time for the reason mentioned upthread, but haven't decided which is better. > DropInherit or NO INHERIT is seen primarily as changing a child table's > (which is the target table of the command) property that it is no longer a > child of the parent, so we lock the child table to block concurrent > operations from considering it a child of parent anymore. The fact that > parent is locked after the child and with ShareUpdateExclusiveLock instead > of AccessExclusiveLock, we observe this race condition when SELECTing from > the parent. > > DropPartition or DETACH PARTITION is seen primarily as changing the parent > table's (which is the target table of the command) property that one of > the partitions is removed, so we lock the parent. Any concurrent > operations that rely on the parent's relcache to get the partition list > will wait for the session that is dropping the partition to finish, so > that they get the fresh information from the relcache (or more importantly > do not end up with information obtained from the relcache going invalid > under them without notice). 
> Note that the lock on the partition/child is
> also present and it plays more or less the same role as it does in the
> DropInherit case, but due to different order of locking, reported race
> condition does not occur between SELECT on partitioned table and
> DROP/DETACH PARTITION.

Thank you for the explanation. I understand that the difference comes from which of the parent and children holds the information about inheritance/partitioning. DROP child and ALTER child NO INHERIT are (I think) the only two operations that are initiated from the children's side. The parent-locking patch results in different solutions for similar problems. However, parent locking on a DROPped child requires a new index InheritsRelidIndex to avoid a full scan on pg_inherits, while the NO INHERIT case doesn't. This might be a good reason for the difference.

I noticed that DROP TABLE of a partition acquires a lock on the parent in a callback for RangeVarGetRelid(Extended). The same way seems reasonable for the NO INHERIT case. As a result the patch becomes far smaller and less invasive than the previous