Re: [HACKERS] PassDownLimitBound for ForeignScan/CustomScan [take-2]
Hello,

Sorry for my late response. The attached patch reflects your comments.

> Here are a few comments on the latest patch:
>
> 1.
> make/make check is fine, however I am getting a regression failure in
> the postgres_fdw contrib module (attached regression.diff).
> Please investigate and fix.

It was an incorrect interaction when postgres_fdw tries to push down sorting to the remote side. We cannot attach a LIMIT clause to the plain scan path across a SORT; however, the previous version estimated the cost of the plain scan with the LIMIT clause even when local sorting was needed. If the remote scan may return just 10 rows, the estimated cost of the local sort is very lightweight, so this unreasonable path was chosen. (On the other hand, the query execution results were correct, because ps_numTuples is not delivered across the Sort node, so the ForeignScan eventually scanned all the remote tuples. That produced correct results, but was not optimal from the viewpoint of performance.)

The v3 patch estimates the cost with the remote LIMIT clause only if the supplied pathkey strictly matches the final output order of the query, so no local sorting is expected.

Some of the regression test cases still have different plans, but that is due to the new optimization by the remote LIMIT clause. Without the remote LIMIT clause, some regression test cases preferred remote-JOIN + local-SORT then local-LIMIT. Once we have the remote-LIMIT option, it allows discounting the cost of the remote SORT by choosing a top-k heap sort. That changed the optimizer's decision on some test cases.

One potentially big change is the test case below:

    -- CROSS JOIN, not pushed down
    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT t1.c1, t2.c1 FROM ft1 t1 CROSS JOIN ft2 t2
      ORDER BY t1.c1, t2.c1 OFFSET 100 LIMIT 10;

It assumed the CROSS JOIN was not pushed down due to the cost of network traffic; however, the remote LIMIT reduced the estimated number of tuples to be moved. So all of the CROSS JOIN + ORDER BY + LIMIT came to run on the remote side.

> 2.
> + *
> + * MEMO: root->limit_tuples is not attached when query contains
> + * grouping-clause or aggregate functions. So, we don's adjust
> + * rows even if LIMIT is supplied.
>
> Can you please explain why you are not doing this for grouping-clause or
> aggregate functions.

See grouping_planner() in optimizer/plan/planner.c. It puts an invalid value in root->limit_tuples if the query has a GROUP BY clause, so we cannot know the number of tuples to be returned even if we actually have an upper limit.

    /*
     * Figure out whether there's a hard limit on the number of rows that
     * query_planner's result subplan needs to return.  Even if we know a
     * hard limit overall, it doesn't apply if the query has any
     * grouping/aggregation operations, or SRFs in the tlist.
     */
    if (parse->groupClause ||
        parse->groupingSets ||
        parse->distinctClause ||
        parse->hasAggs ||
        parse->hasWindowFuncs ||
        parse->hasTargetSRFs ||
        root->hasHavingQual)
        root->limit_tuples = -1.0;
    else
        root->limit_tuples = limit_tuples;

> 3.
> Typo:
>
> don's => don't

Fixed,

best regards,
PG-Strom Project / NEC OSS Promotion Center
KaiGai Kohei

Attachment: passdown-limit-fdw.v3.patch

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] commitfest 2016-11 status summary
On Mon, Dec 5, 2016 at 2:50 PM, Haribabu Kommi wrote:
> Michael, I need your help in closing the commitfest.

And done. Thanks for doing this work! I was ready to use my CF vacuum cleaner as need be, but that does not seem to be needed.

> I closed the commitfest using the following assumptions.
>
> Moved to next CF with needs review:
> 1. Patch doesn't receive any full review in the commitfest.
> 2. Patch received feedback at the end of the commitfest.
>
> Moved to next CF with waiting on author:
> 1. Patch doesn't apply to HEAD, but didn't receive any feedback.
>
> Returned with feedback:
> 1. Patch received feedback, but the author hasn't responded yet.
> 2. Author is expected to share an updated patch.
>
> Rejected:
> 1. Any -1 from a committer to the approach of the patch.

All that sounds fair to me.

> Maybe these assumptions need to be updated, as this is my first
> time as CFM.
>
> I definitely may have missed judging the current state of some patches.
> Please feel free to update the actual status.

That's definitely up to the authors and reviewers to double-check what has been done and correct anything they think is not. As far as I can see you have done admirable work in keeping things on track, so a huge +1 from me.
--
Michael
Re: [HACKERS] Transactions involving multiple postgres foreign servers
On Mon, Dec 5, 2016 at 4:42 PM, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> On Mon, Dec 5, 2016 at 11:04 AM, Haribabu Kommi wrote:
> > On Fri, Nov 11, 2016 at 5:38 PM, Masahiko Sawada wrote:
> >> 2PC is a basic building block to support atomic commit, and there
> >> are some optimizations to reduce the disadvantages of 2PC. As you
> >> mentioned, it's hard to support a single model that would suit
> >> several types of FDW. But even if it's not for sharding purposes,
> >> because many other databases that can be connected to PostgreSQL via
> >> FDW support 2PC, 2PC for FDW would be useful beyond sharding.
> >> That's why I was focusing on implementing 2PC for FDW so far.
> >
> > Moved to next CF with "needs review" status.
>
> I think this should be changed to "returned with feedback". The
> design and approach itself needs to be discussed. I think we should
> let authors decide whether they want it to be added to the next
> commitfest or not.
>
> When I first started with this work, Tom had suggested I try to
> make PREPARE and COMMIT/ROLLBACK PREPARED involving foreign servers,
> or at least postgres_fdw servers, work. I think most of my work that
> Vinayak and Sawada have rebased to the latest master will be required
> for getting what Tom suggested done. We wouldn't need a lot of changes
> to that design. PREPARE involving foreign servers errors out right
> now. If we start supporting prepared transactions involving foreign
> servers, that will be a good improvement over the current status quo.
> Once we get that done, we can continue working on the larger problem
> of supporting ACID transactions involving foreign servers.

Thanks for the update. I closed it in commitfest 2017-01 with "returned with feedback". The author can update it once the new patch is submitted.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] commitfest 2016-11 status summary
Hi All,

The commitfest status summary at the end of the commitfest:

Needs review: 0
Waiting on author: 0
Ready for Committer: 0
Committed: 41
Moved to next CF: 79
Rejected: 7
Returned with feedback: 20
TOTAL: 147

Overall progress of completion - 46% (doesn't include "moved to next CF").

Michael, I need your help in closing the commitfest.

I closed the commitfest using the following assumptions.

Moved to next CF with needs review:
1. Patch doesn't receive any full review in the commitfest.
2. Patch received feedback at the end of the commitfest.

Moved to next CF with waiting on author:
1. Patch doesn't apply to HEAD, but didn't receive any feedback.

Returned with feedback:
1. Patch received feedback, but the author hasn't responded yet.
2. Author is expected to share an updated patch.

Rejected:
1. Any -1 from a committer to the approach of the patch.

Maybe these assumptions need to be updated, as this is my first time as CFM.

As I observed, many patches keep moving from one CF to the next. Is there any way the commitfest app could highlight those patches, so that they get reviewed before the patches that arrived late in the commitfest?

I definitely may have missed judging the current state of some patches. Please feel free to update the actual status.

Thanks everyone.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Transactions involving multiple postgres foreign servers
On Mon, Dec 5, 2016 at 11:04 AM, Haribabu Kommi wrote:
> On Fri, Nov 11, 2016 at 5:38 PM, Masahiko Sawada wrote:
>> 2PC is a basic building block to support atomic commit, and there
>> are some optimizations to reduce the disadvantages of 2PC. As you
>> mentioned, it's hard to support a single model that would suit
>> several types of FDW. But even if it's not for sharding purposes,
>> because many other databases that can be connected to PostgreSQL via
>> FDW support 2PC, 2PC for FDW would be useful beyond sharding.
>> That's why I was focusing on implementing 2PC for FDW so far.
>
> Moved to next CF with "needs review" status.

I think this should be changed to "returned with feedback". The design and approach itself needs to be discussed. I think we should let authors decide whether they want it to be added to the next commitfest or not.

When I first started with this work, Tom had suggested I try to make PREPARE and COMMIT/ROLLBACK PREPARED involving foreign servers, or at least postgres_fdw servers, work. I think most of my work that Vinayak and Sawada have rebased to the latest master will be required for getting what Tom suggested done. We wouldn't need a lot of changes to that design. PREPARE involving foreign servers errors out right now. If we start supporting prepared transactions involving foreign servers, that will be a good improvement over the current status quo. Once we get that done, we can continue working on the larger problem of supporting ACID transactions involving foreign servers.
--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
Re: [HACKERS] Indirect indexes
On Sun, Nov 13, 2016 at 4:28 AM, Andres Freund wrote:
> Hi,
>
> On 2016-11-01 01:43:31 -0300, Alvaro Herrera wrote:
> > Alvaro Herrera wrote:
> > > I propose we introduce the concept of "indirect indexes".
> >
> > This is a WIP non-functional patch for indirect indexes. I've been
> > distracted from working on it for some time already and will be off-line
> > for half this month yet, but since this was discussed and seems to be
> > considered a welcome idea, I am posting it for those who want to have a
> > look at what I'm doing.
>
> I see that this patch has a CF entry, but I'm unclear what a reviewer
> ought to do in the current state? There's a lot of stuff closer to
> being committable in this fest...

I see the thread is waiting for an updated patch from the author. Closed in the 2016-11 commitfest with "returned with feedback" status. Please feel free to update the status once you submit the updated patch or if the current status doesn't reflect the state of the patch.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] delta relations in AFTER triggers
On Wed, Nov 23, 2016 at 10:02 AM, Jim Nasby wrote:
> On 11/21/16 3:49 AM, Craig Ringer wrote:
> >> After going through that experience, I now agree with Kevin: an
> >> interface where a new SPI interface lets PLs push a named tuplestore
> >> into the SPI connection to make it available to SQL seems like the
> >> simplest and tidiest way.
> >
> > That also offers a handy step on the path toward table-valued
> > variables and pipelined functions, both of which would be _really_
> > nice for PL/PgSQL users.
>
> FWIW, I expect at some point we'd like the ability to index tuplestores
> as well.

Moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Transactions involving multiple postgres foreign servers
On Fri, Nov 11, 2016 at 5:38 PM, Masahiko Sawada wrote:
> 2PC is a basic building block to support atomic commit, and there are
> some optimizations to reduce the disadvantages of 2PC. As you mentioned,
> it's hard to support a single model that would suit several types of
> FDW. But even if it's not for sharding purposes, because many other
> databases that can be connected to PostgreSQL via FDW support 2PC, 2PC
> for FDW would be useful beyond sharding. That's why I was focusing on
> implementing 2PC for FDW so far.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] move collation import to backend
On Thu, Dec 1, 2016 at 12:18 AM, Peter Eisentraut <peter.eisentr...@2ndquadrant.com> wrote:
> >>> > +
> >>> > +Datum pg_import_system_collations(PG_FUNCTION_ARGS);
> >>> > +
> >>> > +Datum
> >>> > +pg_import_system_collations(PG_FUNCTION_ARGS)
> >>> > +{
> >>>
> >>> Uh?
> >>
> >> Required to avoid a compiler warning about a missing prototype.
> >
> > It seems not to be project style to have prototypes in the middle of
> > the file...
>
> OK, will fix.

Moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] postgres_fdw : altering foreign table not invalidating prepare statement execution plan.
On Wed, Nov 30, 2016 at 8:05 PM, Etsuro Fujita wrote:
> On 2016/11/30 17:53, Amit Langote wrote:
> > On 2016/11/30 17:25, Etsuro Fujita wrote:
> > > Done. I modified the patch so that any inval in pg_foreign_server also
> > > blows the whole plan cache.
> >
> > I noticed the following addition:
> >
> >     + CacheRegisterSyscacheCallback(FOREIGNDATAWRAPPEROID,
> >       PlanCacheSysCallback, (Datum) 0);
> >
> > Is that intentional? I thought you meant only to add one for
> > pg_foreign_server.
>
> Yes, that's intentional; we would need that as well, because cached plans
> might reference FDW-level options, not only server/table-level options. I
> couldn't come up with regression tests for FDW-level options in
> postgres_fdw, though.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Multi-tenancy with RLS
On Mon, Oct 3, 2016 at 3:11 PM, Michael Paquier wrote:
> On Tue, Jul 19, 2016 at 3:42 PM, Haribabu Kommi wrote:
> > The above changes are based on my understanding of the discussion that
> > occurred in this mail. In case I missed anything, please let me know
> > and I will correct it.
>
> The patch series still applies.
>
>     + " ((classid = (select oid from pg_class where relname = 'pg_aggregate'))"
>     + " OR (classid = (select oid from pg_class where relname = 'pg_cast') AND has_cast_privilege(objid, 'any'))"
>     + " OR (classid = (select oid from pg_class where relname = 'pg_collation'))"
>     [... long list ...]
>
> That's quite hard to digest...
>
>     +static bool
>     +get_catalog_policy_string(Oid relationid, Form_pg_class pg_class_tuple, char *buf)
>
> This is an exceptionally weak interface at quick glance. This is using
> SQL strings, and nothing is actually done regarding potentially
> conflicting name types...
>
> The number of new files included in policy.c is impressive as well...
>
> This does not count as a full review though, so I am moving it to the
> next CF. Perhaps it will find its audience.

As the patch didn't receive a full review, it was kept in the commitfest to see whether others have interest in it. Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Document how to set up TAP tests for Perl 5.8.8
On Sun, Dec 4, 2016 at 2:28 PM, Noah Misch wrote:
> Committed after re-filling paragraphs.

Thanks, I have updated the CF entry as well.
--
Michael
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On Mon, Dec 5, 2016 at 9:50 AM, Andreas Karlsson wrote:
> On 12/04/2016 03:20 PM, Michael Paquier wrote:
> > On Sun, Dec 4, 2016 at 11:11 PM, Andreas Karlsson wrote:
> > > On 12/04/2016 02:12 PM, Michael Paquier wrote:
> > > Does this have a precedent in the code?
> >
> > data_checksums in guc.c is an example; it is marked with
> > GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE and its value is updated with
> > SetConfigOption() when the control file is read. The idea would be to
> > do something like that with LoadedSSL.

OK, attached is what I had in mind as reload-ssl-v08-02.patch, for reference. This applies on top of the main patch, reload-ssl-v08-01.patch, which is the same as v7 with the issues I reported previously addressed. LoadedSSL is mapped to a read-only GUC parameter that new sessions can query after connecting. The only use case where that would be useful is when using sslmode=prefer to check whether the SSL context is loaded even if ssl has been switched from off to on. But let's be honest: pg_stat_ssl already reports this kind of information, which in the end makes this patch useless, because LoadedSSL does not change for an already-spawned backend.

> Thanks. I will be quite busy for the upcoming couple of weeks, so it
> will be a while until I can sit down with this. Feel free to contribute
> to the patch.

I am switching the patch to ready for committer; I have no more comments about it.

Note to committer: please look only at v08-01.
--
Michael

Attachments: reload-ssl-v08-01.patch, reload-ssl-v08-02.patch
Re: [HACKERS] raw output from copy
On Tue, Nov 22, 2016 at 10:48 PM, Haribabu Kommi wrote:
> Hi,
>
> This is a gentle reminder.
>
> You are assigned as a reviewer to the current patch in the 11-2016
> commitfest, but you haven't shared your review yet. Please share your
> review of the patch; this will help the commitfest run more smoothly.
>
> Please ignore this if you have already shared your review.

The patch is not applying properly to HEAD. Moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] amcheck (B-Tree integrity checking tool)
On Wed, Nov 30, 2016 at 9:06 AM, Peter Geoghegan wrote:
> On Wed, Nov 23, 2016 at 2:47 PM, Thomas Munro wrote:
> > Actually I meant that at T2 the btree is exactly the same as it was at
> > T1, but the order of the values has changed according to the comparator
> > (for example, because a collation definition changed when you updated
> > your operating system), so that the btree is now corrupted. Based on
> > Goetz Graefe's paper I was wondering if amcheck would be unable to
> > detect this due to the cousin problem.
>
> I love that paper (it's more like a book, really). I'm glad that at
> least one other person in the community has read some of it, because I
> think it has a lot of clues as to how we can begin to address some of
> our problems around bloat without novel redesign. Most of the
> techniques are quite complementary.
>
> I wrote a prototype of suffix truncation for text in two days last
> week. The regression tests pass, and the technique works well for
> certain cases, particularly with many-column indexes without
> correlation (the index-only-scan case). Let's see if that becomes a
> real patch.
>
> > Thank you for your patient explanation!
>
> Your questions were excellent, and so deserved my careful consideration.
>
> > Please see my humble attempt to test this scenario and learn something
> > about btrees in the process, attached. amcheck does detect the
> > violation of the high key invariant.
>
> This is a nice pattern for testing whether the keyspace is sane, and
> how things work. I might incorporate it into my own testing in the
> future.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Logical Replication WIP
On Sun, Dec 4, 2016 at 12:06 PM, Peter Eisentraut <peter.eisentr...@2ndquadrant.com> wrote:
> I massaged the temporary replication slot patch a bit. I changed the
> column name in pg_stat_replication_slots from "persistent" to
> "temporary" and flipped the logical sense, so that it is consistent with
> the creation commands. I also adjusted some comments and removed some
> changes in ReplicationSlotCreate() that didn't seem to do anything
> useful (might have been from a previous patch).
>
> The attached patch looks good to me.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] [PROPOSAL] Temporal query processing with range types
On Tue, Oct 25, 2016 at 8:44 PM, Peter Moser wrote:
> We decided to follow your recommendation and add the patch to the
> commitfest.

The patch is not applying properly to HEAD. Moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] COPY as a set returning function
On Tue, Nov 1, 2016 at 7:45 AM, Corey Huinker wrote:
> Attached is a patch that implements copy_srf().

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] sequence data type
On Tue, Nov 22, 2016 at 10:54 PM, Haribabu Kommi wrote:
> Hi Vik and Vinayak,
>
> This is a gentle reminder.
>
> You are both assigned as reviewers to the current patch in the 11-2016
> commitfest, but you haven't shared your reviews yet. Please share your
> review of the patch; this will help the commitfest run more smoothly.
>
> Please ignore this if you have already shared your review.

As the patch didn't receive a full review and is not applying to HEAD properly, moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)
On Mon, Dec 5, 2016 at 7:44 AM, Peter Geoghegan wrote:
> On Sat, Dec 3, 2016 at 7:23 PM, Tomas Vondra wrote:
> > I do share your concerns about unpredictable behavior - that's
> > particularly worrying for pg_restore, which may be used for time-
> > sensitive use cases (DR, migrations between versions), so unpredictable
> > changes in behavior / duration are unwelcome.
>
> Right.
>
> > But isn't this more a deficiency in pg_restore than in CREATE INDEX?
> > The issue seems to be that the reltuples value may or may not get
> > updated, so maybe forcing ANALYZE (even very low statistics_target
> > values would do the trick, I think) would be a more appropriate
> > solution? Or maybe it's time to add at least some rudimentary
> > statistics into the dumps (the reltuples field seems like a good
> > candidate).
>
> I think that there are a number of reasonable ways of looking at it. It
> might also be worthwhile to have a minimal ANALYZE performed by CREATE
> INDEX directly, iff there are no preexisting statistics (there is
> definitely going to be something pg_restore-like that we cannot fix --
> some ETL tool, for example). Perhaps, as an additional condition to
> proceeding with such an ANALYZE, it should also only happen when there
> is any chance at all of parallelism being used (but then you get into
> having to establish the relation size reliably in the absence of any
> pg_class.relpages, which isn't very appealing when there are many tiny
> indexes).
>
> In summary, I would really like it if a consensus emerged on how
> parallel CREATE INDEX should handle the ecosystem of tools like
> pg_restore, reindexdb, and so on. Personally, I'm neutral on which
> general approach should be taken. Proposals from other hackers about
> what to do here are particularly welcome.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Parallel Index Scans
On Sat, Nov 26, 2016 at 10:35 PM, Amit Kapila wrote:
> On Sat, Oct 22, 2016 at 9:07 AM, Amit Kapila wrote:
> > On Fri, Oct 21, 2016 at 10:55 PM, Robert Haas wrote:
>
> I have rebased the patch (parallel_index_scan_v2) on the latest
> commit e8ac886c (condition variables). I have removed the usage of
> ConditionVariablePrepareToSleep, as it is no longer mandatory. I
> have also updated the docs for the wait event introduced by this patch
> (thanks to Dilip for noticing it). There is no change in the
> parallel_index_opt_exec_support patch, but I am attaching it here for
> easier reference.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] PoC: Partial sort
On Fri, Dec 2, 2016 at 4:05 AM, Robert Haas wrote:
> On Tue, Sep 13, 2016 at 4:32 AM, Alexander Korotkov wrote:
> > On Fri, Apr 8, 2016 at 10:09 PM, Peter Geoghegan wrote:
> >> On Wed, Mar 30, 2016 at 8:02 AM, Alexander Korotkov wrote:
> >> > Hmm... I don't completely agree with that. In typical usage,
> >> > partial sort should definitely use quicksort. However, falling
> >> > back to other sort methods is very useful. The decision to use
> >> > partial sort is made by the planner, but the planner makes
> >> > mistakes. For example, our HashAggregate is purely in-memory;
> >> > in the case of a planner mistake it causes OOM. I have met such
> >> > situations in production, and not once. This is why I'd like
> >> > partial sort to degrade gracefully in such cases.
> >>
> >> I think that this should be moved to the next CF, unless a committer
> >> wants to pick it up today.
> >
> > Patch was rebased to current master.
>
> Just a few quick observations on this...
>
> It strikes me that the API contract change in ExecMaterializesOutput
> is pretty undesirable. I think it would be better to have a new
> executor node for this rather than overloading the existing
> "Sort" node, sharing code where possible of course. The fact that
> this would distinguish them more clearly in an EXPLAIN plan seems
> good, too. "Partial Sort" is the obvious thing, but there might be
> even better alternatives -- maybe "Incremental Sort" or something like
> that? Because it's not partially sorting the data, it's making data
> that already has some sort order have a more rigorous sort order.
>
> I think that it will probably be pretty common to have queries where
> the data is sorted by (mostly_unique_col) and we want to get it sorted
> by (mostly_unique_col, disambiguation_col). In such cases I wonder if
> we'll incur a lot of overhead by feeding single tuples to the
> tuplesort stuff and performing lots of 1-item sorts. Not sure if that
> case needs any special optimization.
>
> I also think that the "HeapTuple prev" bit in SortState is probably
> not the right way of doing things. I think that should use an
> additional TupleTableSlot rather than a HeapTuple.

The feedback from the reviewer was received at the end of the commitfest, so moved to next CF with "waiting on author" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] [PATCH] Generic type subscription
On Thu, Nov 17, 2016 at 10:56 PM, Dmitry Dolgov <9erthali...@gmail.com> wrote:
> On 15 November 2016 at 15:03, Aleksander Alekseev <a.aleks...@postgrespro.ru> wrote:
> > Hello.
> >
> > I took a look at the latest -v4 patch. I would like to note that this
> > patch breaks backward compatibility. For instance, the sr_plan
> > extension[1] stops compiling, with errors.
>
> Thank you for the feedback.
>
> Well, if we're speaking about this particular extension, if I understood
> correctly, it fetches all parse tree nodes from Postgres and generates
> code using this information. So to avoid compilation problems I believe
> you need to run `make USE_PGXS=1 genparser` again (it worked for me, I
> don't see any mentions of `ArrayRef`).
>
> But speaking generally, I don't see how we can provide backward
> compatibility for extensions that are strongly coupled with the
> implementation details of the parse tree. I mean, in terms of interface
> it's mostly about replacing `ArrayRef` with `SubscriptingRef`, but I
> think it's better to do that in the extension code.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Patch: Implement failover on libpq connect level.
On Fri, Dec 2, 2016 at 8:54 PM, Robert Haas wrote:
> Couldn't this just be a variable in PQconnectPoll(), instead of adding
> a new structure member?

I have fixed this with a local variable in PQconnectPoll. Initially I thought savedMessage should have the same visibility as errorMessage so we could restore the errorMessage even outside PQconnectPoll, but that seems not to be required. Attaching the new patch with that fix.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachment: failover-to-new-master-bug-fix-02.patch
Re: [HACKERS] Proposal for changes to recovery.conf API
On Mon, Dec 5, 2016 at 11:34 AM, Haribabu Kommi wrote:
> As there was feedback from others on the patch and no update from the
> author, closed in the 2016-11 commitfest with "returned with feedback"
> status. Please feel free to update the status once you submit the
> updated patch or if the current status doesn't reflect the actual
> status of the patch.

Having a consensus here is already a great step forward. I am sure that a new version will be sent for the next CF.
--
Michael
Re: [HACKERS] pgcrypto compilation error due to stack-allocated EVP_CIPHER_CTX
On Thu, Dec 1, 2016 at 11:17 AM, Andreas Karlsson wrote:
> On 12/01/2016 02:48 AM, Andres Freund wrote:
> > It appears openssl has removed the public definition of EVP_CIPHER_CTX,
> > leading to pgcrypto failing with:

That's not much of a surprise; most distributions are still on 1.0.2, as 1.1.0 has created many breakages, so a bunch of projects need to patch first. This burden may take a couple of years to sort out.

> Yes, I believe this is one of the changes in OpenSSL 1.1. I guess you
> might be the first one to try to compile with 1.1 since
> 5ff4a67f63fd6d3eb01ff9707d4674ed54a89f3b was pushed.

Yes, I can see the failure as well using 1.1.0 on my OSX laptop with homebrew packages.

> If we do not already have it I think we should get a buildfarm animal
> with OpenSSL 1.1.

I would really like to do that, but ArchLinux ARM is still on 1.0.2, as is ArchLinux :(

Finally, attached is a patch to address the failure. make check is passing here for 1.1.0 and 1.0.2. The problem is that OpenSSL 1.1 relies on an opaque structure here, so the pgcrypto code needs to hold a pointer rather than a direct declaration of the structure. EVP_CIPHER_CTX_free() and EVP_CIPHER_CTX_new() were introduced in 0.9.8, which is the oldest version supported by HEAD, and 5ff4a67f is HEAD-only, so there is no need to back-patch.

I am adding this to the next CF so we don't forget about it. I'll switch my laptop to OpenSSL 1.1.0 by default once the issue is fixed; homebrew has packages for both 1.0.2 and 1.1.0, so that's easy enough.
--
Michael

Attachment: pgcrypto-openssl11-fix.patch
Re: [HACKERS] Creating a DSA area to provide work space for parallel execution
On Thu, Dec 1, 2016 at 10:35 PM, Thomas Munro wrote:
> On Sat, Nov 26, 2016 at 1:55 AM, Thomas Munro wrote:
> > Here's a new version to apply on top of dsa-v7.patch.
>
> Here's a version to go with dsa-v8.patch.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
On Mon, Dec 5, 2016 at 1:14 PM, Amit Kapila wrote:
> On Mon, Dec 5, 2016 at 6:00 AM, Haribabu Kommi wrote:
>
> No, that is not true. You have quoted the wrong message; that
> discussion was about WALWriteLock contention, not about the patch being
> discussed in this thread. I have posted the latest set of patches
> here [1]. Tomas is supposed to share the results of his tests. He
> mentioned to me at PGConf Asia last week that he ran a few tests on a
> Power box, so let us wait for him to share his findings.
>
> > Moved to next CF with "waiting on author" status. Please feel free to
> > update the status if the current status differs from the actual patch
> > status.
>
> I think we should keep the status as "Needs Review".
>
> [1] - https://www.postgresql.org/message-id/CAA4eK1JjatUZu0%2BHCi%3D5VM1q-hFgN_OhegPAwEUJqxf-7pESbg%40mail.gmail.com

Thanks for the update. I changed the status to "needs review" in the 2017-01 commitfest.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] identity columns
On Tue, Nov 22, 2016 at 10:51 PM, Haribabu Kommi wrote:
> Hi Vitaly,
>
> This is a gentle reminder.
>
> You are assigned as reviewer to the current patch in the 11-2016 commitfest,
> but you haven't shared your review yet on the new patch in this commitfest.
> Please share your review of the patch. This will help us in smoother
> operation of the commitfest.
>
> Please ignore this if you have already shared your review.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Bug in to_timestamp().
On Mon, Nov 7, 2016 at 2:26 AM, Artur Zakirov wrote:
> Hm, truly. Fixed it.
>
> I've attached the fixed patch.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Proposal for changes to recovery.conf API
On Fri, Dec 2, 2016 at 12:58 PM, Magnus Hagander wrote:
>

As there was feedback from others related to the patch and no update from the author, this was closed in the 2016-11 commitfest with "returned with feedback" status. Please feel free to update the status once you submit the updated patch or if the current status doesn't reflect the actual status of the patch.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Pinning a buffer in TupleTableSlot is unnecessary
On Tue, Nov 15, 2016 at 1:17 PM, Noah Misch wrote:
> On Mon, Nov 14, 2016 at 10:21:53AM -0800, Peter Geoghegan wrote:
> > BTW, I recently noticed that the latest version of Valgrind, 3.12,
> > added this new feature:
> >
> > * Memcheck:
> >
> >   - Added meta mempool support for describing a custom allocator which:
> >     - Auto-frees all chunks assuming that destroying a pool destroys all
> >       objects in the pool
> >     - Uses itself to allocate other memory blocks
> >
> > It occurred to me that we might be able to make good use of this.
>
> PostgreSQL memory contexts don't acquire blocks from other contexts; they all
> draw from malloc() directly. Therefore, I don't see this feature giving us
> something that the older Memcheck MEMPOOL support does not give us.

Moved to next CF with "needs review" status. Please feel free to update the status if the current status doesn't reflect the status of the patch.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)
On Tue, Nov 22, 2016 at 5:57 PM, Haribabu Kommi wrote:
> Hi Vinayak,
>
> This is a gentle reminder.
>
> You are assigned as reviewer to the current patch in the 11-2016 commitfest.
> If you have any more review comments / performance results regarding the
> approach of the patch, please share them. Otherwise, you can mark the patch
> as "Ready for committer" for committer feedback. This will help us in
> smoother operation of the commitfest.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Parallel bitmap heap scan
On Wed, Nov 30, 2016 at 4:38 PM, Dilip Kumar wrote:
> On Fri, Nov 25, 2016 at 6:55 PM, Dilip Kumar wrote:
> > I have changed the design to directly make it based on DSA instead of
> > using DHT.
> >
> > In the new version we no longer use DHT. Instead, I have made some
> > changes in simplehash [1] so that it can allow an external allocator. In
> > tidbitmap.c, I register the allocator with simplehash, and those
> > allocator functions allocate memory directly from DSA.
> >
> > simplehash always uses one single chunk of memory (during expansion it
> > copies from old memory to new memory), which makes the remaining
> > processing very simple for us.
> >
> > In tbm_begin_iterate, we need not scan the complete hash table; instead
> > we can just process the DSA memory and convert it into the page array
> > and chunk array.
> >
> > I have tested the performance on my local machine and I observed that
> > it's slightly better than the older DHT-based version (complete
> > performance numbers will be published soon).
> >
> > Dependency on other patches:
> > 1. dsa_area (dsa-v7.patch)
> > https://www.postgresql.org/message-id/CAEepm%3D024p-MeAsDmG%3DR3%2BtR4EGhuGJs_%2BrjFKF0eRoSTmMJnA%40mail.gmail.com
> >
> > 2. Creating a DSA area to provide work space for parallel execution
> > (dsa-area-for-executor-v3.patch)
> > https://www.postgresql.org/message-id/CAEepm%3D0HmRefi1%2BxDJ99Gj5APHr8Qr05KZtAxrMj8b%2Bay3o6sA%40mail.gmail.com
> >
> > Patch details:
> > 1. hash_support_alloc_free_v1.patch [1]
> > 2. parallel-bitmap-heap-scan-v3.patch
>
> I just realised that I sent my latest patch only to Andres, instead of
> replying to all. Forwarding the same mail to -hackers.
>
> Performance readings with the new patch:
> TPC-H scale factor 10, work_mem 20MB, Power 4-socket machine
>
>   Query   Head    Patch   Improvement
>   Q4       4811    3290   1.5x
>   Q6      13136    6198   2.1x
>   Q14      8119    5057   1.6x
>   Q15     25652   20143   1.2x
>
> Explain analyze results are attached with the mail.
>
> * I have also applied Andres' patch from the link below while taking these
> performance numbers (both for head and for the patch):
> https://www.postgresql.org/message-id/20161123083351.5vramz52nmdokhzz%40alap3.anarazel.de

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem
On Tue, Nov 22, 2016 at 4:53 AM, Claudio Freire wrote:
> On Mon, Nov 21, 2016 at 2:15 PM, Masahiko Sawada wrote:
> > On Fri, Nov 18, 2016 at 6:54 AM, Claudio Freire wrote:
> >> On Thu, Nov 17, 2016 at 6:34 PM, Robert Haas wrote:
> >>> On Thu, Nov 17, 2016 at 1:42 PM, Claudio Freire <klaussfre...@gmail.com> wrote:
> >>>> Attached is patch 0002 with pgindent applied over it.
> >>>>
> >>>> I don't think there's any other formatting issue, but feel free to
> >>>> point a finger at it if I missed any.
> >>>
> >>> Hmm, I had imagined making all of the segments the same size rather
> >>> than having the size grow exponentially. The whole point of this is
> >>> to save memory, and even in the worst case you don't end up with that
> >>> many segments as long as you pick a reasonable base size (e.g. 64MB).
> >>
> >> Wastage is bound by a fraction of the total required RAM, that is,
> >> it's proportional to the amount of required RAM, not the amount
> >> allocated. So it should still be fine, and the exponential strategy
> >> should improve lookup performance considerably.
> >
> > I'm concerned that it could use repalloc for a large memory area when
> > vacrelstats->dead_tuples.dead_tuple is bloated. It would be overhead
> > and slow.
>
> How large?
>
> That array cannot be very large. It contains pointers to
> exponentially-growing arrays, but the repalloc'd array itself is
> small: one struct per segment, each segment starts at 128MB and grows
> exponentially.
>
> In fact, IIRC, it can be proven that such a repalloc strategy has an
> amortized cost of O(log log n) per tuple. If it repalloc'd the whole
> tid array it would be O(log n), but since it handles only pointers to
> segments of exponentially growing tuples it becomes O(log log n).
>
> Furthermore, n there is still limited to MAX_INT, which means the cost
> per tuple is bound by O(log log 2^32) = 5. And that's an absolute
> worst case that's ignoring the 128MB constant factor, which is indeed
> relevant.
>
> > What about using semi-fixed memory space without repalloc:
> > allocate an array of ItemPointerData arrays, where each ItemPointerData
> > array stores the dead tuple locations. The size of the ItemPointerData
> > array starts small (e.g. 16MB or 32MB). After we use an array up, we
> > then allocate the next segment with twice the size of the previous
> > segment.
>
> That's what the patch does.
>
> > But to prevent over-allocating memory, it would be better to
> > set a limit on the allocation size of the ItemPointerData array, to
> > 512MB or 1GB.
>
> There already is a limit of 1GB (the maximum amount palloc can allocate).

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
On Mon, Dec 5, 2016 at 6:00 AM, Haribabu Kommi wrote:
> On Fri, Nov 4, 2016 at 8:20 PM, Amit Kapila wrote:
>> On Thu, Nov 3, 2016 at 8:38 PM, Robert Haas wrote:
>> > On Tue, Nov 1, 2016 at 11:31 PM, Tomas Vondra wrote:
>> >> The difference is that both the fast-path locks and msgNumLock went into
>> >> 9.2, so that end users probably never saw that regression. But we don't
>> >> know if that happens for clog and WAL.
>> >>
>> >> Perhaps you have a working patch addressing the WAL contention, so that
>> >> we could see how that changes the results?
>> >
>> > I don't think we do, yet.
>>
>> Right. At this stage, we are just evaluating the ways (basic idea is
>> to split the OS writes and flush requests into separate locks) to reduce
>> it. It is difficult to speculate results at this stage. I think
>> after spending some more time (probably a few weeks), we will be in a
>> position to share our findings.
>
> As per my understanding the current state of the patch is waiting for
> performance results from the author.

No, that is not true. You have quoted the wrong message; that discussion was about WALWriteLock contention, not about the patch being discussed in this thread. I have posted the latest set of patches here [1]. Tomas is supposed to share the results of his tests. He mentioned to me at PGConf Asia last week that he ran a few tests on a Power box, so let us wait for him to share his findings.

> Moved to next CF with "waiting on author" status. Please feel free to
> update the status if the current status differs from the actual patch
> status.

I think we should keep the status as "Needs Review".

[1] - https://www.postgresql.org/message-id/CAA4eK1JjatUZu0%2BHCi%3D5VM1q-hFgN_OhegPAwEUJqxf-7pESbg%40mail.gmail.com

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] pgsql: Add putenv support for msvcrt from Visual Studio 2013
On Sun, Dec 4, 2016 at 5:58 AM, Noah Misch wrote:
> On Wed, Nov 30, 2016 at 04:24:34PM +, Christian Ullrich wrote:
>> * From: Michael Paquier
>> > would be nice to mention in a code comment that this is what Noah has
>> > mentioned upthread: if a CRT loads while pgwin32_putenv() is
>> > executing, the newly-loaded CRT will always have the latest value.
>>
>> I fixed the existing comment by removing the last sentence that is in the
>> wrong place now, but I don't see the point in suddenly starting to explain
>> why it is done this way and not the other.
>>
>> > Based on that 0006 will need a rebase, nothing huge though.
>>
>> Done, new revisions attached.
>
> I committed patches 1-7 with some comment changes, a pgindent, and other
> cosmetic trivia. (The file was pgindent-clean before this work. Patch 6
> still achieved the more-compact formatting you sought.)

Thanks all for helping in closing this item. We have a fine result here.
--
Michael
[HACKERS] Re: [DOCS] monitoring.sgml - clarify length of query text displayed in pg_stat_statements
Hi

On 12/02/2016 11:01 PM, Robert Haas wrote:
> On Wed, Nov 30, 2016 at 8:45 PM, Ian Barwick wrote:
>> Small doc patch to clarify how much of the query text is shown in
>> pg_stat_statements, and a link to the relevant GUC.
>
> This patch improves the pg_stat_activity documentation, not the
> pg_stat_statements documentation, but it looks correct, so I have
> committed it.

Many thanks! (Looks like I had a mental short-circuit between "query" and "statement" there.)

Regards

Ian Barwick
--
Ian Barwick  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On 12/04/2016 03:20 PM, Michael Paquier wrote:
> On Sun, Dec 4, 2016 at 11:11 PM, Andreas Karlsson wrote:
>> On 12/04/2016 02:12 PM, Michael Paquier wrote:
>>> One last thing that I think is missing in this patch is for users the
>>> possibility to check via SQL if the SSL context is actually loaded or
>>> not. As the context is reloaded after all the new values are
>>> available, with the current patch users may see that ssl is set to on
>>> but no context is loaded. So why not adding for example a read-only
>>> parameter that maps with SSLLoaded?
>>
>> The other three issues are easy to fix, but this one is a bit trickier.
>> Do you mean that we should add another GUC here which is read-only?
>
> Yes, that's what I meant. It is hard to track if the reloading has been
> effective or not.
>
>> Does this have a precedent in the code?
>
> data_checksums in guc.c is an example, it is marked with
> GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE and its value is updated with
> SetConfigOption() when the control file is read. The idea would be to do
> something like that with LoadedSSL.

Thanks. I will be quite busy the upcoming couple of weeks, so it will be a while until I can sit down with this. Feel free to contribute to the patch.

Andreas
Re: [HACKERS] Gather Merge
On Thu, Nov 24, 2016 at 11:12 PM, Rushabh Lathia wrote:
>
> PFA latest patch with fix as well as few cosmetic changes.

Moved to next CF with "needs review" status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
On Fri, Nov 4, 2016 at 8:20 PM, Amit Kapila wrote:
> On Thu, Nov 3, 2016 at 8:38 PM, Robert Haas wrote:
> > On Tue, Nov 1, 2016 at 11:31 PM, Tomas Vondra wrote:
> >> The difference is that both the fast-path locks and msgNumLock went into
> >> 9.2, so that end users probably never saw that regression. But we don't
> >> know if that happens for clog and WAL.
> >>
> >> Perhaps you have a working patch addressing the WAL contention, so that
> >> we could see how that changes the results?
> >
> > I don't think we do, yet.
>
> Right. At this stage, we are just evaluating the ways (basic idea is
> to split the OS writes and flush requests into separate locks) to reduce
> it. It is difficult to speculate results at this stage. I think
> after spending some more time (probably a few weeks), we will be in a
> position to share our findings.

As per my understanding, the current state of the patch is waiting for performance results from the author.

Moved to next CF with "waiting on author" status. Please feel free to update the status if the current status differs from the actual patch status.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] Hash tables in dynamic shared memory
On Sun, Nov 20, 2016 at 9:54 AM, John Gorman wrote:
> I reviewed the dht-v2.patch and found that it is in excellent shape.
>
> The overall performance will be faster due to not having to examine
> more than one hash bucket array most of the time.

The reviewer didn't find any problems with the approach of the patch, and provided some suggested improvements for the author to respond to. Moved to next CF with "waiting on author" state.

Regards,
Hari Babu
Fujitsu Australia
Re: [HACKERS] patch: function xmltable
Pavel Stehule wrote:
> 2016-12-04 23:00 GMT+01:00 Pavel Stehule:
> > I am not sure if I understand your ideas well - please, check the attached
> > patch.

Thanks, that's what I meant, but I think you went a bit overboard creating new functions in execQual -- seems to me it would work just fine to have the memory switches in the same function, rather than having a number of separate functions just to change the context then call the method. Please remove these shim functions. Also, you forgot to remove the now-unused per_rowset_memcxt struct member.

Also, please rename "rc" to something more meaningful -- maybe "rowcount" is good enough. And "doc" would perhaps be better as "document".

I'm not completely sure the structs are really sensible yet. I may do some more changes tomorrow.

--
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] Parallel tuplesort (for parallel B-Tree index creation)
On Sat, Dec 3, 2016 at 7:23 PM, Tomas Vondra wrote:
> I do share your concerns about unpredictable behavior - that's
> particularly worrying for pg_restore, which may be used for
> time-sensitive use cases (DR, migrations between versions), so
> unpredictable changes in behavior / duration are unwelcome.

Right.

> But isn't this more a deficiency in pg_restore, than in CREATE INDEX?
> The issue seems to be that the reltuples value may or may not get
> updated, so maybe forcing ANALYZE (even very low statistics_target
> values would do the trick, I think) would be more appropriate solution?
> Or maybe it's time add at least some rudimentary statistics into the
> dumps (the reltuples field seems like a good candidate).

I think that there is a number of reasonable ways of looking at it. It might also be worthwhile to have a minimal ANALYZE performed by CREATE INDEX directly, iff there are no preexisting statistics (there is definitely going to be something pg_restore-like that we cannot fix -- some ETL tool, for example). Perhaps, as an additional condition to proceeding with such an ANALYZE, it should also only happen when there is any chance at all of parallelism being used (but then you get into having to establish the relation size reliably in the absence of any pg_class.relpages, which isn't very appealing when there are many tiny indexes).

In summary, I would really like it if a consensus emerged on how parallel CREATE INDEX should handle the ecosystem of tools like pg_restore, reindexdb, and so on. Personally, I'm neutral on which general approach should be taken. Proposals from other hackers about what to do here are particularly welcome.

--
Peter Geoghegan
Re: [HACKERS] PSQL commands: \quit_if, \quit_unless
2016-12-04 20:55 GMT+01:00 Fabien COELHO:
>>> Yes, that is a possibility, but this can already be queried into a
>>> :-variable, so it is less indispensable.
>>
>> Can you show some examples, please?
>
> SELECT COUNT(*) AS has_unit_extension
>   FROM pg_extension WHERE extname='unit' \gset
> \echo :has_unit_extension
> 1
>
> So that
>
> \if :has_unit_extension
>   CREATE TABLE foo(id SERIAL, stuff UNIT);
> \else
>   \echo "unit extension is not loaded"
>   \quit
> \fi
>
> Ok, for this example one may try:
>
> CREATE EXTENSION IF NOT EXISTS unit;
>
> Or use the "ON_ERROR_STOP" setting, but that is the idea: SQL can be used
> to test anything server-side.

Understood. I am thinking that the first step can simply, and in a much more friendly way, be replaced by a specialized function:

\if has_extension(...)

The functions are supported by pgbench already, so we can take code from there. I don't think the psql script language should be too rich - it should be a DSL, and common use cases should be supported with friendly syntax. The set of functions can be small in the first stage - we can support only one function.

Regards
Pavel

> --
> Fabien.
Re: [HACKERS] PSQL commands: \quit_if, \quit_unless
>> Yes, that is a possibility, but this can already be queried into a
>> :-variable, so it is less indispensable.
>
> Can you show some examples, please?

SELECT COUNT(*) AS has_unit_extension
  FROM pg_extension WHERE extname='unit' \gset
\echo :has_unit_extension
1

So that

\if :has_unit_extension
  CREATE TABLE foo(id SERIAL, stuff UNIT);
\else
  \echo "unit extension is not loaded"
  \quit
\fi

Ok, for this example one may try:

CREATE EXTENSION IF NOT EXISTS unit;

Or use the "ON_ERROR_STOP" setting, but that is the idea: SQL can be used to test anything server-side.

--
Fabien.
Re: [HACKERS] PSQL commands: \quit_if, \quit_unless
2016-12-04 17:35 GMT+01:00 Fabien COELHO:
> Hello Pavel,
>
>> Some possibilities from pgbench can make sense in psql too - generating
>> some random numbers from a range.
>
> Could you expand on the use case where this would be useful?

Writing test cases.

>> In the end we use one parser for psql and for pgbench.
>
> Note that the "master" lexer is already shared, thanks to Tom, so as to
> detect consistently where a query ends.
>
>> I agree, so step 2 should be enough, and I accept that there is an open
>> door for any future enhancement.
>
> Good, because that was the idea :-)
>
>> We can implement some client-side boolean functions (similar to pgbench
>> functions) that can cover common tasks: version_less, version_greather,
>> user_exists, tables_exists, index_exists, variable_exists, schema_exists,
>
> Yes, that is a possibility, but this can already be queried into a
> :-variable, so it is less indispensable.

Can you show some examples, please?

Regards
Pavel
Re: [HACKERS] PSQL commands: \quit_if, \quit_unless
Hello Pavel,

> Some possibilities from pgbench can make sense in psql too - generating
> some random numbers from a range.

Could you expand on the use case where this would be useful?

> In the end we use one parser for psql and for pgbench.

Note that the "master" lexer is already shared, thanks to Tom, so as to detect consistently where a query ends.

> I agree, so step 2 should be enough, and I accept that there is an open
> door for any future enhancement.

Good, because that was the idea :-)

> We can implement some client-side boolean functions (similar to pgbench
> functions) that can cover common tasks: version_less, version_greather,
> user_exists, tables_exists, index_exists, variable_exists, schema_exists,

Yes, that is a possibility, but this can already be queried into a :-variable, so it is less indispensable.

--
Fabien.
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On Sun, Dec 4, 2016 at 11:11 PM, Andreas Karlsson wrote:
> On 12/04/2016 02:12 PM, Michael Paquier wrote:
>> One last thing that I think is missing in this patch is for users the
>> possibility to check via SQL if the SSL context is actually loaded or
>> not. As the context is reloaded after all the new values are
>> available, with the current patch users may see that ssl is set to on
>> but no context is loaded. So why not adding for example a read-only
>> parameter that maps with SSLLoaded?
>
> The other three issues are easy to fix, but this one is a bit trickier.
> Do you mean that we should add another GUC here which is read-only?

Yes, that's what I meant. It is hard to track if the reloading has been effective or not.

> Does this have a precedent in the code?

data_checksums in guc.c is an example, it is marked with GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE and its value is updated with SetConfigOption() when the control file is read. The idea would be to do something like that with LoadedSSL.
--
Michael
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On 12/04/2016 02:12 PM, Michael Paquier wrote:
> One last thing that I think is missing in this patch is for users the
> possibility to check via SQL if the SSL context is actually loaded or
> not. As the context is reloaded after all the new values are
> available, with the current patch users may see that ssl is set to on
> but no context is loaded. So why not adding for example a read-only
> parameter that maps with SSLLoaded?

The other three issues are easy to fix, but this one is a bit trickier. Do you mean that we should add another GUC here which is read-only? Does this have a precedent in the code?

Andreas
Re: [HACKERS] Rename max_parallel_degree?
On Fri, Dec 02, 2016 at 07:45:56AM -0500, Robert Haas wrote:
> On Thu, Dec 1, 2016 at 10:07 PM, Haribabu Kommi wrote:
> > From the recent mails, it is not clear to me what is the status of this
> > patch. Moved to next CF with "needs review" state. Please feel free to
> > update the status.
>
> I have committed this patch.

And updated the status, too! Thanks!

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org
Re: [HACKERS] [PATCH] Reload SSL certificates on SIGHUP
On Fri, Dec 2, 2016 at 1:02 PM, Michael Paquier wrote:
> On Fri, Dec 2, 2016 at 11:26 AM, Andreas Karlsson wrote:
>> I have attached a new version. The commitfest should technically have been
>> closed by now, so do what you like with the status. I can always submit the
>> patch to the next commitfest.
>
> I have just moved it to the next CF. Will look at it in a couple of
> days, that's more or less at the top of my TODO list.

Thanks for the new version, it is far easier to check that there is no regression with the new code.

 /* Public interface */
 /* */
+
Useless noise.

+void
+be_tls_destroy(void)
+{
+	SSL_CTX_free(SSL_context);
+	SSL_context = NULL;
+}
Perhaps we had better leave immediately if SSL_context is already NULL? I have tested the error code paths by enforcing an error manually and I could not see any failures; still, I am wondering if calling SSL_CTX_free(NULL) would be safe, particularly if there is a callback added in the future.

+	if (secure_initialize(false) != 0)
+		ereport(WARNING,
+				(errmsg("ssl context not reloaded")));
SSL should be upper-case here.

One last thing that I think is missing in this patch is for users the possibility to check via SQL if the SSL context is actually loaded or not. As the context is reloaded after all the new values are available, with the current patch users may see that ssl is set to on but no context is loaded. So why not adding for example a read-only parameter that maps with SSLLoaded?

Once those issues are addressed, I think that this will be ready for committer.
--
Michael