Re: [HACKERS] SQL MERGE is quite distinct from UPSERT
On Sun, Jul 20, 2014 at 10:41 PM, Craig Ringer wrote: > I'd be very keen to see atomic upsert in Pg. Please Cc me on any patches > / discussion, I'll be an eager tester. Great. Thanks Craig. -- Peter Geoghegan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] small doccumentation fix in psql
Currently \pset is also supported without any argument, so the same is updated in the documentation. \pset option [ value ] Changed to \pset [ option [ value ] ] This patch does update the documentation as stated, and makes it consistent with reality and the embedded psql help. This is an improvement and I recommend its inclusion. I would also suggest moving the sentence currently at the end of the description: "\pset without any arguments displays the current status of all printing options." to the end of the first paragraph, before the detailed description of options, so as to comment directly on this non-mandatory "option". -- Fabien.
[HACKERS] Checkpointer crashes on slave in 9.4 on windows
During internal tests, it was observed that the checkpointer crashes on the slave in Windows, with the below log on the slave: LOG: checkpointer process (PID 4040) was terminated by exception 0xC005 HINT: See C include file "ntstatus.h" for a description of the hexadecimal value. LOG: terminating any other active server processes I debugged and found that it happens when the checkpointer tries to update the shared memory config; below is the call stack. > postgres.exe!LWLockAcquireCommon(LWLock * l=0x, LWLockMode mode=LW_EXCLUSIVE, unsigned __int64 * valptr=0x0020, unsigned __int64 val=18446744073709551615) Line 579 + 0x14 bytes C postgres.exe!LWLockAcquireWithVar(LWLock * l=0x, unsigned __int64 * valptr=0x0020, unsigned __int64 val=18446744073709551615) Line 510 C postgres.exe!WALInsertLockAcquireExclusive() Line 1627 C postgres.exe!UpdateFullPageWrites() Line 9037 C postgres.exe!UpdateSharedMemoryConfig() Line 1364 C postgres.exe!CheckpointerMain() Line 359 C postgres.exe!AuxiliaryProcessMain(int argc=2, char * * argv=0x007d2180) Line 427 C postgres.exe!SubPostmasterMain(int argc=4, char * * argv=0x007d2170) Line 4635 C postgres.exe!main(int argc=4, char * * argv=0x007d2170) Line 207 C Basically, the issue here is that during startup, when the checkpointer tries to acquire the WAL insertion locks to update the value of fullPageWrites, it crashes because they are still not initialized. They will be initialized in InitXLOGAccess(), which gets called via RecoveryInProgress() when recovery is in progress, before doing the actual checkpoint. However, we are trying to access them before that, which leads to the crash. I think the reason why it occurs only on Windows is that on Linux, fork() ensures that the WAL insertion locks are initialized with the same values as in the postmaster.
To fix this issue, we need to ensure that the WAL insertion locks are initialized before we use them. One way is to call InitXLOGAccess() before calling CheckpointerMain(), as I have done in the attached patch; another could be to call RecoveryInProgress() much earlier in the path than now. Steps to reproduce the issue --- On Master a. Change below parameters in postgresql.conf wal_level = archive archive_mode = on archive_command = 'copy "%p" "c:\\Users\\PostgreSQL\9.4\\bin\\archive\\%f"' archive_timeout = 10 b. Change pg_hba.conf to accept connections from slave c. Start server d. Connect to server and start online backup psql.exe -p 5432 -c "select pg_start_backup('label-1')"; postgres e. Create the slave directory by copying everything from master f. Remove postmaster.pid from slave directory g. Change port on slave h. Create recovery.conf with below parameters on slave: standby_mode=on restore_command = 'copy "c:\\Users\\PostgreSQL\9.4\\bin\\archive\\%f" "%p"' i. Stop online backup on master psql.exe -p 5432 -c "select pg_stop_backup('1')"; postgres j. Start the slave and you can observe below logs: LOG: checkpointer process (PID 4040) was terminated by exception 0xC005 HINT: See C include file "ntstatus.h" for a description of the hexadecimal value. Comments? With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com fix_checkpointer_crash_on_slave_v1.patch Description: Binary data
Re: [HACKERS] [bug fix] pg_ctl always uses the same event source
On Sun, Jul 20, 2014 at 7:44 AM, Amit Kapila wrote: > On Fri, Jul 18, 2014 at 7:08 PM, MauMau wrote: >> >> From: "Magnus Hagander" >> >>> On Fri, Jul 18, 2014 at 5:33 AM, Amit Kapila >>> wrote: On Thu, Jul 17, 2014 at 4:51 PM, Magnus Hagander wrote: > > > Did anyone actually test this patch? :) > > I admit I did not build it on Windows specifically because I assumed > that was done as part of the development and review. And the changes > to pg_event.c can never have built, since the file does not include > the required header. I have tested it on Windows and in fact on Linux as well to see if there is any side impact before marking it as Ready for Committer. It seems to me that the required header is removed in the last version (pg_ctl_eventsrc_v11), where MessageBox() related changes have been removed from the patch as per recent discussion. Sorry for not being able to check the last version posted. >>> >>> >>> Gotcha. Thanks for clarifying, and I apologize if I came across a bit >>> harsh even with the smiley. > > The statement was not at all harsh. I think you are right in asking that > question and I believe that is the minimum expectation once the patch > reaches Ready for Committer stage. > > >> >> I'm sorry to have caused both of you trouble. I have to admit that I >> didn't compile the source when I removed the MessageBox()-related changes. >> The attached patch fixes that. I confirmed successful build this time. > > I have tested the attached patch on Windows, it works fine both for > default and non-default cases. It passes other regression tests as well > and the build went fine on Linux. > > One more thing about inclusion of new header file in pgevent.c, I think > that is okay because we include it in other modules (client side) present > in bin directory like reindex. Thanks to you both for confirming it works, applied in the current state.
-- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] [GSoC2014] Patch ALTER TABLE ... SET LOGGED
Re: Fabrízio de Royes Mello 2014-07-16 > Anyway I think all is ok now. Is this ok for you? Hi Fabrízio, it's ok for me now, though Andres' concerns seem valid. > > > +SET TABLESPACE new_tablespace > > > RESET ( storage_parameter [, ... ] ) > > > INHERIT parent_table > > > NO INHERIT parent_table > > > OF type_name > > > NOT OF > > > OWNER TO new_owner > > > -SET TABLESPACE new_tablespace > > > > That should get a footnote in the final commit message. > > > > Sorry, I didn't understand what you meant. I meant to say that when this patch gets committed, the commit message might want to explain that while editing this doc part, we moved SET TABLESPACE to the place where it really belongs. > > I think we are almost there :) > > Yeah... thanks a lot for your help. Welcome. I'll look forward to using this in production :) Christoph -- c...@df7cb.de | http://www.df7cb.de/
Re: [HACKERS] [GSoC2014] Patch ALTER TABLE ... SET LOGGED
On 2014-07-16 20:45:15 -0300, Fabrízio de Royes Mello wrote: > > The rewrite will read in the 'old' contents - but because it's done > > after the pg_class.relpersistence is changed they'll all not be marked > > as BM_PERMANENT in memory. Then the ALTER TABLE is rolled back, > > including the relpersistence setting. Which will unfortunately leave > > pages with the wrong persistency setting in memory, right? > > > > That means should I "FlushRelationBuffers(rel)" before change the > relpersistence? Did my explanation clarify the problem + possible solution sufficiently? Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Portability issues in TAP tests
Re: Noah Misch 2014-07-18 <20140718052625.ga2231...@tornado.leadboat.com> > Installing a new version of one Perl module is well within the capabilities of > buildfarm owners. Saving them the trouble, which in turn means more of them > actually activating the TAP tests, might justify the loss. I'd be sad to see > the "subtest" use go away, but I lean toward thinking it's for the best. > > From a technical standpoint, it would be nicest to bundle copies of Test::More > and IPC::Run for the test suites to use. Please not. If no OS packages are available, there's still "cpan IPC::Run" to get it installed to $HOME automatically. > A minor point to add to your list: > > 5. The TAP suites don't work when $PWD contains spaces. Here's yet another thing that I think is pretty major: 6. The tests fail if your $LANG isn't en_something: ok 1 - vacuumdb -z postgres exit code 0 not ok 2 - vacuumdb -z: SQL found in server log # Failed test 'vacuumdb -z: SQL found in server log' # at /home/cbe/projects/postgresql/postgresql/9.4/build/../src/test/perl/TestLib.pm line 221. # 'LOG: Anweisung: VACUUM (ANALYZE); # ' # doesn't match '(?^:statement: VACUUM \(ANALYZE\);)' # Looks like you failed 1 test of 2. ... repeated a gazillion times. pg_regress unsets LANG and LC_*; the Perl tests should do the same. Christoph -- c...@df7cb.de | http://www.df7cb.de/
[HACKERS] Shapes on the regression test for polygon
The first two shapes on src/test/regress/sql/polygon.sql do not make sense to me. They look more like polygons with some more tabs, but still did not match the coordinates. I changed them to make consistent with the shapes. I believe this was the intention of the original author. Patch attached. diff --git a/src/test/regress/expected/polygon.out b/src/test/regress/expected/polygon.out index b252902..66ff51d 100644 --- a/src/test/regress/expected/polygon.out +++ b/src/test/regress/expected/polygon.out @@ -1,28 +1,28 @@ -- -- POLYGON -- -- polygon logic -- -- 3 o --- | +-- | -- 2 + | ---/ | --- 1 # o + --- /| +--/ | +-- 1 # + +-- / o | -- 0 #-o-+ -- --- 0 1 2 3 4 +-- 0 1 2 3 4 -- CREATE TABLE POLYGON_TBL(f1 polygon); -INSERT INTO POLYGON_TBL(f1) VALUES ('(2.0,0.0),(2.0,4.0),(0.0,0.0)'); -INSERT INTO POLYGON_TBL(f1) VALUES ('(3.0,1.0),(3.0,3.0),(1.0,0.0)'); +INSERT INTO POLYGON_TBL(f1) VALUES ('(2.0,2.0),(0.0,0.0),(4.0,0.0)'); +INSERT INTO POLYGON_TBL(f1) VALUES ('(3.0,3.0),(1.0,1.0),(3.0,0.0)'); -- degenerate polygons INSERT INTO POLYGON_TBL(f1) VALUES ('(0.0,0.0)'); INSERT INTO POLYGON_TBL(f1) VALUES ('(0.0,1.0),(0.0,1.0)'); -- bad polygon input strings INSERT INTO POLYGON_TBL(f1) VALUES ('0.0'); ERROR: invalid input syntax for type polygon: "0.0" LINE 1: INSERT INTO POLYGON_TBL(f1) VALUES ('0.0'); ^ INSERT INTO POLYGON_TBL(f1) VALUES ('(0.0 0.0'); ERROR: invalid input syntax for type polygon: "(0.0 0.0" @@ -36,157 +36,170 @@ INSERT INTO POLYGON_TBL(f1) VALUES ('(0,1,2,3'); ERROR: invalid input syntax for type polygon: "(0,1,2,3" LINE 1: INSERT INTO POLYGON_TBL(f1) VALUES ('(0,1,2,3'); ^ INSERT INTO POLYGON_TBL(f1) VALUES ('asdf'); ERROR: invalid input syntax for type polygon: "asdf" LINE 1: INSERT INTO POLYGON_TBL(f1) VALUES ('asdf'); ^ SELECT '' AS four, * FROM POLYGON_TBL; four | f1 --+- - | ((2,0),(2,4),(0,0)) - | ((3,1),(3,3),(1,0)) + | ((2,2),(0,0),(4,0)) + | ((3,3),(1,1),(3,0)) | ((0,0)) | ((0,1),(0,1)) (4 rows) -- overlap SELECT '' AS three, p.* FROM 
POLYGON_TBL p - WHERE p.f1 && '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 && '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; three | f1 ---+- - | ((2,0),(2,4),(0,0)) - | ((3,1),(3,3),(1,0)) + | ((2,2),(0,0),(4,0)) + | ((3,3),(1,1),(3,0)) (2 rows) -- left overlap SELECT '' AS four, p.* FROM POLYGON_TBL p - WHERE p.f1 &< '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 &< '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; four | f1 --+- - | ((2,0),(2,4),(0,0)) - | ((3,1),(3,3),(1,0)) + | ((3,3),(1,1),(3,0)) | ((0,0)) | ((0,1),(0,1)) -(4 rows) +(3 rows) -- right overlap SELECT '' AS two, p.* FROM POLYGON_TBL p - WHERE p.f1 &> '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 &> '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; two | f1 -+- - | ((3,1),(3,3),(1,0)) + | ((3,3),(1,1),(3,0)) (1 row) -- left of SELECT '' AS one, p.* FROM POLYGON_TBL p - WHERE p.f1 << '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 << '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; one | f1 -+--- | ((0,0)) | ((0,1),(0,1)) (2 rows) -- right of SELECT '' AS zero, p.* FROM POLYGON_TBL p - WHERE p.f1 >> '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 >> '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; zero | f1 --+ (0 rows) -- contained SELECT '' AS one, p.* FROM POLYGON_TBL p - WHERE p.f1 <@ polygon '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 <@ polygon '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; one | f1 -+- - | ((3,1),(3,3),(1,0)) + | ((3,3),(1,1),(3,0)) (1 row) -- same SELECT '' AS one, p.* FROM POLYGON_TBL p - WHERE p.f1 ~= polygon '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 ~= polygon '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; one | f1 -+- - | ((3,1),(3,3),(1,0)) + | ((3,3),(1,1),(3,0)) (1 row) -- contains SELECT '' AS one, p.* FROM POLYGON_TBL p - WHERE p.f1 @> polygon '(3.0,1.0),(3.0,3.0),(1.0,0.0)'; + WHERE p.f1 @> polygon '(3.0,3.0),(1.0,1.0),(3.0,0.0)'; one | f1 -+- - | ((3,1),(3,3),(1,0)) + | ((3,3),(1,1),(3,0)) (1 row) -- -- polygon logic -- -- 3 o --- | --- 2 + | ---/ | +-- /| +-- 2 + | +--/ | -- 1 / o + -- /| -- 0 +-o-+ -- --- 0 1 2 3 4 +-- 0 1 2 3 4 -- -- left of -SELECT polygon 
'(2.0,0.
Re: [HACKERS] ALTER TABLESPACE MOVE command tag tweak
Stephen Frost wrote: > * Tom Lane (t...@sss.pgh.pa.us) wrote: > > Stephen Frost writes: > > > That it's more-or-less a bulk 'ALTER TABLE' operation is why I had been > > > trying to think of a way to put it under that command. What if we had a > > > more general way to reference 'all objects in a tablespace'? > > > "tablespace.*" or "ALL:TABLESPACE"? Are there other places which might > > > benefit from being able to take and operate on all objects in a > > > tablespace? > > > > > Of course, we handle this in 'GRANT' with 'GRANT ON ALL TABLES', so why > > > not 'ALTER TABLE ON ALL TABLES IN TABLESPACE '? that does get > > > pretty darn verbose but is at least a bit more in-line with what we have > > > done before.. > > > > That's not a bad line of thought --- I doubt that verbosity is critical > > here. > > Alright, sounds like this is more-or-less the consensus. I'll see about > making it happen shortly. Stephen, Were you able to work on this? Can you be more specific on the exact grammar you're considering? The proposal above, ALTER TABLE ON ALL TABLES IN TABLESPACE xyz doesn't seem very good to me. I would think it'd be more like ALTER ALL TABLES IN TABLESPACE xyz but then if you return ALTER TABLE as a command tag that might be a bit strange. Maybe ALTER TABLE ALL IN TABLESPACE xyz which AFAICS should work since ALL is already a reserved keyword. Also, how would we document this? Would we have it in the same page as all the ALTER TABLE variants, or would we create a separate page for ALTER TABLE ALL? Keeping in mind that in the future we might want to allow things such as ALTER TABLE ALL IN SCHEMA xyz it might be better to have the selection logic documented neatly in its own little page instead of together with the ALTER TABLE mess which is already rather large.
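Laid out side by side, the candidate spellings discussed above would look like this (all hypothetical; none of this grammar is implemented at this point):

```sql
-- Verbose variant, following GRANT ... ON ALL TABLES:
ALTER TABLE ON ALL TABLES IN TABLESPACE xyz SET TABLESPACE pg_default;

-- Shorter variant, at odds with an ALTER TABLE command tag:
ALTER ALL TABLES IN TABLESPACE xyz SET TABLESPACE pg_default;

-- ALL as a reserved keyword after ALTER TABLE:
ALTER TABLE ALL IN TABLESPACE xyz SET TABLESPACE pg_default;
```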
-- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Portability issues in TAP tests
On Mon, Jul 21, 2014 at 10:06 AM, Christoph Berg wrote: > Re: Noah Misch 2014-07-18 <20140718052625.ga2231...@tornado.leadboat.com> >> Installing a new version of one Perl module is well within the capabilities >> of >> buildfarm owners. Saving them the trouble, which in turn means more of them >> actually activating the TAP tests, might justify the loss. I'd be sad to see >> the "subtest" use go away, but I lean toward thinking it's for the best. >> >> From a technical standpoint, it would be nicest to bundle copies of >> Test::More >> and IPC::Run for the test suites to use. > > Please not. If no OS packages are available, there's still "cpan IPC::Run" > to get it installed to $HOME automatically. > >> A minor point to add to your list: >> >> 5. The TAP suites don't work when $PWD contains spaces. > > Here's yet another thing that I think is pretty major: > > 6. The tests fail if your $LANG isn't en_something: > > ok 1 - vacuumdb -z postgres exit code 0 > not ok 2 - vacuumdb -z: SQL found in server log > > # Failed test 'vacuumdb -z: SQL found in server log' > # at > /home/cbe/projects/postgresql/postgresql/9.4/build/../src/test/perl/TestLib.pm > line 221. > # 'LOG: Anweisung: VACUUM (ANALYZE); > # ' > # doesn't match '(?^:statement: VACUUM \(ANALYZE\);)' > # Looks like you failed 1 test of 2. > > ... repeated a gazillion times. pg_regress unsets LANG and LC_*, the > perl tests should do also. While we're complaining... The tests weren't running for me at all on MacOS X, because I was missing some prerequisite. So I installed it, and now they promptly fail: ok 2 - initdb --version 1..2 ok 1 - initdb with invalid option nonzero exit code ok 2 - initdb with invalid option prints error message # Looks like your test exited with 256 just after 2. 
not ok 3 - initdb options handling Unlike the regular regression tests, these tests don't seem to provide any guidance on where to see the difference between the expected and actual output, or how to get more details, so that's all I know. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] SSL information view
--On 12. Juli 2014 15:08:01 +0200 Magnus Hagander wrote: Before doing that, however, I'd like to ask for opinions :) The hack currently exposes a separate view that you can join to pg_stat_activity (or pg_stat_replication) on the pid -- this is sort of the same way that pg_stat_replication works in the first place. Do we want something similar to that for a builtin SSL view as well, or do we want to include the fields directly in pg_stat_activity and pg_stat_replication? I've heard more than once the wish to get this information without contrib... especially for the SSL version used (client and server alike). So +1 for this feature. I'd vote for a special view; that will keep the information in a single place, and someone can easily join extra information together. -- Thanks Bernd
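As a sketch of how the separate-view option would be consumed (the view and column names here are hypothetical placeholders, not a committed catalog):

```sql
-- Hypothetical: join a per-connection SSL view to pg_stat_activity on pid
SELECT a.pid, a.usename, s.version AS ssl_version, s.cipher
  FROM pg_stat_activity a
  JOIN pg_stat_ssl s ON s.pid = a.pid;
```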
Re: [HACKERS] RLS Design
On Fri, Jul 18, 2014 at 7:01 PM, Brightwell, Adam wrote: >> I think we do want a way to modify policies. However, we tend to >> avoid syntax that involves unnatural word order, as this certainly >> does. Maybe it's better to follow the example of CREATE RULE and >> CREATE TRIGGER and do something this instead: >> >> CREATE POLICY policy_name ON table_name USING quals; >> ALTER POLICY policy_name ON table_name USING quals; >> DROP POLICY policy_name ON table_name; >> >> The advantage of this is that you can regard "policy_name ON >> table_name" as the identifier for the policy throughout the system. >> You need some kind of identifier of that sort anyway to support >> COMMENT ON, SECURITY LABEL, and ALTER EXTENSION ADD/DROP for policies. > > Sounds good. I certainly think it makes a lot of sense to include the ALTER > functionality, if for no other reason than ease of use. > > Another item to consider, though I believe it can come later, is per-action > policies. Following the above suggested syntax, perhaps that might look > like the following? > > CREATE POLICY policy_name ON table_name FOR action USING quals; > ALTER POLICY policy_name ON table_name FOR action USING quals; > DROP POLICY policy_name ON table_name FOR action; That seems reasonable. You need to give some thought to what happens if the user types: CREATE POLICY pol1 ON tab1 FOR SELECT USING q1; ALTER POLICY pol1 ON tab1 FOR INSERT USING q2; I guess you end up with q1 as the SELECT policy and q2 as the INSERT policy. Similarly, had you typed: CREATE POLICY pol1 ON tab1 USING q1; ALTER POLICY pol1 ON tab1 FOR INSERT USING q2; ...then I guess you end up with q2 for INSERTs and q1 for everything else. I'm wondering if it might be better, though, not to allow the quals to be specified in CREATE POLICY, or else to allow multiple actions. Otherwise, getting pg_dump to DTRT might be complicated. Perhaps: CREATE POLICY pol1 ON tab1 ( [ [ FOR operation [ OR operation ] ... ] USING quals ] ... 
); where operation = SELECT | INSERT | UPDATE | DELETE So that you can write things like: CREATE POLICY pol1 ON tab1 (USING a = 1); CREATE POLICY pol2 ON tab2 (FOR INSERT USING a = 1, FOR UPDATE USING b = 1, FOR DELETE USING c = 1); And then, for ALTER, just allow one change at a time, syntax as you proposed. That way each policy can be dumped as a single CREATE statement. > I was also giving some thought to the use of "POLICY", perhaps I am wrong, > but it does seem it could be at risk of becoming ambiguous down the road. I > can't think of any specific examples at the moment, but my concern is what > happens if we wanted to add another "type" of policy, whatever that might > be, later? Would it make more sense to go ahead and qualify this a little > more with "ROW SECURITY POLICY"? I think that's probably over-engineering. I'm not aware of anything else we might add that would be likely to be called a policy, and if we did add something we could probably call it something else instead. And long command names are annoying. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
[HACKERS] json_object vs empty string keys
For some reason, or possibly just through my carelessness and for no real reason, json_object() currently disallows empty strings as object keys. However silly empty string keys might be, they are apparently allowed by the JSON RFCs, and this behaviour by json_object() is inconsistent with what the json parser and other json generating functions allow, so I propose to remove the restriction, unless I hear any objections. cheers andrew
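To illustrate the inconsistency (a sketch of the two behaviors being compared; the second call is the one the proposal would start allowing):

```sql
-- The json parser already accepts an empty-string key:
SELECT '{"": 1}'::json;

-- json_object() currently rejects the same key; the proposal
-- is to lift this restriction so both paths agree:
SELECT json_object(ARRAY['', 'value']);
```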
Re: [HACKERS] Portability issues in TAP tests
On Mon, Jul 21, 2014 at 8:21 AM, Robert Haas wrote: > The tests weren't running for me at all on MacOS X, because I was > missing some prerequisite. So I installed it, and now they promptly > fail: > > ok 2 - initdb --version > 1..2 > ok 1 - initdb with invalid option nonzero exit code > ok 2 - initdb with invalid option prints error message > # Looks like your test exited with 256 just after 2. > not ok 3 - initdb options handling I've had the same issue. The error doesn't seem to come from a failed test, but from within 'prove' itself. Installing Perl 5.18 via Homebrew, and setting PROVE="/perl518/5.18.2/prove" at configure time seems to help avoid this particular problem. So I did not attempt any further diagnosis of the root cause.
Re: [HACKERS] [GSoC2014] Patch ALTER TABLE ... SET LOGGED
On Mon, Jul 21, 2014 at 9:51 AM, Andres Freund wrote: > > On 2014-07-16 20:45:15 -0300, Fabrízio de Royes Mello wrote: > > > The rewrite will read in the 'old' contents - but because it's done > > > after the pg_class.relpersistence is changed they'll all not be marked > > > as BM_PERMANENT in memory. Then the ALTER TABLE is rolled back, > > > including the relpersistence setting. Which will unfortunately leave > > > pages with the wrong persistency setting in memory, right? > > > > > > > That means should I "FlushRelationBuffers(rel)" before change the > > relpersistence? > > Did my explanation clarify the problem + possible solution sufficiently? > Yes, your explanation was enough. I haven't had time to work on it yet, but I hope to work on it today or tomorrow at least. Regards, -- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL >> Timbira: http://www.timbira.com.br >> Blog sobre TI: http://fabriziomello.blogspot.com >> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello >> Twitter: http://twitter.com/fabriziomello
[HACKERS] Exporting Table-Specified BLOBs Only?
Fundamental issue: PostgreSQL is different than Oracle in how it stores BLOBs http://www.postgresql.org/docs/9.1/static/catalog-pg-largeobject.html. In PostgreSQL ... In the particular tables BLOBs are stored as OIDs (unique numbers). There is one table for all BLOB data (see pg_largeobject under Catalogs/PostgreSQL/pg_largeobject in the pgadmin tree). It stores columns "OID/pageno/actual binary data (page)". What do I need? A method of using pg_dump to selectively export BLOBs with OIDs used in the tables specified with --table --table I have four tables containing BLOBs I want to export w/o the BLOB data - that's easy. What I also want is the remaining 200+ tables exported with THEIR BLOB entries, if any, but NOT one BLOB from the four tables explicitly not specified in the pg_dump command; so JUST the BLOBs associated with any of the 200+ tables I do specify. Thanks! Alan
Re: [HACKERS] Portability issues in TAP tests
On Mon, Jul 21, 2014 at 12:00 PM, Thomas Fanghaenel wrote: > On Mon, Jul 21, 2014 at 8:21 AM, Robert Haas wrote: >> The tests weren't running for me at all on MacOS X, because I was >> missing some prerequisite. So I installed it, and now they promptly >> fail: >> >> ok 2 - initdb --version >> 1..2 >> ok 1 - initdb with invalid option nonzero exit code >> ok 2 - initdb with invalid option prints error message >> # Looks like your test exited with 256 just after 2. >> not ok 3 - initdb options handling > > I've had the same issue. The error doesn't seem to come from a failed > test, but from within 'prove' itself. Installing Perl 5.18 via > Homebrew, and setting > PROVE="/perl518/5.18.2/prove" at configure time > seems to help avoid this particular problem. So I did not attempt any > further diagnosis of the root cause. Mmph. Well, I don't want to install a non-default Perl on my system just to make these tests pass, and I don't think that should be a requirement. (But thanks for the information.) -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Portability issues in TAP tests
Robert Haas writes: > Mmph. Well, I don't want to install a non-default Perl on my system > just to make these tests pass, and I don't think that should be a > requirement. I had the same feeling about the Perl on RHEL6 ;-). The TAP tests will need to be a great deal more portable than they've proven so far before they'll really be useful for anything much. I trust we're committed to making that happen. regards, tom lane
Re: [HACKERS] Portability issues in TAP tests
On 07/21/2014 02:40 PM, Tom Lane wrote: Robert Haas writes: Mmph. Well, I don't want to install a non-default Perl on my system just to make these tests pass, and I don't think that should be a requirement. I had the same feeling about the Perl on RHEL6 ;-). The TAP tests will need to be a great deal more portable than they've proven so far before they'll really be useful for anything much. I trust we're committed to making that happen. I'd be rather inclined to remove them from the check-world and installcheck-world targets until we have dealt with the issues people have identified. cheers andrew
[HACKERS] Index-only scans for multicolumn GIST
Hi, hackers! Here are new results of my work on the GSoC project "Index-only scans for GIST". The previous post is here: http://postgresql.1045698.n5.nabble.com/Index-only-scans-for-GIST-td5804892.html The repository is https://github.com/lubennikovaav/postgres/tree/indexonlygist2 The patch is in the attachments. It includes index-only scans for multicolumn GiST. It works correctly - results are the same as with other scan plans. The Fetch() method is implemented for the box and circle opclasses. The improvement for the circle opclass is not as distinct as for the box opclass, because of the recheck. I remember that all "elog" calls and other bad comments must be fixed before this patch can be committed. Any comments are welcome. -- Best regards, Lubennikova Anastasia indexonlygist_2.1.patch Description: Binary data
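A minimal way to exercise the patch (table, index, and query are my own illustration under the assumption that the patch is applied; they are not taken from the patch itself):

```sql
-- With the patch applied, the box opclass has a Fetch() method,
-- so this query should be able to use an index-only scan:
CREATE TABLE boxes AS
  SELECT box(point(i, i), point(i + 1, i + 1)) AS b
    FROM generate_series(1, 10000) AS i;
CREATE INDEX boxes_b_idx ON boxes USING gist (b);
VACUUM ANALYZE boxes;
SET enable_seqscan = off;
EXPLAIN SELECT b FROM boxes WHERE b && box '((0,0),(10,10))';
```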
Re: [HACKERS] postgresql.auto.conf and reload
Tom Lane wrote: > Josh Berkus writes: > >> 2. *Then*, in a second pass, enforce requirements like "can't be > >> changed except at server start". > > > This would also make conf.d much more useful; I wouldn't have to worry > > as much about overlapping config settings. > > > Sounds like a 9.5 feature, though. > > No, ALTER SYSTEM is there now and it needs to work right in its first > release. I will go fix this if nobody else does. Just checking -- you didn't get around to dealing with this, right? -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] postgresql.auto.conf and reload
Alvaro Herrera writes: > Tom Lane wrote: >> No, ALTER SYSTEM is there now and it needs to work right in its first >> release. I will go fix this if nobody else does. > Just checking -- you didn't get around to dealing with this, right? Not yet... do you want it? regards, tom lane
[HACKERS] plpgsql.extra_warnings='num_into_expressions'
Hi again, Here's a patch which allows you to notice those annoying bugs with INTO slightly more quickly. Adding to the next commitfest. .marko *** a/doc/src/sgml/plpgsql.sgml --- b/doc/src/sgml/plpgsql.sgml *** *** 4730,4736 a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$ plpgsql.extra_errors for errors. Both can be set either to a comma-separated list of checks, "none" or "all". The default is "none". Currently the list of available checks ! includes only one: shadowed_variables --- 4730,4736 plpgsql.extra_errors for errors. Both can be set either to a comma-separated list of checks, "none" or "all". The default is "none". Currently the list of available checks ! is as follows: shadowed_variables *** *** 4740,4745 a_output := a_output || $$ if v_$$ || referrer_keys.kind || $$ like '$$ --- 4740,4755 + + num_into_expressions + + + When possible, checks that the number of expressions specified in the + SELECT or RETURNING list of a query matches the number expected by the + INTO target. This check is done on a "best effort" basis. + + + The following example shows the effect of plpgsql.extra_warnings *** a/src/pl/plpgsql/src/pl_gram.y --- b/src/pl/plpgsql/src/pl_gram.y *** *** 22,27 --- 22,28 #include "parser/scanner.h" #include "parser/scansup.h" #include "utils/builtins.h" + #include "nodes/nodeFuncs.h" /* Location tracking support --- simpler than bison's default */ *** *** 97,103 static PLpgSQL_row *make_scalar_list1(char *initial_name, PLpgSQL_datum *initial_datum, int lineno, int location); static void check_sql_expr(const char *stmt, int location, ! int leaderlen); static void plpgsql_sql_error_callback(void *arg); static PLpgSQL_type *parse_datatype(const char *string, int location); static void check_labels(const char *start_label, --- 98,104 PLpgSQL_datum *initial_datum, int lineno, int location); static void check_sql_expr(const char *stmt, int location, !
int leaderlen, PLpgSQL_row *check_row); staticvoid plpgsql_sql_error_callback(void *arg); staticPLpgSQL_type*parse_datatype(const char *string, int location); staticvoid check_labels(const char *start_label, *** *** 106,111 static void check_labels(const char *start_label, --- 107,115 staticPLpgSQL_expr*read_cursor_args(PLpgSQL_var *cursor, int until, const char *expected); staticList*read_raise_options(void); + staticboolfind_a_star_walker(Node *node, void *context); + staticint tlist_result_column_count(Node *stmt); + %} *** *** 1438,1444 for_control : for_variable K_IN PLpgSQL_stmt_fori *new; /* Check first expression is well-formed */ ! check_sql_expr(expr1->query, expr1loc, 7); /* Read and check the second one */ expr2 = read_sql_expression2(K_LOOP, K_BY, --- 1442,1448 PLpgSQL_stmt_fori *new; /* Check first expression is well-formed */ ! check_sql_expr(expr1->query, expr1loc, 7, NULL); /* Read and check the second one */ expr2 = read_sql_expression2(K_LOOP, K_BY, *** *** 1500,1506 for_control : for_variable K_IN pfree(expr1->query);
[HACKERS] IS NOT DISTINCT FROM + Indexing
Hi,

I'm curious if there is a reason why "IS NOT DISTINCT FROM" is not an indexable operation in a B-tree index, as it is effectively testing for equality, albeit with some "magic" for NULLs?

Here is an example of what I mean, running tests on 9.3.4:

-- create a table of integers
CREATE TABLE numbers AS SELECT x FROM generate_series(1,1000000) x;

-- create a b-tree index
CREATE INDEX numbers_x_idx ON numbers (x);

-- find x = 500
SELECT * FROM numbers WHERE x = 500;
  x
-----
 500
(1 row)

-- query plan
EXPLAIN SELECT * FROM numbers WHERE x = 500;
                                    QUERY PLAN
----------------------------------------------------------------------------------
 Index Only Scan using numbers_x_idx on numbers  (cost=0.42..8.44 rows=1 width=4)
   Index Cond: (x = 500)
(2 rows)

-- now find x IS NOT DISTINCT FROM 500
SELECT * FROM numbers WHERE x IS NOT DISTINCT FROM 500;
  x
-----
 500
(1 row)

-- but the query plan is...
EXPLAIN SELECT * FROM numbers WHERE x IS NOT DISTINCT FROM 500;
                        QUERY PLAN
-----------------------------------------------------------
 Seq Scan on numbers  (cost=0.00..16925.00 rows=1 width=4)
   Filter: (NOT (x IS DISTINCT FROM 500))

With NULLs being indexable, I was wondering if there was some reason why IS NOT DISTINCT FROM could not use the index?

Thanks,

Jonathan
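For readers less familiar with the NULL semantics involved, the "magic" can be stated outside of SQL. The following is an illustrative model only (not PostgreSQL code), using Python's None for NULL: SQL's `=` is three-valued and yields unknown whenever a NULL is involved, while IS NOT DISTINCT FROM always yields true or false.

```python
# Model SQL's `=` (three-valued) versus IS NOT DISTINCT FROM (two-valued),
# with None standing in for NULL. Illustrative sketch, not PostgreSQL code.

def sql_eq(x, y):
    """SQL `=`: the result is unknown (None) if either operand is NULL."""
    if x is None or y is None:
        return None
    return x == y

def is_not_distinct_from(x, y):
    """IS NOT DISTINCT FROM: two NULLs compare equal; NULL vs non-NULL do not."""
    if x is None and y is None:
        return True
    if x is None or y is None:
        return False
    return x == y

if __name__ == "__main__":
    print(sql_eq(500, 500), is_not_distinct_from(500, 500))    # True True
    print(sql_eq(None, 500), is_not_distinct_from(None, 500))  # None False
    print(sql_eq(None, None), is_not_distinct_from(None, None))# None True
```

So IS NOT DISTINCT FROM really is "equality plus deterministic NULL handling", which is why indexing it feels like it ought to be possible.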
Re: [HACKERS] IS NOT DISTINCT FROM + Indexing
On Mon, Jul 21, 2014 at 4:16 PM, Jonathan S. Katz wrote:
> With NULLs being indexable, I was wondering if there was some reason why IS
> NOT DISTINCT FROM could not use the index?

FWIW this works:

postgres=# explain analyze select * from orders where orderid in (5, null);
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Index Scan using orders_pkey on orders  (cost=0.29..12.60 rows=1 width=60) (actual time=0.019..0.021 rows=1 loops=1)
   Index Cond: (orderid = ANY ('{5,NULL}'::integer[]))
 Planning time: 0.100 ms
 Execution time: 0.416 ms
(4 rows)

I think that it would almost be a Simple Matter of Programming to make IS NOT DISTINCT FROM indexable. Under the hood, IS DISTINCT FROM isn't very different from using the equality operator:

/*
 * DistinctExpr - expression node for "x IS DISTINCT FROM y"
 *
 * Except for the nodetag, this is represented identically to an OpExpr
 * referencing the "=" operator for x and y.
 * We use "=", not the more obvious "<>", because more datatypes have "="
 * than "<>". This means the executor must invert the operator result.
 * Note that the operator function won't be called at all if either input
 * is NULL, since then the result can be determined directly.
 */
typedef OpExpr DistinctExpr;

We're already inverting the equals operator. But that isn't necessarily how a B-Tree index represents equality (that is, a particular B-Tree operator class could have a non-'=' operator that it thinks of as equality-ish - in general that could even be the default B-Tree opclass, and there may not be an equals operator). The fact that most types think of the '=' operator as equality is just a convention, so technically IS DISTINCT FROM doesn't invert B-Tree strategy number 3. See "31.14. Interfacing Extensions To Indexes" for details. The equals operator '=' isn't really supposed to be magic; it just is in some places.

Right now the executor is directly inverting the equality operator to make this work (and has done so since long before NULLs were indexable). This is a bit of a kludge.
I guess it just works that way because there is no convenient place to insert the special inversion of the operator, and the special NULL handling that currently appears within ExecEvalDistinct(). -- Peter Geoghegan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
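The evaluation strategy described above - reuse the datatype's "=" function, invert its result, and decide the NULL cases before the operator is ever called - can be sketched as follows. This is an illustrative model of the behavior, not the actual ExecEvalDistinct() C code:

```python
# Sketch of how IS DISTINCT FROM is evaluated: the NULL cases are decided
# directly, and only for two non-NULL inputs is the datatype's "=" function
# called and its result inverted. Illustration only, not PostgreSQL source.

def eval_distinct(x, y, eq):
    """x IS DISTINCT FROM y, given the datatype's equality function `eq`."""
    if x is None and y is None:
        return False          # two NULLs are not distinct
    if x is None or y is None:
        return True           # exactly one NULL: distinct
    return not eq(x, y)       # invert the "=" operator's result

def eval_not_distinct(x, y, eq):
    """x IS NOT DISTINCT FROM y."""
    return not eval_distinct(x, y, eq)
```

Note that `eq` is never called on a NULL input, matching the comment in the DistinctExpr node definition.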
Re: [HACKERS] IS NOT DISTINCT FROM + Indexing
On 2014-07-21 16:51:32 -0700, Peter Geoghegan wrote: > On Mon, Jul 21, 2014 at 4:16 PM, Jonathan S. Katz > wrote: > > With NULLs being indexable, I was wondering if there was some reason why IS > > NOT DISTINCT FROM could not use the index? > > FWIW this works: > > postgres=# explain analyze select * from orders where orderid in (5, null); I rather doubt it will. x in (y1, ... yn) is essentially expanded to x = y1 OR x = y2, ... OR x = yn. I.e. the NULL comparison will be done using normal equality comparison and thus not return a row with a NULL orderid. Am I missing something? > I think that it would almost be a Simple Matter of Programming to make > IS NOT DISTINCT FROM indexable. Under the hood, IS DISTINCT FROM isn't > very different to using the equality operator: But yea, it probably wouldn't take very much for that. Greetings, Andres Freund -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
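Andres's expansion argument can be checked mechanically under three-valued logic. A hedged sketch (illustrative only, with None for NULL) of expanding x IN (y1, ..., yn) into a chain of equality comparisons combined with SQL's OR:

```python
# x IN (y1, ..., yn) expanded as x = y1 OR ... OR x = yn under SQL's
# three-valued logic. A NULL in the list can never make the predicate
# true for a NULL x. Illustration only, not PostgreSQL code.

def sql_eq(x, y):
    if x is None or y is None:
        return None  # unknown
    return x == y

def sql_or(a, b):
    # three-valued OR: true dominates, then unknown, then false
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def sql_in(x, ys):
    result = False
    for y in ys:
        result = sql_or(result, sql_eq(x, y))
    return result
```

sql_in(None, [5, None]) yields unknown, which WHERE treats as false - so a row with a NULL orderid is indeed not returned, matching Andres's point that the IN-list example doesn't give IS NOT DISTINCT FROM semantics.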
Re: [HACKERS] IS NOT DISTINCT FROM + Indexing
On Mon, Jul 21, 2014 at 4:57 PM, Andres Freund wrote: > I rather doubt it will. x in (y1, ... yn) is essentially expanded to x = > y1 OR x = y2, ... OR x = yn. I.e. the NULL comparison will be done using > normal equality comparison and thus not return a row with a NULL > orderid. Am I missing something? I was a little bit incautious in my use of words. The point is that a scanKey could easily represent a ScalarArrayOpExpr with NULLs and non-NULLs. -- Peter Geoghegan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [bug fix] Suppress "autovacuum: found orphan temp table" message
From: "Andres Freund"
> On 2014-07-18 23:38:09 +0900, MauMau wrote:
>> So, I propose a simple fix to change the LOG level to DEBUG1. I don't know
>> which of DEBUG1-DEBUG5 is appropriate, and any level is OK. Could you
>> include this in 9.2.9?
>
> Surely that's the wrong end to tackle this from. Hiding actual problems
> is a seriously bad idea.

No, there is no serious problem with user operation in this situation. A server crash cannot be avoided, and must be anticipated. The problem is that PostgreSQL makes users worry about lots of (probably) unnecessary messages. Is there any problem if we don't output the message? According to past discussion in this community, the user can just ignore this message. If there is no problem, I'd like to propose it to the customer as one temporary measure when they ask.

Regards
MauMau
Re: [HACKERS] IS NOT DISTINCT FROM + Indexing
"Jonathan S. Katz" writes: > I'm curious if there is a reason why "IS NOT DISTINCT FROM" is not an > indexable operation in a B-tree index, The short reason why not is that it's not an operator (where "operator" is defined as "something with a pg_operator entry"), and all our indexing infrastructure is built around the notion that indexable clauses are of the form "indexed_column indexable_operator comparison_value". You could certainly imagine ways to fix that, but nobody's put in the probably-nontrivial effort required to do so. The btree code itself would likely be the easiest part to fix, as it sort of thinks nulls are real values already. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [BUGS] BUG #9652: inet types don't support min/max
Hi Asif,

On Wed, Jul 9, 2014 at 6:51 PM, Haribabu Kommi wrote:
> On Mon, Jul 7, 2014 at 6:59 PM, Asif Naeem wrote:
>> Hi Haribabu,
>>
>> Thank you for sharing the patch. I have spent some time reviewing the
>> changes. Overall the patch looks good to me; make check and manual testing
>> run fine with it. There seem to be no related doc/sgml changes? The patch
>> adds network_smaller() and network_greater() functions, but in the PG
>> source code the general practice seems to be to use "smaller" and "larger"
>> as the function name postfix, e.g. timestamp_smaller()/timestamp_larger(),
>> interval_smaller()/interval_larger(), cashsmaller()/cashlarger() etc.
>> Thanks.
>
> Thanks for reviewing the patch.
>
> I corrected the function names to smaller and larger,
> and also added documentation changes.
>
> Updated patch attached in the mail.

Hari has provided an updated patch as per your comments; if you think the patch is fine, could you please move it to Ready For Committer? In case your review is still pending, that is okay. I ask because, from your mail, it seems to me that the new patch addresses all your concerns.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] Portability issues in TAP tests
Andrew Dunstan writes: > On 07/21/2014 02:40 PM, Tom Lane wrote: >> I had the same feeling about the Perl on RHEL6 ;-). The TAP tests >> will need to be a great deal more portable than they've proven so far >> before they'll really be useful for anything much. I trust we're >> committed to making that happen. > I'd be rather inclined to remove them from the check-world and > installcheck-world targets until we have dealt with the issues people > have identified. I think it would be reasonable to do that in the 9.4 branch, since it's a bit hard to argue that the TAP tests are production grade at the moment. We could leave them there in HEAD. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] small doccumentation fix in psql
On 21 July 2014 13:39, Fabien COELHO wrote:
> This patch does update the documentation as stated, and makes it
> consistent with reality and the embedded psql help. This is an
> improvement and I recommend its inclusion.
>
> I would also suggest moving the sentence at the end of the description:
>
> "\pset without any arguments displays the current status of all
> printing options."
>
> to the end of the first paragraph, before the detailed description of
> options, so as to comment directly on this non-mandatory "option".

Done.

Regards,
Dilip

psql_doc_v1.patch
Description: psql_doc_v1.patch
Re: [HACKERS] IS NOT DISTINCT FROM + Indexing
On Jul 21, 2014, at 9:51 PM, Tom Lane wrote:
> "Jonathan S. Katz" writes:
>> I'm curious if there is a reason why "IS NOT DISTINCT FROM" is not an
>> indexable operation in a B-tree index,
>
> The short reason why not is that it's not an operator (where "operator"
> is defined as "something with a pg_operator entry"), and all our indexing
> infrastructure is built around the notion that indexable clauses are of
> the form "indexed_column indexable_operator comparison_value".
>
> You could certainly imagine ways to fix that, but nobody's put in the
> probably-nontrivial effort required to do so. The btree code itself
> would likely be the easiest part to fix, as it sort of thinks nulls
> are real values already.

What initially got me thinking about this problem is that I know "IS NULL" is indexable, and I was unsure how adding "IS NOT DISTINCT FROM" would be too different from that - of course, this is from my perspective of operating primarily on the surface. It sounds like the IS NULL work is in the btree code? Even if it is trivial, it would be tough for me personally to hack on without some hand-holding.

I did want to ask about it because it can be useful in simplifying some queries when you have to deal with NULLs (though in reality I tend to use IS DISTINCT FROM much more, in things like triggers), and it would be useful with exclusion constraints (though with those it sounds like it would have to be an operator?). If it is a small project someone is interested in, I would be happy to contribute by testing.

Jonathan
[HACKERS] Some bogus results from prairiedog
According to http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2014-07-21%2022%3A36%3A55 prairiedog saw a crash in "make check" on the 9.4 branch earlier tonight; but there's not a lot of evidence as to why in the buildfarm report, because the postmaster log file is truncated well before where things got interesting. Fortunately, I was able to capture a copy of check.log before it got overwritten by the next run. I find that the place where the webserver report stops matches this section of check.log: [53cd99bb.134a:158] LOG: statement: create index test_range_gist_idx on test_range_gist using gist (ir); [53cd99bb.134a:159] LOG: statement: insert into test_range_gist select int4range(g, g+10) from generate_series(1,2000) g; ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^\ @^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@[53cd99ba.1344:329] LOG: statement: INSERT INTO num_exp_div VALUES (7,8,'-1108.80577182462841041118'); [53cd99ba.1344:330] LOG: statement: INSERT INTO num_exp_add VALUES (7,9,'-107955289.045047420'); [53cd99ba.1344:331] LOG: statement: INSERT INTO num_exp_sub VALUES (7,9,'-58101680.954952580'); The ^@'s represent nul bytes, which I find runs of elsewhere in the file as well. I think they are an artifact of OS X buffering policy caused by multiple processes writing into the same file without any interlocks. Perhaps we ought to consider making buildfarm runs use the logging collector by default? But in any case, it seems uncool that either the buildfarm log-upload process, or the buildfarm web server, is unable to cope with log files containing nul bytes. 
Anyway, to cut to the chase, the crash seems to be from this: TRAP: FailedAssertion("!(FastPathStrongRelationLocks->count[fasthashcode] > 0)", File: "lock.c", Line: 2957) which the postmaster shortly later says this about: [53cd99b6.130e:2] LOG: server process (PID 5230) was terminated by signal 6: Abort trap [53cd99b6.130e:3] DETAIL: Failed process was running: ROLLBACK PREPARED 'foo1'; [53cd99b6.130e:4] LOG: terminating any other active server processes So there is still something rotten in the fastpath lock logic. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] IS NOT DISTINCT FROM + Indexing
"Jonathan S. Katz" writes: > On Jul 21, 2014, at 9:51 PM, Tom Lane wrote: >> The short reason why not is that it's not an operator (where "operator" >> is defined as "something with a pg_operator entry"), and all our indexing >> infrastructure is built around the notion that indexable clauses are of >> the form "indexed_column indexable_operator comparison_value". > What got me thinking this initially problem is that I know "IS NULL" is > indexable and I was unsure of how adding "IS NOT DISTINCT FROM" would be too > different from that - of course, this is from my perspective from primarily > operating on the surface. It sounds like the IS NULL work is in the btree > code? We hacked in IS [NOT] NULL as a potentially indexable construct, but the key thing that made that possible without major redesign is that IS [NOT] NULL is datatype independent, so there's no need to identify any particular underlying operator or opclass. I'm not sure what we'd do to handle IS [NOT] DISTINCT FROM, but that particular approach ain't gonna cut it. Another point is that people are unlikely to be satisfied with planner optimization for IS NOT DISTINCT FROM that doesn't support it as a join clause (i.e., tab1.col1 IS NOT DISTINCT FROM tab2.col2); which is an issue that doesn't arise for IS [NOT] NULL, as it has only one argument. So that brings you into not just indexability but hashing and merging support. I hasten to say that that doesn't necessarily have to happen in a version-zero patch; but trying to make IS NOT DISTINCT FROM into a first-class construct is a big project. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Stating the significance of Lehman & Yao in the nbtree README
On Tue, May 27, 2014 at 1:52 AM, Peter Geoghegan wrote:
> While talking to various people during pgCon, I was reminded that the
> nbtree README does a poor job of explaining the actual practical
> advantages of L&Y from a high level. The "move right" trick is
> initially mentioned only as an adjunct to a discussion of the
> special-case handling of root page splits, even though it's of huge
> significance. We also say this within nbtinsert.c:
>
> /*
>  * If the page was split between the time that we surrendered our read
>  * lock and acquired our write lock, then this page may no longer be the
>  * right place for the key we want to insert. In this case, we need to
>  * move right in the tree. See Lehman and Yao for an excruciatingly
>  * precise description.
>  */
>
> (Why we need to say something about _bt_moveright() within
> _bt_doinsert() that is equally true of both _bt_moveright() callers
> isn't clear).

There is a mention of the race condition where it needs to move right in the other caller (_bt_search) of _bt_moveright() as well:

/*
 * Race -- the page we just grabbed may have split since we read its
 * pointer in the parent (or metapage). If it has, we may need to
 * move right to its new sibling. Do that.
 ..

Do you think there is something beyond what is already mentioned on top of the second caller that we should add, or that, if the comment is true for both callers, it should be on top of _bt_moveright()?

> I think that this isn't enough. Attached patch proposes to add a small
> paragraph at the top of the nbtree README, to clarify the advantages
> of L&Y from a high level. I don't think it's appropriate to state the
> advantages of an algorithm in a README file generally, but in this
> instance it makes it easier to understand the algorithm.

In general, I agree with you that we should mention any advantage of the algorithm we are using, especially if it is significant. I think it will be better if we can also mention how that advantage or use is realized in our implementation, as we are already doing in the README of nbtree.

+
+ Even with these read locks, Lehman and Yao's approach obviates the
+ need of earlier schemes to hold multiple read locks concurrently when
+ descending the tree as part of servicing index scans (pessimistic lock
+ coupling). The addition of right-links at all levels, as well as the
+ addition of a page "high key", allows detection of, and dynamic
+ recovery from, concurrent page splits (that is, splits between
+ unlocking an internal page, and subsequently locking its child page
+ during a descent). L&Y trees are sometimes referred to as "B-Link
+ trees" in the literature.
+

The above indicates 2 things:
a. L&Y doesn't need to hold read locks concurrently.
b. The advantage of right-links at all levels and the page "high key".

As per my understanding, we are not following point (a) in our code, so what benefit do we get by referencing it in the README? Isn't it better to mention how point (b) is used in our code, together with its advantage, rather than keeping it at the top of the README?

The README already mentions the right-link and how it is used, in brief, as below:

".. The scan must remember the page's right-link at the time it was scanned, since that is the page to move right to; if we move right to the current right-link then we'd re-scan any items moved by a page split. ..."

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
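For reference, the "move right" recovery that the discussion above centers on can be stated abstractly: after landing on a page, a scan compares its search key against the page's high key and follows right-links until the key is within bounds. This is an illustrative model of the idea (pages as dictionaries, keys as integers), not the nbtree C code:

```python
# Abstract sketch of Lehman & Yao "move right" recovery after a possible
# concurrent page split. Illustration only, not PostgreSQL's nbtree code.

def move_right(page, key, pages):
    """Follow right-links until `key` is bounded by the page's high key.

    `pages` maps a page id to a page; a page is a dict with "high_key"
    (None on the rightmost page of its level) and "right_link".
    """
    while page["high_key"] is not None and key > page["high_key"]:
        # The page split after we read its pointer in the parent; the
        # key we want now lives on a right sibling.
        page = pages[page["right_link"]]
    return page
```

A split moves the upper part of a page's keys to a new right sibling and installs a high key on the original page, which is exactly what makes this recovery loop terminate at the correct page.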
Re: [HACKERS] small doccumentation fix in psql
> This patch does update the documentation as stated, and makes it
> consistent with reality and the embedded psql help. This is an
> improvement and I recommend its inclusion.
>
> I would also suggest moving the sentence at the end of the description:
>
> "\pset without any arguments displays the current status of all
> printing options."
>
> to the end of the first paragraph, before the detailed description of
> options, so as to comment directly on this non-mandatory "option".

Indeed, the new patch implements the suggestion, and I think this is an improvement, so I recommend its inclusion.

-- Fabien.
Re: [HACKERS] plpgsql.extra_warnings='num_into_expressions'
Hi

I looked at this patch and I think it is not a good idea. It introduces an early dependency between functions and pg_class-based objects. This check should not be integrated into function validation directly. We can integrate it into plpgsql_check.

Regards
Pavel

2014-07-21 22:56 GMT+02:00 Marko Tiikkaja :
> Hi again,
>
> Here's a patch which allows you to notice those annoying bugs with INTO
> slightly more quickly. Adding to the next commit fest.
>
> .marko
[HACKERS] how to skip building certain paths in path tree
How do I modify the PostgreSQL 9.3 source to skip certain join combinations (i.e., skip building certain paths in the path tree) by checking some condition?
Re: [HACKERS] how to skip building certain paths in path tree
On Tue, Jul 22, 2014 at 10:53 AM, Rajmohan C wrote: > How do I modify PostgreSQL 9.3 source to skip certain join combinations > (skip building certain paths in path tree) by checking some condition > have a look at joinpaths.c
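As a generic illustration of the idea (not PostgreSQL planner code - in the real source the analogous condition would go near the path-generation logic for a relation pair in joinpaths.c, as suggested above), a join enumerator can consult a predicate hook and simply decline to build paths for rejected combinations:

```python
# Generic sketch: enumerate candidate (outer, inner) join combinations,
# skipping any pair rejected by a caller-supplied condition. Illustration
# of the pruning concept only, not PostgreSQL's planner.

def join_pairs(rels, allow_pair):
    """All ordered (outer, inner) pairs the enumerator would consider,
    minus those rejected by the allow_pair hook."""
    pairs = []
    for outer in rels:
        for inner in rels:
            if outer == inner:
                continue                 # a relation cannot join with itself
            if not allow_pair(outer, inner):
                continue                 # "skip this combination" condition
            pairs.append((outer, inner))
    return pairs
```

In the planner, skipping a combination this early means no paths (and no costs) are ever computed for it, which is the effect the question asks for.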
Re: [HACKERS] Request for Patch Feedback: Lag & Lead Window Functions Can Ignore Nulls
On Thu, 2014-07-10 at 23:43 -0700, Jeff Davis wrote: > On Mon, 2014-07-07 at 01:21 -0700, Jeff Davis wrote: > > On Sun, 2014-07-06 at 21:11 -0700, Jeff Davis wrote: > > > On Wed, 2014-04-16 at 12:50 +0100, Nicholas White wrote: > > > > Thanks for the detailed feedback, I'm sorry it took so long to > > > > incorporate it. I've attached the latest version of the patch, fixing > > > > in particular: > > As innocent as this patch seemed at first, it actually opens up a lot of > questions. > > Attached is the (incomplete) edit of the patch so far. I haven't received much feedback so far on my changes, and I don't think I will finish hacking on this in the next few days, so I will mark it "returned with feedback". I don't think it's too far away, but my changes are substantial enough (and incomplete enough) that some feedback is required. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] gaussian distribution pgbench
Please find attached 2 patches, which are a split of the patch discussed in this thread. Please find attached a very minor improvement to apply a code (variable name) simplification directly in patch A so as to avoid a change in patch B. The cumulated patch is the same as previous. (A) add gaussian & exponential options to pgbench \setrandom the patch includes sql test files. There is no change in the *code* from previous already reviewed submissions, so I do not think that it needs another review on that account. However I have (yet again) reworked the *documentation* (for Andres Freund & Robert Haas), in particular both descriptions now follow the same structure (introduction, formula, intuition, rule of thumb and constraint). I have differentiated the concept and the option by putting the later in tags, and added a link to the corresponding wikipedia pages. Please bear in mind that: 1. English is not my native language. 2. this is not easy reading... this is maths, to read slowly:-) 3. word smithing contributions are welcome. I assume somehow that a user interested in gaussian & exponential distributions must know a little bit about probabilities... (B) add pgbench test variants with gauss & exponential. I have reworked the patch so as to avoid copy pasting the 3 test cases, as requested by Andres Freund, thus this is new, although quite simple, code. I have also added explanations in the documentation about how to interpret the "decile" outputs, so as to hopefully address Robert Haas comments. 
-- Fabien.

diff --git a/contrib/pgbench/README b/contrib/pgbench/README
new file mode 100644
index 000..6881256
--- /dev/null
+++ b/contrib/pgbench/README
@@ -0,0 +1,5 @@
+# gaussian and exponential tests
+# with XXX as "expo" or "gauss"
+psql test < test-init.sql
+./pgbench -M prepared -f test-XXX-run.sql -t 100 -P 1 -n test
+psql test < test-XXX-check.sql
diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 4aa8a50..379ef24 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -41,6 +41,7 @@
 #include
 #include
 #include
+#include
 #ifdef HAVE_SYS_SELECT_H
 #include
 #endif
@@ -98,6 +99,8 @@ static int pthread_join(pthread_t th, void **thread_return);
 #define LOG_STEP_SECONDS 5		/* seconds between log messages */
 #define DEFAULT_NXACTS 10		/* default nxacts */

+#define MIN_GAUSSIAN_THRESHOLD 2.0	/* minimum threshold for gauss */
+
 int			nxacts = 0;			/* number of transactions per client */
 int			duration = 0;		/* duration in seconds */
@@ -471,6 +474,76 @@ getrand(TState *thread, int64 min, int64 max)
 	return min + (int64) ((max - min + 1) * pg_erand48(thread->random_state));
 }

+/*
+ * random number generator: exponential distribution from min to max inclusive.
+ * the threshold is so that the density of probability for the last cut-off max
+ * value is exp(-threshold).
+ */
+static int64
+getExponentialrand(TState *thread, int64 min, int64 max, double threshold)
+{
+	double		cut, uniform, rand;
+
+	assert(threshold > 0.0);
+	cut = exp(-threshold);
+	/* erand in [0, 1), uniform in (0, 1] */
+	uniform = 1.0 - pg_erand48(thread->random_state);
+	/*
+	 * inner expression in (cut, 1] (if threshold > 0),
+	 * rand in [0, 1)
+	 */
+	assert((1.0 - cut) != 0.0);
+	rand = - log(cut + (1.0 - cut) * uniform) / threshold;
+	/* return int64 random number between min and max */
+	return min + (int64) ((max - min + 1) * rand);
+}
+
+/* random number generator: gaussian distribution from min to max inclusive */
+static int64
+getGaussianrand(TState *thread, int64 min, int64 max, double threshold)
+{
+	double		stdev;
+	double		rand;
+
+	/*
+	 * Get user specified random number from this loop, with
+	 * -threshold < stdev <= threshold
+	 *
+	 * This loop is executed until the number is in the expected range.
+	 *
+	 * As the minimum threshold is 2.0, the probability of looping is low:
+	 * sqrt(-2 ln(r)) <= 2 => r >= e^{-2} ~ 0.135, then when taking the average
+	 * sinus multiplier as 2/pi, we have a 8.6% looping probability in the
+	 * worst case. For a 5.0 threshold value, the looping probability
+	 * is about e^{-5} * 2 / pi ~ 0.43%.
+	 */
+	do
+	{
+		/*
+		 * pg_erand48 generates [0,1), but for the basic version of the
+		 * Box-Muller transform the two uniformly distributed random numbers
+		 * are expected in (0, 1] (see http://en.wikipedia.org/wiki/Box_muller)
+		 */
+		double		rand1 = 1.0 - pg_erand48(thread->random_state);
+		double		rand2 = 1.0 - pg_erand48(thread->random_state);

+		/* Box-Muller basic form transform */
+		double		var_sqrt = sqrt(-2.0 * log(rand1));
+		stdev = var_sqrt * sin(2.0 * M_PI * rand2);

+		/*
+		 * we may try with cos, but there may be a bias induced if the previous
+		 * value fails the test? To be on the safe side, let us try over.
+		 */
+	}
+	while (stdev < -threshold || stdev >= threshold);

+	/* stdev is in [-threshold, threshold), normalization to [0,1) */
+	rand = (stdev + thresho
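To make the math in the (truncated) patch above easier to check, here is a standalone Python restatement of the two transforms. Variable names follow the C code; this is a verification sketch, not part of the patch:

```python
# Python restatement of getExponentialrand()/getGaussianrand() from the
# pgbench patch above, for checking the math. `rng` plays pg_erand48's
# role of producing uniform values in [0, 1).

import math
import random

def exponential_rand(lo, hi, threshold, rng=random.random):
    assert threshold > 0.0
    cut = math.exp(-threshold)
    uniform = 1.0 - rng()                       # (0, 1]
    # inner expression in (cut, 1], so r lands in [0, 1)
    r = -math.log(cut + (1.0 - cut) * uniform) / threshold
    return lo + int((hi - lo + 1) * r)

def gaussian_rand(lo, hi, threshold, rng=random.random):
    assert threshold >= 2.0                     # MIN_GAUSSIAN_THRESHOLD
    while True:
        u1 = 1.0 - rng()                        # (0, 1] for Box-Muller
        u2 = 1.0 - rng()
        stdev = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
        if -threshold <= stdev < threshold:     # accept; else loop again
            break
    r = (stdev + threshold) / (threshold * 2.0) # normalize to [0, 1)
    return lo + int((hi - lo + 1) * r)
```

Both functions map a value in [0, 1) onto the integer range [lo, hi], so every result stays within bounds by construction; that is the property worth verifying against the decile outputs the patch adds.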
Re: [HACKERS] [bug fix] Suppress "autovacuum: found orphan temp table" message
On 2014-07-22 10:09:04 +0900, MauMau wrote: > >From: "Andres Freund" > >>On 2014-07-18 23:38:09 +0900, MauMau wrote: > >>>So, I propose a simple fix to change the LOG level to DEBUG1. I don't > >>>know > >>>which of DEBUG1-DEBUG5 is appropriate, and any level is OK. Could you > >>>include this in 9.2.9? > >> > >>Surely that's the wrong end to tackle this from. Hiding actual problems > >>is a seriously bad idea. > > > >No, there is no serious problem in the user operation in this situation. > >Server crash cannot be avoided, and must be anticipated. The problem is > >that PostgreSQL makes users worried about lots of (probably) unnecessary > >messages. > > Is there any problem if we don't output the message? Yes. The user won't know that possibly gigabytes worth of diskspace aren't freed. Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] plpgsql.extra_warnings='num_into_expressions'
On 7/22/14, 7:06 AM, Pavel Stehule wrote: I looked on this patch and I am thinking so it is not a good idea. It introduce early dependency between functions and pg_class based objects. What dependency? The patch only looks at the raw parser output, so it won't e.g. know whether SELECT * INTO a, b FROM foo; is problematic or not. .marko -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Index-only scans for multicolumn GIST
On 07/21/2014 10:47 PM, Anastasia Lubennikova wrote:
> Hi, hackers!
> There are new results of my work on GSoC project "Index-only scans for GIST".
> Previous post is here:
> http://postgresql.1045698.n5.nabble.com/Index-only-scans-for-GIST-td5804892.html
> Repository is https://github.com/lubennikovaav/postgres/tree/indexonlygist2
> Patch is in attachments. It includes indexonlyscan for multicolumn GiST.
> It works correctly - results are the same with another scan plans.
> Fetch() method is realized for box and circle opclasses
> Improvement for circle opclass is not such distinct as for box opclass,
> because of recheck.

For a circle, the GiST index stores a bounding box of the circle. The new fetch function reverses that, calculating the radius and center of the circle from the bounding box. Those conversions lose some precision due to rounding. Are we okay with that? Floating point math is always subject to rounding errors, but there's a good argument to be made that it's not acceptable to get a different value back when the user didn't explicitly invoke any math functions.

As an example:

create table circle_tbl (c circle);
create index circle_tbl_idx on circle_tbl using gist (c);
insert into circle_tbl values ('1.23456789012345,1.23456789012345,1e300');

postgres=# set enable_seqscan=off; set enable_bitmapscan=off; set enable_indexonlyscan=on;
SET
SET
SET
postgres=# select * from circle_tbl ;
       c
----------------
 <(0,0),1e+300>
(1 row)

postgres=# set enable_indexonlyscan=off;
SET
postgres=# select * from circle_tbl ;
                      c
----------------------------------------------
 <(1.23456789012345,1.23456789012345),1e+300>
(1 row)

- Heikki
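The precision loss Heikki demonstrates can be reproduced outside the server. This sketch (illustrative Python, not the GiST fetch code) models storing a circle as its bounding box and then recomputing center and radius from that box, as a Fetch function would:

```python
# Model the circle -> bounding box -> circle round trip in IEEE doubles.
# Illustration of the rounding issue only, not PostgreSQL code.

def circle_to_bbox(cx, cy, r):
    """What the GiST index stores: the circle's bounding box."""
    return (cx - r, cy - r, cx + r, cy + r)

def bbox_to_circle(x1, y1, x2, y2):
    """What a fetch would recompute: midpoint and half the box side."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, (x2 - x1) / 2.0)

cx, cy, r = 1.23456789012345, 1.23456789012345, 1e300
cx2, cy2, r2 = bbox_to_circle(*circle_to_bbox(cx, cy, r))
# With a 1e300 radius, cx - r and cx + r both round to +/-1e300 (the tiny
# center coordinate is far below one ulp of 1e300), so the recomputed
# center is exactly (0, 0) - matching the index-only scan's output above.
```

The round trip is exact only when the additions and subtractions happen to be representable; with operands of wildly different magnitudes, as in the 1e300 example, the original center is unrecoverable.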