[HACKERS] Missing importing option of postgres_fdw
Hi,

I noticed that there is no postgres_fdw option to control whether check
constraints on remote tables are included in the definitions of foreign tables
imported from a remote PG server when performing IMPORT FOREIGN SCHEMA, while
we now allow check constraints on foreign tables.  Attached is a patch for
that.  I'll add this to the next CF.

Best regards,
Etsuro Fujita

*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
***************
*** 3415,3420 ****
--- 3415,3421 ----
  CREATE TABLE import_source.t2 (c1 int default 42, c2 varchar NULL, c3 text collate "POSIX");
  CREATE TYPE typ1 AS (m1 int, m2 varchar);
  CREATE TABLE import_source.t3 (c1 timestamptz default now(), c2 typ1);
  CREATE TABLE import_source."x 4" (c1 float8, "C 2" text, c3 varchar(42));
+ ALTER TABLE import_source."x 4" ADD CONSTRAINT c1positive CHECK (c1 >= 0);
  CREATE TABLE import_source."x 5" (c1 float8);
  ALTER TABLE import_source."x 5" DROP COLUMN c1;
  CREATE SCHEMA import_dest1;
***************
*** 3462,3467 ****
--- 3463,3470 ----
  FDW Options: (schema_name 'import_source', table_name 't3')
   c1  | double precision      |  | (column_name 'c1')
   C 2 | text                  |  | (column_name 'C 2')
   c3  | character varying(42) |  | (column_name 'c3')
+ Check constraints:
+     "c1positive" CHECK (c1 >= 0::double precision)
  Server: loopback
  FDW Options: (schema_name 'import_source', table_name 'x 4')
***************
*** 3518,3523 ****
--- 3521,3528 ----
  FDW Options: (schema_name 'import_source', table_name 't3')
   c1  | double precision      |  | (column_name 'c1')
   C 2 | text                  |  | (column_name 'C 2')
   c3  | character varying(42) |  | (column_name 'c3')
+ Check constraints:
+     "c1positive" CHECK (c1 >= 0::double precision)
  Server: loopback
  FDW Options: (schema_name 'import_source', table_name 'x 4')
***************
*** 3529,3535 ****
  FDW Options: (schema_name 'import_source', table_name 'x 5')
  CREATE SCHEMA import_dest3;
  IMPORT FOREIGN SCHEMA import_source FROM SERVER loopback INTO import_dest3
!   OPTIONS (import_collate 'false', import_not_null 'false');
  \det+ import_dest3
           List of foreign tables
   Schema | Table | Server | FDW Options | Description
--- 3534,3540 ----
  FDW Options: (schema_name 'import_source', table_name 'x 5')
  CREATE SCHEMA import_dest3;
  IMPORT FOREIGN SCHEMA import_source FROM SERVER loopback INTO import_dest3
!   OPTIONS (import_collate 'false', import_not_null 'false', import_check 'false');
  \det+ import_dest3
           List of foreign tables
   Schema | Table | Server | FDW Options | Description
*** a/contrib/postgres_fdw/postgres_fdw.c
--- b/contrib/postgres_fdw/postgres_fdw.c
***************
*** 2584,2589 **** postgresImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid)
--- 2584,2590 ----
  	bool		import_collate = true;
  	bool		import_default = false;
  	bool		import_not_null = true;
+ 	bool		import_check = true;
  	ForeignServer *server;
  	UserMapping *mapping;
  	PGconn	   *conn;
***************
*** 2604,2609 **** postgresImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid)
--- 2605,2612 ----
  			import_default = defGetBoolean(def);
  		else if (strcmp(def->defname, "import_not_null") == 0)
  			import_not_null = defGetBoolean(def);
+ 		else if (strcmp(def->defname, "import_check") == 0)
+ 			import_check = defGetBoolean(def);
  		else
  			ereport(ERROR,
  					(errcode(ERRCODE_FDW_INVALID_OPTION_NAME),
***************
*** 2824,2829 **** postgresImportForeignSchema(ImportForeignSchemaStmt *stmt, Oid serverOid)
--- 2827,2929 ----
  	/* Clean up */
  	PQclear(res);
  	res = NULL;
+
+ 	/*
+ 	 * Fetch all table (and column) check constraint data from this schema,
+ 	 * possibly restricted by EXCEPT or LIMIT TO.
+ 	 */
+ 	if (import_check)
+ 	{
+ 		resetStringInfo(&buf);
+
+ 		appendStringInfoString(&buf,
+ 							   "SELECT relname, "
+ 							   "  conname, "
+ 							   "  pg_get_constraintdef(r.oid) "
+ 							   "FROM pg_class c "
+ 							   "  JOIN pg_namespace n ON "
+ 							   "    relnamespace = n.oid "
+ 							   "  JOIN pg_constraint r ON "
+ 							   "    conrelid = c.oid AND contype = 'c' ");
+
+ 		appendStringInfoString(&buf,
+ 							   "WHERE c.relkind IN ('r', 'f') "
+ 							   "  AND n.nspname = ");
+ 		deparseStringLiteral(&buf, stmt->remote_schema);
+
+ 		/* Apply restrictions for LIMIT TO and EXCEPT */
+ 		if (stmt->list_type == FDW_IMPORT_SCHEMA_LIMIT_TO ||
+ 			stmt->list_type == FDW_IMPORT_SCHEMA_EXCEPT)
+ 		{
+ 			bool		first_item = true;
+
+ 			appendStringInfoString(&buf, " AND c.relname ");
+ 			if (stmt->list_type == FDW_IMPORT_SCHEMA_EXCEPT)
+ 				appendStringInfoString(&buf, "NOT ");
+ 			appendStringInfoString(&buf, "IN (");
+
+ 			/* Append list of table names within IN clause */
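For illustration, the LIMIT TO / EXCEPT filtering that the patch appends to its remote query can be sketched as follows. This is a Python model of the string-building logic, not the actual C code; the function name and the inline literal-quoting (a stand-in for deparseStringLiteral) are hypothetical:

```python
def relname_filter(table_names, except_mode=False):
    """Build the "AND c.relname [NOT] IN (...)" fragment the patch appends
    for LIMIT TO / EXCEPT, quoting each table name as a SQL string literal
    (single quotes doubled, as deparseStringLiteral would)."""
    items = ", ".join("'" + name.replace("'", "''") + "'" for name in table_names)
    return "AND c.relname " + ("NOT " if except_mode else "") + "IN (" + items + ")"
```

With LIMIT TO the names are kept; with EXCEPT the same list is simply negated, which is why the patch reuses one code path for both list types.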
[HACKERS] Missing psql tab-completion for ALTER FOREIGN TABLE
Here is a patch to add missing tab-completion for ALTER FOREIGN TABLE.  I'll
add this to the next CF.

Best regards,
Etsuro Fujita

*** a/src/bin/psql/tab-complete.c
--- b/src/bin/psql/tab-complete.c
***************
*** 1128,1134 **** psql_completion(const char *text, int start, int end)
  			 pg_strcasecmp(prev2_wd, "TABLE") == 0)
  	{
  		static const char *const list_ALTER_FOREIGN_TABLE[] =
! 		{"ALTER", "DROP", "RENAME", "OWNER TO", "SET SCHEMA", NULL};
  
  		COMPLETE_WITH_LIST(list_ALTER_FOREIGN_TABLE);
  	}
--- 1128,1136 ----
  			 pg_strcasecmp(prev2_wd, "TABLE") == 0)
  	{
  		static const char *const list_ALTER_FOREIGN_TABLE[] =
! 		{"ADD", "ALTER", "DISABLE TRIGGER", "DROP", "ENABLE", "INHERIT",
! 		 "NO INHERIT", "OPTIONS", "OWNER TO", "RENAME", "SET",
! 		 "VALIDATE CONSTRAINT", NULL};
  
  		COMPLETE_WITH_LIST(list_ALTER_FOREIGN_TABLE);
  	}

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
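The completion behavior this patch extends can be modeled simply: COMPLETE_WITH_LIST offers the candidates from the array that match the word being typed, case-insensitively. A small illustrative Python sketch (not psql's C code):

```python
# Keyword list from the patch above.
ALTER_FOREIGN_TABLE_KEYWORDS = [
    "ADD", "ALTER", "DISABLE TRIGGER", "DROP", "ENABLE", "INHERIT",
    "NO INHERIT", "OPTIONS", "OWNER TO", "RENAME", "SET",
    "VALIDATE CONSTRAINT",
]


def complete(prefix, candidates=ALTER_FOREIGN_TABLE_KEYWORDS):
    """Case-insensitive prefix match, roughly what COMPLETE_WITH_LIST
    feeds to readline for the word under the cursor."""
    return [c for c in candidates if c.lower().startswith(prefix.lower())]
```

For example, after `ALTER FOREIGN TABLE tab `, typing `di` would now offer `DISABLE TRIGGER`, which the old five-entry list could not.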
Re: [HACKERS] Missing importing option of postgres_fdw
On Mon, Apr 27, 2015 at 3:15 PM, Etsuro Fujita
fujita.ets...@lab.ntt.co.jp wrote:
> Hi, I noticed that there is no postgres_fdw option to control whether check
> constraints on remote tables are included in the definitions of foreign
> tables imported from a remote PG server when performing IMPORT FOREIGN
> SCHEMA, while we now allow check constraints on foreign tables.  Attached
> is a patch for that.  I'll add this to the next CF.

I guess that the addition of this option makes sense, but I think that this
patch is doing it wrong by using ALTER FOREIGN TABLE and by changing the
queries authorized in ImportForeignSchema().  The point of IMPORT FOREIGN
SCHEMA is to authorize only CREATE FOREIGN TABLE and not other types of
queries, not to mention that CREATE FOREIGN TABLE supports the definitions of
constraints at table and column-level.  Logically, this patch should just
create diffs with postgres_fdw and nothing else.

Regards,
-- 
Michael
[HACKERS] PATCH: remove nclients/nthreads constraint from pgbench
Remove the pgbench constraint that the number of clients must be a multiple of
the number of threads, by sharing clients among threads as evenly as possible.

Rationale: this allows testing the database load with any number of clients,
free of an unrelated constraint.  The current setting means, for instance,
that testing with a prime number of clients always uses one thread, whereas
other numbers of clients can use a different number of threads.  This
constraint does not make much sense.

-- 
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index a808546..2517a3a 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -326,8 +326,7 @@ pgbench <optional> <replaceable>options</> </optional> <replaceable>dbname</>
       <para>
        Number of worker threads within <application>pgbench</application>.
        Using more than one thread can be helpful on multi-CPU machines.
-       The number of clients must be a multiple of the number of threads,
-       since each thread is given the same number of client sessions to manage.
+       Clients are distributed as evenly as possible among available threads.
        Default is 1.
       </para>
      </listitem>
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 06a4dfd..36ff427 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -2762,6 +2762,7 @@ main(int argc, char **argv)
 	int			c;
 	int			nclients = 1;	/* default number of simulated clients */
 	int			nthreads = 1;	/* default number of threads */
+	int			nclients_shared;	/* clients shared between threads */
 	int			is_init_mode = 0;	/* initialize mode? */
 	int			is_no_vacuum = 0;	/* no vacuum at all before testing? */
 	int			do_vacuum_accounts = 0;	/* do vacuum accounts before testing? */
@@ -3122,12 +3123,6 @@ main(int argc, char **argv)
 	if (nxacts <= 0 && duration <= 0)
 		nxacts = DEFAULT_NXACTS;
 
-	if (nclients % nthreads != 0)
-	{
-		fprintf(stderr, "number of clients (%d) must be a multiple of number of threads (%d)\n",
-				nclients, nthreads);
-		exit(1);
-	}
-
 	/* --sampling-rate may be used only with -l */
 	if (sample_rate > 0.0 && !use_log)
 	{
@@ -3328,19 +3323,24 @@ main(int argc, char **argv)
 	/* set up thread data structures */
 	threads = (TState *) pg_malloc(sizeof(TState) * nthreads);
 
+	nclients_shared = 0;
+
 	for (i = 0; i < nthreads; i++)
 	{
 		TState	   *thread = &threads[i];
 
 		thread->tid = i;
-		thread->state = &state[nclients / nthreads * i];
-		thread->nstate = nclients / nthreads;
+		thread->state = &state[nclients_shared];
+		thread->nstate =
+			(nclients - nclients_shared + nthreads - i - 1) / (nthreads - i);
 		thread->random_state[0] = random();
 		thread->random_state[1] = random();
 		thread->random_state[2] = random();
 		thread->throttle_latency_skipped = 0;
 		thread->latency_late = 0;
 
+		nclients_shared += thread->nstate;
+
 		if (is_latencies)
 		{
 			/* Reserve memory for the thread to store per-command latencies */
@@ -3364,6 +3364,9 @@ main(int argc, char **argv)
 		}
 	}
 
+	/* all clients must be shared between threads */
+	Assert(nclients_shared == nclients);
+
 	/* get start up time */
 	INSTR_TIME_SET_CURRENT(start_time);
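The even-distribution formula in this patch gives each thread the ceiling of remaining-clients over remaining-threads, so counts never differ by more than one and always sum to the client total. A small Python model of that arithmetic (illustrative only; the real code is the C above):

```python
def distribute_clients(nclients, nthreads):
    """Share nclients among nthreads as evenly as possible, mirroring the
    patch: thread i gets ceil(remaining / threads_left), computed with the
    integer trick (remaining + threads_left - 1) // threads_left."""
    counts = []
    assigned = 0
    for i in range(nthreads):
        nstate = (nclients - assigned + nthreads - i - 1) // (nthreads - i)
        counts.append(nstate)
        assigned += nstate  # corresponds to nclients_shared in the patch
    return counts
```

For example, 7 clients over 3 threads yields 3/2/2 instead of the old behavior of rejecting the combination outright.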
Re: [HACKERS] PL/pgSQL, RAISE and error context
On 4/27/15 11:47 AM, Joel Jacobson wrote:
> On Mon, Apr 27, 2015 at 6:14 PM, Marko Tiikkaja <ma...@joh.to> wrote:
>> That sounds weird.  log_min_messages are the messages sent to the log;
>> client_min_messages are sent to the client.  context_min_messages are not
>> sent to a context, whatever that would mean.
>
> Good point.  I think it can't be any clearer than the proposed
> plpgsql.display_context_min_messages

client_min_context.  It's doing the same thing as min_messages does, just for
context instead of the message.  Or does this affect client and log the same
way?
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Reducing tuple overhead
On 4/25/15 12:12 AM, Amit Kapila wrote:
>> ... which isn't possible.  You can not go from a heap tuple to an index
>> tuple.
>
> We will have the access to index value during delete, so why do you think
> that we need linkage between heap and index tuple to perform Delete
> operation?  I think we need to think more to design Delete .. by CTID, but
> that should be doable.

The problem with just having the value is that if *anything* changes between
how you evaluated the value when you created the index tuple and when you
evaluate it a second time you'll corrupt your index.  This is actually an
incredibly easy problem to have; witness how we allowed indexing
timestamptz::date until very recently.  That was clearly broken, but because
we never attempted to re-run the index expression to do vacuuming at least we
never corrupted the index itself.

>> This has been discussed in the past.
>
> I have tried to search in archive, but not getting what is the exact
> problem.

Unfortunately I can't find prior discussion now either... :/
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Temporal extensions
On 4/25/15 7:49 PM, Dave Jones wrote:
> I've been working on a conversion of several utilities I wrote for another
> engine, and was wondering if there was any interest in seeing any of them
> added to contrib/ at some point in the vague undefined future?

Not in contrib, no, because there's no reason for these to be tied to
specific versions of Postgres.  Adding to PGXN would make sense though.

(Though, I dislike using timestamps to do change/history tracking, but that's
just me...)
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] Feedback on getting rid of VACUUM FULL
On 4/25/15 6:30 AM, Simon Riggs wrote:
> On 24 April 2015 at 22:36, Jim Nasby <jim.na...@bluetreble.com> wrote:
>> Instead of adding forcefsm, I think it would be more useful to accept a
>> target block number.  That way we can actually control where the new tuple
>> goes.  For this particular case we'd presumably go with normal FSM page
>> selection logic, but someone could choose to do something more
>> sophisticated if they wanted.
>>
>> [1] http://postgresql.org/message-id/3409.1253147...@sss.pgh.pa.us
>> [2] http://postgresql.org/message-id/3631.1253149...@sss.pgh.pa.us
>
> I don't think specifying exact blocks will help, it will get us in more
> trouble in the long run.  I think we need to be able to specify these
> update placement strategies ... and these new block selection strategies
> ...  We can also design a utility to actively use TARGBLOCK_NEW and
> FSM_SHRINK to reduce table size without blocking writes.

I generally agree, but was trying to keep the scope on this more manageable.
A first step in this direction is just providing a method to move a specific
tuple to a specific page; if there's no room there throw an error.  Having
some kind of SQL level support for that will be a lot easier than adding
those other modes to the FSM, and will at least allow users to deal with
bloat themselves.

> But this is all stuff for 9.6...

Definitely. :)
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff
On 2015-04-27 16:28:49 +0200, Andres Freund wrote:
> On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote:
>> * So far, there has been a lack of scrutiny about what the patch does in
>> the rewriter (in particular, to support the EXCLUDED.* pseudo-alias
>> expression) and optimizer (the whole concept of an auxiliary query/plan
>> that share a target RTE, and later target ResultRelation).  If someone
>> took a close look at that, it would be most helpful.  ruleutils.c is also
>> modified for the benefit of EXPLAIN output.  This all applies only to the
>> ON CONFLICT UPDATE patch.  A committer could push out the IGNORE patch
>> before this was 100% firm.
>
> I'm far from an expert on the relevant regions; but I'm starting to look
> nonetheless.  I have to say that on a preliminary look it looks more
> complicated than it has to.

So, I'm looking.  And I've a few questions:

* Why do we need to spread knowledge about speculative inserts that wide?
  It's now in 1) Query, 2) ParseState 3) ModifyTable 4) InsertStmt.  That
  seems a bit wide - and as far as I see not really necessary.  That e.g.
  transformUpdateStmt() has if (!pstate->p_is_speculative) blocks seems
  wrong.

* afaics 'auxiliary query' isn't something we have under that name.  This
  isn't really different to what wCTEs do, so I don't think we need this
  term here.

* You refer to 'join like syntax' in a couple places.  I don't really see
  the current syntax being that join like?  Is that just a different
  perception of terminology or is that just remnants of earlier approaches?

* I am rather unconvinced we need the whole 'ExcludedVar' mechanism.  I
  don't really see why we need it at all?  Can't we 'just' make those plain
  vars referring to the normal targetlist entry?  I feel like I'm missing
  something here.

* The whole dealing with the update clause doesn't seem super clean.  I'm
  not sure yet how to do it nicer though :(

Greetings,

Andres Freund
-- 
 Andres Freund	                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] INSERT ... ON CONFLICT syntax issues
On 4/25/15 2:05 PM, Peter Geoghegan wrote:
> Note that the syntax is quite similar to the SQLite syntax of the same
> feature, that has ON CONFLICT IGNORE (it also has ON CONFLICT REPLACE, but
> not ON CONFLICT UPDATE).

I don't know anything about SQLite's syntax, but from the online syntax
diagrams

https://www.sqlite.org/lang_insert.html
https://www.sqlite.org/syntax/conflict-clause.html

it appears that they are using quite a different syntax.  The ON CONFLICT
clause is attached to a constraint, specifying the default action for that
constraint.  The INSERT command can then override this default choice.  I
think.
Re: [HACKERS] Missing importing option of postgres_fdw
On 2015/04/27 15:51, Michael Paquier wrote:
> On Mon, Apr 27, 2015 at 3:15 PM, Etsuro Fujita
> fujita.ets...@lab.ntt.co.jp wrote:
>> I noticed that there is no postgres_fdw option to control whether check
>> constraints on remote tables are included in the definitions of foreign
>> tables imported from a remote PG server when performing IMPORT FOREIGN
>> SCHEMA, while we now allow check constraints on foreign tables.
>
> I guess that the addition of this option makes sense, but I think that this
> patch is doing it wrong by using ALTER FOREIGN TABLE and by changing the
> queries authorized in ImportForeignSchema().  The point of IMPORT FOREIGN
> SCHEMA is to authorize only CREATE FOREIGN TABLE and not other types of
> queries, not to mention that CREATE FOREIGN TABLE supports the definitions
> of constraints at table and column-level.

I don't think so.  IMO, I think it'd be more convenient and flexible to allow
ALTER FOREIGN TABLE for creating foreign table definitions during
ImportForeignSchema().

Thanks for the comment!

Best regards,
Etsuro Fujita
Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)
Hi Ashutosh,

Thanks for the review.

On 2015/04/22 19:28, Ashutosh Bapat <ashutosh.ba...@enterprisedb.com> wrote:
> Tests
> ---
> 1. The postgres_fdw test is re/setting enable_mergejoin at various places.
> The goal of these tests seems to be to test the sanity of foreign plans
> generated.  So, it might be better to reset enable_mergejoin (and may be
> all of enable_hashjoin, enable_nestloop_join etc.) to false at the
> beginning of the testcase and set them again at the end.  That way, we will
> also make sure that foreign plans are chosen irrespective of future planner
> changes.

I have a different, rather opposite opinion about it.  I disabled other join
types only as far as needed for the tests to pass, because I worry about
oversights coming from planner changes.  I hope to eliminate enable_foo from
the test script by making the costing model smarter.

> 2. In the patch, I see that the inheritance testcases have been deleted
> from postgres_fdw.sql, is that intentional?  I do not see those being
> replaced anywhere else.

It's an accidental removal; I restored the tests about the inheritance
feature.

> 3. We need one test for each join type (or at least for INNER and LEFT
> OUTER) where there are unsafe to push conditions in ON clause along-with
> safe-to-push conditions.  For INNER join, the join should get pushed down
> with the safe conditions and for OUTER join it shouldn't be.  Same goes for
> WHERE clause, in which case the join will be pushed down but the
> unsafe-to-push conditions will be applied locally.

Currently INNER JOINs with unsafe join conditions are not pushed down, so
such a test is not in the suite.  As you say, in theory, INNER JOINs can be
pushed down even if they have push-down-unsafe join conditions, because such
conditions can be evaluated on the local side against rows retrieved without
those conditions.

> 4. All the tests have ORDER BY, LIMIT in them, so the setref code is being
> exercised.  But, something like aggregates would test the setref code
> better.  So, we should add at-least one test like select avg(ft1.c1 +
> ft2.c2) from ft1 join ft2 on (ft1.c1 = ft2.c1).

Added an aggregate case, and also added a UNION case for Append.

> 5. It will be good to add some test which contain join between few foreign
> and few local tables to see whether we are able to push down the largest
> possible foreign join tree to the foreign server.
>
> Code
> ---
> In classifyConditions(), the code is now appending RestrictInfo::clause
> rather than RestrictInfo itself.  But the callers of classifyConditions()
> have not changed.  Is this change intentional?

Yes, the purpose of the change is to make appendConditions (former name is
appendWhereClause) able to handle JOIN ON clauses, a list of Expr.  The
functions which consume the lists produced by this function handle
expressions as well as RestrictInfo, so you may not have noticed it.

> Because of this change, we might be missing some optimizations e.g. in
> function postgresGetForeignPlan()
>
> 793         if (list_member_ptr(fpinfo->remote_conds, rinfo))
> 794             remote_conds = lappend(remote_conds, rinfo->clause);
> 795         else if (list_member_ptr(fpinfo->local_conds, rinfo))
> 796             local_exprs = lappend(local_exprs, rinfo->clause);
> 797         else if (is_foreign_expr(root, baserel, rinfo->clause))
> 798             remote_conds = lappend(remote_conds, rinfo->clause);
> 799         else
> 800             local_exprs = lappend(local_exprs, rinfo->clause);
>
> Finding a RestrictInfo in remote_conds avoids another call to
> is_foreign_expr().  So with this change, I think we are doing an extra call
> to is_foreign_expr().

Hm, it seems better to revert my change and make appendConditions downcast
given information into RestrictInfo or Expr according to the node tag.

> The function get_jointype_name() returns an empty string for unsupported
> join types.  Instead of that it should throw an error, if some code path
> accidentally calls the function with unsupported join type e.g. SEMI_JOIN.

Agreed, fixed.

> While deparsing the SQL with rowmarks, the placement of FOR UPDATE/SHARE
> clause in the original query is not being honored, which means that we will
> end up locking the rows which are not part of the join result even when the
> join is pushed to the foreign server.  E.g take the following query (it
> uses the tables created in postgres_fdw.sql tests)
>
> contrib_regression=# explain verbose select * from ft1 join ft2 on
> (ft1.c1 = ft2.c1) for update of ft1;
>                                      QUERY PLAN
Re: [HACKERS] Fwd: [GENERAL] 4B row limit for CLOB tables
On 4/25/15 1:19 PM, Bruce Momjian wrote:
> Note if you are storing a table with rows that exceed 2KB in size
> (aggregate size of each row) then the Maximum number of rows in a table may
> be limited to 4 Billion, see TOAST.

That's not accurate though; you could be limited to far less than 4B rows.
If each row has 10 fields that toast, you'd be limited to just 400M rows.
-- 
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
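The arithmetic behind that tighter bound: each out-of-line value consumes one 32-bit OID (chunk_id) in the table's single TOAST table, so the worst-case row count divides the OID space by the number of toasted fields per row. A quick back-of-the-envelope sketch (illustrative only):

```python
OID_SPACE = 2 ** 32  # TOAST chunk_id values are 32-bit OIDs


def worst_case_rows(toasted_fields_per_row):
    """Upper bound on rows when every row stores this many values
    out of line in the table's TOAST table."""
    return OID_SPACE // toasted_fields_per_row
```

With 10 toasted fields per row this gives about 429M rows, the "just 400M" figure quoted loosely above.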
Re: [HACKERS] collate.linux.utf8 test coverage
On Sat, Apr 25, 2015 at 11:36 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
> On Sat, Apr 25, 2015 at 4:51 AM, Andrew Dunstan <and...@dunslane.net> wrote:
>> The optional buildfarm module that runs this test was broken by commit
>> dcae5faccab64776376d354decda0017c648bb53.  Since nobody has responded to
>> my complaint about this, I have disabled it on crake, the only buildfarm
>> machine that has actually been running the test, so we now have no
>> buildfarm coverage for it.
>
> Commit dcae5fac has switched in pg_regress --temp-install to
> --temp-instance, --psqldir to --bindir, and has removed --top-builddir and
> --extra-install.  After looking at TestCollateLinuxUTF8.pm, I think that
> you need to use the one-liner attached.

I am giving a shot at this test in dangomushi by doing the following:
- Add TestCollateLinuxUTF8 in the list of modules
- Add 'en_US.utf8' in the list of locales
- Enable tr_TR.utf8, sv_SE.utf8 and de_DE.utf8
- Patching the module with the patch upthread

As far as I can see the patch is working; I just missed installing a couple
of locales on this machine, making the test fail the first time, but things
will hopefully stabilize soon.

Regards,
-- 
Michael
Re: [HACKERS] forward vs backward slashes in msvc build code
On Mon, Apr 27, 2015 at 9:20 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
> On Mon, Apr 27, 2015 at 8:53 AM, Andrew Dunstan <and...@dunslane.net> wrote:
>> On 04/26/2015 04:08 PM, Noah Misch wrote:
>>> On Sun, Apr 26, 2015 at 10:23:58AM -0400, Andrew Dunstan wrote:
>>>> Setting up a VS2008 environment is hard these days, as MS seems to have
>>>> removed the download :-(
>>>
>>> Visual C++ 2008 Express Edition:
>>> http://go.microsoft.com/?linkid=7729279
>>
>> Oh, cool. Thanks.
>
> Thanks, I'll try to set up an environment and come up with a real patch
> tested this time.  What I suspect is that we need to correctly replace all
> the references to backslashes like '\\' or $obj =~ s/\\/_/g; to make things
> work correctly.

So, I have set up an environment with VC tools, and have been able to
reproduce the failure.  The problems were less trivial than I thought first:
1) The file tree list was not correctly generated, the build script
generating the vcproj file missing tree dependencies when listing items in
Filter.  For example, for a path like src/backend/access, it missed to list
src, backend and access, listing only the files in such paths.
2) I noticed that VC builds do not like *at all* file paths with forward
slashes; perhaps it could be possible to use a Condition like
hasForwardSlashes, but it seems safer to simply enforce the file paths to use
backslashes in the vcproj files.  That's inconsistent with the vcxproj files,
but it works and Mkvcbuild.pm works as before.
3) I found why chkpass was needed as a dependency with libpgport and
libpgcommon: actually crypt.c needs to be listed as a unique file, so it
needs a fake entry in vcproj and vcxproj files.
All those things are corrected in the patch attached, tested with both MS and
VC.  The build is still failing because of cac76582 that introduced the
transforms, but at least it will partially cure currawong and put it at the
same level as the other machines.

Regards,
-- 
Michael

From ae02fa193dd281d95e6f802c49b4c3a650aafe61 Mon Sep 17 00:00:00 2001
From: Michael Paquier <mich...@otacoo.com>
Date: Sun, 26 Apr 2015 22:49:09 -0700
Subject: [PATCH] Fix vcbuild failures and chkpass dependency caused by 854adb8

Switching the Windows build scripts to use forward slashes instead of
backslashes has caused a couple of issues in VC builds:
- The file tree list was not correctly generated, build script generating
  vcproj file missing tree dependencies when listing items in Filter.
- VC builds do not accept file paths with forward slashes, perhaps it could
  be possible to use a Condition but it seems safer to simply enforce the
  file paths to use backslashes in the vcproj files.
- chkpass had an unneeded dependency with libpgport and libpgcommon to make
  build succeed but actually it is not necessary as crypt.c is already
  listed for this project and should be replaced with a fake name as it is
  a unique file.
---
 src/tools/msvc/MSBuildProject.pm |  2 +-
 src/tools/msvc/Mkvcbuild.pm      |  2 --
 src/tools/msvc/VCBuildProject.pm | 33 +++++++++++++++++++--------------
 3 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm
index a16f9ac..0c837f7 100644
--- a/src/tools/msvc/MSBuildProject.pm
+++ b/src/tools/msvc/MSBuildProject.pm
@@ -143,7 +143,7 @@ EOF
 
 			# File already exists, so fake a new name
 			my $obj = $dir;
-			$obj =~ s/\\/_/g;
+			$obj =~ s/\//_/g;
 
 			print $f <<EOF;
     <ClCompile Include="$fileNameWithPath">
diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm
index 344b75f..cfb9b24 100644
--- a/src/tools/msvc/Mkvcbuild.pm
+++ b/src/tools/msvc/Mkvcbuild.pm
@@ -33,12 +33,10 @@ my $contrib_defines = { 'refint' => 'REFINT_VERBOSE' };
 my @contrib_uselibpq = ('dblink', 'oid2name', 'postgres_fdw', 'vacuumlo');
 my @contrib_uselibpgport = (
-	'chkpass',
 	'oid2name',
 	'pg_standby',
 	'vacuumlo');
 my @contrib_uselibpgcommon = (
-	'chkpass',
 	'oid2name',
 	'pg_standby',
 	'vacuumlo');
diff --git a/src/tools/msvc/VCBuildProject.pm b/src/tools/msvc/VCBuildProject.pm
index dda662f..add7982 100644
--- a/src/tools/msvc/VCBuildProject.pm
+++ b/src/tools/msvc/VCBuildProject.pm
@@ -75,43 +75,48 @@ EOF
 		my $dir  = $1;
 		my $file = $2;
 
-		# Walk backwards down the directory stack and close any dirs we're done with
+		# Walk backwards down the directory stack and close any dirs
+		# we're done with.
 		while ($#dirstack >= 0)
 		{
-			if (join('\\', @dirstack) eq
-				substr($dir, 0, length(join('\\', @dirstack))))
+			if (join('/', @dirstack) eq
+				substr($dir, 0, length(join('/', @dirstack))))
 			{
-				last if (length($dir) == length(join('\\', @dirstack)));
+				last if (length($dir) == length(join('/', @dirstack)));
 				last
-					if (substr($dir, length(join('\\', @dirstack)), 1) eq '\\');
+					if (substr($dir, length(join('/', @dirstack)), 1) eq '/');
 			}
 			print $f ' ' x $#dirstack . "</Filter>\n";
 			pop @dirstack;
 		}
 
 		# Now walk forwards and create whatever directories are
Re: [HACKERS] Fwd: [GENERAL] 4B row limit for CLOB tables
On 4/27/15, Jim Nasby jim.na...@bluetreble.com wrote:
> On 4/25/15 1:19 PM, Bruce Momjian wrote:
>> Note if you are storing a table with rows that exceed 2KB in size
>> (aggregate size of each row) then the Maximum number of rows in a table
>> may be limited to 4 Billion, see TOAST.
>
> That's not accurate though; you could be limited to far less than 4B rows.
> If each row has 10 fields that toast, you'd be limited to just 400M rows.

Good point.  I noted that on the TOAST wiki page now, at least (and also
mentioned that using partitioning is a workaround for now).
Re: [HACKERS] collate.linux.utf8 test coverage
On 04/27/2015 02:39 AM, Michael Paquier wrote:
> On Sat, Apr 25, 2015 at 11:36 PM, Michael Paquier
> michael.paqu...@gmail.com wrote:
>> On Sat, Apr 25, 2015 at 4:51 AM, Andrew Dunstan <and...@dunslane.net>
>> wrote:
>>> The optional buildfarm module that runs this test was broken by commit
>>> dcae5faccab64776376d354decda0017c648bb53.  Since nobody has responded to
>>> my complaint about this, I have disabled it on crake, the only buildfarm
>>> machine that has actually been running the test, so we now have no
>>> buildfarm coverage for it.
>>
>> Commit dcae5fac has switched in pg_regress --temp-install to
>> --temp-instance, --psqldir to --bindir, and has removed --top-builddir
>> and --extra-install.  After looking at TestCollateLinuxUTF8.pm, I think
>> that you need to use the one-liner attached.
>
> I am giving a shot at this test in dangomushi by doing the following:
> - Add TestCollateLinuxUTF8 in the list of modules
> - Add 'en_US.utf8' in the list of locales
> - Enable tr_TR.utf8, sv_SE.utf8 and de_DE.utf8
> - Patching the module with the patch upthread
>
> As far as I can see the patch is working, I just missed to install a
> couple of locales on this machine, making the test fail for the first
> time, but things will hopefully stabilize soon.

Yeah, thanks.  crake has been back to running this check for a few days - see
for example
http://www.pgbuildfarm.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2015-04-26%2021%3A09%3A47&stg=install-check-collate-en_US.utf8

The patch needed a little adjustment so it runs with the right switches in
the right branch, I think.  See
https://github.com/PGBuildFarm/client-code/commit/2176bba516f9effd17fceec45e93218ef8a21e8d

cheers

andrew
Re: [HACKERS] Missing importing option of postgres_fdw
On Mon, Apr 27, 2015 at 4:20 PM, Etsuro Fujita fujita.ets...@lab.ntt.co.jp wrote: On 2015/04/27 15:51, Michael Paquier wrote: On Mon, Apr 27, 2015 at 3:15 PM, Etsuro Fujita fujita.ets...@lab.ntt.co.jp wrote: I noticed that there is no postgres_fdw option to control whether check constraints on remote tables are included in the definitions of foreign tables imported from a remote PG server when performing IMPORT FOREIGN SCHEMA, while we now allow check constraints on foreign tables. I guess that the addition of this option makes sense, but I think that this patch is doing it wrong by using ALTER FOREIGN TABLE and by changing the queries authorized in ImportForeignSchema(). The point of IMPORT FOREIGN SCHEMA is to authorize only CREATE FOREIGN TABLE and not other types of queries, not to mention that CREATE FOREIGN TABLE supports the definitions of constraints at table and column level. I don't think so. IMO, I think it'd be more convenient and flexible to allow ALTER FOREIGN TABLE for creating foreign table definitions during ImportForeignSchema(). Authorizing ALTER FOREIGN TABLE as a query string that an FDW can use with IMPORT FOREIGN SCHEMA is a different feature than what is proposed in this patch, aka an option for postgres_fdw, and merits a discussion on its own because it impacts all the FDWs and not only postgres_fdw. Now, related to this patch, we could live without authorizing ALTER FOREIGN TABLE because CREATE FOREIGN TABLE does authorize the definition of CHECK constraints. Also, I imagine that it would be more natural to combine the fetch of the constraint expression in the existing query with a join on conrelid so as to not replicate the logic that your patch is doing with FDW_IMPORT_SCHEMA_LIMIT_TO and FDW_IMPORT_SCHEMA_EXCEPT that reuses the same logic to re-build the [ NOT ] IN clause. -- Michael
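For reference, the usage the patch's regression diff implies looks like this; a minimal sketch only, using the option name from the patch (`import_check`) and the illustrative server/schema names from the test:

```sql
-- With the patch applied, check constraints would be imported by
-- default; the new option turns that off, alongside the existing
-- import_collate / import_not_null options:
IMPORT FOREIGN SCHEMA import_source
    FROM SERVER loopback INTO import_dest3
    OPTIONS (import_collate 'false', import_not_null 'false',
             import_check 'false');
```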
Re: [HACKERS] forward vs backward slashes in msvc build code
On Mon, Apr 27, 2015 at 9:25 PM, Peter Eisentraut pete...@gmx.net wrote: On 4/27/15 2:09 AM, Michael Paquier wrote: 2) I noticed that VC builds do not like *at all* file paths with forward slashes, perhaps it could be possible to use a Condition like hasForwardSlashes but it seems safer to simply enforce the file paths to use backslashes in the vcproj files. That's inconsistent with the vcxproj files, but it works and Mkvcbuild.pm works as before. 3) I found why chkpass was needed as a dependency with libpgport and libpgcommon, actually crypt.c needs to be listed as a unique file, so it needs a fake entry in vcproj and vcxproj files. All those things are corrected in the patch attached, tested with both MS and VC. What's the difference between MS and VC? MS uses vcxproj specs, and VC vcproj specs... If VC doesn't accept forward slashes, doesn't that invalidate the premise of this patch? I don't think so. The point of the patch is to be able to run the Windows build script in a non-Windows environment to check that modifications in the Makefile structure do not break some basics of the Windows build script, and this is not impacted. Is it worth attempting to maintain it? vcbuild (~2008) has been replaced by msbuild in MS 2010. Hence I imagine that it is still used a lot. Regards, -- Michael
Re: [HACKERS] collate.linux.utf8 test coverage
On Mon, Apr 27, 2015 at 9:28 PM, Andrew Dunstan wrote: The patch needed a little adjustment so it runs with the right switches in the right branch, I think. See https://github.com/PGBuildFarm/client-code/commit/2176bba516f9effd17fceec45e93218ef8a21e8d Oops, yes. Updated as well... -- Michael
Re: [HACKERS] forward vs backward slashes in msvc build code
On 4/27/15 2:09 AM, Michael Paquier wrote: 2) I noticed that VC builds do not like *at all* file paths with forward slashes, perhaps it could be possible to use a Condition like hasForwardSlashes but it seems safer to simply enforce the file paths to use backslashes in the vcproj files. That's inconsistent with the vcxproj files, but it works and Mkvcbuild.pm works as before. 3) I found why chkpass was needed as a dependency with libpgport and libpgcommon, actually crypt.c needs to be listed as a unique file, so it needs a fake entry in vcproj and vcxproj files. All those things are corrected in the patch attached, tested with both MS and VC. What's the difference between MS and VC? If VC doesn't accept forward slashes, doesn't that invalidate the premise of this patch? Is it worth attempting to maintain it? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Proposal: knowing detail of config files via SQL
On Sat, Apr 25, 2015 at 3:40 AM, David Steele da...@pgmasters.net wrote: On 4/4/15 9:21 AM, Sawada Masahiko wrote: I added documentation changes to patch is attached. Also I tried to use memory context for allocation of guc_file_variable in TopMemoryContext, but it was failed access after received SIGHUP. Below is my review of the v5 patch: Thank you for your review comment! The latest patch is attached. diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml + sect1 id=view-pg-file-settings + titlestructnamepg_file_settings/structname/title + + indexterm zone=view-pg-file-settings + primarypg_file_settings/primary + /indexterm + + para + The view structnamepg_file_settings/structname provides confirm parameters via SQL, + which is written in configuration file. This view can be update by reloading configration file. + /para Perhaps something like: para The view structnamepg_file_settings/structname provides access to run-time parameters that are defined in configuration files via SQL. In contrast to structnamepg_settings/structname a row is provided for each occurrence of the parameter in a configuration file. This is helpful for discovering why one value may have been used in preference to another when the parameters were loaded. 
/para + table + titlestructnamepg_file_settings/ Columns/title + + tgroup cols=3 + thead +row + entryName/entry + entryType/entry + entryDescription/entry +/row + /thead + tbody +row + entrystructfieldsourcefile/structfield/entry + entrystructfieldtext/structfield/entry + entryA path of configration file/entry From pg_settings: entryConfiguration file the current value was set in (null for values set from sources other than configuration files, or when examined by a non-superuser); helpful when using literalinclude/ directives in configuration files/entry +/row +row + entrystructfieldsourceline/structfield/entry + entrystructfieldinteger/structfield/entry + entryThe line number in configuration file/entry From pg_settings: entryLine number within the configuration file the current value was set at (null for values set from sources other than configuration files, or when examined by a non-superuser) /entry +/row +row + entrystructfieldname/structfield/entry + entrystructfieldtext/structfield/entry + entryParameter name in configuration file/entry From pg_settings: entryRun-time configuration parameter name/entry +/row +row + entrystructfieldsetting/structfield/entry + entrystructfieldtext/structfield/entry + entryValue of the parameter in configuration file/entry +/row + /tbody + /tgroup + /table + +/sect1 The documentation is updated based on your suggestion. diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l @@ -342,6 +345,42 @@ ProcessConfigFile(GucContext context) +guc_array = guc_file_variables; +for (item = head; item; item = item-next, guc_array++) +{ +free(guc_array-name); +free(guc_array-value); +free(guc_array-filename); There's an issue freeing these when calling pg_reload_conf(): Fix. 
postgres=# alter system set max_connections = 100; ALTER SYSTEM postgres=# select * from pg_reload_conf(); LOG: received SIGHUP, reloading configuration files pg_reload_conf t (1 row) postgres=# postgres(25078,0x7fff747b2300) malloc: *** error for object 0x424d380044: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug Of course a config reload can't change max_connections, but I wouldn't expect it to crash, either. +guc_array-sourceline = -1; I can't see the need for this since it is reassigned further down. Fix. -- Up-thread David J had recommended an ordering column (or seqno) that would provide the order in which the settings were loaded. I think this would give valuable information that can't be gleaned from the line numbers alone. Personally I think it would be fine to start from 1 and increment for each setting found, rather than rank within a setting. If the user wants per setting ranking that's what SQL is for. So the output would look something like: sourcefile | sourceline | seqno | name | setting --++---+-+--- /db/postgresql.conf | 64 | 1 | max_connections | 100 /db/postgresql.conf |116 | 2 | shared_buffers | 128MB /db/postgresql.conf |446 | 3 | log_timezone| US/Eastern /db/postgresql.auto.conf | 3 | 4 | max_connections | 200 Yep, I agree with you. Added seqno column into pg_file_settings
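Putting the reviewed pieces together, the view suggested above would be queried roughly like this; a sketch only, with the column set and `seqno` semantics taken from the discussion (not committed behavior), and the output rows being the hypothetical ones David showed:

```sql
-- Later occurrences of the same parameter win, so ordering by seqno
-- shows why postgresql.auto.conf overrides postgresql.conf:
SELECT sourcefile, sourceline, seqno, name, setting
  FROM pg_file_settings
 ORDER BY seqno;
```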
Re: [HACKERS] forward vs backward slashes in msvc build code
On 04/27/2015 08:46 AM, Michael Paquier wrote: On Mon, Apr 27, 2015 at 9:25 PM, Peter Eisentraut pete...@gmx.net wrote: On 4/27/15 2:09 AM, Michael Paquier wrote: 2) I noticed that VC builds do not like *at all* file paths with forward slashes, perhaps it could be possible to use a Condition like hasForwardSlashes but it seems safer to simply enforce the file paths to use backslashes in the vcproj files. That's inconsistent with the vcxproj files, but it works and Mkvcbuild.pm works as before. 3) I found why chkpass was needed as a dependency with libpgport and libpgcommon, actually crypt.c needs to be listed as a unique file, so it needs a fake entry in vcproj and vcxproj files. All those things are corrected in the patch attached, tested with both MS and VC. What's the difference between MS and VC? MS uses vcxproj specs, and VC vcproj specs... If VC doesn't accept forward slashes, doesn't that invalidate the premise of this patch? I don't think so. The point of the patch is to be able to run the Windows build script in a non-Windows environment to check that modifications in the Makefile structure do not break some basics of the Windows build script, and this is not impacted. Is it worth attempting to maintain it? vcbuild (~2008) has been replaced by msbuild in MS 2010. Hence I imagine that it is still used a lot. Regards, Yeah. I've pushed this with tiny fixes to avoid Leaning Toothpick Syndrome (man perlop for definition). Thanks for your work on this. cheers andrew
Re: [HACKERS] PL/pgSQL, RAISE and error context
Looks good Pavel! May I just suggest you add the default case to src/test/regress/sql/plpgsql.sql and src/test/regress/expected/plpgsql.out, to make it easier for the reviewer to compare the difference between what happens in the default case, when not using the raise-syntax and not using the GUCs? Suggested addition to the beginning of src/test/regress/sql/plpgsql.sql: +do $$ +begin + raise notice 'hello'; +end; +$$; + +do $$ +begin + raise exception 'hello'; +end; +$$; Many thanks for this patch! I will pray to the PL/pgSQL God it will be accepted. :) Best regards, Joel On Sun, Apr 26, 2015 at 9:19 AM, Pavel Stehule pavel.steh...@gmail.com wrote: Hi I reduced this patch, little bit cleaned - now it is based on plpgsql GUC display_context_min_messages - like client_min_messages, log_min_messages. Documentation added. Regards Pavel 2015-04-25 22:23 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com: Hi 2015-04-24 19:16 GMT+02:00 Joel Jacobson j...@trustly.com: On Fri, Apr 24, 2015 at 6:07 PM, Pavel Stehule pavel.steh...@gmail.com wrote: Example: context_messages = -warning, -error, +notice I prefer your first proposal - and there is a precedent for plpgsql - plpgsql_extra_checks It is clean for anybody. +-identifiers looks like horrible httpd config. :) I have to agree on that :) Just thought this is the best we can do if we want to reduce the number of GUCs to a minimum. I played with some prototype and I am thinking so we need only one GUC plpgsql.display_context_messages = 'none'; -- compatible with current plpgsql.display_context_messages = 'all'; plpgsql.display_context_messages = 'exception, log'; -- what I prefer I implemented [ (WITH|WITHOUT) CONTEXT ] clause for RAISE statement RAISE NOTICE WITH CONTEXT 'some message'; RAISE NOTICE WITH CONTEXT USING message = 'some message'; RAISE EXCEPTION WITHOUT CONTEXT 'other message'; The patch is very small with full functionality (without documentation) - I am thinking so it can work. 
This patch is back compatible - and allow to change default behave simply. plpgsql.display_context_messages can be simplified to some like plpgsql.display_context_min_messages What do you think about it? Regards Pavel
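Taken together, the proposal upthread would look something like the following; the GUC name and the WITH/WITHOUT CONTEXT clause are the patch's proposed syntax, not committed behavior:

```sql
-- Default context display controlled per error level:
SET plpgsql.display_context_messages = 'exception, log';

-- Per-statement override of that default:
DO $$
BEGIN
    RAISE NOTICE WITH CONTEXT 'some message';
    RAISE EXCEPTION WITHOUT CONTEXT 'other message';
END;
$$;
```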
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0
On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote: It's make-or-break time for this patch. Please help me get it over the line in time. Could you please add the tests for the logical decoding code you added? I presume you have some already? Heikki is in Northern California this week, and I think we'll have time to talk about the patch. I'll be there for a while as well. Starting the week after, arriving on the 5th. ;) Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] PL/pgSQL, RAISE and error context
2015-04-27 16:05 GMT+02:00 Joel Jacobson j...@trustly.com: Looks good Pavel! May I just suggest you add the default case to src/test/regress/sql/plpgsql.sql and src/test/regress/expected/plpgsql.out, to make it easier for the reviewer to compare the difference between what happens in the default case, when not using the raise-syntax and not using the GUCs? Suggested addition to the beginning of src/test/regress/sql/plpgsql.sql: +do $$ +begin + raise notice 'hello'; +end; +$$; + +do $$ +begin + raise exception 'hello'; +end; +$$; done Many thanks for this patch! I will pray to the PL/pgSQL God it will be accepted. :) :) -- please, do review, or fix documentation in this patch. I hope, so it will be merged early in 9.6 cycle. It is relatively simple. Pavel Best regards, Joel On Sun, Apr 26, 2015 at 9:19 AM, Pavel Stehule pavel.steh...@gmail.com wrote: Hi I reduced this patch, little bit cleaned - now it is based on plpgsql GUC display_context_min_messages - like client_min_messages, log_min_messages. Documentation added. Regards Pavel 2015-04-25 22:23 GMT+02:00 Pavel Stehule pavel.steh...@gmail.com: Hi 2015-04-24 19:16 GMT+02:00 Joel Jacobson j...@trustly.com: On Fri, Apr 24, 2015 at 6:07 PM, Pavel Stehule pavel.steh...@gmail.com wrote: Example: context_messages = -warning, -error, +notice I prefer your first proposal - and there is a precedent for plpgsql - plpgsql_extra_checks It is clean for anybody. +-identifiers looks like horrible httpd config. :) I have to agree on that :) Just thought this is the best we can do if we want to reduce the number of GUCs to a minimum. 
I played with some prototype and I am thinking so we need only one GUC plpgsql.display_context_messages = 'none'; -- compatible with current plpgsql.display_context_messages = 'all'; plpgsql.display_context_messages = 'exception, log'; -- what I prefer I implemented [ (WITH|WITHOUT) CONTEXT ] clause for RAISE statement RAISE NOTICE WITH CONTEXT 'some message'; RAISE NOTICE WITH CONTEXT USING message = 'some message'; RAISE EXCEPTION WITHOUT CONTEXT 'other message'; The patch is very small with full functionality (without documentation) - I am thinking so it can work. This patch is back compatible - and allow to change default behave simply. plpgsql.display_context_messages can be simplified to some like plpgsql.display_context_min_messages What do you think about it? Regards Pavel commit d60c21fb798cf25609dc37a4bda3ec7822f790e1 Author: Pavel Stehule pavel.steh...@gooddata.com Date: Sat Apr 25 22:09:28 2015 +0200 initial implementation of (WITH|WITHOUT) CONTEXT clause to plpgsql RAISE statement. initial implementation of plpgsql GUC plpgsql.display_context_messages diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml index d36acf6..8aebb87 100644 --- a/doc/src/sgml/plpgsql.sgml +++ b/doc/src/sgml/plpgsql.sgml @@ -3406,10 +3406,10 @@ END LOOP optional replaceablelabel/replaceable /optional; raise errors. synopsis -RAISE optional replaceable class=parameterlevel/replaceable /optional 'replaceable class=parameterformat/replaceable' optional, replaceable class=parameterexpression/replaceable optional, ... /optional/optional optional USING replaceable class=parameteroption/replaceable = replaceable class=parameterexpression/replaceable optional, ... /optional /optional; -RAISE optional replaceable class=parameterlevel/replaceable /optional replaceable class=parametercondition_name/ optional USING replaceable class=parameteroption/replaceable = replaceable class=parameterexpression/replaceable optional, ... 
/optional /optional; -RAISE optional replaceable class=parameterlevel/replaceable /optional SQLSTATE 'replaceable class=parametersqlstate/' optional USING replaceable class=parameteroption/replaceable = replaceable class=parameterexpression/replaceable optional, ... /optional /optional; -RAISE optional replaceable class=parameterlevel/replaceable /optional USING replaceable class=parameteroption/replaceable = replaceable class=parameterexpression/replaceable optional, ... /optional; +RAISE optional replaceable class=parameterlevel/replaceable /optional optional ( WITH | WITHOUT ) CONTEXT /optional 'replaceable class=parameterformat/replaceable' optional, replaceable class=parameterexpression/replaceable optional, ... /optional/optional optional USING replaceable class=parameteroption/replaceable = replaceable class=parameterexpression/replaceable optional, ... /optional /optional; +RAISE optional replaceable class=parameterlevel/replaceable /optional optional ( WITH | WITHOUT ) CONTEXT /optional replaceable class=parametercondition_name/ optional USING replaceable class=parameteroption/replaceable = replaceable class=parameterexpression/replaceable optional, ... /optional /optional; +RAISE optional replaceable class=parameterlevel/replaceable /optional optional ( WITH | WITHOUT ) CONTEXT
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff
On 2015-04-26 18:02:06 -0700, Peter Geoghegan wrote: * So far, there has been a lack of scrutiny about what the patch does in the rewriter (in particular, to support the EXCLUDED.* pseudo-alias expression) and optimizer (the whole concept of an auxiliary query/plan that shares a target RTE, and later target ResultRelation). If someone took a close look at that, it would be most helpful. ruleutils.c is also modified for the benefit of EXPLAIN output. This all applies only to the ON CONFLICT UPDATE patch. A committer could push out the IGNORE patch before this was 100% firm. I'm far from an expert on the relevant regions; but I'm starting to look nonetheless. I have to say that on a preliminary look it looks more complicated than it has to be. Tom, any chance you could have a look at the parse-analysis/executor transformations? Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] Shouldn't CREATE TABLE LIKE copy the relhasoids property?
On Sat, Apr 25, 2015 at 11:11:52PM -0400, Tom Lane wrote: Hm, good point; INHERITS will silently override such a specification: regression=# create table base1 (f1 int) with oids; CREATE TABLE regression=# create table c2 () inherits (base1) without oids; CREATE TABLE regression=# \d+ c2 Table public.c2 Column | Type | Modifiers | Storage | Stats target | Description +-+---+-+--+- f1 | integer | | plain | | Inherits: base1 Has OIDs: yes Though I guess unlike inherits there is no reason to mandate the final result be identical to the base table - though here is something to be said for pointing out the inconsistency and requiring the user to alter table if indeed they want to have the oid-ness changed. Yeah, LIKE doesn't necessarily have to behave the same as INHERITS; but probably we should follow that precedent unless we have a specific argument not to. Which I don't. Agreed. Here is an attached patch for 9.6 which works for multiple LIKE'ed tables with multiple inheritance and index creation. I figured out why Tom's OID primary key test was failing so I now process the columns and LIKE first, then the constraints. There is also no longer a dependency on default_with_oids. -- Bruce Momjian br...@momjian.ushttp://momjian.us EnterpriseDB http://enterprisedb.com + Everyone has their own god. + diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c new file mode 100644 index 73924ae..d442bb0 *** a/src/backend/parser/parse_utilcmd.c --- b/src/backend/parser/parse_utilcmd.c *** *** 56,61 --- 56,62 #include rewrite/rewriteManip.h #include utils/acl.h #include utils/builtins.h + #include utils/guc.h #include utils/lsyscache.h #include utils/rel.h #include utils/syscache.h *** transformCreateStmt(CreateStmt *stmt, co *** 150,155 --- 151,157 Oid namespaceid; Oid existing_relid; ParseCallbackState pcbstate; + bool like_found = false; /* * We must not scribble on the passed-in CreateStmt, so copy it. 
(This is *** transformCreateStmt(CreateStmt *stmt, co *** 242,248 /* * Run through each primary element in the table creation clause. Separate ! * column defs from constraints, and do preliminary analysis. */ foreach(elements, stmt-tableElts) { --- 244,253 /* * Run through each primary element in the table creation clause. Separate ! * column defs from constraints, and do preliminary analysis. We have to ! * process column-defining clauses first because it can control the ! * presence of columns which are referenced by columns referenced by ! * constraints. */ foreach(elements, stmt-tableElts) { *** transformCreateStmt(CreateStmt *stmt, co *** 254,267 transformColumnDefinition(cxt, (ColumnDef *) element); break; - case T_Constraint: - transformTableConstraint(cxt, (Constraint *) element); - break; - case T_TableLikeClause: transformTableLikeClause(cxt, (TableLikeClause *) element); break; default: elog(ERROR, unrecognized node type: %d, (int) nodeTag(element)); --- 259,277 transformColumnDefinition(cxt, (ColumnDef *) element); break; case T_TableLikeClause: + if (!like_found) + { + cxt.hasoids = false; + like_found = true; + } transformTableLikeClause(cxt, (TableLikeClause *) element); break; + case T_Constraint: + /* process later */ + break; + default: elog(ERROR, unrecognized node type: %d, (int) nodeTag(element)); *** transformCreateStmt(CreateStmt *stmt, co *** 269,274 --- 279,305 } } + if (like_found) + { + /* + * To match INHERITS, the existance of any LIKE table with OIDs + * causes the new table to have oids. For the same reason, + * WITH/WITHOUT OIDs is also ignored with LIKE. We prepend + * because the first oid option list entry is honored. Our + * prepended WITHOUT OIDS clause will be overridden if an + * inherited table has oids. 
+ */ + stmt-options = lcons(makeDefElem(oids, + (Node *)makeInteger(cxt.hasoids)), stmt-options); + } + + foreach(elements, stmt-tableElts) + { + Node *element = lfirst(elements); + + if (nodeTag(element) == T_Constraint) + transformTableConstraint(cxt, (Constraint *) element); + } /* * transformIndexConstraints wants cxt.alist to contain only index * statements, so transfer anything we already have into save_alist. *** transformTableLikeClause(CreateStmtConte *** 860,865 --- 891,899 } } + /* We use oids if at least one LIKE'ed table has oids. */ + cxt-hasoids = cxt-hasoids ||
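The behavior Bruce's patch is aiming for, following the INHERITS precedent Tom demonstrated earlier in the thread, would be roughly this; an illustrative sketch of proposed (not committed) semantics:

```sql
CREATE TABLE base1 (f1 int) WITH OIDS;

-- Under the patch, oid-ness is copied from the LIKE'ed table, so c2
-- would end up with OIDs even though none were requested explicitly,
-- and independently of the default_with_oids setting:
CREATE TABLE c2 (LIKE base1);
```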
Re: [HACKERS] mogrify and indent features for jsonb
On 04/27/2015 12:46 PM, Petr Jelinek wrote: Another thing I noticed now when reading the code again - you should not be using braces when you only have one command in the code-block. There's one exception I, at least, have to this rule, namely when there's a corresponding compound if or else. I personally find this unaesthetic to put it mildly: if (condition) statement; else { block of statements; } pgindent used to produce stuff like this, and you had to put a comment in the block to get around it, but we stopped that years ago, IIRC. cheers andrew
[HACKERS] PATCH: pgbench - remove thread fork-emulation
This patch removes the pgbench thread fork-emulation code and simplifies things where possible, especially around pthread_create and pthread_join. The stats collection for the report is done directly instead of using an intermediate structure. As a result, if no thread implementation is available, pgbench is restricted to work with only the main thread (ie pgbench -j 1 ...). == Rationale == Pgbench currently provides a thread emulation through process forks. This feature was developed way back when it may have been common that some platforms did not support threads. This is now very rare (can you name one such platform?). However, the thread fork-emulation feature has drawbacks: Namely, processes are not threads, the memory is not shared (sure), so it hinders simple implementation for some features, or results in not providing these features with fork-emulation, or having a different behavior under fork-emulation: Latency collection (-r) is not supported with fork emulation. Progress (-P) is reported differently with fork emulation. For a new feature under discussion, which consists in allowing one log instead of per-thread logs, supporting fork-emulation requires a (heavy) post-processing external sort phase whereas with actual threads all threads can share and append to the same log file with limited overhead, which is significantly simpler. == Note == This is a small regression (for platforms without thread support, -j J with J > 1 is not supported anymore after the patch), so maybe this should be included for PostgreSQL 10.0 only? I do not think this should be required, but this is only my opinion. -- Fabien. diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c index 06a4dfd..989f151 100644 --- a/src/bin/pgbench/pgbench.c +++ b/src/bin/pgbench/pgbench.c @@ -70,20 +70,8 @@ static int pthread_join(pthread_t th, void **thread_return); /* Use platform-dependent pthread capability */ #include <pthread.h> #else -/* Use emulation with fork.
Rename pthread identifiers to avoid conflicts */ -#define PTHREAD_FORK_EMULATION -#include sys/wait.h - -#define pthread_tpg_pthread_t -#define pthread_attr_t pg_pthread_attr_t -#define pthread_create pg_pthread_create -#define pthread_join pg_pthread_join - -typedef struct fork_pthread *pthread_t; -typedef int pthread_attr_t; - -static int pthread_create(pthread_t *thread, pthread_attr_t *attr, void *(*start_routine) (void *), void *arg); -static int pthread_join(pthread_t th, void **thread_return); +/* No threads implementation, use none (-j 1) */ +#define pthread_t void * #endif @@ -210,8 +198,6 @@ typedef struct PGconn *con; /* connection handle to DB */ int id;/* client No. */ int state; /* state No. */ - int cnt; /* xacts count */ - int ecnt; /* error count */ int listen; /* 0 indicates that an async query has been * sent */ int sleeping; /* 1 indicates that the client is napping */ @@ -221,15 +207,19 @@ typedef struct int64 txn_scheduled; /* scheduled start time of transaction (usec) */ instr_time txn_begin; /* used for measuring schedule lag times */ instr_time stmt_begin; /* used for measuring statement latencies */ - int64 txn_latencies; /* cumulated latencies */ - int64 txn_sqlats; /* cumulated square latencies */ bool is_throttled; /* whether transaction throttling is done */ int use_file; /* index in sql_files for this client */ bool prepared[MAX_FILES]; + + /* per client collected stats */ + int cnt; /* xacts count */ + int ecnt; /* error count */ + int64 txn_latencies; /* cumulated latencies */ + int64 txn_sqlats; /* cumulated square latencies */ } CState; /* - * Thread state and result + * Thread state */ typedef struct { @@ -242,6 +232,9 @@ typedef struct int *exec_count; /* number of cmd executions (per Command) */ unsigned short random_state[3]; /* separate randomness for each thread */ int64 throttle_trigger; /* previous/next throttling (us) */ + + /* per thread collected stats */ + instr_time conn_time; int64 throttle_lag; /* total transaction 
lag behind throttling */ int64 throttle_lag_max; /* max transaction lag */ int64 throttle_latency_skipped; /* lagging transactions skipped */ @@ -250,18 +243,6 @@ typedef struct #define INVALID_THREAD ((pthread_t) 0) -typedef struct -{ - instr_time conn_time; - int64 xacts; - int64 latencies; - int64 sqlats; - int64 throttle_lag; - int64 throttle_lag_max; - int64 throttle_latency_skipped; - int64 latency_late; -} TResult; - /* * queries read from files */ @@ -2895,6 +2876,13 @@ main(int argc, char **argv) fprintf(stderr, invalid number of threads: %d\n, nthreads); exit(1); } +#if !defined(ENABLE_THREAD_SAFETY) +if (nthreads != 1) +{ + fprintf(stderr, no threads available, use only \-j 1\\n); + exit(1); +} +#endif /* !ENABLE_THREAD_SAFETY */ break; case 'C':
Re: [HACKERS] Temporal extensions
Hi Jim, On 27/04/15 21:48, Jim Nasby wrote: On 4/25/15 7:49 PM, Dave Jones wrote: I've been working on a conversion of several utilities I wrote for another engine, and was wondering if there was any interest in seeing any of them added to contrib/ at some point in the vague undefined future? Not in contrib, no, because there's no reason for these to be tied to specific versions of Postgres. Adding to PGXN would make sense though. Thanks for the reply and suggestion - I've just run across PGXN having finished reading through the all the temporal-related e-mails I could find on pgsql-hackers for the last 5 years or so (specifically, Jeff Davis' temporal extension). PGXN looks like a good place to park this (once I've sorted out a few of the things I've run across while reading - TRUNCATE triggers for it for starters!). (Though, I dislike using timestamps to do change/history tracking, but that's just me...) I've been playing around with history tracking (in the context of BI, typically with batch loaded reporting databases) for about 7-8 years now and always found timestamps perfect for the purpose, but are you perhaps referring to using it for audit purposes? If that's the case I'd agree entirely - this is absolutely the wrong tool for such things (which is something I need to put a bit more prominently in the docs - it's buried in the design section at the moment). Or did you mean ranges would be better? They certainly looked intriguing when I started moving this stuff to postgres, and I'd like to re-visit them in the near future as they offer capabilities I don't have with timestamps (such as guaranteeing no overlapping ranges via exclusion constraints) but my initial tests suggested some rather major performance degradation so I put it on the back-burner at first. Thanks, Dave. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
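The range-based alternative Dave mentions, with overlap prevention via an exclusion constraint, can be sketched as follows; table and column names are invented for illustration, and `btree_gist` is needed so the equality operator on the integer key can be used in a GiST exclusion constraint:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Any two rows for the same customer must have non-overlapping
-- validity periods; the database enforces this, unlike paired
-- start/end timestamp columns:
CREATE TABLE customer_history (
    customer_id  int NOT NULL,
    valid_during tstzrange NOT NULL,
    EXCLUDE USING gist (customer_id WITH =, valid_during WITH &&)
);
```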
Re: [HACKERS] INSERT ... ON CONFLICT syntax issues
On Mon, Apr 27, 2015 at 1:19 PM, Peter Eisentraut pete...@gmx.net wrote:
> it appears that they are using quite a different syntax. The ON
> CONFLICT clause is attached to a constraint, specifying the default
> action for that constraint. The INSERT command can then override this
> default choice. I think.

Well, MySQL's ON DUPLICATE KEY UPDATE thing is pretty close to what I have. I intend CONFLICT as a broader term, which is somewhat similar to SQLite (and is not needlessly verbose).

--
Peter Geoghegan
Re: [HACKERS] Temporal extensions
On 4/27/15 6:08 PM, Dave Jones wrote:
>> (Though, I dislike using timestamps to do change/history tracking, but
>> that's just me...)
> I've been playing around with history tracking (in the context of BI,
> typically with batch loaded reporting databases) for about 7-8 years
> now and always found timestamps perfect for the purpose, but are you
> perhaps referring to using it for audit purposes? If that's the case
> I'd agree entirely - this is absolutely the wrong tool for such things
> (which is something I need to put a bit more prominently in the docs -
> it's buried in the design section at the moment).

Most warehouses dumb things down to a day level, so it's probably OK there. What I specifically don't like is that using a timestamp to try and determine the order in which something happened is just fraught with gotchas. For starters, now() is locked in when you do a BEGIN, but maybe a newer transaction modifies a table before an older one does. Now the ordering is *backwards*. You have the same problem with using an XID.

The only way I've thought of to make this guaranteed safe is to somehow serialize the logging with something like

CREATE TABLE customer_history(
    customer_hid serial primary key -- hid == history_id
  , previous_customer_hid int references customer_history
  , customer_id int NOT NULL references customer
  ...
);
CREATE UNIQUE INDEX ... ON customer_history(previous_customer_hid)
    WHERE previous_customer_hid IS NOT NULL;
CREATE UNIQUE INDEX ... ON customer_history(customer_hid)
    WHERE previous_customer_hid IS NULL;

and then have a before trigger enforce NEW.previous_customer_hid := customer_history__get_latest(customer_id), where customer_history__get_latest() will 'walk the chain' starting with the first link customer_id = blah AND previous_customer_id = NULL.

Because of the indexes that will serialize inserts on a per-customer basis. You could still run into problems with a newer snapshot creating a history record before a transaction with an older snapshot does though. :( Though, if you included txid_current_snapshot() with each record you could probably detect when that happens.

> Or did you mean ranges would be better? They certainly looked
> intriguing when I started moving this stuff to postgres, and I'd like
> to re-visit them in the near future as they offer capabilities I don't
> have with timestamps (such as guaranteeing no overlapping ranges via
> exclusion constraints) but my initial tests suggested some rather major
> performance degradation so I put it on the back-burner at first.

If you're going to keep both a start and end for each record you'd definitely want to do it with a range. If you're only keeping the change time then you can handle it differently.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
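The 'walk the chain' trigger Jim describes could be sketched as below. This is purely illustrative: only the table, index, and function names come from his mail; the function bodies are assumptions, and instead of literally walking link by link from the first record, the lookup finds the newest link directly as "the row that no other row points back to", which is equivalent.

```sql
-- Sketch only: find the newest history row for a customer, i.e. the one
-- no other row references as its predecessor.
CREATE FUNCTION customer_history__get_latest(p_customer_id int)
RETURNS int LANGUAGE sql STABLE AS $$
    SELECT h.customer_hid
    FROM customer_history h
    WHERE h.customer_id = p_customer_id
      AND NOT EXISTS (SELECT 1 FROM customer_history n
                      WHERE n.previous_customer_hid = h.customer_hid)
$$;

-- Before-insert trigger wiring each new row to the current chain head.
-- Two concurrent inserts for the same customer then both try to claim
-- the same previous_customer_hid and collide on the unique partial
-- index, which is what serializes them.
CREATE FUNCTION customer_history_link() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    NEW.previous_customer_hid := customer_history__get_latest(NEW.customer_id);
    RETURN NEW;
END;
$$;

CREATE TRIGGER customer_history_link
    BEFORE INSERT ON customer_history
    FOR EACH ROW EXECUTE PROCEDURE customer_history_link();
```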
Re: [HACKERS] Allow SQL/plpgsql functions to accept record
On 4/25/15 4:50 PM, Tom Lane wrote: Well, we already support local variables of type RECORD in plpgsql, so it's not immediately clear to me that function arguments would be much worse. There are a lot of deficiencies with the RECORD-local-variable implementation: if you try to change the actual RECORD type from one call to the next you'll probably have problems. But it seems like we could avoid that for function arguments by treating RECORD as a polymorphic argument type, and thereby generating a separate set of plan trees for each actual record type passed to the function within a given session. So in principle it ought to work better than the local-variable case does today. In short I suspect that Jim is right and this has more to do with a shortage of round tuits than any fundamental problem. I took a stab at plpgsql and it seems to work ok... but I'm not sure it's terribly valuable because you end up with an anonymous record instead of something that points back to what you handed it. The 'good' news is it doesn't seem to blow up on successive calls with different arguments... Not sure about the SQL-function case. That might be even easier because functions.c doesn't try to cache plans across queries; or maybe not. This on the other hand was rather easy. It's not horribly useful due to built-in restrictions on dealing with record, but that's certainly not plsql's fault, and this satisfies my initial use case of create function cn(record) returns bigint language sql as $$ SELECT count(*) FROM json_each_text( row_to_json($1) ) a WHERE value IS NULL $$; Attached patches both pass make check. The plpgsql is WIP, but I think the SQL one is OK. -- Jim Nasby, Data Architect, Blue Treble Consulting Data in Trouble? Get it in Treble! 
http://BlueTreble.com

From cc1776d4963b0ae837f61320ba36a6ff3ad7a7cb Mon Sep 17 00:00:00 2001
From: Jim Nasby jim.na...@bluetreble.com
Date: Mon, 27 Apr 2015 18:54:51 -0500
Subject: [PATCH] Allow SQL functions to accept a record

---
 src/backend/catalog/pg_proc.c                   | 10 +++---
 src/test/regress/expected/create_function_3.out | 17 ++---
 src/test/regress/sql/create_function_3.sql      |  6 +-
 3 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 1229829..daf3297 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -870,14 +870,18 @@ fmgr_sql_validator(PG_FUNCTION_ARGS)
 				 errmsg("SQL functions cannot return type %s",
 						format_type_be(proc->prorettype))));
 
-	/* Disallow pseudotypes in arguments */
-	/* except for polymorphic */
+	/*
+	 * Disallow pseudotypes in arguments except for polymorphic and record. In
+	 * the context of validating a function, record may as well be polymorphic,
+	 * so treat it as such.
+	 */
 	haspolyarg = false;
 	for (i = 0; i < proc->pronargs; i++)
 	{
 		if (get_typtype(proc->proargtypes.values[i]) == TYPTYPE_PSEUDO)
 		{
-			if (IsPolymorphicType(proc->proargtypes.values[i]))
+			if (IsPolymorphicType(proc->proargtypes.values[i]) ||
+				proc->proargtypes.values[i] == RECORDOID)
 				haspolyarg = true;
 			else
 				ereport(ERROR,
diff --git a/src/test/regress/expected/create_function_3.out b/src/test/regress/expected/create_function_3.out
index 6a4352c..cd370e3 100644
--- a/src/test/regress/expected/create_function_3.out
+++ b/src/test/regress/expected/create_function_3.out
@@ -16,16 +16,26 @@
 CREATE FUNCTION functest_A_2(text[]) RETURNS int LANGUAGE 'sql'
 	AS 'SELECT $1[0]::int';
 CREATE FUNCTION functest_A_3() RETURNS bool LANGUAGE 'sql'
 	AS 'SELECT false';
+CREATE FUNCTION functest_A_4(record) RETURNS regtype LANGUAGE 'sql'
+	AS 'SELECT pg_catalog.pg_typeof($1)';
+SELECT functest_A_4(NULL::pg_catalog.pg_class);
+ functest_a_4 
+--------------
+ pg_class
+(1 row)
+
 SELECT proname, prorettype::regtype, proargtypes::regtype[] FROM pg_proc
 	WHERE oid in ('functest_A_1'::regproc,
 	              'functest_A_2'::regproc,
-	              'functest_A_3'::regproc) ORDER BY proname;
+	              'functest_A_3'::regproc,
+	              'functest_A_4'::regproc) ORDER BY proname;
   proname     | prorettype |    proargtypes    
--------------+------------+-------------------
 functest_a_1 | boolean    | [0:1]={text,date}
 functest_a_2 | integer    | [0:0]={text[]}
 functest_a_3 | boolean    | {}
-(3 rows)
+ functest_a_4 | regtype    | [0:0]={record}
+(4 rows)
 
 --
 -- IMMUTABLE | STABLE | VOLATILE
@@ -219,10 +229,11 @@ SELECT routine_name, ordinal_position, parameter_name, parameter_default
 -- Cleanups
 DROP SCHEMA temp_func_test
Re: [HACKERS] INSERT ... ON CONFLICT syntax issues
On Mon, Apr 27, 2015 at 4:21 PM, Peter Geoghegan p...@heroku.com wrote:
> * Don't change the ON CONFLICT spelling.
> * Don't change the names of the pseudo-alias EXCLUDED.* (or the alias
>   TARGET.*).

Those seem fine to me as well.

> * Change the syntax to put the WHERE clause used to infer partial
>   indexes outside parens.
> * Change the syntax to make this work, by adding the fully reserved
>   keyword DO. Assuming you have a partial index (WHERE is_active) on
>   the column key, you're left with:
>
> INSERT ON CONFLICT (key) where is_active DO UPDATE set ... WHERE ... ;
>
> or:
>
> INSERT ON CONFLICT (key) where is_active DO NOTHING;
>
> The DO keyword makes this work where it cannot otherwise, because it's
> a RESERVED_KEYWORD.

I've pushed code that does all this to Github. Documentation changes will follow soon.

> * Finally, add ON CONFLICT ON CONSTRAINT my_constraint support, so you
>   can name (exactly one) constraint by name. Particularly useful for
>   unique constraints. I really don't want to make this accept multiple
>   constraints, even though we may infer multiple constraints, because
>   that's messy, and is probably too complex to ever be put to good use
>   (bearing in mind that exclusion constraints, that really need this,
>   will still only be supported by the IGNORE/DO NOTHING variant).

I have yet to do this, but it should be fairly straightforward.

--
Peter Geoghegan
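For readers following along, the resulting shape of the command can be sketched as below. The table and columns are invented for illustration, and note that in the syntax as eventually released the target row is referenced by the table name rather than a TARGET.* alias:

```sql
-- Illustrative sketch of index inference on a partial unique index
-- ("kv", "key", "val", "is_active" are invented names).
CREATE TABLE kv (key text, val int, is_active boolean NOT NULL DEFAULT true);
CREATE UNIQUE INDEX kv_key_idx ON kv (key) WHERE is_active;

-- The WHERE clause before DO lets the parser infer the partial index;
-- the reserved keyword DO then unambiguously introduces the action.
INSERT INTO kv (key, val) VALUES ('a', 1)
    ON CONFLICT (key) WHERE is_active
    DO UPDATE SET val = kv.val + EXCLUDED.val;

INSERT INTO kv (key, val) VALUES ('a', 1)
    ON CONFLICT (key) WHERE is_active
    DO NOTHING;
```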
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0
On Sun, Apr 26, 2015 at 6:02 PM, Peter Geoghegan p...@heroku.com wrote:
> * I privately pointed out to Heikki what I'd said publicly about 6
>   weeks ago: that there is still a *very* small chance of exclusion
>   constraints exhibiting unprincipled deadlocks (he missed it at the
>   time). I think that this risk is likely to be acceptable, since it
>   takes so much to see it happen (and ON CONFLICT UPDATE/nbtree is
>   unaffected). But let's better characterize the risks, particularly in
>   light of the changes to store speculative tokens in the c_ctid field
>   on newly inserted (speculative) tuples. I think that that probably
>   made the problem significantly less severe, and perhaps it's now
>   entirely theoretical, but I want to make sure. I'm going to try and
>   characterize the risks with the patch here today.

So, this can still happen, but is now happening less often than before, I believe. On a 16 core server, with continual 128 client jjanes_upsert exclusion constraint only runs, with fsync=off, I started at this time:

2015-04-27 21:22:28 UTC [ 0 ]: LOG: database system was shut down at 2015-04-27 21:22:25 UTC
2015-04-27 21:22:28 UTC [ 0 ]: LOG: database system is ready to accept connections
2015-04-27 22:47:20 UTC [ 0 ]: LOG: autovacuum launcher started
2015-04-27 22:47:21 UTC [ 0 ]: LOG: autovacuum launcher started

Finally, with ON CONFLICT UPDATE (which we don't intend to support with exclusion constraints anyway), the torture testing finally produces a deadlock several hours later (due to having livelock insurance [1]):

2015-04-28 00:22:06 UTC [ 0 ]: LOG: autovacuum launcher started
2015-04-28 00:37:24 UTC [ 432432057 ]: ERROR: deadlock detected
2015-04-28 00:37:24 UTC [ 432432057 ]: DETAIL: Process 130628 waits for ShareLock on transaction 432432127; blocked by process 130589.
	Process 130589 waits for ShareLock on speculative token 13 of transaction 432432057; blocked by process 130628.
	Process 130628: insert into upsert_race_test (index, count) values ('7566','-1') on conflict update set count=TARGET.count + EXCLUDED.count where TARGET.index = EXCLUDED.index returning count
	Process 130589: insert into upsert_race_test (index, count) values ('7566','1') on conflict update set count=TARGET.count + EXCLUDED.count where TARGET.index = EXCLUDED.index returning count
2015-04-28 00:37:24 UTC [ 432432057 ]: HINT: See server log for query details.
2015-04-28 00:37:24 UTC [ 432432057 ]: CONTEXT: while checking exclusion constraint on tuple (3,36) in relation upsert_race_test
2015-04-28 00:37:24 UTC [ 432432057 ]: STATEMENT: insert into upsert_race_test (index, count) values ('7566','-1') on conflict update set count=TARGET.count + EXCLUDED.count where TARGET.index = EXCLUDED.index returning count

ON CONFLICT UPDATE will only ever use unique indexes, and so is not affected. Given that exclusion constraints can only be used with IGNORE, and given that this is so hard to recreate, I'm inclined to conclude that it's acceptable. It's certainly way better than risking livelocks by not having deadlock insurance.

This is a ridiculously CPU-bound workload, with extreme and constant contention. I'd be surprised if there were any real complaints from the field in practice. Do you think that this is acceptable, Heikki?

[1] https://github.com/petergeoghegan/postgres/commit/c842c798e4a9e31dce06b4836b2bdcbafe1155d6#diff-51288d1b75a37ac3b32717ec50b66c23R87

--
Peter Geoghegan
Re: [HACKERS] Allow SQL/plpgsql functions to accept record
On 04/27/2015 10:35 PM, Jim Nasby wrote: On 4/25/15 4:50 PM, Tom Lane wrote: Well, we already support local variables of type RECORD in plpgsql, so it's not immediately clear to me that function arguments would be much worse. There are a lot of deficiencies with the RECORD-local-variable implementation: if you try to change the actual RECORD type from one call to the next you'll probably have problems. But it seems like we could avoid that for function arguments by treating RECORD as a polymorphic argument type, and thereby generating a separate set of plan trees for each actual record type passed to the function within a given session. So in principle it ought to work better than the local-variable case does today. In short I suspect that Jim is right and this has more to do with a shortage of round tuits than any fundamental problem. I took a stab at plpgsql and it seems to work ok... but I'm not sure it's terribly valuable because you end up with an anonymous record instead of something that points back to what you handed it. The 'good' news is it doesn't seem to blow up on successive calls with different arguments... Not sure about the SQL-function case. That might be even easier because functions.c doesn't try to cache plans across queries; or maybe not. This on the other hand was rather easy. It's not horribly useful due to built-in restrictions on dealing with record, but that's certainly not plsql's fault, and this satisfies my initial use case of create function cn(record) returns bigint language sql as $$ SELECT count(*) FROM json_each_text( row_to_json($1) ) a WHERE value IS NULL $$; Attached patches both pass make check. The plpgsql is WIP, but I think the SQL one is OK. My point remains that we really need methods of a) getting the field names from generic records and b) using text values to access fields of generic records, both as lvalues and rvalues. Without those this feature will be of comparatively little value, IMNSHO. 
With them it will be much more useful and powerful.

cheers

andrew
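Until such accessors exist, the json functions the thread's own test case leans on give a rough, read-only workaround for both of the needs Andrew lists - enumerating field names of a generic record, and reading a field by a text name (rvalues only; there is no lvalue equivalent). A sketch, where the table t is invented purely for illustration:

```sql
-- Illustration only: poor-man's generic-record introspection via json.
CREATE TABLE t (a int, b text);
INSERT INTO t VALUES (1, NULL);

-- (a) enumerate the field names (and values) of a record:
SELECT j.key, j.value FROM t, json_each_text(row_to_json(t)) AS j;

-- (b) read a field by its text name:
SELECT row_to_json(t)->>'a' FROM t;
```

Inside a function taking a record argument, the same trick applies to $1, as the cn(record) example up-thread shows; but everything comes back as text, which is exactly the clunkiness Andrew is arguing should be fixed properly.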
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0
On 04/27/2015 07:02 PM, Peter Geoghegan wrote: So, this can still happen, but is now happening less often than before, I believe. On a 16 core server, with continual 128 client jjanes_upsert exclusion constraint only runs, with fsync=off, I started at this time: 2015-04-27 21:22:28 UTC [ 0 ]: LOG: database system was shut down at 2015-04-27 21:22:25 UTC 2015-04-27 21:22:28 UTC [ 0 ]: LOG: database system is ready to accept connections 2015-04-27 22:47:20 UTC [ 0 ]: LOG: autovacuum launcher started 2015-04-27 22:47:21 UTC [ 0 ]: LOG: autovacuum launcher started Finally, with ON CONFLICT UPDATE (which we don't intend to support with exclusion constraints anyway), the torture testing finally produces a deadlock several hours later (due to having livelock insurance [1]): 2015-04-28 00:22:06 UTC [ 0 ]: LOG: autovacuum launcher started 2015-04-28 00:37:24 UTC [ 432432057 ]: ERROR: deadlock detected 2015-04-28 00:37:24 UTC [ 432432057 ]: DETAIL: Process 130628 waits for ShareLock on transaction 432432127; blocked by process 130589. Process 130589 waits for ShareLock on speculative token 13 of transaction 432432057; blocked by process 130628. Process 130628: insert into upsert_race_test (index, count) values ('7566','-1') on conflict update set count=TARGET.count + EXCLUDED.count where TARGET.index = EXCLUDED.index returning count Process 130589: insert into upsert_race_test (index, count) values ('7566','1') on conflict update set count=TARGET.count + EXCLUDED.count where TARGET.index = EXCLUDED.index returning count 2015-04-28 00:37:24 UTC [ 432432057 ]: HINT: See server log for query details. 
2015-04-28 00:37:24 UTC [ 432432057 ]: CONTEXT: while checking exclusion constraint on tuple (3,36) in relation upsert_race_test 2015-04-28 00:37:24 UTC [ 432432057 ]: STATEMENT: insert into upsert_race_test (index, count) values ('7566','-1') on conflict update set count=TARGET.count + EXCLUDED.count where TARGET.index = EXCLUDED.index returning count ON CONFLICT UPDATE will only ever use unique indexes, and so is not affected. Given that exclusion constraints can only be used with IGNORE, and given that this is so hard to recreate, I'm inclined to conclude that it's acceptable. It's certainly way better than risking livelocks by not having deadlock insurance. This is a ridiculously CPU-bound workload, with extreme and constant contention. I'd be surprised if there were any real complaints from the field in practice. Do you think that this is acceptable, Heikki? I thought we had an ironclad scheme to prevent deadlocks like this, so I'd like to understand why that happens. - Heikki -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0
On Mon, Apr 27, 2015 at 7:02 PM, Peter Geoghegan p...@heroku.com wrote:
> Given that exclusion constraints can only be used with IGNORE, and
> given that this is so hard to recreate, I'm inclined to conclude that
> it's acceptable. It's certainly way better than risking livelocks by
> not having deadlock insurance.

Uh, I mean livelock insurance here, of course.

--
Peter Geoghegan
Re: [HACKERS] Proposal: knowing detail of config files via SQL
On Tue, Apr 28, 2015 at 3:22 AM, David Steele da...@pgmasters.net wrote:
> On 4/27/15 10:31 AM, Sawada Masahiko wrote:
>> Thank you for your review comment! The latest patch is attached.
> Looks good overall - a few more comments below:

Thank you for your reviewing. Attached v8 patch is latest version.

> diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
> +<row>
> + <entry><structfield>seqno</structfield></entry>
> + <entry><structfield>integer</structfield></entry>
> + <entry>Sequence number of current view</entry>
>
> I would suggest: Order in which the setting was loaded from the
> configuration

Fix.

> +</row>
> +<row>
> + <entry><structfield>name</structfield></entry>
> + <entry><structfield>text</structfield></entry>
>
> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
> +/*
> + * show_all_file_settings
> + */
> +
> +#define NUM_PG_FILE_SETTINGS_ATTS 5
> +
> +Datum
> +show_all_file_settings(PG_FUNCTION_ARGS)
>
> It would be good to get more detail in the function comment, as well as
> more comments in the function itself.

Added some comments.

> A minor thing, but there are a number of whitespace errors when
> applying the patch:
>
> ../000_pg_file_settings_view_v6.patch:159: indent with spaces.
>     free(guc_file_variables[i].name);
> ../000_pg_file_settings_view_v6.patch:160: indent with spaces.
>     free(guc_file_variables[i].value);
> ../000_pg_file_settings_view_v6.patch:161: indent with spaces.
>     free(guc_file_variables[i].filename);
> ../000_pg_file_settings_view_v6.patch:178: indent with spaces.
>     guc_array->name = guc_strdup(FATAL, item->name);
> ../000_pg_file_settings_view_v6.patch:179: indent with spaces.
>     guc_array->value = guc_strdup(FATAL, item->value);
> warning: squelched 2 whitespace errors
> warning: 7 lines add whitespace errors.
>
> I'm sure the committer would appreciate it if you'd clean those up.

I tried to get rid of white space.
Regards,

---
Sawada Masahiko

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 898865e..50b93cf 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -7437,6 +7437,11 @@
      </row>

      <row>
+      <entry><link linkend="view-pg-file-settings"><structname>pg_file_settings</structname></link></entry>
+      <entry>parameter settings of file</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-shadow"><structname>pg_shadow</structname></link></entry>
       <entry>database users</entry>
      </row>
@@ -9050,6 +9055,74 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx

 </sect1>

+ <sect1 id="view-pg-file-settings">
+  <title><structname>pg_file_settings</structname></title>
+
+  <indexterm zone="view-pg-file-settings">
+   <primary>pg_file_settings</primary>
+  </indexterm>
+
+  <para>
+   The view <structname>pg_file_settings</structname> provides access to
+   run-time parameters that are defined in configuration files via SQL.
+   In contrast to <structname>pg_settings</structname> a row is provided for
+   each occurrence of the parameter in a configuration file. This is helpful
+   for discovering why one value may have been used in preference to another
+   when the parameters were loaded.
+  </para>
+
+  <table>
+   <title><structname>pg_file_settings</> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>sourcefile</structfield></entry>
+      <entry><structfield>text</structfield></entry>
+      <entry>A path of configuration file</entry>
+     </row>
+     <row>
+      <entry><structfield>sourceline</structfield></entry>
+      <entry><structfield>integer</structfield></entry>
+      <entry>
+       Line number within the configuration file the current value was
+       set at (null for values set from sources other than configuration files,
+       or when examined by a non-superuser)
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>seqno</structfield></entry>
+      <entry><structfield>integer</structfield></entry>
+      <entry>Order in which the setting was loaded from the configuration</entry>
+     </row>
+     <row>
+      <entry><structfield>name</structfield></entry>
+      <entry><structfield>text</structfield></entry>
+      <entry>
+       Configuration file the current value was set in (null for values
+       set from sources other than configuration files, or when examined by a
+       non-superuser); helpful when using <literal>include</literal> directives in
+       configuration files
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>setting</structfield></entry>
+      <entry><structfield>text</structfield></entry>
+      <entry>Run-time configuration parameter name</entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+</sect1>
+
 <sect1 id="view-pg-shadow">
  <title><structname>pg_shadow</structname></title>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 4c35ef4..31faab0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -411,6 +411,12 @@ CREATE
Re: [HACKERS] basebackups during ALTER DATABASE ... SET TABLESPACE ... not safe?
At 2015-01-30 21:36:42 +0100, and...@2ndquadrant.com wrote:
> Here's an alternative approach. I think it generally is superior and
> going in the right direction, but I'm not sure it's backpatchable. It
> basically consists out of:
>
> 1) Make GetLockConflicts() actually work.
>    already committed as being an independent problem.
> 2) Allow the startup process to actually acquire locks other than
>    AccessExclusiveLocks. There already is code acquiring other locks,
>    but it's currently broken because they're acquired in blocking mode
>    which isn't actually supported in the startup mode. Using this
>    infrastructure we can actually fix a couple small races around
>    database creation/drop.
> 3) Allow session locks during recovery to be heavier than
>    RowExclusiveLock - since relation/object session locks aren't ever
>    taken out by the user (without a plain lock first) that's safe.
>    merged and submitted independently.
> 5) Make walsender actually react to recovery conflict interrupts
>    submitted here. (0003)
> 4) Perform streaming base backups from within a transaction command, to
>    provide error handling.
> 6) Prevent access to the template database of a CREATE DATABASE during
>    WAL replay.
> 7) Add an interlock to prevent base backups and movedb() to run
>    concurrently. What we actually came here for.
>    combined, submitted here. (0004)
>
> I think this actually doesn't look that bad.

Hi Andres. Do you intend to commit these patches? (I just verified that they apply without trouble to HEAD.)

--
Abhijit
Re: [HACKERS] Proposal: knowing detail of config files via SQL
On Mon, Apr 27, 2015 at 11:31 PM, Sawada Masahiko sawada.m...@gmail.com wrote: On Sat, Apr 25, 2015 at 3:40 AM, David Steele da...@pgmasters.net wrote: On 4/4/15 9:21 AM, Sawada Masahiko wrote: I added documentation changes to patch is attached. Also I tried to use memory context for allocation of guc_file_variable in TopMemoryContext, but it was failed access after received SIGHUP. Below is my review of the v5 patch: Thank you for your review comment! The latest patch is attached. diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml + sect1 id=view-pg-file-settings + titlestructnamepg_file_settings/structname/title + + indexterm zone=view-pg-file-settings + primarypg_file_settings/primary + /indexterm + + para + The view structnamepg_file_settings/structname provides confirm parameters via SQL, + which is written in configuration file. This view can be update by reloading configration file. + /para Perhaps something like: para The view structnamepg_file_settings/structname provides access to run-time parameters that are defined in configuration files via SQL. In contrast to structnamepg_settings/structname a row is provided for each occurrence of the parameter in a configuration file. This is helpful for discovering why one value may have been used in preference to another when the parameters were loaded. 
/para + table + titlestructnamepg_file_settings/ Columns/title + + tgroup cols=3 + thead +row + entryName/entry + entryType/entry + entryDescription/entry +/row + /thead + tbody +row + entrystructfieldsourcefile/structfield/entry + entrystructfieldtext/structfield/entry + entryA path of configration file/entry From pg_settings: entryConfiguration file the current value was set in (null for values set from sources other than configuration files, or when examined by a non-superuser); helpful when using literalinclude/ directives in configuration files/entry +/row +row + entrystructfieldsourceline/structfield/entry + entrystructfieldinteger/structfield/entry + entryThe line number in configuration file/entry From pg_settings: entryLine number within the configuration file the current value was set at (null for values set from sources other than configuration files, or when examined by a non-superuser) /entry +/row +row + entrystructfieldname/structfield/entry + entrystructfieldtext/structfield/entry + entryParameter name in configuration file/entry From pg_settings: entryRun-time configuration parameter name/entry +/row +row + entrystructfieldsetting/structfield/entry + entrystructfieldtext/structfield/entry + entryValue of the parameter in configuration file/entry +/row + /tbody + /tgroup + /table + +/sect1 The documentation is updated based on your suggestion. diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l @@ -342,6 +345,42 @@ ProcessConfigFile(GucContext context) +guc_array = guc_file_variables; +for (item = head; item; item = item-next, guc_array++) +{ +free(guc_array-name); +free(guc_array-value); +free(guc_array-filename); There's an issue freeing these when calling pg_reload_conf(): Fix. 
postgres=# alter system set max_connections = 100;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
LOG:  received SIGHUP, reloading configuration files
 pg_reload_conf
----------------
 t
(1 row)

postgres=# postgres(25078,0x7fff747b2300) malloc: *** error for object
0x424d380044: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug

Of course a config reload can't change max_connections, but I wouldn't expect it to crash, either.

+guc_array->sourceline = -1;

I can't see the need for this since it is reassigned further down.

Fix.

--

Up-thread David J had recommended an ordering column (or seqno) that would provide the order in which the settings were loaded. I think this would give valuable information that can't be gleaned from the line numbers alone. Personally I think it would be fine to start from 1 and increment for each setting found, rather than rank within a setting. If the user wants per setting ranking that's what SQL is for. So the output would look something like:

        sourcefile        | sourceline | seqno |      name       |  setting
--------------------------+------------+-------+-----------------+------------
 /db/postgresql.conf      |         64 |     1 | max_connections | 100
 /db/postgresql.conf      |        116 |     2 | shared_buffers  | 128MB
 /db/postgresql.conf      |        446 |     3 | log_timezone    | US/Eastern
 /db/postgresql.auto.conf |          3 |     4 |
Re: [HACKERS] PL/pgSQL, RAISE and error context
On 4/27/15 6:08 PM, Fabrízio de Royes Mello wrote:
> On Sun, Apr 26, 2015 at 4:19 AM, Pavel Stehule pavel.steh...@gmail.com wrote:
>> I reduced this patch, little bit cleaned - now it is based on plpgsql
>> GUC display_context_min_messages - like client_min_messages,
>> log_min_messages.
> What you think just context_min_messages ?

That sounds weird. log_min_messages are the messages sent to the log; client_min_messages are sent to the client. context_min_messages are not sent to a context, whatever that would mean.

.m
Re: [HACKERS] improving speed of make check-world
On Sat, Apr 25, 2015 at 7:23 AM, Michael Paquier michael.paqu...@gmail.com wrote: On Sat, Apr 25, 2015 at 7:59 AM, Peter Eisentraut pete...@gmx.net wrote: On 4/23/15 1:22 PM, Jeff Janes wrote: Something about this commit (dcae5faccab64776376d354d) broke make check in parallel conditions when started from a clean directory. It fails with a different error each time, one example: make -j4 check /dev/null In file included from gram.y:14515: scan.c: In function 'yy_try_NUL_trans': scan.c:10307: warning: unused variable 'yyg' /usr/bin/ld: tab-complete.o: No such file: No such file or directory collect2: ld returned 1 exit status make[3]: *** [psql] Error 1 make[2]: *** [all-psql-recurse] Error 2 make[2]: *** Waiting for unfinished jobs make[1]: *** [all-bin-recurse] Error 2 make: *** [all-src-recurse] Error 2 make: *** Waiting for unfinished jobs I think the problem is that check depends on all, but now also depends on temp-install, which in turn runs install and all. With a sufficient amount of parallelism, you end up running two alls on top of each other. It seems this can be fixed by removing the check: all dependency. Try removing that in the top-level GNUmakefile.in and see if the problem goes away. For completeness, we should then also remove it in the other makefiles. Yep. I spent some time yesterday and today poking at that, but I clearly missed that dependency.. Attached is a patch fixing the problem. I tested check and check-world with some parallel jobs and both worked. In the case of check, the amount of logs is very reduced because all the install process is done by temp-install which redirects everything into tmp_install/log/install.log. This change fixed the problem for me. It also made this age-old compiler warning go away: In file included from gram.y:14515: scan.c: In function 'yy_try_NUL_trans': scan.c:10307: warning: unused variable 'yyg' I guess by redirecting it into the log file you indicated, but is that a good idea to redirect stderr? 
Cheers, Jeff
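[Editorial note: the fix Peter describes boils down to dropping the direct dependency of check on all. A sketch of the idea in the top-level GNUmakefile.in (not the exact committed hunk):]

```
# Before: "check" depended on both "all" and "temp-install"; since
# temp-install runs install (and therefore a build) itself, make -jN
# could end up running two overlapping builds of the tree.
#check: all temp-install
# After: let temp-install's install step drive the build instead.
check: temp-install
```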
Re: [HACKERS] PL/pgSQL, RAISE and error context
On Sun, Apr 26, 2015 at 4:19 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

Hi, I reduced this patch and cleaned it up a little bit - now it is based on a plpgsql GUC, display_context_min_messages - like client_min_messages and log_min_messages.

What do you think of just context_min_messages?

Regards,

-- Fabrízio de Royes Mello Consultoria/Coaching PostgreSQL Timbira: http://www.timbira.com.br Blog: http://fabriziomello.github.io Linkedin: http://br.linkedin.com/in/fabriziomello Twitter: http://twitter.com/fabriziomello Github: http://github.com/fabriziomello
Re: [HACKERS] Improving RLS qual pushdown
Dean, * Dean Rasheed (dean.a.rash...@gmail.com) wrote: I took another look at this and came up with some alternate comment rewording. I also added a couple of additional comments that should hopefully clarify the code a bit. Finally got back to this. Made a few additional changes that made things look clearer, to me at least, and added documentation and comments. Also added a bit to the regression tests (in particular, an explicit check on the RLS side of this, not that it should be different, but just in case things diverge in the future). Please review and let me know if you see any issues. Thanks! Stephen
Re: [HACKERS] PL/pgSQL, RAISE and error context
On Mon, Apr 27, 2015 at 6:14 PM, Marko Tiikkaja ma...@joh.to wrote:

That sounds weird. log_min_messages are the messages sent to the log; client_min_messages are sent to the client. context_min_messages are not sent to a context, whatever that would mean.

Good point. I think it can't be any clearer than the proposed plpgsql.display_context_min_messages.
Re: [HACKERS] mogrify and indent features for jsonb
On 18/04/15 20:35, Dmitry Dolgov wrote:

Sorry for the late reply. Here is a slightly improved version of the patch with the new `h_atoi` function; I hope this implementation will be more appropriate.

It's better, but a) I don't like the name of the function, and b) I don't see why we need the function at all, given that it's called from only one place and is effectively a couple of lines of code. In general I wonder if it wouldn't be better to split replacePath into three functions - replacePath, replacePathObject and replacePathArray - calling the Object/Array ones from replacePath and inlining the h_atoi code into the Array one. If this were done, it would also make sense to change the main if/else in replacePath into a switch. Another thing I noticed now when reading the code again - you should not be using braces when you only have one command in the code block.

-- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
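[Editorial note: Petr's suggestion to inline the `h_atoi` logic might look something like this. A sketch only - `parse_array_index` is a hypothetical name, and in the real patch the body would sit directly in the array branch of `replacePath`:]

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Hypothetical sketch of the couple of lines that would replace h_atoi:
 * parse a jsonb path element as an array index, rejecting empty strings,
 * trailing junk, and out-of-range values.
 */
static bool
parse_array_index(const char *elem, long *idx)
{
	char	   *end;

	errno = 0;
	*idx = strtol(elem, &end, 10);
	if (end == elem || *end != '\0' || errno == ERANGE)
		return false;			/* not a valid array index */
	return true;
}
```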
Re: [HACKERS] INSERT ... ON CONFLICT syntax issues
On Mon, Apr 27, 2015 at 10:20 AM, Bruce Momjian br...@momjian.us wrote: Agreed, and I like the DO [ UPDATE | NOTHING ] too.

Here is what I think I need to do:

* Don't change the ON CONFLICT spelling.
* Don't change the names of the pseudo-alias EXCLUDED.* (or the alias TARGET.*). Those seem fine to me as well.
* Change the syntax to put the WHERE clause used to infer partial indexes outside the parens.
* Change the syntax to make this work by adding the fully reserved keyword DO. Assuming you have a partial index (WHERE is_active) on the column key, you're left with:

INSERT ON CONFLICT (key) WHERE is_active DO UPDATE SET ... WHERE ... ;

or:

INSERT ON CONFLICT (key) WHERE is_active DO NOTHING;

The DO keyword makes this work where it cannot otherwise, because it's a RESERVED_KEYWORD.

* Finally, add ON CONFLICT ON CONSTRAINT my_constraint support, so you can name (exactly one) constraint by name. Particularly useful for unique constraints. I really don't want to make this accept multiple constraints, even though we may infer multiple constraints, because it's messy and is probably too complex to ever be put to good use (bearing in mind that exclusion constraints, which really need this, will still only be supported by the IGNORE/DO NOTHING variant).

Are we in agreement?

-- Peter Geoghegan
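[Editorial note: put together, the shape Peter describes would read something like the following. A sketch only - the table, column, and constraint names are hypothetical, and the syntax is as proposed in this thread, not necessarily what was ultimately committed:]

```sql
-- Hypothetical table with a partial unique index, per the example above
CREATE TABLE users (key text, val text, is_active bool);
CREATE UNIQUE INDEX users_key_active ON users (key) WHERE is_active;

-- DO UPDATE form: the WHERE after the parens infers the partial index
INSERT INTO users (key, val, is_active) VALUES ('k', 'new', true)
    ON CONFLICT (key) WHERE is_active
    DO UPDATE SET val = EXCLUDED.val;

-- DO NOTHING form
INSERT INTO users (key, val, is_active) VALUES ('k', 'new', true)
    ON CONFLICT (key) WHERE is_active
    DO NOTHING;

-- Naming exactly one constraint instead of inferring an index
-- (users_key_uniq is a hypothetical constraint name)
INSERT INTO users (key, val) VALUES ('k', 'v')
    ON CONFLICT ON CONSTRAINT users_key_uniq
    DO UPDATE SET val = EXCLUDED.val;
```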
Re: [HACKERS] INSERT ... ON CONFLICT syntax issues
On Sun, Apr 26, 2015 at 09:34:12AM -0400, Stephen Frost wrote:
* Heikki Linnakangas (hlinn...@iki.fi) wrote:
On 04/25/2015 12:01 PM, Andres Freund wrote:

INSERT ... ON CONFLICT (cola, colb [WHERE predicate_for_partial]) UPDATE|IGNORE

My problem with the WHERE being inside the parens in the above is that it's a) different from CREATE INDEX and b) unclear whether the WHERE belongs to colb or to the whole index expression. The equivalent for aggregates, which I bet is going to be used less often, caused a fair amount of confusion. That's why I wanted the WHERE outside the (), which requires adding DO between the index inference clause and the action, to avoid ambiguities in the grammar.

Yeah, having the WHERE outside the parens seems much nicer. What is the ambiguity?

I like having it outside the parens also. Agreed, and I like the DO [ UPDATE | NOTHING ] too.

-- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + Everyone has their own god. +
Re: [HACKERS] GSSAPI, SSPI - include_realm default
Bruce, all, * Bruce Momjian (br...@momjian.us) wrote: On Tue, Dec 9, 2014 at 05:38:25PM -0500, Stephen Frost wrote: My comment that include_realm is supported back to 8.4 was because there is an expectation that a pg_hba.conf file can be used unchanged across several major releases. So when 9.5 comes out and people update their pg_hba.conf files for 9.5, those files will still work in old releases. But the time to do those updates is then, not now. The back-branches are being patched to discourage using the default because it's not a secure approach. New users start using PG all the time and so changing the existing documentation is worthwhile to ensure those new users understand. A note in the release notes for whichever minor release the change to the documentation shows up in would be a good way to make existing users aware of the change and hopefully encourage them to review their configuration. If we don't agree that the change should be made then we can discuss that, but everyone commenting so far has agreed on the change. Where are we on this? Patches for master and 9.4 attached. The 9.4 patch should cherry-pick down to the other current releases just fine. Please provide any comments or suggestions for changes. If all looks good, I'll push this to change the default for 9.5 to be include_realm=1 and the documentation updates to recommend it in back-branches. Thanks! Stephen From ee6682b895b2f00ac89da6355b28960c358b8569 Mon Sep 17 00:00:00 2001 From: Stephen Frost sfr...@snowman.net Date: Mon, 27 Apr 2015 12:40:07 -0400 Subject: [PATCH] Change default for include_realm to 0 The default behavior for GSS and SSPI authentication methods has long been to strip the realm off of the principal, however, this is not a secure approach in multi-realm environments and the use-case for the parameter at all has been superseded by the regex-based mapping support available in pg_ident.conf. 
Change the default for include_realm to be '1', meaning that we do NOT remove the realm from the principal by default. Any installations which depend on the existing behavior will need to update their configurations (ideally by leaving include_realm on and adding a mapping in pg_ident.conf). Note that the mapping capability exists in all currently supported versions of PostgreSQL and so this change can be done today. Barring that, existing users can update their configurations today to explicitly set include_realm=0 to ensure that the prior behavior is maintained when they upgrade. This needs to be noted in the release notes. Per discussion with Magnus and Peter. --- doc/src/sgml/client-auth.sgml | 74 +-- src/backend/libpq/hba.c | 13 2 files changed, 63 insertions(+), 24 deletions(-) diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml index d27dd49..620ddc6 100644 --- a/doc/src/sgml/client-auth.sgml +++ b/doc/src/sgml/client-auth.sgml @@ -947,15 +947,24 @@ omicron bryanh guest1 /para para -Client principals must have their productnamePostgreSQL/ database user -name as their first component, for example -literalpgusername@realm/. Alternatively, you can use a user name -mapping to map from the first component of the principal name to the -database user name. By default, the realm of the client is -not checked by productnamePostgreSQL/. If you have cross-realm -authentication enabled and need to verify the realm, use the -literalkrb_realm/ parameter, or enable literalinclude_realm/ -and use user name mapping to check the realm. +Client principals can be mapped to different productnamePostgreSQL/ +database user names with filenamepg_ident.conf/. For example, +literalpgusername@realm/ could be mapped to just literalpgusername/. +Alternatively, you can use the full literalusername@realm/ principal as +the role name in productnamePostgreSQL/ without any mapping.
+ /para + + para +productnamePostgreSQL/ also supports a parameter to strip the realm from +the principal. This method is supported for backwards compatibility and is +strongly discouraged as it is then impossible to distinguish different users +with the same username but coming from different realms. To enable this, +set literalinclude_realm/ to 0. For simple single-realm +installations, literalinclude_realm/ combined with the +literalkrb_realm/ parameter (which checks that the realm provided +matches exactly what is in the krb_realm parameter) would be a secure but +less capable option compared to specifying an explicit mapping in +filenamepg_ident.conf/. /para para @@ -997,10 +1006,13 @@ omicron bryanh guest1 termliteralinclude_realm/literal/term listitem para -If set to 1, the realm name from the
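[Editorial note: as an illustration of the mapping approach the patch recommends - a sketch only, with hypothetical map and realm names:]

```
# pg_hba.conf: GSS auth keeping the realm, with a user-name map
# TYPE  DATABASE  USER  ADDRESS     METHOD
host    all       all   0.0.0.0/0   gss  include_realm=1  map=myrealm

# pg_ident.conf: regex mapping that strips only the expected realm,
# so principals from other realms do not silently match
# MAPNAME  SYSTEM-USERNAME        PG-USERNAME
myrealm    /^(.*)@EXAMPLE\.COM$   \1
```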
Re: [HACKERS] Shouldn't CREATE TABLE LIKE copy the relhasoids property?
Bruce Momjian wrote: Agreed. Here is an attached patch for 9.6 which works for multiple LIKE'ed tables with multiple inheritance and index creation. I figured out why Tom's OID primary key test was failing, so I now process the columns and LIKE first, then the constraints. There is also no longer a dependency on default_with_oids.

It seems to me that waiting for 9.6 for what's arguably a bug fix is too much. It's not like this is a new feature. Why don't we just make sure it is as correct as possible and get it done for 9.5? It's not even in beta yet, nor feature freeze.

-- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
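[Editorial note: the behavior under discussion can be seen with something like the following sketch (WITH OIDS was still supported in this era):]

```sql
CREATE TABLE src (a int) WITH OIDS;

-- The bug: before the fix, the copied table did not pick up relhasoids
-- from src (unless default_with_oids happened to be set)
CREATE TABLE dst (LIKE src INCLUDING ALL);

-- Compare the two; with the fix, both should show relhasoids = true
SELECT relname, relhasoids FROM pg_class
WHERE relname IN ('src', 'dst');
```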
Re: [HACKERS] INSERT ... ON CONFLICT UPDATE/IGNORE 4.0, parser/executor stuff
On Mon, Apr 27, 2015 at 2:52 PM, Andres Freund and...@anarazel.de wrote: So, I'm looking. And I've a few questions:

* Why do we need to spread knowledge about speculative inserts that wide? It's now in 1) Query, 2) ParseState 3) ModifyTable 4) InsertStmt. That seems a bit wide - and as far as I see not really necessary. That e.g. transformUpdateStmt() has if (!pstate->p_is_speculative) blocks seems wrong.

Maybe it shouldn't be spelt p_is_speculative, because speculative insertion is a low-level mechanism. Maybe it should be something like p_is_auxiliary...but you don't like that word either. :-) If setTargetTable() took the RangeVar NULL-ness as a proxy for a child auxiliary query, and if free_parsestate() took a flag to indicate that it shouldn't perform a heap_close() when an auxiliary statement's parent must do it instead, then it wouldn't be necessary to add a p_is_speculative field. But that seems worse.

* afaics 'auxiliary query' isn't something we have under that name. This isn't really different from what wCTEs do, so I don't think we need this term here.

Well, I guess auxiliary is just a descriptive word, as opposed to one that describes a technical mechanism peculiar to Postgres/this patch. I don't feel that it's important either way.

* You refer to 'join like syntax' in a couple places. I don't really see the current syntax being that join like? Is that just a different perception of terminology or is that just remnants of earlier approaches?

No. It is join-like. It looks the same as and behaves similarly to UPDATE FROM ... . There is one difference, though -- excluded.* is not visible from RETURNING (because of the ambiguity of doing it the UPDATE way, where, as with a self-join, both tuples have identical column names. And because the INSERT RETURNING behavior should be dominant). That's all I mean by join-like.
People thought that looked nicer than a straight expression that operates on a Var (which FWIW is kind of what MySQL does), so that's the way it works.

* I am rather unconvinced we need the whole 'ExcludedVar' mechanism. I don't really see why we need it at all? Can't we 'just' make those plain vars referring to the normal targetlist entry? I feel like I'm missing something here.

If you allow multiple RTEs by not having the rewriter swap out EXCLUDED vars for ExcludedExpr(), what you end up with is an UPDATE subquery with a join (and, if you change nothing else in the patch, a segfault). Fixing that segfault is probably going to require more complexity in the optimizer, and certainly will require more in the executor. Imagine teaching ModifyTable nodes about rescan so that a row lock conflict can be handled from its parent (if we had a real join). Alternatively, maybe you could have the EPQ stuff install a tuple and then execute a single EvalPlanQualNext() against a scantuple (which you'd also have to install using EPQ). You can install multiple tuples, which is used for SELECT FOR UPDATE's EPQ stuff. But that seems very significantly more complicated than what I actually did. Think of ExcludedExpr as a way of pretending that an expression on the target's Vars is a Var referencing a distinct RTE, simply because people think that looks nicer. The EXCLUDED.* tuple legitimately originates from the same tuple context as the target tuple, which the structure of the code reflects.

* The whole dealing with the update clause doesn't seem super clean. I'm not sure yet how to do it nicer though :(

At least it isn't all that much code. I think that the main thing that this design has to recommend it is code re-use. The patch actually isn't all that big in terms of lines of code.

-- Peter Geoghegan
Re: [HACKERS] Proposal: knowing detail of config files via SQL
On 4/27/15 10:31 AM, Sawada Masahiko wrote: Thank you for your review comment! The latest patch is attached.

Looks good overall - a few more comments below:

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml +row + entrystructfieldseqno/structfield/entry + entrystructfieldinteger/structfield/entry + entrySequence number of current view/entry

I would suggest: Order in which the setting was loaded from the configuration

+/row +row + entrystructfieldname/structfield/entry + entrystructfieldtext/structfield/entry

diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c +/* + * show_all_file_settings + */ + +#define NUM_PG_FILE_SETTINGS_ATTS 5 + +Datum +show_all_file_settings(PG_FUNCTION_ARGS)

It would be good to get more detail in the function comment, as well as more comments in the function itself.

A minor thing, but there are a number of whitespace errors when applying the patch:

../000_pg_file_settings_view_v6.patch:159: indent with spaces. free(guc_file_variables[i].name);
../000_pg_file_settings_view_v6.patch:160: indent with spaces. free(guc_file_variables[i].value);
../000_pg_file_settings_view_v6.patch:161: indent with spaces. free(guc_file_variables[i].filename);
../000_pg_file_settings_view_v6.patch:178: indent with spaces. guc_array->name = guc_strdup(FATAL, item->name);
../000_pg_file_settings_view_v6.patch:179: indent with spaces. guc_array->value = guc_strdup(FATAL, item->value);
warning: squelched 2 whitespace errors
warning: 7 lines add whitespace errors.

I'm sure the committer would appreciate it if you'd clean those up.

-- - David Steele da...@pgmasters.net
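[Editorial note: once applied, the proposed view could be queried along these lines. A sketch against the patch as described in this thread - only the seqno and name columns appear in the excerpt above, and the committed form may differ:]

```sql
-- Every setting as read from the configuration files, in load order
SELECT * FROM pg_file_settings ORDER BY seqno;

-- Settings defined more than once across the files; the entry with the
-- highest seqno is the one that takes effect
SELECT name, count(*)
FROM pg_file_settings
GROUP BY name
HAVING count(*) > 1;
```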