Add an assert to the length of shared hashtable name

2022-09-25 Thread Xiaoran Wang
 The maximum size of a shared memory hash table name is SHMEM_INDEX_KEYSIZE - 1
 (shared hash table names are stored and indexed by the ShmemIndex hash table,
 whose key size is SHMEM_INDEX_KEYSIZE). But when a caller uses a longer
 hash table name, no error is reported; the first SHMEM_INDEX_KEYSIZE chars
 are silently used as the hash table name.

 When several hash table names share a prefix longer than
 (SHMEM_INDEX_KEYSIZE - 1), problems arise: those hash tables are actually
 created as one and the same hash table, whose name is the truncated prefix.
 So add an assert to prevent this.


fix_shmem_hash_table_name_length.patch
Description: fix_shmem_hash_table_name_length.patch


RE: Data is copied twice when specifying both child and parent table in publication

2022-09-25 Thread wangw.f...@fujitsu.com
On Mon, Sep 26, 2022 at 10:31 AM Osumi, Takamichi/大墨 昂道 
 wrote:
> Hi, thank you for updating the patchset.
> 
> 
> FYI, I noticed that the patch for head is no longer applicable.

Thanks for your kind reminder and comment.

> $ git apply --check HEAD_v10-0001-Fix-data-replicated-twice-when-specifying-
> publis.patch
> error: patch failed: src/backend/catalog/pg_publication.c:1097
> error: src/backend/catalog/pg_publication.c: patch does not apply

Rebased the patch based on the changes in HEAD (13a185f).

> Also, one minor comment on the change in src/include/catalog/pg_proc.dat.
> 
>  # publications
> -{ oid => '6119', descr => 'get information of tables in a publication',
> -  proname => 'pg_get_publication_tables', prorows => '1000', proretset => 
> 't',
> -  provolatile => 's', prorettype => 'record', proargtypes => 'text',
> -  proallargtypes => '{text,oid,int2vector,pg_node_tree}',
> -  proargmodes => '{i,o,o,o}', proargnames => '{pubname,relid,attrs,qual}',
> +{ oid => '6119',
> +  descr => 'get information of the tables belonging to the specified
> publications.',
> 
> Please remove the period at the end of the 'descr' string.
> It seems we don't write one in this file, and removing it makes the entries
> more consistent.

Improved as suggested.
Also modified the description to be consistent with the comments atop the
function pg_get_publication_tables.

Attached are the new patches.

Regards,
Wang wei


HEAD_v11-0001-Fix-data-replicated-twice-when-specifying-publis.patch
Description:  HEAD_v11-0001-Fix-data-replicated-twice-when-specifying-publis.patch


REL15_v11-0001-Fix-data-replicated-twice-when-specifying-publis_patch
Description:  REL15_v11-0001-Fix-data-replicated-twice-when-specifying-publis_patch


REL14_v11-0001-Fix-data-replicated-twice-when-specifying-publis_patch
Description:  REL14_v11-0001-Fix-data-replicated-twice-when-specifying-publis_patch


REL13_v11-0001-Fix-data-replicated-twice-when-specifying-publis_patch
Description:  REL13_v11-0001-Fix-data-replicated-twice-when-specifying-publis_patch


Re: Removing unused parameter in SnapBuildGetOrBuildSnapshot

2022-09-25 Thread Amit Kapila
On Fri, Sep 23, 2022 at 8:58 AM Amit Kapila  wrote:
>
> On Wed, Sep 21, 2022 at 8:11 PM Zhang Mingli  wrote:
> >
> > On Sep 21, 2022, 22:22 +0800, Melih Mutlu , wrote:
> >
> > Hi hackers,
> >
> > Sharing a small patch to remove an unused parameter in 
> > SnapBuildGetOrBuildSnapshot function in snapbuild.c
> >
> >
> > +1, Good Catch.
> >
>
> LGTM.
>

Pushed.

-- 
With Regards,
Amit Kapila.




Re: Add common function ReplicationOriginName.

2022-09-25 Thread Peter Smith
On Wed, Sep 21, 2022 at 8:22 PM Peter Smith  wrote:
>
...
> > > On Wed, Sep 21, 2022 at 3:23 PM Amit Kapila  
> > > wrote:
> > > ...
> > >
> > > > Can't we use the existing function ReplicationOriginNameForTablesync()
> > > > by passing relid as InvalidOid for this purpose? We need a check
> > > > inside to decide which name to construct, otherwise, it should be
> > > > fine. If we agree with this, then we can change the name of the
> > > > function to something like ReplicationOriginNameForLogicalRep or
> > > > ReplicationOriginNameForLogicalRepWorkers.
> > > >
...
>
> OK, I can unify the 2 functions as you suggested. I will post another
> patch in a few days.
>

PSA patch v3, combining the different replication origin name
formatting into a single function, ReplicationOriginNameForLogicalRep, as
suggested.

--
Kind Regards,
Peter Smith.
Fujitsu Australia


v3-0001-Add-common-function-ReplicationOriginNameForLogic.patch
Description: Binary data


RE: Perform streaming logical transactions by background workers and parallel apply

2022-09-25 Thread wangw.f...@fujitsu.com
On Thu, Sep 22, 2022 at 18:12 Amit Kapila  wrote:
> Few comments on v33-0001
> ===

Thanks for your comments.

> 1.
> + else if (data->streaming == SUBSTREAM_PARALLEL &&
> + data->protocol_version <
> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)
> + ereport(ERROR,
> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> + errmsg("requested proto_version=%d does not support
> streaming=parallel mode, need %d or higher",
> + data->protocol_version,
> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)));
> 
> I think we can improve this error message as: "requested
> proto_version=%d does not support parallel streaming mode, need %d or
> higher".

Improved.

> 2.
> --- a/doc/src/sgml/monitoring.sgml
> +++ b/doc/src/sgml/monitoring.sgml
> @@ -3184,7 +3184,7 @@ SELECT pid, wait_event_type, wait_event FROM
> pg_stat_activity WHERE wait_event i
>
>
> OID of the relation that the worker is synchronizing; null for the
> -   main apply worker
> +   main apply worker and the apply parallel worker
>
>   
> 
> This and other changes in monitoring.sgml refer to the workers as "apply
> parallel worker". Isn't it better to use "parallel apply worker", as we
> do at other places in the patch? But I have another question: do we
> really need to display entries for parallel apply workers in
> pg_stat_subscription if they don't have any meaningful information? I
> think we can easily avoid it in pg_stat_get_subscription by checking
> apply_leader_pid.

Makes sense. Improved as suggested.
After applying the 0001 patch, parallel apply worker information is no
longer displayed in this view; entries for parallel apply workers are
displayed after applying the 0005 patch.

> 3.
> ApplyWorkerMain()
> {
> ...
> ...
> +
> + if (server_version >= 16 &&
> + MySubscription->stream == SUBSTREAM_PARALLEL)
> + options.proto.logical.streaming = pstrdup("parallel");
> 
> After deciding here whether the parallel streaming mode is enabled or
> not, we recheck the same thing in apply_handle_stream_abort() and
> parallel_apply_can_start(). In parallel_apply_can_start(), we do it
> via two different checks. How about storing this information say in
> structure MyLogicalRepWorker in ApplyWorkerMain() and then use it at
> other places?

Improved as suggested.
Added a new flag "in_parallel_apply" to the structure MyLogicalRepWorker.

Because the patch set no longer applied cleanly, I rebased it and am sharing
it for review.
I have not yet addressed the comments you posted in [1]. I will share a new
patch set when I finish them.

The new patches were attached in [2].

[1] - 
https://www.postgresql.org/message-id/CAA4eK1KjGNA8T8O77rRhkv6bRT6OsdQaEy--2hNrJFCc80bN0A%40mail.gmail.com
[2] - 
https://www.postgresql.org/message-id/OS3PR01MB6275F4A7CA186412E2FF2ED29E529%40OS3PR01MB6275.jpnprd01.prod.outlook.com

Regards,
Wang wei


Re: Refactor backup related code (was: Is it correct to say, "invalid data in file \"%s\"", BACKUP_LABEL_FILE in do_pg_backup_stop?)

2022-09-25 Thread Michael Paquier
On Fri, Sep 23, 2022 at 09:12:22PM +0900, Michael Paquier wrote:
> I've read this one, and nothing is standing out at a quick glance, so
> that looks rather reasonable to me.  I should be able to spend more
> time on that at the beginning of next week, and maybe apply it.

What I had at hand seemed fine on a second look, so applied after
tweaking a couple of comments.  One thing that I have been wondering
after-the-fact is whether it would be cleaner to return the contents
of the backup history file or backup_label as a char * rather than a
StringInfo?  This simplifies a bit what the callers of
build_backup_content() need to do.
--
Michael
diff --git a/src/include/access/xlogbackup.h b/src/include/access/xlogbackup.h
index cb15b8b80a..8ec3d88b0a 100644
--- a/src/include/access/xlogbackup.h
+++ b/src/include/access/xlogbackup.h
@@ -15,7 +15,6 @@
 #define XLOG_BACKUP_H
 
 #include "access/xlogdefs.h"
-#include "lib/stringinfo.h"
 #include "pgtime.h"
 
 /* Structure to hold backup state. */
@@ -36,7 +35,7 @@ typedef struct BackupState
 	pg_time_t	stoptime;		/* backup stop time */
 } BackupState;
 
-extern StringInfo build_backup_content(BackupState *state,
-	   bool ishistoryfile);
+extern char *build_backup_content(BackupState *state,
+  bool ishistoryfile);
 
 #endif			/* XLOG_BACKUP_H */
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 7606ee128a..1dd6df0fe1 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -8711,7 +8711,7 @@ do_pg_backup_stop(BackupState *state, bool waitforarchive)
 	}
 	else
 	{
-		StringInfo	history_file;
+		char	   *history_file;
 
 		/*
 		 * Write the backup-end xlog record
@@ -8751,8 +8751,7 @@ do_pg_backup_stop(BackupState *state, bool waitforarchive)
 
 		/* Build and save the contents of the backup history file */
 		history_file = build_backup_content(state, true);
-		fprintf(fp, "%s", history_file->data);
-		pfree(history_file->data);
+		fprintf(fp, "%s", history_file);
 		pfree(history_file);
 
 		if (fflush(fp) || ferror(fp) || FreeFile(fp))
diff --git a/src/backend/access/transam/xlogbackup.c b/src/backend/access/transam/xlogbackup.c
index c01c1f9010..90b5273b02 100644
--- a/src/backend/access/transam/xlogbackup.c
+++ b/src/backend/access/transam/xlogbackup.c
@@ -23,15 +23,16 @@
  * When ishistoryfile is true, it creates the contents for a backup history
  * file, otherwise it creates contents for a backup_label file.
  *
- * Returns the result generated as a palloc'd StringInfo.
+ * Returns the result generated as a palloc'd string.
  */
-StringInfo
+char *
 build_backup_content(BackupState *state, bool ishistoryfile)
 {
 	char		startstrbuf[128];
 	char		startxlogfile[MAXFNAMELEN]; /* backup start WAL file */
 	XLogSegNo	startsegno;
 	StringInfo	result = makeStringInfo();
+	char	   *data;
 
 	Assert(state != NULL);
 
@@ -76,5 +77,8 @@ build_backup_content(BackupState *state, bool ishistoryfile)
 		appendStringInfo(result, "STOP TIMELINE: %u\n", state->stoptli);
 	}
 
-	return result;
+	data = result->data;
+	pfree(result);
+
+	return data;
 }
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index f724b18733..a801a94fe8 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -130,7 +130,7 @@ pg_backup_stop(PG_FUNCTION_ARGS)
 	Datum		values[PG_BACKUP_STOP_V2_COLS] = {0};
 	bool		nulls[PG_BACKUP_STOP_V2_COLS] = {0};
 	bool		waitforarchive = PG_GETARG_BOOL(0);
-	StringInfo	backup_label;
+	char	   *backup_label;
 	SessionBackupState status = get_backup_status();
 
 	/* Initialize attributes information in the tuple descriptor */
@@ -153,7 +153,7 @@ pg_backup_stop(PG_FUNCTION_ARGS)
 	backup_label = build_backup_content(backup_state, false);
 
 	values[0] = LSNGetDatum(backup_state->stoppoint);
-	values[1] = CStringGetTextDatum(backup_label->data);
+	values[1] = CStringGetTextDatum(backup_label);
 	values[2] = CStringGetTextDatum(tablespace_map->data);
 
 	/* Deallocate backup-related variables */
@@ -162,7 +162,6 @@ pg_backup_stop(PG_FUNCTION_ARGS)
 	pfree(tablespace_map->data);
 	pfree(tablespace_map);
 	tablespace_map = NULL;
-	pfree(backup_label->data);
 	pfree(backup_label);
 
 	/* Returns the record as Datum */
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index 495bbb506a..411cac9be3 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -317,15 +317,14 @@ perform_base_backup(basebackup_options *opt, bbsink *sink)
 			{
 struct stat statbuf;
 bool		sendtblspclinks = true;
-StringInfo	backup_label;
+char	   *backup_label;
 
 bbsink_begin_archive(sink, "base.tar");
 
 /* In the main tar, include the backup_label first... */
 backup_label = build_backup_content(backup_state, false);
sendFileWithContent(sink, BACKUP_LABEL_FILE,
-	backup_label->data, &manifest);
-pfree(backup_label->data);
+

RE: Data is copied twice when specifying both child and parent table in publication

2022-09-25 Thread osumi.takami...@fujitsu.com
On Tuesday, September 20, 2022 3:18 PM Wang, Wei/王 威  
wrote:
> Rebased the patch based on the changes in HEAD (20b6847).
> Attach the new patches.
Hi, thank you for updating the patchset.


FYI, I noticed that the patch for head is no longer applicable.

$ git apply --check 
HEAD_v10-0001-Fix-data-replicated-twice-when-specifying-publis.patch
error: patch failed: src/backend/catalog/pg_publication.c:1097
error: src/backend/catalog/pg_publication.c: patch does not apply


Also, one minor comment on the change in src/include/catalog/pg_proc.dat.

 # publications
-{ oid => '6119', descr => 'get information of tables in a publication',
-  proname => 'pg_get_publication_tables', prorows => '1000', proretset => 't',
-  provolatile => 's', prorettype => 'record', proargtypes => 'text',
-  proallargtypes => '{text,oid,int2vector,pg_node_tree}',
-  proargmodes => '{i,o,o,o}', proargnames => '{pubname,relid,attrs,qual}',
+{ oid => '6119',
+  descr => 'get information of the tables belonging to the specified 
publications.',

Please remove the period at the end of the 'descr' string.
It seems we don't write one in this file, and removing it makes the entries
more consistent.



Best Regards,
Takamichi Osumi



Re: Refactor backup related code (was: Is it correct to say, "invalid data in file \"%s\"", BACKUP_LABEL_FILE in do_pg_backup_stop?)

2022-09-25 Thread Michael Paquier
On Thu, Sep 22, 2022 at 08:25:31AM -0700, Andres Freund wrote:
> Due to the merge of the meson based build this patch needs some
> adjustment. See
> https://cirrus-ci.com/build/6146162607521792
> Looks like it just requires adding xlogbackup.c to
> src/backend/access/transam/meson.build.

Thanks for the reminder.  I have played a bit with meson and ninja,
and that's a rather straightforward experience.  The output is much
nicer to parse.
--
Michael


signature.asc
Description: PGP signature


Re: Allow foreign keys to reference a superset of unique columns

2022-09-25 Thread James Coleman
Hello Kaiting,

The use case you're looking to handle seems interesting to me.

On Wed, Jul 27, 2022 at 3:11 PM Tom Lane  wrote:
>
> Kaiting Chen  writes:
> > I'd like to propose a change to PostgreSQL to allow the creation of a 
> > foreign
> > key constraint referencing a superset of uniquely constrained columns.
>
> TBH, I think this is a fundamentally bad idea and should be rejected
> outright.  It fuzzes the semantics of the FK relationship, and I'm
> not convinced that there are legitimate use-cases.  Your example
> schema could easily be dismissed as bad design that should be done
> some other way.
>
> For one example of where the semantics get fuzzy, it's not
> very clear how the extra-baggage columns ought to participate in
> CASCADE updates.  Currently, if we have
>CREATE TABLE foo (a integer PRIMARY KEY, b integer);
> then an update that changes only foo.b doesn't need to update
> referencing tables, and I think we even have optimizations that
> assume that if no unique-key columns are touched then RI checks
> need not be made.  But if you did
>CREATE TABLE bar (x integer, y integer,
>  FOREIGN KEY (x, y) REFERENCES foo(a, b) ON UPDATE 
> CASCADE);
> then perhaps you expect bar.y to be updated ... or maybe you don't?
>
> Another example is that I think the idea is only well-defined when
> the subset column(s) are a primary key, or at least all marked NOT NULL.
> Otherwise they're not as unique as you're claiming.  But then the FK
> constraint really has to be dependent on a PK constraint not just an
> index definition, since indexes in themselves don't enforce not-nullness.
> That gets back to Peter's complaint that referring to an index isn't
> good enough.
>
> Anyway, seeing that the patch touches neither ri_triggers.c nor any
> regression tests, I find it hard to believe that such semantic
> questions have been thought through.
>
> It's also unclear to me how this ought to interact with the
> information_schema views concerning foreign keys.  We generally
> feel that we don't want to present any non-SQL-compatible data
> in information_schema, for fear that it will confuse applications
> that expect to see SQL-spec behavior there.  So do we leave such
> FKs out of the views altogether, or show only the columns involving
> the associated unique constraint?  Neither answer seems pleasant.
>
> FWIW, the patch is currently failing to apply per the cfbot.
> I think you don't need to manually update backend/nodes/ anymore,
> but the gram.y changes look to have been sideswiped by some
> other recent commit.

As I was reading through the email chain I had this thought: could you
get the same benefit (or 90% of it anyway) by instead allowing the
creation of a uniqueness constraint that contains more columns than
the index backing it? So long as the backing index still guaranteed
uniqueness on a subset of the columns, that would seem to be safe.
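
For illustration, a sketch in purely hypothetical syntax (today the index
behind a unique constraint must cover exactly the constraint's columns, so
none of this is valid yet):

CREATE TABLE foo (a integer NOT NULL, b integer);
CREATE UNIQUE INDEX foo_a_idx ON foo (a);
-- Hypothetical: a two-column constraint enforced by a one-column index.
ALTER TABLE foo ADD CONSTRAINT foo_a_b_key
  UNIQUE (a, b) USING INDEX foo_a_idx;

A foreign key could then reference foo (a, b) without requiring a separate
unique index on both columns.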

Tom notes that the additional columns being nullable is something to think
about. But if we go the route of allowing unique constraints whose backing
index covers only a subset of the constraint's columns, I don't think
nullability is an issue, since it's already the case that a unique
constraint can be declared on nullable columns. Indeed, we already support
a foreign key constraint backed by a unique constraint that includes
nullable columns.
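
For example, this much is already accepted today (a minimal sketch):

CREATE TABLE foo (a integer NOT NULL, b integer, UNIQUE (a, b));
CREATE TABLE bar (x integer, y integer,
  FOREIGN KEY (x, y) REFERENCES foo (a, b));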

Because such an approach would avoid changing the foreign key code (or
semantics) at all, I believe it would address Tom's concerns about
information_schema and fuzziness of semantics.

After writing down that idea I noticed Wolfgang Walther had commented
similarly, but it appears that idea got lost (or at least was never
responded to).

I'd be happy to sign up to review an updated patch if you're
interested in continuing this effort. If so, could you register the
patch in the CF app (if not there already)?

Thanks,
James Coleman




postgres_fdw uninterruptible during connection establishment / ProcSignalBarrier

2022-09-25 Thread Andres Freund
Hi,

I am working on adding an "installcheck" equivalent mode to the meson
build. One invocation of something like "installcheck-world" led to things
getting stuck.

Lots of log messages like:

2022-09-25 16:16:30.999 PDT [2705454][client 
backend][28/1112:41269][pg_regress] LOG:  still waiting for backend with PID 
2705178 to accept ProcSignalBarrier
2022-09-25 16:16:30.999 PDT [2705454][client 
backend][28/1112:41269][pg_regress] STATEMENT:  DROP DATABASE IF EXISTS 
"regression_test_parser_regress"
2022-09-25 16:16:31.006 PDT [2705472][client 
backend][22/3699:41294][pg_regress] LOG:  still waiting for backend with PID 
2705178 to accept ProcSignalBarrier
2022-09-25 16:16:31.006 PDT [2705472][client 
backend][22/3699:41294][pg_regress] STATEMENT:  DROP DATABASE IF EXISTS 
"regression_test_predtest_regress"

a stacktrace of 2705178 shows:

(gdb) bt
#0  0x7f67d26fe1b3 in __GI___poll (fds=fds@entry=0x7ffebe187c88, 
nfds=nfds@entry=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x7f67cfd03c1c in pqSocketPoll (sock=, 
forRead=forRead@entry=1, forWrite=forWrite@entry=0, end_time=end_time@entry=-1)
at 
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-misc.c:1125
#2  0x7f67cfd04310 in pqSocketCheck (conn=conn@entry=0x562f875a9b70, 
forRead=forRead@entry=1, forWrite=forWrite@entry=0, end_time=end_time@entry=-1)
at 
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-misc.c:1066
#3  0x7f67cfd043fd in pqWaitTimed (forRead=forRead@entry=1, 
forWrite=forWrite@entry=0, conn=conn@entry=0x562f875a9b70, 
finish_time=finish_time@entry=-1)
at ../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-misc.c:998
#4  0x7f67cfcfc47b in connectDBComplete (conn=conn@entry=0x562f875a9b70) at 
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-connect.c:2166
#5  0x7f67cfcfe248 in PQconnectdbParams 
(keywords=keywords@entry=0x562f87613d20, values=values@entry=0x562f87613d70, 
expand_dbname=expand_dbname@entry=0)
at 
../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-connect.c:659
#6  0x7f67cfd29536 in connect_pg_server 
(server=server@entry=0x562f876139b0, user=user@entry=0x562f87613980)
at 
../../../../home/andres/src/postgresql/contrib/postgres_fdw/connection.c:474
#7  0x7f67cfd29910 in make_new_connection 
(entry=entry@entry=0x562f8758b2c8, user=user@entry=0x562f87613980)
at 
../../../../home/andres/src/postgresql/contrib/postgres_fdw/connection.c:344
#8  0x7f67cfd29da0 in GetConnection (user=0x562f87613980, 
will_prep_stmt=will_prep_stmt@entry=false, state=state@entry=0x562f876136a0)
at 
../../../../home/andres/src/postgresql/contrib/postgres_fdw/connection.c:204
#9  0x7f67cfd35294 in postgresBeginForeignScan (node=0x562f87612e70, 
eflags=)


and it turns out that backend can't be gracefully terminated.


Maybe I am missing something, but I don't think it's OK for
connect_pg_server() to connect in a blocking manner, without accepting
interrupts?

Greetings,

Andres Freund




Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

2022-09-25 Thread Justin Pryzby
On Sun, Sep 25, 2022 at 04:50:54PM -0400, Hamid Akhtar wrote:
> On Sun, 28 Aug 2022 at 17:30, Nathan Bossart  wrote:
> 
> > On Thu, Aug 25, 2022 at 10:12:00AM +0530, Bharath Rupireddy wrote:
> > > Please review the v12 patch attached.
> >
> > LGTM.  I've marked this as ready-for-committer.
> >
> 
> This patch needs an update. It is failing on Windows for the test case
> "33/238 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade" with an
> "exit status 2" error.
> 
> The details are available on cfbot.cputube.org:
> https://cirrus-ci.com/task/5709014662119424?logs=check_world#L267

It failed like:
https://api.cirrus-ci.com/v1/artifact/task/5709014662119424/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade
[19:35:31.017](76.307s) ok 10 - run of pg_upgrade for new instance
[19:35:31.017](0.000s) not ok 11 - pg_upgrade_output.d/ removed after 
pg_upgrade success
[19:35:31.018](0.001s) #   Failed test 'pg_upgrade_output.d/ removed after 
pg_upgrade success'
#   at C:/cirrus/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 261.

This is a transient failure; it normally passes:
https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3389

Actually, it's the same problem as reported here:
20220919213217.ptqfdlcc5idk5...@awork3.anarazel.de

I don't think there's anything in place to help scrape the CirrusCI
logs, but I suppose other patches have had/will have this same failure.

I don't know, but maybe meson made this worse by exercising/stressing
concurrent filesystem operations.

-- 
Justin




Re: First draft of the PG 15 release notes

2022-09-25 Thread Justin Pryzby
On Fri, Sep 23, 2022 at 01:33:07PM -0400, Tom Lane wrote:
> "Jonathan S. Katz"  writes:
> > On 9/23/22 11:25 AM, Tom Lane wrote:
> >> I'm planning to do a final(?) pass over the v15 notes today,
> >> but I thought it'd be appropriate to push this separately.
> 
> > RE "final pass", there's still an errant "BACKPATCHED"[1] that still 
> > needs addressing. I didn't have a chance to verify if it was indeed 
> > backpatched.
> 
> Yeah, that one indeed needs removed (and I've done so).  I see a
> few other places where Bruce left notes about things that need more
> clarification.  I'm just finishing a pass of "update for subsequent
> commits", and then I'll start on copy-editing.

Some possible changes for your consideration.

+Store pg_upgrade's log and
+temporary files in a subdirectory of the new cluster called
+pg_upgrade_output.d (Justin Pryzby)

+Previously such files were left in the current directory,
+requiring manual cleanup.  It's still necessary to remove them
+manually afterwards, but now one can just remove that whole
+subdirectory.

If pg_upgrade succeeds, then it removes the dir itself (so it's not
"necessary").

And if it fails after starting to restore the schema, then it's
necessary to remove not the "subdirectory" but the whole new-cluster
dir.

+Make pg_upgrade preserve tablespace
+and database OIDs, as well as table relfilenode numbers

s/table/relation/ ?

You changed this to use spaces:
|The new setting is log_destination = jsonlog.
but then left these without spaces:
|and wal_level=minimal.
|This is enabled via --compress=lz4 and requires

+   value, use the transaction start time not wall clock time to

s/not/rather than/ ?

+Adjust psql so that Readline's

should use Readline ?

+Previously a pound marker was inserted, but that's pretty
+unhelpful in SQL.

This sounds more like a candid commit message than a release note.

+Improve performance of dumping databases with many objects

s/of/when/ ?

+   New options are server to write the

*The* new options

+In some cases a partition child table could appear more than once.

Technically "partition child table" is redundant

-- 
Justin




Re: add checkpoint stats of snapshot and mapping files of pg_logical dir

2022-09-25 Thread Hamid Akhtar
On Sun, 28 Aug 2022 at 17:30, Nathan Bossart 
wrote:

> On Thu, Aug 25, 2022 at 10:12:00AM +0530, Bharath Rupireddy wrote:
> > Please review the v12 patch attached.
>
> LGTM.  I've marked this as ready-for-committer.
>

This patch needs an update. It is failing on Windows for the test case
"33/238 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade" with an
"exit status 2" error.

The details are available on cfbot.cputube.org:
https://cirrus-ci.com/task/5709014662119424?logs=check_world#L267

--
Hamid Akhtar,
Percona LLC, www.percona.com


Re: pg_basebackup --create-slot-if-not-exists?

2022-09-25 Thread Michael Banck
Hi,

On Wed, Sep 21, 2022 at 09:12:04PM -0400, Tom Lane wrote:
> In any case I agree with the point that --create-slot seems
> rather obsolete.  If you are trying to resume in a previous
> replication stream (which seems like the point of persistent
> slots) then the slot had better already exist.  If you are
> satisfied with just starting replication from the current
> instant, then a temp slot seems like what you want.

One advantage of using a permanent slot is that it gets written
into the recovery configuration when you use --write-recovery-conf, so
you only need to start the standby after the initial bootstrap to have it
connect using the slot.

Not sure that's worth keeping it around for, but it makes automating things
somewhat simpler, I guess. I do somewhat agree with the thread starter
that --create-slot-if-not-exists would make things even easier, but in
light of your concerns regarding security it's probably not the best
idea and would make things even more convoluted than they are now.


Michael




Re: [RFC] building postgres with meson - v13

2022-09-25 Thread Andres Freund
Hi,

On 2022-09-24 17:33:49 -0700, Peter Geoghegan wrote:
> On Sat, Sep 24, 2022 at 5:13 PM Andres Freund  wrote:
> > > One more question about this, that wasn't covered by the Wiki page: is
> > > there some equivalent to "make installcheck" with meson builds?
> >
> > Not yet. Nothing impossible, just not done yet. Partially because 
> > installcheck
> > is so poorly defined (run against an already running server for pg_regress 
> > vs
> > using "system" installed binaries for tap tests).
> 
> Got it. I can work around that by just having an old autoconf-based
> vpath build directory. I'll need to do this when I run Valgrind.
> 
> My workaround would be annoying if I needed to run "installcheck"
> anywhere near as frequently as I run "make check-world". But that
> isn't the case. meson delivers a significant improvement in the metric
> that really matters to me, so I can't really complain.

My gut feeling is that we should use this opportunity to split 'installcheck'
into two: "test a running server" and "test installed binaries".
I think the cleanest way to do this with meson would be to utilize meson
tests's "setups".
$ meson test --setup 'running-server'
would run all [selected] tests compatible with running against a running
server. And
$ meson test --setup 'installed'
would test installed binaries.


Greetings,

Andres Freund




Re: Allow foreign keys to reference a superset of unique columns

2022-09-25 Thread James Coleman
On Sun, Sep 25, 2022 at 4:49 AM Wolfgang Walther
 wrote:
>
> James Coleman:
> > If I'm following properly this sounds like an overengineered EAV
> > schema, and neither of those things inspires me to think "this is a
> > use case I want to support".
> >
> > That being said, I know that sometimes examples that have been
> > abstracted enough to share aren't always the best, so perhaps there's
> > something underlying this that's a more valuable example.
>
> Most of my use-cases are slightly denormalized, and I was looking for an
> example that didn't require those kinds of FKs only because of the
> denormalization. So that's why it might have been a bit artificial or
> abstracted too much.
>
> Take another example: I deal with multi-tenancy systems, for which I
> want to use RLS to separate the data between tenants:
>
> CREATE TABLE tenants (tenant INT PRIMARY KEY);
>
> Each tenant can create multiple users and groups:
>
> CREATE TABLE users (
>"user" INT PRIMARY KEY,
>tenant INT NOT NULL REFERENCES tenants
> );
>
> CREATE TABLE groups (
>"group" INT PRIMARY KEY,
>tenant INT NOT NULL REFERENCES tenants
> );
>
> Users can be members of groups. The simple approach would be:
>
> CREATE TABLE members (
>PRIMARY KEY ("user", "group"),
>"user" INT REFERENCES users,
>"group" INT REFERENCES groups
> );
>
> But this falls short in two aspects:
> - To make RLS policies simpler to write and quicker to execute, I want
> to add "tenant" columns to **all** other tables. A slightly denormalized
> schema for efficiency.
> - The schema above does not ensure that users can only be members in
> groups of the same tenant. Our business model requires to separate
> tenants cleanly, but as written above, cross-tenant memberships would be
> allowed.
>
> In comes the "tenant" column which solves both of this:
>
> CREATE TABLE members (
>PRIMARY KEY ("user", "group"),
>tenant INT REFERENCES tenants,
>"user" INT,
>"group" INT,
>FOREIGN KEY ("user", tenant) REFERENCES users ("user", tenant),
>FOREIGN KEY ("group", tenant) REFERENCES groups ("group", tenant)
> );
>
> This is not possible to do right now, without adding more UNIQUE
> constraints to the users and groups tables - on a superset of already
> unique columns.

Thanks, that's a more interesting use case IMO (and doesn't smell in
the way the other did).

> >> bar.y is a little bit like a generated value in that sense, it should
> >> always match foo.b. I think it would be great, if we could actually go a
> >> step further, too: On an update to bar.x to a new value, if foo.a=bar.x
> >> exists, I would like to set bar.y automatically to the new foo.b.
> >> Otherwise those kind of updates always have to either query foo before,
> >> or add a trigger to do the same.
> >
> > Isn't this actually contradictory to the behavior you currently have
> > with a multi-column foreign key? In the example above then an update
> > to bar.x is going to update the rows in foo that match bar.x = foo.a
> > and bar.y = foo.b *using the old values of bar.x and bar.y* to be the
> > new values.
>
> No, I think there was a misunderstanding. An update to bar should not
> update rows in foo. An update to bar.x should update bar.y implicitly,
> to match the new value of foo.b.
>
> > You seem to be suggesting that instead it should look for
> > other rows that already match the *new value* of only one of the
> > columns in the constraint.
>
> Yes. I think basically what I'm suggesting is that for an FK to a
> superset of unique columns, all the FK-logic should still be done on the
> already unique set of columns only - and then the additional columns
> should be mirrored into the referencing table. The referencing table can
> then put additional constraints on this column. In the members example
> above, this additional constraint is the fact that the tenant column
> can't be filled with two different values for the users and groups FKs.

If we have a declared constraint on (x, y) where x is unique based on an
index on x alone, I do not think that FK constraint should work
differently from a constraint on (x, y) where there is a unique index on
(x, y). That would be incredibly confusing behavior
(even if it would be useful for some specific use case).

> But this could also be a CHECK constraint to allow FKs only to a subset
> of rows in the target table:

Are you suggesting a check constraint that queries another table?
Because check constraints are not supposed to do that (I assume this
is technically possible to declare via a function, just like it is
technically possible to do in a functional index, but like in the
index case it's a bad idea since it's not actually guaranteed).

> CREATE TYPE foo_type AS ENUM ('A', 'B', 'C');
>
> CREATE TABLE foo (
>f INT PRIMARY KEY,
>type foo_type
> );
>
> CREATE TABLE bar (
>b INT PRIMARY KEY,
>f INT,
>ftype foo_type CHECK (ftype <> 'C'),
>FOREIGN KEY (f, ftype) REFERENCES foo (f, 

Re: Add ON CONFLICT DO RETURN clause

2022-09-25 Thread Peter Geoghegan
On Sun, Sep 25, 2022 at 8:55 AM Wolfgang Walther
 wrote:
> The attached patch adds a DO RETURN clause to be able to do this:
>
> INSERT INTO x (id) VALUES (1)
>ON CONFLICT DO RETURN
>RETURNING created_at;
>
> Much simpler. This will either insert or do nothing - but in both cases
> return a row.

How can you tell which it was, though?

I don't see why this statement should ever perform steps for any row
that are equivalent to DO NOTHING processing -- it should at least
lock each and every affected row, if only to conclusively determine
that there really must be a conflict.

In general ON CONFLICT DO UPDATE allows the user to add a WHERE clause
to back out of updating a row based on an arbitrary predicate. DO
NOTHING has no such WHERE clause. So DO NOTHING quite literally does
nothing for any rows that had conflicts, unlike DO UPDATE, which will
at the very least lock the row (with or without an explicit WHERE
clause).
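
For instance, a sketch of such a predicate, using a hypothetical table t
with a unique id column:

INSERT INTO t (id, val) VALUES (1, 'new')
  ON CONFLICT (id) DO UPDATE
  SET val = EXCLUDED.val
  WHERE t.val IS DISTINCT FROM EXCLUDED.val;

Even when the WHERE clause backs out of the update, the conflicting row has
already been locked.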

The READ COMMITTED behavior for DO NOTHING is a bit iffy, even
compared to DO UPDATE, but the advantages in bulk loading scenarios
can be decisive. Or at least they were before we had MERGE.

-- 
Peter Geoghegan




Re: tweak to a few index tests to hits ambuildempty() routine.

2022-09-25 Thread Tom Lane
Alvaro Herrera  writes:
> On 2022-Sep-25, Tom Lane wrote:
>> I propose that we revert 4fb5c794e and instead add separate test
>> cases that just create unlogged indexes (I guess they don't actually
>> need to *do* anything with them?).

> WFM.  I can do it next week, or feel free to do so if you want.

On it now.

regards, tom lane




Re: tweak to a few index tests to hits ambuildempty() routine.

2022-09-25 Thread Alvaro Herrera
On 2022-Sep-25, Tom Lane wrote:

> That's what it's saying *now*, but after rereading this whole thread
> I see that it apparently said something different last week.  So the
> coverage is probabilistic, which squares with this discussion and
> with some tests I just did locally.  That's not good.

Completely agreed.

> I propose that we revert 4fb5c794e and instead add separate test
> cases that just create unlogged indexes (I guess they don't actually
> need to *do* anything with them?).

WFM.  I can do it next week, or feel free to do so if you want.

> Looks like dec8ad367 could be reverted as well, in view of 2f2e24d90.

Yeah, sounds good.

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/




Add ON CONFLICT DO RETURN clause

2022-09-25 Thread Wolfgang Walther
When using ON CONFLICT DO NOTHING together with RETURNING, the 
conflicted rows are not returned. Sometimes, this would be useful 
though, for example when generated columns or default values are in play:


CREATE TABLE x (
  id INT PRIMARY KEY,
  created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP
);

To get the created_at timestamp for a certain id **and** at the same 
time create this id in case it does not exist yet, I can currently do:


INSERT INTO x (id) VALUES (1)
  ON CONFLICT DO UPDATE
  SET id=EXCLUDED.id
  RETURNING created_at;

However that will result in a useless UPDATE of the row.

I could probably add a trigger to prevent the UPDATE in that case. Or I 
could do something in a CTE. Or in multiple statements in plpgsql - this 
is what I currently do in application code.
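
For illustration, a sketch of the CTE variant (subject to the usual READ 
COMMITTED caveats around concurrent inserts):

WITH ins AS (
  INSERT INTO x (id) VALUES (1)
  ON CONFLICT DO NOTHING
  RETURNING created_at
)
SELECT created_at FROM ins
UNION ALL
SELECT created_at FROM x WHERE id = 1
LIMIT 1;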


The attached patch adds a DO RETURN clause to be able to do this:

INSERT INTO x (id) VALUES (1)
  ON CONFLICT DO RETURN
  RETURNING created_at;

Much simpler. This will either insert or do nothing - but in both cases 
return a row.


Thoughts?

Best

Wolfgang>From 83a0031ed2ded46cbf6fd130bd76680267db7a5e Mon Sep 17 00:00:00 2001
From: Wolfgang Walther 
Date: Sun, 25 Sep 2022 16:20:44 +0200
Subject: [PATCH v1] Add ON CONFLICT DO RETURN clause

This behaves the same as DO NOTHING, but returns the row when used together 
with RETURNING.
---
 doc/src/sgml/postgres-fdw.sgml|   6 +-
 doc/src/sgml/ref/insert.sgml  |  15 +-
 src/backend/commands/explain.c|  21 +-
 src/backend/executor/nodeModifyTable.c|  24 +-
 src/backend/optimizer/util/plancat.c  |   4 +
 src/backend/parser/gram.y |  10 +
 src/backend/parser/parse_clause.c |   7 +
 src/backend/utils/adt/ruleutils.c |   4 +
 src/include/nodes/nodes.h |   1 +
 .../expected/insert-conflict-do-nothing-2.out | 186 
 .../expected/insert-conflict-do-nothing.out   |  46 
 .../expected/partition-key-update-3.out   | 206 ++
 .../specs/insert-conflict-do-nothing-2.spec   |  11 +
 .../specs/insert-conflict-do-nothing.spec |   5 +
 .../specs/partition-key-update-3.spec |  11 +
 src/test/regress/expected/insert_conflict.out |  25 +++
 src/test/regress/sql/insert_conflict.sql  |  19 ++
 17 files changed, 587 insertions(+), 14 deletions(-)

diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml
index bfd344cdc0..e5b6b8501f 100644
--- a/doc/src/sgml/postgres-fdw.sgml
+++ b/doc/src/sgml/postgres-fdw.sgml
@@ -80,9 +80,9 @@
  
   Note that postgres_fdw currently lacks support for
   INSERT statements with an ON CONFLICT DO
-  UPDATE clause.  However, the ON CONFLICT DO 
NOTHING
-  clause is supported, provided a unique index inference specification
-  is omitted.
+  UPDATE or ON CONFLICT DO RETURN clause.
+  However, the ON CONFLICT DO NOTHING clause is supported,
+  provided a unique index inference specification is omitted.
   Note also that postgres_fdw supports row movement
   invoked by UPDATE statements executed on partitioned
   tables, but it currently does not handle the case where a remote partition
diff --git a/doc/src/sgml/ref/insert.sgml b/doc/src/sgml/ref/insert.sgml
index 7cea70329e..eb0c721637 100644
--- a/doc/src/sgml/ref/insert.sgml
+++ b/doc/src/sgml/ref/insert.sgml
@@ -36,6 +36,7 @@ INSERT INTO table_name [ AS and conflict_action is 
one of:
 
 DO NOTHING
+DO RETURN
 DO UPDATE SET { column_name = 
{ expression | DEFAULT } |
 ( column_name 
[, ...] ) = [ ROW ] ( { expression 
| DEFAULT } [, ...] ) |
 ( column_name 
[, ...] ) = ( sub-SELECT )
@@ -336,9 +337,11 @@ INSERT INTO table_name [ AS conflict_target is violated, the
 alternative conflict_action is taken.
 ON CONFLICT DO NOTHING simply avoids inserting
-a row as its alternative action.  ON CONFLICT DO
-UPDATE updates the existing row that conflicts with the
-row proposed for insertion as its alternative action.
+a row as its alternative action.  ON CONFLICT DO RETURN
+avoids inserting the row, but returns the row when 
RETURNING
+is specified.  ON CONFLICT DO UPDATE updates the
+existing row that conflicts with the row proposed for insertion as
+its alternative action.

 

@@ -379,7 +382,7 @@ INSERT INTO table_name [ AS arbiter
 indexes.  Either performs unique index
 inference, or names a constraint explicitly.  For
-ON CONFLICT DO NOTHING, it is optional to
+DO NOTHING and DO RETURN, it is 
optional to
 specify a conflict_target; when
 omitted, conflicts with all usable constraints (and unique
 indexes) are handled.  For ON CONFLICT DO
@@ -395,8 +398,8 @@ INSERT INTO table_name [ AS 
 conflict_action specifies an
 alternative ON CONFLICT action.  It can be
-either DO NOTHING, or a DO
-UPDATE clause specifying the exact details of the
+  

Re: tweak to a few index tests to hits ambuildempty() routine.

2022-09-25 Thread Tom Lane
I wrote:
> Yeah.  You can see that the coverage-test animal is not reaching it
> anymore:
> https://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html

That's what it's saying *now*, but after rereading this whole thread
I see that it apparently said something different last week.  So the
coverage is probabilistic, which squares with this discussion and
with some tests I just did locally.  That's not good.  I shudder to
imagine how much time somebody might waste trying to locate a bug
in this area, if a test failure appears and disappears regardless
of code changes they make while chasing it.

I propose that we revert 4fb5c794e and instead add separate test
cases that just create unlogged indexes (I guess they don't actually
need to *do* anything with them?).  Looks like dec8ad367 could be
reverted as well, in view of 2f2e24d90.

regards, tom lane




Re: tweak to a few index tests to hits ambuildempty() routine.

2022-09-25 Thread Tom Lane
a.kozhemya...@postgrespro.ru writes:
> But my point is that after 4fb5c794e5 for most developer setups and 
> buildfarm members, e.g.:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus=2022-09-25%2001%3A01%3A13
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tayra=2022-09-24%2020%3A40%3A00
> ginbulkdelete() is most probably not tested.
> In other words, it seems that we've just lost the effect of 4c51a2d1e4:
> Add a test case that exercises vacuum's deletion of empty GIN
> posting pages.

Yeah.  You can see that the coverage-test animal is not reaching it
anymore:
https://coverage.postgresql.org/src/backend/access/gin/ginvacuum.c.gcov.html

So it seems clear that 4fb5c794e5 made at least some coverage worse
not better.  I think we'd better rejigger it to add some new indexes
not repurpose old ones.

regards, tom lane




Re: tweak to a few index tests to hits ambuildempty() routine.

2022-09-25 Thread a . kozhemyakin
Yes, with MAX_CONNECTIONS=1 and -O2 the function ginbulkdelete() is 
reached during the vacuum test.
But my point is that after 4fb5c794e5 for most developer setups and 
buildfarm members, e.g.:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=guaibasaurus=2022-09-25%2001%3A01%3A13
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tayra=2022-09-24%2020%3A40%3A00
ginbulkdelete() is most probably not tested.
In other words, it seems that we've just lost the effect of 4c51a2d1e4:
Add a test case that exercises vacuum's deletion of empty GIN
posting pages.  Since this is a temp table, it should now work
reliably to delete a bunch of rows and immediately VACUUM.
Before the preceding commit, this would not have had the desired
effect, at least not in parallel regression tests.

Noah Misch wrote on 2022-09-25 00:20:
> On Wed, Sep 21, 2022 at 02:10:42PM +0700, a.kozhemya...@postgrespro.ru wrote:
>> After analyzing this, I found out why we don't reach that Assert but we
>> have coverage shown - firstly, it is reached via another test, vacuum;
>> secondly, it depends on the gcc optimization flag. We reach that Assert
>> only when using -O0. If we build with -O2 or -Og that function is not
>> reached (due to different results of the heap_prune_satisfies_vacuum()
>> check inside heap_page_prune()).
> 
> With "make check MAX_CONNECTIONS=1", does that difference between -O0
> and -O2 still appear?  Compiler optimization shouldn't consistently
> change pruning decisions.  It could change pruning decisions
> probabilistically, by changing which parallel actions overlap.  If the
> difference disappears under MAX_CONNECTIONS=1, the system is likely fine.





Re: has_privs_of_role vs. is_member_of_role, redux

2022-09-25 Thread Wolfgang Walther

Robert Haas:

Well, maybe. Suppose that role A has been granted pg_read_all_settings
WITH INHERIT TRUE, SET TRUE and role B has been granted
pg_read_all_settings WITH INHERIT TRUE, SET FALSE. A can create a
table owned by pg_read_all_settings. If A does that, then B can now
create a trigger on that table and usurp the privileges of
pg_read_all_settings, after which B can now create any number of
objects owned by pg_read_all_settings.


I'm not seeing how this is possible. A trigger function would run with 
the invoking user's privileges by default, right? So B would have to 
create a trigger with a SECURITY DEFINER function owned by 
pg_read_all_settings to actually usurp the privileges of that role. But 
creating objects with that owner is exactly the thing B can't do.


What am I missing?

Best

Wolfgang




Re: Allow foreign keys to reference a superset of unique columns

2022-09-25 Thread Wolfgang Walther

James Coleman:

If I'm following properly this sounds like an overengineered EAV
schema, and neither of those things inspires me to think "this is a
use case I want to support".

That being said, I know that sometimes examples that have been
abstracted enough to share aren't always the best, so perhaps there's
something underlying this that's a more valuable example.


Most of my use-cases are slightly denormalized, and I was looking for an 
example that didn't require those kinds of FKs only because of the 
denormalization. So that's why it might have been a bit artificial or 
abstracted too much.


Take another example: I deal with multi-tenancy systems, for which I 
want to use RLS to separate the data between tenants:


CREATE TABLE tenants (tenant INT PRIMARY KEY);

Each tenant can create multiple users and groups:

CREATE TABLE users (
  "user" INT PRIMARY KEY,
  tenant INT NOT NULL REFERENCES tenants
);

CREATE TABLE groups (
  "group" INT PRIMARY KEY,
  tenant INT NOT NULL REFERENCES tenants
);

Users can be members of groups. The simple approach would be:

CREATE TABLE members (
  PRIMARY KEY ("user", "group"),
  "user" INT REFERENCES users,
  "group" INT REFERENCES groups
);

But this falls short in two aspects:
- To make RLS policies simpler to write and quicker to execute, I want 
to add "tenant" columns to **all** other tables. A slightly denormalized 
schema for efficiency.
- The schema above does not ensure that users can only be members in 
groups of the same tenant. Our business model requires to separate 
tenants cleanly, but as written above, cross-tenant memberships would be 
allowed.


In comes the "tenant" column which solves both of this:

CREATE TABLE members (
  PRIMARY KEY ("user", "group"),
  tenant INT REFERENCES tenants,
  "user" INT,
  "group" INT,
  FOREIGN KEY ("user", tenant) REFERENCES users ("user", tenant),
  FOREIGN KEY ("group", tenant) REFERENCES groups ("group", tenant)
);

This is not possible to do right now, without adding more UNIQUE 
constraints to the users and groups tables - on a superset of already 
unique columns.
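
For comparison, the workaround that does work today is to declare redundant 
unique constraints over supersets of the primary keys, at the cost of extra 
indexes:

ALTER TABLE users ADD UNIQUE ("user", tenant);
ALTER TABLE groups ADD UNIQUE ("group", tenant);

With these in place, the two composite foreign keys above are accepted.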



bar.y is a little bit like a generated value in that sense, it should
always match foo.b. I think it would be great, if we could actually go a
step further, too: On an update to bar.x to a new value, if foo.a=bar.x
exists, I would like to set bar.y automatically to the new foo.b.
Otherwise those kind of updates always have to either query foo before,
or add a trigger to do the same.


Isn't this actually contradictory to the behavior you currently have
with a multi-column foreign key? In the example above then an update
to bar.x is going to update the rows in foo that match bar.x = foo.a
and bar.y = foo.b *using the old values of bar.x and bar.y* to be the
new values.


No, I think there was a misunderstanding. An update to bar should not 
update rows in foo. An update to bar.x should update bar.y implicitly, 
to match the new value of foo.b.



You seem to be suggesting that instead it should look for
other rows that already match the *new value* of only one of the
columns in the constraint.


Yes. I think basically what I'm suggesting is that for an FK to a 
superset of unique columns, all the FK-logic should still be done on the 
already unique set of columns only - and then the additional columns 
should be mirrored into the referencing table. The referencing table can 
then put additional constraints on this column. In the members example 
above, this additional constraint is the fact that the tenant column 
can't be filled with two different values for the users and groups FKs. 
But this could also be a CHECK constraint to allow FKs only to a subset 
of rows in the target table:


CREATE TYPE foo_type AS ENUM ('A', 'B', 'C');

CREATE TABLE foo (
  f INT PRIMARY KEY,
  type foo_type
);

CREATE TABLE bar (
  b INT PRIMARY KEY,
  f INT,
  ftype foo_type CHECK (ftype <> 'C'),
  FOREIGN KEY (f, ftype) REFERENCES foo (f, type);
);

In this example, the additional ftype column is just used to enforce 
that bar can only reference rows with type A or B, but not C. Assume:


INSERT INTO foo VALUES (1, 'A'), (2, 'B'), (3, 'C');

In this case, it would be nice to be able to do the following, i.e. 
derive the value for bar.ftype automatically:


INSERT INTO bar (b, f) VALUES (10, 1); -- bar.ftype is then 'A'
UPDATE bar SET f = 2 WHERE b = 10; -- bar.ftype is then 'B'

And it would throw errors in the following cases, because the 
automatically derived value fails the CHECK constraint:


INSERT INTO bar (b, f) VALUES (20, 3);
UPDATE bar SET f = 3 WHERE b = 10;

Note: This "automatically derived columns" extension would be a separate 
feature. Really nice to have, but the above-mentioned FKs to supersets 
of unique columns would be very valuable without it already.
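
For what it's worth, the derivation can be roughly approximated today with 
a BEFORE trigger; a sketch, assuming the foo/bar tables above (the function 
and trigger names are made up):

CREATE FUNCTION bar_derive_ftype() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  -- Mirror foo.type into bar.ftype; if no matching foo row exists,
  -- ftype stays NULL.  bar's CHECK constraint is evaluated afterwards.
  SELECT type INTO NEW.ftype FROM foo WHERE f = NEW.f;
  RETURN NEW;
END;
$$;

CREATE TRIGGER bar_derive_ftype
  BEFORE INSERT OR UPDATE OF f ON bar
  FOR EACH ROW EXECUTE FUNCTION bar_derive_ftype();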


Best

Wolfgang