Thank you for all the comments, they were very helpful!
I'll address all previous comments in this email.
> In a scenario where a parent table and the partitioned tables live in
> different schemas, creating a schema based on the schema that contains
> only the partitions arguably generates useless tables.
> ...
> I'm not saying it's wrong, but perhaps you should consider ERROR/WARN if
> trying to copy a schema with "orphan" partitions
>
I've fixed this by adding a WARNING message and skipping the table
partition.
In this new version of the patch I've also added support for FKs, with
the same behaviour: if an FK references a table outside the source
schema, the FK will not be created.
I think that any object that references an object outside of the source
schema should be skipped. It feels safer to me to avoid sharing the
same object between multiple schemas.
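To make the behaviour concrete, here is a sketch of the orphan-partition
case (schema/table names invented for illustration; the exact WARNING
wording may differ from what the patch emits):

```sql
CREATE SCHEMA src;
CREATE TABLE public.events (ts timestamptz NOT NULL) PARTITION BY RANGE (ts);
CREATE TABLE src.events_2026 PARTITION OF public.events
    FOR VALUES FROM ('2026-01-01') TO ('2027-01-01');

-- The parent lives outside src, so the partition is skipped:
CREATE SCHEMA dst LIKE src INCLUDING ALL;
-- WARNING: (wording illustrative) skipping partition "events_2026"
--          because its parent table is outside the source schema
```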
> The same applies for creating schema that contains only the parent table
> ...
> Even if parent and children live in the same schema, they become
> detached in the new copy -- I'd argue that this one is a bug.
>
I've added support for partitioned tables in this new version. Although
this works for the test cases that I've added to create_schema.sql (make
check is also happy), I'm not sure if it's the best way to do it. In
getPartitionBoundSpec() I read the PartitionBoundSpec of a given
partition OID as a string and then call stringToNode() to actually build
a PartitionBoundSpec. The problem, IIUC, is that fields like lowerdatums,
upperdatums and listdatums are already transformed, so when
transformPartitionBoundValue() calls transformExpr() the Node * is
already transformed (e.g. PartitionRangeDatum vs T_A_Const) and it
fails on switch (nodeTag(node)).
I fixed this by adding an is_transformed field to PartitionBoundSpec and
setting it to true in getPartitionBoundSpec(). Hash partition bounds only
have int values (modulus and remainder), and it turns out the
transformation is still required in that case.
While writing this email I noticed that PartitionSpec needs the same
workaround for partition keys that are expressions. I want to study the
code more to understand how we could properly fix this. Any ideas are
welcome.
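For reference, the scenario that exercises this code path is copying a
schema that contains a partitioned table together with its partitions
(names invented for illustration):

```sql
CREATE SCHEMA src;
CREATE TABLE src.log (ts date NOT NULL) PARTITION BY RANGE (ts);
CREATE TABLE src.log_2026 PARTITION OF src.log
    FOR VALUES FROM ('2026-01-01') TO ('2027-01-01');

-- Recreating src.log_2026 in dst requires rebuilding its
-- PartitionBoundSpec from pg_class.relpartbound, which is where the
-- already-transformed datums show up:
CREATE SCHEMA dst LIKE src INCLUDING ALL;
```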
> Comments are also being ignored, but I guess it was already mentioned
> upthread:
>
It should work... I'll investigate this.
> I also just noticed that UNLOGGED tables are cloned as normal tables:
> ...
> Adding this to collectSchemaTablesLike in schemacmds.c could do the trick:
> newRelation->relpersistence = classForm->relpersistence;
>
I've also fixed this.
> > I think the approach will fail at some point if you keep using CREATE
> > TABLE LIKE, because so many things will be incomplete or at least
> > strange, that you'll have to redo many of them.
> > Sequencevalues will be shared with the old table.
> > Constraints/Indexes names will have to be renamed.
> > Partitioned tables will have to be attached.
> > Foreign Keys are not created either.
>
> Quite right. Either we draw a clear line here, or it will be a tough nut
> to crack.
>
The idea is not to use CREATE TABLE LIKE for all scenarios. Sorry if I
wasn't clear about this.
Since CreateSchemaCommand() uses ProcessUtility() to process the
schema_elements of CREATE SCHEMA, the overall idea of adding support for
LIKE was to generate a bunch of parsenodes that re-create the elements
of the source schema in the new schema and append them to this list of
schema_elements.
I think that using CREATE TABLE LIKE is the simple way to do this for
normal tables without complex data types, but it certainly will not
work for all kinds of objects that a schema can have. For example, for
FKs I've generated an AlterTableStmt that adds the FK to a table after
it is created. For domains, as another case, I think it would need to
generate a CreateDomainStmt.
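In other words, for a source schema with two tables and an FK between
them, the LIKE clause internally queues up parsenodes roughly equivalent
to the following (illustrative SQL only, with invented column names, not
the actual internal representation):

```sql
-- Given src.t1 and src.t2 with an FK from t2 to t1,
-- CREATE SCHEMA dst LIKE src INCLUDING ALL behaves roughly like:
CREATE SCHEMA dst
    CREATE TABLE t1 (LIKE src.t1 INCLUDING ALL)
    CREATE TABLE t2 (LIKE src.t2 INCLUDING ALL);

-- ...followed by one ALTER TABLE per FK (columns hypothetical):
ALTER TABLE dst.t2
    ADD FOREIGN KEY (t1_id) REFERENCES dst.t1 (id);
```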
I also want to mention that I don't think we would be able to properly
re-create 100% of the objects from the source schema in the new schema.
Some objects will be hard to copy and can still produce buggy objects,
functions for example, as David mentioned in [1] (we can support some
kinds of functions but others will be hard).
Another issue is handling complex relations like the following:
- Function A returns a boolean type
- A custom DOMAIN type calls function A in a CHECK constraint
- Function B returns/uses the custom DOMAIN type
What should we create first? There can be functions that depend on
domains, so domains should be created first, but there can also be
domains that depend on functions, so functions would need to be created
first. This would be tricky to fix.
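A minimal sketch of this kind of cycle (all names invented):

```sql
-- A domain that depends on a function:
CREATE FUNCTION is_positive(int) RETURNS boolean
    LANGUAGE sql IMMUTABLE AS 'SELECT $1 > 0';
CREATE DOMAIN positive_int AS int CHECK (is_positive(VALUE));

-- ...and a function that depends on a domain:
CREATE FUNCTION next_positive(positive_int) RETURNS positive_int
    LANGUAGE sql AS 'SELECT $1 + 1';
-- Neither "all functions first" nor "all domains first" works here;
-- a real fix would need per-object dependency ordering.
```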
But I still think that we could support some kinds of objects. I
think that the following would be a good starting point:
- tables
- partitioned tables
- indexes
- fks
- sequences
- types
For more complex scenarios we could document the limitations and perhaps
try to include WARNING messages mentioning these limitations when
re-creating a schema.
Overall, on this new patch version I've fixed/added the following:
- Index names are preserved on the new schema
- FK's are created on the new schema
- Support for partitioned tables
- More documentation (all documentation is on 0004)
Known issues remaining:
- Comments are not being created
- Sequences of SERIAL columns are shared
- Workaround with the is_transformed field
- Certainly a bunch of other things...
Thoughts?
[1]
https://www.postgresql.org/message-id/CAKFQuwZFAkGMoV0fLtf%2BHJL4_cwivz0Vqfk1r5SemUZvNNfxkg%40mail.gmail.com
--
Matheus Alcantara
EDB: https://www.enterprisedb.com
From 71c1a1b779fcf460e7ff74fc303df505d36be229 Mon Sep 17 00:00:00 2001
From: Matheus Alcantara <[email protected]>
Date: Tue, 3 Feb 2026 17:57:47 -0300
Subject: [PATCH v3 1/4] Add CREATE SCHEMA ... LIKE support
This patch introduces a new LIKE clause for CREATE SCHEMA that allows
copying table definitions from an existing schema to a new one.
Syntax:
CREATE SCHEMA <name> LIKE <source_schema> [like_options...]
Where like_options is:
{ INCLUDING | EXCLUDING } { TABLE | INDEX | ALL }
The implementation reuses the existing CREATE TABLE ... LIKE
infrastructure to copy table structures, including columns, defaults,
constraints, and optionally indexes.
---
src/backend/commands/indexcmds.c | 3 +-
src/backend/commands/schemacmds.c | 157 ++++++++++++++++++++
src/backend/commands/tablecmds.c | 4 +-
src/backend/parser/gram.y | 53 +++++++
src/backend/parser/parse_utilcmd.c | 17 ++-
src/include/nodes/parsenodes.h | 17 +++
src/include/parser/parse_utilcmd.h | 3 +-
src/test/regress/expected/create_schema.out | 123 +++++++++++++++
src/test/regress/sql/create_schema.sql | 73 +++++++++
src/tools/pgindent/typedefs.list | 1 +
10 files changed, 443 insertions(+), 8 deletions(-)
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 635679cc1f2..eeac9919fdd 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -1515,7 +1515,8 @@ DefineIndex(ParseState *pstate,
 			childStmt = generateClonedIndexStmt(NULL,
 												parentIndex,
 												attmap,
-												NULL);
+												NULL,
+												false);
 			/*
 			 * Recurse as the starting user ID.  Callee will use that
diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
index 4c51e920626..4c6edfa3da6 100644
--- a/src/backend/commands/schemacmds.c
+++ b/src/backend/commands/schemacmds.c
@@ -14,6 +14,7 @@
*/
#include "postgres.h"
+#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
@@ -28,16 +29,139 @@
#include "commands/event_trigger.h"
#include "commands/schemacmds.h"
#include "miscadmin.h"
+#include "nodes/makefuncs.h"
#include "parser/parse_utilcmd.h"
#include "parser/scansup.h"
#include "tcop/utility.h"
#include "utils/acl.h"
#include "utils/builtins.h"
+#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
#include "utils/syscache.h"
 static void AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId);
+static List *collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
+									 bits32 options);
+
+/*
+ * Subroutine for CREATE SCHEMA LIKE.
+ *
+ * It returns a list of CreateStmt statements for the tables in the source
+ * schema that should be created in the target schema.
+ *
+ * It uses the existing CREATE TABLE ... LIKE infrastructure.
+ */
+static List *
+collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
+ bits32 options)
+{
+ List *result = NIL;
+ bool preserveIndexNames = false;
+ Relation pg_class;
+ SysScanDesc scan;
+ ScanKeyData key;
+ HeapTuple tuple;
+ bits32 tableOptions;
+
+ /*
+ * Determine CREATE TABLE LIKE options. We copy most properties by
+ * default, but indexes are controlled by the CREATE_SCHEMA_LIKE_INDEX
+ * option.
+ */
+ tableOptions = CREATE_TABLE_LIKE_COMMENTS |
+ CREATE_TABLE_LIKE_COMPRESSION |
+ CREATE_TABLE_LIKE_CONSTRAINTS |
+ CREATE_TABLE_LIKE_DEFAULTS |
+ CREATE_TABLE_LIKE_GENERATED |
+ CREATE_TABLE_LIKE_IDENTITY |
+ CREATE_TABLE_LIKE_STATISTICS |
+ CREATE_TABLE_LIKE_STORAGE;
+
+	/*
+	 * If indexes are to be created, we want to preserve the index names in
+	 * the new schema.
+	 */
+ if (options & CREATE_SCHEMA_LIKE_INDEX)
+ {
+ tableOptions |= CREATE_TABLE_LIKE_INDEXES;
+ preserveIndexNames = true;
+ }
+
+ pg_class = table_open(RelationRelationId, AccessShareLock);
+
+ /*
+ * Scan pg_class filtering relations of source schema that need to be
+ * created in the target schema.
+ */
+ ScanKeyInit(&key,
+ Anum_pg_class_relnamespace,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(srcNspOid));
+
+ scan = systable_beginscan(pg_class, ClassNameNspIndexId, true,
+ NULL, 1, &key);
+
+ while (HeapTupleIsValid(tuple = systable_getnext(scan)))
+ {
+ Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);
+ CreateStmt *createStmt;
+ TableLikeClause *likeClause;
+ RangeVar *newRelation;
+ RangeVar *sourceRelation;
+
+ /* Only process regular and partitioned tables */
+ if (classForm->relkind != RELKIND_RELATION &&
+ classForm->relkind != RELKIND_PARTITIONED_TABLE)
+ continue;
+
+ createStmt = makeNode(CreateStmt);
+ likeClause = makeNode(TableLikeClause);
+
+ /* Target table in new schema */
+		newRelation = makeRangeVar(pstrdup(newSchemaName),
+								   pstrdup(NameStr(classForm->relname)),
+								   -1);
+ newRelation->relpersistence = classForm->relpersistence;
+
+ /* Source table reference */
+		sourceRelation = makeRangeVar(get_namespace_name(srcNspOid),
+									  pstrdup(NameStr(classForm->relname)),
+									  -1);
+
+ likeClause->relation = sourceRelation;
+ likeClause->options = tableOptions;
+ likeClause->relationOid = InvalidOid;
+ likeClause->preserveIndexNames = preserveIndexNames;
+
+ createStmt->relation = newRelation;
+ createStmt->tableElts = list_make1(likeClause);
+ createStmt->inhRelations = NIL;
+ createStmt->partbound = NULL;
+ createStmt->partspec = NULL;
+ createStmt->ofTypename = NULL;
+ createStmt->constraints = NIL;
+ createStmt->nnconstraints = NIL;
+ createStmt->options = NIL;
+ createStmt->oncommit = ONCOMMIT_NOOP;
+
+		/*
+		 * XXX: Should we have INCLUDING TABLESPACE? If not, should we use
+		 * the same tablespace as the source table?
+		 */
+ createStmt->tablespacename = NULL;
+ createStmt->accessMethod = NULL;
+ createStmt->if_not_exists = false;
+
+
+ result = lappend(result, createStmt);
+ }
+
+ systable_endscan(scan);
+ table_close(pg_class, AccessShareLock);
+
+ return result;
+}
/*
* CREATE SCHEMA
@@ -187,6 +311,39 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString,
 	EventTriggerCollectSimpleCommand(address, InvalidObjectAddress,
 									 (Node *) stmt);
+ /*
+ * Process LIKE clause if present. We collect objects from the source
+ * schema and append them to the schema elements list.
+ */
+ if (stmt->like_clause != NULL)
+ {
+ SchemaLikeClause *like = stmt->like_clause;
+ List *like_stmts = NIL;
+ Oid srcNspOid;
+ AclResult like_aclresult;
+
+ /* Look up source schema */
+ srcNspOid = get_namespace_oid(like->schemaname, false);
+
+ /* Check permission to read from source schema */
+		like_aclresult = object_aclcheck(NamespaceRelationId, srcNspOid,
+										 GetUserId(), ACL_USAGE);
+		if (like_aclresult != ACLCHECK_OK)
+			aclcheck_error(like_aclresult, OBJECT_SCHEMA, like->schemaname);
+
+ /* Collect tables if requested */
+ if ((like->options & CREATE_SCHEMA_LIKE_TABLE) ||
+ (like->options == CREATE_SCHEMA_LIKE_ALL))
+ {
+			like_stmts = collectSchemaTablesLike(srcNspOid,
+												 schemaName,
+												 like->options);
+		}
+
+		/* Append LIKE-generated statements to explicit schema elements */
+		stmt->schemaElts = list_concat(like_stmts, stmt->schemaElts);
+	}
+
 	/*
 	 * Examine the list of commands embedded in the CREATE SCHEMA command, and
 	 * reorganize them into a sequentially executable order with no forward
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index f976c0e5c7e..8010fe71a40 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -1301,7 +1301,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 												false);
 			idxstmt = generateClonedIndexStmt(NULL, idxRel,
-											  attmap, &constraintOid);
+											  attmap, &constraintOid, false);
DefineIndex(NULL,
RelationGetRelid(rel),
idxstmt,
@@ -20771,7 +20771,7 @@ AttachPartitionEnsureIndexes(List **wqueue, Relation rel, Relation attachrel)
 			stmt = generateClonedIndexStmt(NULL,
 										   idxRel, attmap,
-										   &conOid);
+										   &conOid, false);
 			DefineIndex(NULL,
 						RelationGetRelid(attachrel),
 						stmt, InvalidOid,
 						RelationGetRelid(idxRel),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 713ee5c10a2..0c23b23561d 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -600,6 +600,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
%type <node> DomainConstraint TableConstraint TableLikeClause
%type <ival> TableLikeOptionList TableLikeOption
+%type <ival> SchemaLikeOptionList SchemaLikeOption
 %type <str>	column_compression opt_column_compression
 				column_storage opt_column_storage
%type <list> ColQualList
%type <node> ColConstraint ColConstraintElem ConstraintAttr
@@ -1570,6 +1571,7 @@ CreateSchemaStmt:
n->authrole = $5;
n->schemaElts = $6;
n->if_not_exists = false;
+ n->like_clause = NULL;
$$ = (Node *) n;
}
| CREATE SCHEMA ColId OptSchemaEltList
@@ -1581,6 +1583,7 @@ CreateSchemaStmt:
n->authrole = NULL;
n->schemaElts = $4;
n->if_not_exists = false;
+ n->like_clause = NULL;
$$ = (Node *) n;
}
 		| CREATE SCHEMA IF_P NOT EXISTS opt_single_name AUTHORIZATION RoleSpec OptSchemaEltList
@@ -1597,6 +1600,7 @@ CreateSchemaStmt:
 							 parser_errposition(@9)));
n->schemaElts = $9;
n->if_not_exists = true;
+ n->like_clause = NULL;
$$ = (Node *) n;
}
| CREATE SCHEMA IF_P NOT EXISTS ColId OptSchemaEltList
@@ -1613,6 +1617,40 @@ CreateSchemaStmt:
parser_errposition(@7)));
n->schemaElts = $7;
n->if_not_exists = true;
+ n->like_clause = NULL;
+ $$ = (Node *) n;
+ }
+		/* CREATE SCHEMA ... LIKE variants */
+		| CREATE SCHEMA ColId LIKE ColId SchemaLikeOptionList
+			{
+				CreateSchemaStmt *n = makeNode(CreateSchemaStmt);
+				SchemaLikeClause *l = makeNode(SchemaLikeClause);
+
+				n->schemaname = $3;
+				n->authrole = NULL;
+				n->schemaElts = NIL;
+				n->if_not_exists = false;
+
+				l->schemaname = $5;
+				l->options = $6;
+				n->like_clause = l;
+
+				$$ = (Node *) n;
+			}
+		| CREATE SCHEMA IF_P NOT EXISTS ColId LIKE ColId SchemaLikeOptionList
+			{
+				CreateSchemaStmt *n = makeNode(CreateSchemaStmt);
+				SchemaLikeClause *l = makeNode(SchemaLikeClause);
+
+				n->schemaname = $6;
+				n->authrole = NULL;
+				n->schemaElts = NIL;
+				n->if_not_exists = true;
+
+				l->schemaname = $8;
+				l->options = $9;
+				n->like_clause = l;
+
+				$$ = (Node *) n;
+			}
;
@@ -1639,6 +1677,21 @@ schema_stmt:
| ViewStmt
;
+/*
+ * Options for CREATE SCHEMA ... LIKE
+ */
+SchemaLikeOptionList:
+			SchemaLikeOptionList INCLUDING SchemaLikeOption		{ $$ = $1 | $3; }
+			| SchemaLikeOptionList EXCLUDING SchemaLikeOption	{ $$ = $1 & ~$3; }
+			| /* EMPTY */										{ $$ = 0; }
+			;
+
+SchemaLikeOption:
+			TABLE		{ $$ = CREATE_SCHEMA_LIKE_TABLE; }
+			| INDEX		{ $$ = CREATE_SCHEMA_LIKE_INDEX; }
+			| ALL		{ $$ = CREATE_SCHEMA_LIKE_ALL; }
+			;
+
/*****************************************************************************
*
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index b5f4c72459d..b3188d87209 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -1567,7 +1567,8 @@ expandTableLikeClause(RangeVar *heapRel, TableLikeClause *table_like_clause)
 			index_stmt = generateClonedIndexStmt(heapRel,
 												 parent_index,
 												 attmap,
-												 NULL);
+												 NULL,
+												 table_like_clause->preserveIndexNames);
 
 			/* Copy comment on index, if requested */
 			if (table_like_clause->options & CREATE_TABLE_LIKE_COMMENTS)
@@ -1694,7 +1695,8 @@ transformOfType(CreateStmtContext *cxt, TypeName *ofTypename)
IndexStmt *
generateClonedIndexStmt(RangeVar *heapRel, Relation source_idx,
const AttrMap *attmap,
- Oid *constraintOid)
+ Oid *constraintOid,
+ bool preserveIndexName)
{
Oid source_relid = RelationGetRelid(source_idx);
HeapTuple ht_idxrel;
@@ -1771,12 +1773,19 @@ generateClonedIndexStmt(RangeVar *heapRel, Relation source_idx,
index->reset_default_tblspc = false;
/*
- * We don't try to preserve the name of the source index; instead, just
+ * By default, we don't preserve the name of the source index; instead,
* let DefineIndex() choose a reasonable name. (If we tried to preserve
* the name, we'd get duplicate-relation-name failures unless the source
* table was in a different schema.)
+ *
+ * However, when preserveIndexName is true (e.g., when copying tables
+ * between schemas via CREATE SCHEMA ... LIKE), we preserve the original
+ * name since it won't conflict in the different namespace.
*/
- index->idxname = NULL;
+ if (preserveIndexName)
+ index->idxname = pstrdup(NameStr(idxrelrec->relname));
+ else
+ index->idxname = NULL;
/*
* If the index is marked PRIMARY or has an exclusion condition, it's
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 646d6ced763..2171c778f84 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -782,6 +782,8 @@ typedef struct TableLikeClause
 	RangeVar   *relation;
 	bits32		options;		/* OR of TableLikeOption flags */
 	Oid			relationOid;	/* If table has been looked up, its OID */
+	bool		preserveIndexNames; /* true if index names should be
+									 * preserved for the new table */
 } TableLikeClause;
typedef enum TableLikeOption
@@ -2411,6 +2413,20 @@ typedef enum ObjectType
* executed after the schema itself is created.
* ----------------------
*/
+typedef struct SchemaLikeClause
+{
+ NodeTag type;
+ char *schemaname; /* source schema name */
+	bits32		options;		/* OR of SchemaLikeOption flags */
+} SchemaLikeClause;
+
+typedef enum SchemaLikeOption
+{
+ CREATE_SCHEMA_LIKE_TABLE = 1 << 0,
+ CREATE_SCHEMA_LIKE_INDEX = 1 << 1,
+ CREATE_SCHEMA_LIKE_ALL = PG_INT32_MAX
+} SchemaLikeOption;
+
typedef struct CreateSchemaStmt
{
NodeTag type;
@@ -2418,6 +2434,7 @@ typedef struct CreateSchemaStmt
 	RoleSpec   *authrole;		/* the owner of the created schema */
 	List	   *schemaElts;		/* schema components (list of parsenodes) */
 	bool		if_not_exists;	/* just do nothing if schema already exists? */
+ SchemaLikeClause *like_clause; /* LIKE clause if present, else NULL */
} CreateSchemaStmt;
typedef enum DropBehavior
diff --git a/src/include/parser/parse_utilcmd.h b/src/include/parser/parse_utilcmd.h
index 53f282f77ec..6841d7b16bc 100644
--- a/src/include/parser/parse_utilcmd.h
+++ b/src/include/parser/parse_utilcmd.h
@@ -39,6 +39,7 @@ extern List *expandTableLikeClause(RangeVar *heapRel,
 extern IndexStmt *generateClonedIndexStmt(RangeVar *heapRel,
 										  Relation source_idx,
 										  const AttrMap *attmap,
-										  Oid *constraintOid);
+										  Oid *constraintOid,
+										  bool preserveIndexName);
#endif /* PARSE_UTILCMD_H */
diff --git a/src/test/regress/expected/create_schema.out b/src/test/regress/expected/create_schema.out
index 93302a07efc..7aa93deb947 100644
--- a/src/test/regress/expected/create_schema.out
+++ b/src/test/regress/expected/create_schema.out
@@ -94,5 +94,128 @@ CREATE SCHEMA regress_schema_1 AUTHORIZATION CURRENT_ROLE
DROP SCHEMA regress_schema_1 CASCADE;
NOTICE: drop cascades to table regress_schema_1.tab
RESET ROLE;
+--
+-- CREATE SCHEMA ... LIKE tests
+--
+-- Create a source schema with various objects
+CREATE SCHEMA regress_source_schema;
+CREATE TABLE regress_source_schema.t1 (
+ id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+ name text NOT NULL,
+ created_at timestamp DEFAULT now()
+);
+CREATE TABLE regress_source_schema.t2 (
+ id int REFERENCES regress_source_schema.t1(id),
+ data jsonb
+);
+CREATE INDEX idx_t1_name ON regress_source_schema.t1(name);
+CREATE INDEX idx_t2_data ON regress_source_schema.t2 USING gin(data);
+CREATE UNLOGGED TABLE regress_source_schema.t3(a int);
+-- Test basic LIKE with TABLE
+CREATE SCHEMA regress_copy1 LIKE regress_source_schema INCLUDING TABLE;
+-- Verify tables were copied
+SELECT table_name FROM information_schema.tables
+WHERE table_schema = 'regress_copy1' ORDER BY table_name;
+ table_name
+------------
+ t1
+ t2
+ t3
+(3 rows)
+
+-- Verify table structure (should have columns but not indexes)
+\d regress_copy1.t1
+ Table "regress_copy1.t1"
+   Column   |            Type             | Collation | Nullable |           Default
+------------+-----------------------------+-----------+----------+------------------------------
+ id         | integer                     |           | not null | generated always as identity
+ name       | text                        |           | not null |
+ created_at | timestamp without time zone |           |          | now()
+
+-- Verify that relpersistence is the same
+SELECT relpersistence, relname from pg_class where relname = 't3' and relnamespace = 'regress_copy1'::regnamespace::oid;
+ relpersistence | relname
+----------------+---------
+ u | t3
+(1 row)
+
+-- Test LIKE with TABLE and INDEX
+CREATE SCHEMA regress_copy2 LIKE regress_source_schema INCLUDING TABLE INCLUDING INDEX;
+-- Verify indexes were copied (the name should be the same)
+SELECT indexname FROM pg_indexes WHERE schemaname = 'regress_copy2' ORDER BY indexname;
+ indexname
+-------------
+ idx_t1_name
+ idx_t2_data
+ t1_pkey
+(3 rows)
+
+-- Verify table structure (should have columns and indexes)
+\d regress_copy2.t1
+ Table "regress_copy2.t1"
+   Column   |            Type             | Collation | Nullable |           Default
+------------+-----------------------------+-----------+----------+------------------------------
+ id         | integer                     |           | not null | generated always as identity
+ name       | text                        |           | not null |
+ created_at | timestamp without time zone |           |          | now()
+Indexes:
+ "t1_pkey" PRIMARY KEY, btree (id)
+ "idx_t1_name" btree (name)
+
+-- Test EXCLUDING option
+CREATE SCHEMA regress_copy3 LIKE regress_source_schema INCLUDING ALL EXCLUDING INDEX;
+-- Should have tables but no indexes
+SELECT table_name FROM information_schema.tables
+WHERE table_schema = 'regress_copy3' ORDER BY table_name;
+ table_name
+------------
+ t1
+ t2
+ t3
+(3 rows)
+
+SELECT indexname FROM pg_indexes WHERE schemaname = 'regress_copy3' ORDER BY indexname;
+ indexname
+-----------
+(0 rows)
+
+-- Test IF NOT EXISTS with LIKE
+CREATE SCHEMA IF NOT EXISTS regress_copy1 LIKE regress_source_schema INCLUDING TABLE;
+NOTICE: schema "regress_copy1" already exists, skipping
+-- Test empty source schema
+CREATE SCHEMA regress_empty_source;
+CREATE SCHEMA regress_copy4 LIKE regress_empty_source INCLUDING ALL;
+SELECT table_name FROM information_schema.tables
+WHERE table_schema = 'regress_copy4' ORDER BY table_name;
+ table_name
+------------
+(0 rows)
+
+-- Test source schema does not exist
+CREATE SCHEMA regress_copy_fail LIKE nonexistent_schema INCLUDING TABLE;
+ERROR: schema "nonexistent_schema" does not exist
+-- Clean up LIKE tests
+DROP SCHEMA regress_copy1 CASCADE;
+NOTICE: drop cascades to 3 other objects
+DETAIL: drop cascades to table regress_copy1.t1
+drop cascades to table regress_copy1.t2
+drop cascades to table regress_copy1.t3
+DROP SCHEMA regress_copy2 CASCADE;
+NOTICE: drop cascades to 3 other objects
+DETAIL: drop cascades to table regress_copy2.t1
+drop cascades to table regress_copy2.t2
+drop cascades to table regress_copy2.t3
+DROP SCHEMA regress_copy3 CASCADE;
+NOTICE: drop cascades to 3 other objects
+DETAIL: drop cascades to table regress_copy3.t1
+drop cascades to table regress_copy3.t2
+drop cascades to table regress_copy3.t3
+DROP SCHEMA regress_copy4 CASCADE;
+DROP SCHEMA regress_empty_source CASCADE;
+DROP SCHEMA regress_source_schema CASCADE;
+NOTICE: drop cascades to 3 other objects
+DETAIL: drop cascades to table regress_source_schema.t1
+drop cascades to table regress_source_schema.t2
+drop cascades to table regress_source_schema.t3
-- Clean up
DROP ROLE regress_create_schema_role;
diff --git a/src/test/regress/sql/create_schema.sql b/src/test/regress/sql/create_schema.sql
index 1b7064247a1..a00cf75869d 100644
--- a/src/test/regress/sql/create_schema.sql
+++ b/src/test/regress/sql/create_schema.sql
@@ -66,5 +66,78 @@ CREATE SCHEMA regress_schema_1 AUTHORIZATION CURRENT_ROLE
DROP SCHEMA regress_schema_1 CASCADE;
RESET ROLE;
+--
+-- CREATE SCHEMA ... LIKE tests
+--
+
+-- Create a source schema with various objects
+CREATE SCHEMA regress_source_schema;
+
+CREATE TABLE regress_source_schema.t1 (
+ id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
+ name text NOT NULL,
+ created_at timestamp DEFAULT now()
+);
+
+CREATE TABLE regress_source_schema.t2 (
+ id int REFERENCES regress_source_schema.t1(id),
+ data jsonb
+);
+
+CREATE INDEX idx_t1_name ON regress_source_schema.t1(name);
+CREATE INDEX idx_t2_data ON regress_source_schema.t2 USING gin(data);
+
+CREATE UNLOGGED TABLE regress_source_schema.t3(a int);
+
+-- Test basic LIKE with TABLE
+CREATE SCHEMA regress_copy1 LIKE regress_source_schema INCLUDING TABLE;
+
+-- Verify tables were copied
+SELECT table_name FROM information_schema.tables
+WHERE table_schema = 'regress_copy1' ORDER BY table_name;
+
+-- Verify table structure (should have columns but not indexes)
+\d regress_copy1.t1
+
+-- Verify that relpersistence is the same
+SELECT relpersistence, relname from pg_class where relname = 't3' and relnamespace = 'regress_copy1'::regnamespace::oid;
+
+-- Test LIKE with TABLE and INDEX
+CREATE SCHEMA regress_copy2 LIKE regress_source_schema INCLUDING TABLE INCLUDING INDEX;
+
+-- Verify indexes were copied (the name should be the same)
+SELECT indexname FROM pg_indexes WHERE schemaname = 'regress_copy2' ORDER BY indexname;
+
+-- Verify table structure (should have columns and indexes)
+\d regress_copy2.t1
+
+-- Test EXCLUDING option
+CREATE SCHEMA regress_copy3 LIKE regress_source_schema INCLUDING ALL EXCLUDING INDEX;
+
+-- Should have tables but no indexes
+SELECT table_name FROM information_schema.tables
+WHERE table_schema = 'regress_copy3' ORDER BY table_name;
+SELECT indexname FROM pg_indexes WHERE schemaname = 'regress_copy3' ORDER BY indexname;
+
+-- Test IF NOT EXISTS with LIKE
+CREATE SCHEMA IF NOT EXISTS regress_copy1 LIKE regress_source_schema INCLUDING TABLE;
+
+-- Test empty source schema
+CREATE SCHEMA regress_empty_source;
+CREATE SCHEMA regress_copy4 LIKE regress_empty_source INCLUDING ALL;
+SELECT table_name FROM information_schema.tables
+WHERE table_schema = 'regress_copy4' ORDER BY table_name;
+
+-- Test source schema does not exist
+CREATE SCHEMA regress_copy_fail LIKE nonexistent_schema INCLUDING TABLE;
+
+-- Clean up LIKE tests
+DROP SCHEMA regress_copy1 CASCADE;
+DROP SCHEMA regress_copy2 CASCADE;
+DROP SCHEMA regress_copy3 CASCADE;
+DROP SCHEMA regress_copy4 CASCADE;
+DROP SCHEMA regress_empty_source CASCADE;
+DROP SCHEMA regress_source_schema CASCADE;
+
-- Clean up
DROP ROLE regress_create_schema_role;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 39c76691c86..6ea863991fa 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2719,6 +2719,7 @@ ScanKeywordList
ScanState
ScanTypeControl
ScannerCallbackState
+SchemaLikeClause
SchemaQuery
SearchPathCacheEntry
SearchPathCacheKey
--
2.52.0
From 695ba090adabb015ac9f33b73fd3b09318f9206e Mon Sep 17 00:00:00 2001
From: Matheus Alcantara <[email protected]>
Date: Tue, 10 Feb 2026 18:05:20 -0300
Subject: [PATCH v3 2/4] Add partition table support to CREATE SCHEMA ... LIKE
This commit extends CREATE SCHEMA ... LIKE to properly handle
partitioned tables and their partitions:
- Partitioned tables are created with their partition specification
(RANGE, LIST, or HASH) preserved by reconstructing PartitionSpec
from pg_partitioned_table.
- Partitions within the same source schema are attached to their
parent table in the new schema by setting inhRelations and
partbound in the CreateStmt.
- Partitions whose parent table is in a different schema ("orphan
partitions") are skipped with a WARNING, since the partition
relationship cannot be recreated.
- Statement ordering ensures partitioned tables are created before
their partitions.
The transformPartitionBound() function in parse_utilcmd.c is modified to
detect already transformed partition bounds (retrieved from
pg_class.relpartbound) and skip transformation, since these contain Const
rather than raw expressions.
---
src/backend/commands/schemacmds.c | 276 +++++++++++++++++++-
src/backend/commands/tablecmds.c | 3 +
src/backend/parser/parse_utilcmd.c | 9 +
src/include/nodes/parsenodes.h | 4 +
src/test/regress/expected/create_schema.out | 134 ++++++++++
src/test/regress/sql/create_schema.sql | 103 ++++++++
6 files changed, 519 insertions(+), 10 deletions(-)
diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
index 4c6edfa3da6..d28e3429266 100644
--- a/src/backend/commands/schemacmds.c
+++ b/src/backend/commands/schemacmds.c
@@ -15,6 +15,7 @@
#include "postgres.h"
#include "access/genam.h"
+#include "access/relation.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
@@ -22,6 +23,8 @@
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/namespace.h"
+#include "catalog/partition.h"
+#include "catalog/pg_partitioned_table.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_database.h"
@@ -30,6 +33,7 @@
#include "commands/schemacmds.h"
#include "miscadmin.h"
#include "nodes/makefuncs.h"
+#include "nodes/nodeFuncs.h"
#include "parser/parse_utilcmd.h"
#include "parser/scansup.h"
#include "tcop/utility.h"
@@ -38,24 +42,200 @@
#include "utils/fmgroids.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
+#include "utils/ruleutils.h"
#include "utils/syscache.h"
 static void AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId);
 static List *collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
 									 bits32 options);
+static PartitionSpec *buildPartitionSpecForRelation(Oid relid);
+static PartitionBoundSpec *getPartitionBoundSpec(Oid partOid);
+
+/* Returns a PartitionBoundSpec node for the given partition OID. */
+static PartitionBoundSpec *
+getPartitionBoundSpec(Oid partOid)
+{
+ HeapTuple tuple;
+ Datum datum;
+ bool isnull;
+ PartitionBoundSpec *boundspec;
+
+ tuple = SearchSysCache1(RELOID, ObjectIdGetDatum(partOid));
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for relation %u", partOid);
+
+	datum = SysCacheGetAttr(RELOID, tuple,
+							Anum_pg_class_relpartbound,
+							&isnull);
+ if (isnull)
+ elog(ERROR, "partition bound for relation %u is null", partOid);
+
+ boundspec = stringToNode(TextDatumGetCString(datum));
+
+	/*
+	 * stringToNode returns a bound spec already transformed for the LIST and
+	 * RANGE strategies. Hash bounds only have modulus/remainder as integers.
+	 */
+	if (!boundspec->is_default && boundspec->strategy != PARTITION_STRATEGY_HASH)
+ boundspec->is_transformed = true;
+
+ ReleaseSysCache(tuple);
+
+ Assert(IsA(boundspec, PartitionBoundSpec));
+
+ return boundspec;
+}
+
+/*
+ * Constructs a PartitionSpec that can be used in a CreateStmt to recreate a
+ * partitioned table with the same partition key.
+ */
+static PartitionSpec *
+buildPartitionSpecForRelation(Oid relid)
+{
+ PartitionSpec *partspec;
+ HeapTuple tuple;
+ Form_pg_partitioned_table form;
+ Datum datum;
+ oidvector *partcollation;
+ List *partexprs = NIL;
+ ListCell *partexpr_item;
+ Relation rel;
+ TupleDesc tupdesc;
+ int keyno;
+
+ tuple = SearchSysCache1(PARTRELID, ObjectIdGetDatum(relid));
+ if (!HeapTupleIsValid(tuple))
+ elog(ERROR, "cache lookup failed for partition key of %u", relid);
+
+ form = (Form_pg_partitioned_table) GETSTRUCT(tuple);
+
+ /* Get partcollation */
+ datum = SysCacheGetAttrNotNull(PARTRELID, tuple,
+                                Anum_pg_partitioned_table_partcollation);
+ partcollation = (oidvector *) DatumGetPointer(datum);
+
+ /* Get partition expressions if any */
+ if (!heap_attisnull(tuple, Anum_pg_partitioned_table_partexprs, NULL))
+ {
+ Datum exprsDatum;
+ char *exprsString;
+
+ exprsDatum = SysCacheGetAttrNotNull(PARTRELID, tuple,
+                                     Anum_pg_partitioned_table_partexprs);
+ exprsString = TextDatumGetCString(exprsDatum);
+ partexprs = (List *) stringToNode(exprsString);
+
+ pfree(exprsString);
+
+ Assert(IsA(partexprs, List));
+ }
+
+ /* Open relation to get attribute names */
+ rel = relation_open(relid, AccessShareLock);
+ tupdesc = RelationGetDescr(rel);
+
+ /* Build PartitionSpec */
+ partspec = makeNode(PartitionSpec);
+ partspec->strategy = form->partstrat;
+ partspec->partParams = NIL;
+ partspec->location = -1;
+
+ partexpr_item = list_head(partexprs);
+
+ for (keyno = 0; keyno < form->partnatts; keyno++)
+ {
+ AttrNumber attnum = form->partattrs.values[keyno];
+ PartitionElem *pelem;
+ Oid keycolcollation;
+ Oid partcoll;
+
+ pelem = makeNode(PartitionElem);
+ pelem->location = -1;
+
+ if (attnum != 0)
+ {
+ /* Simple column reference */
+ Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);
+
+ pelem->name = pstrdup(NameStr(attr->attname));
+ pelem->expr = NULL;
+ keycolcollation = attr->attcollation;
+ }
+ else
+ {
+ /*
+ * Expression-based partition key.
+ *
+ * The partexprs list contains the partition key expressions in their
+ * internal (already transformed) node representation, previously
+ * extracted from pg_partitioned_table.partexprs. We reuse these nodes
+ * directly and mark the spec as transformed so that
+ * transformPartitionSpec() does not try to transform them again.
+ */
+ Node *partexpr;
+
+ Assert(partexpr_item != NULL);
+
+ /*
+ * Fetch the current expression and advance the iterator. partexprs
+ * contains only expressions (not simple column refs), so we consume
+ * them in order as we encounter expression-based partition keys
+ * (attnum == 0) while iterating through partattrs.
+ */
+ partexpr = (Node *) lfirst(partexpr_item);
+ partexpr_item = lnext(partexprs, partexpr_item);
+
+ pelem->name = NULL;
+ pelem->expr = partexpr;
+ keycolcollation = exprCollation(partexpr);
+
+ partspec->is_transformed = true;
+ }
+
+ /* Handle collation */
+ partcoll = partcollation->values[keyno];
+ if (OidIsValid(partcoll) && partcoll != keycolcollation)
+ pelem->collation = list_make1(makeString(generate_collation_name(partcoll)));
+ else
+ pelem->collation = NIL;
+
+ /* XXX: non-default operator classes are not preserved yet */
+ pelem->opclass = NIL;
+
+ partspec->partParams = lappend(partspec->partParams, pelem);
+ }
+
+ relation_close(rel, AccessShareLock);
+ ReleaseSysCache(tuple);
+
+ return partspec;
+}
/*
* Subroutine for CREATE SCHEMA LIKE.
*
- * It return a list of CreateStmt statements for tables that are on source
- * schema that should be created on target schema.
+ * It returns a list of CreateStmt statements for tables that are in the source
+ * schema that should be created in the target schema.
+ *
+ * It uses CREATE TABLE ... LIKE existing infrastructure, with special handling
+ * for partitioned tables and their partitions:
*
- * It uses CREATE TABLE ... LIKE existing infrastructure.
+ * - Partitioned tables are created with their partition specification
+ *   preserved.
+ * - Partitions whose parent table is in the same schema are attached to the
+ * new parent table in the target schema.
+ * - Partitions whose parent table is in a different schema are skipped with
+ * a WARNING, since we cannot recreate the partition relationship.
+ *
+ * The returned list is ordered so that partitioned tables appear before their
+ * partitions, ensuring correct creation order.
*/
static List *
collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
bits32 options)
{
+ List *regularTables = NIL;
+ List *partitionedTables = NIL;
+ List *partitions = NIL;
List *result = NIL;
bool preserveIndexNames = false;
Relation pg_class;
@@ -63,6 +243,10 @@ collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
ScanKeyData key;
HeapTuple tuple;
bits32 tableOptions;
+ char *srcSchemaName;
+ ListCell *lc;
+
+ srcSchemaName = get_namespace_name(srcNspOid);
/*
* Determine CREATE TABLE LIKE options. We copy most properties by
@@ -105,6 +289,7 @@ collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
while (HeapTupleIsValid(tuple = systable_getnext(scan)))
{
Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);
+ Oid relOid = classForm->oid;
CreateStmt *createStmt;
TableLikeClause *likeClause;
RangeVar *newRelation;
@@ -115,6 +300,33 @@ collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
classForm->relkind != RELKIND_PARTITIONED_TABLE)
continue;
+ /*
+ * Check if this is a partition. Partitions need special handling.
+ */
+ if (classForm->relispartition)
+ {
+ Oid parentOid;
+ Oid parentNspOid;
+
+ parentOid = get_partition_parent(relOid, false);
+ parentNspOid = get_rel_namespace(parentOid);
+
+ /*
+ * If the parent is in a different schema, skip this partition with a
+ * warning. We cannot recreate the partition relationship since the
+ * parent table is not being copied.
+ */
+ if (parentNspOid != srcNspOid)
+ {
+ ereport(WARNING,
+         (errmsg("skipping partition \"%s.%s\" because its parent table is in schema \"%s\"",
+                 srcSchemaName,
+                 NameStr(classForm->relname),
+                 get_namespace_name(parentNspOid))));
+ continue;
+ }
+ }
+
createStmt = makeNode(CreateStmt);
likeClause = makeNode(TableLikeClause);
@@ -125,7 +337,7 @@ collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
newRelation->relpersistence = classForm->relpersistence;
/* Source table reference */
- sourceRelation = makeRangeVar(get_namespace_name(srcNspOid),
+ sourceRelation = makeRangeVar(pstrdup(srcSchemaName),
                               pstrdup(NameStr(classForm->relname)),
                               -1);
@@ -144,22 +356,66 @@ collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
createStmt->nnconstraints = NIL;
createStmt->options = NIL;
createStmt->oncommit = ONCOMMIT_NOOP;
-
- /*
- * XXX: Should we have INCLUDING TABLESPACE? If not, should we use the
- * same tablespace of source table?
- */
createStmt->tablespacename = NULL;
createStmt->accessMethod = NULL;
createStmt->if_not_exists = false;
+ /*
+ * Handle partitioned tables: preserve the partition specification.
+ */
+ if (classForm->relkind == RELKIND_PARTITIONED_TABLE)
+ {
+ createStmt->partspec = buildPartitionSpecForRelation(relOid);
+ partitionedTables = lappend(partitionedTables, createStmt);
+ }
+
+ /*
+ * Handle partitions: set up inheritance and partition bound to attach
+ * this partition to the parent table in the new schema.
+ */
+ else if (classForm->relispartition)
+ {
+ Oid parentOid;
+ char *parentName;
+ RangeVar *parent;
+
+ parentOid = get_partition_parent(relOid, false);
+ parentName = get_rel_name(parentOid);
+
+ /* Reference to parent in new schema */
+ parent = makeRangeVar(pstrdup(newSchemaName),
+                       pstrdup(parentName),
+                       -1);
+
+ createStmt->inhRelations = list_make1(parent);
+ createStmt->partbound = getPartitionBoundSpec(relOid);
- result = lappend(result, createStmt);
+ partitions = lappend(partitions, createStmt);
+ }
+ else
+ {
+ /* Regular table */
+ regularTables = lappend(regularTables, createStmt);
+ }
}
systable_endscan(scan);
table_close(pg_class, AccessShareLock);
+ /*
+ * Build the result list in proper order: regular tables first, then
+ * partitioned tables, then partitions. This ensures parent tables exist
+ * before their partitions are created.
+ */
+ foreach(lc, regularTables)
+ result = lappend(result, lfirst(lc));
+
+ foreach(lc, partitionedTables)
+ result = lappend(result, lfirst(lc));
+
+ foreach(lc, partitions)
+ result = lappend(result, lfirst(lc));
+
return result;
}
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 8010fe71a40..554ebb54e04 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -19759,6 +19759,9 @@ transformPartitionSpec(Relation rel, PartitionSpec *partspec)
ParseNamespaceItem *nsitem;
ListCell *l;
+ if (partspec->is_transformed)
+ return copyObject(partspec);
+
newspec = makeNode(PartitionSpec);
newspec->strategy = partspec->strategy;
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index b3188d87209..4b39e3c7587 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -4612,6 +4612,15 @@ transformPartitionBound(ParseState *pstate, Relation parent,
int partnatts = get_partition_natts(key);
List *partexprs = get_partition_exprs(key);
+ /*
+ * Check if the bound specification is already in transformed form (i.e.,
+ * it was retrieved from pg_class.relpartbound rather than parsed from
+ * SQL). This happens when copying partitions via CREATE SCHEMA ... LIKE.
+ * In this case, skip transformation and return a copy of the input.
+ */
+ if (spec->is_transformed)
+ return copyObject(spec);
+
+
/* Avoid scribbling on input */
result_spec = copyObject(spec);
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 2171c778f84..40a7143aadd 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -916,6 +916,8 @@ typedef struct PartitionSpec
PartitionStrategy strategy;
List *partParams; /* List of PartitionElems */
 ParseLoc location; /* token location, or -1 if unknown */
+
+ bool is_transformed; /* already transformed; skip re-transformation */
} PartitionSpec;
/*
@@ -928,6 +930,8 @@ struct PartitionBoundSpec
{
NodeTag type;
+ bool is_transformed; /* already transformed; skip re-transformation */
+
 char strategy; /* see PARTITION_STRATEGY codes above */
 bool is_default; /* is it a default partition bound? */
diff --git a/src/test/regress/expected/create_schema.out b/src/test/regress/expected/create_schema.out
index 7aa93deb947..0ecbea18fda 100644
--- a/src/test/regress/expected/create_schema.out
+++ b/src/test/regress/expected/create_schema.out
@@ -194,6 +194,138 @@ WHERE table_schema = 'regress_copy4' ORDER BY table_name;
-- Test source schema does not exist
CREATE SCHEMA regress_copy_fail LIKE nonexistent_schema INCLUDING TABLE;
ERROR: schema "nonexistent_schema" does not exist
+--
+-- Test partitioned tables handling
+--
+-- Create a schema with partitioned table and partitions
+CREATE SCHEMA regress_part_source;
+CREATE TABLE regress_part_source.sales (
+ id int,
+ sale_date date,
+ amount numeric
+) PARTITION BY RANGE (sale_date);
+CREATE TABLE regress_part_source.sales2 (
+ id SERIAL,
+ amount NUMERIC,
+ sale_date DATE
+) PARTITION BY RANGE (EXTRACT(YEAR FROM sale_date));
+CREATE TABLE regress_part_source.sales_2025 PARTITION OF regress_part_source.sales2 FOR VALUES FROM (2024) TO (2025);
+CREATE TABLE regress_part_source.sales_2026 PARTITION OF regress_part_source.sales2 FOR VALUES FROM (2025) TO (2026);
+CREATE TABLE regress_part_source.sales_2023 PARTITION OF regress_part_source.sales
+ FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
+CREATE TABLE regress_part_source.sales_2024 PARTITION OF regress_part_source.sales
+ FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
+-- Copy the schema - partitioned table should preserve partition spec
+-- and partitions should be attached
+CREATE SCHEMA regress_part_copy LIKE regress_part_source INCLUDING ALL;
+-- Verify the partitioned table was created with partition spec and that
+-- partitions were attached
+\d+ regress_part_copy.sales
+                               Partitioned table "regress_part_copy.sales"
+  Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
+-----------+---------+-----------+----------+---------+---------+--------------+-------------
+ id        | integer |           |          |         | plain   |              |
+ sale_date | date    |           |          |         | plain   |              |
+ amount    | numeric |           |          |         | main    |              |
+Partition key: RANGE (sale_date)
+Partitions: regress_part_copy.sales_2023 FOR VALUES FROM ('01-01-2023') TO ('01-01-2024'),
+            regress_part_copy.sales_2024 FOR VALUES FROM ('01-01-2024') TO ('01-01-2025')
+
+\d+ regress_part_copy.sales2
+                                              Partitioned table "regress_part_copy.sales2"
+  Column   |  Type   | Collation | Nullable |                        Default                         | Storage | Stats target | Description
+-----------+---------+-----------+----------+--------------------------------------------------------+---------+--------------+-------------
+ id        | integer |           | not null | nextval('regress_part_source.sales2_id_seq'::regclass) | plain   |              |
+ amount    | numeric |           |          |                                                        | main    |              |
+ sale_date | date    |           |          |                                                        | plain   |              |
+Partition key: RANGE (EXTRACT(year FROM sale_date))
+Not-null constraints:
+    "sales2_id_not_null" NOT NULL "id"
+Partitions: regress_part_copy.sales_2025 FOR VALUES FROM ('2024') TO ('2025'),
+            regress_part_copy.sales_2026 FOR VALUES FROM ('2025') TO ('2026')
+
+-- Test orphan partitions: parent in different schema
+-- Create partition in different schema than its parent
+CREATE SCHEMA regress_orphan_source;
+CREATE TABLE regress_orphan_source.orphan_part PARTITION OF regress_part_source.sales
+ FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
+-- Copy the schema with orphan partition - should skip with WARNING
+CREATE SCHEMA regress_orphan_copy LIKE regress_orphan_source INCLUDING ALL;
+WARNING: skipping partition "regress_orphan_source.orphan_part" because its parent table is in schema "regress_part_source"
+-- The orphan partition should NOT be copied (it was skipped)
+\d regress_orphan_copy.*
+-- Test list partitioning
+CREATE SCHEMA regress_list_part_source;
+CREATE TABLE regress_list_part_source.products (
+ id int,
+ category text,
+ name text
+) PARTITION BY LIST (category);
+CREATE TABLE regress_list_part_source.products_electronics
+ PARTITION OF regress_list_part_source.products
+ FOR VALUES IN ('electronics', 'computers');
+CREATE TABLE regress_list_part_source.products_clothing
+ PARTITION OF regress_list_part_source.products
+ FOR VALUES IN ('clothing', 'shoes');
+CREATE SCHEMA regress_list_part_copy LIKE regress_list_part_source INCLUDING ALL;
+-- Verify list partitioned table structure and that partitions were attached correctly
+\d+ regress_list_part_copy.products
+                         Partitioned table "regress_list_part_copy.products"
+  Column  |  Type   | Collation | Nullable | Default | Storage  | Stats target | Description
+----------+---------+-----------+----------+---------+----------+--------------+-------------
+ id       | integer |           |          |         | plain    |              |
+ category | text    |           |          |         | extended |              |
+ name     | text    |           |          |         | extended |              |
+Partition key: LIST (category)
+Partitions: regress_list_part_copy.products_clothing FOR VALUES IN ('clothing', 'shoes'),
+            regress_list_part_copy.products_electronics FOR VALUES IN ('electronics', 'computers')
+
+-- Test hash partitioning
+CREATE SCHEMA regress_hash_part_source;
+CREATE TABLE regress_hash_part_source.events (
+ id int,
+ event_type text
+) PARTITION BY HASH (id);
+CREATE TABLE regress_hash_part_source.events_0
+ PARTITION OF regress_hash_part_source.events
+ FOR VALUES WITH (MODULUS 2, REMAINDER 0);
+CREATE TABLE regress_hash_part_source.events_1
+ PARTITION OF regress_hash_part_source.events
+ FOR VALUES WITH (MODULUS 2, REMAINDER 1);
+CREATE SCHEMA regress_hash_part_copy LIKE regress_hash_part_source INCLUDING ALL;
+-- Verify hash partitioned table structure and that partitions were attached correctly
+\d+ regress_hash_part_copy.events
+                        Partitioned table "regress_hash_part_copy.events"
+   Column   |  Type   | Collation | Nullable | Default | Storage  | Stats target | Description
+------------+---------+-----------+----------+---------+----------+--------------+-------------
+ id         | integer |           |          |         | plain    |              |
+ event_type | text    |           |          |         | extended |              |
+Partition key: HASH (id)
+Partitions: regress_hash_part_copy.events_0 FOR VALUES WITH (modulus 2, remainder 0),
+            regress_hash_part_copy.events_1 FOR VALUES WITH (modulus 2, remainder 1)
+
+-- Clean up partition tests
+DROP SCHEMA regress_part_source CASCADE;
+NOTICE: drop cascades to 5 other objects
+DETAIL: drop cascades to table regress_part_source.sales
+drop cascades to table regress_part_source.sales2
+drop cascades to default value for column id of table regress_part_copy.sales2
+drop cascades to default value for column id of table regress_part_copy.sales_2025
+drop cascades to default value for column id of table regress_part_copy.sales_2026
+DROP SCHEMA regress_part_copy CASCADE;
+NOTICE: drop cascades to 2 other objects
+DETAIL: drop cascades to table regress_part_copy.sales
+drop cascades to table regress_part_copy.sales2
+DROP SCHEMA regress_orphan_source CASCADE;
+DROP SCHEMA regress_orphan_copy CASCADE;
+DROP SCHEMA regress_list_part_source CASCADE;
+NOTICE: drop cascades to table regress_list_part_source.products
+DROP SCHEMA regress_list_part_copy CASCADE;
+NOTICE: drop cascades to table regress_list_part_copy.products
+DROP SCHEMA regress_hash_part_source CASCADE;
+NOTICE: drop cascades to table regress_hash_part_source.events
+DROP SCHEMA regress_hash_part_copy CASCADE;
+NOTICE: drop cascades to table regress_hash_part_copy.events
-- Clean up LIKE tests
DROP SCHEMA regress_copy1 CASCADE;
NOTICE: drop cascades to 3 other objects
@@ -211,6 +343,8 @@ DETAIL: drop cascades to table regress_copy3.t1
drop cascades to table regress_copy3.t2
drop cascades to table regress_copy3.t3
DROP SCHEMA regress_copy4 CASCADE;
DROP SCHEMA regress_empty_source CASCADE;
DROP SCHEMA regress_source_schema CASCADE;
NOTICE: drop cascades to 3 other objects
diff --git a/src/test/regress/sql/create_schema.sql b/src/test/regress/sql/create_schema.sql
index a00cf75869d..8102149e5ea 100644
--- a/src/test/regress/sql/create_schema.sql
+++ b/src/test/regress/sql/create_schema.sql
@@ -131,11 +131,114 @@ WHERE table_schema = 'regress_copy4' ORDER BY table_name;
-- Test source schema does not exist
CREATE SCHEMA regress_copy_fail LIKE nonexistent_schema INCLUDING TABLE;
+--
+-- Test partitioned tables handling
+--
+
+-- Create a schema with partitioned table and partitions
+CREATE SCHEMA regress_part_source;
+
+CREATE TABLE regress_part_source.sales (
+ id int,
+ sale_date date,
+ amount numeric
+) PARTITION BY RANGE (sale_date);
+
+CREATE TABLE regress_part_source.sales2 (
+ id SERIAL,
+ amount NUMERIC,
+ sale_date DATE
+) PARTITION BY RANGE (EXTRACT(YEAR FROM sale_date));
+
+CREATE TABLE regress_part_source.sales_2025 PARTITION OF regress_part_source.sales2 FOR VALUES FROM (2024) TO (2025);
+CREATE TABLE regress_part_source.sales_2026 PARTITION OF regress_part_source.sales2 FOR VALUES FROM (2025) TO (2026);
+
+CREATE TABLE regress_part_source.sales_2023 PARTITION OF regress_part_source.sales
+ FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
+
+CREATE TABLE regress_part_source.sales_2024 PARTITION OF regress_part_source.sales
+ FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
+
+-- Copy the schema - partitioned table should preserve partition spec
+-- and partitions should be attached
+CREATE SCHEMA regress_part_copy LIKE regress_part_source INCLUDING ALL;
+
+-- Verify the partitioned table was created with partition spec and that
+-- partitions were attached
+\d+ regress_part_copy.sales
+\d+ regress_part_copy.sales2
+
+-- Test orphan partitions: parent in different schema
+-- Create partition in different schema than its parent
+CREATE SCHEMA regress_orphan_source;
+CREATE TABLE regress_orphan_source.orphan_part PARTITION OF regress_part_source.sales
+ FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');
+
+-- Copy the schema with orphan partition - should skip with WARNING
+CREATE SCHEMA regress_orphan_copy LIKE regress_orphan_source INCLUDING ALL;
+
+-- The orphan partition should NOT be copied (it was skipped)
+\d regress_orphan_copy.*
+
+-- Test list partitioning
+CREATE SCHEMA regress_list_part_source;
+
+CREATE TABLE regress_list_part_source.products (
+ id int,
+ category text,
+ name text
+) PARTITION BY LIST (category);
+
+CREATE TABLE regress_list_part_source.products_electronics
+ PARTITION OF regress_list_part_source.products
+ FOR VALUES IN ('electronics', 'computers');
+
+CREATE TABLE regress_list_part_source.products_clothing
+ PARTITION OF regress_list_part_source.products
+ FOR VALUES IN ('clothing', 'shoes');
+
+CREATE SCHEMA regress_list_part_copy LIKE regress_list_part_source INCLUDING ALL;
+
+-- Verify list partitioned table structure and that partitions were attached correctly
+\d+ regress_list_part_copy.products
+
+-- Test hash partitioning
+CREATE SCHEMA regress_hash_part_source;
+
+CREATE TABLE regress_hash_part_source.events (
+ id int,
+ event_type text
+) PARTITION BY HASH (id);
+
+CREATE TABLE regress_hash_part_source.events_0
+ PARTITION OF regress_hash_part_source.events
+ FOR VALUES WITH (MODULUS 2, REMAINDER 0);
+
+CREATE TABLE regress_hash_part_source.events_1
+ PARTITION OF regress_hash_part_source.events
+ FOR VALUES WITH (MODULUS 2, REMAINDER 1);
+
+CREATE SCHEMA regress_hash_part_copy LIKE regress_hash_part_source INCLUDING ALL;
+
+-- Verify hash partitioned table structure and that partitions were attached correctly
+\d+ regress_hash_part_copy.events
+
+-- Clean up partition tests
+DROP SCHEMA regress_part_source CASCADE;
+DROP SCHEMA regress_part_copy CASCADE;
+DROP SCHEMA regress_orphan_source CASCADE;
+DROP SCHEMA regress_orphan_copy CASCADE;
+DROP SCHEMA regress_list_part_source CASCADE;
+DROP SCHEMA regress_list_part_copy CASCADE;
+DROP SCHEMA regress_hash_part_source CASCADE;
+DROP SCHEMA regress_hash_part_copy CASCADE;
+
-- Clean up LIKE tests
DROP SCHEMA regress_copy1 CASCADE;
DROP SCHEMA regress_copy2 CASCADE;
DROP SCHEMA regress_copy3 CASCADE;
DROP SCHEMA regress_copy4 CASCADE;
DROP SCHEMA regress_empty_source CASCADE;
DROP SCHEMA regress_source_schema CASCADE;
--
2.52.0
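For reviewers who want a quick way to exercise the partition handling added by the patch above, here is a minimal session (hypothetical schema and table names; the WARNING text is the one the patch emits):

```sql
-- Parent and partition in the same schema: both are copied and the copy
-- of the partition is attached to the copy of the parent.
CREATE SCHEMA part_src;
CREATE TABLE part_src.m (id int) PARTITION BY RANGE (id);
CREATE TABLE part_src.m_1 PARTITION OF part_src.m FOR VALUES FROM (1) TO (100);
CREATE SCHEMA part_copy LIKE part_src INCLUDING ALL;

-- Partition whose parent lives in another schema: skipped with a WARNING.
CREATE SCHEMA orphan_src;
CREATE TABLE orphan_src.m_2 PARTITION OF part_src.m FOR VALUES FROM (100) TO (200);
CREATE SCHEMA orphan_copy LIKE orphan_src INCLUDING ALL;
-- WARNING: skipping partition "orphan_src.m_2" because its parent table is in schema "part_src"
```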
From 96226a6d87e32637e960394aa720e4f5358b57b5 Mon Sep 17 00:00:00 2001
From: Matheus Alcantara <[email protected]>
Date: Tue, 10 Feb 2026 18:05:33 -0300
Subject: [PATCH v3 3/4] Add foreign key support to CREATE SCHEMA ... LIKE
This commit extends CREATE SCHEMA ... LIKE to copy foreign key
constraints between tables:
- FK constraints referencing tables within the same source schema
are recreated in the new schema, pointing to the corresponding
copied tables.
- FK constraints referencing tables in external schemas are skipped
with a WARNING, since the referenced table is not being copied.
- All FK properties are preserved: ON UPDATE/DELETE actions (CASCADE,
SET NULL, SET DEFAULT, RESTRICT, NO ACTION), match type (FULL,
PARTIAL, SIMPLE), deferrable/deferred settings, and ON DELETE
SET NULL/DEFAULT column lists.
FK constraints are applied after all tables are created using
ALTER TABLE ... ADD CONSTRAINT, since both the referencing and
referenced tables must exist.
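An illustrative session for the rules above (hypothetical names; output abbreviated):

```sql
CREATE SCHEMA fk_src;
CREATE TABLE fk_src.parent (id int PRIMARY KEY);
CREATE TABLE fk_src.child (parent_id int REFERENCES fk_src.parent(id) ON DELETE CASCADE);

-- An FK pointing outside the source schema is not copied.
CREATE TABLE public.ext (id int PRIMARY KEY);
CREATE TABLE fk_src.outsider (ref_id int REFERENCES public.ext(id));

CREATE SCHEMA fk_copy LIKE fk_src INCLUDING ALL;
-- WARNING: skipping foreign key "outsider_ref_id_fkey" on table "fk_src.outsider"
--   because it references table "public.ext" in a different schema
-- fk_copy.child now references fk_copy.parent(id) ON DELETE CASCADE
```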
---
src/backend/commands/schemacmds.c | 176 ++++++++++++++++++++
src/test/regress/expected/create_schema.out | 148 +++++++++++++++-
src/test/regress/sql/create_schema.sql | 100 ++++++++++-
3 files changed, 422 insertions(+), 2 deletions(-)
diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
index d28e3429266..d99419c317f 100644
--- a/src/backend/commands/schemacmds.c
+++ b/src/backend/commands/schemacmds.c
@@ -27,6 +27,7 @@
#include "catalog/pg_partitioned_table.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_authid.h"
+#include "catalog/pg_constraint.h"
#include "catalog/pg_database.h"
#include "catalog/pg_namespace.h"
#include "commands/event_trigger.h"
@@ -48,6 +49,7 @@
static void AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId);
static List *collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
                                     bits32 options);
+static List *collectSchemaForeignKeysLike(Oid srcNspOid, const char *newSchemaName);
static PartitionSpec *buildPartitionSpecForRelation(Oid relid);
static PartitionBoundSpec *getPartitionBoundSpec(Oid partOid);
@@ -419,6 +421,157 @@ collectSchemaTablesLike(Oid srcNspOid, const char *newSchemaName,
return result;
}
+/*
+ * Collect foreign key constraints from source schema tables.
+ *
+ * Returns a list of AlterTableStmt nodes that add FK constraints to tables
+ * in the new schema. Only FKs that reference tables within the same source
+ * schema are included; FKs referencing external schemas are skipped with
+ * a WARNING.
+ *
+ * This must be called after tables are created, as FKs require both the
+ * referencing and referenced tables to exist.
+ */
+static List *
+collectSchemaForeignKeysLike(Oid srcNspOid, const char *newSchemaName)
+{
+ List *result = NIL;
+ Relation pg_constraint;
+ SysScanDesc scan;
+ HeapTuple tuple;
+ char *srcSchemaName;
+ ScanKeyData key;
+
+ srcSchemaName = get_namespace_name(srcNspOid);
+
+ ScanKeyInit(&key,
+ Anum_pg_constraint_connamespace,
+ BTEqualStrategyNumber, F_OIDEQ,
+ ObjectIdGetDatum(srcNspOid));
+
+ pg_constraint = table_open(ConstraintRelationId, AccessShareLock);
+
+ /* Scan constraints filtering for FKs on source schema tables */
+ scan = systable_beginscan(pg_constraint, InvalidOid, false,
+ NULL, 1, &key);
+
+ while (HeapTupleIsValid(tuple = systable_getnext(scan)))
+ {
+ Form_pg_constraint con = (Form_pg_constraint) GETSTRUCT(tuple);
+ Oid relid = con->conrelid;
+ Oid confrelid = con->confrelid;
+ Oid refNspOid;
+ AlterTableStmt *alterStmt;
+ AlterTableCmd *cmd;
+ Constraint *fkcon;
+ RangeVar *rel;
+ RangeVar *pktable;
+ int numkeys;
+ AttrNumber conkey[INDEX_MAX_KEYS];
+ AttrNumber confkey[INDEX_MAX_KEYS];
+ char *relname;
+ char *refrelname;
+ AttrNumber fkdelsetcols[INDEX_MAX_KEYS];
+ int numfkdelsetcols;
+
+ /* Only process foreign key constraints */
+ if (con->contype != CONSTRAINT_FOREIGN)
+ continue;
+
+ /* Check if referenced table is in source schema */
+ refNspOid = get_rel_namespace(confrelid);
+ if (refNspOid != srcNspOid)
+ {
+ ereport(WARNING,
+         (errmsg("skipping foreign key \"%s\" on table \"%s.%s\" because it references table \"%s.%s\" in a different schema",
+                 NameStr(con->conname),
+                 srcSchemaName,
+                 get_rel_name(relid),
+                 get_namespace_name(refNspOid),
+                 get_rel_name(confrelid))));
+ continue;
+ }
+
+ /* Extract FK constraint details */
+ DeconstructFkConstraintRow(tuple, &numkeys, conkey, confkey,
+                            NULL, NULL, NULL,
+                            &numfkdelsetcols, fkdelsetcols);
+
+ /* Build the Constraint node for the FK */
+ fkcon = makeNode(Constraint);
+ fkcon->contype = CONSTR_FOREIGN;
+ fkcon->conname = pstrdup(NameStr(con->conname));
+ fkcon->deferrable = con->condeferrable;
+ fkcon->initdeferred = con->condeferred;
+ fkcon->is_enforced = con->conenforced;
+ fkcon->skip_validation = false;
+ fkcon->initially_valid = con->convalidated;
+ fkcon->location = -1;
+
+ /* Build FK column list */
+ relname = get_rel_name(relid);
+ fkcon->fk_attrs = NIL;
+ for (int i = 0; i < numkeys; i++)
+ {
+ char *attname = get_attname(relid, conkey[i], false);
+
+ fkcon->fk_attrs = lappend(fkcon->fk_attrs, makeString(attname));
+ }
+
+ /* Build PK column list and reference table */
+ refrelname = get_rel_name(confrelid);
+ pktable = makeRangeVar(pstrdup(newSchemaName),
+                        pstrdup(refrelname),
+                        -1);
+ fkcon->pktable = pktable;
+ fkcon->pk_attrs = NIL;
+ for (int i = 0; i < numkeys; i++)
+ {
+ char *attname = get_attname(confrelid, confkey[i], false);
+
+ fkcon->pk_attrs = lappend(fkcon->pk_attrs, makeString(attname));
+ }
+
+ /* Set FK actions */
+ fkcon->fk_matchtype = con->confmatchtype;
+ fkcon->fk_upd_action = con->confupdtype;
+ fkcon->fk_del_action = con->confdeltype;
+ fkcon->fk_del_set_cols = NIL;
+ /* Handle ON DELETE SET NULL/DEFAULT (col1, col2, ...) */
+ for (int i = 0; i < numfkdelsetcols; i++)
+ {
+ char *attname = get_attname(relid, fkdelsetcols[i], false);
+
+ fkcon->fk_del_set_cols = lappend(fkcon->fk_del_set_cols,
+                                  makeString(attname));
+ }
+
+ /* Build ALTER TABLE ADD CONSTRAINT statement */
+ alterStmt = makeNode(AlterTableStmt);
+ cmd = makeNode(AlterTableCmd);
+
+ rel = makeRangeVar(pstrdup(newSchemaName),
+ pstrdup(relname),
+ -1);
+
+ cmd->subtype = AT_AddConstraint;
+ cmd->def = (Node *) fkcon;
+
+ alterStmt->relation = rel;
+ alterStmt->cmds = list_make1(cmd);
+ alterStmt->objtype = OBJECT_TABLE;
+ alterStmt->missing_ok = false;
+
+ result = lappend(result, alterStmt);
+ }
+
+ systable_endscan(scan);
+ table_close(pg_constraint, AccessShareLock);
+
+ return result;
+}
+
/*
* CREATE SCHEMA
*
@@ -444,6 +597,7 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString,
AclResult aclresult;
ObjectAddress address;
StringInfoData pathbuf;
+ List *fk_stmts = NIL;
GetUserIdAndSecContext(&saved_uid, &save_sec_context);
@@ -570,6 +724,10 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString,
/*
* Process LIKE clause if present. We collect objects from the source
* schema and append them to the schema elements list.
+ *
+ * Note: FK constraints are collected separately and executed after all
+ * tables are created, since they require both referencing and referenced
+ * tables to exist.
*/
if (stmt->like_clause != NULL)
{
@@ -596,6 +754,15 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString,
like->options);
}
+ /*
+ * If INCLUDING ALL, or both INCLUDING TABLE and INCLUDING INDEX, are
+ * specified, also collect FK constraints to create in the new schema.
+ */
+ if (like->options & CREATE_SCHEMA_LIKE_ALL ||
+ (like->options & CREATE_SCHEMA_LIKE_TABLE
+ && like->options & CREATE_TABLE_LIKE_INDEXES))
+ fk_stmts = collectSchemaForeignKeysLike(srcNspOid, schemaName);
+
 /* Append LIKE-generated statements to explicit schema elements */
stmt->schemaElts = list_concat(like_stmts, stmt->schemaElts);
}
@@ -610,6 +777,15 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString,
parsetree_list = transformCreateSchemaStmtElements(stmt->schemaElts,
schemaName);
+
+ /*
+ * Append foreign key constraints from LIKE clause. This must be done
+ * after all tables are created, since FKs require both the referencing
+ * and referenced tables to exist.
+ */
+ if (fk_stmts != NIL)
+ parsetree_list = list_concat(parsetree_list, fk_stmts);
+
/*
 * Execute each command contained in the CREATE SCHEMA. Since the grammar
 * allows only utility commands in CREATE SCHEMA, there is no need to pass
diff --git a/src/test/regress/expected/create_schema.out b/src/test/regress/expected/create_schema.out
index 0ecbea18fda..bfde6dffbe2 100644
--- a/src/test/regress/expected/create_schema.out
+++ b/src/test/regress/expected/create_schema.out
@@ -105,7 +105,7 @@ CREATE TABLE regress_source_schema.t1 (
created_at timestamp DEFAULT now()
);
CREATE TABLE regress_source_schema.t2 (
- id int REFERENCES regress_source_schema.t1(id),
+ id int,
data jsonb
);
CREATE INDEX idx_t1_name ON regress_source_schema.t1(name);
@@ -304,6 +304,152 @@ Partition key: HASH (id)
 Partitions: regress_hash_part_copy.events_0 FOR VALUES WITH (modulus 2, remainder 0),
             regress_hash_part_copy.events_1 FOR VALUES WITH (modulus 2, remainder 1)
+--
+-- Test foreign key handling
+--
+-- Create a schema with tables that have FK relationships within the schema
+CREATE SCHEMA regress_fk_source;
+CREATE TABLE regress_fk_source.parent (
+ id int PRIMARY KEY,
+ name text
+);
+CREATE TABLE regress_fk_source.child (
+ id int PRIMARY KEY,
+ parent_id int REFERENCES regress_fk_source.parent(id) ON DELETE CASCADE,
+ data text
+);
+-- Add another FK relationship
+CREATE TABLE regress_fk_source.grandchild (
+ id int PRIMARY KEY,
+ child_id int REFERENCES regress_fk_source.child(id) ON UPDATE CASCADE ON DELETE SET NULL,
+ info text
+);
+-- Test ON DELETE SET NULL with column list (fk_del_set_cols)
+CREATE TABLE regress_fk_source.multi_col_parent (
+ id int,
+ sub_id int,
+ PRIMARY KEY (id, sub_id)
+);
+CREATE TABLE regress_fk_source.multi_col_child (
+ id int PRIMARY KEY,
+ parent_id int,
+ parent_sub_id int,
+ extra_data text,
+ FOREIGN KEY (parent_id, parent_sub_id)
+ REFERENCES regress_fk_source.multi_col_parent(id, sub_id)
+ ON DELETE SET NULL (parent_id) -- only parent_id set to NULL, not parent_sub_id
+);
+-- Copy the schema - FKs should be recreated pointing to new schema tables
+CREATE SCHEMA regress_fk_copy LIKE regress_fk_source INCLUDING ALL;
+-- Verify the FK constraints were copied and actions were preserved
+\d regress_fk_copy.child
+            Table "regress_fk_copy.child"
+  Column   |  Type   | Collation | Nullable | Default 
+-----------+---------+-----------+----------+---------
+ id        | integer |           | not null | 
+ parent_id | integer |           |          | 
+ data      | text    |           |          | 
+Indexes:
+    "child_pkey" PRIMARY KEY, btree (id)
+Foreign-key constraints:
+    "child_parent_id_fkey" FOREIGN KEY (parent_id) REFERENCES regress_fk_copy.parent(id) ON DELETE CASCADE
+Referenced by:
+    TABLE "regress_fk_copy.grandchild" CONSTRAINT "grandchild_child_id_fkey" FOREIGN KEY (child_id) REFERENCES regress_fk_copy.child(id) ON UPDATE CASCADE ON DELETE SET NULL
+
+\d regress_fk_copy.grandchild
+         Table "regress_fk_copy.grandchild"
+  Column  |  Type   | Collation | Nullable | Default 
+----------+---------+-----------+----------+---------
+ id       | integer |           | not null | 
+ child_id | integer |           |          | 
+ info     | text    |           |          | 
+Indexes:
+    "grandchild_pkey" PRIMARY KEY, btree (id)
+Foreign-key constraints:
+    "grandchild_child_id_fkey" FOREIGN KEY (child_id) REFERENCES regress_fk_copy.child(id) ON UPDATE CASCADE ON DELETE SET NULL
+
+-- Verify ON DELETE SET NULL (column_list) was preserved
+\d regress_fk_copy.multi_col_child
+         Table "regress_fk_copy.multi_col_child"
+    Column     |  Type   | Collation | Nullable | Default 
+---------------+---------+-----------+----------+---------
+ id            | integer |           | not null | 
+ parent_id     | integer |           |          | 
+ parent_sub_id | integer |           |          | 
+ extra_data    | text    |           |          | 
+Indexes:
+    "multi_col_child_pkey" PRIMARY KEY, btree (id)
+Foreign-key constraints:
+    "multi_col_child_parent_id_parent_sub_id_fkey" FOREIGN KEY (parent_id, parent_sub_id) REFERENCES regress_fk_copy.multi_col_parent(id, sub_id) ON DELETE SET NULL (parent_id)
+
+-- Test FK to external schema (should be skipped with WARNING)
+CREATE SCHEMA regress_fk_external;
+CREATE TABLE regress_fk_external.external_ref (
+ id int PRIMARY KEY
+);
+CREATE SCHEMA regress_fk_mixed;
+CREATE TABLE regress_fk_mixed.internal_parent (
+ id int PRIMARY KEY
+);
+-- Table with FK to table in same schema (should be copied)
+CREATE TABLE regress_fk_mixed.internal_child (
+ id int PRIMARY KEY,
+ parent_id int REFERENCES regress_fk_mixed.internal_parent(id)
+);
+-- Table with FK to external schema (FK should be skipped with WARNING)
+CREATE TABLE regress_fk_mixed.external_child (
+ id int PRIMARY KEY,
+ ref_id int REFERENCES regress_fk_external.external_ref(id)
+);
+-- Copy should warn about external FK but copy internal FK
+CREATE SCHEMA regress_fk_mixed_copy LIKE regress_fk_mixed INCLUDING ALL;
+WARNING:  skipping foreign key "external_child_ref_id_fkey" on table "regress_fk_mixed.external_child" because it references table "regress_fk_external.external_ref" in a different schema
+-- Verify only internal FK was copied (external FK should be skipped)
+\d regress_fk_mixed_copy.internal_child
+     Table "regress_fk_mixed_copy.internal_child"
+  Column   |  Type   | Collation | Nullable | Default 
+-----------+---------+-----------+----------+---------
+ id        | integer |           | not null | 
+ parent_id | integer |           |          | 
+Indexes:
+    "internal_child_pkey" PRIMARY KEY, btree (id)
+Foreign-key constraints:
+    "internal_child_parent_id_fkey" FOREIGN KEY (parent_id) REFERENCES regress_fk_mixed_copy.internal_parent(id)
+
+\d regress_fk_mixed_copy.external_child -- table is copied, FK is skipped
+   Table "regress_fk_mixed_copy.external_child"
+ Column |  Type   | Collation | Nullable | Default 
+--------+---------+-----------+----------+---------
+ id     | integer |           | not null | 
+ ref_id | integer |           |          | 
+Indexes:
+    "external_child_pkey" PRIMARY KEY, btree (id)
+
+-- Test that the default (EXCLUDING) copies no tables, and hence no
+-- ALTER TABLE ADD CONSTRAINT for foreign keys is executed.
+CREATE SCHEMA regress_copy_empty LIKE regress_fk_source;
+\d+ regress_copy_empty
+-- Clean up FK tests
+DROP SCHEMA regress_fk_source CASCADE;
+NOTICE: drop cascades to 5 other objects
+DETAIL: drop cascades to table regress_fk_source.parent
+drop cascades to table regress_fk_source.child
+drop cascades to table regress_fk_source.grandchild
+drop cascades to table regress_fk_source.multi_col_parent
+drop cascades to table regress_fk_source.multi_col_child
+DROP SCHEMA regress_fk_copy CASCADE;
+NOTICE: drop cascades to 5 other objects
+DETAIL: drop cascades to table regress_fk_copy.child
+drop cascades to table regress_fk_copy.grandchild
+drop cascades to table regress_fk_copy.multi_col_child
+drop cascades to table regress_fk_copy.multi_col_parent
+drop cascades to table regress_fk_copy.parent
+DROP SCHEMA regress_fk_external CASCADE;
+NOTICE: drop cascades to 2 other objects
+DETAIL: drop cascades to table regress_fk_external.external_ref
+drop cascades to constraint external_child_ref_id_fkey on table regress_fk_mixed.external_child
+DROP SCHEMA regress_fk_mixed CASCADE;
+NOTICE: drop cascades to 3 other objects
+DETAIL: drop cascades to table regress_fk_mixed.internal_parent
+drop cascades to table regress_fk_mixed.internal_child
+drop cascades to table regress_fk_mixed.external_child
+DROP SCHEMA regress_fk_mixed_copy CASCADE;
+NOTICE: drop cascades to 3 other objects
+DETAIL: drop cascades to table regress_fk_mixed_copy.external_child
+drop cascades to table regress_fk_mixed_copy.internal_child
+drop cascades to table regress_fk_mixed_copy.internal_parent
+DROP SCHEMA regress_copy_empty CASCADE;
-- Clean up partition tests
DROP SCHEMA regress_part_source CASCADE;
NOTICE: drop cascades to 5 other objects
diff --git a/src/test/regress/sql/create_schema.sql b/src/test/regress/sql/create_schema.sql
index 8102149e5ea..f6f9ef2f719 100644
--- a/src/test/regress/sql/create_schema.sql
+++ b/src/test/regress/sql/create_schema.sql
@@ -80,7 +80,7 @@ CREATE TABLE regress_source_schema.t1 (
);
CREATE TABLE regress_source_schema.t2 (
- id int REFERENCES regress_source_schema.t1(id),
+ id int,
data jsonb
);
@@ -223,6 +223,104 @@ CREATE SCHEMA regress_hash_part_copy LIKE regress_hash_part_source INCLUDING ALL
 -- Verify hash partitioned table structure and that partitions were attached correctly
\d+ regress_hash_part_copy.events
+--
+-- Test foreign key handling
+--
+
+-- Create a schema with tables that have FK relationships within the schema
+CREATE SCHEMA regress_fk_source;
+
+CREATE TABLE regress_fk_source.parent (
+ id int PRIMARY KEY,
+ name text
+);
+
+CREATE TABLE regress_fk_source.child (
+ id int PRIMARY KEY,
+ parent_id int REFERENCES regress_fk_source.parent(id) ON DELETE CASCADE,
+ data text
+);
+
+-- Add another FK relationship
+CREATE TABLE regress_fk_source.grandchild (
+ id int PRIMARY KEY,
+ child_id int REFERENCES regress_fk_source.child(id) ON UPDATE CASCADE ON DELETE SET NULL,
+ info text
+);
+
+-- Test ON DELETE SET NULL with column list (fk_del_set_cols)
+CREATE TABLE regress_fk_source.multi_col_parent (
+ id int,
+ sub_id int,
+ PRIMARY KEY (id, sub_id)
+);
+
+CREATE TABLE regress_fk_source.multi_col_child (
+ id int PRIMARY KEY,
+ parent_id int,
+ parent_sub_id int,
+ extra_data text,
+ FOREIGN KEY (parent_id, parent_sub_id)
+ REFERENCES regress_fk_source.multi_col_parent(id, sub_id)
+ ON DELETE SET NULL (parent_id) -- only parent_id set to NULL, not parent_sub_id
+);
+
+-- Copy the schema - FKs should be recreated pointing to new schema tables
+CREATE SCHEMA regress_fk_copy LIKE regress_fk_source INCLUDING ALL;
+
+-- Verify the FK constraints were copied and actions were preserved
+\d regress_fk_copy.child
+\d regress_fk_copy.grandchild
+
+-- Verify ON DELETE SET NULL (column_list) was preserved
+\d regress_fk_copy.multi_col_child
+
+-- Test FK to external schema (should be skipped with WARNING)
+CREATE SCHEMA regress_fk_external;
+
+CREATE TABLE regress_fk_external.external_ref (
+ id int PRIMARY KEY
+);
+
+CREATE SCHEMA regress_fk_mixed;
+
+CREATE TABLE regress_fk_mixed.internal_parent (
+ id int PRIMARY KEY
+);
+
+-- Table with FK to table in same schema (should be copied)
+CREATE TABLE regress_fk_mixed.internal_child (
+ id int PRIMARY KEY,
+ parent_id int REFERENCES regress_fk_mixed.internal_parent(id)
+);
+
+-- Table with FK to external schema (FK should be skipped with WARNING)
+CREATE TABLE regress_fk_mixed.external_child (
+ id int PRIMARY KEY,
+ ref_id int REFERENCES regress_fk_external.external_ref(id)
+);
+
+-- Copy should warn about external FK but copy internal FK
+CREATE SCHEMA regress_fk_mixed_copy LIKE regress_fk_mixed INCLUDING ALL;
+
+-- Verify only internal FK was copied (external FK should be skipped)
+\d regress_fk_mixed_copy.internal_child
+\d regress_fk_mixed_copy.external_child -- table is copied, FK is skipped
+
+-- Test that the default (EXCLUDING) copies no tables, and hence no
+-- ALTER TABLE ADD CONSTRAINT for foreign keys is executed.
+CREATE SCHEMA regress_copy_empty LIKE regress_fk_source;
+
+\d+ regress_copy_empty
+
+
+-- Clean up FK tests
+DROP SCHEMA regress_fk_source CASCADE;
+DROP SCHEMA regress_fk_copy CASCADE;
+DROP SCHEMA regress_fk_external CASCADE;
+DROP SCHEMA regress_fk_mixed CASCADE;
+DROP SCHEMA regress_fk_mixed_copy CASCADE;
+DROP SCHEMA regress_copy_empty CASCADE;
+
-- Clean up partition tests
DROP SCHEMA regress_part_source CASCADE;
DROP SCHEMA regress_part_copy CASCADE;
--
2.52.0
From d838ee9701fc36f1967819bbf8b5bfdf1bf552d8 Mon Sep 17 00:00:00 2001
From: Matheus Alcantara <[email protected]>
Date: Wed, 11 Feb 2026 15:35:25 -0300
Subject: [PATCH v3 4/4] Add documentation for CREATE SCHEMA ... LIKE
---
doc/src/sgml/ref/create_schema.sgml | 92 ++++++++++++++++++++++++++++-
1 file changed, 90 insertions(+), 2 deletions(-)
diff --git a/doc/src/sgml/ref/create_schema.sgml b/doc/src/sgml/ref/create_schema.sgml
index ed69298ccc6..3b9bad643d1 100644
--- a/doc/src/sgml/ref/create_schema.sgml
+++ b/doc/src/sgml/ref/create_schema.sgml
@@ -22,6 +22,7 @@ PostgreSQL documentation
<refsynopsisdiv>
<synopsis>
CREATE SCHEMA <replaceable class="parameter">schema_name</replaceable> [ AUTHORIZATION <replaceable class="parameter">role_specification</replaceable> ] [ <replaceable class="parameter">schema_element</replaceable> [ ... ] ]
+CREATE SCHEMA <replaceable class="parameter">schema_name</replaceable> [ IF NOT EXISTS ] LIKE <replaceable class="parameter">source_schema</replaceable> [ <replaceable class="parameter">like_option</replaceable> [ ... ] ]
CREATE SCHEMA AUTHORIZATION <replaceable class="parameter">role_specification</replaceable> [ <replaceable class="parameter">schema_element</replaceable> [ ... ] ]
CREATE SCHEMA IF NOT EXISTS <replaceable class="parameter">schema_name</replaceable> [ AUTHORIZATION <replaceable class="parameter">role_specification</replaceable> ]
CREATE SCHEMA IF NOT EXISTS AUTHORIZATION <replaceable class="parameter">role_specification</replaceable>
@@ -32,6 +33,10 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION <replaceable class="parameter">role_sp
| CURRENT_ROLE
| CURRENT_USER
| SESSION_USER
+
+<phrase>and <replaceable class="parameter">like_option</replaceable> can be:</phrase>
+
+ { INCLUDING | EXCLUDING } { TABLE | INDEX | ALL }
</synopsis>
</refsynopsisdiv>
@@ -120,6 +125,87 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION <replaceable class="parameter">role_sp
</para>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><literal>LIKE <replaceable>source_schema</replaceable> [ <replaceable>like_option</replaceable> ... ]</literal></term>
+ <listitem>
+ <para>
+ The <literal>LIKE</literal> clause specifies a schema from which
the new
+ schema automatically copies the definitions of all supported objects
+ (such as tables and indexes).
+ </para>
+
+ <para>
+ The new schema and the original schema are completely decoupled
after
+ creation is complete. Changes to objects in the original schema will
+ not be applied to the new schema, and changes to the new schema will
+ not affect the original.
+ </para>
+
+ <para>
+ Only objects that are self-contained within the source schema or
+ reference other objects within that same schema are copied. Any
object
+ that references an object located in a different schema will be
+ skipped. For example, a foreign key referencing a table in a
+ different schema, or an index on a table located elsewhere, will
+ not be created in the new schema.
+ </para>
+
+ <para>
+ The optional <replaceable>like_option</replaceable> clauses specify
+ which kinds of objects from the original schema to copy. Specifying
+ <literal>INCLUDING</literal> copies objects of that kind, while
+ <literal>EXCLUDING</literal> omits them.
+ <literal>EXCLUDING</literal> is the default. If multiple
+ specifications are made for the same kind of object, the last one
+ is used. The available options are:
+ </para>
+
+ <variablelist>
+ <varlistentry>
+ <term><literal>INCLUDING TABLE</literal></term>
+ <listitem>
+ <para>
+ Definitions of tables, partitioned tables, and their
associated
+ sequences for identity columns are copied to the new schema.
+ Foreign keys are re-created in the new schema only if the
+ referenced table is also part of the source schema.
+ </para>
+ <para>
+ Partitioned tables are supported, provided that the parent
+ table and all of its partitions are defined within the source
+ schema. If a partition or its parent table resides in a
+ different schema, that specific object will be skipped.
+ </para>
+ <para>
+ Note that for columns of <type>serial</type> types, a new sequence
+ is not created for the copied table; the copied column's default
+ still points at the sequence owned by the original table, so both
+ tables draw values from the same sequence.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>INCLUDING INDEX</literal></term>
+ <listitem>
+ <para>
+ Indexes from the source schema will be created on the new
tables,
+ provided the index does not reference objects in a different
+ schema. This option only takes effect if
+ <literal>INCLUDING TABLE</literal> is also active. If tables
+ are excluded (either by default or explicitly), indexes will
+ not be created, as they have no parent object to attach to.
+ For example, <literal>CREATE SCHEMA foo LIKE bar INCLUDING
INDEX</literal>
+ or <literal>CREATE SCHEMA foo LIKE bar INCLUDING ALL
EXCLUDING TABLE</literal>
+ will result in no indexes being created.
+ </para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</refsect1>
@@ -128,8 +214,10 @@ CREATE SCHEMA IF NOT EXISTS AUTHORIZATION <replaceable class="parameter">role_sp
<para>
To create a schema, the invoking user must have the
- <literal>CREATE</literal> privilege for the current database.
- (Of course, superusers bypass this check.)
+ <literal>CREATE</literal> privilege for the current database. For
+ <literal>LIKE</literal>, the user must also have the
+ <literal>USAGE</literal> privilege on the source schema.
+ (Of course, superusers bypass these checks.)
</para>
</refsect1>
--
2.52.0
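For reviewers who want the intended behavior at a glance, here is a minimal usage sketch distilled from the regression tests above (schema and table names are illustrative, not taken from the patch, and this of course requires a server built with the series applied):

```sql
-- Source schema with an intra-schema foreign key.
CREATE SCHEMA app_src;
CREATE TABLE app_src.users (id int PRIMARY KEY);
CREATE TABLE app_src.orders (
    id int PRIMARY KEY,
    user_id int REFERENCES app_src.users(id) ON DELETE CASCADE
);

-- Copies tables, indexes, and foreign keys whose referenced table is
-- also in app_src; an FK pointing outside app_src would instead be
-- skipped with a WARNING.
CREATE SCHEMA app_copy LIKE app_src INCLUDING ALL;

-- Per the docs patch, a non-superuser needs CREATE on the database and
-- USAGE on the source schema to run the command above.
```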