Hi Álvaro and all,

On Mon, Jan 19, 2026 at 10:44 PM Tatsuro Yamada <[email protected]>
wrote:

> Sorry, I just realized that I accidentally added an unnecessary filter
> condition to one of the queries.
> Specifically, the extra condition was:
>     AND n.nspname LIKE 'many_cns'
>
> Because of this, I was not able to measure the two queries under identical
> conditions. I will rerun the benchmarks and share the updated results
> tomorrow.
>

I fixed the error in the query and measured the execution times again.
There was no significant difference in execution time between the two
queries (see the attached test_result2.txt).
Comparing the cost of the top node, the original query is slightly
cheaper, so I decided to keep it as is.


> # Summary
>> > The following changes are planned for the next patch:
>> >
>> > - Changed the query (using UNION ALL)
>> > - Changed the columns and their order (and the sort order accordingly):
>> > - Schema | Table | Type | Name | Definition
>> > - Toggle definition verbosity with the + option
>> > - Added a test case: \dCN cust*.order*
>> >
>> > The following items will not be included for now (as they are not
>> critical):
>> >
>> > - Option to switch sort order
>> >   (e.g., sort by constraint name vs. table name)
>> > - Improved command name checking logic
>>
>> Yeah, that sounds good to me.
>>
>
> Hopefully, I will also be able to submit a new patch along with the
> revised
> measurement results tomorrow.
>

The attached patch (v6) incorporates the changes listed above.
Note that, as mentioned above, the query itself was left unchanged.

Below is an example of the result set after the corrections
(from the regression test):

\dCN con_*
                   List of constraints
 Schema | Table |   Type    |            Name
--------+-------+-----------+----------------------------
 public | con_c | NOT NULL  | con_c_primary_col_not_null
 public | con_c | PK        | con_c_pkey
 public | con_p | CHECK     | con_p_check_col_check
 public | con_p | EXCLUSION | con_p_exclusion
 public | con_p | FK        | con_p_foreign_col_fkey
 public | con_p | NOT NULL  | con_p_notnull_col_not_null
 public | con_p | NOT NULL  | con_p_primary_col_not_null
 public | con_p | PK        | con_p_pkey
 public | con_p | TRIGGER   | con_p_trigger
 public | con_p | UNIQUE    | con_p_unique_col_key
(10 rows)

What do you think?
Please find the attached patch.

Regards,
Tatsuro Yamada
-- setup
---- create table constraints: 500000
CREATE OR REPLACE FUNCTION create_notnull_table(
    p_table_name text,
    p_col_count  int
)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    ddl text;
BEGIN
    SELECT
        format('CREATE TABLE %I (', p_table_name) || E'\n' ||
        string_agg(
            format('  c%s smallint not null', i),
            E',\n'
            ORDER BY i
        ) || E'\n);'
    INTO ddl
    FROM generate_series(1, p_col_count) AS i;

    EXECUTE ddl;
END;
$$;
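
As a side note, the DDL the function builds is straightforward; for a
small column count (table name 't' here is just illustrative, not part
of the benchmark), it would execute the following:

```sql
-- What create_notnull_table('t', 3) builds and executes:
-- one smallint column per generate_series value, each NOT NULL.
CREATE TABLE t (
  c1 smallint not null,
  c2 smallint not null,
  c3 smallint not null
);
```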

SELECT create_notnull_table('notnull_temp', 1000);

select 'create table notnull_table_' || i || ' (like notnull_temp including all);'
from generate_series(1, 500) i;
\gexec

drop table notnull_temp;

select count(*)
from   pg_catalog.pg_constraint AS cns2
       JOIN pg_catalog.pg_class c on c.oid = cns2.conrelid
       JOIN pg_catalog.pg_namespace n ON n.oid = cns2.connamespace
where conname like 'notnull%';

---- create domain constraints: 500000
SELECT 'CREATE DOMAIN hoge_domain_' || i || ' smallint NOT NULL CHECK (VALUE > 0 AND VALUE < 100);'
FROM generate_series(1, 250000) i;
\gexec
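
As a sanity check (a sketch, not part of the original script), the
domain-constraint count can be verified the same way as the table
constraints above; each domain carries one NOT NULL and one CHECK
constraint in pg_constraint, linked via contypid:

```sql
-- Expecting 500000 rows: 250000 domains x 2 constraints each
select count(*)
from   pg_catalog.pg_constraint AS cns
       JOIN pg_catalog.pg_type t ON t.oid = cns.contypid
where  t.typname like 'hoge_domain%';
```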

-- measurement
---- mine
explain analyze
SELECT n.nspname AS "Schema",
       c.relname AS "Table",
       cns.contype AS "Type",
       cns.conname AS "Name" ,
       pg_catalog.pg_get_constraintdef(cns.oid, true) AS "Definition"
FROM pg_catalog.pg_constraint cns
     JOIN pg_catalog.pg_namespace n ON n.oid = cns.connamespace
     LEFT JOIN pg_catalog.pg_type t ON t.oid = cns.contypid
     LEFT JOIN pg_catalog.pg_class c on c.oid = cns.conrelid
WHERE ((cns.conrelid <> 0 AND pg_catalog.pg_table_is_visible(cns.conrelid))
      OR (cns.contypid > 0 AND pg_catalog.pg_type_is_visible(t.oid)))
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
ORDER BY 1, 2, 4;

---- Alvaro suggested
explain analyze
SELECT n.nspname AS "Schema",
       cns.relname AS "Table",
       cns.contype AS "Type",
       cns.conname AS "Name" ,
       pg_catalog.pg_get_constraintdef(cns.oid, true) AS "Definition"
FROM ( select cns1.connamespace, NULL as relname, cns1.contype, cns1.conname, cns1.oid
       from   pg_catalog.pg_constraint AS cns1
              JOIN pg_catalog.pg_type t ON t.oid = cns1.contypid
       where  pg_catalog.pg_type_is_visible(t.oid)
       union all
       select cns2.connamespace, c.relname, cns2.contype, cns2.conname, cns2.oid
       from   pg_catalog.pg_constraint AS cns2
              JOIN pg_catalog.pg_class c on c.oid = cns2.conrelid
       where  pg_catalog.pg_table_is_visible(cns2.conrelid)
     ) AS cns
     JOIN pg_catalog.pg_namespace n ON n.oid = cns.connamespace
WHERE n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
ORDER BY 1, 2, 4 ;


-- new results
Mine:
 Execution Time: 12080.315 ms
 Execution Time: 12190.169 ms
 Execution Time: 11789.138 ms

Alvaro suggested:
 Execution Time: 12065.964 ms
 Execution Time: 12212.447 ms
 Execution Time: 12319.788 ms

-- compare the cost of the top node
Mine:
 Gather Merge  (cost=80535.98..90890.43 rows=88905 width=224) (actual time=11120.302..11472.557 rows=1000000.00 loops=1)

Alvaro suggested:
 Gather Merge  (cost=126600.94..127378.47 rows=6676 width=225) (actual time=11943.160..12302.426 rows=1000000.00 loops=1)

Attachment: v6-0001-Add-list-constraints-meta-command-dCN-on-psql.patch
