Add API and ABI stability guidance to the C language docs
Includes guidance for major and minor version releases, and sets
reasonable expectations for extension developers to follow.
Author: David Wheeler, Peter Eisentraut
Discussion:
https://www.postgresql.org/message-id/flat/5DA9F9D2-B8B2-43D
Doc: mention executor memory usage for enable_partitionwise* GUCs
Prior to this commit, the docs for enable_partitionwise_aggregate and
enable_partitionwise_join mentioned the additional overheads enabling
these causes for the query planner, but they mentioned nothing about the
possible surge in working memory used by the executor.
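To make the concern concrete, here is a minimal SQL sketch (the table, column, and partitioning scheme are hypothetical): with partitionwise aggregation enabled, each partition can get its own aggregation node at run time, and each such node may use up to work_mem, so the executor's total working memory can grow with the number of partitions.
    SET enable_partitionwise_join = on;
    SET enable_partitionwise_aggregate = on;
    -- "sales" is assumed to be partitioned by customer_id; with the settings
    -- above, each partition may run its own aggregate node, and each of those
    -- nodes can consume up to work_mem.
    SELECT customer_id, sum(amount)
    FROM sales
    GROUP BY customer_id;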
Improve performance of dumpSequence().
This function dumps the sequence definitions. It is called once
per sequence, and each such call executes a query to retrieve the
metadata for a single sequence. This can cause pg_dump to take
significantly longer, especially when there are many sequences.
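A rough sketch of the kind of per-sequence round trip described here (not pg_dump's literal query text; 'my_seq' is a hypothetical sequence name), reading the pg_sequence catalog one sequence at a time:
    SELECT format_type(seqtypid, NULL) AS data_type,
           seqstart, seqincrement, seqmax, seqmin, seqcache, seqcycle
    FROM pg_catalog.pg_sequence
    WHERE seqrelid = 'my_seq'::regclass;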
Introduce pg_sequence_read_tuple().
This new function returns the data for the given sequence, i.e.,
the values within the sequence tuple. Since this function is a
substitute for SELECT from the sequence, the SELECT privilege is
required on the sequence in question. It returns all NULLs for
sequences whose contents cannot be read, such as unlogged
sequences on standby servers.
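A hedged usage sketch (the sequence and role names are hypothetical, and the argument type and output column names are assumptions; the entry above only states that the function returns the values within the sequence tuple):
    -- Requires SELECT on the sequence, mirroring "SELECT * FROM my_seq".
    GRANT SELECT ON SEQUENCE my_seq TO dump_role;
    -- regclass argument assumed for illustration.
    SELECT * FROM pg_sequence_read_tuple('my_seq'::regclass);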
Parse sequence type and integer metadata in dumpSequence().
This commit modifies dumpSequence() to parse all the sequence
metadata into the appropriate types instead of carting around
string pointers to the PGresult data. Among other things, this
allows us to free the PGresult storage earlier in the function.
Improve performance of dumpSequenceData().
As one might guess, this function dumps the sequence data. It is
called once per sequence, and each such call executes a query to
retrieve the relevant data for a single sequence. This can cause
pg_dump to take significantly longer, especially when there are
many sequences.
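For sequence data, the per-sequence round trip is essentially a SELECT from the sequence relation itself; a sketch ('my_seq' is a hypothetical name), issued once per sequence:
    SELECT last_value, is_called FROM my_seq;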
Allow parallel workers to cope with a newly-created session user ID.
Parallel workers failed after a sequence like
BEGIN;
CREATE USER foo;
SET SESSION AUTHORIZATION foo;
because check_session_authorization could not see the uncommitted
pg_authid row for "foo". This is because the worker restored its GUC
state using a snapshot that could not see the leader's uncommitted
catalog changes.
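A minimal sketch of how to exercise the failing path: continue the quoted sequence with a query forced through a parallel worker. This assumes a release that has the debug_parallel_query setting; the table chosen is arbitrary.
    BEGIN;
    CREATE USER foo;
    SET SESSION AUTHORIZATION foo;
    -- Force even a trivial query under a Gather node so a parallel worker
    -- must reproduce the uncommitted session-authorization state.
    SET debug_parallel_query = on;
    SELECT count(*) FROM pg_class;
    COMMIT;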
Remove unused ParamListInfo argument from ExecRefreshMatView.
Author: Yugo Nagata
Discussion:
https://postgr.es/m/20240726122630.70e889f63a4d7e26f8549...@sraoss.co.jp
Branch: master
Details: https://git.postgresql.org/pg/commitdiff/f683d3a4ca6dc441a86ed90070f126c20ea46b45
Add is_create parameter to RefreshMatviewByOid().
RefreshMatviewByOid is used for both REFRESH and CREATE MATERIALIZED
VIEW. This flag is currently just used for handling internal error
messages, but it is also intended to improve code readability.
Author: Yugo Nagata
Discussion:
https://postgr.es/m/202
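For context, both of the statements below reach the same refresh code path; per the description above, the new flag records which of the two the caller is ('mv' is a hypothetical name):
    CREATE MATERIALIZED VIEW mv AS SELECT 1 AS x WITH DATA;  -- is_create = true
    REFRESH MATERIALIZED VIEW mv;                            -- is_create = false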
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit 48536305370acd75c6264fcefec7fae7af8c5440.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at being invoked while a parallel
operation is in progress.
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit 68855c03878c0c90227e24533ca40127da3578cd.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at be
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit 216201027d90e99a0a2b2d2efba85dc0aac94c62.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at be
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit 5887dd4894db5ac1c6411615160555ac6e57e49b.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at be
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit 849326e49a5dd56941eb8fb4699130c301bff303.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at be
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit f5f30c22ed69fb37b896c4d4546b2ab823c3fd61.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at be
Revert "Allow parallel workers to cope with a newly-created session user ID."
This reverts commit 97380d4803d1f9188a1436c6fe7ecd7db285c55c.
Some buildfarm animals are failing with "cannot change
"client_encoding" during a parallel operation". It looks like
assign_client_encoding is unhappy at be
Evaluate arguments of correlated SubPlans in the referencing ExprState
Until now we generated an ExprState for each parameter to a SubPlan and
evaluated them one-by-one in ExecScanSubPlan. That's sub-optimal, as creating lots
of small ExprStates
a) makes JIT compilation more expensive
b) wastes memory
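An example of the kind of query involved, as a sketch with hypothetical tables: the correlated scalar subquery below becomes a SubPlan, and its parameter (the outer o.customer_id) must be evaluated in the outer row's context before each execution of the subplan; those per-parameter evaluations are what this change folds into the referencing ExprState.
    SELECT o.id,
           (SELECT max(p.amount)
            FROM payments p
            WHERE p.customer_id = o.customer_id) AS max_payment
    FROM orders o;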
Avoid duplicate table scans for cross-partition updates during logical
replication.
When the apply worker performs a cross-partition update, it
needlessly scans the old partition twice, resulting in noticeable
overhead.
This commit optimizes it by removing the redundant table scan.
Author:
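A sketch of the scenario being optimized (all names are hypothetical): a partitioned table replicated via the partition root, where an UPDATE on the publisher changes the partition key, so the subscriber's apply worker must move the row from the old partition to the new one.
    -- On both publisher and subscriber:
    CREATE TABLE t (id int, region text, PRIMARY KEY (id, region))
        PARTITION BY LIST (region);
    CREATE TABLE t_eu PARTITION OF t FOR VALUES IN ('eu');
    CREATE TABLE t_us PARTITION OF t FOR VALUES IN ('us');

    -- Publisher: publish changes using the root table's identity so the
    -- subscriber applies them to the partitioned table itself.
    CREATE PUBLICATION pub FOR TABLE t WITH (publish_via_partition_root = true);
    INSERT INTO t VALUES (1, 'eu');

    -- Subscriber (connection string is illustrative):
    CREATE SUBSCRIPTION sub CONNECTION 'host=pubhost dbname=pubdb' PUBLICATION pub;

    -- Publisher: changing the partition key makes the apply worker perform a
    -- cross-partition update on the subscriber (delete from t_eu, insert
    -- into t_us), the code path whose redundant scan is removed here.
    UPDATE t SET region = 'us' WHERE id = 1;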