On Sat, Sep 6, 2025 at 10:33 AM Dilip Kumar wrote:
> On Wed, Aug 13, 2025 at 4:17 PM Zhijie Hou (Fujitsu)
> wrote:
> >
> > Here is the initial POC patch for this idea.
> >
> >
> > If no parallel apply worker is available, the leader will apply the
potential issues like [0] - seems like it
is unaffected, because parallel apply workers sync their concurrent
updates and wait for each other to commit.
[0]:
https://www.postgresql.org/message-id/flat/CADzfLwWC49oanFSGPTf%3D6FJoTw-kAnpPZV8nVqAyR5KL68LrHQ%40mail.gmail.com#5f6b3be849f8d95c166decfa
led
> further down.
Thanks for the patch.
> Each parallel apply worker records the local end LSN of the transaction it
> applies in shared memory. Subsequently, the leader gathers these local end
> LSNs
> and logs them in the local 'lsn_mapping' for verifying whether they have
> been flushed to disk.
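For reference, this mirrors what the existing parallel apply code already does
for streamed transactions. A minimal sketch of the hand-off, with field names
taken from the existing ParallelApplyWorkerShared struct (the POC code may
differ in details):

    /* Worker side, after committing the transaction it applied. */
    SpinLockAcquire(&MyParallelShared->mutex);
    MyParallelShared->last_commit_end = XactLastCommitEnd;
    SpinLockRelease(&MyParallelShared->mutex);

    /*
     * Leader side, when collecting the result for this transaction;
     * remote_end is the commit end LSN received from the publisher, and
     * store_flush_position() is what records the pair in lsn_mapping.
     */
    SpinLockAcquire(&winfo->shared->mutex);
    local_end = winfo->shared->last_commit_end;
    SpinLockRelease(&winfo->shared->mutex);
    store_flush_position(remote_end, local_end);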
On Fri, Sep 5, 2025 at 2:59 PM Dilip Kumar wrote:
>
> On Mon, Aug 11, 2025 at 10:16 AM Amit Kapila wrote:
> >
>
> +1 for the idea. I see we already have parallel apply workers
> for large streaming transactions, so I am trying to think about what
> additional problems we need to solve here.
On Fri, Sep 5, 2025 at 5:15 PM Mihail Nikalayeu
wrote:
>
> Hello, Amit!
>
> Amit Kapila :
> > So, in such cases as we won't be able to detect
> > transaction dependencies, it would be better to allow out-of-order
> > commits optionally.
>
> I think it is better to enable preserve order by default
> becomes a bottleneck due to the single apply worker model. While users can
> mitigate this by creating multiple publication-subscription pairs,
> this approach has scalability and usability limitations.
>
> Currently, PostgreSQL supports parallel apply only for large streaming
> transactions
Hi,
I ran tests to compare the performance of logical synchronous
replication with parallel-apply against physical synchronous
replication.
Highlights
===
On pgHead (current behavior):
- With synchronous physical replication set to remote_apply, the
primary's TPS drops by ~60% (≈2.5x reduction).
leader_applied_nxact = 423621 (1.71%)
> >
> > case4: #parallel_workers = 16
> > #total_pgbench_txns = 24938255
> > parallelized_nxact = 24937754 (99.99%)
> > dependent_nxact = 142 (0.0005%)
> > leader_applied_nxact = 360 (0.0014%)
> >
I also did some benchmarking of the proposed parallel apply patch and
compared it with my prewarming approach.
Parallel apply is significantly more efficient than prefetch (as expected).
I ran two tests (more details here):
https://www.postgresql.org/message-id/flat/84ed36b8-7d06-4
On Wed, Aug 13, 2025 at 4:17 PM Zhijie Hou (Fujitsu)
wrote:
>
> Here is the initial POC patch for this idea.
>
Thank you Hou-san for the patch.
I did some performance benchmarking for the patch and, overall, the
results show substantial performance improvements.
Please find the details as follows:
On Wed, Aug 13, 2025 at 8:57 PM Bruce Momjian wrote:
>
> On Wed, Aug 13, 2025 at 09:50:27AM +0530, Amit Kapila wrote:
> > On Tue, Aug 12, 2025 at 10:40 PM Bruce Momjian wrote:
> > > > Currently, PostgreSQL supports parallel apply only for large streaming
> > >
On Wed, Aug 13, 2025 at 09:50:27AM +0530, Amit Kapila wrote:
> On Tue, Aug 12, 2025 at 10:40 PM Bruce Momjian wrote:
> > > Currently, PostgreSQL supports parallel apply only for large streaming
> > > transactions (streaming=parallel). This proposal aims to extend
>
to the single apply worker model. While users can
> mitigate this by creating multiple publication-subscription pairs,
> this approach has scalability and usability limitations.
>
> Currently, PostgreSQL supports parallel apply only for large streaming
> transactions (streaming=para
> or you assume that streaming mode will be used (now it is possible to enforce
> parallel apply of short transactions using
> `debug_logical_replication_streaming`)?
>
The current proposal is based on reorderbuffer serializing
transactions as we are doing now.
> It seems to be sensel
s generate data
> > on the publisher, the subscriber's apply process often becomes a
> > bottleneck due to the single apply worker model. While users can
> > mitigate this by creating multiple publication-subscription pairs,
> > this approach has scalability and usabi
> becomes a bottleneck due to the single apply worker model. While users can
> mitigate this by creating multiple publication-subscription pairs,
> this approach has scalability and usability limitations.
>
> Currently, PostgreSQL supports parallel apply only for large streaming
> transac
model. While users can
mitigate this by creating multiple publication-subscription pairs,
this approach has scalability and usability limitations.
Currently, PostgreSQL supports parallel apply only for large streaming
transactions (streaming=parallel). This proposal aims to extend
parallelism t
On Mon, Aug 11, 2025 at 3:00 PM Kirill Reshke wrote:
>
> On Mon, 11 Aug 2025 at 13:45, Amit Kapila wrote:
> >
> > I am not sure if that is directly applicable because this work
> > proposes to track dependencies based on logical WAL contents. However,
> > if you can point me to README on the over
e next few days.
For the dependent transactions workload, if we choose to go with the
deadlock detection approach, there will be a lot of retries, which may
not lead to good apply improvements. Also, we may choose to enable
this form of parallel-apply optionally due to reasons mentioned in my
first email.
On 11/8/2025 06:45, Amit Kapila wrote:
The core idea is that the leader apply worker ensures the following:
a. Identifies dependencies between transactions.
b. Coordinates parallel workers to apply independent transactions concurrently.
c. Ensures correct ordering for dependent transactions.
Depe
On Mon, 11 Aug 2025 at 13:45, Amit Kapila wrote:
>
>
> I am not sure if that is directly applicable because this work
> proposes to track dependencies based on logical WAL contents. However,
> if you can point me to README on the overall design of the work you
> are pointing to then I can check i
On Mon, Aug 11, 2025 at 1:39 PM Kirill Reshke wrote:
>
>
> > Design Overview
> >
> > To safely parallelize non-streaming transactions, we must ensure that
> > transaction dependencies are respected to avoid failures and
> > deadlocks. Consider the following scenarios to un
> becomes a bottleneck due to the single apply worker model. While users can
> mitigate this by creating multiple publication-subscription pairs,
> this approach has scalability and usability limitations.
>
> Currently, PostgreSQL supports parallel apply only for large streaming
> transacti
creating multiple publication-subscription pairs,
this approach has scalability and usability limitations.
Currently, PostgreSQL supports parallel apply only for large streaming
transactions (streaming=parallel). This proposal aims to extend
parallelism to non-streaming transactions, thereby impr
On Wed, 11 Oct 2023 at 19:54, Zhijie Hou (Fujitsu)
wrote:
> The parallel apply worker didn't add null termination to the string received
> from the leader apply worker via the shared memory queue. This action doesn't
> bring bugs as it's binary data but violates
conclusion that
> it'd be
> safe to use such a convention in apply workers. Aren't the things being
> passed
> around here usually text strings?
I think the data passed to parallel apply worker is of mixed types. If we see
the data reading logic for it like logicalrep_read_attrs(
On Thu, 12 Oct 2023 at 05:04, Tom Lane wrote:
>
> Alvaro Herrera writes:
> > I was thinking about this when skimming the other StringInfo thread a
> > couple of days ago. I wondered if it wouldn't be more convenient to
> > change the convention that all StringInfos are null-terminated: what is
>
Alvaro Herrera writes:
> I was thinking about this when skimming the other StringInfo thread a
> couple of days ago. I wondered if it wouldn't be more convenient to
> change the convention that all StringInfos are null-terminated: what is
> really the reason to have them all be like that?
It mak
On 2023-Oct-11, Amit Kapila wrote:
> Yeah, it may not be a good idea to modify the buffer pointing to
> shared memory without any lock as we haven't reserved that part of
> memory. So, we can't follow the trick used in exec_bind_message() to
> maintain the convention that StringInfos have a trailing null.
On Wed, Oct 11, 2023 at 12:18 PM Zhijie Hou (Fujitsu)
wrote:
>
> The parallel apply worker didn't add null termination to the string received
> from the leader apply worker via the shared memory queue. This action doesn't
> bring bugs as it's binary data but viola
Hi Hou-san.
+ /*
+ * Note that the data received via the shared memory queue is not
+ * null-terminated. So we use the StringInfo API to store the
+ * string so as to maintain the convention that StringInfos has a
+ * trailing null.
+ */
"... that StringInfos has a trailing null."
Probably should be "... that StringInfos have a trailing null."
Hi,
The parallel apply worker didn't add null termination to the string received
from the leader apply worker via the shared memory queue. This action doesn't
bring bugs as it's binary data but violates the rule established in StringInfo,
which guarantees the presence of a terminating null byte.
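For readers following along, the convention the patch maintains is the standard
StringInfo one; a minimal sketch of the pattern described in the quoted comment
(variable names are illustrative, not taken from the patch):

    #include "lib/stringinfo.h"

    /*
     * Copy the bytes received from the shared memory queue into a
     * StringInfo, so that the usual trailing-'\0' invariant holds before
     * the buffer is handed to the message-parsing code.  Here "data" and
     * "len" stand for the buffer and length returned by shm_mq_receive().
     */
    StringInfoData s;

    initStringInfo(&s);
    appendBinaryStringInfo(&s, data, len);  /* also keeps s.data[s.len] == '\0' */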
> > Because another place already detached from the queue before stopping
> > the parallel apply workers. So, I combined both the patches and
> > changed a few comments and a commit message. Let me know what you
> > think of the attached.
>
> I have one comment on the de
re
> > in
> > regression tests after the registration reorder patch because the dsm is
> > detached earlier after applying the patch.
> >
>
> I think it is only possible for the leader apply worker to try to
> receive the error message from an error queue after your 00
> > > >
> > > > > > While investigating this issue, I've reviewed the code around
> > > > > > callbacks and worker termination etc and I found a problem.
> > > > > >
> > > > > > A parallel apply worker calls th
for the leader apply worker to try to
receive the error message from an error queue after your 0002 patch.
Because another place already detached from the queue before stopping
the parallel apply workers. So, I combined both the patches and
changed a few comments and a commit message. Let me know what you think of the attached.
Amit Kapila
> > > wrote:
> > > >
> > > > On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada
> > > wrote:
> > > > >
> > > > > While investigating this issue, I've reviewed the code around
> > > > >
ada
> > wrote:
> > > >
> > > > While investigating this issue, I've reviewed the code around
> > > > callbacks and worker termination etc and I found a problem.
> > > >
> > > > A parallel apply worker calls the before_shme
ound
> > > callbacks and worker termination etc and I found a problem.
> > >
> > > A parallel apply worker calls the before_shmem_exit callbacks in the
> > > following order:
> > >
> > > 1. ShutdownPostgres()
> > > 2. logicalrep_w
On Tue, May 2, 2023 at 12:22 PM Amit Kapila wrote:
>
> On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada
> wrote:
> >
> > While investigating this issue, I've reviewed the code around
> > callbacks and worker termination etc and I found a problem.
> >
On Wednesday, May 3, 2023 3:17 PM Amit Kapila wrote:
>
> On Tue, May 2, 2023 at 9:46 AM Amit Kapila
> wrote:
> >
> > On Tue, May 2, 2023 at 9:06 AM Zhijie Hou (Fujitsu)
> > wrote:
> > >
> > > On Friday, April 28, 2023 2:18 PM Masahiko Sawada
> wrote:
> > > >
> > > > >
> > > > > Alexander, does
On Tue, May 2, 2023 at 9:46 AM Amit Kapila wrote:
>
> On Tue, May 2, 2023 at 9:06 AM Zhijie Hou (Fujitsu)
> wrote:
> >
> > On Friday, April 28, 2023 2:18 PM Masahiko Sawada
> > wrote:
> > >
> > > >
> > > > Alexander, does the proposed patch fix the problem you are facing?
> > > > Sawada-San, an
On Tue, May 2, 2023 at 9:06 AM Zhijie Hou (Fujitsu)
wrote:
>
> On Friday, April 28, 2023 2:18 PM Masahiko Sawada
> wrote:
> >
> > >
> > > Alexander, does the proposed patch fix the problem you are facing?
> > > Sawada-San, and others, do you see any better way to fix it than what
> > > has been
the
> database.
Thanks for the review. I agree that it’s better to use a new variable here.
Attach the patch for the same.
>
> FWIW, we might need to be careful about the timing when we call
> logicalrep_worker_detach() in the worker's termination process.
On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada wrote:
>
> While investigating this issue, I've reviewed the code around
> callbacks and worker termination etc and I found a problem.
>
> A parallel apply worker calls the before_shmem_exit callbacks in the
>
On Fri, Apr 28, 2023 at 6:01 PM Amit Kapila wrote:
>
> On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada
> wrote:
> >
> > On Fri, Apr 28, 2023 at 11:51 AM Amit Kapila
> > wrote:
> > >
> > > On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)
> > > wrote:
> > > >
> > > > On Wednesday, April 26
On Fri, Apr 28, 2023 at 11:48 AM Masahiko Sawada wrote:
>
> On Fri, Apr 28, 2023 at 11:51 AM Amit Kapila wrote:
> >
> > On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)
> > wrote:
> > >
> > > On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin
> > > wrote:
> > > >
> > > > IIUC, that asse
, fileset deletion and lock release)
in a separate callback that is registered after connecting to the
database.
While investigating this issue, I've reviewed the code around
callbacks and worker termination etc and I found a problem.
A parallel apply worker calls the before_shmem_exit callback
Hello Amit and Zhijie,
28.04.2023 05:51, Amit Kapila wrote:
On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)
wrote:
I think the problem is that it tried to release locks in
logicalrep_worker_onexit() before the initialization of the process is complete
because this callback function was re
On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)
wrote:
>
> On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin
> wrote:
> >
> > IIUC, that assert will fail in case of any error raised between
> > ApplyWorkerMain()->logicalrep_worker_attach()->before_shmem_exit() and
> > ApplyWorkerMain()-
On Wed, Apr 26, 2023 at 4:11 PM Zhijie Hou (Fujitsu)
wrote:
>
> On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin
> wrote:
>
> Thanks for reporting the issue.
>
> I think the problem is that it tried to release locks in
> logicalrep_worker_onexit() before the initialization of the process is
On Wednesday, April 26, 2023 5:00 PM Alexander Lakhin
wrote:
> Please look at a new anomaly that can be observed starting from 216a7848.
>
> The following script:
> echo "CREATE SUBSCRIPTION testsub CONNECTION 'dbname=nodb'
> PUBLICATION testpub WITH (connect = false);
> ALTER SUBSCRIPTION tests
Hello hackers,
Please look at a new anomaly that can be observed starting from 216a7848.
The following script:
echo "CREATE SUBSCRIPTION testsub CONNECTION 'dbname=nodb' PUBLICATION testpub
WITH (connect = false);
ALTER SUBSCRIPTION testsub ENABLE;" | psql
sleep 1
rm $PGINST/lib/libpqwalreceiv
On Mon, Apr 24, 2023 at 2:24 PM Amit Kapila wrote:
>
> On Mon, Apr 24, 2023 at 7:26 AM Masahiko Sawada wrote:
> >
> > While looking at the worker.c, I realized that we have the following
> > code in handle_streamed_transaction():
> >
> > default:
> > Assert(false);
> >
On Mon, Apr 24, 2023 at 7:26 AM Masahiko Sawada wrote:
>
> While looking at the worker.c, I realized that we have the following
> code in handle_streamed_transaction():
>
> default:
> Assert(false);
> return false; /* silence compiler warning */
>
> I think it's
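To make the suggestion concrete, the kind of change being discussed would look
roughly like this (a sketch only; the error message wording is illustrative,
not a proposal):

    /* In handle_streamed_transaction(), instead of the Assert: */
    default:
        elog(ERROR, "unexpected streamed transaction action %d", action);
        return false;       /* not reached; keeps the compiler quiet */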
At Mon, 24 Apr 2023 08:59:07 +0530, Amit Kapila wrote
in
> > Sorry for posting multiple times in a row, but I'm a bit uncertain
> > whether we should use FATAL or ERROR for this situation. The stream is
> > not provided by the user, and the session or process cannot continue.
> >
>
> I think ERROR
On Mon, Apr 24, 2023 at 8:40 AM Kyotaro Horiguchi
wrote:
>
> At Mon, 24 Apr 2023 11:50:37 +0900 (JST), Kyotaro Horiguchi
> wrote in
> > In my opinion, it is fine to replace the Assert with an ERROR.
>
> Sorry for posting multiple times in a row, but I'm a bit uncertain
> whether we should use FATAL or ERROR for this situation.
At Mon, 24 Apr 2023 11:50:37 +0900 (JST), Kyotaro Horiguchi
wrote in
> In my opinion, it is fine to replace the Assert with an ERROR.
Sorry for posting multiple times in a row, but I'm a bit uncertain
whether we should use FATAL or ERROR for this situation. The stream is
not provided by the user, and the session or process cannot continue.
At Mon, 24 Apr 2023 11:50:37 +0900 (JST), Kyotaro Horiguchi
wrote in
> I concur that returning false is problematic.
>
> For assertion builds, Assert typically provides more detailed
> information than elog. However, in this case, it wouldn't matter much
> since the worker would repeatedly restart.
At Mon, 24 Apr 2023 10:55:44 +0900, Masahiko Sawada
wrote in
> While looking at the worker.c, I realized that we have the following
> code in handle_streamed_transaction():
>
> default:
> Assert(false);
> return false; /* silence compiler warning */
>
> I th
On Mon, Jan 9, 2023 at 5:51 PM Amit Kapila wrote:
>
> On Sun, Jan 8, 2023 at 11:32 AM houzj.f...@fujitsu.com
> wrote:
> >
> > On Sunday, January 8, 2023 11:59 AM houzj.f...@fujitsu.com
> > wrote:
> > > Attach the updated patch set.
> >
> > Sorry, the commit message of 0001 was accidentally deleted.
LGTM. My only comment is about the commit message.
==
Commit message
d9d7fe6 reuse existing wait event when sending data in apply worker. But we
should have invent a new wait state if we are waiting at a new place, so fix
this.
~
SUGGESTION
d9d7fe6 made use of an existing wait event when se
On Wed, Feb 15, 2023 at 8:55 AM houzj.f...@fujitsu.com
wrote:
>
> On Wednesday, February 15, 2023 10:34 AM Amit Kapila
> wrote:
> >
> > > >
> > > > So names like the below seem correct format:
> > > >
> > > > a) WAIT_EVENT_LOGICAL_APPLY_SEND_DATA
> > > > b) WAIT_EVENT_LOGICAL_LEADER_SEND_DATA
>
> > > >
> > > > On Fri, Feb 10, 2023 at 8:56 AM Peter Smith
> wrote:
> > > > >
> > > > > My first impression was the
> > > > > WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed
> > > > > mis
> > >
> > > > My first impression was the
> > > > WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading
> > > > because that makes it sound like the parallel apply worker is doing
> > > > the sending, but IIUC it's really the opposite.
> >
distinguish and understand. Here is a tiny patch for
> > > > > > that.
> > > > > >
> > > >
> > > > As discussed[1], we'd better invent a new state for this purpose, so
> > > > here is the patch
> > > > that does the same.
> >
e for this purpose, so here
> > > is the patch
> > > that does the same.
> > >
> > > [1]
> > > https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com
> > >
> >
> > My first impression was the
> > WAIT_EVENT_LOGICAL_
for that.
> > > >
> >
> > As discussed[1], we'd better invent a new state for this purpose, so here
> > is the patch
> > that does the same.
> >
> > [1]
> > https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCv
> that does the same.
>
> [1]
> https://www.postgresql.org/message-id/CAA4eK1LTud4FLRbS0QqdZ-pjSxwfFLHC1Dx%3D6Q7nyROCvvPSfw%40mail.gmail.com
>
My first impression was the
WAIT_EVENT_LOGICAL_PARALLEL_APPLY_SEND_DATA name seemed misleading
because that makes it sound like the parallel apply worker is doing
the sending, but IIUC it's really the opposite.
On Tuesday, February 7, 2023 11:17 AM Amit Kapila
wrote:
>
> On Mon, Feb 6, 2023 at 3:43 PM houzj.f...@fujitsu.com
> wrote:
> >
> > while reading the code, I noticed that in pa_send_data() we set wait
> > event to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while
> sending
> > the message to
On Tue, Feb 7, 2023 15:37 PM Amit Kapila wrote:
> On Tue, Feb 7, 2023 at 12:41 PM Masahiko Sawada
> wrote:
> >
> > On Fri, Feb 3, 2023 at 6:44 PM Amit Kapila wrote:
> >
> > > We need to think of a predictable
> > > way to test this path which may not be difficult. But I guess it would
> > > be b
On Tue, Feb 7, 2023 at 12:41 PM Masahiko Sawada wrote:
>
> On Fri, Feb 3, 2023 at 6:44 PM Amit Kapila wrote:
>
> > We need to think of a predictable
> > way to test this path which may not be difficult. But I guess it would
> > be better to wait for some feedback from the field about this feature
On Fri, Feb 3, 2023 at 6:44 PM Amit Kapila wrote:
>
> On Fri, Feb 3, 2023 at 1:28 PM Masahiko Sawada wrote:
> >
> > On Fri, Feb 3, 2023 at 12:29 PM houzj.f...@fujitsu.com
> > wrote:
> > >
> > > On Friday, February 3, 2023 11:04 AM Amit Kapila
> > > wrote:
> > > >
> > > > On Thu, Feb 2, 2023 at
On Mon, Feb 6, 2023 at 3:43 PM houzj.f...@fujitsu.com
wrote:
>
> while reading the code, I noticed that in pa_send_data() we set wait event
> to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while sending the
> message to the queue. Because this state is used in multiple places, user
> might not be able to distinguish what they are waiting for.
> WAIT_EVENT_LOGICAL_APPLY_MAIN, which is used in the main
> loop of the leader apply worker (LogicalRepApplyLoop). But the event in
> pa_send_data() is only for message send, so it seems fine to use
> WAIT_EVENT_MQ_SEND; besides, MQ_SEND is also unique in the parallel apply
> worker and the user can distinguish without adding a new event.
But the event in
pa_send_data() is only for message send, so it seems fine to use
WAIT_EVENT_MQ_SEND; besides, MQ_SEND is also unique in the parallel apply worker and
the user can distinguish without adding a new event.
Best Regards,
Hou zj
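To make the alternatives concrete, here is a rough sketch of the send loop in
pa_send_data() reporting a dedicated wait event while it waits for queue space
(the event name follows the WAIT_EVENT_LOGICAL_APPLY_SEND_DATA option discussed
elsewhere in the thread; the retry interval and error wording are illustrative):

    for (;;)
    {
        shm_mq_result result;

        /* Non-blocking send of the change data to the parallel worker. */
        result = shm_mq_send(winfo->mq_handle, nbytes, data, true, true);

        if (result == SHM_MQ_SUCCESS)
            return true;
        else if (result == SHM_MQ_DETACHED)
            ereport(ERROR,
                    (errcode(ERRCODE_CONNECTION_FAILURE),
                     errmsg("could not send data to shared-memory queue")));

        /* Queue is full: wait a little and retry, showing a send-specific event. */
        (void) WaitLatch(MyLatch,
                         WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                         10L,   /* ms */
                         WAIT_EVENT_LOGICAL_APPLY_SEND_DATA);
        ResetLatch(MyLatch);
        CHECK_FOR_INTERRUPTS();
    }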
Dear Hou,
> while reading the code, I noticed that in pa_send_data() we set wait event
> to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while sending
> the
> message to the queue. Because this state is used in multiple places, user
> might
> not be able to distinguish what they are waiting for
Hi,
while reading the code, I noticed that in pa_send_data() we set wait event
to WAIT_EVENT_LOGICAL_PARALLEL_APPLY_STATE_CHANGE while sending the
message to the queue. Because this state is used in multiple places, the user might
not be able to distinguish what they are waiting for. So it seems we'd
On Fri, Feb 3, 2023 at 1:28 PM Masahiko Sawada wrote:
>
> On Fri, Feb 3, 2023 at 12:29 PM houzj.f...@fujitsu.com
> wrote:
> >
> > On Friday, February 3, 2023 11:04 AM Amit Kapila
> > wrote:
> > >
> > > On Thu, Feb 2, 2023 at 4:52 AM Peter Smith
> > > wrote:
> > > >
> > > > Some minor review co
On Fri, Feb 3, 2023 at 12:29 PM houzj.f...@fujitsu.com
wrote:
>
> On Friday, February 3, 2023 11:04 AM Amit Kapila
> wrote:
> >
> > On Thu, Feb 2, 2023 at 4:52 AM Peter Smith
> > wrote:
> > >
> > > Some minor review comments for v91-0001
> > >
> >
> > Pushed this yesterday after addressing your
On Friday, February 3, 2023 11:04 AM Amit Kapila
wrote:
>
> On Thu, Feb 2, 2023 at 4:52 AM Peter Smith
> wrote:
> >
> > Some minor review comments for v91-0001
> >
>
> Pushed this yesterday after addressing your comments!
Thanks for pushing.
Currently, we have two remaining patches which we
On Thu, Feb 2, 2023 at 4:52 AM Peter Smith wrote:
>
> Some minor review comments for v91-0001
>
Pushed this yesterday after addressing your comments!
--
With Regards,
Amit Kapila.
can be used to direct the leader apply worker to send changes to the
+shared memory queue or to serialize changes to the file. When set to
+buffered, the leader sends changes to parallel apply
+workers via a shared memory queue. When set to
+immediate, the leader s
On Tue, Jan 31, 2023 at 9:04 AM houzj.f...@fujitsu.com
wrote:
>
> I think your comment makes sense, thanks.
> I updated the patch for the same.
>
The patch looks mostly good to me. I have made a few changes in the
comments and docs, see attached.
--
With Regards,
Amit Kapila.
v91-0001-Allow-t
Thanks for the updates to address all of my previous review comments.
Patch v90-0001 LGTM.
--
Kind Regards,
Peter Smith.
Fujitsu Australia
On Tuesday, January 31, 2023 8:23 AM Peter Smith wrote:
>
> On Mon, Jan 30, 2023 at 5:23 PM houzj.f...@fujitsu.com
> wrote:
> >
> > On Monday, January 30, 2023 12:13 PM Peter Smith
> wrote:
> > >
> > > Here are my review comments for v88-0002.
> >
> > Thanks for your comments.
> >
> > >
> > > =
On Monday, January 30, 2023 10:20 PM Masahiko Sawada
wrote:
>
>
> I have one comment on v89 patch:
>
> + /*
> +* Using 'immediate' mode returns false to cause a switch to
> +* PARTIAL_SERIALIZE mode so that the remaining changes will
> be serialized.
> +*/
> +
On Mon, Jan 30, 2023 at 5:23 PM houzj.f...@fujitsu.com
wrote:
>
> On Monday, January 30, 2023 12:13 PM Peter Smith
> wrote:
> >
> > Here are my review comments for v88-0002.
>
> Thanks for your comments.
>
> >
> > ==
> > General
> >
> > 1.
> > The test cases are checking the log content but
"On the publisher, it allows streaming or serializing each
> > change in logical decoding."),
> > + gettext_noop("Controls the internal behavior of logical replication
> > publisher and subscriber"),
> > + gettext_noop("On the publisher, it allows s
Dear Hou,
Thank you for updating the patch!
I checked your replies and new patch, and it seems good.
Currently I have no comments
Best Regards,
Hayato Kuroda
FUJITSU LIMITED
streaming or serializing changes immediately in logical
> decoding.
> ```
>
> Typo "allows allows" -> "allows"
Fixed.
> 3. test general
>
> You confirmed that the leader started to serialize changes, but did not ensure
> the endpoint.
> IIUC the parall
ch
> change in logical decoding."),
> + gettext_noop("Controls the internal behavior of logical replication
> publisher and subscriber"),
> + gettext_noop("On the publisher, it allows streaming or "
> + "serializing each change in logical
On Mon, Jan 30, 2023 at 5:40 AM Peter Smith wrote:
>
> Patch v88-0001 LGTM.
>
Pushed.
--
With Regards,
Amit Kapila.
serializing each change in logical decoding. On the "
+ "subscriber, in parallel streaming mode, it allows "
+ "the leader apply worker to serialize changes to "
+ "files and notifies the parallel apply workers to "
+ "read and apply them at the end of the tran
Patch v88-0001 LGTM.
Below are just some minor review comments about the commit message.
==
Commit message
1.
We have discussed having this parameter as a subscription option but
exposing a parameter that is primarily used for testing/debugging to users
didn't seem advisable and there is no
On Wed, Jan 25, 2023 at 3:27 PM Amit Kapila wrote:
>
> On Wed, Jan 25, 2023 at 10:05 AM Amit Kapila wrote:
> >
> > On Wed, Jan 25, 2023 at 3:15 AM Peter Smith wrote:
> > >
> > > 1.
> > > @@ -210,7 +210,7 @@ int logical_decoding_work_mem;
> > > static const Size max_changes_in_memory = 4096; /*
+allows streaming or serializing changes immediately in logical
decoding.
```
Typo "allows allows" -> "allows"
3. test general
You confirmed that the leader started to serialize changes, but did not ensure
the endpoint.
IIUC the parallel apply worker exits af
on is set
> + to parallel, this parameter also allows the leader
> +apply worker to send changes to the shared memory queue or to
> serialize
> + changes. When set to buffered, the leader sends
> +changes to parallel apply workers via shared memory queue.
Dear Amit,
>
> I have updated the patch accordingly and it looks good to me. I'll
> push this first patch early next week (Monday) unless there are more
> comments.
Thanks for updating. I checked v88-0001 and I have no objection. LGTM.
Best Regards,
Hayato Kuroda
FUJITSU LIMITED
On Wed, Jan 25, 2023 at 10:05 AM Amit Kapila wrote:
>
> On Wed, Jan 25, 2023 at 3:15 AM Peter Smith wrote:
> >
> > 1.
> > @@ -210,7 +210,7 @@ int logical_decoding_work_mem;
> > static const Size max_changes_in_memory = 4096; /* XXX for restore only */
> >
> > /* GUC variable */
> > -int logical