On Fri, Nov 11, 2022 at 2:12 PM houzj.f...@fujitsu.com
<houzj.f...@fujitsu.com> wrote:
>
Few comments on v46-0001:
======================
1.
+static void
+apply_handle_stream_abort(StringInfo s)
{
...
+	/* Send STREAM ABORT message to the parallel apply worker. */
+	parallel_apply_send_data(winfo, s->len, s->data);
+
+	if (abort_toplevel_transaction)
+	{
+		parallel_apply_unlock_stream(xid, AccessExclusiveLock);

Shouldn't we release this lock before sending the message, as we do for
stream_prepare and stream_commit? If there is a reason for doing it
differently here, then let's add a comment explaining it.

2.
It seems that once the patch marks the file state as busy
(LEADER_FILESET_BUSY), the file will only become accessible after the
leader apply worker receives a transaction end message like
stream_commit. Is my understanding correct? If yes, then why can't we
make it accessible after the stream_stop message? Are you worried about
the concurrency handling for reading and writing the file? If so, we
could probably deal with that via a lock on the file taken for reading
and writing each change. I think after this we may not need the
additional stream-level lock/unlock in parallel_apply_spooled_messages.
I understand that you probably want to keep the code simple, so I am not
suggesting changing it immediately; I just wanted to know whether you
have considered alternatives here.

3.
Don't we need to release the transaction lock at stream_abort in the
parallel apply worker? I understand that we are not waiting for it in
the leader worker, but the parallel apply worker should still release it
if it was acquired at stream_start.

4.
A minor comment change as below:

diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 43f09b7e9a..c771851d1f 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -1851,6 +1851,9 @@ apply_handle_stream_abort(StringInfo s)
 			parallel_apply_stream_abort(&abort_data);
 
 			/*
+			 * We need to wait after processing rollback to savepoint for the
+			 * next set of changes.
+			 *
 			 * By the time parallel apply worker is processing the changes in
 			 * the current streaming block, the leader apply worker may have
 			 * sent multiple streaming blocks. So, try to lock only if there

-- 
With Regards,
Amit Kapila.
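PS: For comment 1, the suggested reordering would look roughly like the
sketch below. This is only an illustration based on the snippet quoted
above, not the actual patch code; the surrounding control flow and the
exact placement of the abort_toplevel_transaction check are assumptions.

```c
	if (abort_toplevel_transaction)
	{
		/*
		 * Release the stream lock before sending the message, matching
		 * the ordering used for stream_prepare and stream_commit.
		 */
		parallel_apply_unlock_stream(xid, AccessExclusiveLock);
	}

	/* Send STREAM ABORT message to the parallel apply worker. */
	parallel_apply_send_data(winfo, s->len, s->data);
```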