On Fri, Feb 3, 2017 at 10:56 AM, Kyotaro HORIGUCHI
<horiguchi.kyot...@lab.ntt.co.jp> wrote:
> At Fri, 3 Feb 2017 01:02:47 +0900, Fujii Masao <masao.fu...@gmail.com> wrote 
> in <CAHGQGwHqQVHmQ7wM=elnnp1_oy-gvssacajxwje4nc2twsq...@mail.gmail.com>
>> On Thu, Feb 2, 2017 at 2:36 PM, Michael Paquier
>> <michael.paqu...@gmail.com> wrote:
>> > On Thu, Feb 2, 2017 at 2:13 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> >> Kyotaro HORIGUCHI <horiguchi.kyot...@lab.ntt.co.jp> writes:
>> >>> Then, the reason for the TRY-CATCH clause is that I found that
>> >>> some functions called from there can throw exceptions.
>> >>
>> >> Yes, but all LWLocks should be released by normal error recovery.
>> >> It should not be necessary for this code to clean that up by hand.
>> >> If it were necessary, there would be TRY-CATCH around every single
>> >> LWLockAcquire in the backend, and we'd have an unreadable and
>> >> unmaintainable system.  Please don't add a TRY-CATCH unless it's
>> >> *necessary* -- and you haven't explained why this one is.
>>
>> Yes.
>
> Thank you for the suggestion. I misunderstood that.
>
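
(For the archives: the usual pattern is simply to acquire and release
the lock and let error recovery clean it up, rather than wrapping it in
PG_TRY/PG_CATCH.  A rough sketch, not the actual DropSubscription()
code:

    LWLockAcquire(LogicalRepLauncherLock, LW_EXCLUSIVE);

    /* ... work that may elog(ERROR) ... */

    LWLockRelease(LogicalRepLauncherLock);

If an ERROR is thrown in between, error recovery releases any held
LWLocks, so no explicit cleanup is needed.)
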
>> > Putting my hands on the code and looking at the problem, I can see that
>> > dropping a subscription on a node makes it unresponsive when it is asked
>> > to stop. That's just because calls to LWLockRelease are missing, as in
>> > the attached patch. A try/catch block should not be necessary.
>>
>> Thanks for the patch!
>>
>> With the patch, LogicalRepLauncherLock is released at the end of
>> DropSubscription(). But ISTM that the lock should be released just after
>> logicalrep_worker_stop(), and there is no need to protect the removal of
>> the replication slot with the lock.
>
> That's true. logicalrep_worker_stop returns after confirming that
> worker->proc is cleared, so no false relaunch can occur.
> After all, logicalrep_worker_stop is surrounded by an
> LWLockAcquire/LWLockRelease pair, so the pair could be moved into the
> function to make the locked section narrower.
>
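
(If I'm reading the suggestion right, DropSubscription() would then just
call logicalrep_worker_stop(subid), and the acquire/release would live
inside the function, roughly:

    LWLockAcquire(LogicalRepLauncherLock, LW_EXCLUSIVE);

    /* find the worker slot for subid, ask the worker to exit, and
     * wait until worker->proc is cleared */

    LWLockRelease(LogicalRepLauncherLock);

Just a sketch to confirm we mean the same thing.)
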
>>     /*
>>     * If we found worker but it does not have proc set it is starting up,
>>     * wait for it to finish and then kill it.
>>     */
>>     while (worker && !worker->proc)
>>     {
>>
>> ISTM that the above loop in logicalrep_worker_stop() is not necessary
>> because LogicalRepLauncherLock ensures that the above condition is
>> always false. Thoughts? Am I missing something?
>
> The lock exists only to keep the launcher from starting a
> worker. Creating a subscription and starting a worker for the
> slot run independently.
>
>> If the above condition is true, that means there is a worker slot
>> having the "subid" of the worker to kill, but its "proc" has not been
>> set yet.
>
> Yes. That situation happens after the launcher sets subid and before
> ApplyWorkerMain attaches to the slot.  The lock doesn't protect that
> window.

No. logicalrep_worker_launch() calls WaitForReplicationWorkerAttach()
and waits for the worker to attach to the slot. Then LogicalRepLauncherLock
is released. So both "subid" and "proc" should be set while the lock is being
held.
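
To illustrate, the launcher side does roughly this while launching a
worker (simplified, argument lists omitted):

    LWLockAcquire(LogicalRepLauncherLock, LW_EXCLUSIVE);

    /* pick a free worker slot and fill it in, including worker->subid */

    /* register the bgworker, then WaitForReplicationWorkerAttach()
     * waits until worker->proc is set */

    LWLockRelease(LogicalRepLauncherLock);

So a backend that sees "subid" set while holding the lock should also
see "proc" set.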

Regards,

-- 
Fujii Masao

