On 21/04/17 04:38, Masahiko Sawada wrote:
> On Thu, Apr 20, 2017 at 8:43 PM, Petr Jelinek
> <petr.jeli...@2ndquadrant.com> wrote:
>> On 20/04/17 06:21, Masahiko Sawada wrote:
>>> On Thu, Apr 20, 2017 at 12:30 AM, Petr Jelinek
>>> <petr.jeli...@2ndquadrant.com> wrote:
>>>> On 19/04/17 15:57, Masahiko Sawada wrote:
>>>>> On Wed, Apr 19, 2017 at 10:07 PM, Petr Jelinek
>>>>> <petr.jeli...@2ndquadrant.com> wrote:
>>>>>> On 19/04/17 14:42, Masahiko Sawada wrote:
>>>>>>> On Wed, Apr 19, 2017 at 5:12 PM, Kyotaro HORIGUCHI
>>>>>>> <horiguchi.kyot...@lab.ntt.co.jp> wrote:
>>>>>>>> At Tue, 18 Apr 2017 18:40:56 +0200, Petr Jelinek 
>>>>>>>> <petr.jeli...@2ndquadrant.com> wrote in 
>>>>>>>> <f64d87d1-bef3-5e3e-a999-ba302816a...@2ndquadrant.com>
>>>>>>>>> On 18/04/17 18:14, Peter Eisentraut wrote:
>>>>>>>>>> On 4/18/17 11:59, Petr Jelinek wrote:
>>>>>>>>>>> Hmm if we create hashtable for this, I'd say create hashtable for
>>>>>>>>>>> the whole table_states then. The reason why it's list now was that
>>>>>>>>>>> it seemed unnecessary to have hashtable when it will be empty
>>>>>>>>>>> almost always but there is no need to have both hashtable + list
>>>>>>>>>>> IMHO.
>>>>>>>>
>>>>>>>> I understand that, but like Peter I also don't like the frequent
>>>>>>>> palloc/pfree in a long-lasting context and the double loop.
>>>>>>>>
>>>>>>>>>> The difference is that we blow away the list of states when the
>>>>>>>>>> catalog changes, but we keep the hash table with the start times
>>>>>>>>>> around.  We need two things with different life times.
>>>>>>>>
>>>>>>>> On the other hand, a hash seems overdone. In addition to that, the
>>>>>>>> hash version leaks stale entries while subscriptions are modified,
>>>>>>>> and vacuuming them is costly.
>>>>>>>>
>>>>>>>>> Why can't we just update the hashtable based on the catalog? I mean
>>>>>>>>> once the record is not needed in the list, the table has been synced
>>>>>>>>> so there is no need for the timestamp either since we'll not try to
>>>>>>>>> start the worker again.
>>>>>>>
>>>>>>> I guess the table sync worker for the same table could need to be
>>>>>>> started again. For example, please imagine a case where the table
>>>>>>> belonging to the publication is removed from it and the corresponding
>>>>>>> subscription is refreshed, and then the table is added to it again. We
>>>>>>> have the record of the table with a timestamp in the hash table from
>>>>>>> when the table was synced the first time, so the table sync after the
>>>>>>> refresh could have to wait for the interval.
>>>>>>>
>>>>>>
>>>>>> But why do we want to wait in such case where user has explicitly
>>>>>> requested refresh?
>>>>>>
>>>>>
>>>>> Yeah, sorry, I meant that we don't want to wait but cannot launch the
>>>>> tablesync worker in such a case.
>>>>>
>>>>> But after more thought, the latest patch Peter proposed has the same
>>>>> problem. Perhaps we always need to scan the whole pg_subscription_rel
>>>>> and remove the entry once the corresponding table gets synced.
>>>>>
>>>>
>>>> Yes, that's what I mean by "Why can't we just update the hashtable based
>>>> on the catalog". And if we do that, then I don't understand why we need
>>>> both a hashtable and a linked list if we have to update both based on
>>>> catalog reads anyway.
>>>
>>> Thanks, I've now understood correctly. Yes, I think you're right. If we
>>> update the hash table based on the catalog whenever the table state is
>>> invalidated, we don't need to have both the hash table and the list.
>>>
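
To make that concrete, the idea would be something like the following rough
sketch. The entry struct and helper names (SyncStartTimeEntry,
last_start_times, prune_last_start_times) are invented here for illustration
and are not taken from any posted patch:

/*
 * Sketch only; assumes the usual backend headers (utils/hsearch.h,
 * utils/timestamp.h, catalog/pg_subscription_rel.h, nodes/pg_list.h).
 */
typedef struct SyncStartTimeEntry
{
    Oid         relid;              /* hash key: relation being synchronized */
    TimestampTz last_start_time;    /* when we last launched a sync worker */
} SyncStartTimeEntry;

static HTAB *last_start_times = NULL;

/* Create the hash table keyed by relid, in a long-lived memory context. */
static void
init_last_start_times(void)
{
    HASHCTL     ctl;

    memset(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(SyncStartTimeEntry);
    last_start_times = hash_create("last tablesync start times",
                                   256, &ctl, HASH_ELEM | HASH_BLOBS);
}

/*
 * Whenever table_states is rebuilt from pg_subscription_rel, drop entries
 * for relations that no longer need synchronization, so the hash table
 * never accumulates stale timestamps.
 */
static void
prune_last_start_times(List *table_states)
{
    HASH_SEQ_STATUS status;
    SyncStartTimeEntry *entry;

    hash_seq_init(&status, last_start_times);
    while ((entry = hash_seq_search(&status)) != NULL)
    {
        ListCell   *lc;
        bool        still_syncing = false;

        foreach(lc, table_states)
        {
            SubscriptionRelState *rstate = (SubscriptionRelState *) lfirst(lc);

            if (rstate->relid == entry->relid &&
                rstate->state != SUBREL_STATE_READY)
            {
                still_syncing = true;
                break;
            }
        }

        if (!still_syncing)
            hash_search(last_start_times, &entry->relid, HASH_REMOVE, NULL);
    }
}

Deleting the entry just returned by hash_seq_search() is allowed with
dynahash, so the prune pass can remove stale entries in place.
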
>>> BTW, in current HEAD, SUBREL_STATE_SYNCWAIT is never stored in the
>>> pg_subscription_rel catalog. So the following condition seems incorrect.
>>> Shouldn't we use "syncworker->relstate == SUBREL_STATE_SYNCWAIT" instead?
>>>
>>>             /*
>>>              * There is a worker synchronizing the relation and waiting for
>>>              * apply to do something.
>>>              */
>>>             if (syncworker && rstate->state == SUBREL_STATE_SYNCWAIT)
>>>             {
>>>                 /*
>>>                  * There are three possible synchronization situations here.
>>>                  *
>>>                  * a) Apply is in front of the table sync: We tell the table
>>>                  *    sync to CATCHUP.
>>>                  *
>>>                  * b) Apply is behind the table sync: We tell the table sync
>>>                  *    to mark the table as SYNCDONE and finish.
>>>                  *
>>>                  * c) Apply and table sync are at the same position: We tell
>>>                  *    table sync to mark the table as READY and finish.
>>>                  *
>>>                  * In any case we'll need to wait for table sync to change
>>>                  * the state in catalog and only then continue ourselves.
>>>                  */
>>>
>>
>> Good catch. Although it's not the comment that's wrong, it's the if. We
>> should not compare rstate->state but syncworker->relstate.
> 
> I've attached a patch to fix this bug.
> 

Rereading the code again, it's actually not a bug, as we update rstate to
what the syncworker says, but it's obviously confusing, so it's probably
still worth committing that.
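
For reference, the check in process_syncing_tables_for_apply() would then
read roughly like this (just a sketch of the idea, not necessarily what the
attached patch does):

            /*
             * There is a worker synchronizing the relation and waiting for
             * apply to do something.  Check the relstate reported by the
             * sync worker itself rather than rstate->state, since
             * SUBREL_STATE_SYNCWAIT is never written to pg_subscription_rel.
             */
            if (syncworker && syncworker->relstate == SUBREL_STATE_SYNCWAIT)
            {
                /* ... decide between CATCHUP, SYNCDONE and READY as above ... */
            }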

-- 
  Petr Jelinek                  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

