Ah, I see now. So events can be lost only in certain failure modes in a 
clustered environment. I got confused because in the documentation losing an 
event sounded like something common and highly probable, which is not the 
case. Thanks for the explanation, guys!
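
For completeness, here's a rough sketch of how I'm planning to consume the 
feed on my side. The event shape follows the docs (one JSON object per line 
on a continuous feed, with blank heartbeat lines); the helper name is just 
illustrative:

```python
import json

# Event lines on a continuous /_db_updates feed look like:
#   {"db_name": "mailbox", "type": "updated", "seq": "11-g1A..."}
# Blank lines are heartbeats that keep the connection alive.

VALID_TYPES = {"created", "updated", "deleted"}

def parse_db_update(line):
    """Parse one line of a continuous _db_updates feed.

    Returns (db_name, event_type), or None for heartbeats and
    malformed or unrecognized input.
    """
    line = line.strip()
    if not line:
        return None  # heartbeat
    try:
        event = json.loads(line)
    except ValueError:
        return None
    if event.get("type") not in VALID_TYPES or "db_name" not in event:
        return None
    return event["db_name"], event["type"]
```

Since events are not strictly guaranteed, I'll also run a periodic full scan 
of the databases I care about as a safety net, rather than trusting the feed 
to be complete.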

--Vovan

> On Jun 27, 2017, at 8:55 AM, Adam Kocoloski <[email protected]> wrote:
> 
> The guarantee on the per-DB _changes feed is much stronger — if you got a 201 
> or 202 HTTP response back, the updated documents will always show up in 
> _changes eventually.
> 
> It'd be nice if we could offer the same guarantee on _db_updates but there 
> are some non-trivial technical challenges under the hood. In the current 
> implementation you’d basically need to crash the entire cluster within 1 
> second of the update being committed to the DB in order for it to be lost 
> from the _db_updates feed. I think that’s still a useful alternative to 
> opening up 100s of connections to listen to every database’s feed directly.
> 
> Cheers, Adam
> 
>> On Jun 27, 2017, at 2:42 AM, Vladimir Kuznetsov <[email protected]> wrote:
>> 
>> Joan, thanks for the pointer to the documentation. 
>> 
>> Sorry for being annoying, but I have one more question. The doc states the 
>> following:
>> 
>> "Note: This was designed without the guarantee that a DB event will be 
>> persisted or ever occur in the _db_updates feed. It probably will, but it 
>> isn't guaranteed". 
>> 
>> Ok, I understand events in the _db_updates feed are not guaranteed to be in 
>> order, and timing is also not guaranteed; that's fine. What really confuses 
>> me is "DB event is not guaranteed to ever occur in _db_updates feed". 
>> What's the point of using _db_updates if I cannot rely on it? Recalling the 
>> use case you mentioned earlier in this thread: "you have 100 databases and 
>> you want to know when something changes on all of them", how do I know for 
>> sure a change in some database occurred if it is not even guaranteed to 
>> eventually appear in _db_updates?
>> 
>> Another question: is this also true for the per-db _changes feed, i.e. is it 
>> also not guaranteed that ANY change will eventually appear in _changes?
>> 
>> thanks,
>> --Vovan
>> 
>>> On Jun 26, 2017, at 11:12 PM, Joan Touzet <[email protected]> wrote:
>>> 
>>> I'll update the docs. However, for now we have:
>>> 
>>> ---
>>> When a database is created, deleted, or updated, a corresponding event will 
>>> be persisted to disk (Note: This was designed without the guarantee that a 
>>> DB event will be persisted or ever occur in the _db_updates feed. It 
>>> probably will, but it isn't guaranteed). Users can subscribe to a 
>>> _changes-like feed of these database events by querying the _db_updates 
>>> endpoint.
>>> 
>>> When an admin user queries the /_db_updates endpoint, they will see the 
>>> account name associated with the DB update as well as update
>>> ---
>>> And technically, the endpoint can work without the _global_changes 
>>> database, but be aware:
>>> 
>>> ---
>>> 3: global_changes, update_db: (true/false) A flag setting whether to update 
>>> the global_changes database. If false, changes will be lost and there will 
>>> be no performance impact of global_changes on the cluster.
>>> ---
>>> 
>>> This is all from https://github.com/apache/couchdb-global-changes
>>> 
>>> I also learned something new today!
>>> 
>>> -Joan
>>> 
>>> ----- Original Message -----
>>> From: "Vladimir Kuznetsov" <[email protected]>
>>> To: "Joan Touzet" <[email protected]>
>>> Cc: [email protected]
>>> Sent: Tuesday, 27 June, 2017 1:53:02 AM
>>> Subject: Re: _global_changes purpose
>>> 
>>> Thanks Joan. 
>>> 
>>> Very good to know. It'd be great to have this reflected somewhere in the 
>>> official CouchDB 2.0 docs. Perhaps it's already there and I just couldn't 
>>> find it...
>>> 
>>> thanks,
>>> --Vovan
>>> 
>>>> On Jun 26, 2017, at 10:42 PM, Joan Touzet <[email protected]> wrote:
>>>> 
>>>> _db_updates is powered by the _global_changes database.
>>>> 
>>>> -Joan
>>>> 
>>>> ----- Original Message -----
>>>> From: "Vladimir Kuznetsov" <[email protected]>
>>>> To: [email protected], "Joan Touzet" <[email protected]>
>>>> Sent: Tuesday, 27 June, 2017 12:39:55 AM
>>>> Subject: Re: _global_changes purpose
>>>> 
>>>> Hi Joan
>>>> 
>>>> I heard /_db_updates is the feed-like thing I could subscribe to in order 
>>>> to listen to global updates (the same way you described). It's not very 
>>>> clear why I would need access to the _global_changes database when I 
>>>> already have the /_db_updates endpoint with pagination and long-polling 
>>>> features.
>>>> 
>>>> Is listening on _global_changes's /_changes feed the same as listening on 
>>>> /_db_updates? Or is there any difference? What is preferred?
>>>> 
>>>> thanks,
>>>> --Vovan
>>>> 
>>>> 
>>>>> On Jun 26, 2017, at 9:21 PM, Joan Touzet <[email protected]> wrote:
>>>>> 
>>>>> Say you have 100 databases and you want to know when something changes on 
>>>>> all
>>>>> of them. In 1.x you have to open 100 _changes continuous feeds to get that
>>>>> information. In 2.x you have to open a single connection to 
>>>>> _global_changes.
>>>>> 
>>>>> Think of the possibilities.
>>>>> 
>>>>> -Joan
>>>>> 
>>>>> ----- Original Message -----
>>>>> From: "Vladimir Kuznetsov" <[email protected]>
>>>>> To: [email protected]
>>>>> Sent: Monday, 26 June, 2017 8:47:46 PM
>>>>> Subject: _global_changes purpose
>>>>> 
>>>>> Hi guys
>>>>> 
>>>>> I cannot find a good explanation of the purpose of the _global_changes 
>>>>> system database in CouchDB 2.0. Can somebody please explain or provide a 
>>>>> pointer?
>>>>> 
>>>>> thanks
>>>>> --Vovan
>>>> 
>> 
> 
