A few more questions, just to be on the same page:

> Work out which contacts have unread messages in the user's MAM archive,
> how many, and what the id of the last read message is
1. I assume the ids are the same as in MAM and can later be used to query
MAM. Am I right?

2. What exactly do we understand by "contact" here? An item in the user's
roster, or just any unique JID from the whole archive? I have worked with
systems (outside the XMPP world) that had no roster but still had MAM.
Also, a contact does not have to be in the user's roster to be able to chat
with the user, and the user may still want unread-message information for
such a contact.

Best regards
Michal Piotrowski
[email protected]

On 18 January 2017 at 15:36, Michal Piotrowski <[email protected]> wrote:

> I have another question, regarding pipelining. From what I understand,
> the very first connection to the server should be "normal": the client
> sends a packet, waits for the response(s) and proceeds. On subsequent
> connections, everything mentioned in the spec can be pipelined. What
> makes me wonder are server responses such as stream features. Should the
> server still send them? In my opinion they are redundant in this case, as
> the client is probably not interested and already knows which features
> will be used.
>
> Best regards
> Michal Piotrowski
> [email protected]
>
> On 18 January 2017 at 15:01, Evgeny Khramtsov <[email protected]> wrote:
>
>> Wed, 18 Jan 2017 14:24:02 +0100
>> Florian Schmaus <[email protected]> wrote:
>>
>> > Bind2 already tries to solve race conditions an XMPP client encounters
>> > when creating a new session, by atomically querying the user's archive
>> > for the ID of the latest stanza, binding the resource *and* activating
>> > the stream of live stanzas right after the retrieved ID. I believe you
>> > don't need a database with atomic operations to implement that
>> > protocol step atomically server-side (by just locking the archive and
>> > stanza stream of the user while that action is performed).
>>
>> Ok, let's say you need to read some different tables (possibly in
>> different databases, for example SQL and Redis).
>> In order to do this atomically, you need to read one table, cache the
>> results somewhere, then read the next table and cache the result, and
>> so on. So you need to maintain an additional cache. And what if you
>> need writes (enabling carbons requires writes, I believe)? You need to
>> discard all your writes if one of the tables fails (simulating a
>> rollback). Is it worth the effort?
>>
>> _______________________________________________
>> Standards mailing list
>> Info: https://mail.jabber.org/mailman/listinfo/standards
>> Unsubscribe: [email protected]
>> _______________________________________________
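The "simulated rollback" Evgeny describes (stage every write in memory, apply them all only after every read and write has succeeded, otherwise discard them) can be sketched as below. This is a minimal illustration, not an implementation from any server or XEP: the stores are plain dicts standing in for SQL and Redis, and `StagedSession` and `enable_carbons` are hypothetical names.

```python
# Minimal sketch of a "simulated rollback" across independent stores that
# share no transaction: writes are buffered, then applied only on commit.
# All names here are hypothetical; real code would wrap actual DB clients.

class StagedSession:
    def __init__(self, stores):
        self.stores = stores   # name -> dict acting as a key/value table
        self.staged = []       # buffered (store_name, key, value) writes

    def read(self, store, key):
        # Reads consult staged writes first, simulating read-your-writes.
        for name, k, v in reversed(self.staged):
            if name == store and k == key:
                return v
        return self.stores[store].get(key)

    def write(self, store, key, value):
        self.staged.append((store, key, value))

    def commit(self):
        # Nothing touched the real stores before this point, so a failure
        # anywhere earlier needs no undo -- the buffer is simply dropped.
        for name, key, value in self.staged:
            self.stores[name][key] = value
        self.staged.clear()

def enable_carbons(session, user):
    # Example multi-store operation: a write to one store followed by a
    # read from another that may fail, forcing the write to be discarded.
    session.write("prefs", user + ":carbons", True)
    last_id = session.read("mam", user + ":last_id")
    if last_id is None:
        raise KeyError("no archive for " + user)  # staged write is dropped
    session.write("prefs", user + ":resume_from", last_id)

stores = {"mam": {"alice:last_id": "stanza-42"}, "prefs": {}}
ok = StagedSession(stores)
enable_carbons(ok, "alice")
ok.commit()                        # both writes land together

failed = StagedSession(stores)
try:
    enable_carbons(failed, "bob")  # raises: no MAM row for bob
    failed.commit()
except KeyError:
    pass                           # nothing committed, nothing to undo
```

The point of the sketch is the cost Evgeny is questioning: even this toy version needs an extra read-through cache and a write buffer per operation, on top of the stores themselves.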
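Florian's point quoted above — that no transactional database is needed if the server holds a per-user lock while it reads the latest archive ID, binds the resource, and activates the live stream — can be sketched like this. It is an illustration only; `bind2`, `archives`, and `live_streams` are hypothetical names, and a real server would also take the same lock on the archive-write path.

```python
# Sketch of server-side atomicity for Bind2 via a per-user lock: the three
# steps (latest archive ID, resource bind, live-stream activation) cannot
# interleave with stanza delivery while the lock is held. Names are
# hypothetical, not from XEP-0386.

import threading
from collections import defaultdict

user_locks = defaultdict(threading.Lock)   # one lock per bare JID
archives = {"alice@example": ["id-1", "id-2", "id-3"]}
live_streams = set()                        # users with live delivery on

def bind2(jid, resource):
    with user_locks[jid]:
        # Writers appending to the archive or delivering live stanzas
        # would acquire the same lock, so these steps are atomic.
        latest = archives.get(jid, [None])[-1]   # 1. latest archive ID
        full_jid = jid + "/" + resource          # 2. bind the resource
        live_streams.add(jid)                    # 3. activate live stream
        return full_jid, latest

print(bind2("alice@example", "phone"))  # ('alice@example/phone', 'id-3')
```

The lock trivially serialises a single process; Evgeny's objection is about the case where the archive and the session state live in different external stores, where a process-local lock no longer covers every writer.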
