1. It is not possible to iterate backwards efficiently by including both
<before> and <after> in the query, because as soon as <after> is present
the result set is iterated forwards. That means that when iterating
backwards the client has to omit <after> and fetch whole pages, and the
last page will then largely overlap with messages it already has locally,
which is wasteful.
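A minimal sketch of the paging query in question, built with Python's stdlib ElementTree. The namespaces are the real ones from XEP-0313 (`urn:xmpp:mam:2`; older drafts used other version suffixes) and XEP-0059 (`http://jabber.org/protocol/rsm`); the function name and parameters are just illustrative:

```python
import xml.etree.ElementTree as ET

MAM_NS = "urn:xmpp:mam:2"                    # XEP-0313 (version suffix may differ)
RSM_NS = "http://jabber.org/protocol/rsm"    # XEP-0059

def mam_page_query(query_id, before=None, after=None, max_items=20):
    """Build a MAM query IQ. Per XEP-0059, <before> pages backwards and
    <after> pages forwards; there is no way to combine them into a
    backwards iteration bounded on both ends."""
    iq = ET.Element("iq", {"type": "set", "id": query_id})
    query = ET.SubElement(iq, f"{{{MAM_NS}}}query")
    rsm = ET.SubElement(query, f"{{{RSM_NS}}}set")
    ET.SubElement(rsm, f"{{{RSM_NS}}}max").text = str(max_items)
    if before is not None:
        ET.SubElement(rsm, f"{{{RSM_NS}}}before").text = before
    if after is not None:
        ET.SubElement(rsm, f"{{{RSM_NS}}}after").text = after
    return ET.tostring(iq, encoding="unicode")

# Paging backwards: only <before> can be sent, anchored at the oldest
# archive-id the client already knows.
print(mam_page_query("q1", before="oldest-known-id"))
```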
2. It is not possible to reliably detect "holes" in the archive, because
the client cannot know the archive-id of the last message: for our own
sent messages the server gives no feedback about which archive-id a
message was assigned when it was archived. As a result, the client has to
fetch ALL messages from MAM again just to be sure the holes are filled,
even though many message bodies were already received or sent during live
communication. In the current state, essentially all of our own sent
messages are "holes" in the local archive, not to mention the various
"bad network" scenarios, multi-device setups, and longer offline periods.
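To make the "holes" concrete, here is a trivial sketch of hole detection, assuming the client could somehow obtain the server's full archive-id sequence (which today requires re-fetching everything):

```python
def find_holes(server_ids, local_ids):
    """Given the server's archive-id sequence (oldest to newest) and the
    set of ids the client already has content for, return the ids that
    still need fetching. Our own sent messages all count as holes,
    because the server never told us their archive-ids."""
    local = set(local_ids)
    return [i for i in server_ids if i not in local]

# a2 and a4 were sent by us, so we have their bodies but not their ids:
print(find_holes(["a1", "a2", "a3", "a4"], {"a1", "a3"}))  # → ['a2', 'a4']
```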
Making separate requests, one for archive-ids and one for content, would:
- cause much less waste during sync, because only ids would be re-fetched,
not the message bodies that are already present locally
- make it possible to fill multiple holes in one request by fetching only
the content that is really needed
- make it possible for push payloads to contain only message ids (for
clients that want to handle encrypted messages locally by fetching them
and only them); currently this is only doable by giving the push the id
one before the wanted message
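The two-request sync proposed above can be sketched as follows. Note that neither request exists in XEP-0313 today; `fetch_ids` and `fetch_content` are hypothetical client callbacks standing in for an ids-only query and an ids-to-content query:

```python
def sync_archive(fetch_ids, fetch_content, local_store):
    """Hypothetical two-phase sync: one cheap ids-only request, then one
    content request for exactly the missing ids."""
    server_ids = fetch_ids()                       # cheap: ids only
    missing = [i for i in server_ids if i not in local_store]
    for msg in fetch_content(missing):             # only the holes
        local_store[msg["id"]] = msg["body"]
    return missing

# Demo with in-memory stubs standing in for the server:
archive = {"a1": "hi", "a2": "sent by us", "a3": "ok"}
local = {"a1": "hi", "a3": "ok"}                   # our own a2 is a hole
filled = sync_archive(
    lambda: list(archive),
    lambda ids: [{"id": i, "body": archive[i]} for i in ids],
    local,
)
print(filled)  # → ['a2']
```

Only the ids of already-synced messages are transferred twice; the bodies travel once.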
Standards mailing list