On 7/30/13 5:27 AM, Dale Harvey wrote:
> This looks like a good approach overall, it's structured very similarly
> to how I want pouchdb architected eventually (writes always succeed,
> conflicts resolved in background, only the minimal revision history is
> stored), but it may be safer doing a write queue as opposed to
> stripping down a full revision-history mechanism
Yeah, we noticed that each asynchronous link (provider->mediator
upstream, mediator->server upstream, server->mediator downstream,
mediator->provider downstream) can use one of two different modes:
* compare-and-swap: writes are rejected unless the sender is up-to-date,
making the sender responsible for fetching the current version and
merging
* write-and-merge: writes always succeed, but the recipient may have to
merge, and thus produce a new record to send back up
(these correspond to choosing C and A in the CAP theorem, of course)
And we realized that things are going to be happiest if the modes line
up in the same direction: all compare-and-swap, or all
write-and-merge (I lean towards compare-and-swap everywhere, as it tends
to isolate the merge duty rather than allowing the conflicts to spread
to all clients). Two components that can interact synchronously don't
have to choose a mode.
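The two modes can be sketched roughly like this (a minimal illustration
in Python; the class and method names are mine, not part of the
proposal):

```python
# Sketch of the two per-link write modes. Revisions here are simple
# integers; real implementations would use hashes or rev strings.

class Conflict(Exception):
    pass

class CompareAndSwapStore:
    """Writes are rejected unless the sender is up to date; the
    sender must fetch the current version, merge, and retry."""
    def __init__(self):
        self.records = {}  # key -> (revision, value)

    def write(self, key, value, base_revision):
        current = self.records.get(key)
        if current is not None and current[0] != base_revision:
            raise Conflict("sender must fetch and merge first")
        new_revision = (base_revision or 0) + 1
        self.records[key] = (new_revision, value)
        return new_revision

class WriteAndMergeStore:
    """Writes always succeed; the recipient merges concurrent
    versions and produces a new record to send back up."""
    def __init__(self, merge):
        self.merge = merge   # merge(mine, theirs) -> merged value
        self.records = {}    # key -> (revision, value)

    def write(self, key, value, base_revision):
        current = self.records.get(key)
        if current is not None and current[0] != base_revision:
            value = self.merge(current[1], value)  # recipient merges
        new_revision = (current[0] if current else 0) + 1
        self.records[key] = (new_revision, value)
        return new_revision
```

Note how the merge duty lands on opposite sides: the sender in the
first case, the recipient in the second. That asymmetry is why mixing
the modes along one path gets awkward.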
This proposal uses synchronous writes from provider->mediator,
compare-and-swap from the mediator to the server, write-and-merge from
the server to the mediator, and write-and-merge from the mediator back
into the provider.
> I dont quite understand the need for a client revision as it looks like
> the provider wont be using it
I think the provider can use client-revision to discover whether a new
downstream record (coming from the mediator, in the .merge() method) is
new, or if it's really a reflection of something it wrote earlier. And
it's what the mediator uses to correlate the records it sends upstream
to the server with the downstream records (reflections or
new-from-other-clients) it gets back.
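In code, the provider-side check might look something like this (a
hypothetical sketch; the field and method names are illustrative, not
from the proposal):

```python
# Sketch: a provider using a client-generated content-revision to
# tell a reflection of its own write apart from a genuinely new
# downstream record delivered by the mediator.

class Provider:
    def __init__(self):
        self.records = {}    # key -> record dict
        self.pending = set() # client-revisions we sent upstream

    def local_write(self, key, value, client_revision):
        self.records[key] = {"value": value,
                             "client_revision": client_revision}
        self.pending.add(client_revision)

    def merge(self, key, downstream):
        """Called by the mediator with a downstream record."""
        if downstream["client_revision"] in self.pending:
            # A reflection of something we wrote earlier: nothing new.
            self.pending.discard(downstream["client_revision"])
            return "reflection"
        # Genuinely new, from another client: store it.
        self.records[key] = downstream
        return "new"
```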
If we don't have a client-generated content-revision, then we need to
store server-revisions in e.g. Places.db, which would make it pretty
hard to swap out the backend server.
One trick Vlad has looked into is to pre-compute the server-revision,
using the same algorithm that CouchDB will use (based on the Erlang
"Term" language). It works, but it feels fragile, and would lock us into
a specific backend database. We could also use the response to "POST
/_bulk_docs" to learn the server-generated server-revisions for each
accepted upstream record.
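For the _bulk_docs route, the bookkeeping would be along these lines
(the response shape, a JSON array of {"ok", "id", "rev"} or
{"id", "error", "reason"} objects, is CouchDB's documented format; the
function around it is just an illustration):

```python
# Sketch: learning server-generated revisions from a CouchDB
# "POST /_bulk_docs" response body.

import json

def record_server_revisions(response_body, revision_map):
    """Update revision_map (doc id -> server rev) from the response;
    return the ids whose writes were rejected."""
    rejected = []
    for result in json.loads(response_body):
        if result.get("ok"):
            revision_map[result["id"]] = result["rev"]
        else:
            rejected.append(result["id"])
    return rejected
```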
>> The Mediator scans and removes any records from the unsent queue with
>> the same key (since these will surely fail), as well as from the sent
>> queue (since this will fail, if it hasn't already).
>
> This is smart, not sure how content-revision works here? The provider
> just gets 3 records and merges them
I'll have to think about that some more. One note though: the provider
only gets two records ("mine" and "theirs", not "common"), since we
don't retain enough history to provide that. We'd need an extra shadow
copy in the mediator to hold onto the common ancestor.
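To make the two-record limitation concrete, here is a small sketch
(names and field layout are mine) of what the provider can and cannot
do with only "mine" and "theirs", versus a three-way merge that also
has "common":

```python
# Without the common ancestor, a two-way merge cannot tell which side
# changed a field, so every disagreement is a conflict. With it, a
# three-way merge resolves one-sided edits and deletions automatically.

def two_way_merge(mine, theirs):
    merged, conflicts = {}, []
    for key in set(mine) | set(theirs):
        if mine.get(key) == theirs.get(key):
            merged[key] = mine.get(key)
        else:
            conflicts.append(key)  # no way to know who changed it
    return merged, conflicts

def three_way_merge(common, mine, theirs):
    merged, conflicts = {}, []
    for key in set(common) | set(mine) | set(theirs):
        base, a, b = common.get(key), mine.get(key), theirs.get(key)
        if a == b:
            value = a
        elif a == base:
            value = b   # only "theirs" changed: take it
        elif b == base:
            value = a   # only "mine" changed: keep it
        else:
            conflicts.append(key)  # both sides changed: real conflict
            continue
        if value is not None:
            merged[key] = value    # None means the field was deleted
    return merged, conflicts
```

That extra shadow copy in the mediator is exactly what would supply the
"common" argument here.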
cheers,
-Brian
_______________________________________________
Sync-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/sync-dev