This looks like a good approach overall; it's structured very similarly to
how I want PouchDB architected eventually (writes always succeed, conflicts
are resolved in the background, only minimal revision history is stored).
That said, it may be safer to build a write queue than to strip down a full
revision history mechanism.
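
To make that concrete, here is a rough sketch (TypeScript; all names are
purely illustrative, nothing here is from the writeup) of what I mean by a
write queue: local writes always succeed immediately, and the unsent queue
is flushed to the server in the background, with rejected records simply
re-queued for the next attempt.

// Hypothetical sketch of a client-side write queue: local writes always
// succeed, and outgoing changes are flushed in the background.

interface QueuedWrite {
  key: string;                 // record key, e.g. a bookmark GUID
  payload: unknown;            // new record body (or null for a delete)
  baseRevision: string | null; // revision this change was made against
}

class WriteQueue {
  private unsent: QueuedWrite[] = [];

  // Local mutation: record it and return; never blocks on the network.
  enqueue(write: QueuedWrite): void {
    this.unsent.push(write);
  }

  // Background flush: hand the batch to some sender (the Mediator, say);
  // anything the server rejects stays queued for the next attempt.
  async flush(
    send: (batch: QueuedWrite[]) => Promise<QueuedWrite[]>,
  ): Promise<void> {
    if (this.unsent.length === 0) return;
    const batch = this.unsent;
    this.unsent = [];
    const rejected = await send(batch);
    this.unsent = rejected.concat(this.unsent);
  }
}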

I don't quite understand the need for a client revision, as it looks like
the provider won't be using it.

For deletions, it looks like this should use the existing mechanism of
'_deleted' flags on records for the storage server (which needs to be aware
of remote deletions plus local changes). The provider just wipes any deleted
records, and the mediator keeps a deleted-key revision map until it
successfully sends it back to the provider. There doesn't seem to be any
concept of a 'successful' write / merge back to the provider from the
mediator, but I guess that's implied.

> The Mediator scans and removes any records from the unsent queue with the
same key (since these will surely fail), as well as from the sent queue
(since this will fail, if it hasn't already).

This is smart, but I'm not sure how the content-revision works here. The
provider just gets the 3 records and merges them; if the provider generates
the content revision, then the mediator can't pre-reject changes that happen
prior to the merge. That could lead to a race of changes if the provider
keeps attempting to write pre-merge, whereas if the mediator supplies the
client revision (or the provider at least returns it on merge), then we can
ignore any changes that happen pre-merge in the mediator.
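
Something like the following is what I have in mind, again just a sketch
with made-up names: the mediator bumps a client revision each time a merge
completes, the provider stamps its writes with the revision it last saw,
and anything stamped with a pre-merge revision is dropped rather than
retried against the server.

// Hypothetical sketch of dropping provider writes that race with a merge.

interface ProviderWrite {
  key: string;
  payload: unknown;
  clientRevision: number; // revision the provider was at when it wrote this
}

class MediatorRevisions {
  private lastMergedRevision = 0;

  // Called when the provider finishes merging a batch; the returned value
  // is handed to the provider so it can stamp subsequent writes.
  completeMerge(): number {
    this.lastMergedRevision += 1;
    return this.lastMergedRevision;
  }

  // Writes stamped with a revision older than the last merge were made
  // against pre-merge state and would surely fail, so ignore them.
  shouldQueue(write: ProviderWrite): boolean {
    return write.clientRevision >= this.lastMergedRevision;
  }
}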




On 30 July 2013 12:22, Andreas Gal <[email protected]> wrote:

>
> Not having to carry pouchdb on the client side is definitely tempting.
> Also, as we discussed earlier, CouchDB's replication algorithm is not a
> perfect fit for our star-shaped nodes. It's meant for a more interconnected
> graph. A single outgoing changes queue avoids carrying more history on the
> client than needed in the star-shaped graph.
>
> A second benefit is that this model fits pretty well with the existing
> interfaces we have in the browser for the datatypes we are talking about
> here. We have observers and mutators on all of them, and we can cheaply
> feed from observers into the table of local changes.
>
> On the flipside, this is a lot of new code to write and get right, and it's
> exactly the kind of tricky complex distributed state machine that will take
> the most time to debug and get ready for production. Also, if we implement
> our own replication mechanism, what is the advantage of sticking with the
> CouchDB wire protocol? It's actually rather clumsy and inefficient (see
> proposed jsondiff delta compression). I am not arguing for using something
> else than CouchDB. I am merely asking why you think it makes sense to stick
> with the wire protocol but abandon the higher level semantics of CouchDB.
>
> Andreas
>
>
> Brian Warner wrote:
>
>> Chris and I have been sketching out what our queue-sync idea[1] would
>> look like when run over the CouchDB API. The rough initial writeup is here:
>>
>>  
>> https://wiki.mozilla.org/Identity/CryptoIdeas/06-Queue-Sync-CouchDB
>>
>> (with some even rougher notes on an etherpad[2]).
>>
>> It lacks rigor, but should be enough to see where it's headed. The basic
>> idea is to use couch's "POST _bulk_docs" API (which is used internally by
>> the CouchDB replication machinery) to deliver batches of new records to the
>> server, some of which will be accepted, others which will be rejected (due
>> to other clients delivering their own changes first). We use the "GET
>> _changes" API to learn about all server changes, both reflections of our
>> own, and those from other clients. New changes are delivered to the local
>> Provider (aka engine) for merging into Places.db/etc.
>>
>> The "Mediator" is responsible for crypto, batching changes into efficient
>> bundles, all network traffic, and maintains a "revision table". This table
>> maps locally-generated "content-revisions" to server-generated
>> "server-revisions", keeping them isolated from servers and local Providers
>> respectively. These revisions help provide the previous-version value used
>> by compare-and-swap to reject new records that aren't based upon the
>> server's previous version (think hg or git push failing because you aren't
>> up-to-date).
>>
>> This doesn't use the couch replication system (POST /_replicate), nor
>> does it embed a copy of CouchDB/PouchDB in the browser. It just uses couch
>> on the server, and speaks the couch API. This seems like a decent way to
>> get the benefits of a well-tested API and server implementation, without
>> taking on the code-size or runtime costs of having a full CouchDB instance
>> inside the browser.
>>
>> Let us know what you think!
>>  -Brian
>>
>>
>> [1]: 
>> https://wiki.mozilla.org/Identity/CryptoIdeas/05-Queue-Sync
>> [2]: 
>> https://id.etherpad.mozilla.org/picl-couchdb-queuesync-notes
>
_______________________________________________
Sync-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/sync-dev
