On Nov 19, 8:43 am, Daniel Paull <[email protected]> wrote:
> I'll have to think about this a bit more, but I can see that the usual
> TP2 puzzles do not apply in the wave case, even when multiple clients
> submit changes to the server concurrently as the server ensures that
> all clients see all changes in a consistent order.  The common TP2
> puzzles occur when operations are received in different orders at
> nodes in a peer-to-peer network.
>
> The part I'm not sure about though, is the claim that only a single
> state space is used by the server and how this relates to when
> acknowledgements are sent from the server to the client.  In the end
> I'm trying to think about whether the introduction of the server
> acknowledgement means that latency between clients (ie, how long it
> takes operations performed on one client to be seen on another) is
> related to the number of concurrent operations performed by clients.
> The OT white paper claims that the latency is constant, though it does
> not specifically talk about concurrent operations by clients when
> making that claim.

My understanding of the Google Wave Protocol comes only from briefly
reading their white paper on OT, so I hope the following is accurate:

Consider one server and 3 clients that all start in the same initial
state (i.e. from quiescence).  Let the server already contain n
operations in its linear history buffer (HB).  Let the 3 clients
concurrently generate operations O1, O2, O3 respectively.  This leads
to the cube picture, with the 3 operations extending out from a common
starting point.
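
To make the cube concrete, here is a minimal sketch in a toy model of
my own (character inserts only, with a site id to break ties — not the
Wave implementation).  With this IT function, the two paths around one
face of the cube converge, which is the TP1 condition; TP2 is the
analogous condition for the cube itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ins:
    """Toy operation: insert character `ch` at index `pos`; `site` breaks ties."""
    pos: int
    ch: str
    site: int

def apply_op(op: Ins, doc: str) -> str:
    return doc[:op.pos] + op.ch + doc[op.pos:]

def it(o: Ins, b: Ins) -> Ins:
    """Inclusion-transform insert `o` against the already-applied insert `b`."""
    if b.pos < o.pos or (b.pos == o.pos and b.site < o.site):
        return Ins(o.pos + 1, o.ch, o.site)
    return o

# Two concurrent inserts from the same state "ab": both paths around
# the square converge on the same document (TP1).
doc = "ab"
o1, o2 = Ins(1, "x", 1), Ins(1, "y", 2)
path_a = apply_op(it(o2, o1), apply_op(o1, doc))  # O1 then O2' = IT(O2, O1)
path_b = apply_op(it(o1, o2), apply_op(o2, doc))  # O2 then O1' = IT(O1, O2)
assert path_a == path_b == "axyb"
```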

In a system that satisfies TP2, all paths around the cube must
converge on the opposite corner.  A system that avoids TP2 does so by
constraining which path is taken.  For example, in the Google Wave
Protocol the server may happen to add the operations to its linear HB
in the order O1, O3, O2.  This order probably depends on the order in
which the operations happen to have been received from the clients,
even though they were sent roughly in parallel.

Each time the server receives an operation, it is always able to IT
the operation against some tail of its linear HB.  For example, when
O2 arrives it will carry sequence number n, the number of operations
in the server's HB that are in its context.  The server will IT
against the tail [begin()+n, end()), which in this scenario will be
[O1, O3'] where O3' = IT(O3, O1).  This, in fact, is why the Google
Wave protocol is able to use simple sequence numbers instead of
vector clocks.
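
That server loop can be sketched in the same toy insert-only model
(Ins, apply_op and it are illustrative helpers of mine, not Wave
code): each arriving operation is transformed against the tail hb[n:]
and the result appended, so the HB stays linear:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ins:
    pos: int   # insertion index
    ch: str    # character to insert
    site: int  # client id, used to break ties

def apply_op(op: Ins, doc: str) -> str:
    return doc[:op.pos] + op.ch + doc[op.pos:]

def it(o: Ins, b: Ins) -> Ins:
    """Inclusion-transform insert `o` against the already-applied insert `b`."""
    if b.pos < o.pos or (b.pos == o.pos and b.site < o.site):
        return Ins(o.pos + 1, o.ch, o.site)
    return o

class Server:
    def __init__(self, doc: str = ""):
        self.doc = doc
        self.hb: list[Ins] = []  # linear history buffer

    def receive(self, op: Ins, n: int) -> Ins:
        # n is the op's sequence number: the first n entries of the HB
        # are already in its context, so IT only against the tail.
        for applied in self.hb[n:]:
            op = it(op, applied)
        self.hb.append(op)
        self.doc = apply_op(op, self.doc)
        return op

# Three clients generate O1, O2, O3 concurrently from "ab" (n = 0);
# the server happens to receive them in the order O1, O3, O2.
s = Server("ab")
s.receive(Ins(0, "1", 1), 0)        # O1: tail is empty, stored as-is
o3p = s.receive(Ins(2, "3", 3), 0)  # O3' = IT(O3, O1)
o2p = s.receive(Ins(1, "2", 2), 0)  # O2' = IT(O2, [O1, O3'])
```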

When the server wants to update a client (which it is allowed to do
at any time), it simply sends the tail of its HB which it knows the
client doesn't yet have.  This is fully deterministic because the
client isn't allowed to receive operations from any other machine.
For example, the client that sent O1 could be sent the tail
[O3', O2'] where O2' = IT(O2, [O1, O3']).  Note that it would also be
possible to send O3' on its own, as soon as it was first appended to
the HB, so the server can be very aggressive in pushing its
operations out to clients.  A client can always IT the received
operations against the list of local operations that it has not yet
been allowed to send to the server.  This is indeed why the client
must wait for the server's acknowledgement before sending more.
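
The client side can be sketched in the same toy model (again my own
illustrative helpers, not Wave code): an incoming server operation is
transformed past each pending local operation, and each pending
operation is shifted past it in turn, so the pending list stays
correct for when the client is later allowed to send it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ins:
    pos: int   # insertion index
    ch: str    # character to insert
    site: int  # client id, used to break ties

def apply_op(op: Ins, doc: str) -> str:
    return doc[:op.pos] + op.ch + doc[op.pos:]

def it(o: Ins, b: Ins) -> Ins:
    """Inclusion-transform insert `o` against the already-applied insert `b`."""
    if b.pos < o.pos or (b.pos == o.pos and b.site < o.site):
        return Ins(o.pos + 1, o.ch, o.site)
    return o

class Client:
    def __init__(self, doc: str = ""):
        self.doc = doc
        self.pending: list[Ins] = []  # local ops not yet sent/acknowledged

    def local_edit(self, op: Ins) -> None:
        self.pending.append(op)
        self.doc = apply_op(op, self.doc)

    def receive(self, server_op: Ins) -> None:
        # IT the incoming op past each pending local op, and shift each
        # pending op past the incoming one, so later sends stay correct.
        for i, local in enumerate(self.pending):
            self.pending[i] = it(local, server_op)
            server_op = it(server_op, local)
        self.doc = apply_op(server_op, self.doc)

# Client edits "ab" locally while a concurrent remote op (also made
# against "ab") arrives from the server.
c = Client("ab")
c.local_edit(Ins(0, "x", 9))  # pending, not yet sent to the server
c.receive(Ins(1, "y", 1))     # remote insert from another client
```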

Note BTW that Rui Li and Du Li point out that any system that doesn't
satisfy TP2 doesn't actually preserve intention correctly in certain
cases.  It is possible that these cases don't occur frequently enough
to be considered a problem in practice.


You received this message because you are subscribed to the Google Groups "Wave Protocol" group.