On Sat, 19 Feb 2011 10:09:08 EST erik quanstrom <[email protected]>  wrote:
> > It is inherent to 9p (and RPC).
> 
> please defend this.  i don't see any evidence for this bald claim.

We went over latency issues multiple times in the past, but
let us take your 80ms latency. You can get at most 12.5 RPC
calls through per second, even if you take 0 seconds to
generate & process each request & response. If each call
transfers 64K, you get a throughput of at most 800KB/sec.
If you pipeline your requests, without waiting for each
reply, you are streaming. To avoid `streaming' you can set
up N parallel connections, but that again adds a lot of
complexity to a relatively simple problem.
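The arithmetic above can be sketched directly (a
back-of-envelope model only; the 80ms RTT and 64K message
size come from the example, the N-outstanding scaling is my
assumption):

```python
# Throughput ceiling of strictly serial 64K RPCs over an 80ms RTT.
RTT = 0.080          # seconds, the cross-U.S. round trip from the example
MSIZE = 64 * 1024    # bytes moved per call

calls_per_sec = 1.0 / RTT            # even with zero processing time
serial_bw = calls_per_sec * MSIZE    # bytes/sec

print(calls_per_sec)                 # 12.5
print(serial_bw / 1024)              # 800.0 KB/sec

# With N parallel connections (or N outstanding requests),
# the ceiling scales linearly, ignoring link capacity:
def ceiling(n_outstanding, rtt=RTT, msize=MSIZE):
    return n_outstanding * msize / rtt

print(ceiling(8) / 1024)             # 6400.0 KB/sec
```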

> > I think it is worth looking at a successor protocol instead
> > of just minimally fixing up 9p (a clean slate approach frees
> > up your mind.  You can then merge the two later).
> 
> what is the goal?

Better handling of latency at a minimum?  If I were to do
this I would experiment with extending the channel concept.

>                    without a clear problem to solve that you
> can build a system around, i don't see the point.  making replica
> fast doesn't seem like sufficient motivation to me at all.

No. I just use Ron's hg repo now.

> 2.  at cross-u.s. latencies of 80ms, serial file operations like
> mk can be painful.  if bringing the work to the data doesn't work
> or still takes too many rtts, the only solution i see is to cache everything.
> and try to manage coherence vs. latency.

For things like a remote copy of a whole bunch of files,
caching is not going to help you much, but streaming will.
So will increasing parallelism (up to a point). Compression
might.
