The problem is that you need, in a way, to break 9p. You need readahead, you need to bundle requests, and you need to cache in a very careful way. This caching is most effective if you can map several 9p fids to the same cache entry (i.e., to the same "fid" in the server). In the end, it was simpler to make two 9p-to-op and op-to-fs processes and keep the dialog between them secret from the rest of the world.
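The fid-to-cache mapping can be sketched roughly like this. This is a toy Python model, not code from Op or any real 9p server; the names (`CacheEntry`, `FidTable`) and the idea of keying the cache by server path are illustrative assumptions — a real implementation would key by qid:

```python
# Toy sketch: several client-side fids that refer to the same server
# file share one cache entry. All names here are hypothetical.

class CacheEntry:
    def __init__(self, path):
        self.path = path      # identifies the file on the server
        self.blocks = {}      # offset -> cached data block
        self.refs = 0         # number of client fids mapped here

class FidTable:
    def __init__(self):
        self.fids = {}        # client fid -> CacheEntry
        self.cache = {}       # server path -> CacheEntry

    def attach(self, fid, path):
        # Map a new client fid onto an existing entry when another
        # fid already refers to the same server file, so cached
        # reads through either fid hit the same blocks.
        entry = self.cache.get(path)
        if entry is None:
            entry = self.cache[path] = CacheEntry(path)
        entry.refs += 1
        self.fids[fid] = entry
        return entry

t = FidTable()
a = t.attach(1, "/usr/glenda/lib/profile")
b = t.attach(2, "/usr/glenda/lib/profile")
assert a is b and a.refs == 2   # two fids, one cache entry
```

The point of the shared entry is that a read cached through one fid satisfies a later read through another fid to the same file, which a strict one-cache-per-fid design would miss.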
On 9/9/07, erik quanstrom <[EMAIL PROTECTED]> wrote:
> > We already agreed on a solution. Nobody is interested in implementing it.
> >
> > On 9/8/07, Uriel <[EMAIL PROTECTED]> wrote:
> >> > > have you compared its performance to webdav?
> >> >
> >> > I don't have any numbers with me, but I would expect 9P to work
> >> > faster than WebDAV since 9P works one layer below HTTP.
> >> > Implementation details aside, header overhead in itself makes WebDAV
> >> > less of a competitor.
> >>
> >> Maybe, if you have ridiculously low latency. Which is one of the
> >> reasons I would like to see the latency issues addressed, so 9P
> >> services can work well over non-LAN networks. Maybe we can finally
> >> agree on a solution for this at this year's IWP9?
>
> this topic has come up before. i'm not sure i have a clear picture of the
> problem. would someone give a concrete example?
>
> without really knowing what the problem is, there is one big thing that
> 9p clients traditionally don't do that would be enormously helpful for
> larger files -- readahead. there's nothing in 9p that prevents a client from
> having R outstanding reads at the same time. if l is the rtt latency and
> s is the avg time it takes the fs to service a request, we can try to pick a
> (reasonable) number of outstanding requests R s.t. Rs ≥ l. even if we
> can't, N outstanding should reduce the latency penalty for N packets
> to l, not Nl.
>
> if instead the problem is lots of little files, the proposals i've seen
> are something like bundles of 9p requests sent en masse to the fs.
> how about the opposite? why not bring the blocks en masse to the client?
> the remote fs could be treated as a block storage device (we know how
> to do readahead on these things) and the "fs" could be run locally.
> the "fs" could be a mkfs archive, mbox format, a fossil fs or whatever.
>
> unfortunately, if the problem is fine-grained, highly concurrent access,
> readahead just won't work.
>
> - erik
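erik's arithmetic can be checked with a toy latency model. This is my own sketch, not anything from the thread: the function names are made up, and the assumption that a reply arrives every max(s, l/R) once the pipe fills is a simplification (it ignores bandwidth and queueing). It does show the claimed effect: once Rs ≥ l, the round-trip latency l is paid roughly once rather than N times.

```python
# Toy model of serial vs. pipelined 9p reads.
# l = round-trip latency, s = avg per-request service time,
# n = number of requests, r = outstanding requests (readahead depth).
# All assumptions are mine; this is illustrative, not a protocol model.

def serial_time(n, l, s):
    # One request at a time: every request pays the full round trip.
    return n * (l + s)

def pipelined_time(n, l, s, r):
    # With r requests in flight, once the pipe fills a new reply
    # arrives every max(s, l/r); if r*s >= l the server is the
    # bottleneck and total time approaches l + n*s.
    gap = max(s, l / r)
    return l + s + (n - 1) * gap

l, s, n = 100.0, 1.0, 50
assert serial_time(n, l, s) == 5050.0          # pays l on every request
full = pipelined_time(n, l, s, r=128)          # r*s >= l: pipe stays full
assert full == l + s + (n - 1) * s             # == 150.0: l paid once
```

Even with too few outstanding requests to keep the server busy (Rs < l), the model still cuts the latency penalty: `pipelined_time(50, 100.0, 1.0, r=4)` is 1326.0 against 5050.0 serial, which matches erik's weaker claim that N outstanding reads bound the penalty near l rather than Nl.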
