On 8/8/07, ron minnich <[EMAIL PROTECTED]> wrote:
> On 8/8/07, Charles Forsyth <[EMAIL PROTECTED]> wrote:
> > > Fix is to give it lots of dma segments so that you can stay ahead of
> > > the traffic.
> >
> > but is that then guaranteed, or just a matter of luck?
>
>
> That's a great question. I believe it is a matter of luck. But if you
> get the interrupt, and you put the new pointers in the dma struct
> "quickly", and it has not wrapped around, well, you're in luck!
>
> ron
> p.s. Actually, it does not matter, lguest is going to replace its I/O
> with the paravirt IO standard ops in the next release.
>
That really doesn't seem right. I looked more at the block and the
console code, and it seems like there should be code which "claims"
posted DMA buffers, after which the guest has to re-post them.
I have three different designs for the 9p transport on lguest. I
think I shall do them all just so I can add a "choose your own
adventure" style to Rusty's documentation.
a) follow the console driver style and just shuffle data over a
named pipe/socket to the 9p server
b) follow the libos style and just use a shared memory buffer
posted in dev->mem that is mmapped from shared memory with the server
c) use dev->mem to store fcall slots and use the DMA buffers to
shuffle the payload -- this should be the optimal zero-copy case
(a) can theoretically target any 9p server, (b) will work against
inferno-tx, and (c) will work against a modified spfs and will require
some pretty heavy modifications to v9fs (no need to marshal the
fcall, just stick it in a struct in shared memory). The idea is to
compare the performance of the three approaches and see just how much
cost (performance and complexity) is involved in each. I should have
(a) done in a matter of hours.
-eric