Rusty Russell wrote:
> On Wed, 2007-08-22 at 02:26 -0700, Dor Laor wrote:
>
>> Actually while playing with virtio for kvm Avi saw that and recommended
>> to do the following:
>> struct desc_pages
>> {
>> /* Page of descriptors. */
>> union {
>> struct virtio_desc desc[NUM_
On Wed, 2007-08-22 at 02:26 -0700, Dor Laor wrote:
> Actually while playing with virtio for kvm Avi saw that and recommended
> to do the following:
> struct desc_pages
> {
> /* Page of descriptors. */
> union {
> struct virtio_desc desc[NUM_DESCS];
> char pad
>> Actually while playing with virtio for kvm Avi saw that and recommended
>> to do the following:
>> struct desc_pages
>> {
>> /* Page of descriptors. */
>> union {
>> struct virtio_desc desc[NUM_DESCS];
>> char pad1[PAGE_SIZE];
>> };
>[...]
>
>Fine with m
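For reference, the layout Avi is suggesting comes out roughly as below. This is only a sketch pieced together from the snippets quoted in this thread (NUM_DESCS, PAGE_SIZE, struct virtio_desc and the avail_idx/available fields are all taken from the mails); the second union simply applies the same padding trick to the "available" page.

        struct desc_pages
        {
                /* First page: the descriptors, padded out to a whole page so
                 * the guest-written bookkeeping below starts page-aligned. */
                union {
                        struct virtio_desc desc[NUM_DESCS];
                        char pad1[PAGE_SIZE];
                };

                /* Second page: how we tell the other side what buffers are
                 * available (only the guest writes this page). */
                union {
                        struct {
                                unsigned int avail_idx;
                                unsigned int available[NUM_DESCS];
                        };
                        char pad2[PAGE_SIZE];
                };
        };

Keeping each writer's fields on their own page also lines up with the cache-line argument made elsewhere in the thread.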
On Wednesday, 22 August 2007, Dor Laor wrote:
> Actually while playing with virtio for kvm Avi saw that and recommended
> to do the following:
> struct desc_pages
> {
> /* Page of descriptors. */
> union {
> struct virtio_desc desc[NUM_DESCS];
> char pad1[PA
>> +struct desc_pages
>> +{
>> +/* Page of descriptors. */
>> +struct lguest_desc desc[NUM_DESCS];
>> +
>> +/* Next page: how we tell other side what buffers are available. */
>> +unsigned int avail_idx;
>> +unsigned int available[NUM_DESCS];
>> +char pad[PAGE_SIZE - (NUM_D
On Wednesday, 22 August 2007, Rusty Russell wrote:
> +struct desc_pages
> +{
> + /* Page of descriptors. */
> + struct lguest_desc desc[NUM_DESCS];
> +
> + /* Next page: how we tell other side what buffers are available. */
> + unsigned int avail_idx;
> + unsigned int available
On Tue, 2007-08-21 at 12:47 -0400, Gregory Haskins wrote:
> On Tue, 2007-08-21 at 10:06 -0400, Gregory Haskins wrote:
> > On Tue, 2007-08-21 at 23:47 +1000, Rusty Russell wrote:
> > >
> > > In the guest -> host direction, an interface like virtio is designed
> > > for batching, with the explicit
On Tue, 2007-08-21 at 20:12 +0300, Avi Kivity wrote:
> No, sync() means "make the other side aware that there's work to be done".
>
Ok, but still the important thing isn't the kick per se, but the
resulting completion. Can we do interrupt-driven reclamation? Some
of those virtio_net emails I
Gregory Haskins wrote:
> On Tue, 2007-08-21 at 10:06 -0400, Gregory Haskins wrote:
>
>> On Tue, 2007-08-21 at 23:47 +1000, Rusty Russell wrote:
>>
>>> In the guest -> host direction, an interface like virtio is designed
>>> for batching, with the explicit distinction between add_buf & s
On Tue, 2007-08-21 at 10:06 -0400, Gregory Haskins wrote:
> On Tue, 2007-08-21 at 23:47 +1000, Rusty Russell wrote:
> >
> > In the guest -> host direction, an interface like virtio is designed
> > for batching, with the explicit distinction between add_buf & sync.
>
> Right. IOQ has "iter_pu
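The add_buf/sync split being discussed amounts to the pattern below: publishing a buffer is cheap and can be done many times, while the notification is expensive and done once per batch. A sketch only; add_buf() and sync() are the names used in the thread, the surrounding types and signatures are illustrative.

        struct vring;                   /* the shared ring */
        struct buffer;                  /* whatever the driver transmits */

        void add_buf(struct vring *vr, struct buffer *buf); /* publish one buffer */
        void sync(struct vring *vr);    /* kick: make the other side look */

        static void xmit_batch(struct vring *vr, struct buffer *bufs[], int n)
        {
                int i;

                for (i = 0; i < n; i++)
                        add_buf(vr, bufs[i]);   /* just writes descriptors */

                sync(vr);       /* one hypercall/interrupt for the whole batch */
        }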
On Tue, 2007-08-21 at 23:47 +1000, Rusty Russell wrote:
> Hi Gregory,
>
> The main current use is disk drivers: they process out-of-order.
Maybe for you ;) I am working on the networking/IVMC side.
>
> > I think the use of rings for the tx-path in and of
> > itself is questionable unless
On Tue, 2007-08-21 at 08:00 -0400, Gregory Haskins wrote:
> On Tue, 2007-08-21 at 17:58 +1000, Rusty Russell wrote:
>
> > Partly the horror of the code, but mainly because it is an in-order
> > ring. You'll note that we use a reply ring, so we don't need to know
> > how much the other side has co
On Tue, 2007-08-21 at 15:25 +0300, Avi Kivity wrote:
> Gregory Haskins wrote:
> > On Tue, 2007-08-21 at 17:58 +1000, Rusty Russell wrote:
> >
> >
> >> Partly the horror of the code, but mainly because it is an in-order
> >> ring. You'll note that we use a reply ring, so we don't need to know
>
Rusty Russell wrote:
> Partly the horror of the code, but mainly because it is an in-order
> ring. You'll note that we use a reply ring, so we don't need to know
> how much the other side has consumed (and it needn't do so in order).
>
Yes, it's quite nice: by using two in-order rings, you get
Gregory Haskins wrote:
> On Tue, 2007-08-21 at 17:58 +1000, Rusty Russell wrote:
>
>
>> Partly the horror of the code, but mainly because it is an in-order
>> ring. You'll note that we use a reply ring, so we don't need to know
>> how much the other side has consumed (and it needn't do so in or
On Tue, 2007-08-21 at 17:58 +1000, Rusty Russell wrote:
> Partly the horror of the code, but mainly because it is an in-order
> ring. You'll note that we use a reply ring, so we don't need to know
> how much the other side has consumed (and it needn't do so in order).
>
I have certainly been kn
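One way to read the reply-ring argument, sketched out (the names and the reuse of NUM_DESCS are illustrative): each side only ever writes its own ring and its own producer index, and a cookie in each entry lets the other side complete requests out of order without either side tracking the other's consumption.

        #define NUM_DESCS 256           /* example size */

        struct ring_entry {
                unsigned long addr;
                unsigned int len;
                unsigned int cookie;    /* matches a reply to its request */
        };

        /* One direction of traffic: only the producer touches this. */
        struct one_way_ring {
                unsigned int prod;
                struct ring_entry ring[NUM_DESCS];
        };

        struct channel {
                struct one_way_ring request;    /* guest writes, host reads */
                struct one_way_ring reply;      /* host writes, guest reads */
        };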
On Tue, 2007-08-21 at 00:33 -0700, Dor Laor wrote:
> >> > Well, for cache reasons you should really try to avoid having both
> >> > sides write to the same data. Hence two separate cache-aligned
> >> > regions is better than one region and a flip bit.
> >>
> >> While I certainly can see what
>> > Well, for cache reasons you should really try to avoid having both
>> > sides write to the same data. Hence two separate cache-aligned
>> > regions is better than one region and a flip bit.
>>
>> While I certainly can see what you mean about the cache implications
>> for a bit-flip design,
On Fri, 2007-08-17 at 09:50 -0400, Gregory Haskins wrote:
> On Fri, 2007-08-17 at 17:43 +1000, Rusty Russell wrote:
> > Well, for cache reasons you should really try to avoid having both sides
> > write to the same data. Hence two separate cache-aligned regions is
> > better than one region and a
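Rusty's cache point, sketched: rather than a single flag word that both sides flip, give each writer its own cache-line-aligned field so the two sides never dirty the same line. ____cacheline_aligned is the stock kernel annotation; the field names are illustrative.

        #include <linux/cache.h>

        struct shared_state {
                /* Written only by the guest. */
                unsigned int guest_prod ____cacheline_aligned;

                /* Written only by the host. */
                unsigned int host_prod ____cacheline_aligned;
        };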
On Mon, 2007-08-20 at 17:12 +0300, Avi Kivity wrote:
> Dor Laor wrote:
> > Using Rusty's code there is no waste.
> > Each descriptor has a flag (head|next). Next flag stands for pointer to
> > the
> > next descriptor with u32 next index. So the waste is 4 bytes.
> > Sg descriptors are chained on th
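Dor's description maps onto something like the following (sketch only: the u32 next index and the head/next flag come from his mail, the other fields are illustrative):

        #define DESC_F_NEXT     1       /* another descriptor continues this buffer */

        struct chained_desc {
                unsigned long long addr;        /* guest-physical buffer address */
                unsigned int len;
                unsigned int flags;             /* head | next */
                unsigned int next;              /* index of the next descriptor when
                                                 * DESC_F_NEXT is set: the 4 bytes
                                                 * of "waste" per entry */
        };

A scatter-gather buffer is then just a chain of these, terminated by the first descriptor without DESC_F_NEXT.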
On Mon, 2007-08-20 at 07:03 -0700, Dor Laor wrote:
> >> > 2) We either need huge descriptors or some chaining mechanism to
> >> > handle scatter-gather.
> >> >
> >>
> >> Or, my preference, have a small sglist in the descriptor;
> >
> >
> >Define "small" ;)
> >
> >There are certainl
Gregory Haskins wrote:
>>>
>>>
>> Or, my preference, have a small sglist in the descriptor;
>>
>
>
> Define "small" ;)
>
4.
> There are certainly patterns that cannot/will-not take advantage of SG
> (for instance, your typical network rx path), and therefore the sg
> entries are w
Dor Laor wrote:
>>>> 2) We either need huge descriptors or some chaining mechanism to
>>>> handle scatter-gather.
>>> Or, my preference, have a small sglist in the descriptor;
>>>
>> Define "small" ;)
>>
>> There are certainly pattern
>> > 2) We either need huge descriptors or some chaining mechanism to
>> > handle scatter-gather.
>> >
>>
>> Or, my preference, have a small sglist in the descriptor;
>
>
>Define "small" ;)
>
>There are certainly patterns that cannot/will-not take advantage of SG
>(for instance, your
On Sun, 2007-08-19 at 12:24 +0300, Avi Kivity wrote:
> Rusty Russell wrote:
> > 2) We either need huge descriptors or some chaining mechanism to
> > handle scatter-gather.
> >
>
> Or, my preference, have a small sglist in the descriptor;
Define "small" ;)
There are certainly pa
Rusty Russell wrote:
> 2) We either need huge descriptors or some chaining mechanism to
> handle scatter-gather.
>
Or, my preference, have a small sglist in the descriptor; if the buffer
doesn't fit in the sglist follow a pointer and size (stored in the same
place as the immed
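Avi's suggestion, as I read it, sketched out below. The inline count of 4 comes from his "4." answer to "Define small" elsewhere in the thread; everything else (names, the INDIRECT flag) is illustrative.

        #define INLINE_SG       4               /* "small" */

        struct sg_entry {
                unsigned long long addr;
                unsigned int len;
        };

        struct fat_desc {
                unsigned int flags;             /* e.g. INDIRECT when nsg > INLINE_SG */
                unsigned int nsg;               /* total sg entries for this buffer */
                union {
                        /* Common case: the sglist fits in the descriptor. */
                        struct sg_entry sg[INLINE_SG];
                        /* Otherwise: pointer + size of an external sglist,
                         * stored in the same place as the immediate one. */
                        struct {
                                unsigned long long ext_addr;
                                unsigned int ext_len;
                        };
                };
        };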
On Fri, 2007-08-17 at 17:43 +1000, Rusty Russell wrote:
> Sure, these discussions can get pretty esoteric. The question is
> whether you want a point-to-point transport (as we discuss here), or an
> N-way. Lguest has N-way, but I'm not convinced it's worthwhile, as
> there's some overhead
On Fri, 2007-08-17 at 01:26 -0400, Gregory Haskins wrote:
> Hi Rusty,
>
> Comments inline...
>
> On Fri, 2007-08-17 at 11:25 +1000, Rusty Russell wrote:
> >
> > Transport has several parts. What the hypervisor knows about (usually
> > shared memory and some interrupt mechanism and possibly "DM
Hi Rusty,
Comments inline...
On Fri, 2007-08-17 at 11:25 +1000, Rusty Russell wrote:
>
> Transport has several parts. What the hypervisor knows about (usually
> shared memory and some interrupt mechanism and possibly "DMA") and what
> is convention between users (eg. ringbuffer layouts). Whet
On Thu, 2007-08-16 at 19:13 -0400, Gregory Haskins wrote:
> Here is the v3 release of the patch series for a generalized PV-IO
> infrastructure. It has v2 plus the following changes:
Hi Gregory,
This is a lot of code. I'm having trouble taking it all in, TBH. It
might help me if we cou
Here is the v3 release of the patch series for a generalized PV-IO
infrastructure. It has v2 plus the following changes:
1) The big change is that PVBUS is now based on the bus/device_register
APIs. The code is inspired by the lguest_bus except it has been decoupled
from the hypervisor.
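For anyone not familiar with the driver-core hook-up being referred to, here is a minimal sketch of what "based on the bus/device_register APIs" means (the pvbus names are illustrative, not Gregory's actual code):

        #include <linux/device.h>
        #include <linux/init.h>

        static int pvbus_match(struct device *dev, struct device_driver *drv)
        {
                /* Decide whether drv can handle dev; real code matches on an id. */
                return 1;
        }

        static struct bus_type pvbus = {
                .name   = "pvbus",
                .match  = pvbus_match,
        };

        static int __init pvbus_init(void)
        {
                return bus_register(&pvbus);
        }
        subsys_initcall(pvbus_init);

Individual virtual devices are then added with device_register() against that bus, and drivers bind through the normal driver core rather than through anything hypervisor-specific.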