Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-16 Thread Rusty Russell
On Friday 10 October 2008 06:26:25 Anthony Liguori wrote:
 Mark McLoughlin wrote:
  Also, including virtio_net_hdr in the data buffer would need another
  feature flag. Rightly or wrongly, KVM's implementation requires
  virtio_net_hdr to be the first buffer:
 
  if (elem.in_num < 1 || elem.in_sg[0].iov_len != sizeof(*hdr)) {
      fprintf(stderr, "virtio-net header not in first element\n");
      exit(1);
  }
 
  i.e. it's part of the ABI ... at least as KVM sees it :-)

 This is actually something that's broken in a nasty way.  Having the
 header in the first element is not supposed to be part of the ABI but it
 sort of has to be ATM.

 If an older version of QEMU were to use a newer kernel, and the newer
 kernel had a larger header size, then if we just made the header be the
 first X bytes, QEMU has no way of knowing how many bytes that should be.
   Instead, the guest actually has to allocate the virtio-net header in
 such a way that it only presents the size depending on the features that
 the host supports.  We don't use a simple versioning scheme, so you'd
 have to check for a combination of features advertised by the host but
 that's not good enough because the host may disable certain features.

 Perhaps the header size is whatever the longest element that has been
 commonly negotiated?

Yes.  The feature implies the header extension.  Not knowing implies no 
extension is possible.

Rusty.


Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-15 Thread Rusty Russell
On Friday 10 October 2008 02:30:35 Herbert Xu wrote:
 On Thu, Oct 09, 2008 at 11:55:59AM +1100, Rusty Russell wrote:
  Secondly, we can put the virtio_net_hdr at the head of the skb data (this
  is also worth considering for xmit I think if we have headroom) and drop
  MAX_SKB_FRAGS which contains a gratuitous +2.

 That's fine but having skb->data in the ring still means two
 different kinds of memory in there and it sucks when you only
 have 1500-byte packets.

No, you really want to do this for 1500 byte packets since it increases the 
effective space in the ring.  Unfortunately, Mark points out that kvm assumes 
the header is standalone: Anthony and I discussed this a while back and 
decided it *wasn't* a good assumption.

TODO: YA feature bit...

 We need a scheme that handles both 1500-byte packets as well
 as 64K-byte size ones, and without holding down 16M of memory
 per guest.

Ah, thanks for that.  It's not so much ring entries, as guest memory you're 
trying to save.  That makes much more sense.

  + char *p = page_address(skb_shinfo(skb)->frags[0].page);
 
  ...
 
   + memcpy(hdr, p, sizeof(*hdr));
   + p += sizeof(*hdr);
 
  I think you need kmap_atomic() here to access the page.  And yes, that
  will affect performance :(

 No we don't.  kmap would only be necessary for highmem which we
 did not request.

Good point.  Could you humor me with a comment to that effect?  Prevents me 
making the same mistake again.
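
For illustration, the sort of comment I have in mind, wrapped around the code
quoted above (a sketch only, not the actual patch; it assumes the fragment
pages were allocated with plain GFP_ATOMIC, i.e. without __GFP_HIGHMEM):

    #include <linux/mm.h>
    #include <linux/skbuff.h>
    #include <linux/string.h>
    #include <linux/virtio_net.h>

    /* Hypothetical helper mirroring the code above: pull the
     * virtio_net_hdr out of the first fragment page.  The receive pages
     * are allocated without __GFP_HIGHMEM, so they are permanently
     * mapped and page_address() is safe here -- no kmap_atomic()
     * needed. */
    static void pull_hdr_from_frag(struct sk_buff *skb,
                                   struct virtio_net_hdr *hdr)
    {
        char *p = page_address(skb_shinfo(skb)->frags[0].page);

        memcpy(hdr, p, sizeof(*hdr));
    }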

Thanks!
Rusty.
PS.  Laptop broke, was MIA for a week.  Working overtime now.


Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-10 Thread Mark McLoughlin
On Thu, 2008-10-09 at 14:26 -0500, Anthony Liguori wrote:
 Mark McLoughlin wrote:
  
  Also, including virtio_net_hdr in the data buffer would need another
  feature flag. Rightly or wrongly, KVM's implementation requires
  virtio_net_hdr to be the first buffer:
  
  if (elem.in_num < 1 || elem.in_sg[0].iov_len != sizeof(*hdr)) {
      fprintf(stderr, "virtio-net header not in first element\n");
      exit(1);
  }
  
  i.e. it's part of the ABI ... at least as KVM sees it :-)
 
 This is actually something that's broken in a nasty way.  Having the 
 header in the first element is not supposed to be part of the ABI but it 
 sort of has to be ATM.
 
 If an older version of QEMU were to use a newer kernel, and the newer 
 kernel had a larger header size, then if we just made the header be the 
 first X bytes, QEMU has no way of knowing how many bytes that should be. 
   Instead, the guest actually has to allocate the virtio-net header in 
 such a way that it only presents the size depending on the features that 
 the host supports.  We don't use a simple versioning scheme, so you'd 
 have to check for a combination of features advertised by the host but 
 that's not good enough because the host may disable certain features.
 
 Perhaps the header size is whatever the longest element that has been 
 commonly negotiated?
 
 So that's why this aggressive check is here.  Not to necessarily cement 
 this into the ABI but as a way to make someone figure out how to 
 sanitize this all.

Well, features may be orthogonal but they are still added sequentially
to the ABI. So, you would have a kind of implicit ABI versioning, while
still allowing individual selection of features.

e.g. if NET_F_FOO adds 'int foo' to the header and NET_F_BAR later adds
'int bar', then if only NET_F_FOO is negotiated the guest should send a
header with just foo, but if NET_F_FOO|NET_F_BAR or NET_F_BAR alone is
negotiated, the guest sends a header with both foo and bar.

Or to put it another way, a host or guest may not implement NET_F_FOO, but
knowledge of the foo header field is part of the ABI of NET_F_BAR.
That knowledge would be as simple as knowing that the field exists and
that it should be ignored if the feature isn't used.
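
To make that concrete, here is a hypothetical guest-side sketch.  NET_F_FOO,
NET_F_BAR, the extended struct and the helper are all made up for
illustration -- they are not real virtio features:

    #include <linux/stddef.h>
    #include <linux/types.h>
    #include <linux/virtio_net.h>

    /* Imaginary feature bits, purely for illustration. */
    #define NET_F_FOO  10
    #define NET_F_BAR  11

    /* Hypothetical extended header: each feature appends a field. */
    struct virtio_net_hdr_ext {
        struct virtio_net_hdr hdr;
        __u32 foo;  /* defined along with NET_F_FOO */
        __u32 bar;  /* defined later, along with NET_F_BAR */
    };

    /* Size of the header the guest presents, given the negotiated
     * features.  Knowing about NET_F_BAR implies knowing that foo's slot
     * exists, even if NET_F_FOO itself wasn't negotiated. */
    static size_t net_hdr_size(unsigned long features)
    {
        if (features & (1UL << NET_F_BAR))
            return offsetof(struct virtio_net_hdr_ext, bar) + sizeof(__u32);
        if (features & (1UL << NET_F_FOO))
            return offsetof(struct virtio_net_hdr_ext, foo) + sizeof(__u32);
        return sizeof(struct virtio_net_hdr);
    }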

Cheers,
Mark.



Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-10 Thread Mark McLoughlin
On Thu, 2008-10-09 at 23:30 +0800, Herbert Xu wrote:
 On Thu, Oct 09, 2008 at 11:55:59AM +1100, Rusty Russell wrote:

   The size of the logical buffer is
   returned to the guest rather than the size of the individual smaller
   buffers.
  
  That's a virtio transport breakage: can you use the standard virtio
  mechanism, just put the extended length or number of extra buffers
  inside the virtio_net_hdr?
 
 Sure that sounds reasonable.

Okay, here we go.

The new header is lamely called virtio_net_hdr2 - I've added some
padding in there so we can extend it further in future.
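
Judging from the hunks below, the new header looks roughly like this (a
sketch inferred from the diff -- the exact field types, order and amount of
padding are guesses, not the actual patch):

    struct virtio_net_hdr2 {
        struct virtio_net_hdr hdr;  /* the existing 10-byte header */
        __u16 num_buffers;          /* how many descriptors this packet spans */
        __u8 pad[4];                /* room to extend the header later */
    };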

It gets messy for lguest because tun/tap isn't using the same header
format anymore.

Rusty - let me know if this looks reasonable and, if so, I'll merge it
back into the original patches and resend.

Cheers,
Mark.

diff --git a/Documentation/lguest/lguest.c b/Documentation/lguest/lguest.c
index da934c2..0f840f2 100644
--- a/Documentation/lguest/lguest.c
+++ b/Documentation/lguest/lguest.c
@@ -940,14 +940,21 @@ static void handle_net_output(int fd, struct virtqueue *vq, bool timeout)
 {
unsigned int head, out, in, num = 0;
int len;
-   struct iovec iov[vq->vring.num];
+   struct iovec iov[vq->vring.num + 1];
static int last_timeout_num;
 
/* Keep getting output buffers from the Guest until we run out. */
-   while ((head = get_vq_desc(vq, iov, &out, &in)) != vq->vring.num) {
+   while ((head = get_vq_desc(vq, &iov[1], &out, &in)) != vq->vring.num) {
        if (in)
                errx(1, "Input buffers in output queue?");
-   len = writev(vq->dev->fd, iov, out);
+
+   /* tapfd needs a virtio_net_hdr, not virtio_net_hdr2 */
+   iov[0].iov_base  = iov[1].iov_base;
+   iov[0].iov_len   = sizeof(struct virtio_net_hdr);
+   iov[1].iov_base += sizeof(struct virtio_net_hdr2);
+   iov[1].iov_len  -= sizeof(struct virtio_net_hdr2);
+
+   len = writev(vq->dev->fd, iov, out + 1);
        if (len < 0)
                err(1, "Writing network packet to tun");
add_used_and_trigger(fd, vq, head, len);
@@ -998,18 +1005,24 @@ static unsigned int get_net_recv_head(struct device *dev, struct iovec *iov,
 
 /* Here we add used recv buffers to the used queue but, also, return unused
  * buffers to the avail queue. */
-static void add_net_recv_used(struct device *dev, unsigned int *heads,
- int *bufsizes, int nheads, int used_len)
+static void add_net_recv_used(struct device *dev, struct virtio_net_hdr2 *hdr2,
+ unsigned int *heads, int *bufsizes,
+ int nheads, int used_len)
 {
int len, idx;
 
/* Add the buffers we've actually used to the used queue */
len = idx = 0;
        while (len < used_len) {
-   add_used(dev->vq, heads[idx], used_len, idx);
+   if (bufsizes[idx] > (used_len - len))
+       bufsizes[idx] = used_len - len;
+   add_used(dev->vq, heads[idx], bufsizes[idx], idx);
len += bufsizes[idx++];
}
 
+   /* The guest needs to know how many buffers to fetch */
+   hdr2->num_buffers = idx;
+
/* Return the rest of them back to the avail queue */
lg_last_avail(dev->vq) -= nheads - idx;
dev->vq->inflight  -= nheads - idx;
@@ -1022,12 +1035,17 @@ static void add_net_recv_used(struct device *dev, unsigned int *heads,
  * Guest. */
 static bool handle_tun_input(int fd, struct device *dev)
 {
-   struct iovec iov[dev->vq->vring.num];
+   struct virtio_net_hdr hdr;
+   struct virtio_net_hdr2 *hdr2;
+   struct iovec iov[dev->vq->vring.num + 1];
unsigned int heads[NET_MAX_RECV_PAGES];
int bufsizes[NET_MAX_RECV_PAGES];
int nheads, len, iovcnt;
 
-   nheads = len = iovcnt = 0;
+   nheads = len = 0;
+
+   /* First iov is for the header */
+   iovcnt = 1;
 
/* First we need enough network buffers from the Guest's recv
 * virtqueue for the largest possible packet. */
@@ -1056,13 +1074,26 @@ static bool handle_tun_input(int fd, struct device *dev)
len += bufsizes[nheads++];
}
 
+   /* Read virtio_net_hdr from tapfd */
+   iov[0].iov_base = &hdr;
+   iov[0].iov_len = sizeof(hdr);
+
+   /* Read data into buffer after virtio_net_hdr2 */
+   hdr2 = iov[1].iov_base;
+   iov[1].iov_base += sizeof(*hdr2);
+   iov[1].iov_len  -= sizeof(*hdr2);
+
/* Read the packet from the device directly into the Guest's buffer. */
len = readv(dev->fd, iov, iovcnt);
if (len <= 0)
        err(1, "reading network");
 
+   /* Copy the virtio_net_hdr into the virtio_net_hdr2 */
+   hdr2->hdr = hdr;
+   len += sizeof(*hdr2) - sizeof(hdr);
+
/* Return unused buffers to the recv queue */
-   add_net_recv_used(dev, heads, bufsizes, nheads, len);
+

Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-09 Thread Herbert Xu
On Thu, Oct 09, 2008 at 11:55:59AM +1100, Rusty Russell wrote:

 There are three approaches we should investigate before adding YA feature.  
 Obviously, we can simply increase the number of ring entries.

That's not going to work so well as you need to increase the ring
size by MAX_SKB_FRAGS times to achieve the same level of effect.

Basically the current scheme is either going to suck at non-TSO
traffic or it's going to chew too much resources.

 Secondly, we can put the virtio_net_hdr at the head of the skb data (this is 
 also worth considering for xmit I think if we have headroom) and drop 
 MAX_SKB_FRAGS which contains a gratuitous +2.

That's fine but having skb->data in the ring still means two
different kinds of memory in there and it sucks when you only
have 1500-byte packets.

 Thirdly, we can try to coalesce contiguous buffers.  The page caching scheme 
 we have might help here, I don't know.  Maybe we should be explicitly trying 
 to allocate higher orders.

That's not really the key problem here.  The problem here is
that the scheme we're currently using in virtio-net is simply
broken when it comes to 1500-byte sized packets.  Most of the
entries on the ring buffer go to waste.

We need a scheme that handles both 1500-byte packets as well
as 64K-byte size ones, and without holding down 16M of memory
per guest.

  The size of the logical buffer is
  returned to the guest rather than the size of the individual smaller
  buffers.
 
 That's a virtio transport breakage: can you use the standard virtio
 mechanism, just put the extended length or number of extra buffers
 inside the virtio_net_hdr?

Sure that sounds reasonable.

  Make use of this support by supplying single page receive buffers to
  the host. On receive, we extract the virtio_net_hdr, copy 128 bytes of
  the payload to the skb's linear data buffer and adjust the fragment
  offset to point to the remaining data. This ensures proper alignment
  and allows us to not use any paged data for small packets. If the
  payload occupies multiple pages, we simply append those pages as
  fragments and free the associated skbs.
 
  +   char *p = page_address(skb_shinfo(skb)->frags[0].page);
 ...
  +   memcpy(hdr, p, sizeof(*hdr));
  +   p += sizeof(*hdr);
 
 I think you need kmap_atomic() here to access the page.  And yes, that will 
 affect performance :(

No we don't.  kmap would only be necessary for highmem which we
did not request.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmVHI~} [EMAIL PROTECTED]
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-09 Thread Mark McLoughlin
On Thu, 2008-10-09 at 23:30 +0800, Herbert Xu wrote: 
 On Thu, Oct 09, 2008 at 11:55:59AM +1100, Rusty Russell wrote:
 
  There are three approaches we should investigate before adding YA feature.  
  Obviously, we can simply increase the number of ring entries.
 
 That's not going to work so well as you need to increase the ring
 size by MAX_SKB_FRAGS times to achieve the same level of effect.
 
 Basically the current scheme is either going to suck at non-TSO
 traffic or it's going to chew too much resources.

Yeah ... to put some numbers on it, assume we have a 256 entry ring now.

Currently, with GSO enabled in the host, the guest will fill this with 12
buffer heads of 20 buffers each (a 10-byte buffer, an MTU-sized buffer
and 18 page-sized buffers).

That means we allocate ~900k for receive buffers, 12k for the ring, fail
to use 16 ring entries and the ring ends up with a capacity of 12
packets. In the case of MTU-sized packets from an off-host source,
that's a huge amount of overhead for ~17k of data.

If we wanted to match the packet capacity that Herbert's suggestion
enables (i.e. 256 packets), we'd need to bump the ring size to 4k
entries (assuming we reduce it to 19 entries per packet). This would
mean we'd need to allocate ~200k for the ring and ~18M in receive
buffers. Again, assuming MTU-sized packets, that's massive overhead for
~400k of data.
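
Roughly, the arithmetic behind those figures (assuming 4KiB pages and the
10-byte virtio_net_hdr):

    per packet:      10 + 1500 + 18 * 4096  =  75,238 bytes (~73.5K) over 20 entries
    256-entry ring:  256 / 20               =  12 packets (16 entries unusable)
    buffer memory:   12 * ~73.5K           ~=  ~900K, carrying 12 * ~1.5K  ~= ~17K of data
    scaled to 256:   256 * ~73.5K          ~=  ~18M, carrying 256 * ~1.5K  ~= ~400K of data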

  Secondly, we can put the virtio_net_hdr at the head of the skb data (this is
  also worth considering for xmit I think if we have headroom) and drop
  MAX_SKB_FRAGS which contains a gratuitous +2.
 
 That's fine but having skb->data in the ring still means two
 different kinds of memory in there and it sucks when you only
 have 1500-byte packets.

Also, including virtio_net_hdr in the data buffer would need another
feature flag. Rightly or wrongly, KVM's implementation requires
virtio_net_hdr to be the first buffer:

if (elem.in_num < 1 || elem.in_sg[0].iov_len != sizeof(*hdr)) {
        fprintf(stderr, "virtio-net header not in first element\n");
        exit(1);
}

i.e. it's part of the ABI ... at least as KVM sees it :-)

   The size of the logical buffer is
   returned to the guest rather than the size of the individual smaller
   buffers.
  
  That's a virtio transport breakage: can you use the standard virtio
  mechanism, just put the extended length or number of extra buffers
  inside the virtio_net_hdr?
 
 Sure that sounds reasonable.


I'll give that a shot.

Cheers,
Mark.



Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-09 Thread Anthony Liguori
Mark McLoughlin wrote:
 
 Also, including virtio_net_hdr in the data buffer would need another
 feature flag. Rightly or wrongly, KVM's implementation requires
 virtio_net_hdr to be the first buffer:
 
 if (elem.in_num < 1 || elem.in_sg[0].iov_len != sizeof(*hdr)) {
     fprintf(stderr, "virtio-net header not in first element\n");
     exit(1);
 }
 
 i.e. it's part of the ABI ... at least as KVM sees it :-)

This is actually something that's broken in a nasty way.  Having the 
header in the first element is not supposed to be part of the ABI but it 
sort of has to be ATM.

If an older version of QEMU were to use a newer kernel, and the newer 
kernel had a larger header size, then if we just made the header be the 
first X bytes, QEMU has no way of knowing how many bytes that should be. 
  Instead, the guest actually has to allocate the virtio-net header in 
such a way that it only presents the size depending on the features that 
the host supports.  We don't use a simple versioning scheme, so you'd 
have to check for a combination of features advertised by the host but 
that's not good enough because the host may disable certain features.

Perhaps the header size is whatever the longest element that has been 
commonly negotiated?

So that's why this aggressive check is here.  Not to necessarily cement 
this into the ABI but as a way to make someone figure out how to 
sanitize this all.

Regards,

Anthony Liguori



Re: [PATCH 2/2] virtio_net: Improve the recv buffer allocation scheme

2008-10-08 Thread Rusty Russell
On Thursday 09 October 2008 06:34:59 Mark McLoughlin wrote:
 From: Herbert Xu [EMAIL PROTECTED]

 If segmentation offload is enabled by the host, we currently allocate
 maximum sized packet buffers and pass them to the host. This uses up
 20 ring entries, allowing us to supply only 12 packet buffers to the
 host with a 256 entry ring. This is a huge overhead when receiving
 small packets, and is most keenly felt when receiving MTU-sized
 packets from off-host.

Hi Mark!

There are three approaches we should investigate before adding YA feature.  
Obviously, we can simply increase the number of ring entries.

Secondly, we can put the virtio_net_hdr at the head of the skb data (this is 
also worth considering for xmit I think if we have headroom) and drop 
MAX_SKB_FRAGS which contains a gratuitous +2.
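
For what it's worth, a rough sketch of what the xmit side could look like
with the header living in the skb's headroom.  This is illustrative only --
it assumes sizeof(*hdr) bytes of headroom were reserved, skips the GSO and
checksum fields, and isn't the actual driver:

    #include <linux/scatterlist.h>
    #include <linux/skbuff.h>
    #include <linux/string.h>
    #include <linux/virtio.h>
    #include <linux/virtio_net.h>

    /* Sketch: with the header pushed into the packet's headroom, one sg
     * entry covers header + linear data, so the array only needs
     * 1 + MAX_SKB_FRAGS entries instead of 2 + MAX_SKB_FRAGS. */
    static int xmit_skb_hdr_inline(struct virtqueue *vq, struct sk_buff *skb)
    {
        struct scatterlist sg[1 + MAX_SKB_FRAGS];
        struct virtio_net_hdr *hdr;
        int num;

        hdr = (struct virtio_net_hdr *)skb_push(skb, sizeof(*hdr));
        memset(hdr, 0, sizeof(*hdr));   /* GSO/csum setup elided */

        sg_init_table(sg, 1 + MAX_SKB_FRAGS);
        num = skb_to_sgvec(skb, sg, 0, skb->len);

        return vq->vq_ops->add_buf(vq, sg, num, 0, skb);
    }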

Thirdly, we can try to coalesce contiguous buffers.  The page caching scheme 
we have might help here, I don't know.  Maybe we should be explicitly trying 
to allocate higher orders.
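
And a sketch of the third option (illustrative only; the order-2 starting
point and the extra GFP flags are arbitrary choices): opportunistically ask
for a higher-order page and fall back, so one descriptor can cover what
would otherwise be several page-sized buffers.

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Try a contiguous multi-page allocation first, falling back so we
     * never fail harder than the current single-page scheme.  *order
     * reports the order actually obtained. */
    static struct page *alloc_recv_pages(gfp_t gfp, unsigned int *order)
    {
        struct page *page;

        for (*order = 2; *order > 0; (*order)--) {
            page = alloc_pages(gfp | __GFP_NOWARN | __GFP_NORETRY, *order);
            if (page)
                return page;
        }
        return alloc_page(gfp);     /* *order is now 0 */
    }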

Now, that said, we might need this anyway.  But let's try the easy things 
first?  (Or as well...)

 The size of the logical buffer is
 returned to the guest rather than the size of the individual smaller
 buffers.

That's a virtio transport breakage: can you use the standard virtio mechanism, 
just put the extended length or number of extra buffers inside the 
virtio_net_hdr?

That makes more sense to me.

 Make use of this support by supplying single page receive buffers to
 the host. On receive, we extract the virtio_net_hdr, copy 128 bytes of
 the payload to the skb's linear data buffer and adjust the fragment
 offset to point to the remaining data. This ensures proper alignment
 and allows us to not use any paged data for small packets. If the
 payload occupies multiple pages, we simply append those pages as
 fragments and free the associated skbs.

 + char *p = page_address(skb_shinfo(skb)->frags[0].page);
...
 + memcpy(hdr, p, sizeof(*hdr));
 + p += sizeof(*hdr);

I think you need kmap_atomic() here to access the page.  And yes, that will 
affect performance :(

A few more comments moved from the patch header into the source wouldn't go 
astray, but I'm happy to do that myself (it's been on my TODO for a while).

Thanks!
Rusty.