On Mon, Sep 19, 2016 at 10:03:36AM -0700, Stephen Hemminger wrote:
> On Sun, 18 Sep 2016 08:36:05 -0400 (EDT)
> Paolo Bonzini <pbonz...@redhat.com> wrote:
> 
> > > Without indirect descriptors, the usable ring size is cut in half on older
> > > kernels that don't support ANY_LAYOUT. On qemu that limit ends up being
> > > 128 packets.
> > 
> > Hi Stephen,
> > 
> > note that here we were talking about limiting direct descriptors to
> > requests with a single buffer.
> > 
> > Paolo
> 
> The test I was looking at was small-packet transmit (like pktgen or RFC2544).
> The idea was to make the DPDK driver use the same algorithms as the Linux
> kernel, which is the baseline for virtio, and also to handle jumbo MTU
> packets efficiently with scatter/gather.
> 
> With DPDK the issue was that a typical packet required several transmit
> slot entries. This is particularly bad for jumbo packets, which are usually
> made up of several mbufs. A 9K packet typically spans 5 mbufs, plus one
> more descriptor for the virtio header.
> 
> Ideally, the DPDK driver should use the best algorithm it can negotiate.
> Be sure to test with old RHEL6-type systems, modern enterprise distributions
> (RHEL/SLES), and also Google Compute Engine, which has its own virtio
> implementation.  Not sure what VirtualBox is doing.  Why can't the code be
> written like the ixgbe driver, which has a fast path when the configuration
> allows it but makes the decision at run time?

Interesting.  Maxime posted a patch here integrating indirect buffer
support to address this, but it was not accepted since he could not
demonstrate the performance gain.

Maxime, could you share the link to your patch so we
can discuss this on the dpdk mailing list?

Thanks!
-- 
MST

---------------------------------------------------------------------
To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org
