On Mon, Feb 13, 2017 at 4:22 PM, Eric Dumazet <eduma...@google.com> wrote:
> On Mon, Feb 13, 2017 at 3:47 PM, Alexander Duyck
> <alexander.du...@gmail.com> wrote:
>
>> Actually it depends on the use case.  In the case of pktgen packets
>> they are usually dropped pretty early in the receive path.  Think
>> something more along the lines of a TCP syn flood versus something
>> that would be loading up a socket.
>
> So I gave it another spin, reducing the MTU on the sender to 500
> instead of 1500 to triple the load (in terms of packets per second),
> since the sender seems to hit some kind of limit around 30 Gbit.
>
> Number of packets we process on one RX queue and one TCP flow:
>
> Current patch series: 6.3 Mpps
>
> Which is not too bad ;)
>
> This number does not change when I add your __page_frag_cache_drain() trick.

Well, the __page_frag_cache_drain by itself isn't going to add much of
anything.  You really aren't going to be using it except in the slow
path.  I was talking about the bulk page count update by itself.  All
__page_frag_cache_drain gets you is that it cleans up the
page_frag_sub(n-1); put_page(1); calls.
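
In other words, something like this (just a sketch, reading the
shorthand above as page_ref_sub(); "nr_refs" is a made-up stand-in for
however many references the driver is still holding):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch only: the old two-step release of our remaining references. */
static void drop_refs_old(struct page *page, unsigned int nr_refs)
{
        page_ref_sub(page, nr_refs - 1);  /* give back all but one reference */
        put_page(page);                   /* drop the last one, possibly freeing the page */
}

/* Same thing collapsed into the single helper call. */
static void drop_refs_new(struct page *page, unsigned int nr_refs)
{
        __page_frag_cache_drain(page, nr_refs);
}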

The approach I took for the Intel drivers isn't too different from
what we did for the skbuff page frag cache.  Basically I update the
page count once every 65535 frames, setting it to 64K and holding no
more than 65535 references in the pagecnt_bias.  Also, in my code I
don't update the count until after the first recycle call; the
highest likelihood of us discarding the frame is after the first
allocation anyway, so we wait until the first recycle to start
repopulating the count.
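
Roughly the shape of it, with made-up names (a sketch of the idea
only, not the actual ixgbe/i40e code; the real driver carries more
state and page-reuse checks than this):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/types.h>

/* Hypothetical RX buffer bookkeeping, for illustration only. */
struct rx_buf {
        struct page *page;
        u16 pagecnt_bias;       /* page references the driver still owns */
};

/* Fresh page: it comes with a single reference, and we hold off on
 * the bulk charge until the page survives its first recycle.
 */
static int rx_buf_init(struct rx_buf *buf)
{
        buf->page = alloc_page(GFP_ATOMIC);
        if (!buf->page)
                return -ENOMEM;
        buf->pagecnt_bias = 1;
        return 0;
}

/* Hot path: hand one owned reference over to the stack with a frame.
 * No atomic operation on the page here, only the local bias moves.
 */
static void rx_buf_give_to_stack(struct rx_buf *buf)
{
        buf->pagecnt_bias--;
}

/* Recycle path: once the bias is drained, charge the page with a big
 * batch of references in a single atomic update.
 */
static void rx_buf_recycle(struct rx_buf *buf)
{
        if (unlikely(!buf->pagecnt_bias)) {
                page_ref_add(buf->page, USHRT_MAX);
                buf->pagecnt_bias = USHRT_MAX;
        }
}

/* Teardown: return whatever bias is left in one call. */
static void rx_buf_release(struct rx_buf *buf)
{
        __page_frag_cache_drain(buf->page, buf->pagecnt_bias);
}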

- Alex
