> On Jun 14, 2017, at 9:48 AM, John Jasen <jja...@gmail.com> wrote:
> 
> Our goal was to test whether or not FreeBSD currently is viable, as the
> operating system platform for high speed routers and firewalls, in the
> 40 to 100 GbE range.

We recently showed IPsec running at 36.32Gbps (8 streams, 32.68Gbps single 
stream).
 
At 36.32Gbps, we were limited by the 40Gbps cards we used.
(Framing overheads take up about 10% of the bandwidth.)

https://conferences.oreilly.com/oscon/oscon-tx/public/schedule/detail/56727
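
That ~10% is about what framing plus ESP overhead predicts (rough numbers, assuming a 1500-byte MTU, IPv4, ESP tunnel mode, and no TCP options -- none of which I've re-checked against the exact test configs):

    wire cost per frame:  1500 + 14 (Ethernet) + 4 (FCS) + 20 (preamble/IFG) = 1538 bytes
    ESP tunnel overhead:  outer IP + ESP header/IV + pad/trailer + ICV ~= 54-62 bytes, cipher-dependent
    inner TCP payload:    1500 - ~58 - 40 (inner IP + TCP) ~= 1400 bytes
    goodput ceiling:      40Gbps * 1400/1538 ~= 36.4Gbps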

We can send 64-byte tinygrams through the tunnel at 10.45Mpps with
AES-128-CBC + HMAC-SHA1.   AES-128-GCM performance is 32.98Gbps (4 streams,
32.72Gbps single-stream).

Hardware used was essentially an “ultimate white box router”
(https://www.netgate.com/blog/building-a-behemoth-router.html) with Intel XL710
NICs and 8955 CPIC QAT cards.

The same hardware will L3-forward 42.6Mpps (64-byte packets).  It can forward
14.05Mpps on a single core.   No tuning was done in the above: just bringing up
VPP, configuring the interfaces and SPDs, and running iperf3 or (DPDK’s) pktgen
on a pair of outside ‘hosts’.  It’s likely that we can push the 42.6Mpps figure
higher.
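
For scale, 64-byte line rate on 40GbE (counting the 20 bytes of preamble +
inter-frame gap each frame costs on the wire) is:

    40Gbps / ((64 + 20) * 8 bits) ~= 59.5Mpps

so 42.6Mpps is roughly 72% of worst-case line rate on a 40Gbps port.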

In other tests on smaller (8-core Atom) hardware we’ve achieved 12Mpps L3
forwarding with a full BGP routing table.  It’s likely that we can achieve even
higher PPS results with a bit more tuning work.

Using Olivier Cochard-Labbé’s “estimated IMIX” (bandwidth = PPS * ( 7*(40+14) +
4*(576+14) + (1500+14) )/12 * 8), one only needs about 35Mpps to fill a 100Gbps
interface.
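
Working that formula through:

    average frame = (7*54 + 4*590 + 1514)/12 = 4252/12 ~= 354.3 bytes
    100Gbps / (354.3 bytes * 8) ~= 35.3Mpps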

We have a couple of larger machines with unreleased Xeons in them, four 100Gbps
NICs, and some “next generation” QAT cards that Intel says are good for 100Gbps
encryption offload.  We plan to re-run the tests at 100Gbps sometime this
summer.

These results are all on Linux, using VPP over DPDK, but nothing really
prevents that work from moving back to FreeBSD.  VPP also supports netmap, but
we’ve not attempted any performance work using the netmap interfaces as yet.
https://fd.io/news/announcement/2017/06/fast-data-fdio-project-issues-fourth-release-furthers-position-universal

gnn@ was working on such a port for us (https://github.com/gvnn3/vpp), but
other things took over his time.  I’m sure we’ll get back to it.

(This is all the basis for our “next generation pfSense”, btw.)

> In our investigations, we tested 10.3, 11.0/-STABLE, -CURRENT, and a USB
> stick from BSDRP using the FreeBSD routing improvements project
> enhancements (https://wiki.freebsd.org/ProjectsRoutingProposal).
> 
> We've tried stock and netmap-fwd, have played around a little with
> netmap itself and dpdk, with the results summarized below. The current
> testing platform is a Dell PowerEdge R530 with a Chelsio T580-LP-CR dual
> port 40GbE card.
> 
> Suggestions, examples for using netmap, etc, all warmly welcomed.
> 
> Further questions cheerfully answered to the best of our abilities.
> 
> a) On the positive side, it appears that 11.0 is much faster than 10.0,
> which we tested several years ago. With appropriate cpuset tuning, 5.5
> Mpps is achievable using modern hardware. Using slightly older hardware
> (such as a Dell R720 with v3 Xeons), around 5.2-5.3 Mpps can be obtained.
> 
> b) On the negative side, between the various releases, netmap appeared
> to be unstable with the Chelsio cards -- sometimes supported, sometimes
> broken. Also, we're still trying to figure out netmap utilities, such as
> vale-ctl and bridge, so any advice would be appreciated.
> 
> b.1) netmap-fwd is admittedly single-threaded and does not support IPv6.

There is a version of netmap-fwd (not on GitHub) that supports IPv6 and has
some early work on threading; unfortunately, netmap bugs stalled the threading
work.

The developer (loos@) recently updated netmap in -CURRENT based on a patch from 
Vincenzo Maffione.

BTW, we’ve seen over 5Mpps with netmap-fwd on (a single core of) an E3-1275.
See around 17:07 in https://www.youtube.com/watch?v=cAVgvzivzII
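
On the request above for netmap examples: here is a minimal receive-loop
sketch against the netmap_user.h helper API (illustrative only; error handling
is trimmed, and "netmap:ix0" is a placeholder for whatever interface you're
testing):

/*
 * Minimal netmap RX loop -- a sketch, not production code.
 * Build on FreeBSD: cc -O2 nm-rx.c -o nm-rx
 * "netmap:ix0" is a placeholder; substitute the interface under test.
 */
#include <stdio.h>
#include <poll.h>

#define NETMAP_WITH_LIBS        /* enables the nm_open()/nm_nextpkt() helpers */
#include <net/netmap_user.h>

int
main(void)
{
        struct nm_desc *d;
        struct pollfd pfd;
        struct nm_pkthdr h;
        unsigned long npkts = 0;

        /* Take over the NIC rings; the host stack stops seeing this port. */
        d = nm_open("netmap:ix0", NULL, 0, NULL);
        if (d == NULL) {
                perror("nm_open");
                return (1);
        }

        pfd.fd = NETMAP_FD(d);
        pfd.events = POLLIN;

        for (;;) {
                /* Sleep until the kernel fills some RX slots (1s timeout). */
                if (poll(&pfd, 1, 1000) <= 0)
                        continue;
                /* Drain all pending frames; h.len holds each frame length. */
                while (nm_nextpkt(d, &h) != NULL)
                        npkts++;
                fprintf(stderr, "%lu pkts\r", npkts);
        }

        /* Not reached. */
        nm_close(d);
        return (0);
}

Since nm_open() detaches the port from the host stack, run this against a
dedicated test interface; TX works the same way via POLLOUT and nm_inject().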

> These clearly showed in our tests, as we were unable to achieve over 2.5
> Mpps, saturating a single CPU and letting the others fall asleep.
> However, bumping a single CPU queue from around 0.6 Mpps to 2.5 Mpps is
> nothing to ignore, so it could be useful in some cases.
> 
> c) The routing improvement project USB stick performed incredibly,
> achieving 8.5 Mpps out of the box. However, it appears
> (https://wiki.freebsd.org/ProjectsRoutingProposal/ConversionStatus),
> that many of the changes are still pending review, and that things have
> not moved much in the last 18 months
> (https://svnweb.freebsd.org/base/projects/routing/)
> 
> d) We've not figured out dpdk (dpdk.org) yet. Our first foray got as far
> as the test examples, and we're stuck trying to get the interfaces online.


DPDK on FreeBSD is a bit of a mess.
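
On getting the interfaces online: if memory serves from the DPDK-on-FreeBSD
getting started guide (double-check the current docs), DPDK bypasses the
in-kernel NIC drivers entirely.  You need the contigmem module loaded for its
buffer memory (hw.contigmem.num_buffers and hw.contigmem.buffer_size set in
/boot/loader.conf), and the nic_uio module has to claim the ports before the
example apps will see them.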

Jim

  