Line rate for 10GbE is 14.88Mpps.  Your frame size doesn't include several
per-frame overheads.

Ethernet-specific overhead (20 bytes):
  12 bytes = inter-frame gap (https://en.wikipedia.org/wiki/Interpacket_gap);
             this is really time on the wire
   8 bytes = MAC preamble + SFD

Minimum Ethernet frame (64 bytes):
  14 bytes = MAC header
  46 bytes = minimum payload size
   4 bytes = Ethernet CRC

Thus, the minimum-size Ethernet frame is 84 bytes (20 + 64), which includes the
inter-frame gap time on the wire.

At the maximum 1500-byte MTU, the on-wire frame size is 1538 bytes:
(12 + 8) + 14 + 1500 + 4 = 1538 bytes.

Peak possible packet rate:  (10*10^9) bits/sec / (84 bytes * 8) = 14,880,952 pps

1500 MTU packet rate: (10*10^9) bits/sec / (1538 bytes * 8) = 812,744 pps

12,000,000 pps / 14,880,952 pps = 0.8064

That looks like 80% of line rate to me. 
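
If you want to replay the arithmetic, here's a quick Python sketch of the same
numbers (nothing pfSense-specific, just the overheads listed above):

    # Per-frame overhead on the wire: 12B inter-frame gap + 8B preamble/SFD
    # + 14B MAC header + 4B CRC = 38 bytes on top of every payload.
    WIRE_OVERHEAD_BYTES = 12 + 8 + 14 + 4
    LINK_BPS = 10 * 10**9  # 10GbE

    def packets_per_second(payload_bytes):
        """Max packet rate for a given L2 payload size."""
        return LINK_BPS / ((payload_bytes + WIRE_OVERHEAD_BYTES) * 8)

    print(round(packets_per_second(46)))        # minimum frame: 14880952 pps
    print(round(packets_per_second(1500)))      # 1500-byte MTU:   812744 pps
    print(12_000_000 / packets_per_second(46))  # ~0.806 of line rate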

Jim

> On Jan 26, 2017, at 4:10 PM, Espen Johansen <pfse...@gmail.com> wrote:
> 
> What do you mean by 12Mpps or 80% of 10GE? 12Mpps at a 150-byte packet length is
> 13.4Gbps. At 1200 bytes (a good inet avg.) you should hit 107Gbps. Where does the
> 80% of 10GE come from?
> 
> 
> On Thu, Jan 26, 2017, 07:04 Jim Thompson <j...@netgate.com> wrote:
> 
> It does not.
> 
> The c2758 SoC is interesting. 8 cores, and the on-die i354 is essentially a
> block with 4 i350s on it.
> These have 8 queues for each of rx and tx, so 16 each, for a total of 64
> queues.
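
As a rough back-of-envelope of why all those queues matter for mbufs (this
assumes the igb driver's default of 1024 rx descriptors per queue; that default
is an assumption here, not something measured on this box):

    # illustrative only: each populated rx descriptor pins an mbuf cluster
    ports, rx_queues_per_port, rxd_per_queue = 4, 8, 1024
    print(ports * rx_queues_per_port * rxd_per_queue)  # 32768

32,768 clusters just to fill the receive rings is already more than the
20,612-cluster default quoted further down in this thread.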
> 
> On the c2xxx series (and other) boxes we ship, we increase certain
> tunables, because we know what we're installing onto, and can adjust that
> factory load. pfSense CE does not have that luxury; it has to run on nearly
> anything the community finds to run it on. Some of these systems have ...
> constrained RAM.  While we test each release on every model we ship, such
> testing takes place only for a handful of other configurations.
> 
> There is a decent explanation of some of the tunables here:
> https://wiki.freebsd.org/NetworkPerformanceTuning
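
To make that concrete (the value below is purely illustrative; size the ceiling
to your RAM, since each cluster is 2KB), the usual adjustment on a box that is
exhausting clusters is something like

    kern.ipc.nmbclusters="65536"

in /boot/loader.conf.local, or an equivalent entry under System > Advanced >
System Tunables.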
> 
> Incidentally, FreeBSD, and thus pfSense, can't take much advantage of those
> multiqueue NICs, because the forwarding path doesn't have the architecture to
> exploit them.  Our DPDK-based system can forward L3 frames at over 12Mpps
> on this hardware (about 80% of line rate on a 10g interface).
> Neither pfSense nor FreeBSD (nor Linux) will do 1/10th of this rate.
> 
> Jim
> 
>> On Thursday, January 26, 2017, Espen Johansen <pfse...@gmail.com> wrote:
>> 
>> It should autotune by default based on memory iirc.
>> 
>> On Wed, Jan 25, 2017, 23:27 Peder Rovelstad <provels...@comcast.net> wrote:
>> 
>>> FWiW - My nano (4 NICs, 1GB, Community), PuTTY says:
>>> 
>>> kern.ipc.nmbufs: 131925
>>> kern.ipc.nmbclusters: 20612
>>> 
>>> but nothing explicitly set on the tunables page, just whatever's built in.
>>> 
>>> -----Original Message-----
>>> From: List [mailto:list-boun...@lists.pfsense.org] On Behalf Of Karl Fife
>>> Sent: Wednesday, January 25, 2017 4:02 PM
>>> To: pfSense Support and Discussion Mailing List <list@lists.pfsense.org>
>>> Subject: Re: [pfSense] Intel Atom C2758 (Rangeley/Avoton) install/boot
>>> failure with pfSense 2.3.2
>>> 
>>> This is a good theory, because RRD data from 2.2.6 suggests that the
>>> difference in utilization between the versions is slight, and that we had
>>> 'barely' exhausted our system default allocation.
>>> 
>>> Is there a difference between nano and full with respect to the installer
>>> explicitly setting tunables for kern.ipc.nmbclusters and kern.ipc.nmbuf?
>>> Vick Khera says he sees explicitly set tunables on his
>>> 2.3.2 system, yet my virgin installation of Nano pfSense 2.3.2 has no
>>> explicit declarations?
>>> 
>>> Vick, is your Supermicro A1SRi-2758F running an installation that came from
>>> Netgate, or is it a community edition installation?  If the latter, Full or
>>> Nano?
>>> 
>>> 
>>>> On 1/25/2017 3:49 PM, Jim Pingle wrote:
>>>>> On 01/25/2017 01:10 PM, Karl Fife wrote:
>>>>> The piece that's still missing for me is that there must have been
>>>>> some change in default system setting for FreeBSD, or some other
>>>>> change between versions, because the system booted fine with pfSense
>>>>> v 2.2.6
>>>> Aside from what has already been suggested by others, it's possible
>>>> that the newer drivers from FreeBSD 10.3 in pfSense 2.3.x enabled
>>>> features on the NIC chipset that consumed more mbufs. For example, it
>>>> might be using more queues per NIC by default than it did previously.
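
(If anyone wants to test that theory on igb-based NICs, one low-risk check,
offered only as a suggestion: cap the queues with hw.igb.num_queues=1 in
/boot/loader.conf.local, reboot, and compare "netstat -m" mbuf/cluster usage
against the multi-queue default.)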
>>>> 
>>>> Jim
>>>> 
_______________________________________________
pfSense mailing list
https://lists.pfsense.org/mailman/listinfo/list
Support the project with Gold! https://pfsense.org/gold
