I wrote MTU since you used it. What I'm talking about is packet sizes. If
the people building the internet had known what they were doing, an MTU of
1500 (L2) or more would be mandatory. But because of old ATM gear this
isn't true for all of the internet. When I say our average packet size was
1200, that has nothing to do with MTU. We had a network with a minimum MTU
of 1546 and a minimum pps capability of 40 Mpps.

What I'm saying is that your statement is confusing. You seem to suggest
that the platform can do at most 80% of a 10GE interface. In reality it
will do a MINIMUM of 80% of a 10GE. And on top of that, you cannot convert
pps to speed, since a specification of 12 Mpps does not tell you whether a
device can sustain it at any payload size. Most of the time a pps-to-speed
conversion is an approximation. A Cisco FWSM has a pps spec suggesting it
can do full backplane speed. In reality, with 1400-1500 octets of payload
it is capable of 5.5 Gbit/s on a 6500/7600 platform. And pfSense has the
same issues. If you set up a Spirent TestCenter with proper tests you will
see that 12 Mpps is best case.
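To make the pps-vs-throughput point concrete, here is a rough sketch of the conversion (the function names are mine; the 20-octet preamble/SFD plus inter-frame gap is standard Ethernet overhead, and the 12 Mpps figure is the one discussed in this thread):

```python
# Rough pps <-> throughput conversion for Ethernet.
# On the wire, each frame costs its own size plus 8 octets of
# preamble/SFD and a 12-octet inter-frame gap, i.e. +20 octets.

WIRE_OVERHEAD = 20  # preamble/SFD (8) + inter-frame gap (12), octets

def throughput_gbps(pps, frame_octets):
    """Throughput carried in frames (excluding the +20 wire overhead)."""
    return pps * frame_octets * 8 / 1e9

def max_pps(line_rate_gbps, frame_octets):
    """Maximum frame rate a link can carry at a given frame size."""
    bits_per_frame = (frame_octets + WIRE_OVERHEAD) * 8
    return line_rate_gbps * 1e9 / bits_per_frame

# A "12 Mpps" spec means nothing until you fix a frame size:
print(throughput_gbps(12e6, 64))    # -> 6.144  Gbit/s
print(throughput_gbps(12e6, 600))   # -> 57.6   Gbit/s
print(throughput_gbps(12e6, 1518))  # -> 145.728 Gbit/s

# 10GE worst case: 64-octet frames -> ~14.88 Mpps
print(max_pps(10, 64) / 1e6)
```

Which is exactly why a single pps number on a spec sheet cannot be translated into a single "speed".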

And please do not assume that I do not understand MTU. I know exactly how
MTU, PMTUD and friends work. MTU is different depending on what layer you
operate at. A Cisco switch with a system MTU of 1500 will transfer a packet
of 1522 + 1 VLAN. A system MTU of 1504 will allow a packet of
1526 + 1 VLAN = 1530 (q-in-q).
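The layer-dependent numbers above can be sketched like this (standard Ethernet header sizes: 14-octet L2 header, 4-octet FCS, 8-octet preamble/SFD, 4 octets per 802.1Q tag; the helper is mine):

```python
# Ethernet frame sizes for a given L3 MTU, at different layers.

def frame_sizes(mtu, vlan_tags=0):
    """Return (layer-2 frame size, layer-1 frame size) in octets."""
    l2 = mtu + 14 + 4 + 4 * vlan_tags  # header + FCS + 802.1Q tags
    l1 = l2 + 8                        # + preamble/SFD
    return l2, l1

print(frame_sizes(1500))     # -> (1518, 1526)
print(frame_sizes(1500, 1))  # -> (1522, 1530)  single VLAN tag
print(frame_sizes(1500, 2))  # -> (1526, 1534)  q-in-q
```

So "1500" names the same link at 1518, 1522, 1526 or 1530 octets depending on the layer and tagging you count.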

On Fri, Jan 27, 2017, 13:22 Jim Thompson <j...@netgate.com> wrote:

> My point is just that if you have normal traffic patterns, even at 600
> you should have no problem pushing 10GE. A MTU of 600 should give you
> about 53 gigabit/s if you are able to push 12000000 pps with that
> payload.

An "MTU of 600" wouldn't allow IPv6 to pass over a link.  IPv6
requires that every link in the internet have an MTU of 1280 octets or
greater.  See RFC 2460, section 5.

MTU is *maximum transmission unit*, which is decidedly different from
minimum packet size, which is probably what you intended.

> Your statement of 80% is just confusing, that is all.

Your misunderstanding of the issues here is, unfortunately, quite
common.  Nearly all of the work in packet processing is per-packet,
rather than per bit.  The exceptions include VPN, where the encryption
overheads dominate, and DPI, where the payload must be inspected,
rather than merely passed along.
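That per-packet bottleneck can be modelled roughly as follows (a toy sketch, not a measurement; the 12 Mpps and 10GE figures come from this thread, and the 20-octet wire overhead is standard Ethernet):

```python
# Effective forwarding rate is the lesser of what the line can carry
# and what the per-packet processing budget allows.

WIRE_OVERHEAD = 20  # preamble/SFD + inter-frame gap, octets

def effective_gbps(frame_octets, pps_budget, line_gbps):
    """Throughput when limited by both a pps budget and the line rate."""
    line_pps = line_gbps * 1e9 / ((frame_octets + WIRE_OVERHEAD) * 8)
    pps = min(pps_budget, line_pps)
    return pps * frame_octets * 8 / 1e9

# A 12 Mpps box on 10GE: pps-bound at small frames, line-bound at large.
print(effective_gbps(64, 12e6, 10))    # -> ~6.14 Gbit/s (pps-bound)
print(effective_gbps(1518, 12e6, 10))  # -> ~9.87 Gbit/s (line-bound)
```

The crossover is why the same box is "80% of 10GE" at minimum-size frames and effectively line rate at typical frame sizes.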

Jim


On Fri, Jan 27, 2017 at 5:59 AM, Espen Johansen <pfse...@gmail.com> wrote:
> 1200 was my average packet size when analyzed in Dataguard Core network (a
> smb ISP here in .no). I'm sure others can find different averages. My
> point is just that if you have normal traffic patterns, even at 600 you
> should have no problem pushing 10GE. A MTU of 600 should give you about
> 53 gigabit/s if you are able to push 12000000 pps with that payload. Your
> statement of 80% is just confusing, that is all.
>
> On Fri, Jan 27, 2017, 04:02 Jim Thompson <j...@netgate.com> wrote:
>
>> On Thursday, January 26, 2017, Espen Johansen <pfse...@gmail.com> wrote:
>>
>> > Are you saying worst case is 80%? It's not normal to have all
>> > minimum size packets unless you are under ddos.
>> > Default ethernet is 1526 (1530 with vlan) with a MTU 1500 on a layer 1
>> > frame.
>> > A layer 2 frame is 1518 (1522 with vlan).
>> > If you want to include all layer headers then 1542 including vlan is
>> > the correct number and that will allow a 1500 octet payload.
>>
>>
>> Yes, I know, but adding a vlan tag means the small frame size isn't
>> "smallest". I was just throwing that in for comparison.
>>
>> Point is, on a 10g network, the maximum frame rate is 14.88 mpps.  This
>> is the highest rate required by the network under any circumstance. It's
>> also how you have to think about the problem if you're not going to
>> engage in making excuses.
>>
>> If you still don't like it, consider that:
>>
>> - 40g Ethernet cards exist today, so being able to forward 256 byte
>> packets at 40gbps will require the same 14.88 mpps rate,
>> - nx25 is the future in the data center; vswitches and vrouters are a
>> thing, and pfSense should be able to play in this market,
>> - 10g is starting to appear on lower-end hardware.
>> - 10g switches are starting to hit $100/port
>>
>> And also that netgate has product coming in 2017 that folds multiple
>> integrated switch ports into a single 2.5gbps or multiple 10gbps Ethernet
>> uplink ports.
>>
>> Remember, we're doing this in software.  No ASICs required.  That 12mpps
>> figure on an 8 core Rangeley includes 50 ACLs in the path.
>>
>> BTW, average frame size on the Internet is just under 600 bytes, not
>> 1200 as you guessed.
>>
>> Jim
>>
>> >
>> > On Thu, Jan 26, 2017, 18:20 Jim Thompson <j...@netgate.com> wrote:
>> >
>> > > > On Jan 26, 2017, at 5:06 PM, rai...@ultra-secure.de wrote:
>> > > >
>> > > > Am 2017-01-26 07:03, schrieb Jim Thompson:
>> > > >> It does not.
>> > > >> The c2758 SoC is interesting. 8 cores, and the on-die i354 is
>> > > >> essentially a block with 4 i350s on it.
>> > > >> These have 8 queues for each of rx and tx, so 16 each, for a
>> > > >> total of 64 queues.
>> > > >> On the c2xxx series (and other) boxes we ship, we increase
>> > > >> certain tunables, because we know what we're installing onto, and
>> > > >> can adjust that factory load. pfSense CE does not have that
>> > > >> luxury, it has to run on nearly anything the community finds to
>> > > >> run it on. Some of these systems have ... constrained RAM.  While
>> > > >> we test each release on every model we ship, such testing takes
>> > > >> place only for a handful of other configurations.
>> > > >> There is a decent explanation of some of the tunables here:
>> > > >> https://wiki.freebsd.org/NetworkPerformanceTuning
>> > > >> Incidentally, FreeBSD, and thus pfSense, can't take much
>> > > >> advantage of those multiqueue NICs, because the forwarding path
>> > > >> doesn't have the architecture to take advantage of them.  Our
>> > > >> DPDK-based system can forward l3 frames at over 12Mpps on this
>> > > >> hardware (about 80% of line-rate on a 10g interface).
>> > > >> Neither pfSense nor FreeBSD (nor Linux) will do 1/10th of this
>> > > >> rate.
>> > > >
>> > > > Hi, is this DPDK-based system commercially available?
>> > > >
>> > > > Rainer
>> > >
>> > > Still being developed.
>> > >
>> > > Jim
>> > > _______________________________________________
>> > > pfSense mailing list
>> > > https://lists.pfsense.org/mailman/listinfo/list
>> > > Support the project with Gold! https://pfsense.org/gold
