Re: [pfSense] PFSense for high-bandwidth environments

2016-02-24 Thread Vick Khera
On Tue, Feb 23, 2016 at 9:01 PM, Jim Thompson  wrote:

> Fun fact, this ’Netflix’ success is using the AES-GCM code that Netgate
> co-developed with the FreeBSD Foundation for use with IPsec.
>
> https://lists.freebsd.org/pipermail/freebsd-security/2014-November/008029.html
>
>
>
> Fun fact #2, a future variant of that work will leverage QuickAssist.
> http://store.netgate.com/QuickAssist-and-Other-Cards-C210.aspx
>
>
>
> Fun fact #3, we can achieve much higher PPS with the router we’re writing
> (leverages DPDK) and netmap-fwd than you can with
> fastforward.  (Where Chelsio NICs make life a bit more complex.)
> https://github.com/Netgate/netmap-fwd/blob/master/netmap-fwd.pdf
>
>

All I can say is "wow" and "thank you". Very impressive work! I look
forward to netmap-fwd the most.
___
pfSense mailing list
https://lists.pfsense.org/mailman/listinfo/list
Support the project with Gold! https://pfsense.org/gold

Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread Jim Thompson

> On Feb 23, 2016, at 9:43 PM, WebDawg  wrote:
> 
> Man I was looking at the price point on used 10Gbit nics and I think it is 
> time for a bit of an upgrade.

10Gbit Ethernet will be so common in three years that 1Gbps interfaces will
only be used for management.


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread Jim Thompson




-- Jim
> On Feb 23, 2016, at 9:38 PM, David Burgess  wrote:
> 
>> On Feb 23, 2016 7:01 PM, "Jim Thompson"  wrote:
>> 
>> perhaps you have a different definition of ‘wire speed’.  You have to
> fill the link with min-sized packets for “wire speed”.
>> (It’s trivial with large packets.)
>> 
>> This is, of course, what is probably happening with 2-3K
> 
> The definition I had in mind was 1000 megabits per second in both
> directions. I wasn't concerned with packet rates at that moment, and I
> can't tell you what numbers I was getting because I don't remember, and
> perhaps I didn't even record them.


Doesn't matter. 

1Gbps of min-sized frames (64 bytes, or 84 bytes on the wire including the
12-byte IFG, preamble, SFD & CRC) equates to 1,488,095 packets per second.

1Gbps of max-sized frames (1518 bytes, or 1538 bytes on the wire including the
12-byte IFG, preamble, SFD & CRC) equates to roughly 81,274 packets per second.

Neither FreeBSD nor Linux will forward packets at 1.488 Mpps on any conceivable
commodity hardware.

netmap-fwd will do 1.2 Mpps on a 1.7GHz C2000 (2220) with the type of minimal
routing table often found in pfSense installations.

Our DPDK router will do > 10 Mpps with a full BGP route table (570,000 routes)
and a rather large ACL table on a Broadwell-DE platform.
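The frame-rate arithmetic above can be checked with a short sketch, assuming
standard Ethernet overheads (7-byte preamble, 1-byte SFD, 12-byte inter-frame gap):

```python
# Wire-rate packets-per-second for Ethernet at a given link speed.
# Every frame carries 20 extra bytes on the wire beyond the frame itself:
# 7-byte preamble + 1-byte SFD + 12-byte inter-frame gap.
WIRE_OVERHEAD = 20  # bytes per frame on the wire

def pps(link_bps, frame_bytes):
    """Packets per second at full line rate for a given frame size."""
    return link_bps / ((frame_bytes + WIRE_OVERHEAD) * 8)

# Minimum frame: 64 bytes (header and CRC included) -> 84 bytes on the wire.
print(round(pps(1e9, 64)))    # 1488095  (the 1.488 Mpps figure)
# Maximum standard frame: 1518 bytes (1500 MTU + 18) -> 1538 on the wire.
print(round(pps(1e9, 1518)))  # 81274
# The same math at 10GbE gives the oft-quoted 14.88 Mpps line rate.
print(round(pps(10e9, 64)))   # 14880952
```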

> I think we all agree that if the OP has a lot of gamers online then small 
> packets and high packet rates are going to be a concern.

High packet rates are a much larger issue than raw bandwidth. 

> Thanks for the info on the new technologies. Networking is not a boring
> pursuit.


"Networking is the Vietnam of computing.  Something can nuke you from behind, 
and it's gone when you turn around. It's impossible to win a guerrilla war 
against a highly distributed enemy." -- Mike Smith

I am Agent Orange. 

Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread WebDawg
On Thu, Feb 18, 2016 at 11:29 AM, Rainer Duffner  wrote:
>
>> Am 18.02.2016 um 19:13 schrieb Walter Parker :
>>
>> There is an optimization coming for pfSense: a new user-space
>> routing daemon (netmap, I think) that can reach line rate on 10G NICs (14.88
>> Mpps). There was a BSDCon talk about a future version of pfSense
>> using this system. It uses ipfw, so there is a bit of work to adapt it to
>> pfSense.
>
>
>
>
> Also, AFAIK, chelsio NICs are better in the 10G space.
>
> ESF uses them in some of their appliances (see the shop).
> Netflix uses them, too, in their FreeBSD cache-boxes.
>
> They aren’t really that much more expensive than Intel NICs.
>
> I have no experience using them myself.
>


Man I was looking at the price point on used 10Gbit nics and I think
it is time for a bit of an upgrade.

Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread David Burgess
On Feb 23, 2016 7:01 PM, "Jim Thompson"  wrote:
>
> perhaps you have a different definition of ‘wire speed’.  You have to
fill the link with min-sized packets for “wire speed”.
> (It’s trivial with large packets.)
>
> This is, of course, what is probably happening with 2-3K

The definition I had in mind was 1000 megabits per second in both
directions. I wasn't concerned with packet rates at that moment, and I
can't tell you what numbers I was getting because I don't remember, and
perhaps I didn't even record them.

I think we all agree that if the OP has a lot of gamers online then small
packets and high packet rates are going to be a concern.

Thanks for the info on the new technologies. Networking is not a boring
pursuit.

db

Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread Jim Thompson

> On Feb 23, 2016, at 7:47 PM, Walter Parker  wrote:
> 
> On Tue, Feb 23, 2016 at 3:19 PM, Giles Davis  wrote:
> 
>> On 19/02/2016 17:12, David Burgess wrote:
>>> I'm a little surprised at your experience. A few years ago I built a
>>> PFSense unit with an Intel motherboard, 1st gen Core i3 CPU, and a
>>> single onboard Intel (em) GBE NIC. All routing was done through vlans
>>> and it had no trouble reaching wire speed with around 50% CPU usage.

perhaps you have a different definition of ‘wire speed’.  You have to fill the 
link with min-sized packets for “wire speed”.
(It’s trivial with large packets.)

This is, of course, what is probably happening with 2-3K ‘hardcore gamers’.
Lots of short packets.

>>> I do recommend using the net.inet.ip.fastforwarding=1 tweak if you
>>> can. Note that it breaks IPSEC and captive portal.

You’ll find that there is no such setting in pfSense software version 2.3,
because we now use tryforward(), which gives you all the speed of ‘fast
forwarding’ without breaking IPsec or captive portal.

(and therefore, there is nothing to ‘set’)

We tried to put this into FreeBSD 10.3, but there is a rare combination of
factors that results in it breaking NAT (though not the NAT used in pfSense).

>>> As far as 10G NICs, I was sure I read recently that the FreeNAS people
>>> were recommending Chelsio, but I can't find the reference now.
>> I imagine it's probably going to be our ridiculous PPS figures that
>> start to bottleneck things. There's 2-3 thousand hardcore gamers behind
>> these boxes when we run our events all generating shedloads of tiny UDP
>> packets, as well as a big demand for normal web browsing, downloading,
>> streaming on top of all that. What we used to see was the ix (and before
>> the 10G NICs the bge) driver heavily pushing single CPU cores - but at
>> about ~1.2Gbit we just start seeing small amounts of packet loss - even
>> when there's no obvious single cause. I'm guessing it's a combination of
>> a few factors, but to be honest we just move traffic off to another box
>> - PL for gamers is the end of the world. :(
>> 
>> I don't think we had set fastforwarding yet - so I'll definitely look
>> into that. Don't care about IPSec or captive portal at all!
>> 
>> We're also getting pricing for Chelsio NICs now too - so perhaps that'll
>> help as well.
>> 
>> Thanks again (and thanks Ed for those stats too).
>> 
>> Cheers,
>> Giles.
> 
> Fun fact, Netflix is using FreeBSD and is pushing >30 Gbps from systems
> using Chelsio NICs. See
> http://www.slideshare.net/facepalmtarbz2/slides-41343025 for details.

Fun fact, this ’Netflix’ success is using the AES-GCM code that Netgate 
co-developed with the FreeBSD Foundation for use with IPsec.
https://lists.freebsd.org/pipermail/freebsd-security/2014-November/008029.html

Fun fact #2, a future variant of that work will leverage QuickAssist.
http://store.netgate.com/QuickAssist-and-Other-Cards-C210.aspx

Fun fact #3, we can achieve much higher PPS with the router we’re writing 
(leverages DPDK) and netmap-fwd than you can with
fastforward.  (Where Chelsio NICs make life a bit more complex.)
https://github.com/Netgate/netmap-fwd/blob/master/netmap-fwd.pdf





Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread Walter Parker
On Tue, Feb 23, 2016 at 3:19 PM, Giles Davis  wrote:

> On 19/02/2016 17:12, David Burgess wrote:
> > I'm a little surprised at your experience. A few years ago I built a
> > PFSense unit with an Intel motherboard, 1st gen Core i3 CPU, and a
> > single onboard Intel (em) GBE NIC. All routing was done through vlans
> > and it had no trouble reaching wire speed with around 50% CPU usage.
> >
> > I do recommend using the net.inet.ip.fastforwarding=1 tweak if you
> > can. Note that it breaks IPSEC and captive portal.
> >
> > As far as 10G NICs, I was sure I read recently that the FreeNAS people
> > were recommending Chelsio, but I can't find the reference now.
> I imagine it's probably going to be our ridiculous PPS figures that
> start to bottleneck things. There's 2-3 thousand hardcore gamers behind
> these boxes when we run our events all generating shedloads of tiny UDP
> packets, as well as a big demand for normal web browsing, downloading,
> streaming on top of all that. What we used to see was the ix (and before
> the 10G NICs the bge) driver heavily pushing single CPU cores - but at
> about ~1.2Gbit we just start seeing small amounts of packet loss - even
> when there's no obvious single cause. I'm guessing it's a combination of
> a few factors, but to be honest we just move traffic off to another box
> - PL for gamers is the end of the world. :(
>
> I don't think we had set fastforwarding yet - so I'll definitely look
> into that. Don't care about IPSec or captive portal at all!
>
> We're also getting pricing for Chelsio NICs now too - so perhaps that'll
> help as well.
>
> Thanks again (and thanks Ed for those stats too).
>
> Cheers,
> Giles.
>


Fun fact, Netflix is using FreeBSD and is pushing >30 Gbps from systems
using Chelsio NICs. See
http://www.slideshare.net/facepalmtarbz2/slides-41343025 for details.


Walter
-- 
The greatest dangers to liberty lurk in insidious encroachment by men of
zeal, well-meaning but without understanding.   -- Justice Louis D. Brandeis


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-23 Thread Giles Davis
On 19/02/2016 17:12, David Burgess wrote:
> I'm a little surprised at your experience. A few years ago I built a
> PFSense unit with an Intel motherboard, 1st gen Core i3 CPU, and a
> single onboard Intel (em) GBE NIC. All routing was done through vlans
> and it had no trouble reaching wire speed with around 50% CPU usage.
>
> I do recommend using the net.inet.ip.fastforwarding=1 tweak if you
> can. Note that it breaks IPSEC and captive portal.
>
> As far as 10G NICs, I was sure I read recently that the FreeNAS people
> were recommending Chelsio, but I can't find the reference now.
I imagine it's probably going to be our ridiculous PPS figures that
start to bottleneck things. There's 2-3 thousand hardcore gamers behind
these boxes when we run our events all generating shedloads of tiny UDP
packets, as well as a big demand for normal web browsing, downloading,
streaming on top of all that. What we used to see was the ix (and before
the 10G NICs the bge) driver heavily pushing single CPU cores - but at
about ~1.2Gbit we just start seeing small amounts of packet loss - even
when there's no obvious single cause. I'm guessing it's a combination of
a few factors, but to be honest we just move traffic off to another box
- PL for gamers is the end of the world. :(

I don't think we had set fastforwarding yet - so I'll definitely look
into that. Don't care about IPSec or captive portal at all!

We're also getting pricing for Chelsio NICs now too - so perhaps that'll
help as well.

Thanks again (and thanks Ed for those stats too).

Cheers,
Giles.



Re: [pfSense] PFSense for high-bandwidth environments

2016-02-19 Thread David Burgess
On Thu, Feb 18, 2016 at 10:26 AM, Giles Davis  wrote:
>
>
> Using Intel E3-1270s and Intel 10G NICs (forget the exact model, but
> they use the BSD ix driver) we start seeing packet loss and a general
> maximum throughput at around 1-1.2Gbit. Our 'solution' so far of just
> adding more appliances and splitting the load really won't scale
> forever, so if anyone has any suggestions of 'better hardware' or BSD
> optimizations that would let us push more through a single appliances,
> i'd love to hear it. We've got a reasonable set of BSD networking tweaks
> and optimizations that certainly help, but we still can't manage to push
> more than our little-over-a-gigabit maximum before things start wobbling.



I'm a little surprised at your experience. A few years ago I built a
PFSense unit with an Intel motherboard, 1st gen Core i3 CPU, and a
single onboard Intel (em) GBE NIC. All routing was done through vlans
and it had no trouble reaching wire speed with around 50% CPU usage.

I do recommend using the net.inet.ip.fastforwarding=1 tweak if you
can. Note that it breaks IPSEC and captive portal.

As far as 10G NICs, I was sure I read recently that the FreeNAS people
were recommending Chelsio, but I can't find the reference now.

db


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-19 Thread ED Fochler
Don’t assume that this is the upper bound, but I get 800 MB/s on my Myricom card
and 600 MB/s on my Chelsio card, both at standard Ethernet frame size, so
predominantly 1500-byte packets.  I’m using these for data transfer, so I’m
measuring in MB, not Mb.  The switch you’re connecting to also matters.
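Since these throughput figures are in megabytes while link speeds are quoted
in bits, the conversion is worth making explicit; a rough sketch that ignores
framing overhead:

```python
# Convert storage-style MB/s throughput to network-style Gbit/s.
# Rough conversion only: 8 bits per byte; Ethernet/IP/TCP overhead ignored.
def mbytes_to_gbits(mb_per_s):
    return mb_per_s * 8 / 1000.0

print(mbytes_to_gbits(800))  # 6.4  -> the Myricom figure, ~6.4 Gbit/s
print(mbytes_to_gbits(600))  # 4.8  -> the Chelsio figure, ~4.8 Gbit/s
```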

ED.


> On 2016, Feb 19, at 11:54 AM, Giles Davis  wrote:
> 
> On 19/02/2016 16:19, ED Fochler wrote:
>> My experience has been that intel nics are bad in the 10G space, especially 
>> under BSD.  I’ve had good luck with Myricom and Chelsio on BSD, though I 
>> haven’t used either specifically on PFSense.
>> 
>> 
>>> 
>>> Also, AFAIK, chelsio NICs are better in the 10G space.
>>> 
>>> ESF uses them in some of their appliances (see the shop).
>>> Netflix uses them, too, in their FreeBSD cache-boxes.
>>> 
>>> They aren’t really that much more expensive than Intel NICs.
>>> 
>>> I have no experience using them myself.
>>> 
>>> 
> 
> Interesting - thanks all. Netmap looks very interesting - i'll keep an
> eye on the development work with that one.
> 
> Hadn't come across either Myricom or Chelsio NICs before - but am
> looking now. I'd heard from others in the past that Solarflare NICs also
> performed well, although they're a significantly higher price point than
> the Intels!
> 
> Don't suppose anyone has any real-world performance indicators, even
> just anecdotal ones, on reasonable throughput levels that can be
> expected with any of these alternative NICs? I agree, the Intel cards
> don't seem to do massively well under BSD.
> 
> Thanks again for the replies all - some good suggestions to go and look
> at there. :)
> 
> Cheers,
> Giles.


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-19 Thread Giles Davis
On 19/02/2016 16:19, ED Fochler wrote:
> My experience has been that intel nics are bad in the 10G space, especially 
> under BSD.  I’ve had good luck with Myricom and Chelsio on BSD, though I 
> haven’t used either specifically on PFSense.
>
>
>>
>> Also, AFAIK, chelsio NICs are better in the 10G space.
>>
>> ESF uses them in some of their appliances (see the shop).
>> Netflix uses them, too, in their FreeBSD cache-boxes.
>>
>> They aren’t really that much more expensive than Intel NICs.
>>
>> I have no experience using them myself.
>>
>>

Interesting - thanks all. Netmap looks very interesting - i'll keep an
eye on the development work with that one.

Hadn't come across either Myricom or Chelsio NICs before - but am
looking now. I'd heard from others in the past that Solarflare NICs also
performed well, although they're a significantly higher price point than
the Intels!

Don't suppose anyone has any real-world performance indicators, even
just anecdotal ones, on reasonable throughput levels that can be
expected with any of these alternative NICs? I agree, the Intel cards
don't seem to do massively well under BSD.

Thanks again for the replies all - some good suggestions to go and look
at there. :)

Cheers,
Giles.

Re: [pfSense] PFSense for high-bandwidth environments

2016-02-19 Thread ED Fochler
My experience has been that Intel NICs are bad in the 10G space, especially 
under BSD.  I’ve had good luck with Myricom and Chelsio on BSD, though I 
haven’t used either specifically on PFSense.


> On 2016, Feb 18, at 1:29 PM, Rainer Duffner  wrote:
> 
> 
>> Am 18.02.2016 um 19:13 schrieb Walter Parker :
>> 
>> There is an optimization coming for pfSense: a new user-space
>> routing daemon (netmap, I think) that can reach line rate on 10G NICs (14.88
>> Mpps). There was a BSDCon talk about a future version of pfSense
>> using this system. It uses ipfw, so there is a bit of work to adapt it to
>> pfSense.
> 
> 
> 
> 
> Also, AFAIK, chelsio NICs are better in the 10G space.
> 
> ESF uses them in some of their appliances (see the shop).
> Netflix uses them, too, in their FreeBSD cache-boxes.
> 
> They aren’t really that much more expensive than Intel NICs.
> 
> I have no experience using them myself.
> 


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-18 Thread Rainer Duffner

> Am 18.02.2016 um 19:13 schrieb Walter Parker :
> 
> There is an optimization coming for pfSense: a new user-space
> routing daemon (netmap, I think) that can reach line rate on 10G NICs (14.88
> Mpps). There was a BSDCon talk about a future version of pfSense
> using this system. It uses ipfw, so there is a bit of work to adapt it to
> pfSense.




Also, AFAIK, chelsio NICs are better in the 10G space.

ESF uses them in some of their appliances (see the shop).
Netflix uses them, too, in their FreeBSD cache-boxes.

They aren’t really that much more expensive than Intel NICs.

I have no experience using them myself.


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-18 Thread Walter Parker
There is an optimization coming for pfSense: a new user-space
routing daemon (netmap, I think) that can reach line rate on 10G NICs (14.88
Mpps). There was a BSDCon talk about a future version of pfSense
using this system. It uses ipfw, so there is a bit of work to adapt it to
pfSense.
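The 14.88 Mpps figure follows from minimum-size Ethernet frames at 10 Gbit/s,
and it implies a brutally small per-packet time budget, which is why
kernel-bypass approaches like netmap matter; a quick sketch (standard 84-byte
minimum wire frame assumed):

```python
# Per-packet time budget at 10GbE line rate with minimum-size frames.
LINE_RATE_BPS = 10e9
MIN_WIRE_FRAME_BITS = 84 * 8     # 64-byte frame + 20 bytes preamble/SFD/IFG

pps = LINE_RATE_BPS / MIN_WIRE_FRAME_BITS
ns_per_packet = 1e9 / pps

print(round(pps))               # 14880952 -> the 14.88 Mpps line rate
print(round(ns_per_packet, 1))  # 67.2 ns per packet, only ~200 cycles at 3 GHz
```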


Walter

On Thu, Feb 18, 2016 at 9:26 AM, Giles Davis  wrote:

> Hello PFSense Collective,
>
> At the risk of sounding slightly 'cheap', does anyone (else) on this
> list have experience of 'good combinations' of hardware for PFSense
> appliances that will handle high-traffic levels and comments on
> reasonable maximum levels of throughput to expect from them?
>
> We've been using PFSense for quite some time for large events and these
> days are pushing up to 4Gbit/sec to the internet via our PFSense boxes,
> to 2-3k clients - with expectation of bigger events in the reasonably
> near future.
>
> Using Intel E3-1270s and Intel 10G NICs (forget the exact model, but
> they use the BSD ix driver) we start seeing packet loss and a general
> maximum throughput at around 1-1.2Gbit. Our 'solution' so far of just
> adding more appliances and splitting the load really won't scale
> forever, so if anyone has any suggestions of 'better hardware' or BSD
> optimizations that would let us push more through a single appliance,
> I'd love to hear it. We've got a reasonable set of BSD networking tweaks
> and optimizations that certainly help, but we still can't manage to push
> more than our little-over-a-gigabit maximum before things start wobbling.
>
> We're not asking for a huge amount of traffic inspection from our
> environment (used to do a fair bit of traffic shaping, but have managed
> to provide sufficient bandwidth to meet natural demand for a while now)
> - but historically PFSense has been a great appliance to have in the
> network for firewalling and monitoring.
>
> Thanks in advance for any suggestions and thanks to the maintainers for
> such a great firewall implementation. :)
>
> Cheers,
> Giles.



-- 
The greatest dangers to liberty lurk in insidious encroachment by men of
zeal, well-meaning but without understanding.   -- Justice Louis D. Brandeis


Re: [pfSense] PFSense for high-bandwidth environments

2016-02-18 Thread compdoc
> Using Intel E3-1270s and Intel 10G NICs

I can't point to a specific setup, but something to look at...

Your Xeon is a Sandy Bridge with a max transfer rate of 5 GT/s, which is
very nice, but the new Skylake CPUs are 8 GT/s.

Also, there's always a possibility of equipment failure/setup problems... 





[pfSense] PFSense for high-bandwidth environments

2016-02-18 Thread Giles Davis
Hello PFSense Collective,

At the risk of sounding slightly 'cheap', does anyone (else) on this
list have experience of 'good combinations' of hardware for PFSense
appliances that will handle high-traffic levels and comments on
reasonable maximum levels of throughput to expect from them?

We've been using PFSense for quite some time for large events and these
days are pushing up to 4Gbit/sec to the internet via our PFSense boxes,
to 2-3k clients - with expectation of bigger events in the reasonably
near future.

Using Intel E3-1270s and Intel 10G NICs (forget the exact model, but
they use the BSD ix driver) we start seeing packet loss and a general
maximum throughput at around 1-1.2Gbit. Our 'solution' so far of just
adding more appliances and splitting the load really won't scale
forever, so if anyone has any suggestions of 'better hardware' or BSD
optimizations that would let us push more through a single appliance,
I'd love to hear it. We've got a reasonable set of BSD networking tweaks
and optimizations that certainly help, but we still can't manage to push
more than our little-over-a-gigabit maximum before things start wobbling.

We're not asking for a huge amount of traffic inspection from our
environment (used to do a fair bit of traffic shaping, but have managed
to provide sufficient bandwidth to meet natural demand for a while now)
- but historically PFSense has been a great appliance to have in the
network for firewalling and monitoring.

Thanks in advance for any suggestions and thanks to the maintainers for
such a great firewall implementation. :)

Cheers,
Giles.