Hi Chris,

Thank you for your time!

On 2013-10-24 20:02, Chris Bagnall wrote:
- full 450Mbps between the WLAN and pfSense

Even with 450Mbps *radios* I'd be amazed if you get more than ~80Mbps out of your WLAN. Not a pfSense limitation, just a reality of claimed WLAN radio speeds. I generally expect to see ~55-65Mbps out of 2x2 radios, so ~80Mbps out of 3x3 is probably realistic.


Ok, I see. Does this change with a router that has a gigabit NIC to connect to pfSense, or isn't that the bottleneck?

Unless you're in a really isolated area, using an 80MHz channel (which is what you'd need for 450Mbps radio speed) will slaughter spectrum availability for your neighbours. Short of really needing that speed, try to stick with 20MHz channels where possible. And if you're in a very congested WiFi area, you may even get better speeds out of 20MHz (much easier to find one free 20MHz channel than a free 80MHz channel).

I will use an 802.11n router with 3 antennas that is able to operate simultaneously in the 2.4 GHz and 5 GHz bands, so it advertises "up to 900Mbps" (i.e. 450Mbps in the 2.4 GHz plus 450Mbps in the 5 GHz band). I do not know if it is able to use 80 MHz channels, but I read on Wikipedia that those are only available in the new 802.11ac generation and not in the 802.11n gear that I own. Is that correct? Could I tweak an 802.11n router to use 80 MHz channels, e.g. by using an alternative firmware such as dd-wrt? The premises where the router will be installed are indeed quite remote, and when I did a brief check with a mobile device, it did not detect any other WLANs at all.
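Once everything is set up, I suppose I could verify the real-world WLAN throughput myself with something like iperf, run between a wireless client and a wired host behind pfSense (the address below is just an example):

    # On a wired host behind pfSense (e.g. 192.168.1.10):
    iperf -s

    # On the wireless client; 10-second TCP test against that host:
    iperf -c 192.168.1.10 -t 10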


- maximum VPN speed without slowdowns due to hardware limitations,
i.e. as near to wire speed as possible

Depends on your choice of crypto algorithm and whether you can do it in hardware.

The CPU/motherboard combination available (see above) unfortunately does not support any hardware encryption instructions in the CPU (such as AES-NI), so the crypto will be done entirely in software. I was thinking about AES - although Christopher and Jim's book says that Blowfish and CAST would be better choices for non-hardware-accelerated cryptography - because I am more familiar with it and do not know much about Blowfish or anything at all about CAST. Do you have any advice on this one?
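If it helps the decision, I could simply benchmark the candidates with OpenSSL on the actual box. A rough sketch (the exact cipher names available depend on the OpenSSL build):

    # Software-only throughput of the candidate ciphers:
    openssl speed -evp aes-128-cbc
    openssl speed bf-cbc cast-cbc

    # Double-check that FreeBSD really found no AES-NI on this CPU
    # (the flag would appear in the CPU feature lines at boot):
    grep -i aesni /var/run/dmesg.boot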


1. Would the Core2Duo CPU be sufficient for my requirements, or should I
choose the 2.4 GHz quad-core, the 2.89 GHz quad-core or maybe an even
more powerful CPU or a totally different setup?

When I was deploying a Quagga-based BGP setup in a datacentre a couple of years ago, the general consensus was that cores are more important than raw clock speed - so 4x2.4GHz is better than 2x3.4GHz - at least when using multiple interfaces. This was, however, with Linux hosts. One of the nice things about those Intel server cards is the ability to lock NIC affinity to CPUs/cores, so you can effectively task a core to one or more NIC ports.

Hopefully others will chime in as to whether the same is true with FreeBSD - I seem to recall there were SMP/multi-core efficiency issues with earlier FreeBSD versions - hopefully those have been ironed out by now.
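For what it's worth, on FreeBSD that kind of NIC-to-core pinning can be done with cpuset - a rough sketch (the IRQ number is hypothetical; vmstat shows the real ones):

    # Find the interrupt sources belonging to the NIC queues:
    vmstat -i

    # Pin a NIC interrupt (here IRQ 256, made up) to core 1:
    cpuset -l 1 -x 256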


Ok, but which of the 3 CPUs that I have at my disposal would you choose to meet my requirements?

2. Is there any other bottleneck that will prevent me from meeting my
performance requirements?

Bonding is not a guarantee of doubled speeds. In my experience, bonding 2 gigabit NICs will generally yield around 1.2-1.4Gbps raw throughput. You are very unlikely to get 2Gbps. Bonding is more about redundancy (failover) than throughput at this level. If you really need >1Gbps, you're going to have to consider 10GE kit.

10Gbps unfortunately is totally out of financial scope for this project - and I guess it would be overkill, too. I have to stick with the hardware listed above. The reason I was thinking about bonding is to add "an additional channel" between LAN <-> DMZ.
Let me explain what traffic is expected:

WAN <-> DMZ:
- Access to a web server in the DMZ
- Access to an FTP server in the DMZ with a lot of bulk traffic, transferring very big files over very long periods, possibly with concurrent users (i.e. using all of the 2x 10Mbps upload bandwidth for many hours at a time; that said: is FTP via dual WAN possible in the meantime, or is there still the restriction of using only one uplink?)
- A VoIP PBX that routes up to 5 concurrent phone calls between WAN and LAN

LAN <-> DMZ:
- Many times per day, a lot of bulk FTP traffic initiated by clients in the LAN that are connected with gigabit NICs.

I want to work with VLANs and QoS so that normal traffic and VoIP traffic are prioritized as far as possible above the bulk FTP traffic, but my idea was that adding a second channel to the bond between DMZ and LAN might further reduce the chances of jamming the line for normal web browsing or of running into VoIP latency problems.
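To make the prioritization idea concrete, here is a rough pf.conf-style sketch of what I have in mind - the interface name, bandwidth figure and ports are placeholders (my understanding is that pfSense's traffic shaper generates ALTQ rules along these lines):

    # PRIQ shaper on the DMZ interface (em1 is a placeholder):
    altq on em1 priq bandwidth 950Mb queue { q_voip, q_web, q_bulk }
    queue q_voip priority 7
    queue q_web  priority 4 priq(default)
    queue q_bulk priority 1

    # SIP into the high-priority queue, FTP into the low-priority one:
    pass out on em1 proto udp from any to any port 5060 queue q_voip
    pass out on em1 proto tcp from any to any port { 20, 21 } queue q_bulk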

So to summarize: what I want to achieve is to be able to copy files from the gigabit clients living in the LAN back and forth to the DMZ and still have some additional bandwidth so that the other traffic is not jammed. I have not yet implemented QoS with pfSense, but my past experience with QoS on another perimeter firewall distribution (Endian) was not 100% satisfactory: I continued to have e.g. VoIP or browsing latencies when transferring bulk traffic (much better with QoS than without, but never perfect).

So my question is: OK, 2x Gigabit != 2 Gigabit. But do you think it would still contribute to my objective to add a second channel to a bond, so that there would be 2x Gigabit = 1 Gigabit for the user transferring bulk traffic plus an additional 0.2-0.4 Gigabit for VoIP, browsing, etc., or is it senseless to do it this way?
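For reference, my understanding is that under the hood such a bond is a FreeBSD lagg(4) interface, roughly like this (em2/em3 and the address are placeholders; pfSense configures this via its GUI):

    # LACP bond of two gigabit ports for the LAN <-> DMZ path:
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport em2 laggport em3
    ifconfig lagg0 192.168.10.1/24 up

If I understand LACP correctly, it hashes traffic per flow, so a single FTP transfer would still top out at one link's speed; the second link would mainly give the remaining traffic its own headroom - which is exactly what I am hoping for.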


You're already thinking about redundancy with the multiple-NIC considerations, but in my experience, NICs don't really fail that often - at least not compared to fans, power supplies and other PC components. Consider whether a 2x pfSense cluster in CARP might be better suited to your needs if redundancy/failover is a critical requirement.

The additional redundancy that would come with the bond is a nice extra benefit of this plan to increase bandwidth and fight VoIP and browsing latencies, but it is not necessarily my primary objective. That said, I can report that I have indeed had 2-3 NICs die on my perimeter firewall in the past (within a period of approx. 5 years) - but in all cases they were cheap $10 PCI Realteks, and I hope that the professional Intel cards are of better quality.

As for CARP: I certainly find it interesting, but unfortunately I have no further budget to buy additional hardware; I have to use what is listed above. Additionally, CARP adds a level of complexity that I am not able to cope with at this time, since I am not all too experienced with pfSense yet. But maybe the next upgrade after this one will be such a solution - I'll have to see.
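For future reference, my understanding is that a CARP virtual IP boils down to a shared address with a virtual host ID, something like this on FreeBSD of that era (vhid, password and address are made up; pfSense wraps this in its GUI):

    # Shared virtual IP on the LAN side; the backup box would use the
    # same vhid and password but a higher advskew:
    ifconfig carp0 create
    ifconfig carp0 vhid 1 advskew 0 pass examplepass 192.168.1.1/24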

Looking at your hardware again, you've specced 12 NICs, but from what I can see from your config, you only need 8 (2 VDSL ports, 2 bonded ports for LAN, 2 bonded ports for DMZ and - I assume - 2 bonded ports for WLAN).

That is correct; I will also use some additional, non-bonded OPT zones with occasional low traffic that I did not mention yet.

4x on-board Realtek 8111C Gigabit NICs

Personally I'd spec a board that has Intel or Broadcom NICs - the Realtek ones are just rubbish by comparison. There is no shortage of boards with 2 Intel NICs on them these days. Look at some of the Intel-manufactured boards rather than third parties - they nearly always have Intel NICs. A few years back I used lots of DG965RY boards (Intel NIC, onboard video, so ideal for server environments).

Unfortunately I have to stick with the consumer motherboard that I have at my disposal right now. But I will use the Realteks only for zones with very low / occasional traffic.
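Once the box is running, I suppose I can confirm which driver attached to each port - re(4) for the Realteks, em(4)/igb(4) for the Intel server card - with something like:

    # List PCI network devices together with the attached drivers:
    pciconf -lv | grep -B4 -i network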

PCIe 3ware 9650SE RAID Controller with 2 SATA disks RAID0 or 3 SATA
disks RAID5

Given pfSense uses <1GB space, why? A little SSD on the chipset's native SATA controller should be fine (see above, use CARP for redundancy).

In general I use hardware RAID in all my servers so as to have a BBU - and preferably also data parity, e.g. RAID5/6 - to have the best chances of continued data integrity at all times, no matter what happens to the power supply, the OS (crashes) or the disk surface (bad sectors). Yet, as far as I can tell, many people use pfSense without such safeguards in professional production systems, so I assume there might be a reason why they abstain from such measures. Is pfSense immune to sudden power losses, system crashes and media surface failures, e.g. because it has read-only file systems or something similar, so that adding RAID, parity, BBU, etc. is never needed? Or is it just a compromise, weighing cost against risk and deciding to accept the risk? As I have a RAID controller and disks in stock, I could use them at no extra cost.
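If I do use the 9650SE, I would of course monitor it from within the OS; as far as I know 3ware's tw_cli utility is available on FreeBSD (the controller number c0 is an assumption):

    # Overall controller and array status:
    tw_cli /c0 show

    # Battery backup unit summary, if the BBU is fitted:
    tw_cli /c0/bbu show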

Kind regards,

Chris

Thanks for your help!
Kind regards,
Thinker Rix

_______________________________________________
List mailing list
List@lists.pfsense.org
http://lists.pfsense.org/mailman/listinfo/list
