On Oct 24, 2013, at 12:02 PM, Chris Bagnall <[email protected]> wrote:
> On 24/10/13 5:30 pm, Thinker Rix wrote:
>> I want to have:
>> - full Gigabit wire speed between the DMZ and the LAN zone (i.e. 2x
>> Gigabit at max)
>
> Would have thought you'd be fine here.
>
>> - full 450Mbps between the WLAN and pfsense
>
> Even with 450Mbps *radios* I'd be amazed if you get more than ~80Mbps out
> of your WLAN. Not a pfSense limitation, just a reality of WLAN claimed
> radio speeds. I generally expect to see ~55-65Mbps out of 2x2 radios, so
> ~80Mbps out of 3x3 is probably realistic.

Depends on your RF environment and channel orthogonality.

> Unless you're in a really isolated area, using an 80MHz channel (which is
> what you'd need for 450Mbps radio speed) will slaughter spectrum
> availability for your neighbours. Short of really needing that speed, try
> to stick with 20MHz channels where possible. And if you're in a very
> congested WiFi area, you may even get better speeds out of 20MHz (it's
> much easier to find one free 20MHz channel than a free 80MHz channel).
>
>> - maximal VPN speed without speed break due to hardware limitations,
>> i.e. as near to wire speed as possible
>
> Depends on your choice of crypto algorithm and whether you can do it in
> hardware.

I'd recommend a CPU that supports AES-NI, even if the FreeBSD support for it
turns out to be lagging. "Wire speed" would need to be defined; I do know of
boxes that will run at 25Gbps. As the guy at the hot rod shop told me 30
years ago, "Speed costs money, son. How fast do you want to go?"

>> 1. Would the Core2Duo CPU be sufficient for my requirements, or should I
>> choose the 2.4GHz quad-core, the 2.89GHz quad-core, or maybe an even
>> more powerful CPU or a totally different setup?
>
> When I was deploying a Quagga-based BGP setup in a datacentre a couple of
> years ago, the general consensus was that cores are more important than
> raw clock speed - so 4x2.4GHz is better than 2x3.4GHz - at least when
> using multiple interfaces.

That's not what I'd have guessed.
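To put rough numbers on "speed costs money": a back-of-envelope cycles-per-byte budget shows why AES-NI matters at gigabit rates. The clock and link figures below are illustrative assumptions, not measurements of any particular box.

```python
# Rough cycles-per-byte budget for doing crypto at line rate on one core.
# The clock and link-rate figures are illustrative assumptions.

def cycles_per_byte_budget(clock_hz: float, rate_bps: float) -> float:
    """Cycles available per byte when pushing rate_bps through one core."""
    return clock_hz / (rate_bps / 8.0)

# One 2.4 GHz core at 1 Gbps has roughly 19 cycles to spend on each byte:
budget = cycles_per_byte_budget(2.4e9, 1e9)  # 19.2 cycles/byte
```

Software AES implementations are often quoted at well over ten cycles per byte before you count the rest of the packet path, while AES-NI brings the cipher itself down to a few cycles per byte, which is the whole argument for it here.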
If your application load is single-threaded (or a single process), then clock
speed will win every time. If your load can be broken down into pieces that
execute in parallel, then cores will be a win. You've not specified the
problem well enough to discuss.

An AS with internal BGP (iBGP) must have all of its iBGP peers connected to
each other in a full mesh (where everyone speaks to everyone directly). This
full-mesh configuration requires that each router maintain a session to every
other router. In large networks, this number of sessions may degrade router
performance, due to either a lack of memory or too high a CPU load. You'll
also need to give some serious consideration to the reliability of the
network and its constituent parts. If those wireless links are for exterior
paths, and not simply 802.11 LANs, then you're in for a huge amount of
trouble, because wireless isn't reliable. At all.

> This was, however, with Linux hosts. One of the nice things about those
> Intel server cards is the ability to lock NIC affinity to CPUs/cores, so
> you can effectively task a core to one or more NIC ports.

But that would require completely re-architecting the application(s).

> Hopefully others will chime in as to whether the same is true with FreeBSD
> - I seem to recall there were SMP/multi-core efficiency issues with earlier
> FreeBSD versions - hopefully those have been ironed out by now.
>
>> 2. Is there any other bottleneck that will prevent my performance
>> requirements?
>
> Bonding is not a guarantee of doubled speeds. In my experience, bonding 2
> gigabit NICs will generally yield around 1.2-1.4Gbps raw throughput. You
> are very unlikely to get 2Gbps. Bonding is more about redundancy (failover)
> than throughput at this level. If you really need >1Gbps, you're going to
> have to consider 10GE kit.
>
>> 3.
>> When bonding the NICs, I was planning to use a port on each of the
>> PCIe cards so as to have a little bit of redundancy should an expansion
>> card fail. Will there be significant performance losses due to this
>> spread over 2 expansion cards, so that it would be much better to bond
>> two NICs that live on the same expansion card and forget about the
>> additional redundancy?
>
> No, I agree that bonding 2 ports on separate cards is the best option.
>
> You're already thinking redundancy with the multiple NIC considerations,
> but in my experience, NICs don't really fail that often - at least not
> compared to fans, power supplies and other PC components. Consider whether
> a 2x pfSense cluster in CARP might be closer to your needs if
> redundancy/failover is a critical requirement.
>
> Looking at your hardware again, you've specced 12 NICs, but from what I
> can see from your config, you only need 8 (2 VDSL ports, 2 bonded ports
> for LAN, 2 bonded ports for DMZ, (assuming) 2 bonded ports for WLAN).
>
>> 4x on-board Realtek 8111C Gigabit NICs
>
> Personally I'd spec a board that has Intel or Broadcom NICs

We agree.

> - the Realtek ones are just rubbish by comparison. There is no shortage of
> boards with 2 Intel NICs on them these days. Look at some of the
> Intel-manufactured boards rather than third parties - they nearly always
> have Intel NICs. A few years back I used lots of DG965RY boards (Intel
> NIC, onboard video, so ideal for server environments).
>
>> PCIe 3ware 9650SE RAID Controller with 2 SATA disks RAID0 or 3 SATA
>> disks RAID5
>
> Given pfSense uses <1GB space, why? A little SSD on the chipset's native
> SATA controller should be fine (see above, use CARP for redundancy).
>
> Kind regards,
>
> Chris
> --
> This email is made from 100% recycled electrons
> _______________________________________________
> List mailing list
> [email protected]
> http://lists.pfsense.org/mailman/listinfo/list
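On the iBGP full-mesh point above: the session count grows quadratically with the number of speakers, which is exactly where the memory and CPU pressure comes from. The arithmetic in a couple of lines:

```python
# Full-mesh iBGP: every speaker peers with every other speaker, so each
# router maintains n-1 sessions and the AS as a whole carries n*(n-1)/2.

def ibgp_full_mesh_sessions(n_routers: int) -> int:
    """Total iBGP sessions in a full mesh of n_routers speakers."""
    return n_routers * (n_routers - 1) // 2

print(ibgp_full_mesh_sessions(10))  # 45
print(ibgp_full_mesh_sessions(50))  # 1225
```

That quadratic growth is why large ASes move to route reflectors or confederations rather than scaling the mesh itself.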
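On why bonding rarely doubles throughput: LACP-style aggregation assigns each flow to one member port by hashing packet headers, so a single flow never exceeds one link's speed, and uneven hashing keeps the aggregate short of 2x. A toy sketch of the idea — the hash here is purely illustrative, not what any particular lagg or bonding driver actually uses:

```python
# Toy model of per-flow hashing in link aggregation. Real drivers hash
# L2/L3/L4 header fields with their own functions; this is illustrative.
import hashlib

def member_port(src: str, dst: str, n_ports: int = 2) -> int:
    """Pick the bond member port that carries the src->dst flow."""
    digest = hashlib.sha1(f"{src}->{dst}".encode()).digest()
    return digest[0] % n_ports

# The same flow always hashes to the same port, so one flow is capped at
# one link's speed no matter how many ports are in the bond.
flow_port = member_port("aa:bb:cc:00:11:22", "ff:ee:dd:33:44:55")
```

Two hosts doing one big transfer therefore see one gigabit at best; the 1.2-1.4Gbps figure quoted above comes from many flows landing unevenly across the two ports.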
_______________________________________________
List mailing list
[email protected]
http://lists.pfsense.org/mailman/listinfo/list
