Let me explain my setup in a little more detail.

~10,000 ports across 80 switch stacks, plus over 200 Meru access points
connected via a controller.  All authentication is RADIUS: MAC-based for the
switches, and a combination of MAC-based and 802.1X for wireless.

2 VMware VMs with 8 GB RAM and 4 cores each (I'll probably drop that to 2-3
cores and 4 GB).  An anti-affinity rule is set in VMware to keep them running
on separate hosts.
Both run the admin, portal, and web services, plus memcached, the DHCP
listener, pfdns, pfmon, pfsetvlan, and snmptrapd.
RADIUS is listening on the local interfaces of both servers, and the
switches/APs are configured to use both hosts for redundancy.
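For example, on a Cisco-style switch that would look something like this (the
IPs and shared secret below are placeholders, not our real values):

    ! point the switch at both PF nodes; if the first doesn't answer,
    ! the second is tried automatically
    radius-server host 10.10.10.11 auth-port 1812 acct-port 1813 key SECRET
    radius-server host 10.10.10.12 auth-port 1812 acct-port 1813 key SECRET
    radius-server deadtime 5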
There is a floating IP, handled by Pacemaker, that is the target for DHCP
relay (feeding the DHCP listener) and for the admin interface.
DHCP for the registration/isolation networks runs on one box at a time, also
handled by Pacemaker.
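On the Pacemaker side it is roughly something like this (crm shell syntax; the
IP, netmask and resource names are placeholders for illustration, and dhcpd is
assumed to have an LSB init script):

    # floating IP that follows the "master" node
    primitive pf-vip ocf:heartbeat:IPaddr2 \
        params ip=10.10.10.10 cidr_netmask=24 \
        op monitor interval=10s
    # DHCP for the registration/isolation networks, active on one node only
    primitive pf-dhcpd lsb:dhcpd \
        op monitor interval=30s
    # keep dhcpd with the floating IP, and bring the IP up first
    colocation dhcp-with-vip inf: pf-dhcpd pf-vip
    order vip-before-dhcp inf: pf-vip pf-dhcpd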
I modified pfmon to only perform its maintenance tasks on the node that holds
the floating IP (the "master").
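Conceptually the change amounts to a guard like this before the maintenance
code runs (shown as a shell sketch; the VIP is a placeholder and
run_maintenance_tasks stands in for the real pfmon maintenance code):

    #!/bin/sh
    # only do maintenance on the node that currently holds the floating IP
    VIP=10.10.10.10
    if ip addr show | grep -q "inet ${VIP}/"; then
        # we are the "master", so it is safe to run the maintenance tasks
        run_maintenance_tasks
    fi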
A cron job syncs the configs from the "master" to the "slave" every minute.
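Nothing fancy there, just a crontab entry along these lines (the standby
hostname is an example):

    # push the PacketFence configuration to the standby node every minute
    * * * * * rsync -a /usr/local/pf/conf/ pf2:/usr/local/pf/conf/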


1 VMware Fault Tolerant VM with 2 GB RAM and 1 core for the MySQL database.
For those who aren't familiar, Fault Tolerant VMs run on two ESXi hosts at the
same time, so you can literally pull the plug on one host and the VM keeps
running on the other.

The only reason we have two PF boxes is to have ZERO outage if one of the
ESXi hosts dies.  Not even the time it takes for another host to restart the
VM.

The CPU usage on the VMs is minimal; PF is pretty lightweight for what it
does.  Now, if you were running in SNMP-managed mode instead of RADIUS, I
could see it using more resources.  But your idea of having 11 servers
running PF is overkill.  I could literally jump from 10,000 ports to 50,000
ports and from 200 to 1,000 APs without having to increase the resources
available to my VMs at all.




On Fri, Mar 28, 2014 at 7:28 AM, Tim DeNike <[email protected]> wrote:

> We have about 10000 ports and 200 access points via a controller running
> on 2 pf vms that share a vmware fault tolerant SQL database. The only
> reason we have 2 vms is to minimize downtime if a vmware host goes down.
>  Your proposed solution would be good for a "bazillion" ports. You don't
> need much.
>
> Sent from my iPhone
>
> On Mar 28, 2014, at 7:13 AM, Frederic Hermann <[email protected]> wrote:
>
> Dear List,
>
> I'm looking for some insight on how to set up a PF cluster able to handle
> millions of (wireless) connections (at least on paper).
>
> Our basic configuration will use MAC-Auth, with a custom postgresql
> cluster backend as an external authentication source, and several captive
> portals, depending on the user's location or SSID.
> Some switches may use WPA2 or 802.1X, but that would be the exception.
> All the managed switches would be connected through routed networks, and
> VLANs will be used to provide the registration/isolation networks.
>
> Ideally, our setup would be scalable, depending on the number of switches
> or wireless APs to manage. For example, add a new PF server for every
> 100 / 500 / 1000 switches.
>
> In that context, here is the architecture we have in mind:
>
> - 2 or more PF servers, maybe as DRBD clusters, connected to all switches
> - 1 MySQL cluster (3 nodes at least) for all MySQL requests
> - 1 captive portal cluster (2 nodes), behind a load balancer
> - 1 online shop cluster, behind a load balancer
> - 1 PostgreSQL cluster (3 nodes at least) as the main authentication source
>
> We are wondering, in that architecture, whether it would be useful,
> recommended, mandatory (or useless) to put the FreeRADIUS service on another
> node, using some proxy mechanism to also ensure HA and scalability for this
> critical service.
>
> Any ideas or suggestions?
>
> Cheers,
>
>