Hi Jake,
The latest Fingerbank packages include performance improvements.
I would suggest upgrading Fingerbank (and PacketFence to 5.5.2) when you have a
chance.
The latest maintenance patches also change the default caching behaviour, which
might be of use to you:
https://github.com/inverse-inc/packetfence/commit/06df08fa09efc657a8e706c551301a37b7c07a06
Your fingerbank API key looks good to me.
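If you want to keep an eye on the pfqueue backlog while you test, a one-shot check along these lines works (a sketch; it assumes the pfqueue Redis instance listens on port 6380, as in the redis-cli output you posted earlier):

```shell
# Print the current depth of each pfqueue backlog.
# Assumes the pfqueue Redis instance listens on 127.0.0.1:6380.
for q in Queue:general Queue:pfdhcplistener; do
  depth=$(redis-cli -p 6380 llen "$q" 2>/dev/null || echo 'unreachable')
  printf '%-25s %s\n' "$q" "$depth"
done
```

A Queue:pfdhcplistener depth that keeps growing means the workers are not draining DHCP events as fast as they arrive.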
Regards,
--
Louis Munro
[email protected] :: www.inverse.ca
+1.514.447.4918 x125 :: +1 (866) 353-6153 x125
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence
(www.packetfence.org)
> On Jan 11, 2016, at 10:05 , Sallee, Jake <[email protected]> wrote:
>
> Louis!
>
> Here is the info you asked for. Thanks for your help.
>
> packetfence-5.5.0-4.el6.noarch
> fingerbank-2.0.0-28.1.noarch
>
> chi.conf
> ============================
> [storage DEFAULT]
> storage=redis
>
> [storage ldap_auth]
> expires_in=10m
>
> [storage httpd.admin]
> expires_in=1d
>
> [storage httpd.portal]
> expires_in=6h
>
> [storage redis]
> driver = Redis
> redis_class = Redis::Fast
> server = 127.0.0.1:6379
> prefix = pf
> expires_on_backend = 1
> reconnect=60
>
> #[storage file]
> #driver=File
> #root_dir=/usr/local/pf/var/cache
>
> fingerbank.conf
> =========================================
>
> [upstream]
> api_key=SHOULD I REALLY BE POSTING THIS? IT JUST FEELS WRONG.
>
> Jake Sallee
> Godfather of Bandwidth
> System Engineer
> University of Mary Hardin-Baylor
> WWW.UMHB.EDU
>
> 900 College St.
> Belton, Texas
> 76513
> Fone: 254-295-4658
> Phax: 254-295-4221
>
>
> From: Louis Munro [[email protected]]
> Sent: Monday, January 11, 2016 8:47 AM
> To: [email protected]
> Subject: Re: [PacketFence-users] High CPU utilization
>
> Hi Jake,
> It could be a caching issue or related to fingerbank.
>
> Can you tell us the version of your fingerbank and packetfence packages
> please?
>
> Also, please post the content of these two files:
>
> /usr/local/pf/conf/chi.conf
> /usr/local/fingerbank/conf/fingerbank.conf
>
> Regards,
>
> --
> Louis Munro
> [email protected] :: www.inverse.ca
> +1.514.447.4918 x125 :: +1 (866) 353-6153 x125
> Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence
> (www.packetfence.org)
>
> On Jan 11, 2016, at 9:09 , Sallee, Jake <[email protected]>
> wrote:
>
> Hello!
>
> I'm seeing some high CPU usage today.
>
> I tried bouncing the PF services but it came right back.
>
> pfqueue seems to be using quite a bit of CPU, how can I check to see if
> everything is okay?
>
> I tried this from another message on the list but I do not know how to
> interpret the response:
>
> # redis-cli -p 6380 llen Queue:general
> (integer) 0
> # redis-cli -p 6380 llen Queue:pfdhcplistener
> (integer) 1171
>
> Output of top:
> ==========================
> top - 08:01:31 up 46 days, 20:59, 2 users, load average: 13.15, 12.56, 11.52
> Tasks: 402 total, 11 running, 391 sleeping, 0 stopped, 0 zombie
> Cpu(s): 78.6%us, 5.3%sy, 0.0%ni, 15.8%id, 0.1%wa, 0.0%hi, 0.2%si, 0.0%st
> Mem: 49376292k total, 25674592k used, 23701700k free, 361936k buffers
> Swap: 50331644k total, 0k used, 50331644k free, 18987472k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 2138 mysql 20 0 8347m 280m 6968 S 179.4 0.6 20798:05 mysqld
> 30705 root 20 0 552m 166m 5876 R 99.8 0.3 17:40.86 pfqueue
> 30708 root 20 0 561m 175m 5876 R 99.8 0.4 17:41.82 pfqueue
> 30711 root 20 0 588m 202m 5896 R 99.5 0.4 16:24.83 pfqueue
> 30706 root 20 0 558m 172m 5876 R 98.8 0.4 17:41.23 pfqueue
> 30714 root 20 0 555m 169m 5876 R 98.8 0.4 17:41.04 pfqueue
> 30712 root 20 0 551m 165m 5876 R 98.5 0.3 17:41.60 pfqueue
> 30704 root 20 0 556m 170m 5876 R 98.1 0.4 17:42.50 pfqueue
> 30703 root 20 0 557m 171m 5876 R 97.5 0.4 17:41.62 pfqueue
> 30698 root 20 0 553m 167m 5848 R 91.2 0.3 13:11.58 pfqueue
> 30699 root 20 0 548m 162m 5848 S 77.9 0.3 13:15.51 pfqueue
> 30701 root 20 0 548m 162m 5848 S 76.3 0.3 13:22.24 pfqueue
> 30700 root 20 0 554m 168m 5848 R 43.1 0.3 13:10.90 pfqueue
> 11971 pf 20 0 583m 157m 4440 S 6.3 0.3 0:28.81 httpd
> ...
> ================================
>
> Jake Sallee
> Godfather of Bandwidth
> System Engineer
> University of Mary Hardin-Baylor
> WWW.UMHB.EDU
>
> 900 College St.
> Belton, Texas
> 76513
>
> Fone: 254-295-4658
> Phax: 254-295-4221
>
> ------------------------------------------------------------------------------
> Site24x7 APM Insight: Get Deep Visibility into Application Performance
> APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
> Monitor end-to-end web transactions and take corrective actions now
> Troubleshoot faster and improve end-user experience. Signup Now!
> http://pubads.g.doubleclick.net/gampad/clk?id=267308311&iu=/4140
> _______________________________________________
> PacketFence-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/packetfence-users