> Load average is more complex than number of (logical or otherwise)
> CPUs vs the load average number. The reason being load takes into
> account the processor state of "waiting for disk I/O".

Ah, yes, I forgot about that.

You can use a command like iostat to get more detailed info about I/O.

The iowait field gives you the percentage of time your CPU sat idle while
waiting on system I/O (i.e., reading from the hard disk).
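If you want to compute that yourself, here is a rough sketch (untested, and it
assumes a Linux-style /proc layout and a 5-second sample interval) that samples
/proc/stat, which is where top and iostat get the iowait number from:

import time

def cpu_times():
    with open("/proc/stat") as f:
        # Aggregate CPU line: "cpu  user nice system idle iowait irq softirq steal ..."
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)                     # sample interval; 5 seconds is just an example
after = cpu_times()

deltas = [a - b for a, b in zip(after, before)]
iowait_pct = 100.0 * deltas[4] / sum(deltas)   # field 5 of the cpu line is iowait
print(f"%iowait over the interval: {iowait_pct:.1f}")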

> As far as my experience goes, when load is driven up, it is almost
> always due to IO saturation, not CPU saturation. However, I don't have
> much experience with PF systems, so they might have CPU saturation
> issues.

Interesting. My experience has been almost the opposite, but most of my
workloads tend to be RAM-centric rather than disk-centric, which could account
for that.

Jake Sallee
Godfather of Bandwidth
System Engineer
University of Mary Hardin-Baylor
WWW.UMHB.EDU

900 College St.
Belton, Texas
76513

Fone: 254-295-4658
Phax: 254-295-4221

________________________________________
From: Matt Zagrabelny <mzagr...@d.umn.edu>
Sent: Friday, September 9, 2016 3:07 PM
To: packetfence-users@lists.sourceforge.net
Subject: Re: [PacketFence-users] Server Load metric

On Fri, Sep 9, 2016 at 2:37 PM, Sallee, Jake <jake.sal...@umhb.edu> wrote:
> I always assumed that came from the same source that 'top' pulls from.
>
>
> If I am correct, then the number represents the workload of your system. In 
> simplified terms, you want this number to always be less than the number of 
> processor cores in your system.
>
>
> If you have a quad-core system and you have a system load of 3.00, then you 
> are effectively running 3 of your cores at 100%.
>
>
> If in a quad-core system you have a value of 8.00, this means that you have 
> overloaded your system: there are 4 processes waiting while 4 other 
> processes are fully utilizing all the cores on your system.
>
>
> Here is a bit more explanation if you're interested.
>
>
> http://www.howtogeek.com/194642/understanding-the-load-average-on-linux-and-other-unix-like-systems/
>
>
> TL;DR: the load score should always be less than the number of logical cores 
> in your system; if it's not, then your system is overworked and you need to 
> do something about it.

Load average is more complex than number of (logical or otherwise)
CPUs vs the load average number. The reason being load takes into
account the processor state of "waiting for disk I/O".

From man proc:

       /proc/loadavg
              The first three fields in this file are load average figures
              giving the number of jobs in the run queue (state R) or
              waiting for disk I/O (state D) averaged over 1, 5, and 15
              minutes.  They are the same as the load average numbers given
              by uptime(1) and other programs.  The fourth field consists of
              two numbers separated by a slash (/).  The first of these is
              the number of currently runnable kernel scheduling entities
              (processes, threads).  The value after the slash is the number
              of kernel scheduling entities that currently exist on the
              system.  The fifth field is the PID of the process that was
              most recently created on the system.

Thus, you could have a high load average, throw a bunch of CPUs at the
issue, and not change the problem one bit, because the load could be IO
bound.
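For example, here is a quick sketch (my own, untested) that compares the
1-minute load average against the CPU count and counts tasks stuck in
uninterruptible sleep (state D), which is the usual tell that the load is IO
bound rather than CPU bound:

import os

with open("/proc/loadavg") as f:
    load1 = float(f.read().split()[0])

ncpu = os.cpu_count() or 1

# Count tasks in uninterruptible sleep ("D"); on Linux these count toward
# the load average even though they are not using the CPU.
dstate = 0
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/stat") as f:
            # Format: "pid (comm) state ..."; comm may contain spaces/parens.
            state = f.read().rsplit(")", 1)[1].split()[0]
    except OSError:
        continue   # process exited while we were scanning
    if state == "D":
        dstate += 1

print(f"load1={load1:.2f} cpus={ncpu} tasks_in_D={dstate}")
if load1 > ncpu and dstate:
    print("High load with D-state tasks: likely IO bound; more CPUs won't help.")
elif load1 > ncpu:
    print("High load, no D-state tasks: likely CPU bound.")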

As far as my experience goes, when load is driven up, it is almost
always due to IO saturation, not CPU saturation. However, I don't have
much experience with PF systems, so they might have CPU saturation
issues.

-m

------------------------------------------------------------------------------
_______________________________________________
PacketFence-users mailing list
PacketFence-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/packetfence-users