Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-04-02 Thread Viresh Kumar
On 3 April 2013 12:01, stratosk wrote: > I'm sorry, I don't understand. > The goal of this patch is not energy saving. He probably misunderstood it... > The goal is to detect CPU load as soon as possible to increase frequency. > > Could you please clarify this? B

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-04-02 Thread stratosk
I'm sorry, I don't understand. The goal of this patch is not energy saving. The goal is to detect CPU load as soon as possible to increase frequency. Could you please clarify this? Thanks, Stratos "Rafael J. Wysocki" wrote: >On Tuesday, April 02, 2013 06:49:14 PM

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-04-02 Thread Rafael J. Wysocki
On Tuesday, April 02, 2013 06:49:14 PM Stratos Karafotis wrote: > On 04/02/2013 04:50 PM, Rafael J. Wysocki wrote: > > Do you have any numbers indicating that this actually makes things better? > > > > Rafael > > No, I don't. > The expected behaviour after this patch is to "force" max frequency f

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-04-02 Thread Stratos Karafotis
On 04/02/2013 04:50 PM, Rafael J. Wysocki wrote: > Do you have any numbers indicating that this actually makes things better? > > Rafael No, I don't. The expected behaviour after this patch is to "force" max frequency few sampling periods earlier. The idea was to increase system responsiveness e

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-04-02 Thread Rafael J. Wysocki
> Hi Rafael, > > In case you are interested in this patch I rebased it to the latest > linux-pm/bleeding-edge. > > Thanks, > Stratos > > -- > Instead of checking only the absolute value of CPU load_freq to increase > f

[patch v3 4/8] sched: update cpu load after task_tick.

2013-04-01 Thread Alex Shi
To get the latest runnable info, we need to do this cpuload update after task_tick. Signed-off-by: Alex Shi --- kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 8843cd3..e3233d3 100644 --- a/kernel/sched/core.c +
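
A minimal, compilable model of what the one-line reordering amounts to (an illustration only, not the kernel hunk in the truncated diff above): the per-cpu load sample is taken after task_tick() has refreshed the runnable statistics, so it sees the latest values.

/* Toy model of the reordering; not kernel code. */
#include <stdio.h>

struct rq {
	unsigned long runnable_load;   /* refreshed by task_tick() */
	unsigned long cpu_load;        /* sampled by update_cpu_load() */
};

static void task_tick(struct rq *rq)
{
	rq->runnable_load = 1024;      /* pretend a task just became runnable */
}

static void update_cpu_load(struct rq *rq)
{
	rq->cpu_load = rq->runnable_load;
}

static void scheduler_tick(struct rq *rq)
{
	/* Before the patch the order was update_cpu_load(); task_tick();
	 * which sampled one tick of stale runnable info. */
	task_tick(rq);
	update_cpu_load(rq);
}

int main(void)
{
	struct rq rq = { 0, 0 };

	scheduler_tick(&rq);
	printf("cpu_load sampled after task_tick: %lu\n", rq.cpu_load);
	return 0;
}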

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-03-29 Thread Rafael J. Wysocki
> Instead of checking only the absolute value of CPU load_freq to increase > frequency, we detect forthcoming CPU load rise and increase frequency > earlier. > > Every sampling rate, we calculate the gradient of load_freq. If it is > too steep we assume that the load most proba

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-03-29 Thread Stratos Karafotis
dge. Thanks, Stratos -- Instead of checking only the absolute value of CPU load_freq to increase frequency, we detect forthcoming CPU load rise and increase frequency earlier. Every sampling rate, we calculate the gradient of load_freq. If it is too steep we assu

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-22 Thread Rafael J. Wysocki
On Friday, February 22, 2013 11:27:09 AM Viresh Kumar wrote: > On Fri, Feb 22, 2013 at 7:26 AM, Viresh Kumar wrote: > > On 21 February 2013 23:09, Stratos Karafotis wrote: > > >> Instead of checking only the absolute value of CPU load_freq to increase > >> freque

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-21 Thread Viresh Kumar
On Fri, Feb 22, 2013 at 7:26 AM, Viresh Kumar wrote: > On 21 February 2013 23:09, Stratos Karafotis wrote: >> Instead of checking only the absolute value of CPU load_freq to increase >> frequency, we detect forthcoming CPU load rise and increase frequency >> earlier. >

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-21 Thread Viresh Kumar
e > frequency, we detect forthcoming CPU load rise and increase frequency > earlier. > > Every sampling rate, we calculate the gradient of load_freq. If it is > too steep we assume that the load most probably will go over > up_threshold in next iteration(s) and we increase

Re: [PATCH v3 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-21 Thread Stratos Karafotis
---8<-------- Instead of checking only the absolute value of CPU load_freq to increase frequency, we detect forthcoming CPU load rise and increase frequency earlier. Every sampling rate, we calculate the gradient of load_freq. If it is too

Re: [PATCH v2 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-21 Thread Viresh Kumar
Hi Again, On Thu, Feb 21, 2013 at 5:01 PM, Stratos Karafotis wrote: > diff --git a/drivers/cpufreq/cpufreq_ondemand.c > b/drivers/cpufreq/cpufreq_ondemand.c > @@ -168,16 +174,29 @@ static void od_check_cpu(int cpu, unsigned int > load_freq) > struct cpufreq_policy *policy = dbs_info->c

Re: [PATCH v2 linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-21 Thread Stratos Karafotis
I added the grad_up_threshold. Following patch v2. Thanks again, Stratos 8<-- Instead of checking only the absolute value of CPU load_freq to increase frequency, we detect forthcoming CPU load rise and increase frequency earlier. Every sampling rate, we ca

Re: [PATCH linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-20 Thread Viresh Kumar
Hi Stratos, On Thu, Feb 21, 2013 at 2:20 AM, Stratos Karafotis wrote: > Instead of checking only the absolute value of CPU load_freq to increase > frequency, we detect forthcoming CPU load rise and increase frequency > earlier. > > Every sampling rate, we calculate the gradie

[PATCH linux-next] cpufreq: ondemand: Calculate gradient of CPU load to early increase frequency

2013-02-20 Thread Stratos Karafotis
Instead of checking only the absolute value of CPU load_freq to increase frequency, we detect forthcoming CPU load rise and increase frequency earlier. Every sampling rate, we calculate the gradient of load_freq. If it is too steep we assume that the load most probably will go over up_threshold
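
A standalone sketch of the idea described in this posting; the thresholds below are illustrative (80 is the classic ondemand up_threshold default, the gradient threshold is a made-up value) and this is not the submitted kernel code.

/* Illustration of the gradient check: boost early when load rises steeply. */
#include <stdio.h>

#define UP_THRESHOLD       80   /* classic ondemand default, percent */
#define GRAD_UP_THRESHOLD  50   /* hypothetical gradient threshold, percent per sample */

static int prev_load;

/* Return 1 if the frequency should jump to max for this sample. */
static int should_boost(int load)
{
	int grad = load - prev_load;

	prev_load = load;
	if (load > UP_THRESHOLD)
		return 1;               /* classic absolute-value check */
	if (grad > GRAD_UP_THRESHOLD)
		return 1;               /* load rising steeply: boost a few samples early */
	return 0;
}

int main(void)
{
	int samples[] = { 5, 10, 70, 95, 90, 30 };

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("load %3d%% -> %s\n", samples[i],
		       should_boost(samples[i]) ? "boost to max" : "normal scaling");
	return 0;
}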

[RFC patch v2 4/7] sched: update cpu load after task_tick.

2013-01-25 Thread Alex Shi
To get the latest runnable info, we need to do this cpuload update after task_tick. Signed-off-by: Alex Shi --- kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index dbab4b3..4f4714e 100644 --- a/kernel/sched/core.c +

[PATCH v3 08/22] sched: update cpu load after task_tick.

2013-01-05 Thread Alex Shi
--- kernel/sched/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 66ce1f1..06d27af 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2700,8 +2700,8 @@ void scheduler_tick(void) raw_spin_lock(&rq->lock)

Re: [PATCH v2 1/3] sched: introduce distinct per-cpu load average

2012-10-22 Thread Andrea Righi
On Mon, Oct 22, 2012 at 01:10:40PM +0200, Peter Zijlstra wrote: > On Sat, 2012-10-20 at 21:06 +0200, Andrea Righi wrote: > > @@ -383,13 +383,7 @@ struct rq { > > struct list_head leaf_rt_rq_list; > > #endif > > > > > + unsigned long __percpu *nr_uninterruptible; > > This is O(nr

Re: [PATCH v2 1/3] sched: introduce distinct per-cpu load average

2012-10-22 Thread Peter Zijlstra
On Sat, 2012-10-20 at 21:06 +0200, Andrea Righi wrote: > @@ -383,13 +383,7 @@ struct rq { > struct list_head leaf_rt_rq_list; > #endif > > + unsigned long __percpu *nr_uninterruptible; This is O(nr_cpus^2) memory.. > +unsigned long nr_uninterruptible_cpu(int cpu) > +{ > +
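
A quick back-of-the-envelope for the quadratic-memory concern raised here: with a per-cpu counter array hanging off every runqueue, N cpus means N x N counters (8-byte longs assumed).

/* Memory cost of one N-entry per-cpu counter per runqueue. */
#include <stdio.h>

int main(void)
{
	int counts[] = { 64, 256, 1024, 4096 };

	for (unsigned i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
		long cpus = counts[i];
		long bytes = cpus * cpus * (long)sizeof(long);

		printf("%5ld cpus -> %8ld counters, %6ld KiB\n",
		       cpus, cpus * cpus, bytes / 1024);
	}
	return 0;
}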

[PATCH v2 1/3] sched: introduce distinct per-cpu load average

2012-10-20 Thread Andrea Righi
Account load average, nr_running and nr_uninterruptible tasks per-cpu. The new task_struct attribute on_cpu_uninterruptible is added to properly keep track of the cpu at deactivate time, when the task is set to the uninterruptible sleep state. Moreover, rq->nr_uninterruptible is converted to a pe
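
A standalone model of the accounting described above, not the patch itself: the cpu charged at deactivate time is remembered in the task, so the later wakeup, possibly handled on another cpu, decrements the right per-cpu counter.

/* Toy model of per-cpu nr_uninterruptible accounting. */
#include <stdio.h>

#define NR_CPUS 4

static long nr_uninterruptible[NR_CPUS];

struct task {
	const char *name;
	int on_cpu_uninterruptible;   /* cpu charged at deactivate time */
};

static void deactivate_uninterruptible(struct task *p, int cpu)
{
	p->on_cpu_uninterruptible = cpu;
	nr_uninterruptible[cpu]++;
}

static void activate_after_wakeup(struct task *p)
{
	/* Decrement where we charged it, not where the wakeup happens. */
	nr_uninterruptible[p->on_cpu_uninterruptible]--;
}

int main(void)
{
	struct task t = { "dd", -1 };

	deactivate_uninterruptible(&t, 1);   /* goes to D state on cpu 1 */
	activate_after_wakeup(&t);           /* woken up, perhaps from cpu 3 */

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d nr_uninterruptible=%ld\n", cpu, nr_uninterruptible[cpu]);
	return 0;
}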

Re: [PATCH RFC 1/3] sched: introduce distinct per-cpu load average

2012-10-04 Thread Andrea Righi
On Thu, Oct 04, 2012 at 02:12:08PM +0200, Peter Zijlstra wrote: > On Thu, 2012-10-04 at 11:43 +0200, Andrea Righi wrote: > > > > Right, the update must be atomic to have a coherent nr_uninterruptible > > value. And AFAICS the only way to account a coherent > > nr_uninterruptible > > value per-cpu

Re: [PATCH RFC 1/3] sched: introduce distinct per-cpu load average

2012-10-04 Thread Peter Zijlstra
On Thu, 2012-10-04 at 11:43 +0200, Andrea Righi wrote: > > Right, the update must be atomic to have a coherent nr_uninterruptible > value. And AFAICS the only way to account a coherent > nr_uninterruptible > value per-cpu is to go with atomic ops... mmh... I'll think more on > this. You could st

Re: [PATCH RFC 1/3] sched: introduce distinct per-cpu load average

2012-10-04 Thread Andrea Righi
On Thu, Oct 04, 2012 at 10:59:46AM +0200, Peter Zijlstra wrote: > On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote: > > +++ b/kernel/sched/core.c > > @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct > > task_struct *p, int flags) > > void activate_task(struct rq *rq, stru

Re: [PATCH RFC 1/3] sched: introduce distinct per-cpu load average

2012-10-04 Thread Peter Zijlstra
On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote: > +++ b/kernel/sched/core.c > @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct > task_struct *p, int flags) > void activate_task(struct rq *rq, struct task_struct *p, int flags) > { > if (task_contributes_to_load(

[PATCH RFC 1/3] sched: introduce distinct per-cpu load average

2012-10-03 Thread Andrea Righi
Account per-cpu load average, as well as nr_running and nr_uninterruptible tasks. The new task_struct element on_cpu_uninterruptible is added to properly keep track of the cpu where the task was set to the uninterruptible sleep state. This feature is required by the cpusets cgroup subsystem

[Patch 3/5 v2] sched: change how cpu load is calculated

2007-11-26 Thread Srivatsa Vaddagiri
This patch changes how the cpu load exerted by fair_sched_class tasks is calculated. Load exerted by fair_sched_class tasks on a cpu is now a summation of the group weights, rather than summation of task weights. Weight exerted by a group on a cpu is dependent on the shares allocated to it. This
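
A hedged, standalone illustration of the change: the cpu's load becomes a sum of group weights rather than raw task weights. The way shares are split across cpus here (proportional to the group's runnable load on each cpu) is an assumption for the demo, not necessarily the patch's exact formula.

/* Cpu load as a sum of per-group weights derived from shares. */
#include <stdio.h>

struct group {
	unsigned long shares;          /* weight allocated to the group */
	unsigned long load[2];         /* group's runnable load on cpu0/cpu1 */
};

static unsigned long group_weight_on(const struct group *g, int cpu)
{
	unsigned long total = g->load[0] + g->load[1];

	return total ? g->shares * g->load[cpu] / total : 0;
}

int main(void)
{
	struct group groups[] = {
		{ .shares = 1024, .load = { 3072, 1024 } },  /* 3 tasks vs 1 task */
		{ .shares = 2048, .load = { 1024, 1024 } },
	};

	for (int cpu = 0; cpu < 2; cpu++) {
		unsigned long cpu_load = 0;

		for (unsigned g = 0; g < sizeof(groups) / sizeof(groups[0]); g++)
			cpu_load += group_weight_on(&groups[g], cpu);
		printf("cpu%d load = %lu\n", cpu, cpu_load);
	}
	return 0;
}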

[Patch 3/5 v1] sched: change how cpu load is calculated

2007-11-26 Thread Srivatsa Vaddagiri
This patch changes how the cpu load exerted by fair_sched_class tasks is calculated. Load exerted by fair_sched_class tasks on a cpu is now a summation of the group weights, rather than summation of task weights. Weight exerted by a group on a cpu is dependent on the shares allocated to it. This

[Patch 3/4 v2] sched: change how cpu load is calculated

2007-11-25 Thread Srivatsa Vaddagiri
This patch changes how the cpu load exerted by fair_sched_class tasks is calculated. Load exerted by fair_sched_class tasks on a cpu is now a summation of the group weights, rather than summation of task weights. Weight exerted by a group on a cpu is dependent on the shares allocated to it. This

[Patch 3/4 v1] sched: change how cpu load is calculated

2007-11-25 Thread Srivatsa Vaddagiri
This patch changes how the cpu load exerted by fair_sched_class tasks is calculated. Load exerted by fair_sched_class tasks on a cpu is now a summation of the group weights, rather than summation of task weights. Weight exerted by a group on a cpu is dependent on the shares allocated to it. This

High cpu load due to pdflush

2007-11-03 Thread Bob Gill
de=0600,listmode=0644 ln -s .usbfs/devices /dev/bus/usb/devices mount --rbind /dev/bus/usb /proc/bus/usb ...so that usb would work with a custom kernel... Oh, and btw, my cpu is an Intel P4. Also, I applied the following patch and build/ran a kernel (but it did not reduce the cpu load on my system)

Re: high system cpu load during intense disk i/o

2007-08-10 Thread Rafał Bilski
So I would assume that delay_tsc() probably only makes the situation worse for the libata tests, but the real problem is at __switch_to() and schedule(). Do you agree with these assumptions? Yes. I agree that percentage of CPU time is unreasonably high for these functions. But not only for them.

Re: high system cpu load during intense disk i/o

2007-08-09 Thread Dimitrios Apostolou
Hi Rafal, thank you for your help! On Wednesday 08 August 2007 22:08:18 Rafał Bilski wrote: > > Hello again, > > Hi, > > > I'm now using libata on the same system described before (see attached > > dmesg.txt). When writing to both disks I think the problem is now worse > > (pata_oprof_bad.txt, p

Re: high system cpu load during intense disk i/o

2007-08-08 Thread Rafał Bilski
Hello again, Hi, I'm now using libata on the same system described before (see attached dmesg.txt). When writing to both disks I think the problem is now worse (pata_oprof_bad.txt, pata_vmstat_bad.txt), even the oprofile script needed half an hour to complete! For completeness I also attach the

Re: high system cpu load during intense disk i/o

2007-08-07 Thread Dimitrios Apostolou
On Tuesday 07 August 2007 03:37:08 Alan Cox wrote: > > > acpi_pm_read is capable of disappearing into SMM traps which will make > > > it look very slow. > > > > what is an SMM trap? I googled a bit but didn't get it... > > One of the less documented bits of the PC architecture. It is possible to >

Re: high system cpu load during intense disk i/o

2007-08-07 Thread Dimitrios Apostolou
On Tuesday 07 August 2007 12:03:28 Rafał Bilski wrote: > >> Just tested (plain curiosity). > >> via82cxxx average result @533MHz: > >> /dev/hda: > >> Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec > >> Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec > >> pata_vi

Re: high system cpu load during intense disk i/o

2007-08-07 Thread Rafał Bilski
Just tested (plain curiosity). via82cxxx average result @533MHz: /dev/hda: Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec pata_via average result @533MHz: /dev/sda: Timing cached reads: 234 MB in 2.01 seconds =

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Alan Cox
> > acpi_pm_read is capable of disappearing into SMM traps which will make > > it look very slow. > > what is an SMM trap? I googled a bit but didn't get it... One of the less documented bits of the PC architecture. It is possible to arrange that the CPU jumps into a special mode when triggered

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
Rafał Bilski wrote: Hello Rafal, Hello, However I find it quite possible to have reached the throughput limit because of software (driver) problems. I have done various testing (mostly "hdparm -tT" with exactly the same PC and disks since about kernel 2.6.8 (maybe even earlier). I remember wi

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
Hi Alan, Alan Cox wrote: In Your oprofile output I find "acpi_pm_read" particularly interesting. Unlike other VIA chipsets, which I know, Yours doesn't use VLink to connect northbridge to southbridge. Instead PCI bus connects these two. As You probably know maximal PCI throughput is 133MiB/s. I

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Rafał Bilski
Hello Rafal, Hello, However I find it quite possible to have reached the throughput limit because of software (driver) problems. I have done various testing (mostly "hdparm -tT" with exactly the same PC and disks since about kernel 2.6.8 (maybe even earlier). I remember with certainty that rea

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Alan Cox
> > In Your oprofile output I find "acpi_pm_read" particularly interesting. > > Unlike other VIA chipsets, which I know, Yours doesn't use VLink to > > connect northbridge to southbridge. Instead PCI bus connects these two. > > As You probably know maximal PCI throughput is 133MiB/s. In theory. I

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
switch rate when running two instances of badblocks against two different disks went batshit insane. It doesn't happen here. Please capture the `vmstat 1' output while running the problematic workload. The oom-killing could have been unrelated to the CPU load problem. iirc badblocks

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
Hi, Andrew Morton wrote: I suspect I was fooled by the oprofile output, which showed tremendous amounts of load in schedule() and switch_to(). The percentages which opreport shows are the percentage of non-halted CPU time. So if you have a function in the kernel which is using 1% of the total

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
ever is certainly because of the high system CPU load. If you see the two_discs_bad.txt which I attached on my original message, you'll see that *vmlinux*, and specifically the *scheduler*, take up most time. And the fact that this happens only when running two i/o processes but when runnin

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Andrew Morton
> We're bad. > > > > Seems that your context switch rate when running two instances of > > badblocks against two different disks went batshit insane. It doesn't > > happen here. > > > > Please capture the `vmstat 1' output while running the pr

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Rafał Bilski
ainly because of the high system CPU load. If you see the two_discs_bad.txt which I attached on my original message, you'll see that *vmlinux*, and specifically the *scheduler*, take up most time. And the fact that this happens only when running two i/o processes but when running only one eve

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
ater now but the system CPU load much less, than that of two_discs_bad.txt. However the cron jobs still seem to have a hard time finishing, even though they seem now to consume about 90% CPU time. Could someone please explain me some things that seem vital to understanding the situation?

Re: high system cpu load during intense disk i/o

2007-08-06 Thread Dimitrios Apostolou
disks went batshit insane. It doesn't happen here. Please capture the `vmstat 1' output while running the problematic workload. The oom-killing could have been unrelated to the CPU load problem. iirc badblocks uses a lot of memory, so it might have been genuine. Keep an eye on the /pr

Re: high system cpu load during intense disk i/o

2007-08-05 Thread Andrew Morton
Please capture the `vmstat 1' output while running the problematic workload. The oom-killing could have been unrelated to the CPU load problem. iirc badblocks uses a lot of memory, so it might have been genuine. Keep an eye on the /proc/meminfo output and send the kernel dmesg output from the

Re: high system cpu load during intense disk i/o

2007-08-05 Thread Rafał Bilski
the high system CPU load. If you see the two_discs_bad.txt which I attached on my original message, you'll see that *vmlinux*, and specifically the *scheduler*, take up most time. And the fact that this happens only when running two i/o processes but when running only one everything is

Re: high system cpu load during intense disk i/o

2007-08-05 Thread Dimitrios Apostolou
and in result access is slower > and slower. Hello and thanks for your reply. The cron job that is running every 10 min on my system is mpop (a fetchmail-like program) and another running every 5 min is mrtg. Both normally finish within 1-2 seconds. The fact that these simple cron jobs don'

Re: high system cpu load during intense disk i/o

2007-08-05 Thread Rafał Bilski
Hello again, Hi! was my report so complicated? Perhaps I shouldn't have included so many oprofile outputs. Anyway, if anyone wants to have a look, the most important is two_discs_bad.txt oprofile output, attached on my original message. The problem is 100% reproducible for me so I would apprec

Re: high system cpu load during intense disk i/o

2007-08-05 Thread Dimitrios Apostolou
ntered when I started two processes doing heavy I/O > on hda and hdc, "badblocks -v -w /dev/hda" and "badblocks -v -w > /dev/hdc". At the beginning (two_discs.txt) everything was fine and > vmstat reported more than 90% iowait CPU load. However, after a while > (

Re: CPU load

2007-02-26 Thread Randy Dunlap
On Mon, 26 Feb 2007 13:42:50 +0300 (MSK) malc wrote: > On Mon, 26 Feb 2007, Pavel Machek wrote: > > > Hi! > > > >> [..snip..] > >> > The current situation ought to be documented. Better yet some flag > can > >>> > >>> It probably _is_ documented, somewhere :-). If you find nice place >

[PATCH] Documentation: CPU load calculation description

2007-02-26 Thread malc
txt new file mode 100644 index 000..287224e --- /dev/null +++ b/Documentation/cpu-load.txt @@ -0,0 +1,113 @@ +CPU load + + +Linux exports various bits of information via `/proc/stat' and +`/proc/uptime' that userland tools, such as top(1), use to calculate +the average time s
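
Since the snippet is cut off, here is a minimal userland sketch of the calculation the document describes: sample the aggregate "cpu" line of /proc/stat twice and derive the load from how much of the delta was not idle (only the first four fields are used).

/* Sketch: compute cpu load from two /proc/stat samples. */
#include <stdio.h>
#include <unistd.h>

static void read_cpu(unsigned long long v[4])
{
	FILE *f = fopen("/proc/stat", "r");

	if (!f || fscanf(f, "cpu %llu %llu %llu %llu", &v[0], &v[1], &v[2], &v[3]) != 4)
		v[0] = v[1] = v[2] = v[3] = 0;
	if (f)
		fclose(f);
}

int main(void)
{
	unsigned long long a[4], b[4];

	read_cpu(a);
	sleep(1);
	read_cpu(b);

	/* user + nice + system counts as busy; the 4th field is idle. */
	unsigned long long busy = (b[0] - a[0]) + (b[1] - a[1]) + (b[2] - a[2]);
	unsigned long long total = busy + (b[3] - a[3]);

	printf("cpu load over 1s: %.1f%%\n", total ? 100.0 * busy / total : 0.0);
	return 0;
}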

Re: CPU load

2007-02-26 Thread malc
On Mon, 26 Feb 2007, Pavel Machek wrote: Hi! [..snip..] The current situation ought to be documented. Better yet some flag can It probably _is_ documented, somewhere :-). If you find nice place where to document it (top manpage?) go ahead with the patch. How about this: Looks okay to

Re: CPU load

2007-02-26 Thread Pavel Machek
> How about this: Looks okay to me. (You should probably add your name to it, and I do not like html-like markup... plus please don't add extra spaces between words)... You probably want to send it to akpm? Pavel

Re: CPU load

2007-02-25 Thread malc
On Wed, 14 Feb 2007, Pavel Machek wrote: Hi! [..snip..] The current situation ought to be documented. Better yet some flag can It probably _is_ documented, somewhere :-). If you find nice place where to document it (top manpage?) go ahead with the patch. How about this: CPU load

Re: CPU load

2007-02-14 Thread Pavel Machek
Hi! > > >>>I have (had?) code that 'exploits' this. I believe I could eat 90% of cpu > >>>without being noticed. > >> > >>Slightly changed version of hog(around 3 lines in total changed) does that > >>easily on 2.6.18.3 on PPC. > >> > >>http://www.boblycat.org/~malc/apc/load-hog-ppc.png > > > >I g

Re: CPU load

2007-02-14 Thread Con Kolivas
On Wednesday 14 February 2007 18:28, malc wrote: > On Wed, 14 Feb 2007, Con Kolivas wrote: > > On Wednesday 14 February 2007 09:01, malc wrote: > >> On Mon, 12 Feb 2007, Pavel Machek wrote: > >>> Hi! > > [..snip..] > > >>> I have (had?) code that 'exploits' this. I believe I could eat 90% of > >>>

Re: CPU load

2007-02-13 Thread malc
On Wed, 14 Feb 2007, Con Kolivas wrote: On Wednesday 14 February 2007 09:01, malc wrote: On Mon, 12 Feb 2007, Pavel Machek wrote: Hi! [..snip..] I have (had?) code that 'exploits' this. I believe I could eat 90% of cpu without being noticed. Slightly changed version of hog(around 3 lines

Re: CPU load

2007-02-13 Thread Con Kolivas
On Wednesday 14 February 2007 09:01, malc wrote: > On Mon, 12 Feb 2007, Pavel Machek wrote: > > Hi! > > > >> The kernel looks at what is using cpu _only_ during the > >> timer > >> interrupt. Which means if your HZ is 1000 it looks at > >> what is running > >> at precisely the moment those 1000 tim

Re: CPU load

2007-02-13 Thread malc
On Mon, 12 Feb 2007, Pavel Machek wrote: Hi! The kernel looks at what is using cpu _only_ during the timer interrupt. Which means if your HZ is 1000 it looks at what is running at precisely the moment those 1000 timer ticks occur. It is theoretically possible using this measurement system to u

Re: CPU load

2007-02-13 Thread Pavel Machek
Hi! > The kernel looks at what is using cpu _only_ during the > timer > interrupt. Which means if your HZ is 1000 it looks at > what is running > at precisely the moment those 1000 timer ticks occur. It > is > theoretically possible using this measurement system to > use >99% cpu > and record
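
An illustrative hog in the spirit of the exploit discussed in this thread (not the load-hog code linked above): burn cpu for most of each timer tick and sleep across the tick edge, so the sampling done in the timer interrupt rarely catches the process running. HZ=1000 and the 80/20 split are assumptions, and real evasion would need to phase-align with the tick.

/* Sketch of a tick-evading cpu hog. */
#include <time.h>

#define TICK_NS  1000000L    /* 1 ms tick, i.e. HZ=1000 (assumption) */
#define BURN_NS   800000L    /* busy-loop for ~80% of the tick */

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	for (;;) {
		long long start = now_ns();

		while (now_ns() - start < BURN_NS)
			;                            /* eat cpu between ticks */
		/* Sleep briefly so that, if phased with the tick, the
		 * sampling tends to see something else running. */
		struct timespec nap = { 0, TICK_NS - BURN_NS };
		nanosleep(&nap, NULL);
	}
}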

Re: CPU load

2007-02-12 Thread malc
On Mon, 12 Feb 2007, Andrew Burgess wrote: On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote: How does the kernel calculate the value it places in `/proc/stat' at the 4th position (i.e. "idle: twiddling thumbs")? .. Later a small kernel module was developed that tried to time how much time

Re: CPU load

2007-02-12 Thread malc
On Mon, 12 Feb 2007, Con Kolivas wrote: On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote: Hello, [..snip..] The kernel looks at what is using cpu _only_ during the timer interrupt. Which means if your HZ is 1000 it looks at what is running at precisely the moment those 1000 timer tick

Re: CPU load

2007-02-12 Thread Andrew Burgess
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote: > > How does the kernel calculate the value it places in `/proc/stat' at > the 4th position (i.e. "idle: twiddling thumbs")? > .. > > Later a small kernel module was developed that tried to time how much > time is spent in the idle handler inside th

Re: CPU load

2007-02-11 Thread Con Kolivas
velopers incorrectly conclude > that utilizing RTC suddenly made the code run slower, after all /proc/stat > now claims that CPU load is higher, while in reality it stayed the same - > it's the accuracy that has improved (somewhat) > > But back to the original question, does it l

Re: CPU load

2007-02-11 Thread malc
e the code run slower, after all /proc/stat now claims that CPU load is higher, while in reality it stayed the same - it's the accuracy that has improved (somewhat) But back to the original question, does it look at what's running on timer interrupt only or any IRQ? (something which is

Re: CPU load

2007-02-11 Thread malc
On Mon, 12 Feb 2007, Con Kolivas wrote: On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote: [..snip..] The kernel looks at what is using cpu _only_ during the timer interrupt. Which means if your HZ is 1000 it looks at what is running at precisely the moment those 1000 timer ticks occur.

Re: CPU load

2007-02-11 Thread Con Kolivas
On Monday 12 February 2007 16:54, malc wrote: > On Mon, 12 Feb 2007, Con Kolivas wrote: > > On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote: > > [..snip..] > > > The kernel looks at what is using cpu _only_ during the timer > > interrupt. Which means if your HZ is 1000 it looks at what is run

Re: CPU load

2007-02-11 Thread Con Kolivas
On Monday 12 February 2007 16:55, Stephen Rothwell wrote: > On Mon, 12 Feb 2007 16:44:22 +1100 "Con Kolivas" <[EMAIL PROTECTED]> wrote: > > The kernel looks at what is using cpu _only_ during the timer > > interrupt. Which means if your HZ is 1000 it looks at what is running > > at precisely the mo

Re: CPU load

2007-02-11 Thread Stephen Rothwell
On Mon, 12 Feb 2007 16:44:22 +1100 "Con Kolivas" <[EMAIL PROTECTED]> wrote: > > The kernel looks at what is using cpu _only_ during the timer > interrupt. Which means if your HZ is 1000 it looks at what is running > at precisely the moment those 1000 timer ticks occur. It is > theoretically possibl

Re: CPU load

2007-02-11 Thread Con Kolivas
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote: Hello, How does the kernel calculate the value it places in `/proc/stat' at the 4th position (i.e. "idle: twiddling thumbs")? For background information as to why this question arose in the first place, read on. While writing the code dealing

CPU load

2007-02-11 Thread Vassili Karpov
Hello, How does the kernel calculate the value it places in `/proc/stat' at the 4th position (i.e. "idle: twiddling thumbs")? For background information as to why this question arose in the first place, read on. While writing the code dealing with video acquisition/processing at work I noticed that wh

Re: cpu load balancing problem on smp

2007-02-09 Thread Marc Donner
On Thursday 08 February 2007 09:42, you wrote: > On Wed, 7 Feb 2007, Arjan van de Ven wrote: > > Marc Donner wrote: > >> 501: 215717 209388 209430 202514 PCI-MSI-edge > >> eth10 502:927 1019 1053888 PCI-MSI-edge > >> eth11 > > > > this is od

Re: cpu load balancing problem on smp

2007-02-08 Thread David Lang
On Wed, 7 Feb 2007, Arjan van de Ven wrote: Marc Donner wrote: 501: 215717 209388 209430 202514 PCI-MSI-edge eth10 502:927 1019 1053888 PCI-MSI-edge eth11 this is odd, this is not an irq distribution that irqbalance should give you 1

Re: cpu load balancing problem on smp

2007-02-07 Thread Marc Donner
On Wednesday 07 February 2007 06:59, you wrote: > Marc Donner wrote: > > 501: 215717 209388 209430 202514 PCI-MSI-edge > > eth10 502:927 1019 1053888 PCI-MSI-edge > > eth11 > > this is odd, this is not an irq distribution that irqbalance sho

Re: cpu load balancing problem on smp

2007-02-06 Thread Arjan van de Ven
Marc Donner wrote: 501: 215717 209388 209430 202514 PCI-MSI-edge eth10 502:927 1019 1053888 PCI-MSI-edge eth11 this is odd, this is not an irq distribution that irqbalance should give you 1 NMI:451 39 42

Re: cpu load balancing problem on smp

2007-02-06 Thread Marc Donner
> can you send me the output of > > cat /proc/interrupts Here it is: irqbalance is running. Network loaded with 600Mbit/s for about 5 minutes. CPU0 CPU1 CPU2 CPU3 0: 37713 41667 41673 49914 IO-APIC-edge timer 1: 0 0

Re: cpu load balancing problem on smp

2007-02-06 Thread Pablo Sebastian Greco
Arjan van de Ven wrote: Pablo Sebastian Greco wrote: 2296:427426436 134563009 PCI-MSI-edge eth1 2297:252252 135926471257 PCI-MSI-edge eth0 this suggests that cores would be busy rather than only one - Yes, but you are looki

Re: cpu load balancing problem on smp

2007-02-06 Thread Arjan van de Ven
Pablo Sebastian Greco wrote: 2296:427426436 134563009 PCI-MSI-edge eth1 2297:252252 135926471257 PCI-MSI-edge eth0 this suggests that cores would be busy rather than only one

Re: cpu load balancing problem on smp

2007-02-06 Thread Pablo Sebastian Greco
Arjan van de Ven wrote: Marc Donner wrote: see http://www.irqbalance.org to get irqbalance I now have tried irqloadbalance, but the same problem. can you send me the output of cat /proc/interrupts (taken when you are or have been loading the network) maybe there's something fishy going

Re: cpu load balancing problem on smp

2007-02-06 Thread Arjan van de Ven
Marc Donner wrote: see http://www.irqbalance.org to get irqbalance I now have tried irqloadbalance, but the same problem. can you send me the output of cat /proc/interrupts (taken when you are or have been loading the network) maybe there's something fishy going on

Re: cpu load balancing problem on smp

2007-02-06 Thread Marc Donner
On Tuesday 06 February 2007 19:09, you wrote: > On Tue, 2007-02-06 at 18:32 +0100, Marc Donner wrote: > > Hi @all > > > > we have detected some problems on our live systems and so I have built a > > test setup in our lab as follows: > > > > 3 Core 2 duo servers, each with 2 CPUs, with GE interfaces

Re: cpu load balancing problem on smp

2007-02-06 Thread Arjan van de Ven
On Tue, 2007-02-06 at 18:32 +0100, Marc Donner wrote: > Hi @all > > we have detected some problems on our live systems and so I have built a test > setup in our lab as follows: > > 3 Core 2 duo servers, each with 2 CPUs, with GE interfaces. 2 of them are > only for generating network traffic. t

[PATCH 8/13] generalised CPU load averaging

2005-02-23 Thread Nick Piggin
8/13 Do CPU load averaging over a number of different intervals. Allow each interval to be chosen by sending a parameter to source_load and target_load. 0 is instantaneous, idx > 0 returns a decaying average with the most recent sample weighted at 2^(idx-1). To a maximum of 3 (could be eas
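
One common way to implement such decaying averages, shown as an assumed illustration rather than the patch's exact weighting: index 0 is instantaneous, and larger indexes decay more slowly.

/* Decaying load averages over several intervals. */
#include <stdio.h>

#define NR_IDX 4

static unsigned long cpu_load[NR_IDX];

static void update_cpu_load(unsigned long this_load)
{
	cpu_load[0] = this_load;                      /* instantaneous */
	for (int i = 1; i < NR_IDX; i++) {
		unsigned long scale = 1UL << i;

		/* Old history weighted (scale - 1), newest sample weighted 1. */
		cpu_load[i] = (cpu_load[i] * (scale - 1) + this_load) / scale;
	}
}

int main(void)
{
	unsigned long samples[] = { 0, 0, 1024, 1024, 1024, 0, 0 };

	for (unsigned s = 0; s < sizeof(samples) / sizeof(samples[0]); s++) {
		update_cpu_load(samples[s]);
		printf("sample %4lu -> load[0..3] = %lu %lu %lu %lu\n", samples[s],
		       cpu_load[0], cpu_load[1], cpu_load[2], cpu_load[3]);
	}
	return 0;
}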

cpu load calculation

2001-04-12 Thread Gilad Ben-Yossef
Hi there, We think we might have encountered a problem with the cpu load calculation in Linux (2.2.x). The current calculation is based on dividing the jiffies into user, system and idle jiffies according to the process that the timer hardware interrupt interrupts. In our case the timer interrupt

usleep magically reduces cpu load?

2001-03-02 Thread SmartList
Why does this use up about 5% CPU (on my system) (pseudo code of course) while (data,size = get_data) { write(/dev/dsp,data,size); } And this only uses about 0%: while (data,size = get_data) { write(/dev/dsp,data,size); usleep(1); } I've also tried replacing the usleep
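
A compilable rendering of the pseudo code, under assumptions: get_data() is a stand-in for whatever produces the audio, and /dev/dsp is the OSS device from the original mail; toggle USE_USLEEP to compare the reported cpu load of the two variants in top(1).

/* Write loop to /dev/dsp with and without the usleep(1). */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define USE_USLEEP 1

/* Hypothetical data source: a silent 4 KiB buffer per call. */
static size_t get_data(char *buf, size_t max)
{
	memset(buf, 0, max);
	return max;
}

int main(void)
{
	char buf[4096];
	int fd = open("/dev/dsp", O_WRONLY);

	if (fd < 0)
		return 1;
	for (;;) {
		size_t n = get_data(buf, sizeof(buf));

		if (write(fd, buf, n) < 0)
			break;
#if USE_USLEEP
		usleep(1);      /* the variant the mail says shows ~0% cpu */
#endif
	}
	close(fd);
	return 0;
}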
