On 3 April 2013 12:01, stratosk wrote:
> I'm sorry, I don't understand.
> The goal of this patch is not energy saving.
He probably misunderstood it...
> The goal is to detect CPU load as soon as possible to increase frequency.
>
> Could you please clarify this?
I'm sorry, I don't understand.
The goal of this patch is not energy saving.
The goal is to detect CPU load as soon as possible to increase frequency.
Could you please clarify this?
Thanks,
Stratos
"Rafael J. Wysocki" wrote:
On Tuesday, April 02, 2013 06:49:14 PM Stratos Karafotis wrote:
> On 04/02/2013 04:50 PM, Rafael J. Wysocki wrote:
> > Do you have any numbers indicating that this actually makes things better?
> >
> > Rafael
>
> No, I don't.
> The expected behaviour after this patch is to "force" max frequency a few
> sampling periods earlier.
On 04/02/2013 04:50 PM, Rafael J. Wysocki wrote:
> Do you have any numbers indicating that this actually makes things better?
>
> Rafael
No, I don't.
The expected behaviour after this patch is to "force" max frequency a few
sampling periods earlier.
The idea was to increase system responsiveness.
> Hi Rafael,
>
> In case you are interested in this patch I rebased it to the latest
> linux-pm/bleeding-edge.
>
> Thanks,
> Stratos
>
> --
> Instead of checking only the absolute value of CPU load_freq to increase
> frequency, we detect forthcoming CPU load rise and increase frequency
> earlier.
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 8843cd3..e3233d3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
> Instead of checking only the absolute value of CPU load_freq to increase
> frequency, we detect forthcoming CPU load rise and increase frequency
> earlier.
>
> Every sampling rate, we calculate the gradient of load_freq. If it is
> too steep we assume that the load most probably will go over
> up_threshold in next iteration(s) and we increase frequency.
In case you are interested in this patch I rebased it to the latest
linux-pm/bleeding-edge.
Thanks,
Stratos
--
Instead of checking only the absolute value of CPU load_freq to increase
frequency, we detect forthcoming CPU load rise and increase frequency
earlier.
Every sampling rate, we calculate the gradient of load_freq. If it is
too steep we assume that the load most probably will go over
up_threshold in next iteration(s) and we increase frequency.
On Friday, February 22, 2013 11:27:09 AM Viresh Kumar wrote:
> On Fri, Feb 22, 2013 at 7:26 AM, Viresh Kumar wrote:
> > On 21 February 2013 23:09, Stratos Karafotis wrote:
>
> >> Instead of checking only the absolute value of CPU load_freq to increase
> >> frequency, we detect forthcoming CPU load rise and increase frequency
> >> earlier.
On Fri, Feb 22, 2013 at 7:26 AM, Viresh Kumar wrote:
> On 21 February 2013 23:09, Stratos Karafotis wrote:
>> Instead of checking only the absolute value of CPU load_freq to increase
>> frequency, we detect forthcoming CPU load rise and increase frequency
>> earlier.
>
> Instead of checking only the absolute value of CPU load_freq to increase
> frequency, we detect forthcoming CPU load rise and increase frequency
> earlier.
>
> Every sampling rate, we calculate the gradient of load_freq. If it is
> too steep we assume that the load most probably will go over
> up_threshold in next iteration(s) and we increase frequency.
---8<--------
Instead of checking only the absolute value of CPU load_freq to increase
frequency, we detect forthcoming CPU load rise and increase frequency
earlier.
Every sampling rate, we calculate the gradient of load_freq. If it is
too steep we assume that the load most probably will go over
up_threshold in next iteration(s) and we increase frequency.
Hi Again,
On Thu, Feb 21, 2013 at 5:01 PM, Stratos Karafotis
wrote:
> diff --git a/drivers/cpufreq/cpufreq_ondemand.c
> b/drivers/cpufreq/cpufreq_ondemand.c
> @@ -168,16 +174,29 @@ static void od_check_cpu(int cpu, unsigned int
> load_freq)
> struct cpufreq_policy *policy = dbs_info->c
I added the grad_up_threshold.
Patch v2 follows.
Thanks again,
Stratos
8<--
Instead of checking only the absolute value of CPU load_freq to increase
frequency, we detect forthcoming CPU load rise and increase frequency
earlier.
Every sampling rate, we calculate the gradient of load_freq.
Hi Stratos,
On Thu, Feb 21, 2013 at 2:20 AM, Stratos Karafotis
wrote:
> Instead of checking only the absolute value of CPU load_freq to increase
> frequency, we detect forthcoming CPU load rise and increase frequency
> earlier.
>
> Every sampling rate, we calculate the gradient of load_freq.
Instead of checking only the absolute value of CPU load_freq to increase
frequency, we detect forthcoming CPU load rise and increase frequency
earlier.
Every sampling rate, we calculate the gradient of load_freq.
If it is too steep we assume that the load most probably will
go over up_threshold in next iteration(s) and we increase frequency.
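Reduced to user-space C, the gradient idea looks roughly like this (a sketch only: `grad_up_threshold`, the struct, and the function names are stand-ins invented here, not the governor's actual code):

```c
#include <stdbool.h>

/* Keep the previous load_freq sample; boost early when the rise between
 * two consecutive samples is steeper than grad_up_threshold, even while
 * the absolute load is still below up_threshold. */
struct grad_state {
    unsigned int prev_load_freq;
    unsigned int up_threshold;      /* e.g. 95 */
    unsigned int grad_up_threshold; /* e.g. 25: max tolerated rise per sample */
};

bool should_boost(struct grad_state *s, unsigned int load_freq)
{
    unsigned int grad = load_freq > s->prev_load_freq
                      ? load_freq - s->prev_load_freq : 0;
    bool boost = load_freq > s->up_threshold || grad > s->grad_up_threshold;

    s->prev_load_freq = load_freq;
    return boost;
}
```

With this shape, a load jumping from 20% to 60% triggers the boost one or more sampling periods before the absolute value would have crossed up_threshold.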
To get the latest runnable info, we need to do this cpuload update after
task_tick.
Signed-off-by: Alex Shi
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dbab4b3..4f4714e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 66ce1f1..06d27af 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2700,8 +2700,8 @@ void scheduler_tick(void)
raw_spin_lock(&rq->lock);
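The hunk is truncated above; given the stated goal, it most likely just swaps the load update and the task tick. A reconstruction of its probable shape (based on scheduler_tick() of that kernel era, not the actual commit):

```diff
@@ -2700,8 +2700,8 @@ void scheduler_tick(void)
 	raw_spin_lock(&rq->lock);
 	update_rq_clock(rq);
-	update_cpu_load_active(rq);
 	curr->sched_class->task_tick(rq, curr, 0);
+	update_cpu_load_active(rq);
 	raw_spin_unlock(&rq->lock);
```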
On Mon, Oct 22, 2012 at 01:10:40PM +0200, Peter Zijlstra wrote:
> On Sat, 2012-10-20 at 21:06 +0200, Andrea Righi wrote:
> > @@ -383,13 +383,7 @@ struct rq {
> > struct list_head leaf_rt_rq_list;
> > #endif
> >
>
> > + unsigned long __percpu *nr_uninterruptible;
>
> This is O(nr_cpus^2) memory..
On Sat, 2012-10-20 at 21:06 +0200, Andrea Righi wrote:
> @@ -383,13 +383,7 @@ struct rq {
> struct list_head leaf_rt_rq_list;
> #endif
>
> + unsigned long __percpu *nr_uninterruptible;
This is O(nr_cpus^2) memory..
> +unsigned long nr_uninterruptible_cpu(int cpu)
> +{
> +
Account load average, nr_running and nr_uninterruptible tasks per-cpu.
The new task_struct attribute on_cpu_uninterruptible is added to
properly keep track of the cpu at deactivate time, when the task is set
to the uninterruptible sleep state.
Moreover, rq->nr_uninterruptible is converted to a per-cpu variable.
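The bookkeeping described above can be sketched in user-space C (structure and function names are simplified stand-ins invented here; in the kernel these are per-cpu variables manipulated under rq->lock):

```c
#define NR_CPUS 4

/* One counter per cpu instead of one per runqueue. */
static long nr_uninterruptible[NR_CPUS];

struct task {
    /* Remembers where the task went to uninterruptible sleep, so the
     * matching decrement hits the same cpu's counter even if the
     * wakeup happens elsewhere. */
    int on_cpu_uninterruptible;
};

void deactivate_uninterruptible(struct task *p, int cpu)
{
    p->on_cpu_uninterruptible = cpu;
    nr_uninterruptible[cpu]++;
}

void activate_task(struct task *p)
{
    /* Credit the cpu recorded at sleep time, keeping per-cpu sums coherent. */
    nr_uninterruptible[p->on_cpu_uninterruptible]--;
}

long nr_uninterruptible_cpu(int cpu)
{
    return nr_uninterruptible[cpu];
}
```

This is also why the on_cpu_uninterruptible field is needed at all: without it, a task woken on a different cpu would decrement the wrong counter and the per-cpu values would drift.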
On Thu, Oct 04, 2012 at 02:12:08PM +0200, Peter Zijlstra wrote:
> On Thu, 2012-10-04 at 11:43 +0200, Andrea Righi wrote:
> >
> > Right, the update must be atomic to have a coherent nr_uninterruptible
> > value. And AFAICS the only way to account a coherent
> > nr_uninterruptible
> > value per-cpu
On Thu, 2012-10-04 at 11:43 +0200, Andrea Righi wrote:
>
> Right, the update must be atomic to have a coherent nr_uninterruptible
> value. And AFAICS the only way to account a coherent
> nr_uninterruptible
> value per-cpu is to go with atomic ops... mmh... I'll think more on
> this.
You could st
On Thu, Oct 04, 2012 at 10:59:46AM +0200, Peter Zijlstra wrote:
> On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote:
> > +++ b/kernel/sched/core.c
> > @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct
> > task_struct *p, int flags)
> > void activate_task(struct rq *rq, stru
On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote:
> +++ b/kernel/sched/core.c
> @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct
> task_struct *p, int flags)
> void activate_task(struct rq *rq, struct task_struct *p, int flags)
> {
> if (task_contributes_to_load(
Account per-cpu load average, as well as nr_running and
nr_uninterruptible tasks.
The new on_cpu_uninterruptible element is added to task_struct to
properly keep track of the cpu where the task was set to the
uninterruptible sleep state.
This feature is required by the cpusets cgroup subsystem
This patch changes how the cpu load exerted by fair_sched_class tasks
is calculated. Load exerted by fair_sched_class tasks on a cpu is now a
summation of the group weights, rather than summation of task weights.
Weight exerted by a group on a cpu is dependent on the shares allocated
to it.
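A rough model of the described change (all names invented here; the real CFS share arithmetic is considerably more involved): a group exerts its configured shares on a cpu scaled by the fraction of the group's load that sits there, and the cpu's load sums these group weights rather than the individual task weights.

```c
struct group {
    unsigned long shares;          /* e.g. 1024 = default cgroup shares */
    unsigned long load_on_cpu[4];  /* per-cpu sum of the group's task loads */
    unsigned long total_load;      /* sum of load_on_cpu[] */
};

unsigned long group_weight_on_cpu(const struct group *g, int cpu)
{
    if (!g->total_load)
        return 0;
    /* Shares scaled by the fraction of the group's load on this cpu. */
    return g->shares * g->load_on_cpu[cpu] / g->total_load;
}

unsigned long cpu_load(const struct group *groups, int ngroups, int cpu)
{
    unsigned long load = 0;
    for (int i = 0; i < ngroups; i++)
        load += group_weight_on_cpu(&groups[i], cpu);
    return load;
}
```

The practical consequence: a group with many runnable tasks on one cpu can never exert more weight there than its shares allow, whereas a per-task summation would grow without bound.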
de=0600,listmode=0644
ln -s .usbfs/devices /dev/bus/usb/devices
mount --rbind /dev/bus/usb /proc/bus/usb
...so that usb would work with a custom kernel...
Oh, and btw, my cpu is an Intel P4.
Also, I applied the following patch and build/ran a kernel (but it did
not reduce the cpu load on my system)
So I would assume that delay_tsc() probably only makes the situation worse for
the libata tests, but the real problem is at __switch_to() and schedule(). Do
you agree with these assumptions?
Yes. I agree that percentage of CPU time is unreasonably high for these
functions. But not only for them.
Hi Rafal, thank you for your help!
On Wednesday 08 August 2007 22:08:18 Rafał Bilski wrote:
> > Hello again,
>
> Hi,
>
> > I 'm now using libata on the same system described before (see attached
> > dmesg.txt). When writing to both disks I think the problem is now worse
> > (pata_oprof_bad.txt, pata_vmstat_bad.txt), even the oprofile script needed
> > half an hour to complete!
Hello again,
Hi,
I'm now using libata on the same system described before (see attached
dmesg.txt). When writing to both disks I think the problem is now worse
(pata_oprof_bad.txt, pata_vmstat_bad.txt), even the oprofile script needed
half an hour to complete! For completion I also attach the
On Tuesday 07 August 2007 03:37:08 Alan Cox wrote:
> > > acpi_pm_read is capable of disappearing into SMM traps which will make
> > > it look very slow.
> >
> > what is an SMM trap? I googled a bit but didn't get it...
>
> One of the less documented bits of the PC architecture. It is possible to
>
On Tuesday 07 August 2007 12:03:28 Rafał Bilski wrote:
> >> Just tested (plain curiosity).
> >> via82cxxx average result @533MHz:
> >> /dev/hda:
> >> Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec
> >> Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec
> >> pata_via average result @533MHz:
Just tested (plain curiosity).
via82cxxx average result @533MHz:
/dev/hda:
Timing cached reads: 232 MB in 2.00 seconds = 115.93 MB/sec
Timing buffered disk reads: 64 MB in 3.12 seconds = 20.54 MB/sec
pata_via average result @533MHz:
/dev/sda:
Timing cached reads: 234 MB in 2.01 seconds =
> > acpi_pm_read is capable of disappearing into SMM traps which will make
> > it look very slow.
>
> what is an SMM trap? I googled a bit but didn't get it...
One of the less documented bits of the PC architecture. It is possible to
arrange that the CPU jumps into a special mode when triggered
Rafał Bilski wrote:
Hello Rafal,
Hello,
However I find it quite possible to have reached the throughput limit
because of software (driver) problems. I have done various testing
(mostly "hdparm -tT" with exactly the same PC and disks since about
kernel 2.6.8 (maybe even earlier). I remember wi
Hi Alan,
Alan Cox wrote:
In Your oprofile output I find "acpi_pm_read" particulary interesting.
Unlike other VIA chipsets, which I know, Your doesn't use VLink to
connect northbridge to southbridge. Instead PCI bus connects these two.
As You probably know maximal PCI throughput is 133MiB/s. In theory.
Hello Rafal,
Hello,
However I find it quite possible to have reached the throughput limit
because of software (driver) problems. I have done various testing
(mostly "hdparm -tT" with exactly the same PC and disks since about
kernel 2.6.8 (maybe even earlier). I remember with certainty that rea
> > In Your oprofile output I find "acpi_pm_read" particulary interesting.
> > Unlike other VIA chipsets, which I know, Your doesn't use VLink to
> > connect northbridge to southbridge. Instead PCI bus connects these two.
> > As You probably know maximal PCI throughput is 133MiB/s. In theory. I
Seems that your context switch rate when running two instances of
badblocks against two different disks went batshit insane. It doesn't
happen here.
Please capture the `vmstat 1' output while running the problematic
workload.
The oom-killing could have been unrelated to the CPU load problem. iirc
badblocks
Hi,
Andrew Morton wrote:
I suspect I was fooled by the oprofile output, which showed tremendous
amounts of load in schedule() and switch_to(). The percentages which
opreport shows are the percentage of non-halted CPU time. So if you have a
function in the kernel which is using 1% of the total
ever is certainly
because of the high system CPU load. If you see the two_discs_bad.txt
which I attached on my original message, you'll see that *vmlinux*,
and specifically the *scheduler*, take up most time.
And the fact that this happens only when running two i/o processes but
when runnin
> We're bad.
> >
> > Seems that your context switch rate when running two instances of
> > badblocks against two different disks went batshit insane. It doesn't
> > happen here.
> >
> > Please capture the `vmstat 1' output while running the pr
certainly because of
the high system CPU load. If you see the two_discs_bad.txt which I attached
on my original message, you'll see that *vmlinux*, and specifically the
*scheduler*, take up most time.
And the fact that this happens only when running two i/o processes but when
running only one eve
ater now but the system
CPU load much less, than that of two_discs_bad.txt.
However the cron jobs still seem to have a hard time finishing, even
though they seem now to consume about 90% CPU time. Could someone please
explain some things to me that seem vital to understanding the situation?
disks went batshit insane. It doesn't
happen here.
Please capture the `vmstat 1' output while running the problematic
workload.
The oom-killing could have been unrelated to the CPU load problem. iirc
badblocks uses a lot of memory, so it might have been genuine. Keep an eye
on the /pr
Please capture the `vmstat 1' output while running the problematic
workload.
The oom-killing could have been unrelated to the CPU load problem. iirc
badblocks uses a lot of memory, so it might have been genuine. Keep an eye
on the /proc/meminfo output and send the kernel dmesg output from the
the high system CPU load. If you see the two_discs_bad.txt which I attached
on my original message, you'll see that *vmlinux*, and specifically the
*scheduler*, take up most time.
And the fact that this happens only when running two i/o processes but when
running only one everything is
and as a result access is slower
> and slower.
Hello and thanks for your reply.
The cron job that is running every 10 min on my system is mpop (a
fetchmail-like program) and another running every 5 min is mrtg. Both
normally finish within 1-2 seconds.
The fact that these simple cron jobs don'
Hello again,
Hi!
Was my report so complicated? Perhaps I shouldn't have included so many
oprofile outputs. Anyway, if anyone wants to have a look, the most important
is two_discs_bad.txt oprofile output, attached on my original message. The
problem is 100% reproducible for me so I would apprec
> encountered when I started two processes doing heavy I/O
> on hda and hdc, "badblocks -v -w /dev/hda" and "badblocks -v -w
> /dev/hdc". At the beginning (two_discs.txt) everything was fine and
> vmstat reported more than 90% iowait CPU load. However, after a while
> (
On Mon, 26 Feb 2007 13:42:50 +0300 (MSK) malc wrote:
> On Mon, 26 Feb 2007, Pavel Machek wrote:
>
> > Hi!
> >
> >> [..snip..]
> >>
> The current situation ought to be documented. Better yet some flag
> can
> >>>
> >>> It probably _is_ documented, somewhere :-). If you find nice place
>
diff --git a/Documentation/cpu-load.txt b/Documentation/cpu-load.txt
new file mode 100644
index 0000000..287224e
--- /dev/null
+++ b/Documentation/cpu-load.txt
@@ -0,0 +1,113 @@
+CPU load
+
+
+Linux exports various bits of information via `/proc/stat' and
+`/proc/uptime' that userland tools, such as top(1), use to calculate
+the average time s
On Mon, 26 Feb 2007, Pavel Machek wrote:
Hi!
[..snip..]
The current situation ought to be documented. Better yet some flag
can
It probably _is_ documented, somewhere :-). If you find nice place
where to document it (top manpage?) go ahead with the patch.
How about this:
Looks okay to me.
> How about this:
Looks okay to me. (You should probably add your name to it, and I do
not like html-like markup... plus please don't add extra spaces
between words)...
You probably want to send it to akpm?
Pavel
>
On Wed, 14 Feb 2007, Pavel Machek wrote:
Hi!
[..snip..]
The current situation ought to be documented. Better yet some flag
can
It probably _is_ documented, somewhere :-). If you find nice place
where to document it (top manpage?) go ahead with the patch.
How about this:
CPU load
Hi!
>
> >>>I have (had?) code that 'exploits' this. I believe I could eat 90% of cpu
> >>>without being noticed.
> >>
> >>Slightly changed version of hog(around 3 lines in total changed) does that
> >>easily on 2.6.18.3 on PPC.
> >>
> >>http://www.boblycat.org/~malc/apc/load-hog-ppc.png
> >
> >I g
On Wednesday 14 February 2007 18:28, malc wrote:
> On Wed, 14 Feb 2007, Con Kolivas wrote:
> > On Wednesday 14 February 2007 09:01, malc wrote:
> >> On Mon, 12 Feb 2007, Pavel Machek wrote:
> >>> Hi!
>
> [..snip..]
>
> >>> I have (had?) code that 'exploits' this. I believe I could eat 90% of
> >>>
On Wed, 14 Feb 2007, Con Kolivas wrote:
On Wednesday 14 February 2007 09:01, malc wrote:
On Mon, 12 Feb 2007, Pavel Machek wrote:
Hi!
[..snip..]
I have (had?) code that 'exploits' this. I believe I could eat 90% of cpu
without being noticed.
Slightly changed version of hog (around 3 lines in total changed) does that
easily on 2.6.18.3 on PPC.
On Wednesday 14 February 2007 09:01, malc wrote:
> On Mon, 12 Feb 2007, Pavel Machek wrote:
> > Hi!
> >
> >> The kernel looks at what is using cpu _only_ during the
> >> timer
> >> interrupt. Which means if your HZ is 1000 it looks at
> >> what is running
> >> at precisely the moment those 1000 timer ticks occur.
On Mon, 12 Feb 2007, Pavel Machek wrote:
Hi!
The kernel looks at what is using cpu _only_ during the timer
interrupt. Which means if your HZ is 1000 it looks at what is running
at precisely the moment those 1000 timer ticks occur. It is
theoretically possible using this measurement system to use >99% cpu
and record zero usage.
Hi!
> The kernel looks at what is using cpu _only_ during the timer
> interrupt. Which means if your HZ is 1000 it looks at what is running
> at precisely the moment those 1000 timer ticks occur. It is
> theoretically possible using this measurement system to use >99% cpu
> and record zero usage.
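A toy model makes that blind spot concrete (all numbers hypothetical): account a task only at tick instants and compare against how long it really ran.

```c
/* Count timer ticks charged to a task whose busy window is
 * [start_ms, start_ms + busy_ms) within every period_ms, sampled by
 * ticks every tick_ms starting at t = 0, over total_ms of wall time. */
int ticks_charged(int tick_ms, int period_ms, int start_ms, int busy_ms,
                  int total_ms)
{
    int charged = 0;
    for (int t = 0; t < total_ms; t += tick_ms) {
        int phase = t % period_ms;
        if (phase >= start_ms && phase < start_ms + busy_ms)
            charged++;
    }
    return charged;
}
```

With HZ=100 (a tick every 10 ms), a task that burns 9 ms of every 10 ms but always sleeps across the tick instant is charged nothing, while a task that only ever runs at tick time looks 100% busy; this is exactly the evasion the hog program demonstrates.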
On Mon, 12 Feb 2007, Andrew Burgess wrote:
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
How does the kernel calculates the value it places in `/proc/stat' at
4th position (i.e. "idle: twiddling thumbs")?
..
Later small kernel module was developed that tried to time how much
time
On Mon, 12 Feb 2007, Con Kolivas wrote:
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
Hello,
[..snip..]
The kernel looks at what is using cpu _only_ during the timer
interrupt. Which means if your HZ is 1000 it looks at what is running
at precisely the moment those 1000 timer ticks occur.
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
>
> How does the kernel calculates the value it places in `/proc/stat' at
> 4th position (i.e. "idle: twiddling thumbs")?
>
..
>
> Later small kernel module was developed that tried to time how much
> time is spent in the idle handler inside th
> developers incorrectly conclude
> that utilizing RTC suddenly made the code run slower, after all /proc/stat
> now claims that CPU load is higher, while in reality it stayed the same -
> it's the accuracy that has improved (somewhat)
>
> But back to the original question, does it l
utilizing RTC suddenly made the code run slower, after all /proc/stat
now claims that CPU load is higher, while in reality it stayed the same -
it's the accuracy that has improved (somewhat)
But back to the original question, does it look at what's running on timer
interrupt only or any IRQ? (something which is
On Mon, 12 Feb 2007, Con Kolivas wrote:
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
[..snip..]
The kernel looks at what is using cpu _only_ during the timer
interrupt. Which means if your HZ is 1000 it looks at what is running
at precisely the moment those 1000 timer ticks occur.
On Monday 12 February 2007 16:54, malc wrote:
> On Mon, 12 Feb 2007, Con Kolivas wrote:
> > On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
>
> [..snip..]
>
> > The kernel looks at what is using cpu _only_ during the timer
> > interrupt. Which means if your HZ is 1000 it looks at what is running
On Monday 12 February 2007 16:55, Stephen Rothwell wrote:
> On Mon, 12 Feb 2007 16:44:22 +1100 "Con Kolivas" <[EMAIL PROTECTED]> wrote:
> > The kernel looks at what is using cpu _only_ during the timer
> > interrupt. Which means if your HZ is 1000 it looks at what is running
> > at precisely the moment those 1000 timer ticks occur.
On Mon, 12 Feb 2007 16:44:22 +1100 "Con Kolivas" <[EMAIL PROTECTED]> wrote:
>
> The kernel looks at what is using cpu _only_ during the timer
> interrupt. Which means if your HZ is 1000 it looks at what is running
> at precisely the moment those 1000 timer ticks occur. It is
> theoretically possible using this measurement system to use >99% cpu
On 12/02/07, Vassili Karpov <[EMAIL PROTECTED]> wrote:
Hello,
How does the kernel calculate the value it places in `/proc/stat' at
4th position (i.e. "idle: twiddling thumbs")?
For background information as to why this question arose in the first
place read on.
While writing the code dealing
Hello,
How does the kernel calculate the value it places in `/proc/stat' at
4th position (i.e. "idle: twiddling thumbs")?
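For reference, the 4th field can be pulled out of that line with something like this (a user-space sketch; the field order -- user, nice, system, idle, all in USER_HZ jiffies -- is as documented in proc(5)):

```c
#include <stdio.h>

/* Parse the aggregate "cpu" line of /proc/stat and return the idle
 * field (4th position), or -1 if the line doesn't match. */
long idle_jiffies(const char *stat_line)
{
    long user, nice, system, idle;

    if (sscanf(stat_line, "cpu %ld %ld %ld %ld",
               &user, &nice, &system, &idle) != 4)
        return -1;
    return idle;
}
```

And as discussed below, the value this returns is itself accumulated only at timer-interrupt sampling points, which is the root of the whole thread.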
For background information as to why this question arose in the first
place read on.
While writing the code dealing with video acquisition/processing at
work noticed that wh
On Thursday 08 February 2007 09:42, you wrote:
> On Wed, 7 Feb 2007, Arjan van de Ven wrote:
> > Marc Donner wrote:
> >> 501: 215717 209388 209430 202514 PCI-MSI-edge
> >> eth10 502:927 1019 1053888 PCI-MSI-edge
> >> eth11
> >
> > this is odd, this is not an irq distribution that irqbalance should
> > give you
On Wed, 7 Feb 2007, Arjan van de Ven wrote:
Marc Donner wrote:
501: 215717 209388 209430 202514 PCI-MSI-edge eth10
502:927 1019 1053888 PCI-MSI-edge eth11
this is odd, this is not an irq distribution that irqbalance should give you
1
On Wednesday 07 February 2007 06:59, you wrote:
> Marc Donner wrote:
> > 501: 215717 209388 209430 202514 PCI-MSI-edge
> > eth10 502:927 1019 1053888 PCI-MSI-edge
> > eth11
>
> this is odd, this is not an irq distribution that irqbalance should give you
Marc Donner wrote:
501: 215717 209388 209430 202514 PCI-MSI-edge eth10
502:927 1019 1053888 PCI-MSI-edge eth11
this is odd, this is not an irq distribution that irqbalance should
give you
1
NMI:451 39 42
> can you send me the output of
>
> cat /proc/interrupts
here it is:
irqbalance is running.
network loaded with 600Mbit/s for about 5minutes.
CPU0 CPU1 CPU2 CPU3
0: 37713 41667 41673 49914 IO-APIC-edge timer
1: 0 0
Arjan van de Ven wrote:
Pablo Sebastian Greco wrote:
2296:427426436 134563009 PCI-MSI-edge
eth1
2297:252252 135926471257 PCI-MSI-edge
eth0
this suggests that cores would be busy rather than only one
-
Yes, but you are looki
Pablo Sebastian Greco wrote:
2296:427426436 134563009 PCI-MSI-edge eth1
2297:252252 135926471257 PCI-MSI-edge eth0
this suggests that cores would be busy rather than only one
Arjan van de Ven wrote:
Marc Donner wrote:
see http://www.irqbalance.org to get irqbalance
I now have tried irqloadbalance, but the same problem.
can you send me the output of
cat /proc/interrupts
(taken when you are or have been loading the network)
maybe there's something fishy going
Marc Donner wrote:
see http://www.irqbalance.org to get irqbalance
I now have tried irqloadbalance, but the same problem.
can you send me the output of
cat /proc/interrupts
(taken when you are or have been loading the network)
maybe there's something fishy going on
On Tuesday 06 February 2007 19:09, you wrote:
> On Tue, 2007-02-06 at 18:32 +0100, Marc Donner wrote:
> > Hi @all
> >
> > we have detected some problems on our live systems and so i have build a
> > test setup in our lab as follow:
> >
> > 3 Core 2 duo servers, each with 2 CPUs, with GE interfaces
On Tue, 2007-02-06 at 18:32 +0100, Marc Donner wrote:
> Hi @all
>
> we have detected some problems on our live systems and so I have built a test
> setup in our lab as follow:
>
> 3 Core 2 duo servers, each with 2 CPUs, with GE interfaces. 2 of them are
> only for generating network traffic. t
8/13
Do CPU load averaging over a number of different intervals. Allow
each interval to be chosen by sending a parameter to source_load
and target_load. 0 is instantaneous, idx > 0 returns a decaying average
with the most recent sample weighted at 2^(idx-1). To a maximum of 3
(could be easily increased).
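The decaying average described above might be sketched as follows (the exact scaling is an assumption; the kernel's fixed-point arithmetic may differ in detail):

```c
/* idx == 0: instantaneous load. idx > 0: fold each new sample into the
 * running value with weight 1/2^(idx-1), so larger idx values decay
 * more slowly and smooth out spikes. */
unsigned long decay_load(unsigned long old_load, unsigned long new_load,
                         int idx)
{
    unsigned long scale;

    if (idx == 0)
        return new_load;
    scale = 1UL << (idx - 1);
    return (old_load * (scale - 1) + new_load) / scale;
}
```

source_load and target_load would then pick an idx per balancing context: 0 where responsiveness matters, a larger idx where stability does.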
Hi there,
We think we might have encountered a problem with the cpu load
calculation in Linux (2.2.x). The current calculation is based on
dividing the jiffies into user, system and idle jiffies according to the
process the timer hardware interrupt interrupts.
In our case the timer interrupt
Why does this use up about 5% CPU (on my system) (pseudo code of course):

while ((size = get_data(&data)) > 0) {
        write(dsp_fd, data, size);
}

And this only uses about 0%:

while ((size = get_data(&data)) > 0) {
        write(dsp_fd, data, size);
        usleep(1);
}

I've also tried replacing the usleep