* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Mon, 3 December 2007 01:57:02 +0100, Jörn Engel wrote:
> >
> > After an eternity of compile time, this config does generate some useful
> > output. qemu is not to blame.
>
> Or is it? The output definitely looks suspicious. Large amounts of
> code get processed within a microsecond, while
On Sun, 2 December 2007 22:19:00 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
>
> > Maybe one more thing: can you send me the config you used for the
> > setup above? I'd like to know whether qemu or my config is to blame.
>
> sure - attached.
After an eternity of compile time,
On Sun, 2 December 2007 21:45:59 +0100, Ingo Molnar wrote:
>
> to capture that trace i did not use -rt, i just patched latest -git
> with:
>
>
> http://people.redhat.com/mingo/latency-tracing-patches/latency-tracing-v2.6.24-rc3.combo.patch
>
> (this has your fixes included already)
>
have done:
On Sun, 2 December 2007 21:45:59 +0100, Ingo Molnar wrote:
>
> to capture a 1 second trace of what the system is doing. I think your
> troubles are due to running it within a qemu guest - that is not a
> typical utilization so you are in uncharted waters.
Looks like it. Guess I'll switch to
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> > oprofile helps if you can reliably reproduce the slowdown in a loop
> > or for a long amount of time, with lots of CPU utilization - and
> > then it's also lower overhead. The tracer can be used to capture
> > rare or complex events, and gives the full flow
On Sun, 2 December 2007 21:07:22 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
>
> > Result looked like a livelock and finally convinced me to abandon the
> > latency tracer. Sorry, but it appears to be the right tool for the
> > wrong job.
>
> hm, we routinely use it in -rt to
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Sun, 2 December 2007 16:47:46 +0100, Ingo Molnar wrote:
> >
> > well what does the trace say, where do the delays come from? To get a
> > quick overview you can make tracing lighter weight by doing:
> >
> > echo 0 > /proc/sys/kernel/mcount_enabled
On Sun, 2 December 2007 16:47:46 +0100, Ingo Molnar wrote:
>
> well what does the trace say, where do the delays come from? To get a
> quick overview you can make tracing lighter weight by doing:
>
> echo 0 > /proc/sys/kernel/mcount_enabled
> echo 1 > /proc/sys/kernel/trace_syscalls
I mistyped and
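The toggle Ingo describes above can be sketched defensively: both sysctls are added by the latency-tracer combo patch and do not exist on a stock kernel, so this helper (name hypothetical) skips them when absent.

```shell
#!/bin/sh
# Lightweight-tracing toggle as described above. These sysctls exist only
# on a kernel with the latency-tracer patch applied, so check first.
set_sysctl() {
	f="/proc/sys/kernel/$1"
	if [ -w "$f" ]; then
		echo "$2" > "$f" && echo "set $1=$2"
	else
		echo "skip $1 (latency-tracer patch not applied?)"
	fi
}
set_sysctl mcount_enabled 0	# drop the per-function mcount hook
set_sysctl trace_syscalls 1	# keep syscall entry/exit events only
```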
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> I do. Went through 10-odd runs and annotated the function right below
> mcount each time. Seems to work now.
>
> Trouble is that it doesn't solve my real problem at hand. Something
> is causing significant delays when writing to logfs. Core logfs code
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Sun, 2 December 2007 13:31:25 +0100, Jörn Engel wrote:
> >
> > After another ten or so notrace annotations throughout the spinlock
> > code, the latency tracer appears to work. Not sure how much useful
> > information is missing through all the
On Sun, 2 December 2007 13:31:25 +0100, Jörn Engel wrote:
>
> After another ten or so notrace annotations throughout the spinlock
> code, the latency tracer appears to work. Not sure how much useful
> information is missing through all the annotations, though.
And here is a patch with the needed
On Sun, 2 December 2007 14:57:11 +0100, Ingo Molnar wrote:
>
> hm, do you have CONFIG_FRAME_POINTERS=y, i.e. are the dumps reliable?
I do. Went through 10-odd runs and annotated the function right below
mcount each time. Seems to work now.
Trouble is that it doesn't solve my real problem at
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Sun, 2 December 2007 09:56:08 +0100, Ingo Molnar wrote:
> > * Jörn Engel <[EMAIL PROTECTED]> wrote:
> >
> > > > ah. You should mark pit_read() function as notrace. PIT clocksource
> > > > is rare. (add the 'notrace' word to the function prototype)
>
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Sun, 2 December 2007 12:31:43 +0100, Jörn Engel wrote:
> >
> > This time not even the offsets have changed. Dump is identical.
>
> After another ten or so notrace annotations throughout the spinlock
> code, the latency tracer appears to work. Not
On Sun, 2 December 2007 12:31:43 +0100, Jörn Engel wrote:
>
> This time not even the offsets have changed. Dump is identical.
After another ten or so notrace annotations throughout the spinlock
code, the latency tracer appears to work. Not sure how much useful
information is missing through all
On Sun, 2 December 2007 09:56:08 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
>
> > > ah. You should mark pit_read() function as notrace. PIT clocksource
> > > is rare. (add the 'notrace' word to the function prototype)
> >
> > Hardly a change at all. Apart from some
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> > ah. You should mark pit_read() function as notrace. PIT clocksource
> > is rare. (add the 'notrace' word to the function prototype)
>
> Hardly a change at all. Apart from some offsets, this dump is
> identical.
>
> stopped custom tracer.
> BUG: spinlock
On Sat, 1 December 2007 21:54:56 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
> >
> > stopped custom tracer.
> > BUG: spinlock recursion on CPU#0, sh/953
> > lock: c030f280, .magic: dead4ead, .owner: sh/953, .owner_cpu: 0
> > Pid: 953, comm: sh Not tainted 2.6.24-rc3-ge1cca7e8-dirty
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> Almost. "-serial stdio" was missing. Much better now.
>
> stopped custom tracer.
> BUG: spinlock recursion on CPU#0, sh/953
> lock: c030f280, .magic: dead4ead, .owner: sh/953, .owner_cpu: 0
> Pid: 953, comm: sh Not tainted 2.6.24-rc3-ge1cca7e8-dirty #2
On Sat, 1 December 2007 19:32:56 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
>
> > I have to change my qemu setup a little to see the top of those
> > dumps...
>
> btw., if you start qemu like this:
>
> qemu -cdrom ./cdrom.iso -hda ./hda.img -boot c -full-screen -kernel ~/bzImage
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> > hm, that looks weird. if you disable CONFIG_PROVE_LOCKING, does that
> > improve things? (or just turns a noisy lockup into a silent lockup?)
>
> Not much, although the dumps look different now:
> http://logfs.org/~joern/trace3.jpg
>
On Fri, 30 November 2007 19:46:25 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
> >
> > It compiles. It boots with a 512M RAM (384M was too little with all
> > the other debug options on). But it seems to lock up when running
> trace-cmd. On a rerun it locks up again, but with
Mark Lord wrote:
Ingo Molnar wrote:
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
pick up the latest latency tracer patch from:
sorry, wrong URLs, the correct links are:
http://redhat.com/~mingo/latency-tracing-patches/latency-tracer-v2.6.24-rc2-git5-combo.patch
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Fri, 30 November 2007 14:43:12 +0100, Ingo Molnar wrote:
> >
> >
> > http://redhat.com/~mingo/latency-tracing-patches/latency-tracing-v2.6.24-rc3.combo.patch
> >
> > does it work any better?
>
> It compiles. It boots with a 512M RAM (384M was
On Fri, 30 November 2007 14:43:12 +0100, Ingo Molnar wrote:
>
>
> http://redhat.com/~mingo/latency-tracing-patches/latency-tracing-v2.6.24-rc3.combo.patch
>
> does it work any better?
It compiles. It boots with a 512M RAM (384M was too little with all
the other debug options on). But it
On Fri, 30 November 2007 14:35:46 +0100, Ingo Molnar wrote:
> * Jörn Engel <[EMAIL PROTECTED]> wrote:
> >
> > kernel/sched.c:3384: warning: ‘struct prio_array’ declared inside parameter
> > list
> > kernel/sched.c:3384: warning: its scope is only this definition or
> > declaration, which is probably not
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > > sorry, wrong URLs, the correct links are:
> > >
> > >
> > > http://redhat.com/~mingo/latency-tracing-patches/latency-tracer-v2.6.24-rc2-git5-combo.patch
> > http://redhat.com/~mingo/latency-tracing-patches/trace-cmd.c
> >
> > Don't seem to work with plain
* Jörn Engel <[EMAIL PROTECTED]> wrote:
> On Thu, 15 November 2007 20:36:12 +0100, Ingo Molnar wrote:
> > * Ingo Molnar <[EMAIL PROTECTED]> wrote:
> >
> > > pick up the latest latency tracer patch from:
> >
> > sorry, wrong URLs, the correct links are:
> >
> >
> >
On Thu, 15 November 2007 20:36:12 +0100, Ingo Molnar wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > pick up the latest latency tracer patch from:
>
> sorry, wrong URLs, the correct links are:
>
>
>
Ingo Molnar wrote:
* Mark Lord <[EMAIL PROTECTED]> wrote:
Since Ingo's latency trace patches lock up the machine on resume, the
next thing I'll try instead is to re-enable CONFIG_IRQBALANCE=y.
hm, which patch did you try? Could you check whether all chunks from the
patch below are applied? (these are the fixed
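One quick way to answer "are all chunks from the patch applied" is a reverse dry-run, which succeeds only if the whole patch is already in the tree. A sketch (patch filename taken from the link earlier in the thread; path hypothetical):

```shell
#!/bin/sh
# Check whether every chunk of a patch is applied to the current tree:
# 'patch -R --dry-run' succeeds only if the full patch can be reversed,
# i.e. it is completely applied.
check_applied() {
	if [ -f "$1" ]; then
		if patch -p1 -R --dry-run -s < "$1"; then
			echo "all chunks applied"
		else
			echo "some chunks missing or modified"
		fi
	else
		echo "patch file $1 not found"
	fi
}
check_applied ./latency-tracing-v2.6.24-rc3.combo.patch
```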
Mark Lord wrote:
I have been reporting this off and on since 2.6.23 was released.
This problem was not apparent up to perhaps 2.6.23-rc8,
but definitely became common in 2.6.23 and 2.6.23.1.
Most of the time, a resume-from-RAM on my notebook takes about 2.1 seconds
of kernel time to complete.
Ingo Molnar wrote:
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
Is there a version of these that works with 2.6.23.1 ?
yes, i've backported it and have uploaded the v2.6.23 version to:
http://redhat.com/~mingo/latency-tracing-patches/latency-tracer-v2.6.23.1-combo.patch
..
ok, i experimented
On Fri, 2007-11-16 at 12:23 +0100, Ingo Molnar wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > once that tracer bug was fixed, the best method to generate a trace
> > was to do this:
> >
> > echo 1 > /proc/sys/kernel/stackframe_tracing
> > echo 1 > /proc/sys/kernel/syscall_tracing
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> so here's an UP suspend+resume trace i did:
>
>
> http://redhat.com/~mingo/latency-tracing-patches/misc/trace-suspend-long.txt.bz2
>
> tons of detail - which might be interesting to other folks as well.
> Fact is, our suspend-to-RAM+resume cycle
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> once that tracer bug was fixed, the best method to generate a trace
> was to do this:
>
> echo 1 > /proc/sys/kernel/stackframe_tracing
> echo 1 > /proc/sys/kernel/syscall_tracing
> ./trace-cmd bash -c "echo mem > /sys/power/state" > trace.txt
ok, i experimented around with the latency tracer, trying to capture the
trace of a full suspend+resume cycle, and it needed the tracer fix below
(GTOD clocksource suspend/resume would otherwise confuse the tracer and
you'd get no trace output as a result).
once that tracer bug was fixed, the
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > Is there a version of these that works with 2.6.23.1 ?
>
> yes, i've backported it and have uploaded the v2.6.23 version to:
>
>
> http://redhat.com/~mingo/latency-tracing-patches/latency-tracer-v2.6.23.1-combo.patch
btw., if the trace is too large and
* Mark Lord <[EMAIL PROTECTED]> wrote:
>> sorry, wrong URLs, the correct links are:
>>
>>
>> http://redhat.com/~mingo/latency-tracing-patches/latency-tracer-v2.6.24-rc2-git5-combo.patch
>> http://redhat.com/~mingo/latency-tracing-patches/trace-cmd.c
> ..
>
> Is there a version of these that works with
On Thursday, 15 of November 2007, Mark Lord wrote:
> > Hi!
> >
> >> > I have been reporting this off and on since 2.6.23 was released.
> >> > This problem was not apparent up to perhaps 2.6.23-rc8,
> >> > but definitely became common in 2.6.23 and 2.6.23.1.
> >> >
> >> > Most of the time, a
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> pick up the latest latency tracer patch from:
sorry, wrong URLs, the correct links are:
http://redhat.com/~mingo/latency-tracing-patches/latency-tracer-v2.6.24-rc2-git5-combo.patch
http://redhat.com/~mingo/latency-tracing-patches/trace-cmd.c
* Mark Lord <[EMAIL PROTECTED]> wrote:
>> Can you try nohz=off highres=off? Strange stuff is happening with
>> nohz.
>
> (added Ingo to CC: list: maybe this is some weird interaction with CFS
> and jiffies being reset to 0 on resume ??)
hm, CFS should have no impact here. To see what's happening you
Hi!
> I have been reporting this off and on since 2.6.23 was released.
> This problem was not apparent up to perhaps 2.6.23-rc8,
> but definitely became common in 2.6.23 and 2.6.23.1.
>
> Most of the time, a resume-from-RAM on my notebook takes about 2.1 seconds
> of kernel time to complete.
Mark Lord wrote:
Ray Lee wrote:
..
The real question is, why the 1-sec pauses?
Well, and why the 1-second pauses eventually stop, too. Seems
interesting that they don't continue. Also, they're pretty much
dead-on one- and two-second pauses, with HZ accuracy. Is this with a
NO_HZ kernel?
..
Ray Lee wrote:
On Nov 15, 2007 8:32 AM, Mark Lord <[EMAIL PROTECTED]> wrote:
I have been reporting this off and on since 2.6.23 was released.
This problem was not apparent up to perhaps 2.6.23-rc8,
but definitely became common in 2.6.23 and 2.6.23.1.
Most of the time, a resume-from-RAM on my
On Nov 15, 2007 8:32 AM, Mark Lord <[EMAIL PROTECTED]> wrote:
> I have been reporting this off and on since 2.6.23 was released.
> This problem was not apparent up to perhaps 2.6.23-rc8,
> but definitely became common in 2.6.23 and 2.6.23.1.
>
> Most of the time, a resume-from-RAM on my notebook