On Mon, Sep 23, 2013 at 07:11:21PM +0200, Stephane Eranian wrote:
> Ok so what you are saying is that the ovfl_status is not maintained private
> to each counter but shared among all PEBS counters by ucode. That's
> how you end up leaking between counters like that.
I only remember asking for
On Mon, Sep 23, 2013 at 05:25:19PM +0200, Stephane Eranian wrote:
It's not just a broken threshold. When a PEBS event happens it can re-arm
itself but only if you program a RESET value != 0. We don't do that, so
each counter should only ever fire once.
We must do this because PEBS is broken
Hi,
Some updates on this problem.
I have been running tests all weekend long on my HSW.
I can reproduce the problem. What I know:
- It is not linked with callchain
- The extra entries are valid
- The reset values are still zeroes
- The problem does not happen on SNB with the same test case
-
On Tue, Sep 10, 2013 at 07:15:19AM -0700, Stephane Eranian wrote:
> The threshold is where to generate the interrupt. It does not mean
> where to stop PEBS recording.
It does, since we don't set a reset value. So once a PEBS assist
happens, that counter stops until we reprogram it in the PMI.
>
* Stephane Eranian wrote:
> Hi,
>
> Ok, so I am able to reproduce the problem using a simpler
> test case with a simple multithreaded program where
> #threads >> #CPUs.
Does it go away if you use 'perf record --all-cpus'?
> [ 2229.021934] WARNING: CPU: 6 PID: 17496 at
>
Stephane Eranian wrote:
> a simple multithreaded program where
> #threads >> #CPUs
To put it another way, does Intel's HT work for CPU intensive and IO
minimal tasks? I think HT assumes some amount of inefficient IO
coupled with pure CPU usage.
Hi,
Ok, so I am able to reproduce the problem using a simpler
test case with a simple multithreaded program where
#threads >> #CPUs.
[ 2229.021934] WARNING: CPU: 6 PID: 17496 at
arch/x86/kernel/cpu/perf_event_intel_ds.c:1003
intel_pmu_drain_pebs_hsw+0xa8/0xc0()
[ 2229.021936] Unexpected number
* Stephane Eranian wrote:
> Hi,
>
>
> And what was the perf record command line for this crash?
AFAICS it wasn't a crash but the WARN_ON() in intel_pmu_drain_pebs_hsw(),
at arch/x86/kernel/cpu/perf_event_intel_ds.c:1003.
at = (struct pebs_record_hsw *)(unsigned
Stephane Eranian wrote:
[ 2229.021966] Call Trace:
[ 2229.021967] <NMI> [<ffffffff8159dcd6>] dump_stack+0x46/0x58
[ 2229.021976] [<ffffffff8108dfdc>] warn_slowpath_common+0x8c/0xc0
[ 2229.021979] [<ffffffff8108e0c6>] warn_slowpath_fmt+0x46/0x50
[ 2229.021982] [<ffffffff810646c8>] intel_pmu_drain_pebs_hsw+0xa8/0xc0