On Tue, 30 May 2017, Peter Zijlstra wrote:
> On Wed, May 24, 2017 at 12:01:50PM -0400, Vince Weaver wrote:
> > I already have people really grumpy that you have to have one mmap() page
> > per event, meaning you sacrifice one TLB entry for each event you are
> > measuring.
>
> So there is
> > BTW there's an alternative solution in cycling the NMI watchdog over
> > all available CPUs. Then it would eventually cover all. But that's
> > less real time friendly than relying on RCU.
>
> I don't think we need to worry too much about the watchdog being rt
> friendly. Robustness is the
On Tue, May 30, 2017 at 10:51:51AM -0700, Andi Kleen wrote:
> On Tue, May 30, 2017 at 07:40:14PM +0200, Peter Zijlstra wrote:
> > On Tue, May 30, 2017 at 10:22:08AM -0700, Andi Kleen wrote:
> > > > > You would only need a single one per system however, not one per CPU.
> > > > > RCU already tracks
On Tue, May 30, 2017 at 07:40:14PM +0200, Peter Zijlstra wrote:
> On Tue, May 30, 2017 at 10:22:08AM -0700, Andi Kleen wrote:
> > > > You would only need a single one per system however, not one per CPU.
> > > > RCU already tracks all the CPUs, all we need is a single NMI watchdog
> > > > that
On Tue, May 30, 2017 at 10:22:08AM -0700, Andi Kleen wrote:
> > > You would only need a single one per system however, not one per CPU.
> > > RCU already tracks all the CPUs, all we need is a single NMI watchdog
> > > that makes sure RCU itself does not get stuck.
> > >
> > > So we just have to
On Wed, May 24, 2017 at 12:01:50PM -0400, Vince Weaver wrote:
> I already have people really grumpy that you have to have one mmap() page
> per event, meaning you sacrifice one TLB entry for each event you are
> measuring.
So there is space in that page. We could maybe look at having an array
> > You would only need a single one per system however, not one per CPU.
> > RCU already tracks all the CPUs, all we need is a single NMI watchdog
> > that makes sure RCU itself does not get stuck.
> >
> > So we just have to find a single watchdog somewhere that can trigger
> > NMI.
>
> But
On Tue, 30 May 2017, Stephane Eranian wrote:
> On Tue, May 30, 2017 at 2:25 AM, Peter Zijlstra wrote:
> > On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
> >> Ultimately, I would like to see the watchdog move out of the PMU. That
> >> is the only sensible solution.
> >> You
On Tue, May 30, 2017 at 9:28 AM, Peter Zijlstra wrote:
> On Tue, May 30, 2017 at 06:51:28AM -0700, Andi Kleen wrote:
>> On Tue, May 30, 2017 at 11:25:23AM +0200, Peter Zijlstra wrote:
>> > On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
>> > > Ultimately, I would like to see the
On Tue, May 30, 2017 at 2:25 AM, Peter Zijlstra wrote:
> On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
>> Ultimately, I would like to see the watchdog move out of the PMU. That
>> is the only sensible solution.
>> You just need a resource able to interrupt on NMI or you handle
On Tue, May 30, 2017 at 06:51:28AM -0700, Andi Kleen wrote:
> On Tue, May 30, 2017 at 11:25:23AM +0200, Peter Zijlstra wrote:
> > On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
> > > Ultimately, I would like to see the watchdog move out of the PMU. That
> > > is the only
On Tue, May 30, 2017 at 11:25:23AM +0200, Peter Zijlstra wrote:
> On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
> > Ultimately, I would like to see the watchdog move out of the PMU. That
> > is the only sensible solution.
> > You just need a resource able to interrupt on NMI or
On Sun, May 28, 2017 at 01:31:09PM -0700, Stephane Eranian wrote:
> Ultimately, I would like to see the watchdog move out of the PMU. That
> is the only sensible solution.
> You just need a resource able to interrupt on NMI or you handle
> interrupt masking in software as has
> been proposed on
On Wed, May 24, 2017 at 9:01 AM, Vince Weaver wrote:
>
> On Wed, 24 May 2017, Andi Kleen wrote:
>
> > > Right, I did not even consider the rdpmc, but yeah, you will get a count
> > > that
> > > is not relevant to the user visible event. Unless you fake it using the
> > > time
> > > scaling
Hi Kan,
[auto build test ERROR on linus/master]
[also build test ERROR on v4.12-rc2 next-20170526]
[cannot apply to tip/x86/core]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
> The whole point of the rdpmc() implementation is to be low overhead.
> If you have to parse 10 different mmap() fields it starts to defeat the
> purpose.
You would only use it with ref-cycles of course. So for the normal
case there is no overhead.
> If the watchdog counter is constantly
On Wed, 24 May 2017, Andi Kleen wrote:
> > Right, I did not even consider the rdpmc, but yeah, you will get a count
> > that
> > is not relevant to the user visible event. Unless you fake it using the time
> > scaling fields there but that's ugly.
>
> Could add another scaling field to the mmap
> Right, I did not even consider the rdpmc, but yeah, you will get a count that
> is not relevant to the user visible event. Unless you fake it using the time
> scaling fields there but that's ugly.
Could add another scaling field to the mmap page for this.
-Andi
On Mon, May 22, 2017 at 11:39 PM, Peter Zijlstra wrote:
> On Mon, May 22, 2017 at 12:28:26PM -0700, Stephane Eranian wrote:
>> On Mon, May 22, 2017 at 12:23 PM, Peter Zijlstra
>> wrote:
>> > On Mon, May 22, 2017 at 04:55:47PM +, Liang, Kan wrote:
>> >>
>> >>
>> >> > On Fri, May 19, 2017 at
On Mon, May 22, 2017 at 12:28:26PM -0700, Stephane Eranian wrote:
> On Mon, May 22, 2017 at 12:23 PM, Peter Zijlstra wrote:
> > On Mon, May 22, 2017 at 04:55:47PM +, Liang, Kan wrote:
> >>
> >>
> >> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> >> > > diff --git
>
> On Mon, May 22, 2017 at 12:23 PM, Peter Zijlstra
> wrote:
> > On Mon, May 22, 2017 at 04:55:47PM +, Liang, Kan wrote:
> >>
> >>
> >> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> >> > > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> >> > > index
On Mon, May 22, 2017 at 12:23 PM, Peter Zijlstra wrote:
> On Mon, May 22, 2017 at 04:55:47PM +, Liang, Kan wrote:
>>
>>
>> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
>> > > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index
>> > > 580b60f..e8b2326
On Mon, May 22, 2017 at 04:55:47PM +, Liang, Kan wrote:
>
>
> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> > > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index
> > > 580b60f..e8b2326 100644
> > > --- a/arch/x86/events/core.c
> > > +++
Hi,
On Mon, May 22, 2017 at 1:30 AM, Peter Zijlstra wrote:
> On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
>> From: Kan Liang
>>
>> The CPU ref_cycles can only be used by one user at the same time,
>> otherwise a "not counted" error will be displayed.
>> [kan]$ sudo
> On Mon, May 22, 2017 at 11:19:16AM +0200, Peter Zijlstra wrote:
> > On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> > > @@ -934,6 +938,21 @@ int x86_schedule_events(struct cpu_hw_events
> > > *cpuc, int n, int *assign)
>
> > > for (i = 0; i < n; i++) {
> > >
> On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> > diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c index
> > 580b60f..e8b2326 100644
> > --- a/arch/x86/events/core.c
> > +++ b/arch/x86/events/core.c
> > @@ -101,6 +101,10 @@ u64 x86_perf_event_update(struct
On Mon, May 22, 2017 at 11:19:16AM +0200, Peter Zijlstra wrote:
> On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> > @@ -934,6 +938,21 @@ int x86_schedule_events(struct cpu_hw_events *cpuc,
> > int n, int *assign)
> > for (i = 0; i < n; i++) {
> >
On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 580b60f..e8b2326 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -101,6 +101,10 @@ u64 x86_perf_event_update(struct perf_event
On Fri, May 19, 2017 at 10:06:21AM -0700, kan.li...@intel.com wrote:
> From: Kan Liang
>
> The CPU ref_cycles can only be used by one user at the same time,
> otherwise a "not counted" error will be displayed.
> [kan]$ sudo perf stat -x, -e ref-cycles,ref-cycles -- sleep 1
>
From: Kan Liang
The CPU ref_cycles can only be used by one user at the same time,
otherwise a "not counted" error will be displayed.
[kan]$ sudo perf stat -x, -e ref-cycles,ref-cycles -- sleep 1
1203264,,ref-cycles,513112,100.00
,,ref-cycles,0,0.00
CPU ref_cycles can only be