>
> Hello.
>
> On 09/12/2016 05:55 PM, kan.li...@intel.com wrote:
>
> > From: Kan Liang
> >
> > Net policy needs to know device information. Currently, it's enough to
> > only get irq information of rx and tx queues.
> >
> > This patch introduces ndo ops to do so, not ethtool ops.
> > Because t
> -Original Message-
> From: Tom Herbert [mailto:t...@herbertland.com]
> Sent: Monday, September 12, 2016 4:23 PM
> To: Liang, Kan
> Cc: David S. Miller ; LKML ker...@vger.kernel.org>; Linux Kernel Network Developers
> ; Kirsher, Jeffrey T ;
> Ingo Molnar ; pet.
> On Tue, Sep 13, 2016 at 5:23 AM, Liang, Kan wrote:
> >>
> >> Hello.
> >>
> >> On 09/12/2016 05:55 PM, kan.li...@intel.com wrote:
> >>
> >> > From: Kan Liang
> >> >
> >> > Net policy needs to know device
>
> > 5. Why disable IRQ balance?
> > A: Disabling IRQ balance is a common way (the recommended way for some
> > devices) to tune network performance.
>
> I appreciate that network tuning is hard, most people get it wrong, and
> nobody agrees on the right answer.
>
> So rather than fix
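With irqbalance disabled, queue IRQs are typically pinned by hand via `/proc/irq/<N>/smp_affinity`, which takes a hex CPU bitmask. As a small illustration of how those masks are formed (a sketch; `cpus_to_mask` is a hypothetical helper name, not from the patch set):

```python
def cpus_to_mask(cpus):
    """Build the hex bitmask that /proc/irq/<N>/smp_affinity expects,
    with one bit set per CPU in `cpus` (bit 0 = CPU 0)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Pinning each RX queue IRQ to its own CPU would then be, e.g.:
#   echo $(cpus_to_mask([2])) > /proc/irq/41/smp_affinity   (IRQ number made up)
print(cpus_to_mask([0]))      # CPU 0     -> "1"
print(cpus_to_mask([0, 2]))   # CPUs 0,2  -> "5"
print(cpus_to_mask([8]))      # CPU 8     -> "100"
```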
>
> On Thu, Aug 4, 2016 at 12:36 PM, wrote:
> > From: Kan Liang
> >
> > To achieve better network performance, the key step is to distribute
> > the packets to dedicated queues according to policy and system run
> > time status.
> >
> > This patch provides an interface which can return the pr
> On Wed, Aug 9, 2017 at 1:42 PM, wrote:
> > From: Kan Liang
> >
> > For understanding how the workload maps to memory channels and
> hardware
> > behavior, it's very important to collect address maps with physical
> > addresses. For example, 3D XPoint access can only be found by filtering
> >
> On Tue, 15 Aug 2017, Liang, Kan wrote:
> > This patch, which speeds up the hrtimer
> > (https://lkml.org/lkml/2017/6/26/685),
> > is a decent fix for the spurious hard lockups.
> > Tested-by: Kan Liang
> >
> > Please consider merging it into both mainline and st
> On Mon, Aug 14, 2017 at 5:52 PM, Tim Chen
> wrote:
> > We encountered workloads that have very long wake up lists on large
> > systems. A waker takes a long time to traverse the entire wake list
> > and execute all the wake functions.
> >
> > We saw page wait list that are up to 3700+ entries lo
> > Here is the wake_up_page_bit call stack when the workaround is running,
> which
> > is collected by perf record -g -a -e probe:wake_up_page_bit -- sleep 10
>
> It's actually not really wake_up_page_bit() that is all that
> interesting, it would be more interesting to see which path it is tha
Hi Arnaldo and Jirka,
Ping.
Any comments for the patch?
Thanks,
Kan
> Subject: RE: [PATCH V2 0/2] measure SMI cost (user)
>
> Hi Jirka,
>
> Have you got a chance to try the code?
> Are you OK with the patch?
>
> Thanks,
> Kan
>
> >
> > Em Fri, Ju
> >
> > The right fix for mainline can be found here.
> > perf/x86/intel: enable CPU ref_cycles for GP counter perf/x86/intel,
> > watchdog: Switch NMI watchdog to ref cycles on x86
> > https://patchwork.kernel.org/patch/9779087/
> > https://patchwork.kernel.org/patch/9779089/
>
> Presumably the
>
> On Tue, Jun 20, 2017 at 02:33:09PM -0700, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > Some users reported spurious NMI watchdog timeouts.
> >
> > We now have more and more systems where the Turbo range is wide
> enough
> > that the NMI watchdog expires faster than the soft watchdog
> On Wed, Jun 21, 2017 at 12:40:28PM +0000, Liang, Kan wrote:
> >
> > > >
> > > > The right fix for mainline can be found here.
> > > > perf/x86/intel: enable CPU ref_cycles for GP counter
> > > > perf/x86/intel,
> > >
> On Wed, 21 Jun 2017, kan.li...@intel.com wrote:
> >
> > #ifdef CONFIG_HARDLOCKUP_DETECTOR
> > +/*
> > + * The NMI watchdog relies on PERF_COUNT_HW_CPU_CYCLES event,
> which
> > + * can tick faster than the measured CPU Frequency due to Turbo mode.
> > + * That can lead to spurious timeouts.
>
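A back-of-the-envelope illustration of the comment above, with made-up numbers (not from any real part): if the watchdog period is programmed as base frequency times the watchdog threshold, a core running in Turbo reaches that cycle count well before the soft watchdog's deadline.

```python
# Toy numbers: 2.0 GHz base, 3.5 GHz turbo, 10 s watchdog threshold.
base_freq_hz = 2_000_000_000
turbo_freq_hz = 3_500_000_000
watchdog_thresh_s = 10

# Period programmed into the PERF_COUNT_HW_CPU_CYCLES event,
# computed assuming the base frequency.
period_cycles = base_freq_hz * watchdog_thresh_s

# Wall-clock time until the NMI actually fires at turbo speed.
actual_expiry_s = period_cycles / turbo_freq_hz
print(round(actual_expiry_s, 2))  # ~5.71 s, well before the 10 s soft watchdog
```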
Ping.
Any comments for this patch?
Thanks,
Kan
>
> From: Kan Liang
>
> As of Skylake Server, there are a number of free-running counters in
> each IIO Box that collect counts for per box IO clocks and per Port
> Input/Output x BW/Utilization.
>
> The event code of free running event is shared
> On Mon, Oct 16, 2017 at 3:26 PM, wrote:
> > From: Kan Liang
> >
> > There could be different types of memory in the system. E.g. normal
> > System Memory, Persistent Memory. To understand how the workload maps to
> > those memories, it's important to know the I/O statistics on different
> > t
> On Tue, Oct 17, 2017 at 12:54 PM, Liang, Kan wrote:
> >> On Mon, Oct 16, 2017 at 3:26 PM, wrote:
> >> > From: Kan Liang
> >> >
> >> > There could be different types of memory in the system. E.g. normal
> >> > System Memory, Pe
> >
> > Right, it doesn’t need load latency. 0x81d0 should be a better choice.
> > I will use 0x81d0 and 0x82d0 as default event for V2.
>
> That's model specific. You would need to check the model number if you do
> that.
>
> Also with modern perf you can use the correct event names of course.
>
> On Mon, May 29, 2017 at 02:52:39PM +0200, Peter Zijlstra wrote:
> > On Mon, May 29, 2017 at 02:46:37PM +0200, Jiri Olsa wrote:
> >
> > > for some reason I can't get single SMI count generated, is there a
> > > setup/bench that would provoke that?
> >
> > Not having SMIs is a good thing ;-)
>
> The meaning of perf record's "overwrite" option and of the many "overwrite"s
> in the source code is not clear. In perf's code, 'overwrite' has 2 meanings:
> 1. Make ringbuffer readonly (perf_evlist__mmap_ex's argument).
> 2. Set evsel's "backward" attribute (in apply_config_terms).
>
> perf record d
> On 2017/11/1 20:00, Namhyung Kim wrote:
> > On Wed, Nov 01, 2017 at 06:32:50PM +0800, Wangnan (F) wrote:
> >>
> >> On 2017/11/1 17:49, Namhyung Kim wrote:
> >>> Hi,
> >>>
> >>> On Wed, Nov 01, 2017 at 05:53:26AM +, Wang Nan wrote:
> perf record backward recording doesn't work as we expec
> On 2017/11/1 21:26, Liang, Kan wrote:
> >> The meaning of perf record's "overwrite" option and many "overwrite"
> >> in source code are not clear. In perf's code, the 'overwrite' has 2
> >> meanings:
> >> 1. Ma
> On 2017/11/1 22:22, Liang, Kan wrote:
> >> On 2017/11/1 21:26, Liang, Kan wrote:
> >>>> The meaning of perf record's "overwrite" option and many "overwrite"
> >>>> in source code are not clear. In perf's code,
> On 2017/11/1 23:04, Liang, Kan wrote:
> >> On 2017/11/1 22:22, Liang, Kan wrote:
> >>>> On 2017/11/1 21:26, Liang, Kan wrote:
> >>>>>> The meaning of perf record's "overwrite" option and many
> "overwrite"
> >
> On 2017/11/1 21:57, Liang, Kan wrote:
> >> On 2017/11/1 20:00, Namhyung Kim wrote:
> >>> On Wed, Nov 01, 2017 at 06:32:50PM +0800, Wangnan (F) wrote:
> >>>> On 2017/11/1 17:49, Namhyung Kim wrote:
> >>>>> Hi,
> >>>>>
>
Hi Namhyung,
> On Wed, Nov 01, 2017 at 04:22:53PM +0000, Liang, Kan wrote:
> > > On 2017/11/1 21:57, Liang, Kan wrote:
> > > >> On 2017/11/1 20:00, Namhyung Kim wrote:
> > > >>> On Wed, Nov 01, 2017 at 06:32:50PM +0800, Wangnan (F) wrote:
> > >
> On Tue, 24 Oct 2017, kan.li...@intel.com wrote:
> > - if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > + if (event->hw.idx == UNCORE_PMC_IDX_FIXED)
> > shift = 64 - uncore_fixed_ctr_bits(box);
> > else
> > shift = 64 - uncore_perf_ctr_bits(box); diff --git
> > a/arch
> On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > On Tue, 24 Oct 2017, kan.li...@intel.com wrote:
> > > > > - if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > > > + if (event->hw.idx
> On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > > > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > > > Patch 5/5 will clean up the client IMC unc
> On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > But then you have this in uncore_perf_event_update():
> > >
> > > - if (event->hw.idx >= UNCORE_PMC_IDX_FIXED)
> > > + if (event->h
> On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > On Thu, 2 Nov 2017, Liang, Kan wrote:
> > > > > On Thu, 2 Nov 2017, Thomas Gleixner wrote:
> > > > > But then you have this in uncore_perf_event_update():
> > > > >
> > > > >
> On Tue, Oct 24, 2017 at 11:22:00AM +0200, Ingo Molnar wrote:
> >
> > * Liang, Kan wrote:
> >
> > > For 'all', do you mean the whole process?
> >
> > Yeah.
> >
> > > I think that's the ultimate goal. Eventually there
> Based on previous discussion, perf needs to support only two types
> of ringbuffer: read-write + forward, readonly + backward. This patchset
> completely removes the concept of 'overwrite' at the code level, and controls
> mapping permission using write_backward instead.
I think I suggested to remove t
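A toy model of the two supported modes (plain Python, not perf's actual ring buffer code) shows why the backward buffer is read-only from user space and keeps only the newest records:

```python
def write_forward(buf, records):
    """Writer side: head advances monotonically around a circular buffer.
    In overwrite mode the writer keeps going and old records are lost."""
    head = 0
    for rec in records:
        buf[head % len(buf)] = rec
        head += 1
    return head

def read_backward(buf, head):
    """Backward reading: start at head and walk back at most one
    buffer's worth, newest record first."""
    n = len(buf)
    start = max(0, head - n)
    return [buf[i % n] for i in range(head - 1, start - 1, -1)]

buf = [None] * 4
head = write_forward(buf, list(range(10)))   # records 0..9 into a buffer of 4
print(read_backward(buf, head))              # only the newest survive: [9, 8, 7, 6]
```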
> Since all 'overwrite' usage is cleaned up and no one really uses a readonly
> main ringbuffer, remove 'overwrite' from function arguments and evlist. The
> concept of 'overwrite' and 'write_backward' is cleaner than before:
>
> 1. In code level, there's no 'overwrite' concept. Each evlist has
Hi Stephane,
Any comments for the script?
Thanks,
Kan
>
> From: Kan Liang
>
> There could be different types of memory in the system. E.g. normal
> System Memory, Persistent Memory. To understand how the workload maps
> to those memories, it's important to know their I/O statistics.
> P
> On Wed, Oct 18, 2017 at 07:29:32AM -0700, kan.li...@intel.com wrote:
>
> SNIP
>
> > + rec->synthesized_file = calloc(nr_thread, sizeof(struct
> perf_data_file));
> > + if (rec->synthesized_file == NULL) {
> > + pr_debug("Could not do multithread synthesize."
> > +
> On Thu, Jul 02, 2015 at 03:08:43AM -0400, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > print_aggr fails to print per-core/per-socket statistics after commit
> > 582ec0829b3d ("perf stat: Fix per-socket output bug for uncore
> > events") if events have different cpus. Because in print_
>
> Em Fri, Aug 28, 2015 at 01:33:22PM +, Liang, Kan escreveu:
> > > On Thu, Jul 02, 2015 at 03:08:43AM -0400, kan.li...@intel.com wrote:
> > > > From: Kan Liang print_aggr fails to print
> > > > per-core/per-socket statistics after commit 582ec0829
> -Original Message-
> From: Arnaldo Carvalho de Melo [mailto:a...@kernel.org]
> Sent: Friday, August 28, 2015 10:47 AM
> To: Liang, Kan
> Cc: Jiri Olsa; jo...@kernel.org; a...@linux.intel.com; namhy...@kernel.org;
> eran...@google.com; Hunter, Adrian; dsah...@gmai
> On Tue, Aug 25, 2015 at 1:15 PM, Liang, Kan wrote:
> >
> >> >> >
> >> >> I understand that these metrics are useful and needed however if I
> >> >> look at the broader picture I see many PMUs doing similar things
> >> >
> On Fri, Aug 28, 2015 at 8:00 AM, Liang, Kan wrote:
> >
> >
> >> On Tue, Aug 25, 2015 at 1:15 PM, Liang, Kan
> wrote:
> >> >
> >> >> >> >
> >> >> >> I understand that these metrics are useful and need
> On Thu, Aug 27, 2015 at 07:25:35AM -0400, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > This patch parses cpu_topology from perf.data, and stores the cpu's socket
> > id and core id information in perf_session_env.
> >
> > Signed-off-by: Kan Liang
> > ---
>
> hum, I've made simple test to t
> On Fri, Aug 28, 2015 at 09:43:38AM -0400, Kan Liang wrote:
> > From: Kan Liang
> >
> > This patch stores cpu socket_id and core_id in perf.data, and reads
> > them into perf_env in the header process.
> >
> > Signed-off-by: Kan Liang
> > ---
> >
> > Changes since V1:
> > - Store core_id and socket_
> On Fri, Aug 28, 2015 at 05:48:07AM -0400, Kan Liang wrote:
> > From: Kan Liang
> >
> > The group read results from cycles/ref-cycles/TSC/ASTATE/MSTATE
> event
> > can be used to calculate the frequency, CPU Utilization and percent
> > performance during each sampling period.
> > This patch sho
> Em Fri, Aug 21, 2015 at 03:54:36PM +0200, Jiri Olsa escreveu:
> > On Fri, Aug 21, 2015 at 02:23:14AM -0400, kan.li...@intel.com wrote:
> > > From: Kan Liang
> > >
> > > evsel may have different cpus and threads from evlist's.
> > > Use its own cpus and threads when opening evsel in perf record.
>
>
> Em Mon, Aug 31, 2015 at 09:06:29PM +, Liang, Kan escreveu:
> >
> >
> > > Em Fri, Aug 21, 2015 at 03:54:36PM +0200, Jiri Olsa escreveu:
> > > > On Fri, Aug 21, 2015 at 02:23:14AM -0400, kan.li...@intel.com wrote:
> > > > > From: K
>
> Em Tue, Sep 01, 2015 at 09:58:13AM -0400, Kan Liang escreveu:
> > From: Jiri Olsa
> >
> > This patch tests cpu core_id and socket_id which are stored in perf_env.
> >
> > Signed-off-by: Jiri Olsa
> > Signed-off-by: Kan Liang
> > ---
> >
> > Changes since jirka's original version
> > - Use
>
> On 2015/9/8 15:37, Jiri Olsa wrote:
> > On Mon, Sep 07, 2015 at 09:27:26PM +0800, Wangnan (F) wrote:
> >
> > SNIP
> >
> >> I found the problem.
> >>
> >> perf relies on build_cpu_topology() to fetch CPU_TOPOLOGY from sysfs.
> >> It depends on the existence of
> >>
> >> /sys/devices/system/cpu
>
> On Thu, Sep 03, 2015 at 08:30:59AM -0400, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > To generate the frequency and performance output, perf must sample
> > read special events like cycles, ref-cycles, msr/tsc/, msr/aperf/ or
> > msr/mperf/.
> > With the --freq-perf option, perf
> diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c index
> 151b8310ac70..d4c8aa2f4db7 100644
> --- a/tools/perf/util/header.c
> +++ b/tools/perf/util/header.c
> @@ -415,8 +415,6 @@ struct cpu_topo {
> u32 thread_sib;
> char **core_siblings;
> char **thread_sibling
>
> From: Arnaldo Carvalho de Melo
>
> This reverts commit d49e4695077278ee3016cd242967de23072ec331.
>
> We don't need it, using machine->env seems to be enough.
The patchset to dump freq per sample needs commit d49e469507.
It also needs commit 2c07144dfc, which is reverted by PATCH 13.
https://
>
> On Fri, Oct 09, 2015 at 06:31:23PM +, Liang, Kan wrote:
>
> SNIP
>
> > > could not reproduce this one.. any chance you could compile with
> > > DEBUG=1 and re-run in gdb for more details? like which of the frees
> > > got crazy.. ?
> >
>
> hi,
> sending another version of stat scripting.
>
> v4 changes:
> - added attr update event for event's cpumask
> - forbid aggregation on task workloads
> - some minor reorders and changelog fixes
>
> v3 changes:
> - added attr update event to handle unit,scale,name for event
>
Hi Arnaldo
Here is one more fix for perf/core need to be pulled.
Thanks,
Kan
>
> On Fri, Oct 09, 2015 at 06:59:23AM -0400, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > Perf will core dump if --per-socket/core -a are applied for perf stat.
> >
> > The root cause is that cpu_map__build
> > SNIP
> >
> > > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > > index a96fb5c..5ef88f7 100644
> > > --- a/tools/perf/builtin-stat.c
> > > +++ b/tools/perf/builtin-stat.c
> > > @@ -1179,7 +1179,7 @@ int cmd_stat(int argc, const char **argv, const
> char *prefix __maybe_
> -Original Message-
> From: Arnaldo Carvalho de Melo [mailto:a...@kernel.org]
> Sent: Friday, October 02, 2015 4:40 PM
> To: Liang, Kan
> Cc: Jiri Olsa; jo...@kernel.org; namhy...@kernel.org; a...@linux.intel.com;
> linux-kernel@vger.kernel.org; Stephane Eranian
>
On 8/6/2018 2:20 PM, Peter Zijlstra wrote:
On Mon, Aug 06, 2018 at 10:23:41AM -0700, kan.li...@linux.intel.com wrote:
+ if (++loops > 100) {
+ static bool warned;
+
+ if (!warned) {
+ WARN(1, "perfevents: irq loop stuck!\n");
+
On 8/6/2018 2:35 PM, Peter Zijlstra wrote:
On Mon, Aug 06, 2018 at 10:23:42AM -0700, kan.li...@linux.intel.com wrote:
@@ -2044,6 +2056,14 @@ static void intel_pmu_disable_event(struct perf_event
*event)
if (unlikely(event->attr.precise_ip))
intel_pmu_pebs_disable(even
On 8/6/2018 2:39 PM, Peter Zijlstra wrote:
On Mon, Aug 06, 2018 at 10:23:43AM -0700, kan.li...@linux.intel.com wrote:
+static bool intel_glk_counter_freezing_broken(int cpu)
case INTEL_FAM6_ATOM_GEMINI_LAKE:
+ x86_add_quirk(intel_counter_freezing_quirk);
We really
On 7/30/2018 6:06 AM, Ingo Molnar wrote:
* Masayoshi Mizuma wrote:
Hi Ingo,
Is the following Kan's patch ready to merge...?
Looks good at first sight - but it was whitespace damaged here so I couldn't
apply it.
Kan, mind re-sending it properly as a standalone patch, with Masayoshi's
On 7/23/2018 11:16 AM, Peter Zijlstra wrote:
On Thu, Mar 08, 2018 at 06:15:39PM -0800, kan.li...@linux.intel.com wrote:
From: Kan Liang
The Extended PEBS feature, introduced in Goldmont Plus
microarchitecture, supports all events as "Extended PEBS".
Introduce flag PMU_FL_PEBS_ALL to indica
On 7/23/2018 12:21 PM, Peter Zijlstra wrote:
On Mon, Jul 23, 2018 at 04:59:44PM +0200, Peter Zijlstra wrote:
On Thu, Mar 08, 2018 at 06:15:41PM -0800, kan.li...@linux.intel.com wrote:
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index ef47a418d819..86149b87cce8 10
On 7/23/2018 12:56 PM, Liang, Kan wrote:
On 7/23/2018 12:21 PM, Peter Zijlstra wrote:
On Mon, Jul 23, 2018 at 04:59:44PM +0200, Peter Zijlstra wrote:
On Thu, Mar 08, 2018 at 06:15:41PM -0800, kan.li...@linux.intel.com
wrote:
diff --git a/arch/x86/events/intel/core.c
b/arch/x86/events
> On Fri, Jan 19, 2018 at 12:24:17PM -0800, Andi Kleen wrote:
> > > Oh, think a bit more.
> > > I think we cannot do the same thing as we did for CPU PMU's fixed
> counters.
> > >
> > > The counters here are free running counters. They cannot be started/stopped.
> >
> > Yes free running counter have com
On 1/24/2018 7:26 AM, Peter Zijlstra wrote:
On Mon, Jan 08, 2018 at 07:15:13AM -0800, kan.li...@intel.com wrote:
The formula to calculate the event->count is as below:
event->count = period left from last time +
               (reload_times - 1) * reload_val +
               latency
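Plugging made-up numbers into that formula (a sketch of the arithmetic only, not the kernel's implementation):

```python
def autoreload_count(period_left, reload_times, reload_val, latency_cycles):
    """Event count accumulated since the last update, when the counter
    auto-reloads after each PEBS record: the leftover period from last
    time, plus (reload_times - 1) full reload periods, plus the cycles
    attributable to handler latency."""
    return period_left + (reload_times - 1) * reload_val + latency_cycles

# e.g. 3 PEBS records since the last update, reload value 100000,
# 2500 cycles left over from last time, 400 cycles of latency:
print(autoreload_count(2500, 3, 100000, 400))  # 202900
```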
> On Tue, Jan 23, 2018 at 10:00:58PM +0000, Liang, Kan wrote:
> > > On Fri, Jan 19, 2018 at 12:24:17PM -0800, Andi Kleen wrote:
> > > > > Oh, think a bit more.
> > > > > I think we cannot do the same thing as we did for CPU PMU's fixed
> > >
> On Thu, Dec 21, 2017 at 10:08:44AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > perf record has specific code to calculate the ringbuffer
> > position for both overwrite and non-overwrite modes.
> > The perf top will support both modes later.
> > It is useful to make the sp
> Hi,
>
> On Tue, Jan 09, 2018 at 03:12:28PM +0000, Liang, Kan wrote:
> > > > > > >
> > > > > > > Also I guess the current code might miss some events since the
> head
> > > can
> > >
On 1/11/2018 6:10 AM, Jiri Olsa wrote:
On Wed, Jan 10, 2018 at 09:31:56AM -0500, Liang, Kan wrote:
On 1/10/2018 5:39 AM, Jiri Olsa wrote:
On Mon, Jan 08, 2018 at 07:15:15AM -0800, kan.li...@intel.com wrote:
From: Kan Liang
When the PEBS interrupt threshold is larger than one, there is
> On Thu, Dec 21, 2017 at 10:08:44AM -0800, kan.li...@intel.com wrote:
>
> SNIP
>
> > +/*
> > + * Report the start and end of the available data in ringbuffer
> > + */
> > +int perf_mmap__read_init(struct perf_mmap *map, bool overwrite,
> > +u64 *start, u64 *end)
> > {
> >
> On Thu, Dec 21, 2017 at 10:08:45AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > 'start' and 'prev' are duplicate in perf_mmap__read()
> >
> > Use 'map->prev' to replace 'start' in perf_mmap__read_*().
> >
> > Suggested-by: Wang Nan
> > Signed-off-by: Kan Liang
> > ---
> > too
> On Thu, Dec 21, 2017 at 10:08:46AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > The direction of overwrite mode is backward. The last mmap__read_event
> > will set tail to map->prev. Need to correct the map->prev to head which
> > is the end of next read.
> >
> > It will be used
> On Thu, Dec 21, 2017 at 10:08:49AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > Discards perf_mmap__read_backward and perf_mmap__read_catchup.
> No tools
> > use them.
> >
> > There are tools that still use perf_mmap__read_forward. Keep it, but add
> > comments to point to the new in
> On Thu, Dec 21, 2017 at 10:08:50AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > Per event overwrite term is not forbidden in perf top, which can bring
> > problems, because perf top only supports non-overwrite mode.
> >
> > Check and forbid inconsistent per event overwrite term i
> On Thu, Dec 21, 2017 at 10:08:52AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > Switch to non-overwrite mode if the kernel does not support overwrite
> > ringbuffer.
> >
> > It only takes effect when overwrite mode is supported.
> > No change to current behavior.
> >
> > Signed-off-by: K
> SNIP
>
> > .max_stack = sysctl_perf_event_max_stack,
> > .sym_pcnt_filter = 5,
> > diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> > index 68146f4..56023e4 100644
> > --- a/tools/perf/ui/browsers/hists.c
> > +++ b/tools/perf
> On Thu, Dec 21, 2017 at 10:08:53AM -0800, kan.li...@intel.com wrote:
>
> SNIP
>
> > .max_stack = sysctl_perf_event_max_stack,
> > .sym_pcnt_filter = 5,
> > diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> > index 68146f4..560
> On Thu, Jan 11, 2018 at 09:29:21PM +0000, Liang, Kan wrote:
> > > On Thu, Dec 21, 2017 at 10:08:50AM -0800, kan.li...@intel.com wrote:
> > > > From: Kan Liang
> > > >
> > > > Per event overwrite term is not forbidden in perf top, which can
>
> > On Thu, Jan 11, 2018 at 09:29:21PM +, Liang, Kan wrote:
> > > > On Thu, Dec 21, 2017 at 10:08:50AM -0800, kan.li...@intel.com wrote:
> > > > > From: Kan Liang
> > > > >
> > > > > Per event overwrite term is not forbidden in
On 1/30/2018 4:16 AM, Stephane Eranian wrote:
Hi,
On Mon, Jan 29, 2018 at 8:29 AM, wrote:
From: Kan Liang
--
Changes since V2:
- Refined the changelog
- Introduced specific read function for large PEBS.
The previous generic PEBS read function is confusing.
Disabled PMU in
On 1/30/2018 8:39 AM, Jiri Olsa wrote:
On Tue, Jan 30, 2018 at 01:16:39AM -0800, Stephane Eranian wrote:
Hi,
On Mon, Jan 29, 2018 at 8:29 AM, wrote:
From: Kan Liang
--
Changes since V2:
- Refined the changelog
- Introduced specific read function for large PEBS.
The previous
On 1/30/2018 10:04 AM, Jiri Olsa wrote:
On Tue, Jan 30, 2018 at 09:59:15AM -0500, Liang, Kan wrote:
On 1/30/2018 8:39 AM, Jiri Olsa wrote:
On Tue, Jan 30, 2018 at 01:16:39AM -0800, Stephane Eranian wrote:
Hi,
On Mon, Jan 29, 2018 at 8:29 AM, wrote:
From: Kan Liang
--
Changes
On 1/30/2018 11:36 AM, Stephane Eranian wrote:
On Tue, Jan 30, 2018 at 7:25 AM, Liang, Kan wrote:
On 1/30/2018 10:04 AM, Jiri Olsa wrote:
On Tue, Jan 30, 2018 at 09:59:15AM -0500, Liang, Kan wrote:
On 1/30/2018 8:39 AM, Jiri Olsa wrote:
On Tue, Jan 30, 2018 at 01:16:39AM -0800
On 1/31/2018 8:15 AM, Jiri Olsa wrote:
On Wed, Jan 31, 2018 at 10:15:39AM +0100, Jiri Olsa wrote:
On Tue, Jan 30, 2018 at 07:59:41PM -0800, Andi Kleen wrote:
Still, the part I am missing here, is why asking for
PERF_SAMPLE_PERIOD voids large PEBS.
I think it was disabled together with frequ
> Em Thu, Jan 18, 2018 at 01:26:17PM -0800, kan.li...@intel.com escreveu:
> > From: Kan Liang
> >
> > In perf_mmap__push(), the 'size' needs to be recalculated; otherwise
> > the invalid data might be pushed to the record in overwrite mode.
> >
> > The issue is introduced by commit 7fb4b407a124 ("p
> On Mon, Jan 15, 2018 at 10:57:05AM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > There are a number of free running counters introduced for uncore, which
> > provide highly valuable information to a wide array of customers.
> > For example, Skylake Server has IIO free running cou
>
> On Thu, Jan 18, 2018 at 05:43:10PM +, Liang, Kan wrote:
> > In the uncore document, there is no event-code assigned to free running
> counters.
> > Some events need to be defined to indicate the free running counters.
> > The events are encoded as event-cod
> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> > >
> > > On Thu, Jan 18, 2018 at 05:43:10PM +, Liang, Kan wrote:
> > > > In the uncore document, there is no event-code assigned to free
> > > > running
> > > counters.
>
> On Fri, Jan 19, 2018 at 9:53 AM, Liang, Kan wrote:
> >> On Fri, Jan 19, 2018 at 03:15:00PM +0000, Liang, Kan wrote:
> >> > >
> >> > > On Thu, Jan 18, 2018 at 05:43:10PM +, Liang, Kan wrote:
> >> > > > In the uncore document
> On Mon, Jan 15, 2018 at 12:20:38PM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > perf record has specific code to calculate the ringbuffer position
> > for both overwrite and non-overwrite modes. Now, only perf record
> > supports both modes. The perf top will support both mo
On 1/11/2018 10:45 AM, Jiri Olsa wrote:
On Thu, Jan 11, 2018 at 10:21:25AM -0500, Liang, Kan wrote:
SNIP
hum, but the PEBS drain is specific just for
PERF_X86_EVENT_AUTO_RELOAD events, right?
More precisely, PEBS drain is specific to PERF_X86_EVENT_FREERUNNING here
> Hi,
>
> On Mon, Jan 15, 2018 at 12:20:48PM -0800, kan.li...@intel.com wrote:
> > From: Kan Liang
> >
> > For overwrite mode, the ringbuffer will be paused. Event loss is
> > expected. It needs a way to notify the browser not to print the warning.
> >
> > It will be used later for perf top to d
On 1/18/2018 4:49 AM, Jiri Olsa wrote:
On Tue, Jan 16, 2018 at 01:49:13PM -0500, Liang, Kan wrote:
On 1/11/2018 10:45 AM, Jiri Olsa wrote:
On Thu, Jan 11, 2018 at 10:21:25AM -0500, Liang, Kan wrote:
SNIP
hum, but the PEBS drain is specific just for
PERF_X86_EVENT_AUTO_RELOAD events
> > > > >
> > > > > Also I guess the current code might miss some events since the head
> can
> > > be
> > > > > different between _read_init() and _read_done(), no?
> > > > >
> > > >
> > > > The overwrite mode requires the ring buffer to be paused during
> > > processing.
> > > > The head is uncha
On 1/10/2018 5:39 AM, Jiri Olsa wrote:
On Mon, Jan 08, 2018 at 07:15:15AM -0800, kan.li...@intel.com wrote:
From: Kan Liang
When the PEBS interrupt threshold is larger than one, there is no way to
get the exact auto-reload times and value needed for the event update
unless the PEBS buffer is flushed.
Dr
On 1/10/2018 5:22 AM, Jiri Olsa wrote:
On Mon, Jan 08, 2018 at 07:15:13AM -0800, kan.li...@intel.com wrote:
SNIP
There is nothing to do in x86_perf_event_set_period(), because it
is a fixed period. The period_left is already adjusted.
Signed-off-by: Kan Liang
---
arch/x86/events/intel
> On Thu, 2 Nov 2017, kan.li...@intel.com wrote:
>
> > From: Kan Liang
> >
> > The free running counter is read-only and always active. Current generic
> > uncore code does not support this kind of counter.
> >
> > The free running counter is read-only. It cannot be enabled/disabled in
> > event_s
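Since such a counter cannot be started or stopped, the usual technique (sketched here as a Python toy model rather than the kernel's C) is to snapshot the hardware value when the event is added and report deltas on each read:

```python
class FreeRunningCounter:
    """Toy model: the hardware value ticks on its own; software can
    only read it, so event counts are deltas between snapshots."""
    def __init__(self):
        self.hw_value = 0      # stands in for the read-only counter register
        self.prev = 0          # snapshot taken when the event was "enabled"

    def tick(self, n):
        self.hw_value += n     # hardware advances regardless of software

    def start(self):
        self.prev = self.hw_value   # "enable" == remember the current value

    def read(self):
        delta = self.hw_value - self.prev
        self.prev = self.hw_value
        return delta

c = FreeRunningCounter()
c.tick(500)        # counter was already running before the event existed
c.start()
c.tick(1234)
print(c.read())    # 1234: only ticks since start() are counted
c.tick(10)
print(c.read())    # 10
```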
On 3/9/2018 12:42 PM, Peter Zijlstra wrote:
On Fri, Mar 09, 2018 at 09:31:11AM -0500, Vince Weaver wrote:
On Fri, 9 Mar 2018, tip-bot for Kan Liang wrote:
Commit-ID: 1af22eba248efe2de25658041a80a3d40fb3e92e
Gitweb: https://git.kernel.org/tip/1af22eba248efe2de25658041a80a3d40fb3e92e
Auth
On 3/9/2018 2:10 PM, Vince Weaver wrote:
On Fri, 9 Mar 2018, Peter Zijlstra wrote:
On Fri, Mar 09, 2018 at 09:31:11AM -0500, Vince Weaver wrote:
On Fri, 9 Mar 2018, tip-bot for Kan Liang wrote:
Commit-ID: 1af22eba248efe2de25658041a80a3d40fb3e92e
Gitweb: https://git.kernel.org/tip/1af2
On 3/7/2018 3:33 PM, Kroening, Gary wrote:
For systems with a single PCI segment, it is sufficient to look for the
bus number to change in order to determine that all of the CHa's have
been counted for a single socket.
However, for multi PCI segment systems, each socket is given a new
segment