Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-04-10 Thread Alex Solomatnikov
Alan,

I merged SBI emulation for perf counters and config:
https://github.com/riscv/riscv-pk/pull/98

You should be able to write these CSRs.

Thanks,
Alex

On Mon, Apr 9, 2018 at 12:07 AM, Alan Kao  wrote:
> On Thu, Apr 05, 2018 at 09:47:50AM -0700, Palmer Dabbelt wrote:
>> On Mon, 26 Mar 2018 00:57:54 PDT (-0700), alan...@andestech.com wrote:
>> >This patch provides a basic PMU, riscv_base_pmu, which supports two
>> >general hardware events, instructions and cycles.  Furthermore, this
>> >PMU serves as a reference implementation to ease future porting.
>> >
>> >riscv_base_pmu should be able to run on any RISC-V machine that
>> >conforms to the Priv-Spec.  Note that the latest qemu model doesn't
>> >fully support the proper Priv-Spec 1.10 behavior yet, but a workaround
>> >should be easy with very small fixes.  Please check
>> >https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>> >
>> >Cc: Nick Hu 
>> >Cc: Greentime Hu 
>> >Signed-off-by: Alan Kao 
>>
>> We should really be able to detect PMU types at runtime (via a device tree
>> entry) rather than requiring that a single PMU is built in to the kernel.
>> This will require a handful of modifications to how this patch works, which
>> I'll try to list below.
>
>> >+menu "PMU type"
>> >+depends on PERF_EVENTS
>> >+
>> >+config RISCV_BASE_PMU
>> >+bool "Base Performance Monitoring Unit"
>> >+def_bool y
>> >+help
>> >+  A base PMU that serves as a reference implementation and supports
>> >+  a limited set of perf features.
>> >+
>> >+endmenu
>> >+
>>
>> Rather than a menu where a single PMU can be selected, there should be
>> options to enable or disable support for each PMU type -- this is just like
>> how all our other drivers work.
>>
>
> I see.  Sure.  The descriptions and implementation will be refined in v3.
>
>> >+struct pmu * __weak __init riscv_init_platform_pmu(void)
>> >+{
>> >+riscv_pmu = &riscv_base_pmu;
>> >+return riscv_pmu->pmu;
>> >+}
>>
>> Rather than relying on a weak symbol that gets overridden by other PMU
>> types, this should look through the device tree for a compatible PMU (in the
>> case of just the base PMU it could be any RISC-V hart) and install a PMU
>> handler for it.  There'd probably be some sort of priority scheme here, like
>> there are for other driver subsystems, where we'd pick the best PMU driver
>> that's compatible with the PMUs on every hart.
>>
>> >+
>> >+int __init init_hw_perf_events(void)
>> >+{
>> >+struct pmu *pmu = riscv_init_platform_pmu();
>> >+
>> >+perf_irq = NULL;
>> >+perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
>> >+return 0;
>> >+}
>> >+arch_initcall(init_hw_perf_events);
>>
>> Since we only have a single PMU type right now this isn't critical to handle
>> right away, but we will have to refactor this before adding another PMU.
>
> I see.  My rough plan is to do the device tree parsing here, and if no specific
> PMU string is found then just register the base PMU proposed in this patch.
> How about this idea?
>
> Thanks.


Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-04-09 Thread Palmer Dabbelt

On Mon, 09 Apr 2018 00:07:11 PDT (-0700), alan...@andestech.com wrote:

On Thu, Apr 05, 2018 at 09:47:50AM -0700, Palmer Dabbelt wrote:

On Mon, 26 Mar 2018 00:57:54 PDT (-0700), alan...@andestech.com wrote:
>This patch provides a basic PMU, riscv_base_pmu, which supports two
>general hardware events, instructions and cycles.  Furthermore, this
>PMU serves as a reference implementation to ease future porting.
>
>riscv_base_pmu should be able to run on any RISC-V machine that
>conforms to the Priv-Spec.  Note that the latest qemu model doesn't
>fully support the proper Priv-Spec 1.10 behavior yet, but a workaround
>should be easy with very small fixes.  Please check
>https://github.com/riscv/riscv-qemu/pull/115 for future updates.
>
>Cc: Nick Hu 
>Cc: Greentime Hu 
>Signed-off-by: Alan Kao 

We should really be able to detect PMU types at runtime (via a device tree
entry) rather than requiring that a single PMU is built in to the kernel.
This will require a handful of modifications to how this patch works, which
I'll try to list below.



>+menu "PMU type"
>+   depends on PERF_EVENTS
>+
>+config RISCV_BASE_PMU
>+   bool "Base Performance Monitoring Unit"
>+   def_bool y
>+   help
>+ A base PMU that serves as a reference implementation and supports
>+ a limited set of perf features.
>+
>+endmenu
>+

Rather than a menu where a single PMU can be selected, there should be
options to enable or disable support for each PMU type -- this is just like
how all our other drivers work.



I see.  Sure.  The descriptions and implementation will be refined in v3.


>+struct pmu * __weak __init riscv_init_platform_pmu(void)
>+{
>+   riscv_pmu = &riscv_base_pmu;
>+   return riscv_pmu->pmu;
>+}

Rather than relying on a weak symbol that gets overridden by other PMU
types, this should look through the device tree for a compatible PMU (in the
case of just the base PMU it could be any RISC-V hart) and install a PMU
handler for it.  There'd probably be some sort of priority scheme here, like
there are for other driver subsystems, where we'd pick the best PMU driver
that's compatible with the PMUs on every hart.

>+
>+int __init init_hw_perf_events(void)
>+{
>+   struct pmu *pmu = riscv_init_platform_pmu();
>+
>+   perf_irq = NULL;
>+   perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
>+   return 0;
>+}
>+arch_initcall(init_hw_perf_events);

Since we only have a single PMU type right now this isn't critical to handle
right away, but we will have to refactor this before adding another PMU.


I see.  My rough plan is to do the device tree parsing here, and if no specific
PMU string is found then just register the base PMU proposed in this patch.
How about this idea?


Sounds good.  We know the generic PMU will work on all RISC-V harts, so there's 
no need to add an explicit device tree entry for it.  Then we can figure out 
how to add device tree entries for custom performance monitors later :)


Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-04-09 Thread Alan Kao
On Thu, Apr 05, 2018 at 09:47:50AM -0700, Palmer Dabbelt wrote:
> On Mon, 26 Mar 2018 00:57:54 PDT (-0700), alan...@andestech.com wrote:
> >This patch provides a basic PMU, riscv_base_pmu, which supports two
> >general hardware events, instructions and cycles.  Furthermore, this
> >PMU serves as a reference implementation to ease future porting.
> >
> >riscv_base_pmu should be able to run on any RISC-V machine that
> >conforms to the Priv-Spec.  Note that the latest qemu model doesn't
> >fully support the proper Priv-Spec 1.10 behavior yet, but a workaround
> >should be easy with very small fixes.  Please check
> >https://github.com/riscv/riscv-qemu/pull/115 for future updates.
> >
> >Cc: Nick Hu 
> >Cc: Greentime Hu 
> >Signed-off-by: Alan Kao 
> 
> We should really be able to detect PMU types at runtime (via a device tree
> entry) rather than requiring that a single PMU is built in to the kernel.
> This will require a handful of modifications to how this patch works, which
> I'll try to list below.

> >+menu "PMU type"
> >+depends on PERF_EVENTS
> >+
> >+config RISCV_BASE_PMU
> >+bool "Base Performance Monitoring Unit"
> >+def_bool y
> >+help
> >+  A base PMU that serves as a reference implementation and supports
> >+  a limited set of perf features.
> >+
> >+endmenu
> >+
> 
> Rather than a menu where a single PMU can be selected, there should be
> options to enable or disable support for each PMU type -- this is just like
> how all our other drivers work.
> 

I see.  Sure.  The descriptions and implementation will be refined in v3.

> >+struct pmu * __weak __init riscv_init_platform_pmu(void)
> >+{
> >+riscv_pmu = &riscv_base_pmu;
> >+return riscv_pmu->pmu;
> >+}
> 
> Rather than relying on a weak symbol that gets overridden by other PMU
> types, this should look through the device tree for a compatible PMU (in the
> case of just the base PMU it could be any RISC-V hart) and install a PMU
> handler for it.  There'd probably be some sort of priority scheme here, like
> there are for other driver subsystems, where we'd pick the best PMU driver
> that's compatible with the PMUs on every hart.
> 
> >+
> >+int __init init_hw_perf_events(void)
> >+{
> >+struct pmu *pmu = riscv_init_platform_pmu();
> >+
> >+perf_irq = NULL;
> >+perf_pmu_register(pmu, "cpu", PERF_TYPE_RAW);
> >+return 0;
> >+}
> >+arch_initcall(init_hw_perf_events);
> 
> Since we only have a single PMU type right now this isn't critical to handle
> right away, but we will have to refactor this before adding another PMU.

I see.  My rough plan is to do the device tree parsing here, and if no specific
PMU string is found then just register the base PMU proposed in this patch.
How about this idea?

Thanks.


Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-04-05 Thread Palmer Dabbelt

On Mon, 26 Mar 2018 00:57:54 PDT (-0700), alan...@andestech.com wrote:

This patch provides a basic PMU, riscv_base_pmu, which supports two
general hardware events, instructions and cycles.  Furthermore, this
PMU serves as a reference implementation to ease future porting.

riscv_base_pmu should be able to run on any RISC-V machine that
conforms to the Priv-Spec.  Note that the latest qemu model doesn't
fully support the proper Priv-Spec 1.10 behavior yet, but a workaround
should be easy with very small fixes.  Please check
https://github.com/riscv/riscv-qemu/pull/115 for future updates.

Cc: Nick Hu 
Cc: Greentime Hu 
Signed-off-by: Alan Kao 


We should really be able to detect PMU types at runtime (via a device tree 
entry) rather than requiring that a single PMU is built in to the kernel.  This 
will require a handful of modifications to how this patch works, which I'll try 
to list below.



---
 arch/riscv/Kconfig  |  12 +
 arch/riscv/include/asm/perf_event.h |  76 +-
 arch/riscv/kernel/Makefile  |   1 +
 arch/riscv/kernel/perf_event.c  | 469 
 4 files changed, 554 insertions(+), 4 deletions(-)
 create mode 100644 arch/riscv/kernel/perf_event.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 310b9a5d6737..dd4aecfb5265 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -195,6 +195,18 @@ config RISCV_ISA_C
 config RISCV_ISA_A
def_bool y

+menu "PMU type"
+   depends on PERF_EVENTS
+
+config RISCV_BASE_PMU
+   bool "Base Performance Monitoring Unit"
+   def_bool y
+   help
+ A base PMU that serves as a reference implementation and supports
+ a limited set of perf features.
+
+endmenu
+
 endmenu


Rather than a menu where a single PMU can be selected, there should be options 
to enable or disable support for each PMU type -- this is just like how all our 
other drivers work.



 menu "Kernel type"
diff --git a/arch/riscv/include/asm/perf_event.h 
b/arch/riscv/include/asm/perf_event.h
index e13d2ff29e83..98e2efb02d25 100644
--- a/arch/riscv/include/asm/perf_event.h
+++ b/arch/riscv/include/asm/perf_event.h
@@ -1,13 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright (C) 2018 SiFive
+ * Copyright (C) 2018 Andes Technology Corporation
  *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public Licence
- * as published by the Free Software Foundation; either version
- * 2 of the Licence, or (at your option) any later version.
  */

 #ifndef _ASM_RISCV_PERF_EVENT_H
 #define _ASM_RISCV_PERF_EVENT_H

+#include 
+#include 
+
+#define RISCV_BASE_COUNTERS 2
+
+/*
+ * The RISCV_MAX_COUNTERS parameter should be specified.
+ */
+
+#ifdef CONFIG_RISCV_BASE_PMU
+#define RISCV_MAX_COUNTERS 2
+#endif
+
+#ifndef RISCV_MAX_COUNTERS
+#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
+#endif
+
+/*
+ * These are the indexes of bits in the counteren register *minus* 1,
+ * except for cycle.  It would be coherent if they mapped directly to
+ * the counteren bit definitions, but there is a *time* register at
+ * counteren[1].  The per-cpu structure is a scarce resource here.
+ *
+ * According to the spec, an implementation can support counters up to
+ * mhpmcounter31, but since many high-end processors have at most 6
+ * general PMCs, we stop the definitions at MHPMCOUNTER8 here.
+ */
+#define RISCV_PMU_CYCLE 0
+#define RISCV_PMU_INSTRET  1
+#define RISCV_PMU_MHPMCOUNTER3 2
+#define RISCV_PMU_MHPMCOUNTER4 3
+#define RISCV_PMU_MHPMCOUNTER5 4
+#define RISCV_PMU_MHPMCOUNTER6 5
+#define RISCV_PMU_MHPMCOUNTER7 6
+#define RISCV_PMU_MHPMCOUNTER8 7
+
+#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
+
+struct cpu_hw_events {
+   /* # of currently enabled events */
+   int n_events;
+   /* currently enabled events */
+   struct perf_event   *events[RISCV_MAX_COUNTERS];
+   /* vendor-defined PMU data */
+   void*platform;
+};
+
+struct riscv_pmu {
+   struct pmu  *pmu;
+
+   /* generic hw/cache events table */
+   const int   *hw_events;
+   const int   (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
+  [PERF_COUNT_HW_CACHE_OP_MAX]
+  [PERF_COUNT_HW_CACHE_RESULT_MAX];
+   /* method used to map hw/cache events */
+   int (*map_hw_event)(u64 config);
+   int (*map_cache_event)(u64 config);
+
+   /* max generic hw events in map */
+   int max_events;
+   /* number total counters, 2(base) + x(general) */
+   int num_counters;
+   /* the width of the counter */
+   int counter_width;
+
+   /* vendor-defined PMU features */
+   void*platform;
+};
+
#endif /* _ASM_RISCV_PERF_EVENT_H */

Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-04-02 Thread Alan Kao

Hi Alex,
 
On Mon, Apr 02, 2018 at 03:36:12PM +0800, Alan Kao wrote:
> On Sat, Mar 31, 2018 at 03:47:10PM -0700, Alex Solomatnikov wrote:
> 
> The original guess was that maybe a counter value on one hart is picked
> as the minuend, and an older counter value from another hart, numerically
> larger, is recorded as the subtrahend.  The overflow is then caused by
> that subtraction.  Please let me name this guess
> "cross-hart subtraction."
>
> > You can add a skew between cores in qemu, something like this:
> > 
> > case CSR_INSTRET:
> > return core_id() * cpu_get_host_ticks() / 10;
> > break;
> > case CSR_CYCLE:
> > return cpu_get_host_ticks();
> > break;
> > 
> 
> However, I tried similar stuff to reproduce the phenomenon, but in vain.
> It seemed that the cross-hart subtraction doesn't even happen, because
> the generic code handles it.  ...

I am sorry that this observation is wrong.  With an appropriate tweak, we
successfully reproduced the behavior and located the bug.

This will be fixed in v2.


Thanks for the help.
Alan


Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-04-02 Thread Alan Kao
On Sat, Mar 31, 2018 at 03:47:10PM -0700, Alex Solomatnikov wrote:

The original guess was that maybe a counter value on one hart is picked
as the minuend, and an older counter value from another hart, numerically
larger, is recorded as the subtrahend.  The overflow is then caused by
that subtraction.  Please let me name this guess
"cross-hart subtraction."

> You can add a skew between cores in qemu, something like this:
> 
> case CSR_INSTRET:
> return core_id() * cpu_get_host_ticks() / 10;
> break;
> case CSR_CYCLE:
> return cpu_get_host_ticks();
> break;
> 

However, I tried similar stuff to reproduce the phenomenon, but in vain.
It seems that the cross-hart subtraction doesn't even happen, because the
generic code handles it.  While I am still looking for proof of this, I
would like to have more information on the following first:

* What is the frequency of that "funny number" event?  Was that often?

* If you monitor only one hart, will the event disappear?

* What will happen if you change the counter_width to fit U54's counter width?

* Is the test program you used open-sourced?

> Alex
> 

Many thanks,
Alan

> On Wed, Mar 28, 2018 at 7:30 PM, Alan Kao  wrote:
> > Hi Alex,
> >
> > I'm appreciated for your reply and tests.
> >
> > On Wed, Mar 28, 2018 at 03:58:41PM -0700, Alex Solomatnikov wrote:
> >> Did you test this code?
> >
> > I did test this patch on QEMU's virt model with multiple harts, which is
> > the only RISC-V machine I have for now.  But as I mentioned in
> > https://github.com/riscv/riscv-qemu/pull/115 , the hardware counter support
> > in QEMU does not fully conform to the 1.10 Priv-Spec, so I had to slightly
> > tweak the code to make reading work.
> >
> > Specifically, the read to cycle and instret in QEMU looks like this:
> > ...
> > case CSR_INSTRET:
> > case CSR_CYCLE:
> > //  if (ctr_ok) {
> > return cpu_get_host_ticks();
> > //  }
> > break;
> > ...
> > and the two lines of comment were the tweak.
> >
> > In such an environment, I did not get anything unexpected.  No matter which
> > of them is requested, QEMU returns the host's tick.
> >
> >>
> >> I got funny numbers when I tried to run it on HiFive Unleashed:
> >>
> >> perf stat mem-latency
> >> ...
> >>
> >>  Performance counter stats for 'mem-latency':
> >>
> >> 157.907000  task-clock (msec) #0.940 CPUs utilized
> >>
> >>  1  context-switches  #0.006 K/sec
> >>
> >>  1  cpu-migrations#0.006 K/sec
> >>
> >>   4102  page-faults   #0.026 M/sec
> >>
> >>  157923752  cycles#1.000 GHz
> >>
> >> 9223372034948899840  instructions  # 58403957087.78  insn per cycle
> >>  branches
> >>
> >>  branch-misses
> >>
> >>
> >>0.168046000 seconds time elapsed
> >>
> >>
> >> Tracing read_counter(), I see this:
> >>
> >> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.058809] CPU 3:
> >> read_counter  idx=0 val=2528358954912
> >> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.063339] CPU 3:
> >> read_counter  idx=1 val=53892244920
> >> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.118160] CPU 3:
> >> read_counter  idx=0 val=2528418303035
> >> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.122694] CPU 3:
> >> read_counter  idx=1 val=53906699665
> >> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.216736] CPU 1:
> >> read_counter  idx=0 val=2528516878664
> >> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.221270] CPU 1:
> >> read_counter  idx=1 val=51986369142
> >>
> >> It looks like the counter values from different cores are subtracted and
> >> wraparound occurs.
> >>
> >
> > Thanks for the hint.  It makes sense.  9223372034948899840 is
> > 0x7fffffff8e66a400,
> > which should be a wraparound with the mask I set (63-bit) in the code.
> >
> > I will try this direction.  Ideally, we can solve it by explicitly syncing the
> > hwc->prev_count when a cpu migration event happens.
> >
> >>
> >> Also, core IDs and socket IDs are wrong in perf report:
> >>
> >
> > As Palmer has replied to this, I have no comment here.
> >
> >> perf report --header -I
> >> Error:
> >> The perf.data file has no samples!
> >> # 
> >> # captured on: Thu Jan  1 02:52:07 1970
> >> # hostname : buildroot
> >> # os release : 4.15.0-00045-g0d7c030-dirty
> >> # perf version : 4.15.0
> >> # arch : riscv64
> >> # nrcpus online : 4
> >> # nrcpus avail : 5
> >> # total memory : 8188340 kB
> >> # cmdline : /usr/bin/perf record -F 1000 lat_mem_rd -P 1 -W 1 -N 1 -t 10
> >> # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } =
> >> 1000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap =
> >> 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3,
> >> sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
> >> # sibling cores   : 1
> >> # 

Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-03-31 Thread Alex Solomatnikov
You can add a skew between cores in qemu, something like this:

case CSR_INSTRET:
return core_id() * cpu_get_host_ticks() / 10;
break;
case CSR_CYCLE:
return cpu_get_host_ticks();
break;

Alex

On Wed, Mar 28, 2018 at 7:30 PM, Alan Kao  wrote:
> Hi Alex,
>
> I appreciate your reply and tests.
>
> On Wed, Mar 28, 2018 at 03:58:41PM -0700, Alex Solomatnikov wrote:
>> Did you test this code?
>
> I did test this patch on QEMU's virt model with multi-hart, which is the only
> RISC-V machine I have for now.  But as I mentioned in
> https://github.com/riscv/riscv-qemu/pull/115 , the hardware counter support
> in QEMU does not fully conform to the 1.10 Priv-Spec, so I had to slightly
> tweak the code to make reading work.
>
> Specifically, the read to cycle and instret in QEMU looks like this:
> ...
> case CSR_INSTRET:
> case CSR_CYCLE:
> //  if (ctr_ok) {
> return cpu_get_host_ticks();
> //  }
> break;
> ...
> and the two commented-out lines were the tweak.
>
> In such an environment, I did not get anything unexpected.  No matter which of
> the two is requested, QEMU returns the host's tick.
>
>>
>> I got funny numbers when I tried to run it on HiFive Unleashed:
>>
>> perf stat mem-latency
>> ...
>>
>>  Performance counter stats for 'mem-latency':
>>
>> 157.907000  task-clock (msec) #0.940 CPUs utilized
>>
>>  1  context-switches  #0.006 K/sec
>>
>>  1  cpu-migrations#0.006 K/sec
>>
>>   4102  page-faults   #0.026 M/sec
>>
>>  157923752  cycles#1.000 GHz
>>
>> 9223372034948899840  instructions  # 58403957087.78  insn
>> per cycle
>>  branches
>>
>>  branch-misses
>>
>>
>>0.168046000 seconds time elapsed
>>
>>
>> Tracing read_counter(), I see this:
>>
>> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.058809] CPU 3:
>> read_counter  idx=0 val=2528358954912
>> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.063339] CPU 3:
>> read_counter  idx=1 val=53892244920
>> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.118160] CPU 3:
>> read_counter  idx=0 val=2528418303035
>> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.122694] CPU 3:
>> read_counter  idx=1 val=53906699665
>> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.216736] CPU 1:
>> read_counter  idx=0 val=2528516878664
>> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.221270] CPU 1:
>> read_counter  idx=1 val=51986369142
>>
>> It looks like the counter values from different cores are subtracted and
>> wraparound occurs.
>>
>
> Thanks for the hint.  It makes sense.  9223372034948899840 is
> 0x7fffffff8e66a400,
> which should be a wraparound with the mask I set (63-bit) in the code.
>
> I will try this direction.  Ideally, we can solve it by explicitly syncing the
> hwc->prev_count when a cpu migration event happens.
>
>>
>> Also, core IDs and socket IDs are wrong in perf report:
>>
>
> As Palmer has replied to this, I have no comment here.
>
>> perf report --header -I
>> Error:
>> The perf.data file has no samples!
>> # 
>> # captured on: Thu Jan  1 02:52:07 1970
>> # hostname : buildroot
>> # os release : 4.15.0-00045-g0d7c030-dirty
>> # perf version : 4.15.0
>> # arch : riscv64
>> # nrcpus online : 4
>> # nrcpus avail : 5
>> # total memory : 8188340 kB
>> # cmdline : /usr/bin/perf record -F 1000 lat_mem_rd -P 1 -W 1 -N 1 -t 10
>> # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } =
>> 1000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap =
>> 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3,
>> sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
>> # sibling cores   : 1
>> # sibling cores   : 2
>> # sibling cores   : 3
>> # sibling cores   : 4
>> # sibling threads : 1
>> # sibling threads : 2
>> # sibling threads : 3
>> # sibling threads : 4
>> # CPU 0: Core ID -1, Socket ID -1
>> # CPU 1: Core ID 0, Socket ID -1
>> # CPU 2: Core ID 0, Socket ID -1
>> # CPU 3: Core ID 0, Socket ID -1
>> # CPU 4: Core ID 0, Socket ID -1
>> # pmu mappings: cpu = 4, software = 1
>> # CPU cache info:
>> #  L1 Instruction  32K [1]
>> #  L1 Data 32K [1]
>> #  L1 Instruction  32K [2]
>> #  L1 Data 32K [2]
>> #  L1 Instruction  32K [3]
>> #  L1 Data 32K [3]
>> # missing features: TRACING_DATA BUILD_ID CPUDESC CPUID NUMA_TOPOLOGY
>> BRANCH_STACK GROUP_DESC AUXTRACE STAT
>> # 
>>
>>
>> Alex
>>
>
> Many thanks,
> Alan
>
>> On Mon, Mar 26, 2018 at 12:57 AM, Alan Kao  wrote:
>>
>> > This patch provides a basic PMU, riscv_base_pmu, which supports two
>> > general hardware events, instructions and cycles.  Furthermore, this
>> > PMU serves as a reference implementation to ease future porting.
>> >
>> > riscv_base_pmu should be 

Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-03-28 Thread Alan Kao
Hi Alex,

I appreciate your reply and tests.

On Wed, Mar 28, 2018 at 03:58:41PM -0700, Alex Solomatnikov wrote:
> Did you test this code?

I did test this patch on QEMU's virt model with multi-hart, which is the only
RISC-V machine I have for now.  But as I mentioned in 
https://github.com/riscv/riscv-qemu/pull/115 , the hardware counter support
in QEMU does not fully conform to the 1.10 Priv-Spec, so I had to slightly
tweak the code to make reading work.

Specifically, the read to cycle and instret in QEMU looks like this:
...
case CSR_INSTRET:
case CSR_CYCLE:
//  if (ctr_ok) {
return cpu_get_host_ticks();
//  }
break;
...
and the two commented-out lines were the tweak.

In such an environment, I did not get anything unexpected.  No matter which of the two
is requested, QEMU returns the host's tick.

> 
> I got funny numbers when I tried to run it on HiFive Unleashed:
> 
> perf stat mem-latency
> ...
> 
>  Performance counter stats for 'mem-latency':
> 
> 157.907000  task-clock (msec) #0.940 CPUs utilized
> 
>  1  context-switches  #0.006 K/sec
> 
>  1  cpu-migrations#0.006 K/sec
> 
>   4102  page-faults   #0.026 M/sec
> 
>  157923752  cycles#1.000 GHz
> 
> 9223372034948899840  instructions  # 58403957087.78  insn
> per cycle
>  branches
> 
>  branch-misses
> 
> 
>0.168046000 seconds time elapsed
> 
> 
> Tracing read_counter(), I see this:
> 
> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.058809] CPU 3:
> read_counter  idx=0 val=2528358954912
> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.063339] CPU 3:
> read_counter  idx=1 val=53892244920
> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.118160] CPU 3:
> read_counter  idx=0 val=2528418303035
> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.122694] CPU 3:
> read_counter  idx=1 val=53906699665
> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.216736] CPU 1:
> read_counter  idx=0 val=2528516878664
> Jan  1 00:41:50 buildroot user.info kernel: [ 2510.221270] CPU 1:
> read_counter  idx=1 val=51986369142
> 
> It looks like the counter values from different cores are subtracted and
> wraparound occurs.
> 

Thanks for the hint.  It makes sense.  9223372034948899840 is 0x7fffffff8e66a400,
which should be a wraparound with the mask I set (63-bit) in the code.

I will try this direction.  Ideally, we can solve it by explicitly syncing the
hwc->prev_count when a cpu migration event happens.

> 
> Also, core IDs and socket IDs are wrong in perf report:
> 

As Palmer has replied to this, I have no comment here.

> perf report --header -I
> Error:
> The perf.data file has no samples!
> # 
> # captured on: Thu Jan  1 02:52:07 1970
> # hostname : buildroot
> # os release : 4.15.0-00045-g0d7c030-dirty
> # perf version : 4.15.0
> # arch : riscv64
> # nrcpus online : 4
> # nrcpus avail : 5
> # total memory : 8188340 kB
> # cmdline : /usr/bin/perf record -F 1000 lat_mem_rd -P 1 -W 1 -N 1 -t 10
> # event : name = cycles:ppp, , size = 112, { sample_period, sample_freq } =
> 1000, sample_type = IP|TID|TIME|PERIOD, disabled = 1, inherit = 1, mmap =
> 1, comm = 1, freq = 1, enable_on_exec = 1, task = 1, precise_ip = 3,
> sample_id_all = 1, exclude_guest = 1, mmap2 = 1, comm_exec = 1
> # sibling cores   : 1
> # sibling cores   : 2
> # sibling cores   : 3
> # sibling cores   : 4
> # sibling threads : 1
> # sibling threads : 2
> # sibling threads : 3
> # sibling threads : 4
> # CPU 0: Core ID -1, Socket ID -1
> # CPU 1: Core ID 0, Socket ID -1
> # CPU 2: Core ID 0, Socket ID -1
> # CPU 3: Core ID 0, Socket ID -1
> # CPU 4: Core ID 0, Socket ID -1
> # pmu mappings: cpu = 4, software = 1
> # CPU cache info:
> #  L1 Instruction  32K [1]
> #  L1 Data 32K [1]
> #  L1 Instruction  32K [2]
> #  L1 Data 32K [2]
> #  L1 Instruction  32K [3]
> #  L1 Data 32K [3]
> # missing features: TRACING_DATA BUILD_ID CPUDESC CPUID NUMA_TOPOLOGY
> BRANCH_STACK GROUP_DESC AUXTRACE STAT
> # 
> 
> 
> Alex
>

Many thanks,
Alan

> On Mon, Mar 26, 2018 at 12:57 AM, Alan Kao  wrote:
> 
> > This patch provides a basic PMU, riscv_base_pmu, which supports two
> > general hardware events, instructions and cycles.  Furthermore, this
> > PMU serves as a reference implementation to ease future porting.
> >
> > riscv_base_pmu should be able to run on any RISC-V machine that
> > conforms to the Priv-Spec.  Note that the latest qemu model doesn't
> > fully support the proper behavior of Priv-Spec 1.10 yet, but a
> > workaround should be easy with very small fixes.  Please check
> > https://github.com/riscv/riscv-qemu/pull/115 for future updates.
> >
> > Cc: Nick Hu 
> > Cc: Greentime Hu 
> > Signed-off-by: Alan Kao 

Re: [PATCH 1/2] perf: riscv: preliminary RISC-V support

2018-03-28 Thread Palmer Dabbelt

On Wed, 28 Mar 2018 15:31:30 PDT (-0700), s...@sifive.com wrote:

Also, core IDs and socket IDs are wrong in perf report:


It looks like for this we need the stuff in topology.h.  I've cobbled something
together quickly for Alex to use for now to see if that fixes other problems;
I'll submit a sane version when we can.

