Re: [RFC PATCH 1/2] powerpc/numa: Introduce logical numa id

2020-07-31 Thread Srikar Dronamraju
* Aneesh Kumar K.V [2020-07-31 16:49:14]: > We use ibm,associativity and ibm,associativity-lookup-arrays to derive the numa node numbers. These device tree properties are firmware indicated grouping of resources based on their hierarchy in the platform. These numbers (group id) are

[RFC PATCH 2/2] powerpc/powernv/cpufreq: Don't assume chip id is same as Linux node id

2020-07-31 Thread Aneesh Kumar K.V
On PowerNV platforms we always have 1:1 mapping between chip ID and firmware group id. Use the helper to convert firmware group id to node id instead of directly using chip ID as Linux node id. NOTE: This doesn't have any functional change. On PowerNV platforms we continue to have 1:1 mapping
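The approach described above — going through a conversion helper even where the mapping happens to be 1:1 — might be sketched roughly as follows. This is a standalone userspace illustration, not the kernel's actual code; every name here is hypothetical.

```c
#include <assert.h>

/* Minimal sketch (hypothetical names): on PowerNV the firmware group id
 * (chip id) maps 1:1 to the Linux node id, but callers still go through
 * a helper so the same code stays correct on platforms where the
 * mapping is not 1:1. */
#define MAX_GROUP_ID 16

static int group_to_nid[MAX_GROUP_ID];

static void powernv_init_nid_map(void)
{
    for (int i = 0; i < MAX_GROUP_ID; i++)
        group_to_nid[i] = i;            /* 1:1 on PowerNV */
}

static int firmware_group_id_to_nid(int group_id)
{
    return group_to_nid[group_id];      /* never use group_id directly */
}
```

The point of the indirection is that callers never assume chip id == node id, which is exactly the assumption the patch removes.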

Re: [PATCH v4 07/10] Powerpc/numa: Detect support for coregroup

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > * Michael Ellerman [2020-07-31 17:49:55]: > >> Srikar Dronamraju writes: >> > Add support for grouping cores based on the device-tree classification. >> > - The last domain in the associativity domains always refers to the >> > core. >> > - If primary reference

Re: [PATCH v4 10/10] powerpc/smp: Implement cpu_to_coregroup_id

2020-07-31 Thread Srikar Dronamraju
* Michael Ellerman [2020-07-31 18:02:21]: > Srikar Dronamraju writes: > > Lookup the coregroup id from the associativity array. > Thanks Michael for all your comments and inputs. > It's slightly strange that this is called in patch 9, but only properly > implemented here in patch 10. > >

Re: [PATCH v4 10/10] powerpc/smp: Implement cpu_to_coregroup_id

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > * Michael Ellerman [2020-07-31 18:02:21]: > >> Srikar Dronamraju writes: >> > Lookup the coregroup id from the associativity array. > > Thanks Michael for all your comments and inputs. > >> It's slightly strange that this is called in patch 9, but only properly >>

[PATCH 2/2] powerpc/vmemmap: Don't warn if we don't find a mapping vmemmap list entry

2020-07-31 Thread Aneesh Kumar K.V
Now that we are handling vmemmap list allocation failure correctly, don't WARN in section deactivate when we don't find a mapping vmemmap list entry. Signed-off-by: Aneesh Kumar K.V --- arch/powerpc/mm/init_64.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git

[PATCH v4 00/12] PCI: Remove '*val = 0' from pcie_capability_read_*()

2020-07-31 Thread Saheed O. Bolarinwa
v4 CHANGES: - Drop uses of pcie_capability_read_*() return value. This relates to [1], which points towards making the accessors return void. - Remove patches found to be unnecessary - Reword some commit messages v3 CHANGES: - Split previous PATCH 6/13 into two : PATCH 6/14 and PATCH 7/14 -

[PATCH v4 10/12] PCI/AER: Check if pcie_capability_read_*() reads ~0

2020-07-31 Thread Saheed O. Bolarinwa
On failure pcie_capability_read_*() sets its last parameter, val, to 0. However, with Patch 12/12, it is possible that val is set to ~0 on failure. This would introduce a bug because (x & x) == (~0 & x). Since ~0 is an invalid value here, add an extra check for ~0 to the if condition to confirm
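The bug being guarded against can be shown with a small userspace sketch (the macro and helper names are hypothetical, not the kernel's): a failure value of 0 fails every bit test, but a failure value of ~0 passes every bit test, because (~0 & x) == x. Callers therefore have to reject ~0 explicitly before testing bits.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical illustration of the check: if a failed read can leave
 * val as ~0 instead of 0, then (val & FLAG) is true for every FLAG.
 * Rejecting the all-ones sentinel first keeps the bit test meaningful. */
#define EXAMPLE_STATUS_FLAG 0x0008      /* illustrative bit mask */

static int status_flag_set(uint16_t val)
{
    if (val == (uint16_t)~0)            /* ~0 is not a valid register value */
        return 0;
    return (val & EXAMPLE_STATUS_FLAG) != 0;
}
```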

Re: [PATCH] powerpc/pseries: explicitly reschedule during drmem_lmb list traversal

2020-07-31 Thread Michael Ellerman
Nathan Lynch writes: > Michael Ellerman writes: >> Nathan Lynch writes: >>> Laurent Dufour writes: Le 28/07/2020 à 19:37, Nathan Lynch a écrit : > The drmem lmb list can have hundreds of thousands of entries, and > unfortunately lookups take the form of linear searches. As long as
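The concern in this thread — linear searches over a list with hundreds of thousands of entries monopolizing the CPU — can be sketched in userspace. Here reschedule() is only a stand-in for the kernel's cond_resched(), and all the other names are hypothetical too.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of the idea: during a long linear walk, periodically
 * offer to reschedule so the loop cannot hog the CPU. reschedule() is a
 * stand-in for the kernel's cond_resched(). */
struct lmb {
    unsigned long base_addr;
    struct lmb *next;
};

static unsigned long resched_count;

static void reschedule(void)
{
    resched_count++;                    /* cond_resched() in the kernel */
}

static struct lmb *lmb_find(struct lmb *head, unsigned long addr)
{
    size_t seen = 0;

    for (struct lmb *l = head; l; l = l->next) {
        if (++seen % 1024 == 0)         /* throttle: yield every 1024 entries */
            reschedule();
        if (l->base_addr == addr)
            return l;
    }
    return NULL;
}
```

The 1024-entry stride is arbitrary here; the trade-off discussed in the thread is how often to pay the rescheduling cost versus how long the walk may run uninterrupted.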

Re: [PATCH v4 08/10] powerpc/smp: Allocate cpumask only after searching thread group

2020-07-31 Thread Srikar Dronamraju
* Michael Ellerman [2020-07-31 17:52:15]: > Srikar Dronamraju writes: > > If allocated earlier and the search fails, then the cpumask needs to be > > freed. However cpu_l1_cache_map can be allocated after we search the thread > > group. > > It's not freed anywhere AFAICS? > Yes, it's never freed.

Re: [PATCH] KVM: PPC: Book3S HV: fix a oops in kvmppc_uvmem_page_free()

2020-07-31 Thread Bharata B Rao
On Fri, Jul 31, 2020 at 01:37:00AM -0700, Ram Pai wrote: > On Fri, Jul 31, 2020 at 09:59:40AM +0530, Bharata B Rao wrote: > > On Thu, Jul 30, 2020 at 04:25:26PM -0700, Ram Pai wrote: > > In our case, device pages that are in use are always associated with a valid pvt member. See

[RFC PATCH 1/2] powerpc/numa: Introduce logical numa id

2020-07-31 Thread Aneesh Kumar K.V
We use ibm,associativity and ibm,associativity-lookup-arrays to derive the numa node numbers. These device tree properties are firmware indicated grouping of resources based on their hierarchy in the platform. These numbers (group id) are not sequential and hypervisor/firmware can follow different
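The scheme the cover text describes — firmware group ids that are not sequential getting mapped onto compact logical node ids — might look roughly like this first-come, first-served assignment. This is a sketch with hypothetical names, not the patch's actual implementation.

```c
#include <assert.h>

/* Sketch only (hypothetical names): firmware group ids are sparse, so
 * hand out the next free logical node id the first time each group id
 * is seen, and return the same logical id on every later lookup. */
#define MAX_GROUP_ID 64

static int logical_nid[MAX_GROUP_ID];   /* group id -> logical nid */
static int next_logical_nid;

static void nid_map_init(void)
{
    for (int i = 0; i < MAX_GROUP_ID; i++)
        logical_nid[i] = -1;            /* -1 = not yet assigned */
    next_logical_nid = 0;
}

static int affinity_to_logical_nid(int group_id)
{
    if (logical_nid[group_id] == -1)
        logical_nid[group_id] = next_logical_nid++;
    return logical_nid[group_id];
}
```

With this shape, Linux node numbers stay dense regardless of which numbering convention the hypervisor or firmware follows.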

Re: [PATCH v4 06/10] powerpc/smp: Generalize 2nd sched domain

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > * Michael Ellerman [2020-07-31 17:45:37]: > >> Srikar Dronamraju writes: >> > Currently "CACHE" domain happens to be the 2nd sched domain as per >> > powerpc_topology. This domain will collapse if cpumask of l2-cache is >> > same as SMT domain. However we could

[PATCH v4 12/12] PCI: Remove '*val = 0' from pcie_capability_read_*()

2020-07-31 Thread Saheed O. Bolarinwa
There are several reasons why a PCI capability read may fail, whether the device is present or not. If this happens, pcie_capability_read_*() will return -EINVAL/PCIBIOS_BAD_REGISTER_NUMBER or PCIBIOS_DEVICE_NOT_FOUND and *val is set to 0. This behaviour is further ensured by this code inside

[PATCH 1/2] powerpc/vmemmap: Fix memory leak with vmemmap list allocation failures.

2020-07-31 Thread Aneesh Kumar K.V
If we fail to allocate the vmemmap list, we don't keep track of the allocated vmemmap block buf. Hence on section deactivate we skip the vmemmap block buf free. This results in a memory leak. Signed-off-by: Aneesh Kumar K.V --- arch/powerpc/mm/init_64.c | 35 --- 1 file

[GIT PULL] Please pull powerpc/linux.git powerpc-5.8-8 tag

2020-07-31 Thread Michael Ellerman
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi Linus, Please pull one more powerpc fix for 5.8: The following changes since commit f0479c4bcbd92d1a457d4a43bcab79f29d11334a: selftests/powerpc: Use proper error code to check fault address (2020-07-15 23:10:17 +1000) are available in the

Re: [PATCH v4 08/10] powerpc/smp: Allocate cpumask only after searching thread group

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > * Michael Ellerman [2020-07-31 17:52:15]: > >> Srikar Dronamraju writes: >> > If allocated earlier and the search fails, then cpumask need to be >> > freed. However cpu_l1_cache_map can be allocated after we search thread >> > group. >> >> It's not freed anywhere

Re: [PATCH] powerpc/pseries: explicitly reschedule during drmem_lmb list traversal

2020-07-31 Thread Nathan Lynch
Michael Ellerman writes: > Nathan Lynch writes: >> Michael Ellerman writes: >>> Nathan Lynch writes: Laurent Dufour writes: > Le 28/07/2020 à 19:37, Nathan Lynch a écrit : >> The drmem lmb list can have hundreds of thousands of entries, and >> unfortunately lookups take the

Re: [PATCH V5 0/4] powerpc/perf: Add support for perf extended regs in powerpc

2020-07-31 Thread Athira Rajeev
> On 31-Jul-2020, at 1:20 AM, Jiri Olsa wrote: > > On Thu, Jul 30, 2020 at 01:24:40PM +0530, Athira Rajeev wrote: >> >> >>> On 27-Jul-2020, at 10:46 PM, Athira Rajeev >>> wrote: >>> >>> Patch set to add support for perf extended register capability in >>> powerpc. The capability flag

Re: [GIT PULL] Please pull powerpc/linux.git powerpc-5.8-8 tag

2020-07-31 Thread pr-tracker-bot
The pull request you sent on Fri, 31 Jul 2020 23:05:17 +1000: > https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git > tags/powerpc-5.8-8 has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/deacdb3e3979979016fcd0ffd518c320a62ad166 Thank you! --

[PATCH v4 0/2] powerpc/papr_scm: add support for reporting NVDIMM 'life_used_percentage' metric

2020-07-31 Thread Vaibhav Jain
Changes since v3[1]: * Fixed a rebase issue pointed out by Aneesh in first patch in the series. [1] https://lore.kernel.org/linux-nvdimm/20200730121303.134230-1-vaib...@linux.ibm.com --- This small patchset implements kernel side support for reporting 'life_used_percentage' metric in NDCTL

[PATCH v4 1/2] powerpc/papr_scm: Fetch nvdimm performance stats from PHYP

2020-07-31 Thread Vaibhav Jain
Update papr_scm.c to query dimm performance statistics from PHYP via the H_SCM_PERFORMANCE_STATS hcall and export them to user-space as the PAPR specific NVDIMM attribute 'perf_stats' in sysfs. The patch also provides sysfs ABI documentation for the stats being reported and their meanings. During NVDIMM

Re: [PATCH v4 2/2] powerpc/papr_scm: Add support for fetching nvdimm 'fuel-gauge' metric

2020-07-31 Thread Aneesh Kumar K.V
Vaibhav Jain writes: > We add support for reporting 'fuel-gauge' NVDIMM metric via > PAPR_PDSM_HEALTH pdsm payload. 'fuel-gauge' metric indicates the usage > life remaining of a papr-scm compatible NVDIMM. PHYP exposes this > metric via the H_SCM_PERFORMANCE_STATS. > > The metric value is

[PATCH v4 2/2] powerpc/papr_scm: Add support for fetching nvdimm 'fuel-gauge' metric

2020-07-31 Thread Vaibhav Jain
We add support for reporting the 'fuel-gauge' NVDIMM metric via the PAPR_PDSM_HEALTH pdsm payload. The 'fuel-gauge' metric indicates the usage life remaining of a papr-scm compatible NVDIMM. PHYP exposes this metric via the H_SCM_PERFORMANCE_STATS hcall. The metric value is returned from the pdsm by extending the

Re: [PATCH v4 1/2] powerpc/papr_scm: Fetch nvdimm performance stats from PHYP

2020-07-31 Thread Aneesh Kumar K.V
Vaibhav Jain writes: > Update papr_scm.c to query dimm performance statistics from PHYP via > H_SCM_PERFORMANCE_STATS hcall and export them to user-space as PAPR > specific NVDIMM attribute 'perf_stats' in sysfs. The patch also > provide a sysfs ABI documentation for the stats being reported and

Re: [PATCH v4 09/10] Powerpc/smp: Create coregroup domain

2020-07-31 Thread Gautham R Shenoy
Hi Srikar, Valentin, On Wed, Jul 29, 2020 at 11:43:55AM +0530, Srikar Dronamraju wrote: > * Valentin Schneider [2020-07-28 16:03:11]: > [..snip..] > At this time the current topology would be good enough, i.e. BIGCORE would always be equal to an MC. However in future we could have chips that

Re: [PATCH v4 06/10] powerpc/smp: Generalize 2nd sched domain

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > Currently "CACHE" domain happens to be the 2nd sched domain as per > powerpc_topology. This domain will collapse if cpumask of l2-cache is > same as SMT domain. However we could generalize this domain such that it > could mean either be a "CACHE" domain or a "BIGCORE"

Re: [PATCH v4 07/10] Powerpc/numa: Detect support for coregroup

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > Add support for grouping cores based on the device-tree classification. > - The last domain in the associativity domains always refers to the > core. > - If primary reference domain happens to be the penultimate domain in > the associativity domains device-tree

Re: [PATCH] KVM: PPC: Book3S HV: Define H_PAGE_IN_NONSHARED for H_SVM_PAGE_IN hcall

2020-07-31 Thread Ram Pai
On Fri, Jul 31, 2020 at 10:03:34AM +0530, Bharata B Rao wrote: > On Thu, Jul 30, 2020 at 04:21:01PM -0700, Ram Pai wrote: > > H_SVM_PAGE_IN hcall takes a flag parameter. This parameter specifies the > > way in which a page will be treated. H_PAGE_IN_NONSHARED indicates > > that the page will be

[PATCH v3 1/4] powerpc/sstep: support new VSX vector paired storage access instructions

2020-07-31 Thread Balamuruhan S
VSX Vector Paired instructions load/store an octword (32 bytes) between storage and two sequential VSRs. Add `analyse_instr()` support for these new instructions: * Load VSX Vector Paired (lxvp) * Load VSX Vector Paired Indexed (lxvpx) * Prefixed Load VSX Vector Paired

[PATCH v3 4/4] powerpc sstep: add testcases for vsx load/store instructions

2020-07-31 Thread Balamuruhan S
add testcases for vsx load/store vector paired instructions, * Load VSX Vector Paired (lxvp) * Load VSX Vector Paired Indexed (lxvpx) * Prefixed Load VSX Vector Paired (plxvp) * Store VSX Vector Paired (stxvp) * Store VSX Vector Paired Indexed (stxvpx)

Re: [PATCH v4 08/10] powerpc/smp: Allocate cpumask only after searching thread group

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > If allocated earlier and the search fails, then the cpumask needs to be > freed. However cpu_l1_cache_map can be allocated after we search the thread > group. It's not freed anywhere AFAICS? And even after this change there's still an error path that doesn't free it, isn't
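The ordering being discussed — allocate only once the search has succeeded, so the failure path has nothing to free — can be sketched as follows. This is a userspace illustration with hypothetical names, not the patch's code.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Sketch of the ordering fix (hypothetical names): performing the
 * thread-group search before allocating means the error return leaves
 * nothing behind, so no free is needed on that path. */
static int search_thread_group(int cpu)
{
    return (cpu >= 0) ? 0 : -1;         /* stand-in for the real search */
}

static int init_cpu_l1_cache_map(int cpu, void **mask_out)
{
    int err = search_thread_group(cpu);

    if (err)
        return err;                     /* nothing allocated: no leak */

    *mask_out = calloc(1, 64);          /* allocate only after success */
    return *mask_out ? 0 : -ENOMEM;
}
```

The design choice is simply to move the allocation past the last point of failure, trading a tiny amount of code motion for one less error path to audit.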

Re: [PATCH v4 10/10] powerpc/smp: Implement cpu_to_coregroup_id

2020-07-31 Thread Michael Ellerman
Srikar Dronamraju writes: > Lookup the coregroup id from the associativity array. It's slightly strange that this is called in patch 9, but only properly implemented here in patch 10. I'm not saying you have to squash them together, but it would be good if the change log for patch 9 mentioned

[PATCH v3 0/4] VSX 32-byte vector paired load/store instructions

2020-07-31 Thread Balamuruhan S
VSX vector paired instructions operate with an octword (32-byte) operand for loads and stores between storage and a pair of sequential Vector-Scalar Registers (VSRs). There are 4 word instructions and 2 prefixed instructions that provide this 32-byte storage access operation - lxvp, lxvpx,

[PATCH v3 2/4] powerpc/sstep: support emulation for vsx vector paired storage access instructions

2020-07-31 Thread Balamuruhan S
add emulate_step() changes to support vsx vector paired storage access instructions that provide octword operand loads/stores between storage and the set of 64 Vector-Scalar Registers (VSRs). Suggested-by: Ravi Bangoria Suggested-by: Naveen N. Rao Signed-off-by: Balamuruhan S ---

[PATCH v3 3/4] powerpc ppc-opcode: add encoding macros for vsx vector paired instructions

2020-07-31 Thread Balamuruhan S
add instruction encodings, extended opcodes, register and DQ immediate macros for the new vsx vector paired instructions: * Load VSX Vector Paired (lxvp) * Load VSX Vector Paired Indexed (lxvpx) * Prefixed Load VSX Vector Paired (plxvp) * Store VSX Vector Paired (stxvp)

Re: [PATCH] KVM: PPC: Book3S HV: fix a oops in kvmppc_uvmem_page_free()

2020-07-31 Thread Ram Pai
On Fri, Jul 31, 2020 at 09:59:40AM +0530, Bharata B Rao wrote: > On Thu, Jul 30, 2020 at 04:25:26PM -0700, Ram Pai wrote: > > Observed the following oops while stress-testing, using multiple > > secure VMs on a distro kernel. However this issue theoretically exists in > > the 5.5 kernel and later. > >

Re: [PATCH v4 07/10] Powerpc/numa: Detect support for coregroup

2020-07-31 Thread Srikar Dronamraju
* Michael Ellerman [2020-07-31 17:49:55]: > Srikar Dronamraju writes: > > Add support for grouping cores based on the device-tree classification. > > - The last domain in the associativity domains always refers to the > > core. > > - If primary reference domain happens to be the penultimate

Re: [PATCH v4 06/10] powerpc/smp: Generalize 2nd sched domain

2020-07-31 Thread Srikar Dronamraju
* Michael Ellerman [2020-07-31 17:45:37]: > Srikar Dronamraju writes: > > Currently "CACHE" domain happens to be the 2nd sched domain as per > > powerpc_topology. This domain will collapse if cpumask of l2-cache is > > same as SMT domain. However we could generalize this domain such that it > >