* Aneesh Kumar K.V [2020-07-31 16:49:14]:
> We use ibm,associativity and ibm,associativity-lookup-arrays to derive the
> numa node numbers. These device tree properties are firmware-indicated
> groupings of resources based on their hierarchy in the platform. These
> numbers (group id) are
>
> On 31-Jul-2020, at 1:20 AM, Jiri Olsa wrote:
>
> On Thu, Jul 30, 2020 at 01:24:40PM +0530, Athira Rajeev wrote:
>>
>>
>>> On 27-Jul-2020, at 10:46 PM, Athira Rajeev wrote:
>>>
>>> Patch set to add support for perf extended register capability in
>>> powerpc. The capability flag
The pull request you sent on Fri, 31 Jul 2020 23:05:17 +1000:
> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
> tags/powerpc-5.8-8
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/deacdb3e3979979016fcd0ffd518c320a62ad166
Thank you!
--
Michael Ellerman writes:
> Nathan Lynch writes:
>> Michael Ellerman writes:
>>> Nathan Lynch writes:
Laurent Dufour writes:
> On 28/07/2020 at 19:37, Nathan Lynch wrote:
>> The drmem lmb list can have hundreds of thousands of entries, and
>> unfortunately lookups take the
Nathan Lynch writes:
> Michael Ellerman writes:
>> Nathan Lynch writes:
>>> Laurent Dufour writes:
On 28/07/2020 at 19:37, Nathan Lynch wrote:
> The drmem lmb list can have hundreds of thousands of entries, and
> unfortunately lookups take the form of linear searches. As long as
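A minimal sketch of the linear search being described, using the real
for_each_drmem_lmb() iterator from <asm/drmem.h>; the lookup helper itself is
illustrative, not the actual call site:

	/* Every lookup walks the whole LMB array: O(n) per call. */
	static struct drmem_lmb *find_lmb_by_drc(u32 drc_index)
	{
		struct drmem_lmb *lmb;

		for_each_drmem_lmb(lmb) {
			if (lmb->drc_index == drc_index)
				return lmb;
		}
		return NULL;
	}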
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Hi Linus,
Please pull one more powerpc fix for 5.8:
The following changes since commit f0479c4bcbd92d1a457d4a43bcab79f29d11334a:
  selftests/powerpc: Use proper error code to check fault address (2020-07-15 23:10:17 +1000)
are available in the
There are several reasons why a PCI capability read may fail whether the
device is present or not. If this happens, pcie_capability_read_*() will
return -EINVAL/PCIBIOS_BAD_REGISTER_NUMBER or PCIBIOS_DEVICE_NOT_FOUND
and *val is set to 0.
This behaviour is further ensured by this code inside
On failure pcie_capability_read_*() sets its last parameter, val,
to 0. However, with Patch 12/12, it is possible that val is set
to ~0 on failure. This would introduce a bug because
(x & x) == (~0 & x).
Since ~0 is an invalid value here,
add an extra check for ~0 to the if condition to confirm
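A minimal sketch of the resulting caller-side pattern;
pcie_capability_read_word() and PCI_EXP_LNKCTL are real, the surrounding
context (pdev, the early return) is invented:

	u16 lnkctl;
	int ret;

	ret = pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl);
	if (ret || lnkctl == (u16)~0)
		return;	/* failed read, or ~0 from a device that dropped off the bus */
	/* lnkctl now holds a trustworthy value */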
Srikar Dronamraju writes:
> * Michael Ellerman [2020-07-31 17:45:37]:
>
>> Srikar Dronamraju writes:
>> > Currently the "CACHE" domain happens to be the 2nd sched domain as per
>> > powerpc_topology. This domain will collapse if the cpumask of the
>> > l2-cache is the same as the SMT domain. However we could
Srikar Dronamraju writes:
> * Michael Ellerman [2020-07-31 17:52:15]:
>
>> Srikar Dronamraju writes:
>> > If allocated earlier and the search fails, then the cpumask needs to be
>> > freed. However cpu_l1_cache_map can be allocated after we search the
>> > thread group.
>>
>> It's not freed anywhere
v4 CHANGES:
- Drop uses of the pcie_capability_read_*() return value. This relates to
[1], which points towards making the accessors return void.
- Remove patches found to be unnecessary
- Reword some commit messages
v3 CHANGES:
- Split previous PATCH 6/13 into two: PATCH 6/14 and PATCH 7/14
-
Now that we are handling vmemmap list allocation failure correctly, don't
WARN in section deactivate when we don't find a mapping vmemmap list entry.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git
If we fail to allocate the vmemmap list, we don't keep track of the allocated
vmemmap block buf. Hence on section deactivate we skip the vmemmap block
buf free. This results in a memory leak.
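A hypothetical sketch of the leak pattern (all names invented, not the actual
init_64.c code): the backing block is allocated first, and if the list entry
that records it cannot be allocated, nothing remembers the block, so the
section-deactivate path can never free it:

	void *buf = alloc_vmemmap_block_buf(size, node);	/* backing block */
	if (!vmemmap_list_add(buf, start, size)) {
		/* List-entry allocation failed: without this free, 'buf'
		 * is untracked and leaks at section deactivate. */
		free_vmemmap_block_buf(buf);
		return NULL;
	}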
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 35 ---
1 file
Srikar Dronamraju writes:
> * Michael Ellerman [2020-07-31 17:49:55]:
>
>> Srikar Dronamraju writes:
>> > Add support for grouping cores based on the device-tree classification.
>> > - The last domain in the associativity domains always refers to the
>> > core.
>> > - If primary reference
Srikar Dronamraju writes:
> * Michael Ellerman [2020-07-31 18:02:21]:
>
>> Srikar Dronamraju writes:
>> > Look up the coregroup id from the associativity array.
>
> Thanks Michael for all your comments and inputs.
>
>> It's slightly strange that this is called in patch 9, but only properly
>>
On PowerNV platforms we always have a 1:1 mapping between chip ID and
firmware group id. Use the helper to convert firmware group id to
node id instead of directly using chip ID as the Linux node id.
NOTE: This doesn't have any functional change. On PowerNV platforms
we continue to have a 1:1 mapping
We use ibm,associativity and ibm,associativity-lookup-arrays to derive the numa
node numbers. These device tree properties are firmware-indicated groupings of
resources based on their hierarchy in the platform. These numbers (group id) are
not sequential and hypervisor/firmware can follow different
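A hypothetical sketch of the kind of mapping this implies (all names invented):
because firmware group ids are sparse, they can be assigned dense Linux node
ids on first sight:

	#define MAX_FW_GROUPS	256
	static int fwid_to_nid[MAX_FW_GROUPS] = { [0 ... MAX_FW_GROUPS - 1] = -1 };
	static int next_nid;

	static int firmware_group_to_nid(int fwid)
	{
		if (fwid_to_nid[fwid] == -1)		/* first sighting */
			fwid_to_nid[fwid] = next_nid++;
		return fwid_to_nid[fwid];
	}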
On Fri, Jul 31, 2020 at 01:37:00AM -0700, Ram Pai wrote:
> On Fri, Jul 31, 2020 at 09:59:40AM +0530, Bharata B Rao wrote:
> > On Thu, Jul 30, 2020 at 04:25:26PM -0700, Ram Pai wrote:
> > In our case, device pages that are in use are always associated with a valid
> > pvt member. See
* Michael Ellerman [2020-07-31 18:02:21]:
> Srikar Dronamraju writes:
> > Look up the coregroup id from the associativity array.
>
Thanks Michael for all your comments and inputs.
> It's slightly strange that this is called in patch 9, but only properly
> implemented here in patch 10.
>
>
* Michael Ellerman [2020-07-31 17:52:15]:
> Srikar Dronamraju writes:
> > If allocated earlier and the search fails, then the cpumask needs to be
> > freed. However cpu_l1_cache_map can be allocated after we search the
> > thread group.
>
> It's not freed anywhere AFAICS?
>
Yes, it's never freed.
* Michael Ellerman [2020-07-31 17:45:37]:
> Srikar Dronamraju writes:
> > Currently the "CACHE" domain happens to be the 2nd sched domain as per
> > powerpc_topology. This domain will collapse if the cpumask of the
> > l2-cache is the same as the SMT domain. However we could generalize this
> > domain such that it
> >
* Michael Ellerman [2020-07-31 17:49:55]:
> Srikar Dronamraju writes:
> > Add support for grouping cores based on the device-tree classification.
> > - The last domain in the associativity domains always refers to the
> > core.
> > - If primary reference domain happens to be the penultimate
On Fri, Jul 31, 2020 at 09:59:40AM +0530, Bharata B Rao wrote:
> On Thu, Jul 30, 2020 at 04:25:26PM -0700, Ram Pai wrote:
> > Observed the following oops while stress-testing, using multiple
> > secure VMs on a distro kernel. However this issue theoretically exists in
> > the 5.5 kernel and later.
> >
Add test cases for VSX load/store vector paired instructions:
* Load VSX Vector Paired (lxvp)
* Load VSX Vector Paired Indexed (lxvpx)
* Prefixed Load VSX Vector Paired (plxvp)
* Store VSX Vector Paired (stxvp)
* Store VSX Vector Paired Indexed (stxvpx)
Add instruction encodings, extended opcodes, regs and the DQ immediate macro
for the new VSX vector paired instructions:
* Load VSX Vector Paired (lxvp)
* Load VSX Vector Paired Indexed (lxvpx)
* Prefixed Load VSX Vector Paired (plxvp)
* Store VSX Vector Paired (stxvp)
Add emulate_step() changes to support VSX vector paired storage
access instructions, which provide octword operand loads/stores
between storage and the set of 64 Vector-Scalar Registers (VSRs).
Suggested-by: Ravi Bangoria
Suggested-by: Naveen N. Rao
Signed-off-by: Balamuruhan S
---
VSX Vector Paired instructions load/store an octword (32 bytes)
between storage and two sequential VSRs. Add `analyse_instr()` support
for these new instructions:
* Load VSX Vector Paired (lxvp)
* Load VSX Vector Paired Indexed (lxvpx)
* Prefixed Load VSX Vector Paired
VSX vector paired instructions operate on an octword (32-byte) operand
for loads and stores between storage and a pair of sequential Vector-Scalar
Registers (VSRs). There are 4 word instructions and 2 prefixed instructions
that provide these 32-byte storage access operations - lxvp, lxvpx,
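A rough sketch of what emulating one of these accesses involves (helper and
variable names invented; the real logic lives in analyse_instr() and
emulate_step()): the octword operand spans VSR[XTp] and VSR[XTp + 1]:

	u8 buf[32];

	if (emul_read_mem(buf, ea, 32, regs))	/* invented accessor */
		return -EFAULT;
	memcpy(&vsr[xtp], buf, 16);		/* first VSR: bytes 0-15 */
	memcpy(&vsr[xtp + 1], buf + 16, 16);	/* second VSR: bytes 16-31 */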
On Fri, Jul 31, 2020 at 10:03:34AM +0530, Bharata B Rao wrote:
> On Thu, Jul 30, 2020 at 04:21:01PM -0700, Ram Pai wrote:
> > H_SVM_PAGE_IN hcall takes a flag parameter. This parameter specifies the
> > way in which a page will be treated. H_PAGE_IN_NONSHARED indicates
> > that the page will be
Srikar Dronamraju writes:
> Look up the coregroup id from the associativity array.
It's slightly strange that this is called in patch 9, but only properly
implemented here in patch 10.
I'm not saying you have to squash them together, but it would be good if
the change log for patch 9 mentioned
Srikar Dronamraju writes:
> If allocated earlier and the search fails, then the cpumask needs to be
> freed. However cpu_l1_cache_map can be allocated after we search the
> thread group.
It's not freed anywhere AFAICS?
And even after this change there's still an error path that doesn't free
it, isn't
Srikar Dronamraju writes:
> Add support for grouping cores based on the device-tree classification.
> - The last domain in the associativity domains always refers to the
> core.
> - If primary reference domain happens to be the penultimate domain in
> the associativity domains device-tree
Srikar Dronamraju writes:
> Currently the "CACHE" domain happens to be the 2nd sched domain as per
> powerpc_topology. This domain will collapse if the cpumask of the l2-cache
> is the same as the SMT domain. However we could generalize this domain such
> that it could be either a "CACHE" domain or a "BIGCORE"
Hi Srikar, Valentin,
On Wed, Jul 29, 2020 at 11:43:55AM +0530, Srikar Dronamraju wrote:
> * Valentin Schneider [2020-07-28 16:03:11]:
>
[..snip..]
> At this time the current topology would be good enough, i.e. BIGCORE would
> always be equal to an MC. However in future we could have chips that
Vaibhav Jain writes:
> We add support for reporting the 'fuel-gauge' NVDIMM metric via the
> PAPR_PDSM_HEALTH pdsm payload. The 'fuel-gauge' metric indicates the usage
> life remaining of a papr-scm compatible NVDIMM. PHYP exposes this
> metric via the H_SCM_PERFORMANCE_STATS hcall.
>
> The metric value is
Vaibhav Jain writes:
> Update papr_scm.c to query dimm performance statistics from PHYP via
> the H_SCM_PERFORMANCE_STATS hcall and export them to user-space as the
> PAPR-specific NVDIMM attribute 'perf_stats' in sysfs. The patch also
> provides sysfs ABI documentation for the stats being reported and
We add support for reporting the 'fuel-gauge' NVDIMM metric via the
PAPR_PDSM_HEALTH pdsm payload. The 'fuel-gauge' metric indicates the usage
life remaining of a papr-scm compatible NVDIMM. PHYP exposes this
metric via the H_SCM_PERFORMANCE_STATS hcall.
The metric value is returned from the pdsm by extending the
Update papr_scm.c to query dimm performance statistics from PHYP via
the H_SCM_PERFORMANCE_STATS hcall and export them to user-space as the
PAPR-specific NVDIMM attribute 'perf_stats' in sysfs. The patch also
provides sysfs ABI documentation for the stats being reported and
their meanings.
During NVDIMM
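A hypothetical user-space usage sketch; the sysfs path below is an assumption
based on the description, not taken from the patch:

	#include <stdio.h>

	int main(void)
	{
		char line[256];
		FILE *f = fopen("/sys/bus/nd/devices/nmem0/papr/perf_stats", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			fputs(line, stdout);	/* dump the stats PHYP reported */
		fclose(f);
		return 0;
	}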
Changes since v3[1]:
* Fixed a rebase issue pointed out by Aneesh in the first patch of the series.
[1]
https://lore.kernel.org/linux-nvdimm/20200730121303.134230-1-vaib...@linux.ibm.com
---
This small patchset implements kernel side support for reporting
'life_used_percentage' metric in NDCTL