On Thu, 2017-06-29 at 20:44 -0400, Gustavo Romero wrote:
> Add a test to check if FP/VSX registers are sane (restored correctly) after
> a VSX unavailable exception is caught in the middle of a transaction.
>
> Signed-off-by: Gustavo Romero
> Signed-off-by: Breno
"Rafael J. Wysocki" writes:
> On Fri, Jun 30, 2017 at 5:45 AM, Michael Ellerman wrote:
>> "Rafael J. Wysocki" writes:
>>
>>> On Thu, Jun 29, 2017 at 2:21 PM, Michael Ellerman
>>> wrote:
On Wed,
Check the validity of cpu before calling get_hard_smp_processor_id().
Found with Coverity.
Signed-off-by: Santosh Sivaraj
---
arch/powerpc/platforms/powernv/smp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/smp.c
On Fri, 2017-06-30 at 13:41 -0300, Breno Leitao wrote:
> Thanks Gustavo for the patch.
>
> On Thu, Jun 29, 2017 at 08:39:23PM -0400, Gustavo Romero wrote:
> > Currently tm_reclaim() can return with a corrupted vs0 (fp0) or vs32 (v0)
> > due to the fact vs0 is used to save FPSCR and vs32 is used
It turns out the pthread functions return an error number and don't set
errno. This doesn't play well with perror().
Signed-off-by: Cyril Bur
---
.../selftests/powerpc/benchmarks/context_switch.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git
On Thu, 2017-06-29 at 20:44 -0400, Gustavo Romero wrote:
> Currently tm_reclaim() can return with a corrupted vs0 (fp0) or vs32 (v0)
> due to the fact vs0 is used to save FPSCR and vs32 is used to save VSCR.
>
> Later, we recheckpoint trusting that the live state of FP and VEC are ok
> depending
Implemented default hugepage size verification (default_hugepagesz=)
in order to allow allocation of a defined number of pages (hugepages=)
only for supported hugepage sizes.
Signed-off-by: Victor Aoqui
---
arch/powerpc/mm/hugetlbpage.c | 15 +++
1 file changed, 15
The gigantic page range received from the platform actually extends
up to (block_size * expected_pages) starting at any given address,
instead of just a single 16GB page.
Fixes: 4792adbac9eb ("powerpc: Don't use a 16G page if beyond mem= limits")
Signed-off-by: Anshuman Khandual
Hi,
Today's mainline kernel panics and the machine reboots when running the
kernel selftests on powerpc.
Test: tm/tm-signal-context-chk-vsx.c
kernel version : 4.12.0-rc7
Machine : Power 8 Bare-metal
config: attached, config_4k_pages
gcc: 4.8.5
kernel traces:
--
tm-signal-msr-r[39659]: bad
* Abdul Haleem wrote (on 2017-07-03 11:25:18 +0530):
> Hi,
>
> Today's next-20170630 on powerpc shows warnings in dmesg when SMT is
> disabled.
Fix provided by Nick: https://lkml.org/lkml/2017/6/30/143
Thanks,
Santosh
>
> Test: SMT off
> kernel:
From: Madhavan Srinivasan
This patch adds support for detection of core IMC events along with the
Nest IMC events. It adds a new domain, IMC_DOMAIN_CORE, which is determined
with the help of the "type" property in the IMC device tree.
Signed-off-by: Anju T Sudhakar
From: Madhavan Srinivasan
Parse device tree to detect IMC units. Traverse through each IMC unit
node to find supported events and corresponding unit/scale files (if any).
The device tree for IMC counters starts at the node "imc-counters".
This node contains all the IMC
The device tree IMC driver code parses the IMC units and their events. It
passes the information to the IMC PMU code, which is placed in powerpc/perf
as "imc-pmu.c".
This patch adds a set of generic IMC PMU event functions to be
used by each IMC PMU unit. Add code to set up the format attribute and to
Code to add support for thread IMC on CPU hotplug.
When a CPU goes offline, the LDBAR for that CPU is disabled, and when it comes
back online the previous LDBAR value is written back to the LDBAR for that CPU.
To register the hotplug functions for thread_imc, a new state
From: Madhavan Srinivasan
Create a new header file to add the data structures and
macros needed for In-Memory Collection (IMC) counter support.
Signed-off-by: Anju T Sudhakar
Signed-off-by: Hemant Kumar
Code to add support for detection of thread IMC events. It adds a new
domain, IMC_DOMAIN_THREAD, which is determined with the help of the
"type" property in the IMC device tree.
Signed-off-by: Anju T Sudhakar
Signed-off-by: Hemant Kumar
On 07/03/2017 06:19 AM, Benjamin Herrenschmidt wrote:
> On Mon, 2017-07-03 at 13:55 +1000, David Gibson wrote:
>>> Calls that still need to be addressed :
>>>
>>> H_INT_SET_OS_REPORTING_LINE
>>> H_INT_GET_OS_REPORTING_LINE
>>> H_INT_ESB
>>> H_INT_SYNC
>>
>> So, does this mean
Code to add PMU functions required for event initialization,
read, update, add, del etc. for thread IMC PMU. Thread IMC PMUs are used
for per-task monitoring.
For each CPU, a page of memory is allocated and kept static, i.e.
these pages will exist until the machine shuts down. The base address
On Mon, 26 Jun 2017, Jiri Slaby wrote:
> On 06/23/2017, 09:51 AM, Thomas Gleixner wrote:
> > On Wed, 21 Jun 2017, Jiri Slaby wrote:
> >> diff --git a/arch/arm64/include/asm/futex.h
> >> b/arch/arm64/include/asm/futex.h
> >> index f32b42e8725d..5bb2fd4674e7 100644
> >> ---
Power9 has In-Memory-Collection (IMC) infrastructure which contains
various Performance Monitoring Units (PMUs) at Nest level (these are
on-chip but off-core), Core level and Thread level.
From: Madhavan Srinivasan
Code to add PMU function to initialize a core IMC event. It also
adds cpumask initialization function for core IMC PMU.
Code to create platform devices for the IMC counters.
Platform devices are created based on the IMC compatibility
string.
A new config flag, "CONFIG_HV_PERF_IMC_CTRS", is added to contain the
IMC counter changes.
Signed-off-by: Anju T Sudhakar
Signed-off-by: Hemant Kumar
Adds a cpumask attribute to be used by each IMC PMU. For nest PMUs, only
one CPU (any online CPU) from each chip is designated to read counters.
On CPU hotplug, the dying CPU is checked to see whether it
From: Balbir Singh
Move xmon from mwrite() to patch_instruction() for
breakpoint addition and removal.
Signed-off-by: Balbir Singh
Signed-off-by: Michael Ellerman
---
arch/powerpc/xmon/xmon.c | 7 +--
1 file changed,
Michael Ellerman writes:
> On Thu, 2017-06-29 at 21:55:31 UTC, Thiago Jung Bauermann wrote:
>> H_GET_24X7_CATALOG_PAGE needs to be passed the version number obtained from
>> the first catalog page obtained previously. This is a 64 bit number, but
>>
Masahiro Yamada writes:
> Hi Michael,
>
> Ping. Please apply this patch.
>
> I need this to clean up Makefiles in the next development cycle.
Sorry for some reason it didn't land in patchwork, so I keep forgetting
about it.
Have merged it now for 4.13.
cheers
From: Balbir Singh
So that we can implement STRICT_RWX, use patch_instruction() in
optprobes.
Signed-off-by: Balbir Singh
Signed-off-by: Michael Ellerman
---
arch/powerpc/kernel/optprobes.c | 53
From: Balbir Singh
With hash we update the bolted PTE to mark it read-only. We rely
on MMU_FTR_KERNEL_RO to generate the correct permissions
for read-only text. The radix implementation currently just prints
a warning.
Signed-off-by: Balbir Singh
From: Balbir Singh
For CONFIG_STRICT_KERNEL_RWX, align __init_begin to 16M. We use 16M
since it's the larger of the 2M page size on radix and 16M on hash for our
linear mapping. The plan is to have .text, .rodata and everything up to
__init_begin marked as RX. Note we still have executable
From: Balbir Singh
Once upon a time there were only two PP (page protection) bits. In ISA
2.03 an additional PP bit was added, but because of the layout of the
HPTE it could not be made contiguous with the existing PP bits.
The result is that we now have three PP bits,
From: Balbir Singh
This patch creates the window using text_poke_area, allocated via
get_vm_area(). text_poke_area is per CPU to avoid locking.
The text_poke_area for each CPU is set up using a late_initcall; prior to
the setup of these alternate mapping areas, we continue to use
From: Balbir Singh
The Radix linear mapping code (create_physical_mapping()) tries to use
the largest page size it can at each step. Currently the only reason
it steps down to a smaller page size is if the start addr is
unaligned (never happens in practice), or the end of
From: Balbir Singh
arch_arm/disarm_probe() use direct assignment for copying
instructions, replace them with patch_instruction(). We don't need to
call flush_icache_range() because patch_instruction() does it for us.
Signed-off-by: Balbir Singh
From: Balbir Singh
Commit 9abcc981de97 ("powerpc/mm/radix: Only add X for pages
overlapping kernel text") changed the linear mapping on Radix to only
mark the kernel text executable.
However if the kernel is run relocated, for example as a kdump kernel,
then the exception
From: Balbir Singh
All code that patches kernel text has been moved over to using
patch_instruction() and patch_instruction() is able to cope with the
kernel text being read only.
The linker script has been updated to ensure the read only data ends
on a large page
On Tue, 2017-06-27 at 11:36 -0400, Tejun Heo wrote:
> Hello, Abdul.
>
> Sorry about the long delay.
>
> On Mon, Jun 12, 2017 at 04:53:42PM +0530, Abdul Haleem wrote:
> > linux-next kernel crashed while running CPU offline and online.
> >
> > Machine: Power 8 LPAR
> > Kernel :