[PATCH v2] PCI/AER: Handle Multi UnCorrectable/Correctable errors properly

2022-03-14 Thread Kuppuswamy Sathyanarayanan
Currently the aer_irq() handler returns IRQ_NONE when neither the PCI_ERR_ROOT_UNCOR_RCV nor the PCI_ERR_ROOT_COR_RCV bit is set. But this assumption is incorrect. Consider a scenario where aer_irq() is triggered for a correctable error, and while we process the error and before we clear the error
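
A sketch of the direction described above: the MULTI_* constants below are real pci_regs.h bits, but the mask name and the fragment are illustrative rather than the literal diff.

    /* inside aer_irq(): also accept the MULTI_* status bits, so an error
     * that arrives while an earlier one is still being cleared is not
     * mistaken for a spurious interrupt */
    #define AER_ERR_STATUS_MASK (PCI_ERR_ROOT_UNCOR_RCV |       \
                                 PCI_ERR_ROOT_COR_RCV |         \
                                 PCI_ERR_ROOT_MULTI_UNCOR_RCV | \
                                 PCI_ERR_ROOT_MULTI_COR_RCV)

            if (!(status & AER_ERR_STATUS_MASK))
                    return IRQ_NONE;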

linux-next: manual merge of the tip tree with the powerpc tree

2022-03-14 Thread Stephen Rothwell
Hi all, Today's linux-next merge of the tip tree got a conflict in: arch/powerpc/include/asm/livepatch.h between commit: a4520b252765 ("powerpc/ftrace: Add support for livepatch to PPC32") from the powerpc tree and commit: a557abfd1a16 ("x86/livepatch: Validate __fentry__ location")

Re: [PATCH 13/15] swiotlb: merge swiotlb-xen initialization into swiotlb

2022-03-14 Thread Stefano Stabellini
On Mon, 14 Mar 2022, Christoph Hellwig wrote: > Reuse the generic swiotlb initialization for xen-swiotlb. For ARM/ARM64 > this works trivially, while for x86 xen_swiotlb_fixup needs to be passed > as the remap argument to swiotlb_init_remap/swiotlb_init_late. > > Signed-off-by: Christoph Hellwig

Re: [PATCH 5/5] x86/pkeys: Standardize on u8 for pkey type

2022-03-14 Thread Dave Hansen
On 3/10/22 16:57, ira.we...@intel.com wrote: > From: Ira Weiny > > The number of pkeys supported on x86 and powerpc is much smaller than a > u16 value can hold. It is desirable to standardize on the type for > pkeys. powerpc currently supports the most pkeys at 32. u8 is plenty > large for

Re: [PATCH 14/15] swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl

2022-03-14 Thread Boris Ostrovsky
On 3/14/22 3:31 AM, Christoph Hellwig wrote: @@ -314,6 +293,7 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags) int swiotlb_init_late(size_t size, gfp_t gfp_mask, int (*remap)(void *tlb, unsigned long nslabs)) { + struct io_tlb_mem *mem =

Re: [PATCH 13/15] swiotlb: merge swiotlb-xen initialization into swiotlb

2022-03-14 Thread Boris Ostrovsky
On 3/14/22 3:31 AM, Christoph Hellwig wrote: - static void __init pci_xen_swiotlb_init(void) { if (!xen_initial_domain() && !x86_swiotlb_enable) return; x86_swiotlb_enable = true; - xen_swiotlb = true; - xen_swiotlb_init_early(); +

Re: [PATCH v7 3/5] powerpc: Rework and improve STRICT_KERNEL_RWX patching

2022-03-14 Thread Jordan Niethe
On Sat, Mar 12, 2022 at 6:30 PM Christophe Leroy wrote: > > Hi Jordan > > Le 10/11/2021 à 01:37, Jordan Niethe a écrit : > > From: "Christopher M. Riedl" > > > > Rework code-patching with STRICT_KERNEL_RWX to prepare for a later patch > > which uses a temporary mm for patching under the Book3s64

Re: [PATCH 3/4] powerpc: Handle prefixed instructions in show_user_instructions()

2022-03-14 Thread Jordan Niethe
On Wed, Feb 23, 2022 at 1:34 AM Christophe Leroy wrote: > > > > Le 02/06/2020 à 07:27, Jordan Niethe a écrit : > > Currently prefixed instructions are treated as two word instructions by > > show_user_instructions(), treat them as a single instruction. '<' and > > '>' are placed around the

[RFC v2 PATCH 5/5] powerpc/crash hp: add crash hotplug support for kexec_load

2022-03-14 Thread Sourabh Jain
The kernel changes needed to add crash hotplug support for the kexec_load system call are similar to kexec_file_load (which has already been implemented in earlier patches). Since the kexec segment array is prepared by the kexec tool in userspace, the kernel is not aware of which index the FDT

[RFC v2 PATCH 4/5] powerpc/crash hp: add crash hotplug support for kexec_file_load

2022-03-14 Thread Sourabh Jain
Two major changes are done to enable the crash CPU hotplug handler. Firstly, the kexec load path is updated to prepare the kimage for hotplug changes, and secondly, the crash hotplug handler itself is implemented. On the kexec load path, the memsz allocation of the crash FDT segment is updated to ensure that it has

[RFC v2 PATCH 2/5] powerpc/crash hp: introduce a new config option CRASH_HOTPLUG

2022-03-14 Thread Sourabh Jain
The option CRASH_HOTPLUG enables in-kernel updates to kexec segments on hotplug events. All the updates needed on the capture kernel load path, for both the kexec_load and kexec_file_load system calls, will be kept under this config. Signed-off-by: Sourabh Jain --- arch/powerpc/Kconfig | 11
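
As a rough illustration of how such a config gate would be used (the hook name below is hypothetical, not taken from the patch):

    #ifdef CONFIG_CRASH_HOTPLUG
            /* hypothetical hook: refresh the loaded crash kernel's kexec
             * segments (e.g. the FDT) in-kernel on a hotplug event */
            crash_hotplug_update_segments(kexec_crash_image);
    #endif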

[RFC v2 PATCH 3/5] powerpc/crash hp: update kimage struct

2022-03-14 Thread Sourabh Jain
Two new members fdt_index and fdt_index_valid are added in kimage struct to track the FDT kexec segment. These new members of kimage struct will help the crash hotplug handler to easily access the FDT segment from the kexec segment array. Otherwise, we have to loop through all kexec segments to
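
A minimal sketch of the two members as described (exact types and placement are assumed):

    struct kimage {
            /* ... existing members ... */

            /* index of the FDT segment in image->segment[], recorded at
             * load time so the crash hotplug handler can reach it without
             * walking every kexec segment */
            int fdt_index;
            bool fdt_index_valid;
    };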

[RFC v2 PATCH 0/5] In kernel handling of CPU hotplug events for crash kernel

2022-03-14 Thread Sourabh Jain
This patch series implements on PowerPC the crash hotplug handler introduced by the https://lkml.org/lkml/2022/2/9/1406 patch series. The Problem: Post hotplug/DLPAR events, the capture kernel holds stale information about the system. Dump collection with a stale capture kernel might end

[RFC v2 PATCH 1/5] powerpc/kexec: make update_cpus_node non-static

2022-03-14 Thread Sourabh Jain
Make the update_cpus_node function non-static and export it for usage in other kexec components. The update_cpus_node definition is moved to core_64.c so that it can be used with both kexec_load and kexec_file_load system calls. Signed-off-by: Sourabh Jain --- arch/powerpc/include/asm/kexec.h
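
The shape of the change as described (prototype assumed from the commit message):

    /* arch/powerpc/include/asm/kexec.h: previously static in
     * file_load_64.c; the definition moves to core_64.c so both the
     * kexec_load and kexec_file_load paths can refresh the /cpus nodes */
    int update_cpus_node(void *fdt);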

Re: [PATCH 12/15] swiotlb: provide swiotlb_init variants that remap the buffer

2022-03-14 Thread Boris Ostrovsky
On 3/14/22 3:31 AM, Christoph Hellwig wrote: -void __init swiotlb_init(bool addressing_limit, unsigned int flags) +void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags, + int (*remap)(void *tlb, unsigned long nslabs)) { - size_t bytes =

Re: [PATCH 10/14] powerpc/rtas: replace rtas_call_unlocked with raw_rtas_call

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:43, Nicholas Piggin wrote: > Use the same calling and rets return convention with the raw rtas > call rather than requiring callers to load and byteswap return > values out of rtas_args. > > Signed-off-by: Nicholas Piggin Despite a minor comment below Reviewed-by: Laurent

Re: [PATCH 09/14] powerpc/rtas: Leave MSR[RI] enabled over RTAS call

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:42, Nicholas Piggin wrote: > PAPR specifies that RTAS may be called with MSR[RI] enabled if the > calling context is recoverable, and RTAS will manage RI as necessary. > Call the rtas entry point with RI enabled, and add a check to ensure > the caller has RI enabled. > >

Re: [PATCH 08/14] powerpc/rtas: call enter_rtas in real-mode on 64-bit

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:41, Nicholas Piggin wrote: > This moves MSR save/restore and some real-mode juggling out of asm and > into C code, simplifying things. > > Signed-off-by: Nicholas Piggin > --- > arch/powerpc/kernel/rtas.c | 15 --- > arch/powerpc/kernel/rtas_entry.S | 32

Re: [PATCH 07/14] powerpc/rtas: PACA can be restored directly from SPRG

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:40, Nicholas Piggin wrote: > On 64-bit, PACA is saved in a SPRG so it does not need to be saved on > stack. We also don't need to mask off the top bits for real mode > addresses because the architecture does this for us. > > Signed-off-by: Nicholas Piggin Reviewed-by:

Re: [PATCH 06/14] powerpc/rtas: Load rtas entry MSR explicitly

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:39, Nicholas Piggin wrote: > Rather than adjust the current MSR value to find the rtas entry > MSR on 64-bit, load the explicit value we want as 32-bit does. > > This prevents some facilities (e.g., VEC and VSX) from being left > enabled which doesn't seem to cause a problem

Re: [PATCH 05/14] powerpc/rtas: Modernise RI clearing on 64-bit

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:38, Nicholas Piggin wrote: > mtmsrd L=1 can clear MSR[RI] without the previous MSR value; it does > not require sync; it can be moved later to before SRRs are live. > > Signed-off-by: Nicholas Piggin Reviewed-by: Laurent Dufour > --- > arch/powerpc/kernel/rtas_entry.S |

Re: [PATCH 04/14] powerpc/rtas: Call enter_rtas with MSR[EE] disabled

2022-03-14 Thread Laurent Dufour
On 08/03/2022, 14:50:37, Nicholas Piggin wrote: > Disable MSR[EE] in C code rather than asm. > > Signed-off-by: Nicholas Piggin FWIW, Reviewed-by: Laurent Dufour > --- > arch/powerpc/kernel/rtas.c | 4 > arch/powerpc/kernel/rtas_entry.S | 17 + > 2 files changed,

Re: [PATCH] powerpc/xive: fix return value of __setup handler

2022-03-14 Thread Cédric Le Goater
On 3/13/22 07:59, Randy Dunlap wrote: __setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument or
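
The pattern being fixed, sketched against the xive handler (body abbreviated; see the patch for the real diff):

    static int __init xive_off(char *arg)
    {
            xive_cmdline_disabled = true;
            return 1;       /* handled; a return of 0 would make "xive=off"
                             * show up as an unknown kernel parameter */
    }
    __setup("xive=off", xive_off);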

[PATCH v2] static_call: Don't make __static_call_return0 static

2022-03-14 Thread Christophe Leroy
System.map shows that vmlinux contains several instances of __static_call_return0(): c0004fc0 t __static_call_return0 c0011518 t __static_call_return0 c00d8160 t __static_call_return0 arch_static_call_transform() uses the middle one to check whether we are setting a call

[PATCH v1 2/2] static_call: Remove __DEFINE_STATIC_CALL macro

2022-03-14 Thread Christophe Leroy
Only DEFINE_STATIC_CALL uses the __DEFINE_STATIC_CALL macro now, when CONFIG_HAVE_STATIC_CALL is selected. Keep __DEFINE_STATIC_CALL() only for the generic fallback, and also use it to implement DEFINE_STATIC_CALL_NULL() in that case. Signed-off-by: Christophe Leroy --- include/linux/static_call.h |

[PATCH v1 1/2] static_call: Properly initialise DEFINE_STATIC_CALL_RET0()

2022-03-14 Thread Christophe Leroy
When a static call is updated with __static_call_return0() as target, arch_static_call_transform() set it to use an optimised set of instructions which are meant to lay in the same cacheline. But when initialising a static call with DEFINE_STATIC_CALL_RET0(), we get a branch to the real
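
For context, a hedged sketch of the usage pattern in question (the might_resched example follows kernel/sched of this era):

    /* declare a static call whose initial target is "return 0"; the fix
     * makes this initialisation emit the optimised ret0 sequence instead
     * of a plain branch to the real __static_call_return0() body */
    DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);

    /* call sites are patched in place when the target changes */
    static_call(might_resched)();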

[PATCH 15/15] x86: remove cruft from <asm/dma-mapping.h>

2022-03-14 Thread Christoph Hellwig
<asm/dma-mapping.h> gets pulled in by all drivers using the DMA API. Remove x86-internal variables and unnecessary includes from it. Signed-off-by: Christoph Hellwig --- arch/x86/include/asm/dma-mapping.h | 11 --- arch/x86/include/asm/iommu.h | 2 ++ 2 files changed, 2 insertions(+), 11

[PATCH 14/15] swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl

2022-03-14 Thread Christoph Hellwig
No users left. Signed-off-by: Christoph Hellwig --- include/linux/swiotlb.h | 2 - kernel/dma/swiotlb.c | 85 +++-- 2 files changed, 30 insertions(+), 57 deletions(-) diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h index

[PATCH 13/15] swiotlb: merge swiotlb-xen initialization into swiotlb

2022-03-14 Thread Christoph Hellwig
Reuse the generic swiotlb initialization for xen-swiotlb. For ARM/ARM64 this works trivially, while for x86 xen_swiotlb_fixup needs to be passed as the remap argument to swiotlb_init_remap/swiotlb_init_late. Signed-off-by: Christoph Hellwig --- arch/arm/xen/mm.c | 21 +++---

[PATCH 12/15] swiotlb: provide swiotlb_init variants that remap the buffer

2022-03-14 Thread Christoph Hellwig
To share more code between swiotlb and xen-swiotlb, offer a swiotlb_init_remap interface and add a remap callback to swiotlb_init_late that will allow Xen to remap the buffer without duplicating much of the logic. Signed-off-by: Christoph Hellwig --- arch/x86/pci/sta2x11-fixup.c |
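
A sketch of the interface as described (the callback body is a placeholder; the flag name comes from patch 09 of this series):

    /* remap the freshly allocated bounce buffer, e.g. Xen passes
     * xen_swiotlb_fixup() on x86; return 0 on success */
    static int my_remap(void *tlb, unsigned long nslabs)
    {
            return 0;
    }

    void __init my_arch_mem_init(void)
    {
            swiotlb_init_remap(true, SWIOTLB_VERBOSE, my_remap);
    }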

[PATCH 11/15] swiotlb: pass a gfp_mask argument to swiotlb_init_late

2022-03-14 Thread Christoph Hellwig
Let the caller choose a zone to allocate from. This will be used later on by the xen-swiotlb initialization on arm. Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual --- arch/x86/pci/sta2x11-fixup.c | 2 +- include/linux/swiotlb.h | 2 +- kernel/dma/swiotlb.c | 7

[PATCH 10/15] swiotlb: add a SWIOTLB_ANY flag to lift the low memory restriction

2022-03-14 Thread Christoph Hellwig
Power SVM wants to allocate a swiotlb buffer that is not restricted to low memory for the trusted hypervisor scheme. Consolidate the support for this into the swiotlb_init interface by adding a new flag. Signed-off-by: Christoph Hellwig --- arch/powerpc/include/asm/svm.h | 4

[PATCH 09/15] swiotlb: make the swiotlb_init interface more useful

2022-03-14 Thread Christoph Hellwig
Pass a bool to indicate whether swiotlb needs to be enabled based on the addressing needs, and replace the verbose argument with a set of flags, including one to force-enable bounce buffering. Note that this patch removes the possibility to force xen-swiotlb use using swiotlb=force on the command line on
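
A sketch of a caller under the new interface (the addressing check is an illustrative stand-in, not lifted from the patch):

    /* request bounce buffering only when some RAM is not 32-bit
     * addressable; SWIOTLB_VERBOSE replaces the old verbose argument and
     * SWIOTLB_FORCE would force bouncing for every mapping */
    swiotlb_init(max_pfn > MAX_DMA32_PFN, SWIOTLB_VERBOSE);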

[PATCH 08/15] x86: centralize setting SWIOTLB_FORCE when guest memory encryption is enabled

2022-03-14 Thread Christoph Hellwig
Move enabling SWIOTLB_FORCE for guest memory encryption into common code. Signed-off-by: Christoph Hellwig --- arch/x86/kernel/cpu/mshyperv.c | 8 arch/x86/kernel/pci-dma.c | 8 arch/x86/mm/mem_encrypt_amd.c | 3 --- 3 files changed, 8 insertions(+), 11 deletions(-)

[PATCH 07/15] x86: remove the IOMMU table infrastructure

2022-03-14 Thread Christoph Hellwig
The IOMMU table tries to separate the different IOMMUs into different backends, but actually requires various cross calls. Rewrite the code to do the generic swiotlb/swiotlb-xen setup directly in pci-dma.c and then just call into the IOMMU drivers. Signed-off-by: Christoph Hellwig ---

[PATCH 06/15] MIPS/octeon: use swiotlb_init instead of open coding it

2022-03-14 Thread Christoph Hellwig
Use the generic swiotlb initialization helper instead of open coding it. Signed-off-by: Christoph Hellwig Acked-by: Thomas Bogendoerfer --- arch/mips/cavium-octeon/dma-octeon.c | 15 ++- arch/mips/pci/pci-octeon.c | 2 +- 2 files changed, 3 insertions(+), 14 deletions(-)

[PATCH 05/15] arm/xen: don't check for xen_initial_domain() in xen_create_contiguous_region

2022-03-14 Thread Christoph Hellwig
From: Stefano Stabellini It used to be that Linux enabled swiotlb-xen when running a dom0 on ARM. Since f5079a9a2a31 "xen/arm: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped", Linux detects whether to enable or disable swiotlb-xen based on the new feature flags:

[PATCH 04/15] swiotlb: rename swiotlb_late_init_with_default_size

2022-03-14 Thread Christoph Hellwig
swiotlb_late_init_with_default_size is an overly verbose name that doesn't even catch what the function is doing, given that the size is not just a default but the actual requested size. Rename it to swiotlb_init_late. Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual ---

[PATCH 03/15] swiotlb: simplify swiotlb_max_segment

2022-03-14 Thread Christoph Hellwig
Remove the bogus Xen override that was usually larger than the actual size and just calculate the value on demand. Note that swiotlb_max_segment still doesn't make sense as an interface and should eventually be removed. Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual ---

[PATCH 02/15] swiotlb: make swiotlb_exit a no-op if SWIOTLB_FORCE is set

2022-03-14 Thread Christoph Hellwig
If force bouncing is enabled we can't release the buffers. Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual --- kernel/dma/swiotlb.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 908eac2527cb1..af9d257501a64 100644 ---
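
The guard as described, sketched with the body abbreviated:

    void swiotlb_exit(void)
    {
            /* forced bouncing means the buffer is in unconditional use
             * and must not be released */
            if (swiotlb_force == SWIOTLB_FORCE)
                    return;
            /* ... free the bounce buffer as before ... */
    }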

[PATCH 01/15] dma-direct: use is_swiotlb_active in dma_direct_map_page

2022-03-14 Thread Christoph Hellwig
Use the more specific is_swiotlb_active check instead of checking the global swiotlb_force variable. Signed-off-by: Christoph Hellwig Reviewed-by: Anshuman Khandual --- kernel/dma/direct.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/dma/direct.h

cleanup swiotlb initialization v5

2022-03-14 Thread Christoph Hellwig
Hi all, this series tries to clean up the swiotlb initialization, including that of swiotlb-xen. To get there, it also removes the x86 IOMMU table infrastructure that massively obfuscates the initialization path. Git tree: git://git.infradead.org/users/hch/misc.git swiotlb-init-cleanup

[PATCH] static_call: Don't make __static_call_return0 static

2022-03-14 Thread Christophe Leroy
System.map shows that vmlinux contains several instances of __static_call_return0(): c0004fc0 t __static_call_return0 c0011518 t __static_call_return0 c00d8160 t __static_call_return0 arch_static_call_transform() uses the middle one to check whether we are setting a call