Re: [PATCH v3 11/15] mm/memory: factor out copying the actual PTE in copy_present_pte()
On Mon, Jan 29, 2024 at 01:46:45PM +0100, David Hildenbrand wrote:
> Let's prepare for further changes.
>
> Reviewed-by: Ryan Roberts
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  mm/memory.c | 63 +++++++++++++++++++++++++++++++++------------------------------
>  1 file changed, 33 insertions(+), 30 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 8d14ba440929..a3bdb25f4c8d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -930,6 +930,29 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  	return 0;
>  }
>
> +static inline void __copy_present_pte(struct vm_area_struct *dst_vma,
> +		struct vm_area_struct *src_vma, pte_t *dst_pte, pte_t *src_pte,
> +		pte_t pte, unsigned long addr)
> +{
> +	struct mm_struct *src_mm = src_vma->vm_mm;
> +
> +	/* If it's a COW mapping, write protect it both processes. */
> +	if (is_cow_mapping(src_vma->vm_flags) && pte_write(pte)) {
> +		ptep_set_wrprotect(src_mm, addr, src_pte);
> +		pte = pte_wrprotect(pte);
> +	}
> +
> +	/* If it's a shared mapping, mark it clean in the child. */
> +	if (src_vma->vm_flags & VM_SHARED)
> +		pte = pte_mkclean(pte);
> +	pte = pte_mkold(pte);
> +
> +	if (!userfaultfd_wp(dst_vma))
> +		pte = pte_clear_uffd_wp(pte);
> +
> +	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
> +}
> +
>  /*
>   * Copy one pte.  Returns 0 if succeeded, or -EAGAIN if one preallocated page
>   * is required to copy this pte.
> @@ -939,23 +962,23 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>  		 struct folio **prealloc)
>  {
> -	struct mm_struct *src_mm = src_vma->vm_mm;
> -	unsigned long vm_flags = src_vma->vm_flags;
>  	pte_t pte = ptep_get(src_pte);
>  	struct page *page;
>  	struct folio *folio;
>
>  	page = vm_normal_page(src_vma, addr, pte);
> -	if (page)
> -		folio = page_folio(page);
> -	if (page && folio_test_anon(folio)) {
> +	if (unlikely(!page))
> +		goto copy_pte;
> +
> +	folio = page_folio(page);
> +	folio_get(folio);
> +	if (folio_test_anon(folio)) {
>  		/*
>  		 * If this page may have been pinned by the parent process,
>  		 * copy the page immediately for the child so that we'll always
>  		 * guarantee the pinned page won't be randomly replaced in the
>  		 * future.
>  		 */
> -		folio_get(folio);
>  		if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, src_vma))) {
>  			/* Page may be pinned, we have to copy. */
>  			folio_put(folio);
>  			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
> @@ -963,34 +986,14 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  					addr, rss, prealloc, page);
>  		}
>  		rss[MM_ANONPAGES]++;
> -	} else if (page) {
> -		folio_get(folio);
> +		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
> +	} else {
>  		folio_dup_file_rmap_pte(folio, page);
>  		rss[mm_counter_file(folio)]++;
>  	}
>
> -	/*
> -	 * If it's a COW mapping, write protect it both
> -	 * in the parent and the child
> -	 */
> -	if (is_cow_mapping(vm_flags) && pte_write(pte)) {
> -		ptep_set_wrprotect(src_mm, addr, src_pte);
> -		pte = pte_wrprotect(pte);
> -	}
> -	VM_BUG_ON(page && folio_test_anon(folio) && PageAnonExclusive(page));
> -
> -	/*
> -	 * If it's a shared mapping, mark it clean in
> -	 * the child
> -	 */
> -	if (vm_flags & VM_SHARED)
> -		pte = pte_mkclean(pte);
> -	pte = pte_mkold(pte);
> -
> -	if (!userfaultfd_wp(dst_vma))
> -		pte = pte_clear_uffd_wp(pte);
> -
> -	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
> +copy_pte:
> +	__copy_present_pte(dst_vma, src_vma, dst_pte, src_pte, pte, addr);
>  	return 0;
>  }
>
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 12/15] mm/memory: pass PTE to copy_present_pte()
On Mon, Jan 29, 2024 at 01:46:46PM +0100, David Hildenbrand wrote:
> We already read it, let's just forward it.
>
> This patch is based on work by Ryan Roberts.
>
> Reviewed-by: Ryan Roberts
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  mm/memory.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index a3bdb25f4c8d..41b24da5be38 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -959,10 +959,9 @@ static inline void __copy_present_pte(struct vm_area_struct *dst_vma,
>   */
>  static inline int
>  copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> -		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
> -		 struct folio **prealloc)
> +		 pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
> +		 int *rss, struct folio **prealloc)
>  {
> -	pte_t pte = ptep_get(src_pte);
>  	struct page *page;
>  	struct folio *folio;
>
> @@ -1103,7 +1102,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  		}
>  		/* copy_present_pte() will clear `*prealloc' if consumed */
>  		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
> -				       addr, rss, &prealloc);
> +				       ptent, addr, rss, &prealloc);
>  		/*
>  		 * If we need a pre-allocated page for this pte, drop the
>  		 * locks, allocate, and try again.
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 10/15] powerpc/mm: use pte_next_pfn() in set_ptes()
On Mon, Jan 29, 2024 at 01:46:44PM +0100, David Hildenbrand wrote:
> Let's use our handy new helper. Note that the implementation is slightly
> different, but shouldn't really make a difference in practice.
>
> Reviewed-by: Christophe Leroy
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/powerpc/mm/pgtable.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index a04ae4449a02..549a440ed7f6 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -220,10 +220,7 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>  			break;
>  		ptep++;
>  		addr += PAGE_SIZE;
> -		/*
> -		 * increment the pfn.
> -		 */
> -		pte = pfn_pte(pte_pfn(pte) + 1, pte_pgprot((pte)));
> +		pte = pte_next_pfn(pte);
>  	}
>  }
>
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 09/15] arm/mm: use pte_next_pfn() in set_ptes()
On Mon, Jan 29, 2024 at 01:46:43PM +0100, David Hildenbrand wrote:
> Let's use our handy helper now that it's available on all archs.
>
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/arm/mm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 674ed71573a8..c24e29c0b9a4 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1814,6 +1814,6 @@ void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		if (--nr == 0)
>  			break;
>  		ptep++;
> -		pte_val(pteval) += PAGE_SIZE;
> +		pteval = pte_next_pfn(pteval);
>  	}
>  }
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 08/15] mm/pgtable: make pte_next_pfn() independent of set_ptes()
On Mon, Jan 29, 2024 at 01:46:42PM +0100, David Hildenbrand wrote:
> Let's provide pte_next_pfn(), independently of set_ptes(). This allows for
> using the generic pte_next_pfn() version in some arch-specific set_ptes()
> implementations, and prepares for reusing pte_next_pfn() in other context.
>
> Reviewed-by: Christophe Leroy
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  include/linux/pgtable.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index f6d0e3513948..351cd9dc7194 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -212,7 +212,6 @@ static inline int pmd_dirty(pmd_t pmd)
>  #define arch_flush_lazy_mmu_mode()	do {} while (0)
>  #endif
>
> -#ifndef set_ptes
>
>  #ifndef pte_next_pfn
>  static inline pte_t pte_next_pfn(pte_t pte)
> @@ -221,6 +220,7 @@ static inline pte_t pte_next_pfn(pte_t pte)
>  }
>  #endif
>
> +#ifndef set_ptes
>  /**
>   * set_ptes - Map consecutive pages to a contiguous range of addresses.
>   * @mm:		Address space to map the pages into.
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 07/15] sparc/pgtable: define PFN_PTE_SHIFT
On Mon, Jan 29, 2024 at 01:46:41PM +0100, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/sparc/include/asm/pgtable_64.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index a8c871b7d786..652af9d63fa2 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -929,6 +929,8 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
>  	maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm, PAGE_SHIFT);
>  }
>
> +#define PFN_PTE_SHIFT	PAGE_SHIFT
> +
>  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  			    pte_t *ptep, pte_t pte, unsigned int nr)
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 06/15] s390/pgtable: define PFN_PTE_SHIFT
On Mon, Jan 29, 2024 at 01:46:40PM +0100, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/s390/include/asm/pgtable.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 1299b56e43f6..4b91e65c85d9 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1316,6 +1316,8 @@ pgprot_t pgprot_writecombine(pgprot_t prot);
>  #define pgprot_writethrough	pgprot_writethrough
>  pgprot_t pgprot_writethrough(pgprot_t prot);
>
> +#define PFN_PTE_SHIFT	PAGE_SHIFT
> +
>  /*
>   * Set multiple PTEs to consecutive pages with a single call.  All PTEs
>   * are within the same folio, PMD and VMA.
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 05/15] riscv/pgtable: define PFN_PTE_SHIFT
On Mon, Jan 29, 2024 at 01:46:39PM +0100, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>
> Reviewed-by: Alexandre Ghiti
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/riscv/include/asm/pgtable.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 0c94260b5d0c..add5cd30ab34 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -523,6 +523,8 @@ static inline void __set_pte_at(pte_t *ptep, pte_t pteval)
>  	set_pte(ptep, pteval);
>  }
>
> +#define PFN_PTE_SHIFT	_PAGE_PFN_SHIFT
> +
>  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		pte_t *ptep, pte_t pteval, unsigned int nr)
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 04/15] powerpc/pgtable: define PFN_PTE_SHIFT
On Mon, Jan 29, 2024 at 01:46:38PM +0100, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>
> Reviewed-by: Christophe Leroy
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/powerpc/include/asm/pgtable.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 9224f23065ff..7a1ba8889aea 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -41,6 +41,8 @@ struct mm_struct;
>
>  #ifndef __ASSEMBLY__
>
> +#define PFN_PTE_SHIFT	PTE_RPN_SHIFT
> +
>  void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>  	      pte_t pte, unsigned int nr);
>  #define set_ptes set_ptes
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 03/15] nios2/pgtable: define PFN_PTE_SHIFT
On Mon, Jan 29, 2024 at 01:46:37PM +0100, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/nios2/include/asm/pgtable.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
> index 5144506dfa69..d052dfcbe8d3 100644
> --- a/arch/nios2/include/asm/pgtable.h
> +++ b/arch/nios2/include/asm/pgtable.h
> @@ -178,6 +178,8 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
>  	*ptep = pteval;
>  }
>
> +#define PFN_PTE_SHIFT	0
> +
>  static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  		pte_t *ptep, pte_t pte, unsigned int nr)
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 02/15] arm/pgtable: define PFN_PTE_SHIFT
On Mon, Jan 29, 2024 at 01:46:36PM +0100, David Hildenbrand wrote:
> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/arm/include/asm/pgtable.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
> index d657b84b6bf7..be91e376df79 100644
> --- a/arch/arm/include/asm/pgtable.h
> +++ b/arch/arm/include/asm/pgtable.h
> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>  extern void __sync_icache_dcache(pte_t pteval);
>  #endif
>
> +#define PFN_PTE_SHIFT	PAGE_SHIFT
> +
>  void set_ptes(struct mm_struct *mm, unsigned long addr,
>  	      pte_t *ptep, pte_t pteval, unsigned int nr);
>  #define set_ptes set_ptes
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH v3 01/15] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary
On Mon, Jan 29, 2024 at 01:46:35PM +0100, David Hildenbrand wrote:
> From: Ryan Roberts
>
> Since the high bits [51:48] of an OA are not stored contiguously in the
> PTE, there is a theoretical bug in set_ptes(), which just adds PAGE_SIZE
> to the pte to get the pte with the next pfn. This works until the pfn
> crosses the 48-bit boundary, at which point we overflow into the upper
> attributes.
>
> Of course one could argue (and Matthew Wilcox has :) that we will never
> see a folio cross this boundary because we only allow naturally aligned
> power-of-2 allocation, so this would require a half-petabyte folio. So
> it's only a theoretical bug. But it's better that the code is robust
> regardless.
>
> I've implemented pte_next_pfn() as part of the fix, which is an opt-in
> core-mm interface. So that is now available to the core-mm, which will
> be needed shortly to support forthcoming fork()-batching optimizations.
>
> Link: https://lkml.kernel.org/r/20240125173534.1659317-1-ryan.robe...@arm.com
> Fixes: 4a169d61c2ed ("arm64: implement the new page table range API")
> Closes: https://lore.kernel.org/linux-mm/fdaeb9a5-d890-499a-92c8-d171df43a...@arm.com/
> Signed-off-by: Ryan Roberts
> Reviewed-by: Catalin Marinas
> Reviewed-by: David Hildenbrand
> Signed-off-by: David Hildenbrand

Reviewed-by: Mike Rapoport (IBM)

> ---
>  arch/arm64/include/asm/pgtable.h | 28 +++++++++++++++++-----------
>  1 file changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index b50270107e2f..9428801c1040 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -341,6 +341,22 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>  		mte_sync_tags(pte, nr_pages);
>  }
>
> +/*
> + * Select all bits except the pfn
> + */
> +static inline pgprot_t pte_pgprot(pte_t pte)
> +{
> +	unsigned long pfn = pte_pfn(pte);
> +
> +	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
> +}
> +
> +#define pte_next_pfn pte_next_pfn
> +static inline pte_t pte_next_pfn(pte_t pte)
> +{
> +	return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
> +}
> +
>  static inline void set_ptes(struct mm_struct *mm,
>  			    unsigned long __always_unused addr,
>  			    pte_t *ptep, pte_t pte, unsigned int nr)
> @@ -354,7 +370,7 @@ static inline void set_ptes(struct mm_struct *mm,
>  		if (--nr == 0)
>  			break;
>  		ptep++;
> -		pte_val(pte) += PAGE_SIZE;
> +		pte = pte_next_pfn(pte);
>  	}
>  }
>  #define set_ptes set_ptes
> @@ -433,16 +449,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
>  	return clear_pte_bit(pte, __pgprot(PTE_SWP_EXCLUSIVE));
>  }
>
> -/*
> - * Select all bits except the pfn
> - */
> -static inline pgprot_t pte_pgprot(pte_t pte)
> -{
> -	unsigned long pfn = pte_pfn(pte);
> -
> -	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
> -}
> -
>  #ifdef CONFIG_NUMA_BALANCING
>  /*
>   * See the comment in include/linux/pgtable.h
> --
> 2.43.0

--
Sincerely yours,
Mike.
Re: [PATCH] powerpc/pseries/iommu: DLPAR ADD of pci device doesn't completely initialize pci_controller structure
Hello All,

There is still some issue even after applying the patch. This is not a
complete fix. I am working on V3 of the patch.

Please do not merge this patch upstream.

Thanks,

Gaurav

On 1/10/24 4:53 PM, Gaurav Batra wrote:

When a PCI device is dynamically added, the LPAR OOPSes with a NULL
pointer exception. The complete stack is as below:

[  211.239206] BUG: Kernel NULL pointer dereference on read at 0x0030
[  211.239210] Faulting instruction address: 0xc06bbe5c
[  211.239214] Oops: Kernel access of bad area, sig: 11 [#1]
[  211.239218] LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA pSeries
[  211.239223] Modules linked in: rpadlpar_io rpaphp rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache netfs xsk_diag bonding nft_compat nf_tables nfnetlink rfkill binfmt_misc dm_multipath rpcrdma sunrpc rdma_ucm ib_srpt ib_isert iscsi_target_mod target_core_mod ib_umad ib_iser libiscsi scsi_transport_iscsi ib_ipoib rdma_cm iw_cm ib_cm mlx5_ib ib_uverbs ib_core pseries_rng drm drm_panel_orientation_quirks xfs libcrc32c mlx5_core mlxfw sd_mod t10_pi sg tls ibmvscsi ibmveth scsi_transport_srp vmx_crypto pseries_wdt psample dm_mirror dm_region_hash dm_log dm_mod fuse
[  211.239280] CPU: 17 PID: 2685 Comm: drmgr Not tainted 6.7.0-203405+ #66
[  211.239284] Hardware name: IBM,9080-HEX POWER10 (raw) 0x800200 0xf06 of:IBM,FW1060.00 (NH1060_008) hv:phyp pSeries
[  211.239289] NIP: c06bbe5c LR: c0a13e68 CTR: c00579f8
[  211.239293] REGS: c0009924f240 TRAP: 0300 Not tainted (6.7.0-203405+)
[  211.239298] MSR: 80009033 CR: 24002220 XER: 20040006
[  211.239306] CFAR: c0a13e64 DAR: 0030 DSISR: 4000 IRQMASK: 0
[  211.239306] GPR00: c0a13e68 c0009924f4e0 c15a2b00
[  211.239306] GPR04: c13c5590 c6d07970 c000d8f8f180
[  211.239306] GPR08: 06ec c000d8f8f180 c2c35d58 24002228
[  211.239306] GPR12: c00579f8 c003ffeb3880
[  211.239306] GPR16:
[  211.239306] GPR20:
[  211.239306] GPR24: c000919460c0 f000 c10088e8
[  211.239306] GPR28: c13c5590 c6d07970 c000919460c0 c000919460c0
[  211.239354] NIP [c06bbe5c] sysfs_add_link_to_group+0x34/0x94
[  211.239361] LR [c0a13e68] iommu_device_link+0x5c/0x118
[  211.239367] Call Trace:
[  211.239369] [c0009924f4e0] [c0a109b8] iommu_init_device+0x26c/0x318 (unreliable)
[  211.239376] [c0009924f520] [c0a13e68] iommu_device_link+0x5c/0x118
[  211.239382] [c0009924f560] [c0a107f4] iommu_init_device+0xa8/0x318
[  211.239387] [c0009924f5c0] [c0a11a08] iommu_probe_device+0xc0/0x134
[  211.239393] [c0009924f600] [c0a11ac0] iommu_bus_notifier+0x44/0x104
[  211.239398] [c0009924f640] [c018dcc0] notifier_call_chain+0xb8/0x19c
[  211.239405] [c0009924f6a0] [c018df88] blocking_notifier_call_chain+0x64/0x98
[  211.239411] [c0009924f6e0] [c0a250fc] bus_notify+0x50/0x7c
[  211.239416] [c0009924f720] [c0a20838] device_add+0x640/0x918
[  211.239421] [c0009924f7f0] [c08f1a34] pci_device_add+0x23c/0x298
[  211.239427] [c0009924f840] [c0077460] of_create_pci_dev+0x400/0x884
[  211.239432] [c0009924f8e0] [c0077a08] of_scan_pci_dev+0x124/0x1b0
[  211.239437] [c0009924f980] [c0077b0c] __of_scan_bus+0x78/0x18c
[  211.239442] [c0009924fa10] [c0073f90] pcibios_scan_phb+0x2a4/0x3b0
[  211.239447] [c0009924fad0] [c01007a8] init_phb_dynamic+0xb8/0x110
[  211.239453] [c0009924fb40] [c00806920620] dlpar_add_slot+0x170/0x3b8 [rpadlpar_io]
[  211.239461] [c0009924fbe0] [c00806920d64] add_slot_store.part.0+0xb4/0x130 [rpadlpar_io]
[  211.239468] [c0009924fc70] [c0fb4144] kobj_attr_store+0x2c/0x48
[  211.239473] [c0009924fc90] [c06b90e4] sysfs_kf_write+0x64/0x78
[  211.239479] [c0009924fcb0] [c06b7b78] kernfs_fop_write_iter+0x1b0/0x290
[  211.239485] [c0009924fd00] [c05b6fdc] vfs_write+0x350/0x4a0
[  211.239491] [c0009924fdc0] [c05b7450] ksys_write+0x84/0x140
[  211.239496] [c0009924fe10] [c0030a04] system_call_exception+0x124/0x330
[  211.239502] [c0009924fe50] [c000cedc] system_call_vectored_common+0x15c/0x2ec

Commit a940904443e4 ("powerpc/iommu: Add iommu_ops to report capabilities
and allow blocking domains") broke DLPAR ADD of pci devices.

The above commit added an iommu_device structure to pci_controller. During
system boot, pci devices are discovered and this newly added iommu_device
structure is initialized by a call to iommu_device_register(). During DLPAR
ADD of a
Re: [PATCH v2 2/4] eventfd: simplify eventfd_signal()
On Wed, Nov 22, 2023 at 1:49 PM Christian Brauner wrote:
>
> Ever since the eventfd type was introduced back in 2007 in commit
> e1ad7468c77d ("signal/timer/event: eventfd core") the eventfd_signal()
> function only ever passed 1 as a value for @n. There's no point in
> keeping that additional argument.
>
> Signed-off-by: Christian Brauner
> ---
>  arch/x86/kvm/hyperv.c | 2 +-
>  arch/x86/kvm/xen.c    | 2 +-
>  virt/kvm/eventfd.c    | 4 ++--
>  30 files changed, 60 insertions(+), 63 deletions(-)

For KVM:

Acked-by: Paolo Bonzini
Re: [PATCH v2 2/4] eventfd: simplify eventfd_signal()
For vfio_ap_ops.c

Reviewed-by: Anthony Krowiak

On 2/6/24 2:44 PM, Stefan Hajnoczi wrote:

vhost and VIRTIO-related parts:

Reviewed-by: Stefan Hajnoczi

On Wed, 22 Nov 2023 at 07:50, Christian Brauner wrote:

Ever since the eventfd type was introduced back in 2007 in commit
e1ad7468c77d ("signal/timer/event: eventfd core") the eventfd_signal()
function only ever passed 1 as a value for @n. There's no point in
keeping that additional argument.

Signed-off-by: Christian Brauner
---
 arch/x86/kvm/hyperv.c                     |  2 +-
 arch/x86/kvm/xen.c                        |  2 +-
 drivers/accel/habanalabs/common/device.c  |  2 +-
 drivers/fpga/dfl.c                        |  2 +-
 drivers/gpu/drm/drm_syncobj.c             |  6 +++---
 drivers/gpu/drm/i915/gvt/interrupt.c      |  2 +-
 drivers/infiniband/hw/mlx5/devx.c         |  2 +-
 drivers/misc/ocxl/file.c                  |  2 +-
 drivers/s390/cio/vfio_ccw_chp.c           |  2 +-
 drivers/s390/cio/vfio_ccw_drv.c           |  4 ++--
 drivers/s390/cio/vfio_ccw_ops.c           |  6 +++---
 drivers/s390/crypto/vfio_ap_ops.c         |  2 +-
 drivers/usb/gadget/function/f_fs.c        |  4 ++--
 drivers/vdpa/vdpa_user/vduse_dev.c        |  6 +++---
 drivers/vfio/fsl-mc/vfio_fsl_mc_intr.c    |  2 +-
 drivers/vfio/pci/vfio_pci_core.c          |  6 +++---
 drivers/vfio/pci/vfio_pci_intrs.c         | 12 ++--
 drivers/vfio/platform/vfio_platform_irq.c |  4 ++--
 drivers/vhost/vdpa.c                      |  4 ++--
 drivers/vhost/vhost.c                     | 10 +-
 drivers/vhost/vhost.h                     |  2 +-
 drivers/virt/acrn/ioeventfd.c             |  2 +-
 drivers/xen/privcmd.c                     |  2 +-
 fs/aio.c                                  |  2 +-
 fs/eventfd.c                              |  9 +++--
 include/linux/eventfd.h                   |  4 ++--
 mm/memcontrol.c                           | 10 +-
 mm/vmpressure.c                           |  2 +-
 samples/vfio-mdev/mtty.c                  |  4 ++--
 virt/kvm/eventfd.c                        |  4 ++--
 30 files changed, 60 insertions(+), 63 deletions(-)

diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 238afd7335e4..4943f6b2bbee 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2388,7 +2388,7 @@ static u16 kvm_hvcall_signal_event(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *h
 	if (!eventfd)
 		return HV_STATUS_INVALID_PORT_ID;

-	eventfd_signal(eventfd, 1);
+	eventfd_signal(eventfd);
 	return HV_STATUS_SUCCESS;
 }

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index e53fad915a62..523bb6df5ac9 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -2088,7 +2088,7 @@ static bool kvm_xen_hcall_evtchn_send(struct kvm_vcpu *vcpu, u64 param, u64 *r)
 		if (ret < 0 && ret != -ENOTCONN)
 			return false;
 	} else {
-		eventfd_signal(evtchnfd->deliver.eventfd.ctx, 1);
+		eventfd_signal(evtchnfd->deliver.eventfd.ctx);
 	}

 	*r = 0;
diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
index 9711e8fc979d..3a89644f087c 100644
--- a/drivers/accel/habanalabs/common/device.c
+++ b/drivers/accel/habanalabs/common/device.c
@@ -2044,7 +2044,7 @@ static void hl_notifier_event_send(struct hl_notifier_event *notifier_event, u64
 	notifier_event->events_mask |= event_mask;

 	if (notifier_event->eventfd)
-		eventfd_signal(notifier_event->eventfd, 1);
+		eventfd_signal(notifier_event->eventfd);

 	mutex_unlock(&notifier_event->lock);
 }
diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
index dd7a783d53b5..e73f88050f08 100644
--- a/drivers/fpga/dfl.c
+++ b/drivers/fpga/dfl.c
@@ -1872,7 +1872,7 @@ static irqreturn_t dfl_irq_handler(int irq, void *arg)
 {
 	struct eventfd_ctx *trigger = arg;

-	eventfd_signal(trigger, 1);
+	eventfd_signal(trigger);
 	return IRQ_HANDLED;
 }
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index 01da6789d044..b9cc62982196 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -1365,7 +1365,7 @@ static void syncobj_eventfd_entry_fence_func(struct dma_fence *fence,
 	struct syncobj_eventfd_entry *entry =
 		container_of(cb, struct syncobj_eventfd_entry, fence_cb);

-	eventfd_signal(entry->ev_fd_ctx, 1);
+	eventfd_signal(entry->ev_fd_ctx);
 	syncobj_eventfd_entry_free(entry);
 }

@@ -1388,13 +1388,13 @@ syncobj_eventfd_entry_func(struct drm_syncobj *syncobj,
 	entry->fence = fence;

 	if (entry->flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE) {
-		eventfd_signal(entry->ev_fd_ctx, 1);
+		eventfd_signal(entry->ev_fd_ctx);
 		syncobj_eventfd_entry_free(entry);
 	} else {
 		ret =
Re: [PATCH 2/2] powerpc/pseries: Set CPU_FTR_DBELL according to ibm,pi-features
Nicholas Piggin writes:

> PAPR will define a new ibm,pi-features bit which says that doorbells
> should not be used even on architectures where they exist. This could be
> because they are emulated and slower than using the interrupt controller
> directly for IPIs.
>
> Wire this bit into the pi-features parser to clear CPU_FTR_DBELL, and
> ensure CPU_FTR_DBELL is not in CPU_FTRS_ALWAYS.
>
> Signed-off-by: Nicholas Piggin
> ---

Tested this patch on a PPC64-LE lpar together with the patch [1] and saw
the relevant CPU_FTR_DBELL bit in `cur_cpu_spec->cpu_features` getting
cleared.

[1] https://lore.kernel.org/all/20240207035220.339726-1-npig...@gmail.com

Hence,

Tested-by: Vaibhav Jain

--
Cheers
~ Vaibhav
Re: [PATCH 1/2] powerpc/pseries: Add a clear modifier to ibm,pa/pi-features parser
Nicholas Piggin writes:

> When a new ibm,pa/pi-features bit is introduced that is intended to
> apply to existing systems and features, it may have an "inverted"
> meaning (i.e., bit clear => feature available; bit set => unavailable).
> Depending on the nature of the feature, this may give the best
> backward compatibility result, where old firmware will continue to
> have that bit clear and therefore the feature available.
>
> The 'invert' modifier presumably was introduced for this type of
> feature bit. However, 'invert' will set the feature if the bit is
> clear, which prevents it being used in the situation where an old
> CPU lacks a feature that a new CPU has, and a new firmware then comes
> out to disable that feature on the new CPU when the bit is set.
> Adding an 'invert' entry for that feature would incorrectly enable
> it for the old CPU.
>
> So add a 'clear' modifier that clears the feature if the bit is set,
> but does not set the feature if the bit is clear. The feature is
> expected to be set in the cpu table.
>
> This replaces the 'invert' modifier, which is unused since commit
> 7d4703455168 ("powerpc/feature: Remove CPU_FTR_NODSISRALIGN").
>
> Signed-off-by: Nicholas Piggin
> ---

Tested this patch on a PPC64-LE lpar with the patch [1] and saw the
relevant CPU_FTR_DBELL feature bit getting cleared.

[1] https://lore.kernel.org/all/20240207035220.339726-2-npig...@gmail.com

Hence,

Tested-by: Vaibhav Jain

--
Cheers
~ Vaibhav
Re: [PATCH v2] drivers/ps3: select VIDEO to provide cmdline functions
On 07.02.24 17:13, Randy Dunlap wrote:
> When VIDEO is not set, there is a build error. Fix that by selecting
> VIDEO for PS3_PS3AV.
>
> ERROR: modpost: ".video_get_options" [drivers/ps3/ps3av_mod.ko] undefined!
>
> Fixes: dae7fbf43fd0 ("driver/ps3: Include <video/cmdline.h> for mode parsing")
> Fixes: a3b6792e990d ("video/cmdline: Introduce CONFIG_VIDEO for video= parameter")
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Christophe Leroy
> Cc: Aneesh Kumar K.V
> Cc: Naveen N. Rao
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Thomas Zimmermann
> Cc: Geoff Levand
> Acked-by: Geoff Levand
> Cc: linux-fb...@vger.kernel.org
> Cc: dri-de...@lists.freedesktop.org
> Signed-off-by: Randy Dunlap

Reviewed-by: Thomas Zimmermann

> ---
> v2: add Geoff's Ack; add second Fixes: tag and more Cc:s (Thomas)
>
>  arch/powerpc/platforms/ps3/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff -- a/arch/powerpc/platforms/ps3/Kconfig b/arch/powerpc/platforms/ps3/Kconfig
> --- a/arch/powerpc/platforms/ps3/Kconfig
> +++ b/arch/powerpc/platforms/ps3/Kconfig
> @@ -67,6 +67,7 @@ config PS3_VUART
>  config PS3_PS3AV
>  	depends on PPC_PS3
>  	tristate "PS3 AV settings driver" if PS3_ADVANCED
> +	select VIDEO
>  	select PS3_VUART
>  	default y
>  	help

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
[PATCH v2] drivers/ps3: select VIDEO to provide cmdline functions
When VIDEO is not set, there is a build error. Fix that by selecting
VIDEO for PS3_PS3AV.

ERROR: modpost: ".video_get_options" [drivers/ps3/ps3av_mod.ko] undefined!

Fixes: dae7fbf43fd0 ("driver/ps3: Include <video/cmdline.h> for mode parsing")
Fixes: a3b6792e990d ("video/cmdline: Introduce CONFIG_VIDEO for video= parameter")
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Aneesh Kumar K.V
Cc: Naveen N. Rao
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Thomas Zimmermann
Cc: Geoff Levand
Acked-by: Geoff Levand
Cc: linux-fb...@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Randy Dunlap
---
v2: add Geoff's Ack; add second Fixes: tag and more Cc:s (Thomas)

 arch/powerpc/platforms/ps3/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff -- a/arch/powerpc/platforms/ps3/Kconfig b/arch/powerpc/platforms/ps3/Kconfig
--- a/arch/powerpc/platforms/ps3/Kconfig
+++ b/arch/powerpc/platforms/ps3/Kconfig
@@ -67,6 +67,7 @@ config PS3_VUART
 config PS3_PS3AV
 	depends on PPC_PS3
 	tristate "PS3 AV settings driver" if PS3_ADVANCED
+	select VIDEO
 	select PS3_VUART
 	default y
 	help
Re: [PATCH 5/5] sched/vtime: do not include <asm/vtime.h> header
On Wed, Feb 07, 2024 at 03:12:57PM +0100, Alexander Gordeev wrote:
> On Wed, Feb 07, 2024 at 12:30:08AM +0100, Frederic Weisbecker wrote:
> > Reviewed-by: Frederic Weisbecker
>
> Thank you for the review, Frederic!
>
> Heiko's comment is valid and I would add this chunk in v2:
>
> --- a/arch/powerpc/include/asm/Kbuild
> +++ b/arch/powerpc/include/asm/Kbuild
> @@ -6,5 +6,4 @@ generic-y += agp.h
>  generic-y += kvm_types.h
>  generic-y += mcs_spinlock.h
>  generic-y += qrwlock.h
> -generic-y += vtime.h
>  generic-y += early_ioremap.h
>
> Would you keep your Reviewed-by?

Sure!
Re: [PATCH 4/5] s390/irq,nmi: do not include <asm/vtime.h> header
On Mon, Jan 29, 2024 at 10:51:44AM +0100, Heiko Carstens wrote:
> It is confusing when the patch subject is "do not include..." and all
> this patch does is add two includes. I see what it is doing: getting
> rid of the implicit include of asm/vtime.h, most likely via
> linux/hardirq.h, but that's not very obvious.
>
> Anyway:
> Acked-by: Heiko Carstens

Thank you, Heiko!

Does this wording sound better?

    s390/irq,nmi: include header directly

    update_timer_sys() and update_timer_mcck() are inlines used for CPU
    time accounting from the interrupt and machine-check handlers.
    These routines are specific to the s390 architecture, but the header
    is included implicitly. Avoid the extra hop and include the header
    directly.
Re: [PATCH 5/5] sched/vtime: do not include header
On Wed, Feb 07, 2024 at 12:30:08AM +0100, Frederic Weisbecker wrote:
> Reviewed-by: Frederic Weisbecker

Thank you for the review, Frederic!

Heiko's comment is valid and I would add this chunk in v2:

--- a/arch/powerpc/include/asm/Kbuild
+++ b/arch/powerpc/include/asm/Kbuild
@@ -6,5 +6,4 @@ generic-y += agp.h
 generic-y += kvm_types.h
 generic-y += mcs_spinlock.h
 generic-y += qrwlock.h
-generic-y += vtime.h
 generic-y += early_ioremap.h

Would you keep your Reviewed-by?
[powerpc:merge] BUILD SUCCESS 4ef8376c466ae8b03e632dd8eca1e44315f7dd61
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git merge
branch HEAD: 4ef8376c466ae8b03e632dd8eca1e44315f7dd61  Automatic merge of 'fixes' into merge (2024-02-06 22:58)

elapsed time: 1463m

configs tested: 193
configs skipped: 4

The following configs have been built successfully.
More configs may be tested in the coming days.

tested configs:
alpha        allnoconfig                gcc
alpha        allyesconfig               gcc
alpha        defconfig                  gcc
arc          alldefconfig               gcc
arc          allmodconfig               gcc
arc          allnoconfig                gcc
arc          allyesconfig               gcc
arc          defconfig                  gcc
arc          hsdk_defconfig             gcc
arc          randconfig-001-20240207    gcc
arc          randconfig-002-20240207    gcc
arm          allmodconfig               gcc
arm          allnoconfig                clang
arm          allyesconfig               gcc
arm          aspeed_g5_defconfig        gcc
arm          defconfig                  clang
arm          keystone_defconfig         gcc
arm          qcom_defconfig             clang
arm          randconfig-001-20240207    clang
arm          randconfig-002-20240207    clang
arm          randconfig-003-20240207    clang
arm          randconfig-004-20240207    gcc
arm          shmobile_defconfig         gcc
arm          sp7021_defconfig           gcc
arm          spitz_defconfig            gcc
arm64        allmodconfig               clang
arm64        allnoconfig                gcc
arm64        defconfig                  gcc
arm64        randconfig-001-20240207    clang
arm64        randconfig-002-20240207    clang
arm64        randconfig-003-20240207    clang
arm64        randconfig-004-20240207    clang
csky         allmodconfig               gcc
csky         allnoconfig                gcc
csky         allyesconfig               gcc
csky         defconfig                  gcc
csky         randconfig-001-20240207    gcc
csky         randconfig-002-20240207    gcc
hexagon      allmodconfig               clang
hexagon      allnoconfig                clang
hexagon      allyesconfig               clang
hexagon      defconfig                  clang
hexagon      randconfig-001-20240207    clang
hexagon      randconfig-002-20240207    clang
i386         allmodconfig               gcc
i386         allnoconfig                gcc
i386         allyesconfig               gcc
i386         buildonly-randconfig-001-20240207  clang
i386         buildonly-randconfig-002-20240207  clang
i386         buildonly-randconfig-003-20240207  clang
i386         buildonly-randconfig-004-20240207  clang
i386         buildonly-randconfig-005-20240207  clang
i386         buildonly-randconfig-006-20240207  clang
i386         defconfig                  clang
i386         randconfig-001-20240207    gcc
i386         randconfig-002-20240207    clang
i386         randconfig-003-20240207    gcc
i386         randconfig-004-20240207    gcc
i386         randconfig-005-20240207    gcc
i386         randconfig-006-20240207    clang
i386         randconfig-011-20240207    gcc
i386         randconfig-012-20240207    gcc
i386         randconfig-013-20240207    gcc
i386         randconfig-014-20240207    gcc
i386         randconfig-015-20240207    gcc
i386         randconfig-016-20240207    gcc
loongarch    allmodconfig               gcc
loongarch    allnoconfig                gcc
loongarch    allyesconfig               gcc
loongarch    defconfig                  gcc
loongarch    randconfig-001-20240207    gcc
loongarch    randconfig-002-20240207    gcc
m68k         allmodconfig               gcc
m68k         allnoconfig                gcc
m68k         allyesconfig               gcc
m68k         bvme6000_defconfig         gcc
m68k         defconfig                  gcc
m68k         multi_defconfig            gcc
microblaze   allmodconfig               gcc
microblaze   allnoconfig                gcc
microblaze   allyesconfig               gcc
microblaze   defconfig                  gcc
mips         allmodconfig               gcc
mips         allnoconfig                gcc
mips         allyesconfig               gcc
mips
[powerpc:fixes-test] BUILD SUCCESS 1c57b9f63ab34f01b8c73731cc0efacb5a9a2f16
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git fixes-test
branch HEAD: 1c57b9f63ab34f01b8c73731cc0efacb5a9a2f16  powerpc: 85xx: mark local functions static

elapsed time: 1462m

configs tested: 194
configs skipped: 3

The following configs have been built successfully.
More configs may be tested in the coming days.

tested configs:
alpha        allnoconfig                gcc
alpha        allyesconfig               gcc
alpha        defconfig                  gcc
arc          alldefconfig               gcc
arc          allmodconfig               gcc
arc          allnoconfig                gcc
arc          allyesconfig               gcc
arc          defconfig                  gcc
arc          hsdk_defconfig             gcc
arc          randconfig-001-20240207    gcc
arc          randconfig-002-20240207    gcc
arm          allmodconfig               gcc
arm          allnoconfig                clang
arm          allyesconfig               gcc
arm          aspeed_g5_defconfig        gcc
arm          defconfig                  clang
arm          keystone_defconfig         gcc
arm          qcom_defconfig             clang
arm          randconfig-001-20240207    clang
arm          randconfig-002-20240207    clang
arm          randconfig-003-20240207    clang
arm          randconfig-004-20240207    gcc
arm          shmobile_defconfig         gcc
arm          sp7021_defconfig           gcc
arm          spitz_defconfig            gcc
arm64        allmodconfig               clang
arm64        allnoconfig                gcc
arm64        defconfig                  gcc
arm64        randconfig-001-20240207    clang
arm64        randconfig-002-20240207    clang
arm64        randconfig-003-20240207    clang
arm64        randconfig-004-20240207    clang
csky         allmodconfig               gcc
csky         allnoconfig                gcc
csky         allyesconfig               gcc
csky         defconfig                  gcc
csky         randconfig-001-20240207    gcc
csky         randconfig-002-20240207    gcc
hexagon      allmodconfig               clang
hexagon      allnoconfig                clang
hexagon      allyesconfig               clang
hexagon      defconfig                  clang
hexagon      randconfig-001-20240207    clang
hexagon      randconfig-002-20240207    clang
i386         allmodconfig               gcc
i386         allnoconfig                gcc
i386         allyesconfig               gcc
i386         buildonly-randconfig-001-20240207  clang
i386         buildonly-randconfig-002-20240207  clang
i386         buildonly-randconfig-003-20240207  clang
i386         buildonly-randconfig-004-20240207  clang
i386         buildonly-randconfig-005-20240207  clang
i386         buildonly-randconfig-006-20240207  clang
i386         defconfig                  clang
i386         randconfig-001-20240207    gcc
i386         randconfig-002-20240207    clang
i386         randconfig-003-20240207    gcc
i386         randconfig-004-20240207    gcc
i386         randconfig-005-20240207    gcc
i386         randconfig-006-20240207    clang
i386         randconfig-011-20240207    gcc
i386         randconfig-012-20240207    gcc
i386         randconfig-013-20240207    gcc
i386         randconfig-014-20240207    gcc
i386         randconfig-015-20240207    gcc
i386         randconfig-016-20240207    gcc
loongarch    allmodconfig               gcc
loongarch    allnoconfig                gcc
loongarch    allyesconfig               gcc
loongarch    defconfig                  gcc
loongarch    randconfig-001-20240207    gcc
loongarch    randconfig-002-20240207    gcc
m68k         allmodconfig               gcc
m68k         allnoconfig                gcc
m68k         allyesconfig               gcc
m68k         bvme6000_defconfig         gcc
m68k         defconfig                  gcc
m68k         multi_defconfig            gcc
microblaze   allmodconfig               gcc
microblaze   allnoconfig                gcc
microblaze   allyesconfig               gcc
microblaze   defconfig                  gcc
mips         allmodconfig               gcc
mips         allnoconfig                gcc
mips         allyesconfig               gcc
mips
[PATCH v11 5/5] arm64: send SIGBUS to user process for SEA exception
For the SEA exception, the kernel must take some action to recover from the
memory error, such as isolating the poisoned page and killing the failing
thread; both are done in memory_failure(). During testing, the failing
thread could not be killed due to this issue [1]. Here, I temporarily work
around it by sending the signal to user processes
(!(PF_KTHREAD|PF_IO_WORKER|PF_WQ_WORKER|PF_USER_WORKER)) in do_sea().
After [1] is merged, this patch can be rolled back; otherwise the SIGBUS
would be sent repeatedly.

[1] https://lore.kernel.org/lkml/20240204080144.7977-1-xuesh...@linux.alibaba.com/

Signed-off-by: Tong Tiangen
---
 arch/arm64/mm/fault.c | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 78f9d5ce83bb..a27bb2de1a7c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -824,9 +824,6 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	if (do_apei_claim_sea(regs))
-		return 0;
-
 	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
@@ -838,6 +835,19 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr = untagged_addr(far);
 	}
+
+	if (do_apei_claim_sea(regs)) {
+		if (!(current->flags & (PF_KTHREAD |
+					PF_USER_WORKER |
+					PF_WQ_WORKER |
+					PF_IO_WORKER))) {
+			set_thread_esr(0, esr);
+			arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+				"Uncorrected memory error on access to poison memory\n");
+		}
+		return 0;
+	}
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
-- 
2.25.1
[PATCH v11 4/5] arm64: support copy_mc_[user]_highpage()
Currently, many scenarios that can tolerate memory errors when copying a
page are supported in the kernel [1-5], all implemented via
copy_mc_[user]_highpage(). arm64 should also support this mechanism.

Due to MTE, arm64 needs its own copy_mc_[user]_highpage() implementation,
so the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.

Add a new helper copy_mc_page() which provides a page copy implementation
that is hardware memory error safe. The code logic of copy_mc_page() is the
same as copy_page(); the main difference is that the ldp insns of
copy_mc_page() carry the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE.
Therefore, the main logic is extracted into copy_page_template.S.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")

Signed-off-by: Tong Tiangen
---
 arch/arm64/include/asm/mte.h        |  9 +
 arch/arm64/include/asm/page.h       | 10 ++
 arch/arm64/lib/Makefile             |  2 ++
 arch/arm64/lib/copy_mc_page.S       | 37 +++
 arch/arm64/lib/copy_page.S          | 50 +++---
 arch/arm64/lib/copy_page_template.S | 56 +
 arch/arm64/lib/mte.S                | 29 +++
 arch/arm64/mm/copypage.c            | 45 +++
 include/linux/highmem.h             |  8 +
 9 files changed, 201 insertions(+), 45 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..dc68337c2623 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }

diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio

diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o

diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index ..1e5fe6952869
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ *	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val])
+	.endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+	mov x0,
[PATCH v11 0/5] arm64: add ARCH_HAS_COPY_MC support
With the increase of memory capacity and density, the probability of memory
errors also increases. The increasing size and density of server RAM in
data centers and clouds has led to more uncorrectable memory errors. There
are more and more scenarios that can tolerate memory errors, such as
CoW [1,2], KSM copy [3], coredump copy [4], khugepaged [5,6], uaccess
copy [7], etc.

This patchset introduces a new processing framework on ARM64 which enables
ARM64 to support error recovery in the above scenarios; more scenarios can
be added on top of it in the future.

On arm64, memory errors are handled in do_sea(), which is divided into two
cases:
1. If the user state consumed the memory error, the solution is to kill
   the user process and isolate the error page.
2. If the kernel state consumed the memory error, the solution is to panic.

For case 2, an undifferentiated panic may not be the optimal choice; it can
be handled better. In some scenarios we can avoid panicking, such as
uaccess: if a uaccess fails due to a memory error, only the user process is
affected. Killing the user process and isolating the user page with the
hardware memory error is a better choice.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
[5] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[6] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
[7] commit 278b917f8cb9 ("x86/mce: Add _ASM_EXTABLE_CPY for copy user access")

--
Test result:
1. copy_page() and copy_mc_page() basic function tests pass, and the
   disassembly remains the same before and after the refactor.
2. copy_to/from_user() accessing a kernel NULL pointer raises a translation
   fault, dumps the error message, and then die()s -- test passes.
3. Tested the following scenarios: copy_from_user(), get_user(), CoW.
   Before patching: triggering a hardware memory error caused a panic.
   After patching: triggering a hardware memory error does not panic.
   Testing steps:
   step 1: start a user process.
   step 2: poison (einj) the user process's page.
   step 3: the user process accesses the poisoned page in kernel mode and
           triggers a SEA.
   step 4: the kernel does not panic; only the user process is killed and
           the poisoned page is isolated. (Before patching, the kernel
           panicked in do_sea().)

--
Since V10:
According to Mark's suggestion:
1. Merge V10's patch 2 and patch 3 into V11's patch 2.
2. Patch 2 (V11): use a new fixup_type for ld* in copy_to_user(), fixing
   fatal issues (NULL kernel pointer access) being fixed up incorrectly.
3. Patch 2 (V11): refactor the logic of do_sea().
4. Patch 4 (V11): remove duplicate assembly logic and remove do_mte().
Besides:
1. Patch 2 (V11): remove st* insns' fixup; st* generally does not trigger
   memory errors.
2. Split part of the logic of patch 2 (V11) into patch 5 (V11); for
   details, see patch 5 (V11)'s commit message.
3. Remove patch 6 (V10) "arm64: introduce copy_mc_to_kernel()
   implementation". During modification, some problems that cannot be
   solved in a short period were found. The patch will be released after
   the problems are solved.
4. Add test results to this cover letter.
5. Modify the patchset title: do not use "machine check" and remove
   "-next".

Since V9:
1. Rebase to latest kernel version 6.8-rc2.
2. Add patch 6/6 to support copy_mc_to_kernel().

Since V8:
1. Rebase to latest kernel version and fix typos in some of the patches.
2. Following Catalin's suggestion, I attempted to change the return value
   of copy_mc_[user]_highpage() to bytes not copied. During the
   modification I found it would be more reasonable to return -EFAULT when
   a copy error occurs (see the newly added patch 4). For ARM64, the
   implementation of copy_mc_[user]_highpage() needs to consider MTE.
   In the scenario where the data copy succeeds but the MTE tag copy
   fails, it is also not reasonable to return bytes not copied.
3. Considering the recent addition of machine check safe support for
   multiple scenarios, modify the commit message of patch 5 (patch 4 in
   V8).

Since V7:
There are now patches supporting recovery from poison consumption for the
CoW scenario [1]. Therefore, supporting the CoW scenario on the arm64
architecture only requires modifying the relevant code under arch/.
[1] https://lore.kernel.org/lkml/20221031201029.102123-1-tony.l...@intel.com/

Since V6:
Resend patches that were not merged into the mainline in V6.

Since V5:
1. Add patches 2/3 to add uaccess assembly helpers.
2. Optimize the implementation logic of arm64_do_kernel_sea() in patch 8.
3. Remove kernel access fixup
[PATCH v11 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
If hardware errors are encountered during page copying, returning the
number of bytes not copied is not meaningful: the caller cannot do any
processing on the remaining data. Returning -EFAULT is more reasonable; it
represents a hardware error encountered during the copy.

Signed-off-by: Tong Tiangen
---
 include/linux/highmem.h | 8 
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 451c1dff0e87..c5ca1a1fc4f5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fe43fbc44525..d0f40c42f620 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -797,7 +797,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, _address, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2053,7 +2053,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				clear_highpage(hpage + (index % HPAGE_PMD_NR));
 				index++;
 			}
-			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+			if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
 				result = SCAN_COPY_MC;
 				goto rollback;
 			}
-- 
2.25.1
[PATCH v11 2/5] arm64: add support for ARCH_HAS_COPY_MC
When the arm64 kernel processes hardware memory errors delivered as
synchronous notifications (do_sea()), if the error is consumed within the
kernel, the current handling is to panic. However, that is not optimal.
Take copy_from/to_user() for example: if an ld* insn triggers a memory
error, even in kernel mode, only the associated process is affected.
Killing the user process and isolating the corrupt page is a better choice.

A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to identify
insns that can recover from memory errors triggered by access to kernel
memory.

Signed-off-by: Tong Tiangen
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 +++-
 arch/arm64/include/asm/asm-uaccess.h |  4 
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/lib/copy_to_user.S        | 10 -
 arch/arm64/mm/extable.c              | 19 +
 arch/arm64/mm/fault.c                | 27 +---
 7 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 96fb363d2f52..72b651c461d5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE

diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..9c0664fe1eb1 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
 #include
 #include
 
-#define EX_TYPE_NONE			0
-#define EX_TYPE_BPF			1
-#define EX_TYPE_UACCESS_ERR_ZERO	2
-#define EX_TYPE_KACCESS_ERR_ZERO	3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_NONE				0
+#define EX_TYPE_BPF				1
+#define EX_TYPE_UACCESS_ERR_ZERO		2
+#define EX_TYPE_KACCESS_ERR_ZERO		3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +53,17 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)			\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup,					\
+			  EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup)			\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
+
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -69,6 +82,14 @@
 	.endif
 	.endm
 
+/*
+ * Create an exception table entry for kaccess me(memory error) safe `insn`, which
+ * will branch to `fixup` when an unhandled fault is taken.
+ */
+	.macro _asm_extable_kaccess_me_safe, insn, fixup
+	_ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
+	.endm
+
 #else /* __ASSEMBLY__ */
 
 #include

diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..7bbebfa5b710 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+#define KERNEL_ME_SAFE(l, x...)\
+: x; \
+	_asm_extable_kaccess_me_safeb, l
+
 #define USER(l, x...) \
 : x; \
 	_asm_extable_uaccessb, l

diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
 #endif

diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..2ac716c0d6d8 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  * x0 - bytes not copied
  */
 	.macro ldrb1
[PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user()
x86 and powerpc have their own implementations of copy_mc_to_user(). Add a
generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen
Acked-by: Michael Ellerman
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index f1f9890f50d3..4bfd1e6f0702 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5c367c1290c3..fd56282ee9a8 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..550287c92990 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
-- 
2.25.1
Re: [PATCH 0/4] PCI: Consolidate TLP Log reading and printing
Adding Cc Quigshun, whom I ended up forgetting despite thinking of it at
one point.

-- 
 i.

On Tue, 6 Feb 2024, Ilpo Järvinen wrote:

> This series consolidates the AER & DPC TLP Log handling code. Helpers are
> added for reading and printing the TLP Log, and the format is made to
> include E-E Prefixes in both cases (previously only the DPC RP PIO log
> displayed the E-E Prefixes).
> 
> I'd appreciate it if people familiar with ixgbe could check that the
> error handling conversion within the driver is correct.
> 
> Ilpo Järvinen (4):
>   PCI/AER: Cleanup register variable
>   PCI: Generalize TLP Header Log reading
>   PCI: Add TLP Prefix reading into pcie_read_tlp_log()
>   PCI: Create helper to print TLP Header and Prefix Log
> 
>  drivers/firmware/efi/cper.c                   |  4 +-
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 39 +++--
>  drivers/pci/ats.c                             |  2 +-
>  drivers/pci/pci.c                             | 79 +++
>  drivers/pci/pci.h                             |  2 +-
>  drivers/pci/pcie/aer.c                        | 28 ++-
>  drivers/pci/pcie/dpc.c                        | 31 
>  drivers/pci/probe.c                           | 14 ++--
>  include/linux/aer.h                           | 16 ++--
>  include/linux/pci.h                           |  2 +-
>  include/ras/ras_event.h                       | 10 +--
>  include/uapi/linux/pci_regs.h                 |  2 +
>  12 files changed, 145 insertions(+), 84 deletions(-)
[PATCH] powerpc/cputable: Add missing PPC_FEATURE_BOOKE on PPC64 Book-E
Commit e320a76db4b0 ("powerpc/cputable: Split cpu_specs[] out of
cputable.h") moved the cpu_specs to separate header files. Previously,
PPC_FEATURE_BOOKE was enabled by CONFIG_PPC_BOOK3E_64; the definition in
cpu_specs_e500mc.h for PPC64 no longer enables PPC_FEATURE_BOOKE.

This breaks user space that reads the ELF hwcaps and expects
PPC_FEATURE_BOOKE. Debugging an application with gdb no longer works on
e5500/e6500 because the 64-bit detection relies on PPC_FEATURE_BOOKE for
Book-E.

Fixes: e320a76db4b0 ("powerpc/cputable: Split cpu_specs[] out of cputable.h")
Signed-off-by: David Engraf
---
 arch/powerpc/kernel/cpu_specs_e500mc.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/cpu_specs_e500mc.h b/arch/powerpc/kernel/cpu_specs_e500mc.h
index ceb06b109f831..2ae8e9a7b461c 100644
--- a/arch/powerpc/kernel/cpu_specs_e500mc.h
+++ b/arch/powerpc/kernel/cpu_specs_e500mc.h
@@ -8,7 +8,8 @@
 
 #ifdef CONFIG_PPC64
 #define COMMON_USER_BOOKE	(PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \
-				 PPC_FEATURE_HAS_FPU | PPC_FEATURE_64)
+				 PPC_FEATURE_HAS_FPU | PPC_FEATURE_64 | \
+				 PPC_FEATURE_BOOKE)
 #else
 #define COMMON_USER_BOOKE	(PPC_FEATURE_32 | PPC_FEATURE_HAS_MMU | \
 				 PPC_FEATURE_BOOKE)
-- 
2.40.1
Re: [PATCH] drivers/ps3: select VIDEO to provide cmdline functions
Hi

On 07.02.24 04:37, Randy Dunlap wrote:
> When VIDEO is not set, there is a build error. Fix that by selecting
> VIDEO for PS3_PS3AV.
> 
> ERROR: modpost: ".video_get_options" [drivers/ps3/ps3av_mod.ko] undefined!
> 
> Fixes: dae7fbf43fd0 ("driver/ps3: Include for mode parsing")

Thanks for the fix. Please also add

  Fixes: a3b6792e990d ("video/cmdline: Introduce CONFIG_VIDEO for video= parameter")
  Cc: linux-fb...@vger.kernel.org
  Cc: dri-de...@lists.freedesktop.org

That's the commit that exposed the problem. IDK why the old config
option VIDEO_CMDLINE worked.

> Signed-off-by: Randy Dunlap
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Christophe Leroy
> Cc: Aneesh Kumar K.V
> Cc: Naveen N. Rao
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Thomas Zimmermann
> Cc: Geoff Levand

Reviewed-by: Thomas Zimmermann

> ---
>  arch/powerpc/platforms/ps3/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/powerpc/platforms/ps3/Kconfig b/arch/powerpc/platforms/ps3/Kconfig
> --- a/arch/powerpc/platforms/ps3/Kconfig
> +++ b/arch/powerpc/platforms/ps3/Kconfig
> @@ -67,6 +67,7 @@ config PS3_VUART
>  config PS3_PS3AV
>  	depends on PPC_PS3
>  	tristate "PS3 AV settings driver" if PS3_ADVANCED
> +	select VIDEO
>  	select PS3_VUART
>  	default y
>  	help

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)