Re: [PATCH 02/28] staging: android: ion: use vmap instead of vm_map_ram

2020-04-08 Thread Hillf Danton


On Wed,  8 Apr 2020 13:59:00 +0200, Christoph Hellwig wrote:
> 
> vm_map_ram can keep mappings around after the vm_unmap_ram.  Using that
> with non-PAGE_KERNEL mappings can lead to all kinds of aliasing issues.
> 
> Signed-off-by: Christoph Hellwig 
> ---
>  drivers/staging/android/ion/ion_heap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion_heap.c b/drivers/staging/android/ion/ion_heap.c
> index 473b465724f1..a2d5c6df4b96 100644
> --- a/drivers/staging/android/ion/ion_heap.c
> +++ b/drivers/staging/android/ion/ion_heap.c
> @@ -99,12 +99,12 @@ int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
>  
>  static int ion_heap_clear_pages(struct page **pages, int num, pgprot_t pgprot)
>  {
> - void *addr = vm_map_ram(pages, num, -1, pgprot);
> + void *addr = vmap(pages, num, VM_MAP);

A merge glitch? The pgprot argument is dropped here, while vmap() takes four
arguments:

	void *vmap(struct page **pages, unsigned int count,
		   unsigned long flags, pgprot_t prot)
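
Presumably the intended call keeps the caller's pgprot, i.e. something like:

	void *addr = vmap(pages, num, VM_MAP, pgprot);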
>  
>   if (!addr)
>   return -ENOMEM;
>   memset(addr, 0, PAGE_SIZE * num);
> - vm_unmap_ram(addr, num);
> + vunmap(addr);
>  
>   return 0;
>  }
> -- 
> 2.25.1



Re: Oops (request_key_auth_describe) while running cve-2016-7042 from LTP

2019-09-02 Thread Hillf Danton
>
> Hi Hillf,
>
> Would you like me to put you down as the author of this patch?  If so, I'll
> need a Signed-off-by from you.
>
Signed-off-by: Hillf Danton 



Re: Oops (request_key_auth_describe) while running cve-2016-7042 from LTP

2019-08-31 Thread Hillf Danton
David Howells  wrote:
>> 1. A callee has no predefined duty to help its caller in general; nor should
>> it try to compensate for its callers, given the limited information it has
>> about their context, IMO.
>
> Ah, no.  It's entirely reasonable for an API to specify that one of its
> methods will be called with one or more locks held - and that the method must
> be aware of this and may make use of this.
>
Fair and clear.

Thanks
Hillf




Re: Oops (request_key_auth_describe) while running cve-2016-7042 from LTP

2019-08-31 Thread Hillf Danton
David Howells  wrote:
>
> Hillf Danton  wrote:
>
> > -   struct request_key_auth *rka = dereference_key_rcu(key);
> > +   struct request_key_auth *rka;
> > +
> > +   rcu_read_lock();
> > +   rka = dereference_key_rcu(key);
>
> This shouldn't help as the caller, proc_keys_show(), is holding the RCU read
> lock across the call.  The end of the function reads:
> and the documentation says "This method will be called with the RCU read lock
> held".
>
1. A callee has no predefined duty to help its caller in general; nor should
it try to compensate for its callers, given the limited information it has
about their context, IMO.
2. RCU read-side critical sections can be nested (see the sketch below).
3. There is no comment in security/keys/request_key_auth.c about the RCU
requirement that is already documented.
4. The newly added RCU read lock guards against a similar mess-up anywhere
else.
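
A minimal sketch of the nesting point, using the names from the keys code:

	rcu_read_lock();		/* taken by the caller */
	...
	rcu_read_lock();		/* nested in the callee: legal, read-side
					 * critical sections may nest */
	rka = dereference_key_rcu(key);
	...
	rcu_read_unlock();
	...
	rcu_read_unlock();		/* the read-side critical section only
					 * ends at the outermost unlock */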

Hillf


Re: Oops (request_key_auth_describe) while running cve-2016-7042 from LTP

2019-08-30 Thread Hillf Danton


On Fri, 30 Aug 2019 12:18:07 +0530 Sachin Sant wrote:
> 
> [ 8074.351033] BUG: Kernel NULL pointer dereference at 0x0038
> [ 8074.351046] Faulting instruction address: 0xc04ddf30
> [ 8074.351052] Oops: Kernel access of bad area, sig: 11 [#1]
> [ 8074.351056] LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries

Add an rcu read-side critical section around the key dereference.

--- a/security/keys/request_key_auth.c
+++ b/security/keys/request_key_auth.c
@@ -64,12 +64,19 @@ static int request_key_auth_instantiate(
 static void request_key_auth_describe(const struct key *key,
  struct seq_file *m)
 {
-   struct request_key_auth *rka = dereference_key_rcu(key);
+   struct request_key_auth *rka;
+
+   rcu_read_lock();
+   rka = dereference_key_rcu(key);
+   if (!rka)
+   goto out;
 
seq_puts(m, "key:");
seq_puts(m, key->description);
if (key_is_positive(key))
seq_printf(m, " pid:%d ci:%zu", rka->pid, rka->callout_len);
+out:
+   rcu_read_unlock();
 }
 
 /*



Re: [PATCH 1/2] mm: move vma_is_anonymous check within pmd_move_must_withdraw

2016-11-07 Thread Hillf Danton
On Monday, November 07, 2016 4:35 PM Aneesh Kumar K.V wrote:
> 
> Architectures like ppc64 want to use the page table deposit/withdraw
> even with huge pmd dax entries. Allow the arch to override the
> vma_is_anonymous check by moving it into the pmd_move_must_withdraw
> function.
> 
> Signed-off-by: Aneesh Kumar K.V 
> ---
>  include/asm-generic/pgtable.h | 12 
>  mm/huge_memory.c  | 17 +++--
>  2 files changed, 15 insertions(+), 14 deletions(-)
> 
> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
> index c4f8fd2fd384..324990273ad2 100644
> --- a/include/asm-generic/pgtable.h
> +++ b/include/asm-generic/pgtable.h
> @@ -653,18 +653,6 @@ static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
>  }
>  #endif
> 
> -#ifndef pmd_move_must_withdraw
> -static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
> -  spinlock_t *old_pmd_ptl)
> -{
> - /*
> -  * With split pmd lock we also need to move preallocated
> -  * PTE page table if new_pmd is on different PMD page table.
> -  */
> - return new_pmd_ptl != old_pmd_ptl;
> -}
> -#endif
> -
>  /*
>   * This function is meant to be used by sites walking pagetables with
>   * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index cdcd25cb30fe..1ac1b0ca63c4 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1424,6 +1424,20 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   return 1;
>  }
> 
> +#ifndef pmd_move_must_withdraw
> +static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
> +  spinlock_t *old_pmd_ptl)
> +{
> + /*
> +  * With split pmd lock we also need to move preallocated
> +  * PTE page table if new_pmd is on different PMD page table.
> +  *
> +  * We also don't deposit and withdraw tables for file pages.
> +  */
> + return (new_pmd_ptl != old_pmd_ptl) && vma_is_anonymous(vma);

Stray git merge? vma is not a parameter of this helper.
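
Presumably the helper was meant to take the vma as well, something like:

	static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
						 spinlock_t *old_pmd_ptl,
						 struct vm_area_struct *vma)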

> +}
> +#endif
> +
>  bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> unsigned long new_addr, unsigned long old_end,
> pmd_t *old_pmd, pmd_t *new_pmd)
> @@ -1458,8 +1472,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>   pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
>   VM_BUG_ON(!pmd_none(*new_pmd));
> 
> - if (pmd_move_must_withdraw(new_ptl, old_ptl) &&
> - vma_is_anonymous(vma)) {
> + if (pmd_move_must_withdraw(new_ptl, old_ptl)) {
>   pgtable_t pgtable;
>   pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
>   pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
> --
> 2.10.2
> 



Re: [PATCH] mm/hugetlb: Use the right pte val for compare in hugetlb_cow

2016-10-18 Thread Hillf Danton
On Tuesday, October 18, 2016 11:43 PM Aneesh Kumar K.V wrote:
> 
> We cannot use the pte value passed to set_pte_at for the pte_same comparison,
> because archs like ppc64 filter/add new pte flags in set_pte_at. Instead,
> fetch the pte value inside hugetlb_cow. We compare pte values to make sure
> the pte didn't change since we dropped the page table lock. hugetlb_cow gets
> called with the page table lock held, so we can take a copy of the pte value
> before we drop the lock.
> 
> With hugetlbfs, we optimize the MAP_PRIVATE write fault path with no
> previous mapping (huge_pte_none entries) by forcing a cow in the fault
> path. This avoids taking an additional fault to convert a read-only mapping
> to read/write. Here we were comparing a recently instantiated pte (via
> set_pte_at) to the pte value from the Linux page table. As explained above,
> on ppc64 such a pte_same check returns the wrong result, resulting in us
> taking an additional fault there.
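
A rough sketch of the sequence being fixed (abbreviated, not the exact code):

	new_pte = make_huge_pte(vma, page, writable);
	set_pte_at(mm, address, ptep, new_pte);	/* ppc64 may OR in extra bits
						 * here, e.g. _PAGE_PTE */
	ret = hugetlb_cow(mm, vma, address, ptep, new_pte, page, ptl);
		/* inside hugetlb_cow, after re-taking ptl:
		 * pte_same(huge_ptep_get(ptep), new_pte) can be false even
		 * though nothing changed; hence re-read the pte under the
		 * lock instead of trusting the passed-in value.
		 */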
> 
> Fixes: 6a119eae942c ("powerpc/mm: Add a _PAGE_PTE bit")
> 
> Reported-by: Jan Stancek <jstan...@redhat.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
> ---

Acked-by: Hillf Danton <hillf...@alibaba-inc.com>

>  mm/hugetlb.c | 12 +++-
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ec49d9ef1eef..da8fbd02b92e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3386,15 +3386,17 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
>   * Keep the pte_same checks anyway to make transition from the mutex easier.
>   */
>  static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
> - unsigned long address, pte_t *ptep, pte_t pte,
> - struct page *pagecache_page, spinlock_t *ptl)
> +unsigned long address, pte_t *ptep,
> +struct page *pagecache_page, spinlock_t *ptl)
>  {
> + pte_t pte;
>   struct hstate *h = hstate_vma(vma);
>   struct page *old_page, *new_page;
>   int ret = 0, outside_reserve = 0;
>   unsigned long mmun_start;   /* For mmu_notifiers */
>   unsigned long mmun_end; /* For mmu_notifiers */
> 
> + pte = huge_ptep_get(ptep);
>   old_page = pte_page(pte);
> 
>  retry_avoidcopy:
> @@ -3668,7 +3670,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   hugetlb_count_add(pages_per_huge_page(h), mm);
>   if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
>   /* Optimization, do the COW without a second fault */
> - ret = hugetlb_cow(mm, vma, address, ptep, new_pte, page, ptl);
> + ret = hugetlb_cow(mm, vma, address, ptep, page, ptl);
>   }
> 
>   spin_unlock(ptl);
> @@ -3822,8 +3824,8 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> 
>   if (flags & FAULT_FLAG_WRITE) {
>   if (!huge_pte_write(entry)) {
> - ret = hugetlb_cow(mm, vma, address, ptep, entry,
> - pagecache_page, ptl);
> + ret = hugetlb_cow(mm, vma, address, ptep,
> +   pagecache_page, ptl);
>   goto out_put_page;
>   }
>   entry = huge_pte_mkdirty(entry);
> --
> 2.10.1