Re: [PATCH] powerpc/interrupt: Put braces around empty body in an 'if' statement

2022-06-19 Thread Souptick Joarder
On Sun, Jun 19, 2022 at 11:13 AM Randy Dunlap  wrote:
>
>
>
> On 6/18/22 20:11, Souptick Joarder wrote:
> > From: "Souptick Joarder (HPE)" 
> >
> > Kernel test robot throws a warning ->
> >
> > arch/powerpc/kernel/interrupt.c:
> > In function 'interrupt_exit_kernel_prepare':
> >
> >>> arch/powerpc/kernel/interrupt.c:542:55: warning: suggest
> > braces around empty body in an 'if' statement [-Wempty-body]
> >  542 | CT_WARN_ON(ct_state() == CONTEXT_USER);
>
> That must be when CONFIG_CONTEXT_TRACKING_USER is not set/enabled.
> Can you confirm that?

Yes, CONFIG_CONTEXT_TRACKING_USER is not set.
>
> Then the preferable fix would be in <linux/context_tracking.h>:
>
> change
> #define CT_WARN_ON(cond)
>
> to either an empty do-while loop or a static inline function.
>
> (adding Frederic to Cc:)
>
> >
> > Fix it by adding braces.
> >
> > Reported-by: Kernel test robot 
> > Signed-off-by: Souptick Joarder (HPE) 
> > ---
> >  arch/powerpc/kernel/interrupt.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/powerpc/kernel/interrupt.c 
> > b/arch/powerpc/kernel/interrupt.c
> > index 784ea3289c84..b8a918bab48f 100644
> > --- a/arch/powerpc/kernel/interrupt.c
> > +++ b/arch/powerpc/kernel/interrupt.c
> > @@ -538,8 +538,9 @@ notrace unsigned long 
> > interrupt_exit_kernel_prepare(struct pt_regs *regs)
> >* CT_WARN_ON comes here via program_check_exception,
> >* so avoid recursion.
> >*/
> > - if (TRAP(regs) != INTERRUPT_PROGRAM)
> > + if (TRAP(regs) != INTERRUPT_PROGRAM) {
> >   CT_WARN_ON(ct_state() == CONTEXT_USER);
> > + }
> >
> >   kuap = kuap_get_and_assert_locked();
> >
>
> --
> ~Randy


[PATCH] powerpc/interrupt: Put braces around empty body in an 'if' statement

2022-06-18 Thread Souptick Joarder
From: "Souptick Joarder (HPE)" 

Kernel test robot throws a warning ->

arch/powerpc/kernel/interrupt.c:
In function 'interrupt_exit_kernel_prepare':

>> arch/powerpc/kernel/interrupt.c:542:55: warning: suggest
braces around empty body in an 'if' statement [-Wempty-body]
 542 | CT_WARN_ON(ct_state() == CONTEXT_USER);

Fix it by adding braces.

Reported-by: Kernel test robot 
Signed-off-by: Souptick Joarder (HPE) 
---
 arch/powerpc/kernel/interrupt.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 784ea3289c84..b8a918bab48f 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -538,8 +538,9 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct 
pt_regs *regs)
 * CT_WARN_ON comes here via program_check_exception,
 * so avoid recursion.
 */
-   if (TRAP(regs) != INTERRUPT_PROGRAM)
+   if (TRAP(regs) != INTERRUPT_PROGRAM) {
CT_WARN_ON(ct_state() == CONTEXT_USER);
+   }
 
kuap = kuap_get_and_assert_locked();
 
-- 
2.25.1



Re: [PATCH v3 resend 01/15] mm: add setup_initial_init_mm() helper

2021-06-08 Thread Souptick Joarder
On Tue, Jun 8, 2021 at 8:27 PM Christophe Leroy
 wrote:
>
>
>
> Le 08/06/2021 à 16:53, Souptick Joarder a écrit :
> > On Tue, Jun 8, 2021 at 1:56 PM Kefeng Wang  
> > wrote:
> >>
> >> Add setup_initial_init_mm() helper to setup kernel text,
> >> data and brk.
> >>
> >> Cc: linux-snps-...@lists.infradead.org
> >> Cc: linux-arm-ker...@lists.infradead.org
> >> Cc: linux-c...@vger.kernel.org
> >> Cc: uclinux-h8-de...@lists.sourceforge.jp
> >> Cc: linux-m...@lists.linux-m68k.org
> >> Cc: openr...@lists.librecores.org
> >> Cc: linuxppc-dev@lists.ozlabs.org
> >> Cc: linux-ri...@lists.infradead.org
> >> Cc: linux...@vger.kernel.org
> >> Cc: linux-s...@vger.kernel.org
> >> Cc: x...@kernel.org
> >> Signed-off-by: Kefeng Wang 
> >> ---
> >>   include/linux/mm.h | 3 +++
> >>   mm/init-mm.c   | 9 +
> >>   2 files changed, 12 insertions(+)
> >>
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index c274f75efcf9..02aa057540b7 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -244,6 +244,9 @@ int __add_to_page_cache_locked(struct page *page, 
> >> struct address_space *mapping,
> >>
> >>   #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
> >>
> >> +void setup_initial_init_mm(void *start_code, void *end_code,
> >> +  void *end_data, void *brk);
> >> +
> >
> > Gentle query -> is there any limitation to adding inline functions for
> > use in the setup_arch() functions?
>
> Kefeng just followed comment from Mike I guess, see
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210604070633.32363-2-wangkefeng.w...@huawei.com/#2696253

Ok.
>
> Christophe
>


Re: [PATCH v3 resend 11/15] powerpc: convert to setup_initial_init_mm()

2021-06-08 Thread Souptick Joarder
On Tue, Jun 8, 2021 at 8:24 PM Christophe Leroy
 wrote:
>
>
>
> Le 08/06/2021 à 16:36, Souptick Joarder a écrit :
> > On Tue, Jun 8, 2021 at 1:56 PM Kefeng Wang  
> > wrote:
> >>
> >> Use setup_initial_init_mm() helper to simplify code.
> >>
> >> Note that klimit is (unsigned long)_end; with the new helper,
> >> _end is used directly.
> >
> > With this change klimit is left with no user in this file and can be
> > moved to some appropriate header.
> > But that can be done in a separate series.
>
> I have a patch to remove klimit, see
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/9fa9ba6807c17f93f35a582c199c646c4a8bfd9c.1622800638.git.christophe.le...@csgroup.eu/

Got it. Thanks :)

>
> Christophe
>
>
> >
> >>
> >> Cc: Michael Ellerman 
> >> Cc: Benjamin Herrenschmidt 
> >> Cc: linuxppc-dev@lists.ozlabs.org
> >> Signed-off-by: Kefeng Wang 
> >> ---
> >>   arch/powerpc/kernel/setup-common.c | 5 +
> >>   1 file changed, 1 insertion(+), 4 deletions(-)
> >>
> >> diff --git a/arch/powerpc/kernel/setup-common.c 
> >> b/arch/powerpc/kernel/setup-common.c
> >> index 74a98fff2c2f..96697c6e1e16 100644
> >> --- a/arch/powerpc/kernel/setup-common.c
> >> +++ b/arch/powerpc/kernel/setup-common.c
> >> @@ -927,10 +927,7 @@ void __init setup_arch(char **cmdline_p)
> >>
> >>  klp_init_thread_info(&init_task);
> >>
> >> -   init_mm.start_code = (unsigned long)_stext;
> >> -   init_mm.end_code = (unsigned long) _etext;
> >> -   init_mm.end_data = (unsigned long) _edata;
> >> -   init_mm.brk = klimit;
> >> +   setup_initial_init_mm(_stext, _etext, _edata, _end);
> >>
> >>  mm_iommu_init(&init_mm);
> >>  irqstack_early_init();
> >> --
> >> 2.26.2
> >>
> >>


Re: [PATCH v3 resend 01/15] mm: add setup_initial_init_mm() helper

2021-06-08 Thread Souptick Joarder
On Tue, Jun 8, 2021 at 1:56 PM Kefeng Wang  wrote:
>
> Add setup_initial_init_mm() helper to setup kernel text,
> data and brk.
>
> Cc: linux-snps-...@lists.infradead.org
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-c...@vger.kernel.org
> Cc: uclinux-h8-de...@lists.sourceforge.jp
> Cc: linux-m...@lists.linux-m68k.org
> Cc: openr...@lists.librecores.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-ri...@lists.infradead.org
> Cc: linux...@vger.kernel.org
> Cc: linux-s...@vger.kernel.org
> Cc: x...@kernel.org
> Signed-off-by: Kefeng Wang 
> ---
>  include/linux/mm.h | 3 +++
>  mm/init-mm.c   | 9 +
>  2 files changed, 12 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c274f75efcf9..02aa057540b7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -244,6 +244,9 @@ int __add_to_page_cache_locked(struct page *page, struct 
> address_space *mapping,
>
>  #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
>
> +void setup_initial_init_mm(void *start_code, void *end_code,
> +  void *end_data, void *brk);
> +

Gentle query -> is there any limitation to adding inline functions for
use in the setup_arch() functions?
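
(For context, the alternative being asked about would look roughly like the
sketch below: the same assignments as Kefeng's helper, but as a static
inline in a header such as include/linux/mm.h instead of an out-of-line
function in mm/init-mm.c. This is purely illustrative and assumes init_mm
is visible to that header; it is not part of the series.)

    static inline void setup_initial_init_mm(void *start_code, void *end_code,
                                             void *end_data, void *brk)
    {
            /* Same body as the mm/init-mm.c version quoted below */
            init_mm.start_code = (unsigned long)start_code;
            init_mm.end_code = (unsigned long)end_code;
            init_mm.end_data = (unsigned long)end_data;
            init_mm.brk = (unsigned long)brk;
    }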

>  /*
>   * Linux kernel virtual memory manager primitives.
>   * The idea being to have a "virtual" mm in the same way
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index 153162669f80..b4a6f38fb51d 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -40,3 +40,12 @@ struct mm_struct init_mm = {
> .cpu_bitmap = CPU_BITS_NONE,
> INIT_MM_CONTEXT(init_mm)
>  };
> +
> +void setup_initial_init_mm(void *start_code, void *end_code,
> +  void *end_data, void *brk)
> +{
> +   init_mm.start_code = (unsigned long)start_code;
> +   init_mm.end_code = (unsigned long)end_code;
> +   init_mm.end_data = (unsigned long)end_data;
> +   init_mm.brk = (unsigned long)brk;
> +}
> --
> 2.26.2
>
>


Re: [PATCH v3 resend 11/15] powerpc: convert to setup_initial_init_mm()

2021-06-08 Thread Souptick Joarder
On Tue, Jun 8, 2021 at 1:56 PM Kefeng Wang  wrote:
>
> Use setup_initial_init_mm() helper to simplify code.
>
> Note that klimit is (unsigned long)_end; with the new helper,
> _end is used directly.

With this change klimit is left with no user in this file and can be
moved to some appropriate header.
But that can be done in a separate series.

>
> Cc: Michael Ellerman 
> Cc: Benjamin Herrenschmidt 
> Cc: linuxppc-dev@lists.ozlabs.org
> Signed-off-by: Kefeng Wang 
> ---
>  arch/powerpc/kernel/setup-common.c | 5 +
>  1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c 
> b/arch/powerpc/kernel/setup-common.c
> index 74a98fff2c2f..96697c6e1e16 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -927,10 +927,7 @@ void __init setup_arch(char **cmdline_p)
>
> klp_init_thread_info(&init_task);
>
> -   init_mm.start_code = (unsigned long)_stext;
> -   init_mm.end_code = (unsigned long) _etext;
> -   init_mm.end_data = (unsigned long) _edata;
> -   init_mm.brk = klimit;
> +   setup_initial_init_mm(_stext, _etext, _edata, _end);
>
> mm_iommu_init(&init_mm);
> irqstack_early_init();
> --
> 2.26.2
>
>


Re: [linux-next PATCH] mm/gup.c: Convert to use get_user_{page|pages}_fast_only()

2020-05-26 Thread Souptick Joarder
On Tue, May 26, 2020 at 1:29 PM Paul Mackerras  wrote:
>
> On Mon, May 25, 2020 at 02:23:32PM +0530, Souptick Joarder wrote:
> > API __get_user_pages_fast() renamed to get_user_pages_fast_only()
> > to align with pin_user_pages_fast_only().
> >
> > As part of this we will get rid of write parameter.
> > Instead caller will pass FOLL_WRITE to get_user_pages_fast_only().
> > This will not change any existing functionality of the API.
> >
> > All the callers are changed to pass FOLL_WRITE.
> >
> > Also introduce get_user_page_fast_only(), and use it in a few
> > places that hard-code nr_pages to 1.
> >
> > Updated the documentation of the API.
> >
> > Signed-off-by: Souptick Joarder 
>
> The arch/powerpc/kvm bits look reasonable.
>
> Reviewed-by: Paul Mackerras 

Thanks, Paul. This patch is merged through the mm tree.
https://lore.kernel.org/kvm/1590396812-31277-1-git-send-email-jrdr.li...@gmail.com/


[linux-next PATCH] mm/gup.c: Convert to use get_user_{page|pages}_fast_only()

2020-05-25 Thread Souptick Joarder
API __get_user_pages_fast() renamed to get_user_pages_fast_only()
to align with pin_user_pages_fast_only().

As part of this we will get rid of write parameter.
Instead caller will pass FOLL_WRITE to get_user_pages_fast_only().
This will not change any existing functionality of the API.

All the callers are changed to pass FOLL_WRITE.

Also introduce get_user_page_fast_only(), and use it in a few
places that hard-code nr_pages to 1.

Updated the documentation of the API.

Signed-off-by: Souptick Joarder 
Reviewed-by: John Hubbard 
Cc: Matthew Wilcox 
Cc: John Hubbard 
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c|  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/powerpc/perf/callchain_64.c   |  4 +---
 include/linux/mm.h | 10 --
 kernel/events/core.c   |  4 ++--
 mm/gup.c   | 29 -
 virt/kvm/kvm_main.c|  8 +++-
 7 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 18aed97..ddfc4c9 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -581,7 +581,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
 * We always ask for write permission since the common case
 * is that the page is writable.
 */
-   if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
+   if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
write_ok = true;
} else {
/* Call KVM generic code to do the slow-path check */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c 
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 3248f78..5d4c087 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -795,7 +795,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 * is that the page is writable.
 */
hva = gfn_to_hva_memslot(memslot, gfn);
-   if (!kvm_ro && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+   if (!kvm_ro && get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
upgrade_write = true;
} else {
unsigned long pfn;
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
index 1bff896d..814d1c2 100644
--- a/arch/powerpc/perf/callchain_64.c
+++ b/arch/powerpc/perf/callchain_64.c
@@ -29,11 +29,9 @@ int read_user_stack_slow(void __user *ptr, void *buf, int nb)
unsigned long addr = (unsigned long) ptr;
unsigned long offset;
struct page *page;
-   int nrpages;
void *kaddr;
 
-   nrpages = __get_user_pages_fast(addr, 1, 1, &page);
-   if (nrpages == 1) {
+   if (get_user_page_fast_only(addr, FOLL_WRITE, &page)) {
kaddr = page_address(page);
 
/* align address to page boundary */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 93d93bd..c1718df 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1817,10 +1817,16 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
 /*
  * doesn't attempt to fault and will return short.
  */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
- struct page **pages);
+int get_user_pages_fast_only(unsigned long start, int nr_pages,
+unsigned int gup_flags, struct page **pages);
 int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 unsigned int gup_flags, struct page **pages);
+
+static inline bool get_user_page_fast_only(unsigned long addr,
+   unsigned int gup_flags, struct page **pagep)
+{
+   return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
+}
 /*
  * per-process(per-mm_struct) statistics.
  */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c94eb27..856d98c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6934,12 +6934,12 @@ static u64 perf_virt_to_phys(u64 virt)
 * Walking the pages tables for user address.
 * Interrupts are disabled, so it prevents any tear down
 * of the page tables.
-* Try IRQ-safe __get_user_pages_fast first.
+* Try IRQ-safe get_user_page_fast_only first.
 * If failed, leave phys_addr as 0.
 */
if (current->mm != NULL) {
pagefault_disable();
-   if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
+   if (get_user_page_fast_only(virt, 0, &p))
phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
pagefault_enable();
}
diff --git a/mm/gup.c b/mm/gup.c
index 80f51a36..f4b05f3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2278,7 +2278,7 @@ static int gup_pte_range(pmd_t pmd,

Re: [linux-next RFC v2] mm/gup.c: Convert to use get_user_{page|pages}_fast_only()

2020-05-25 Thread Souptick Joarder
On Mon, May 25, 2020 at 6:36 AM John Hubbard  wrote:
>
> On 2020-05-23 21:27, Souptick Joarder wrote:
> > API __get_user_pages_fast() renamed to get_user_pages_fast_only()
> > to align with pin_user_pages_fast_only().
> >
> > As part of this we will get rid of write parameter. Instead caller
> > will pass FOLL_WRITE to get_user_pages_fast_only(). This will not
> > change any existing functionality of the API.
> >
> > All the callers are changed to pass FOLL_WRITE.
>
> This looks good. A few nits below, but with those fixed, feel free to
> add:
>
>  Reviewed-by: John Hubbard 
>
> >
> > There are few places where 1 is passed to 2nd parameter of
> > __get_user_pages_fast() and return value is checked for 1
> > like [1]. Those are replaced with new inline
> > get_user_page_fast_only().
> >
> > [1] if (__get_user_pages_fast(hva, 1, 1, &page) == 1)
> >
>
> We try to avoid talking *too* much about the previous version of
> the code. Just enough. So, instead of the above two paragraphs,
> I'd compress it down to:
>
> Also: introduce get_user_page_fast_only(), and use it in a few
> places that hard-code nr_pages to 1.
>
> ...
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 93d93bd..8d4597f 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1817,10 +1817,16 @@ extern int mprotect_fixup(struct vm_area_struct 
> > *vma,
> >   /*
> >* doesn't attempt to fault and will return short.
> >*/
> > -int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> > -   struct page **pages);
> > +int get_user_pages_fast_only(unsigned long start, int nr_pages,
> > + unsigned int gup_flags, struct page **pages);
>
> Silly nit:
>
> Can you please leave the original indentation in place? I don't normally
> comment about this, but I like the original indentation better, and it matches
> the pin_user_pages_fast() below, too.
>
> ...
> > @@ -2786,8 +2792,8 @@ static int internal_get_user_pages_fast(unsigned long 
> > start, int nr_pages,
> >* If the architecture does not support this function, simply return with 
> > no
> >* pages pinned.
> >*/
> > -int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> > -   struct page **pages)
> > +int get_user_pages_fast_only(unsigned long start, int nr_pages,
> > + unsigned int gup_flags, struct page **pages)
>
>
> Same thing here: you've changed the original indentation, which was 
> (arguably, but
> to my mind anyway) more readable, and for no reason. It still would have fit 
> within
> 80 cols.
>
> I'm sure it's a perfect 50/50 mix of people who prefer either indentation 
> style, and
> so for brand new code, I'll remain silent, as long as it is consistent with 
> either
> itself and/or the surrounding code. But changing it back and forth is a bit
> aggravating, and best avoided. :)

Ok, along with these changes I will remove the *RFC* tag and repost it.


[linux-next RFC v2] mm/gup.c: Convert to use get_user_{page|pages}_fast_only()

2020-05-23 Thread Souptick Joarder
API __get_user_pages_fast() renamed to get_user_pages_fast_only()
to align with pin_user_pages_fast_only().

As part of this we will get rid of write parameter. Instead caller
will pass FOLL_WRITE to get_user_pages_fast_only(). This will not
change any existing functionality of the API.

All the callers are changed to pass FOLL_WRITE.

There are few places where 1 is passed to 2nd parameter of
__get_user_pages_fast() and return value is checked for 1
like [1]. Those are replaced with new inline
get_user_page_fast_only().

[1] if (__get_user_pages_fast(hva, 1, 1, &page) == 1)

Updated the documentation of the API.

Signed-off-by: Souptick Joarder 
Cc: John Hubbard 
Cc: Matthew Wilcox 
---
v2:
Updated the subject line and change log.
Address Matthew's comment to fix a bug and added
new inline get_user_page_fast_only().

 arch/powerpc/kvm/book3s_64_mmu_hv.c|  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/powerpc/perf/callchain_64.c   |  4 +---
 include/linux/mm.h | 10 --
 kernel/events/core.c   |  4 ++--
 mm/gup.c   | 29 -
 virt/kvm/kvm_main.c|  8 +++-
 7 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 18aed97..ddfc4c9 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -581,7 +581,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
 * We always ask for write permission since the common case
 * is that the page is writable.
 */
-   if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
+   if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
write_ok = true;
} else {
/* Call KVM generic code to do the slow-path check */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c 
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 3248f78..5d4c087 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -795,7 +795,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 * is that the page is writable.
 */
hva = gfn_to_hva_memslot(memslot, gfn);
-   if (!kvm_ro && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+   if (!kvm_ro && get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
upgrade_write = true;
} else {
unsigned long pfn;
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
index 1bff896d..814d1c2 100644
--- a/arch/powerpc/perf/callchain_64.c
+++ b/arch/powerpc/perf/callchain_64.c
@@ -29,11 +29,9 @@ int read_user_stack_slow(void __user *ptr, void *buf, int nb)
unsigned long addr = (unsigned long) ptr;
unsigned long offset;
struct page *page;
-   int nrpages;
void *kaddr;
 
-   nrpages = __get_user_pages_fast(addr, 1, 1, &page);
-   if (nrpages == 1) {
+   if (get_user_page_fast_only(addr, FOLL_WRITE, &page)) {
kaddr = page_address(page);
 
/* align address to page boundary */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 93d93bd..8d4597f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1817,10 +1817,16 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
 /*
  * doesn't attempt to fault and will return short.
  */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
- struct page **pages);
+int get_user_pages_fast_only(unsigned long start, int nr_pages,
+   unsigned int gup_flags, struct page **pages);
 int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 unsigned int gup_flags, struct page **pages);
+
+static inline bool get_user_page_fast_only(unsigned long addr,
+   unsigned int gup_flags, struct page **pagep)
+{
+   return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
+}
 /*
  * per-process(per-mm_struct) statistics.
  */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c94eb27..856d98c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6934,12 +6934,12 @@ static u64 perf_virt_to_phys(u64 virt)
 * Walking the pages tables for user address.
 * Interrupts are disabled, so it prevents any tear down
 * of the page tables.
-* Try IRQ-safe __get_user_pages_fast first.
+* Try IRQ-safe get_user_page_fast_only first.
 * If failed, leave phys_addr as 0.
 */
if (current->mm != NULL) {
pagefault_disable();
-   if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
+   if (get_user_page_fast_only(virt, 0, &p))
  

Re: [linux-next RFC] mm/gup.c: Convert to use get_user_pages_fast_only()

2020-05-23 Thread Souptick Joarder
On Sat, May 23, 2020 at 10:55 PM Matthew Wilcox  wrote:
>
> On Sat, May 23, 2020 at 10:11:12PM +0530, Souptick Joarder wrote:
> > Renaming the API __get_user_pages_fast() to get_user_pages_
> > fast_only() to align with pin_user_pages_fast_only().
>
> Please don't split a function name across lines.  That messes
> up people who are grepping for the function name in the changelog.

Ok.

>
> > As part of this we will get rid of write parameter.
> > Instead caller will pass FOLL_WRITE to get_user_pages_fast_only().
> > This will not change any existing functionality of the API.
> >
> > All the callers are changed to pass FOLL_WRITE.
> >
> > Updated the documentation of the API.
>
> Everything you have done here is an improvement, and I'd be happy to
> see it go in (after fixing the bug I note below).
>
> But in reading through it, I noticed almost every user ...
>
> > - if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
> > + if (get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {
>
> passes '1' as the second parameter.  So do we want to add:
>
> static inline bool get_user_page_fast_only(unsigned long addr,
> unsigned int gup_flags, struct page **pagep)
> {
> return get_user_pages_fast_only(addr, 1, gup_flags, pagep) == 1;
> }
>
Yes, this can be added. Is the get_user_page_fast_only() naming fine?


> > @@ -2797,10 +2803,7 @@ int __get_user_pages_fast(unsigned long start, int 
> > nr_pages, int write,
> >* FOLL_FAST_ONLY is required in order to match the API description of
> >* this routine: no fall back to regular ("slow") GUP.
> >*/
> > - unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;
> > -
> > - if (write)
> > - gup_flags |= FOLL_WRITE;
> > + gup_flags = FOLL_GET | FOLL_FAST_ONLY;
>
> Er ... gup_flags |=, surely?

My mistake.


@@ -1998,7 +1998,7 @@ int gfn_to_page_many_atomic(struct
kvm_memory_slot *slot, gfn_t gfn,
if (entry < nr_pages)
return 0;

-   return __get_user_pages_fast(addr, nr_pages, 1, pages);
+   return get_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);

Also this needs to be corrected.
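
For illustration, with the get_user_page_fast_only() helper suggested
above, the hard-coded nr_pages == 1 call sites collapse to a single boolean
test. A sketch based on the book3s_64_mmu_hv.c hunk quoted earlier (same
variable names, trimmed to the relevant lines):

    struct page *page;
    bool write_ok = false;

    /* Old pattern: __get_user_pages_fast(hva, 1, 1, &page) == 1 */
    if (get_user_page_fast_only(hva, FOLL_WRITE, &page)) {
            write_ok = true;
    } else {
            /* Call KVM generic code to do the slow-path check */
    }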


[linux-next RFC] mm/gup.c: Convert to use get_user_pages_fast_only()

2020-05-23 Thread Souptick Joarder
Renaming the API __get_user_pages_fast() to get_user_pages_
fast_only() to align with pin_user_pages_fast_only().

As part of this we will get rid of write parameter.
Instead caller will pass FOLL_WRITE to get_user_pages_fast_only().
This will not change any existing functionality of the API.

All the callers are changed to pass FOLL_WRITE.

Updated the documentation of the API.

Signed-off-by: Souptick Joarder 
Cc: John Hubbard 
Cc: Matthew Wilcox 
---
 arch/powerpc/kvm/book3s_64_mmu_hv.c|  2 +-
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  2 +-
 arch/powerpc/perf/callchain_64.c   |  2 +-
 include/linux/mm.h |  4 ++--
 kernel/events/core.c   |  4 ++--
 mm/gup.c   | 29 -
 virt/kvm/kvm_main.c|  6 +++---
 7 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 18aed97..34fc5c8 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -581,7 +581,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct 
kvm_vcpu *vcpu,
 * We always ask for write permission since the common case
 * is that the page is writable.
 */
-   if (__get_user_pages_fast(hva, 1, 1, &page) == 1) {
+   if (get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) == 1) {
write_ok = true;
} else {
/* Call KVM generic code to do the slow-path check */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c 
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 3248f78..3b6e342 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -795,7 +795,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu,
 * is that the page is writable.
 */
hva = gfn_to_hva_memslot(memslot, gfn);
-   if (!kvm_ro && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
+   if (!kvm_ro && get_user_pages_fast_only(hva, 1, FOLL_WRITE, &page) ==
1) {
upgrade_write = true;
} else {
unsigned long pfn;
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
index 1bff896d..f719a74 100644
--- a/arch/powerpc/perf/callchain_64.c
+++ b/arch/powerpc/perf/callchain_64.c
@@ -32,7 +32,7 @@ int read_user_stack_slow(void __user *ptr, void *buf, int nb)
int nrpages;
void *kaddr;
 
-   nrpages = __get_user_pages_fast(addr, 1, 1, &page);
+   nrpages = get_user_pages_fast_only(addr, 1, FOLL_WRITE, &page);
if (nrpages == 1) {
kaddr = page_address(page);
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 93d93bd..10a6758 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1817,8 +1817,8 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
 /*
  * doesn't attempt to fault and will return short.
  */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
- struct page **pages);
+int get_user_pages_fast_only(unsigned long start, int nr_pages,
+   unsigned int gup_flags, struct page **pages);
 int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 unsigned int gup_flags, struct page **pages);
 /*
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c94eb27..81d6e73 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6934,12 +6934,12 @@ static u64 perf_virt_to_phys(u64 virt)
 * Walking the pages tables for user address.
 * Interrupts are disabled, so it prevents any tear down
 * of the page tables.
-* Try IRQ-safe __get_user_pages_fast first.
+* Try IRQ-safe get_user_pages_fast_only first.
 * If failed, leave phys_addr as 0.
 */
if (current->mm != NULL) {
pagefault_disable();
-   if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
+   if (get_user_pages_fast_only(virt, 1, 0, &p) == 1)
phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
pagefault_enable();
}
diff --git a/mm/gup.c b/mm/gup.c
index 80f51a36..d8aabc0 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2278,7 +2278,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, 
unsigned long end,
  * to be special.
  *
  * For a futex to be placed on a THP tail page, get_futex_key requires a
- * __get_user_pages_fast implementation that can pin pages. Thus it's still
+ * get_user_pages_fast_only implementation that can pin pages. Thus it's still
  * useful to have gup_huge_pmd even if we can't operate on ptes.
  */
 static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
@@ -2683,7 +2683,7 @@ static inline void gup_pgd_range(unsigned long addr, 
unsigned l

Re: [PATCH] tools: testing: selftests: Remove duplicate headers

2019-03-06 Thread Souptick Joarder
On Mon, Mar 4, 2019 at 4:19 PM Souptick Joarder  wrote:
>
> On Tue, Feb 26, 2019 at 10:59 AM Souptick Joarder  
> wrote:
> >
> > On Tue, Feb 26, 2019 at 7:18 AM Michael Ellerman  
> > wrote:
> > >
> > > Souptick Joarder  writes:
> > > > Remove duplicate headers which are included twice.
> > > >
> > > > Signed-off-by: Sabyasachi Gupta 
> > > > Signed-off-by: Souptick Joarder 
> > > > ---
> > > ...
> > > >  tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c | 1 -
> > >
> > > I took this hunk via the powerpc tree.
> >
> > How about taking this entirely through a single tree?
> > Or shall I send these changes as separate patches?
>
> If there are no comments, can we get this patch queued for 5.1?

I will drop this patch, as we have submitted these changes as separate
patches (except the one picked by Michael).

>
> >
> > >
> > > > diff --git 
> > > > a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c 
> > > > b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > > > index 167135b..af1b802 100644
> > > > --- a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > > > +++ b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > > > @@ -11,7 +11,6 @@
> > > >  #include 
> > > >  #include 
> > > >  #include 
> > > > -#include 
> > > >
> > > >  #include "ebb.h"
> > >
> > >
> > > cheers


Re: [PATCH] tools: testing: selftests: Remove duplicate headers

2019-03-04 Thread Souptick Joarder
On Tue, Feb 26, 2019 at 10:59 AM Souptick Joarder  wrote:
>
> On Tue, Feb 26, 2019 at 7:18 AM Michael Ellerman  wrote:
> >
> > Souptick Joarder  writes:
> > > Remove duplicate headers which are included twice.
> > >
> > > Signed-off-by: Sabyasachi Gupta 
> > > Signed-off-by: Souptick Joarder 
> > > ---
> > ...
> > >  tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c | 1 -
> >
> > I took this hunk via the powerpc tree.
>
> How about taking this entirely through a single tree?
> Or shall I send these changes as separate patches?

If there are no comments, can we get this patch queued for 5.1?

>
> >
> > > diff --git a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c 
> > > b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > > index 167135b..af1b802 100644
> > > --- a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > > +++ b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > > @@ -11,7 +11,6 @@
> > >  #include 
> > >  #include 
> > >  #include 
> > > -#include 
> > >
> > >  #include "ebb.h"
> >
> >
> > cheers


Re: [PATCH] tools: testing: selftests: Remove duplicate headers

2019-02-25 Thread Souptick Joarder
On Tue, Feb 26, 2019 at 7:18 AM Michael Ellerman  wrote:
>
> Souptick Joarder  writes:
> > Remove duplicate headers which are included twice.
> >
> > Signed-off-by: Sabyasachi Gupta 
> > Signed-off-by: Souptick Joarder 
> > ---
> ...
> >  tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c | 1 -
>
> I took this hunk via the powerpc tree.

How about taking this entirely through a single tree?
Or shall I send these changes as separate patches?

>
> > diff --git a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c 
> > b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > index 167135b..af1b802 100644
> > --- a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > +++ b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
> > @@ -11,7 +11,6 @@
> >  #include 
> >  #include 
> >  #include 
> > -#include 
> >
> >  #include "ebb.h"
>
>
> cheers


[PATCH] tools: testing: selftests: Remove duplicate headers

2019-02-22 Thread Souptick Joarder
Remove duplicate headers which are included twice.

Signed-off-by: Sabyasachi Gupta 
Signed-off-by: Souptick Joarder 
---
 tools/testing/selftests/gpio/gpio-mockup-chardev.c  | 1 -
 tools/testing/selftests/net/udpgso.c| 1 -
 tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c | 1 -
 tools/testing/selftests/proc/proc-self-syscall.c| 1 -
 tools/testing/selftests/rseq/rseq.h | 1 -
 tools/testing/selftests/timers/skew_consistency.c   | 1 -
 tools/testing/selftests/x86/mpx-dig.c   | 2 --
 7 files changed, 8 deletions(-)

diff --git a/tools/testing/selftests/gpio/gpio-mockup-chardev.c 
b/tools/testing/selftests/gpio/gpio-mockup-chardev.c
index aaa1e9f..d587c81 100644
--- a/tools/testing/selftests/gpio/gpio-mockup-chardev.c
+++ b/tools/testing/selftests/gpio/gpio-mockup-chardev.c
@@ -12,7 +12,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
diff --git a/tools/testing/selftests/net/udpgso.c 
b/tools/testing/selftests/net/udpgso.c
index e279051..b8265ee 100644
--- a/tools/testing/selftests/net/udpgso.c
+++ b/tools/testing/selftests/net/udpgso.c
@@ -17,7 +17,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
diff --git a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c 
b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
index 167135b..af1b802 100644
--- a/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
+++ b/tools/testing/selftests/powerpc/pmu/ebb/fork_cleanup_test.c
@@ -11,7 +11,6 @@
 #include 
 #include 
 #include 
-#include 
 
 #include "ebb.h"
 
diff --git a/tools/testing/selftests/proc/proc-self-syscall.c 
b/tools/testing/selftests/proc/proc-self-syscall.c
index 5ab5f48..3a4fec3 100644
--- a/tools/testing/selftests/proc/proc-self-syscall.c
+++ b/tools/testing/selftests/proc/proc-self-syscall.c
@@ -20,7 +20,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 
diff --git a/tools/testing/selftests/rseq/rseq.h 
b/tools/testing/selftests/rseq/rseq.h
index c72eb70..6c1126e7 100644
--- a/tools/testing/selftests/rseq/rseq.h
+++ b/tools/testing/selftests/rseq/rseq.h
@@ -16,7 +16,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 
 /*
diff --git a/tools/testing/selftests/timers/skew_consistency.c 
b/tools/testing/selftests/timers/skew_consistency.c
index 022b711..8066be9 100644
--- a/tools/testing/selftests/timers/skew_consistency.c
+++ b/tools/testing/selftests/timers/skew_consistency.c
@@ -32,7 +32,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include "../kselftest.h"
diff --git a/tools/testing/selftests/x86/mpx-dig.c 
b/tools/testing/selftests/x86/mpx-dig.c
index c13607e..880fbf6 100644
--- a/tools/testing/selftests/x86/mpx-dig.c
+++ b/tools/testing/selftests/x86/mpx-dig.c
@@ -8,9 +8,7 @@
 #include 
 #include 
 #include 
-#include 
 #include 
-#include 
 #include 
 #include 
 #include 
-- 
1.9.1



Re: [RESEND PATCH 3/7] mm/gup: Change GUP fast to use flags rather than a write 'bool'

2019-02-20 Thread Souptick Joarder
Hi Ira,

On Wed, Feb 20, 2019 at 11:01 AM  wrote:
>
> From: Ira Weiny 
>
> To facilitate additional options to get_user_pages_fast() change the
> singular write parameter to be gup_flags.
>
> This patch does not change any functionality.  New functionality will
> follow in subsequent patches.
>
> Some of the get_user_pages_fast() call sites were unchanged because they
> already passed FOLL_WRITE or 0 for the write parameter.
>
> Signed-off-by: Ira Weiny 
> ---
>  arch/mips/mm/gup.c | 11 ++-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c|  4 ++--
>  arch/powerpc/kvm/e500_mmu.c|  2 +-
>  arch/powerpc/mm/mmu_context_iommu.c|  4 ++--
>  arch/s390/kvm/interrupt.c  |  2 +-
>  arch/s390/mm/gup.c | 12 ++--
>  arch/sh/mm/gup.c   | 11 ++-
>  arch/sparc/mm/gup.c|  9 +
>  arch/x86/kvm/paging_tmpl.h |  2 +-
>  arch/x86/kvm/svm.c |  2 +-
>  drivers/fpga/dfl-afu-dma-region.c  |  2 +-
>  drivers/gpu/drm/via/via_dmablit.c  |  3 ++-
>  drivers/infiniband/hw/hfi1/user_pages.c|  3 ++-
>  drivers/misc/genwqe/card_utils.c   |  2 +-
>  drivers/misc/vmw_vmci/vmci_host.c  |  2 +-
>  drivers/misc/vmw_vmci/vmci_queue_pair.c|  6 --
>  drivers/platform/goldfish/goldfish_pipe.c  |  3 ++-
>  drivers/rapidio/devices/rio_mport_cdev.c   |  4 +++-
>  drivers/sbus/char/oradax.c |  2 +-
>  drivers/scsi/st.c  |  3 ++-
>  drivers/staging/gasket/gasket_page_table.c |  4 ++--
>  drivers/tee/tee_shm.c  |  2 +-
>  drivers/vfio/vfio_iommu_spapr_tce.c|  3 ++-
>  drivers/vhost/vhost.c  |  2 +-
>  drivers/video/fbdev/pvr2fb.c   |  2 +-
>  drivers/virt/fsl_hypervisor.c  |  2 +-
>  drivers/xen/gntdev.c   |  2 +-
>  fs/orangefs/orangefs-bufmap.c  |  2 +-
>  include/linux/mm.h |  4 ++--
>  kernel/futex.c |  2 +-
>  lib/iov_iter.c |  7 +--
>  mm/gup.c   | 10 +-
>  mm/util.c  |  8 
>  net/ceph/pagevec.c |  2 +-
>  net/rds/info.c |  2 +-
>  net/rds/rdma.c |  3 ++-
>  36 files changed, 81 insertions(+), 65 deletions(-)
>
> diff --git a/arch/mips/mm/gup.c b/arch/mips/mm/gup.c
> index 0d14e0d8eacf..4c2b4483683c 100644
> --- a/arch/mips/mm/gup.c
> +++ b/arch/mips/mm/gup.c
> @@ -235,7 +235,7 @@ int __get_user_pages_fast(unsigned long start, int 
> nr_pages, int write,
>   * get_user_pages_fast() - pin user pages in memory
>   * @start: starting user address
>   * @nr_pages:  number of pages from start to pin
> - * @write: whether pages will be written to
> + * @gup_flags: flags modifying pin behaviour
>   * @pages: array that receives pointers to the pages pinned.
>   * Should be at least nr_pages long.
>   *
> @@ -247,8 +247,8 @@ int __get_user_pages_fast(unsigned long start, int 
> nr_pages, int write,
>   * requested. If nr_pages is 0 or negative, returns 0. If no pages
>   * were pinned, returns -errno.
>   */
> -int get_user_pages_fast(unsigned long start, int nr_pages, int write,
> -   struct page **pages)
> +int get_user_pages_fast(unsigned long start, int nr_pages,
> +   unsigned int gup_flags, struct page **pages)
>  {
> struct mm_struct *mm = current->mm;
> unsigned long addr, len, end;
> @@ -273,7 +273,8 @@ int get_user_pages_fast(unsigned long start, int 
> nr_pages, int write,
> next = pgd_addr_end(addr, end);
> if (pgd_none(pgd))
> goto slow;
> -   if (!gup_pud_range(pgd, addr, next, write, pages, &nr))
> +   if (!gup_pud_range(pgd, addr, next, gup_flags & FOLL_WRITE,
> +  pages, &nr))
> goto slow;
> } while (pgdp++, addr = next, addr != end);
> local_irq_enable();
> @@ -289,7 +290,7 @@ int get_user_pages_fast(unsigned long start, int 
> nr_pages, int write,
> pages += nr;
>
> ret = get_user_pages_unlocked(start, (end - start) >> PAGE_SHIFT,
> - pages, write ? FOLL_WRITE : 0);
> + pages, gup_flags);
>
> /* Have to be a bit careful with return values */
> if (nr > 0) {
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
> b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index bd2dcfbf00cd..8fcb0a921e46 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -582,7 +582,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
> struct kvm_vcpu *vcpu,
> /* 

Re: [PATCH] powerpc/kernel/time: Remove duplicate header

2019-01-28 Thread Souptick Joarder
On Mon, Jan 28, 2019 at 9:41 PM Brajeswar Ghosh
 wrote:
>
> Remove linux/rtc.h which is included more than once
>
> Signed-off-by: Brajeswar Ghosh 

Acked-by: Souptick Joarder 

> ---
>  arch/powerpc/kernel/time.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
> index 3646affae963..bc0503ef9c9c 100644
> --- a/arch/powerpc/kernel/time.c
> +++ b/arch/powerpc/kernel/time.c
> @@ -57,7 +57,6 @@
>  #include 
>  #include 
>  #include 
> -#include <linux/rtc.h>
>  #include 
>  #include 
>  #include 
> --
> 2.17.1
>


Re: [PATCH] powerpc/powernv: Remove duplicate header

2019-01-18 Thread Souptick Joarder
On Thu, Jan 17, 2019 at 9:40 PM Sabyasachi Gupta
 wrote:
>
> Remove linux/printk.h which is included more than once.
>
> Signed-off-by: Sabyasachi Gupta 

Acked-by: Souptick Joarder 

> ---
>  arch/powerpc/platforms/powernv/opal.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/arch/powerpc/platforms/powernv/opal.c 
> b/arch/powerpc/platforms/powernv/opal.c
> index beed86f..802de0d 100644
> --- a/arch/powerpc/platforms/powernv/opal.c
> +++ b/arch/powerpc/platforms/powernv/opal.c
> @@ -26,7 +26,6 @@
>  #include 
>  #include 
>  #include 
> -#include <linux/printk.h>
>  #include 
>  #include 
>  #include 
> --
> 2.7.4
>


Re: [PATCH] powerpc/cell: Remove duplicate header

2019-01-17 Thread Souptick Joarder
On Thu, Jan 17, 2019 at 9:49 PM Sabyasachi Gupta
 wrote:
>
> Remove linux/syscalls.h which is included more than once
>
> Signed-off-by: Sabyasachi Gupta 

Acked-by: Souptick Joarder 

> ---
>  arch/powerpc/platforms/cell/spu_syscalls.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/arch/powerpc/platforms/cell/spu_syscalls.c 
> b/arch/powerpc/platforms/cell/spu_syscalls.c
> index 263413a..b95d6af 100644
> --- a/arch/powerpc/platforms/cell/spu_syscalls.c
> +++ b/arch/powerpc/platforms/cell/spu_syscalls.c
> @@ -26,7 +26,6 @@
>  #include 
>  #include 
>  #include 
> -#include <linux/syscalls.h>
>
>  #include 
>
> --
> 2.7.4
>


Re: [PATCH] arch/powerpc: Use dma_zalloc_coherent

2018-11-16 Thread Souptick Joarder
Hi Joe,

On Fri, Nov 16, 2018 at 12:55 AM Joe Perches  wrote:
>
> On Thu, 2018-11-15 at 23:29 +0530, Sabyasachi Gupta wrote:
> > On Mon, Nov 5, 2018 at 8:58 AM Sabyasachi Gupta
> >  wrote:
> > > Replaced dma_alloc_coherent + memset with dma_zalloc_coherent
> > >
> > > Signed-off-by: Sabyasachi Gupta 
> >
> > Any comment on this patch?
>
> It's obviously correct.
>
> You might realign the arguments on the next lines
> to the open parenthesis.
>
> Perhaps there should be new function calls
> added for symmetry to the other alloc functions
> for multiplication overflow protection.
>
> Perhaps:
>
> void *dma_alloc_array_coherent()
> void *dma_calloc_coherent()
>
> Something like
> ---
>  include/linux/dma-mapping.h | 19 +++
>  1 file changed, 19 insertions(+)
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 15bd41447025..95bebf8883b1 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -565,6 +565,25 @@ static inline void *dma_alloc_coherent(struct device 
> *dev, size_t size,
> (gfp & __GFP_NOWARN) ? DMA_ATTR_NO_WARN : 0);
>  }
>
> +static inline void *dma_alloc_array_coherent(struct device *dev,
> +size_t n, size_t size,
> +dma_addr_t *dma_handle, gfp_t 
> gfp)
> +{
> +   size_t bytes;
> +
> +   if (unlikely(check_mul_overflow(n, size, &bytes)))
> +   return NULL;
> +   return dma_alloc_coherent(dev, bytes, dma_handle, gfp);
> +}
> +
> +static inline void *dma_calloc_coherent(struct device *dev,
> +   size_t n, size_t size,
> +   dma_addr_t *dma_handle, gfp_t gfp)
> +{
> +   return dma_alloc_array_coherent(dev, n, size, dma_handle,
> +   gfp | __GFP_ZERO);
> +}
> +

If I understood correctly, you are talking about adding these two new inline
functions. We can do that.

Can you please help me understand who the consumers of these two new inlines
would be?
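
For illustration only, and assuming Joe's proposed helpers above: the pasemi
ring allocation quoted further down allocates an array of ring_size u64
slots that must be zeroed, so it looks like a natural consumer. A
hypothetical conversion:

    /* Overflow-checked array allocation, zeroed via __GFP_ZERO */
    chan->ring_virt = dma_calloc_coherent(&dma_pdev->dev,
                                          ring_size, sizeof(u64),
                                          &chan->ring_dma, GFP_KERNEL);
    if (!chan->ring_virt)
            return -ENOMEM;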

>  static inline void dma_free_coherent(struct device *dev, size_t size,
> void *cpu_addr, dma_addr_t dma_handle)
>  {
>
> ---
> > > diff --git a/arch/powerpc/platforms/pasemi/dma_lib.c 
> > > b/arch/powerpc/platforms/pasemi/dma_lib.c
> []
> > > @@ -255,15 +255,13 @@ int pasemi_dma_alloc_ring(struct pasemi_dmachan 
> > > *chan, int ring_size)
> > >
> > > chan->ring_size = ring_size;
> > >
> > > -   chan->ring_virt = dma_alloc_coherent(&dma_pdev->dev,
> > > +   chan->ring_virt = dma_zalloc_coherent(&dma_pdev->dev,
> > >  ring_size * sizeof(u64),
> > >  &chan->ring_dma, GFP_KERNEL);
> > > if (!chan->ring_virt)
> > > return -ENOMEM;
> > >
> > > -   memset(chan->ring_virt, 0, ring_size * sizeof(u64));
> > > -
> > > return 0;
> > >  }
> > >  EXPORT_SYMBOL(pasemi_dma_alloc_ring);
>
>


Re: misc: ocxl: Change return type for fault handler

2018-07-13 Thread Souptick Joarder
On Wed, Jul 11, 2018 at 6:54 PM, Michael Ellerman
 wrote:
> On Mon, 2018-06-11 at 20:29:04 UTC, Souptick Joarder wrote:
>> Use new return type vm_fault_t for fault handler. For
>> now, this is just documenting that the function returns
>> a VM_FAULT value rather than an errno. Once all instances
>> are converted, vm_fault_t will become a distinct type.
>>
>> Ref-> commit 1c8f422059ae ("mm: change return type to vm_fault_t")
>>
>> There is an existing bug where vm_insert_pfn() can return
>> ENOMEM, which was ignored and VM_FAULT_NOPAGE returned as
>> the default. The new inline vmf_insert_pfn() removes this
>> inefficiency by returning the correct vm_fault_t type.
>>
>> Signed-off-by: Souptick Joarder 
>> Acked-by: Andrew Donnellan 
>> Acked-by: Frederic Barrat 
>
> Applied to powerpc next, thanks.
>
> https://git.kernel.org/powerpc/c/a545cf032d11437ed86e62f00d4991
>
> cheers

Thanks :)


Re: [PATCH] misc: ocxl: Change return type for fault handler

2018-06-18 Thread Souptick Joarder
On Thu, Jun 14, 2018 at 9:36 PM, Frederic Barrat  wrote:
>
>
> Le 11/06/2018 à 22:29, Souptick Joarder a écrit :
>>
>> Use new return type vm_fault_t for fault handler. For
>> now, this is just documenting that the function returns
>> a VM_FAULT value rather than an errno. Once all instances
>> are converted, vm_fault_t will become a distinct type.
>>
>> Ref-> commit 1c8f422059ae ("mm: change return type to vm_fault_t")
>>
>> There is an existing bug where vm_insert_pfn() can return
>> ENOMEM, which was ignored and VM_FAULT_NOPAGE returned as
>> the default. The new inline vmf_insert_pfn() removes this
>> inefficiency by returning the correct vm_fault_t type.
>>
>> Signed-off-by: Souptick Joarder 
>> ---
>
>
> Thanks!
>
> Tested and
> Acked-by: Frederic Barrat 
>
>

Frederic, is this patch queued for 4.19?


[PATCH v2] mm: convert return type of handle_mm_fault() caller to vm_fault_t

2018-06-17 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

Ref-> commit 1c8f422059ae ("mm: change return type to vm_fault_t")

In this patch all the caller of handle_mm_fault()
are changed to return vm_fault_t type.

Signed-off-by: Souptick Joarder 
---
v2: Fixed kbuild error

 arch/alpha/mm/fault.c |  3 ++-
 arch/arc/mm/fault.c   |  4 +++-
 arch/arm/mm/fault.c   |  7 ---
 arch/arm64/mm/fault.c |  6 +++---
 arch/hexagon/mm/vm_fault.c|  2 +-
 arch/ia64/mm/fault.c  |  2 +-
 arch/m68k/mm/fault.c  |  4 ++--
 arch/microblaze/mm/fault.c|  2 +-
 arch/mips/mm/fault.c  |  2 +-
 arch/nds32/mm/fault.c |  2 +-
 arch/nios2/mm/fault.c |  2 +-
 arch/openrisc/mm/fault.c  |  2 +-
 arch/parisc/mm/fault.c|  2 +-
 arch/powerpc/include/asm/copro.h  |  4 +++-
 arch/powerpc/mm/copro_fault.c |  2 +-
 arch/powerpc/mm/fault.c   |  7 ---
 arch/powerpc/platforms/cell/spufs/fault.c |  2 +-
 arch/riscv/mm/fault.c |  3 ++-
 arch/s390/mm/fault.c  | 13 -
 arch/sh/mm/fault.c|  4 ++--
 arch/sparc/mm/fault_32.c  |  3 ++-
 arch/sparc/mm/fault_64.c  |  3 ++-
 arch/um/kernel/trap.c |  2 +-
 arch/unicore32/mm/fault.c |  9 +
 arch/x86/mm/fault.c   |  5 +++--
 arch/xtensa/mm/fault.c|  2 +-
 drivers/iommu/amd_iommu_v2.c  |  2 +-
 drivers/iommu/intel-svm.c |  4 +++-
 drivers/misc/cxl/fault.c  |  2 +-
 drivers/misc/ocxl/link.c  |  3 ++-
 mm/hmm.c  |  8 
 mm/ksm.c  |  2 +-
 32 files changed, 69 insertions(+), 51 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index cd3c572..2a979ee 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -87,7 +87,8 @@
struct vm_area_struct * vma;
struct mm_struct *mm = current->mm;
const struct exception_table_entry *fixup;
-   int fault, si_code = SEGV_MAPERR;
+   int si_code = SEGV_MAPERR;
+   vm_fault_t fault;
siginfo_t info;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index a0b7bd6..3a18d33 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -66,7 +67,8 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
siginfo_t info;
-   int fault, ret;
+   int ret;
+   vm_fault_t fault;
int write = regs->ecr_cause & ECR_C_PROTV_STORE;  /* ST/EX */
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index b75eada..758abcb 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -219,12 +219,12 @@ static inline bool access_error(unsigned int fsr, struct 
vm_area_struct *vma)
return vma->vm_flags & mask ? false : true;
 }
 
-static int __kprobes
+static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
unsigned int flags, struct task_struct *tsk)
 {
struct vm_area_struct *vma;
-   int fault;
+   vm_fault_t fault;
 
vma = find_vma(mm, addr);
fault = VM_FAULT_BADMAP;
@@ -259,7 +259,8 @@ static inline bool access_error(unsigned int fsr, struct 
vm_area_struct *vma)
 {
struct task_struct *tsk;
struct mm_struct *mm;
-   int fault, sig, code;
+   int sig, code;
+   vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
if (notify_page_fault(regs, fsr))
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2af3dd8..8da263b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -371,12 +371,12 @@ static void do_bad_area(unsigned long addr, unsigned int 
esr, struct pt_regs *re
 #define VM_FAULT_BADMAP0x01
 #define VM_FAULT_BADACCESS 0x02
 
-static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
+static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
   unsigned int mm_flags, unsigned long vm_flags,
   struct task_struct *tsk)
 {
struct vm_ar

[PATCH] mm: convert return type of handle_mm_fault() caller to vm_fault_t

2018-06-14 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

Ref-> commit 1c8f422059ae ("mm: change return type to vm_fault_t")

In this patch all the caller of handle_mm_fault()
are changed to return vm_fault_t type.

Signed-off-by: Souptick Joarder 
---
 arch/alpha/mm/fault.c |  3 ++-
 arch/arc/mm/fault.c   |  4 +++-
 arch/arm/mm/fault.c   |  7 ---
 arch/arm64/mm/fault.c |  6 +++---
 arch/hexagon/mm/vm_fault.c|  2 +-
 arch/ia64/mm/fault.c  |  2 +-
 arch/m68k/mm/fault.c  |  4 ++--
 arch/microblaze/mm/fault.c|  2 +-
 arch/mips/mm/fault.c  |  2 +-
 arch/nds32/mm/fault.c |  2 +-
 arch/nios2/mm/fault.c |  2 +-
 arch/openrisc/mm/fault.c  |  2 +-
 arch/parisc/mm/fault.c|  2 +-
 arch/powerpc/mm/copro_fault.c |  2 +-
 arch/powerpc/mm/fault.c   |  7 ---
 arch/riscv/mm/fault.c |  3 ++-
 arch/s390/mm/fault.c  | 13 -
 arch/sh/mm/fault.c|  4 ++--
 arch/sparc/mm/fault_32.c  |  3 ++-
 arch/sparc/mm/fault_64.c  |  3 ++-
 arch/um/kernel/trap.c |  2 +-
 arch/unicore32/mm/fault.c |  9 +
 arch/x86/mm/fault.c   |  5 +++--
 arch/xtensa/mm/fault.c|  2 +-
 drivers/iommu/amd_iommu_v2.c  |  2 +-
 drivers/iommu/intel-svm.c |  4 +++-
 mm/hmm.c  |  8 
 mm/ksm.c  |  2 +-
 28 files changed, 62 insertions(+), 47 deletions(-)

diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index cd3c572..2a979ee 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -87,7 +87,8 @@
struct vm_area_struct * vma;
struct mm_struct *mm = current->mm;
const struct exception_table_entry *fixup;
-   int fault, si_code = SEGV_MAPERR;
+   int si_code = SEGV_MAPERR;
+   vm_fault_t fault;
siginfo_t info;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index a0b7bd6..3a18d33 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -66,7 +67,8 @@ void do_page_fault(unsigned long address, struct pt_regs 
*regs)
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm;
siginfo_t info;
-   int fault, ret;
+   int ret;
+   vm_fault_t fault;
int write = regs->ecr_cause & ECR_C_PROTV_STORE;  /* ST/EX */
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index b75eada..758abcb 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -219,12 +219,12 @@ static inline bool access_error(unsigned int fsr, struct 
vm_area_struct *vma)
return vma->vm_flags & mask ? false : true;
 }
 
-static int __kprobes
+static vm_fault_t __kprobes
 __do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
unsigned int flags, struct task_struct *tsk)
 {
struct vm_area_struct *vma;
-   int fault;
+   vm_fault_t fault;
 
vma = find_vma(mm, addr);
fault = VM_FAULT_BADMAP;
@@ -259,7 +259,8 @@ static inline bool access_error(unsigned int fsr, struct 
vm_area_struct *vma)
 {
struct task_struct *tsk;
struct mm_struct *mm;
-   int fault, sig, code;
+   int sig, code;
+   vm_fault_t fault;
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
if (notify_page_fault(regs, fsr))
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2af3dd8..8da263b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -371,12 +371,12 @@ static void do_bad_area(unsigned long addr, unsigned int 
esr, struct pt_regs *re
 #define VM_FAULT_BADMAP0x01
 #define VM_FAULT_BADACCESS 0x02
 
-static int __do_page_fault(struct mm_struct *mm, unsigned long addr,
+static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
   unsigned int mm_flags, unsigned long vm_flags,
   struct task_struct *tsk)
 {
struct vm_area_struct *vma;
-   int fault;
+   vm_fault_t fault;
 
vma = find_vma(mm, addr);
fault = VM_FAULT_BADMAP;
@@ -419,7 +419,7 @@ static int __kprobes do_page_fault(unsigned long addr, 
unsigned int esr,
struct task_struct *tsk;
struct mm_struct *mm;
struct siginfo si;
-   int fault, major = 0;
+   vm_fault_t fault, major = 0;
unsigned long vm_flags = VM_READ | VM_WRITE;
unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_faul

[PATCH] misc: ocxl: Change return type for fault handler

2018-06-11 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

Ref-> commit 1c8f422059ae ("mm: change return type to vm_fault_t")

There is an existing bug where vm_insert_pfn() can return
ENOMEM, which was ignored and VM_FAULT_NOPAGE returned as
the default. The new inline vmf_insert_pfn() removes this
inefficiency by returning the correct vm_fault_t type.

Signed-off-by: Souptick Joarder 
---
 drivers/misc/ocxl/context.c | 22 +++---
 drivers/misc/ocxl/sysfs.c   |  5 ++---
 2 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
index 909e880..98daf91 100644
--- a/drivers/misc/ocxl/context.c
+++ b/drivers/misc/ocxl/context.c
@@ -83,7 +83,7 @@ int ocxl_context_attach(struct ocxl_context *ctx, u64 amr)
return rc;
 }
 
-static int map_afu_irq(struct vm_area_struct *vma, unsigned long address,
+static vm_fault_t map_afu_irq(struct vm_area_struct *vma, unsigned long 
address,
u64 offset, struct ocxl_context *ctx)
 {
u64 trigger_addr;
@@ -92,15 +92,15 @@ static int map_afu_irq(struct vm_area_struct *vma, unsigned 
long address,
if (!trigger_addr)
return VM_FAULT_SIGBUS;
 
-   vm_insert_pfn(vma, address, trigger_addr >> PAGE_SHIFT);
-   return VM_FAULT_NOPAGE;
+   return vmf_insert_pfn(vma, address, trigger_addr >> PAGE_SHIFT);
 }
 
-static int map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
+static vm_fault_t map_pp_mmio(struct vm_area_struct *vma, unsigned long 
address,
u64 offset, struct ocxl_context *ctx)
 {
u64 pp_mmio_addr;
int pasid_off;
+   vm_fault_t ret;
 
if (offset >= ctx->afu->config.pp_mmio_stride)
return VM_FAULT_SIGBUS;
@@ -118,27 +118,27 @@ static int map_pp_mmio(struct vm_area_struct *vma, 
unsigned long address,
pasid_off * ctx->afu->config.pp_mmio_stride +
offset;
 
-   vm_insert_pfn(vma, address, pp_mmio_addr >> PAGE_SHIFT);
+   ret = vmf_insert_pfn(vma, address, pp_mmio_addr >> PAGE_SHIFT);
mutex_unlock(&ctx->status_mutex);
-   return VM_FAULT_NOPAGE;
+   return ret;
 }
 
-static int ocxl_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t ocxl_mmap_fault(struct vm_fault *vmf)
 {
struct vm_area_struct *vma = vmf->vma;
struct ocxl_context *ctx = vma->vm_file->private_data;
u64 offset;
-   int rc;
+   vm_fault_t ret;
 
offset = vmf->pgoff << PAGE_SHIFT;
pr_debug("%s: pasid %d address 0x%lx offset 0x%llx\n", __func__,
ctx->pasid, vmf->address, offset);
 
if (offset < ctx->afu->irq_base_offset)
-   rc = map_pp_mmio(vma, vmf->address, offset, ctx);
+   ret = map_pp_mmio(vma, vmf->address, offset, ctx);
else
-   rc = map_afu_irq(vma, vmf->address, offset, ctx);
-   return rc;
+   ret = map_afu_irq(vma, vmf->address, offset, ctx);
+   return ret;
 }
 
 static const struct vm_operations_struct ocxl_vmops = {
diff --git a/drivers/misc/ocxl/sysfs.c b/drivers/misc/ocxl/sysfs.c
index d9753a1..0ab1fd1 100644
--- a/drivers/misc/ocxl/sysfs.c
+++ b/drivers/misc/ocxl/sysfs.c
@@ -64,7 +64,7 @@ static ssize_t global_mmio_read(struct file *filp, struct 
kobject *kobj,
return count;
 }
 
-static int global_mmio_fault(struct vm_fault *vmf)
+static vm_fault_t global_mmio_fault(struct vm_fault *vmf)
 {
struct vm_area_struct *vma = vmf->vma;
struct ocxl_afu *afu = vma->vm_private_data;
@@ -75,8 +75,7 @@ static int global_mmio_fault(struct vm_fault *vmf)
 
offset = vmf->pgoff;
offset += (afu->global_mmio_start >> PAGE_SHIFT);
-   vm_insert_pfn(vma, vmf->address, offset);
-   return VM_FAULT_NOPAGE;
+   return vmf_insert_pfn(vma, vmf->address, offset);
 }
 
 static const struct vm_operations_struct global_mmio_vmops = {
-- 
1.9.1



Re: [PATCH v2] powerpc: kvm: Change return type to vm_fault_t

2018-05-16 Thread Souptick Joarder
On Wed, May 16, 2018 at 12:38 PM, Paul Mackerras <pau...@ozlabs.org> wrote:
> On Wed, May 16, 2018 at 10:11:11AM +0530, Souptick Joarder wrote:
>> On Thu, May 10, 2018 at 11:57 PM, Souptick Joarder <jrdr.li...@gmail.com> 
>> wrote:
>> > Use new return type vm_fault_t for fault handler
>> > in struct vm_operations_struct. For now, this is
>> > just documenting that the function returns a
>> > VM_FAULT value rather than an errno.  Once all
>> > instances are converted, vm_fault_t will become
>> > a distinct type.
>> >
>> > commit 1c8f422059ae ("mm: change return type to
>> > vm_fault_t")
>> >
>> > Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
>> > ---
>> > v2: Updated the change log
>> >
>> >  arch/powerpc/kvm/book3s_64_vio.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/arch/powerpc/kvm/book3s_64_vio.c 
>> > b/arch/powerpc/kvm/book3s_64_vio.c
>> > index 4dffa61..346ac0d 100644
>> > --- a/arch/powerpc/kvm/book3s_64_vio.c
>> > +++ b/arch/powerpc/kvm/book3s_64_vio.c
>> > @@ -237,7 +237,7 @@ static void release_spapr_tce_table(struct rcu_head 
>> > *head)
>> > kfree(stt);
>> >  }
>> >
>> > -static int kvm_spapr_tce_fault(struct vm_fault *vmf)
>> > +static vm_fault_t kvm_spapr_tce_fault(struct vm_fault *vmf)
>> >  {
>> > struct kvmppc_spapr_tce_table *stt = 
>> > vmf->vma->vm_file->private_data;
>> > struct page *page;
>> > --
>> > 1.9.1
>> >
>>
>> If no comment, we would like to get this patch in queue
>> for 4.18.
>
> It looks fine - I'll queue it up.
>
> Paul.

Thanks Paul :)


Re: [PATCH v2] powerpc: kvm: Change return type to vm_fault_t

2018-05-15 Thread Souptick Joarder
On Thu, May 10, 2018 at 11:57 PM, Souptick Joarder <jrdr.li...@gmail.com> wrote:
> Use new return type vm_fault_t for fault handler
> in struct vm_operations_struct. For now, this is
> just documenting that the function returns a
> VM_FAULT value rather than an errno.  Once all
> instances are converted, vm_fault_t will become
> a distinct type.
>
> commit 1c8f422059ae ("mm: change return type to
> vm_fault_t")
>
> Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
> ---
> v2: Updated the change log
>
>  arch/powerpc/kvm/book3s_64_vio.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kvm/book3s_64_vio.c 
> b/arch/powerpc/kvm/book3s_64_vio.c
> index 4dffa61..346ac0d 100644
> --- a/arch/powerpc/kvm/book3s_64_vio.c
> +++ b/arch/powerpc/kvm/book3s_64_vio.c
> @@ -237,7 +237,7 @@ static void release_spapr_tce_table(struct rcu_head *head)
> kfree(stt);
>  }
>
> -static int kvm_spapr_tce_fault(struct vm_fault *vmf)
> +static vm_fault_t kvm_spapr_tce_fault(struct vm_fault *vmf)
>  {
> struct kvmppc_spapr_tce_table *stt = vmf->vma->vm_file->private_data;
> struct page *page;
> --
> 1.9.1
>

If there are no comments, we would like to get this patch queued
for 4.18.


Re: [PATCH v2] powerpc: platform: cell: spufs: Change return type to vm_fault_t

2018-05-15 Thread Souptick Joarder
On Thu, May 10, 2018 at 8:35 PM, Souptick Joarder <jrdr.li...@gmail.com> wrote:
> On Sat, Apr 21, 2018 at 3:04 AM, Matthew Wilcox <wi...@infradead.org> wrote:
>> On Fri, Apr 20, 2018 at 11:02:39PM +0530, Souptick Joarder wrote:
>>> Use new return type vm_fault_t for fault handler. For
>>> now, this is just documenting that the function returns
>>> a VM_FAULT value rather than an errno. Once all instances
>>> are converted, vm_fault_t will become a distinct type.
>>>
>>> Reference id -> 1c8f422059ae ("mm: change return type to
>>> vm_fault_t")
>>>
>>> We are fixing a minor bug, that the error from vm_insert_
>>> pfn() was being ignored and the effect of this is likely
>>> to be only felt in OOM situations.
>>>
>>> Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
>>
>> Reviewed-by: Matthew Wilcox <mawil...@microsoft.com>
>
> Any further comment on this patch ?

If there are no further comments, we would like to get this patch
queued for 4.18.


[PATCH v2] powerpc: kvm: Change return type to vm_fault_t

2018-05-10 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler
in struct vm_operations_struct. For now, this is
just documenting that the function returns a 
VM_FAULT value rather than an errno.  Once all
instances are converted, vm_fault_t will become
a distinct type.

commit 1c8f422059ae ("mm: change return type to
vm_fault_t")

Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
---
v2: Updated the change log

 arch/powerpc/kvm/book3s_64_vio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 4dffa61..346ac0d 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -237,7 +237,7 @@ static void release_spapr_tce_table(struct rcu_head *head)
kfree(stt);
 }

-static int kvm_spapr_tce_fault(struct vm_fault *vmf)
+static vm_fault_t kvm_spapr_tce_fault(struct vm_fault *vmf)
 {
struct kvmppc_spapr_tce_table *stt = vmf->vma->vm_file->private_data;
struct page *page;
--
1.9.1



Re: [PATCH] kvm: Change return type to vm_fault_t

2018-05-10 Thread Souptick Joarder
On Thu, Apr 19, 2018 at 7:26 PM, Cornelia Huck <coh...@redhat.com> wrote:
> On Thu, 19 Apr 2018 00:49:58 +0530
> Souptick Joarder <jrdr.li...@gmail.com> wrote:
>
>> Use new return type vm_fault_t for fault handler. For
>> now, this is just documenting that the function returns
>> a VM_FAULT value rather than an errno. Once all instances
>> are converted, vm_fault_t will become a distinct type.
>>
>> commit 1c8f422059ae ("mm: change return type to vm_fault_t")
>>
>> Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
>> Reviewed-by: Matthew Wilcox <mawil...@microsoft.com>
>> ---
>>  arch/mips/kvm/mips.c   | 2 +-
>>  arch/powerpc/kvm/powerpc.c | 2 +-
>>  arch/s390/kvm/kvm-s390.c   | 2 +-
>>  arch/x86/kvm/x86.c | 2 +-
>>  include/linux/kvm_host.h   | 2 +-
>>  virt/kvm/arm/arm.c | 2 +-
>>  virt/kvm/kvm_main.c| 2 +-
>>  7 files changed, 7 insertions(+), 7 deletions(-)
>
> Reviewed-by: Cornelia Huck <coh...@redhat.com>

If there are no further comments, we would like to get this
patch queued for 4.18.


Re: [PATCH v2] powerpc: platform: cell: spufs: Change return type to vm_fault_t

2018-05-10 Thread Souptick Joarder
On Sat, Apr 21, 2018 at 3:04 AM, Matthew Wilcox <wi...@infradead.org> wrote:
> On Fri, Apr 20, 2018 at 11:02:39PM +0530, Souptick Joarder wrote:
>> Use new return type vm_fault_t for fault handler. For
>> now, this is just documenting that the function returns
>> a VM_FAULT value rather than an errno. Once all instances
>> are converted, vm_fault_t will become a distinct type.
>>
>> Reference id -> 1c8f422059ae ("mm: change return type to
>> vm_fault_t")
>>
>> We are fixing a minor bug, that the error from vm_insert_
>> pfn() was being ignored and the effect of this is likely
>> to be only felt in OOM situations.
>>
>> Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
>
> Reviewed-by: Matthew Wilcox <mawil...@microsoft.com>

Any further comments on this patch?


[PATCH v2] powerpc: platform: cell: spufs: Change return type to vm_fault_t

2018-04-20 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

Reference id -> 1c8f422059ae ("mm: change return type to
vm_fault_t")

We are fixing a minor bug: the error from vm_insert_pfn()
was being ignored, and the effect of this is likely to be
felt only in OOM situations.
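
Why this is mostly an OOM-visible fix: vmf_insert_pfn() folds the
old errno into a fault code, roughly as in the sketch below (a
simplified illustration of the mm helper, not part of this patch):

    static vm_fault_t demo_vmf_insert_pfn(struct vm_area_struct *vma,
                                          unsigned long addr, unsigned long pfn)
    {
            int err = vm_insert_pfn(vma, addr, pfn);

            if (err == -ENOMEM)
                    return VM_FAULT_OOM;    /* previously this was silently lost */
            if (err < 0 && err != -EBUSY)
                    return VM_FAULT_SIGBUS;
            return VM_FAULT_NOPAGE;         /* success, or pfn already mapped */
    }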

Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
---
 arch/powerpc/platforms/cell/spufs/file.c | 33 +---
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/platforms/cell/spufs/file.c 
b/arch/powerpc/platforms/cell/spufs/file.c
index 469bdd0..43e7b93 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -232,12 +232,13 @@ static ssize_t spufs_attr_write(struct file *file, const 
char __user *buf,
return size;
 }
 
-static int
+static vm_fault_t
 spufs_mem_mmap_fault(struct vm_fault *vmf)
 {
struct vm_area_struct *vma = vmf->vma;
struct spu_context *ctx = vma->vm_file->private_data;
unsigned long pfn, offset;
+   vm_fault_t ret;
 
offset = vmf->pgoff << PAGE_SHIFT;
if (offset >= LS_SIZE)
@@ -256,11 +257,11 @@ static ssize_t spufs_attr_write(struct file *file, const 
char __user *buf,
vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
pfn = (ctx->spu->local_store_phys + offset) >> PAGE_SHIFT;
}
-   vm_insert_pfn(vma, vmf->address, pfn);
+   ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
spu_release(ctx);
 
-   return VM_FAULT_NOPAGE;
+   return ret;
 }
 
 static int spufs_mem_mmap_access(struct vm_area_struct *vma,
@@ -312,13 +313,14 @@ static int spufs_mem_mmap(struct file *file, struct 
vm_area_struct *vma)
.mmap   = spufs_mem_mmap,
 };
 
-static int spufs_ps_fault(struct vm_fault *vmf,
+static vm_fault_t spufs_ps_fault(struct vm_fault *vmf,
unsigned long ps_offs,
unsigned long ps_size)
 {
struct spu_context *ctx = vmf->vma->vm_file->private_data;
unsigned long area, offset = vmf->pgoff << PAGE_SHIFT;
-   int ret = 0;
+   int err = 0;
+   vm_fault_t ret = VM_FAULT_NOPAGE;
 
spu_context_nospu_trace(spufs_ps_fault__enter, ctx);
 
@@ -349,25 +351,26 @@ static int spufs_ps_fault(struct vm_fault *vmf,
if (ctx->state == SPU_STATE_SAVED) {
up_read(&current->mm->mmap_sem);
spu_context_nospu_trace(spufs_ps_fault__sleep, ctx);
-   ret = spufs_wait(ctx->run_wq, ctx->state == SPU_STATE_RUNNABLE);
+   err = spufs_wait(ctx->run_wq, ctx->state == SPU_STATE_RUNNABLE);
spu_context_trace(spufs_ps_fault__wake, ctx, ctx->spu);
down_read(&current->mm->mmap_sem);
} else {
area = ctx->spu->problem_phys + ps_offs;
-   vm_insert_pfn(vmf->vma, vmf->address, (area + offset) >> 
PAGE_SHIFT);
+   ret = vmf_insert_pfn(vmf->vma, vmf->address,
+   (area + offset) >> PAGE_SHIFT);
spu_context_trace(spufs_ps_fault__insert, ctx, ctx->spu);
}
 
-   if (!ret)
+   if (!err)
spu_release(ctx);
 
 refault:
put_spu_context(ctx);
-   return VM_FAULT_NOPAGE;
+   return ret;
 }
 
 #if SPUFS_MMAP_4K
-static int spufs_cntl_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t spufs_cntl_mmap_fault(struct vm_fault *vmf)
 {
return spufs_ps_fault(vmf, 0x4000, SPUFS_CNTL_MAP_SIZE);
 }
@@ -1040,7 +1043,7 @@ static ssize_t spufs_signal1_write(struct file *file, 
const char __user *buf,
return 4;
 }
 
-static int
+static vm_fault_t
 spufs_signal1_mmap_fault(struct vm_fault *vmf)
 {
 #if SPUFS_SIGNAL_MAP_SIZE == 0x1000
@@ -1178,7 +1181,7 @@ static ssize_t spufs_signal2_write(struct file *file, 
const char __user *buf,
 }
 
 #if SPUFS_MMAP_4K
-static int
+static vm_fault_t
 spufs_signal2_mmap_fault(struct vm_fault *vmf)
 {
 #if SPUFS_SIGNAL_MAP_SIZE == 0x1000
@@ -1307,7 +1310,7 @@ static u64 spufs_signal2_type_get(struct spu_context *ctx)
   spufs_signal2_type_set, "%llu\n", SPU_ATTR_ACQUIRE);
 
 #if SPUFS_MMAP_4K
-static int
+static vm_fault_t
 spufs_mss_mmap_fault(struct vm_fault *vmf)
 {
return spufs_ps_fault(vmf, 0x, SPUFS_MSS_MAP_SIZE);
@@ -1369,7 +1372,7 @@ static int spufs_mss_open(struct inode *inode, struct 
file *file)
.llseek  = no_llseek,
 };
 
-static int
+static vm_fault_t
 spufs_psmap_mmap_fault(struct vm_fault *vmf)
 {
return spufs_ps_fault(vmf, 0x, SPUFS_PS_MAP_SIZE);
@@ -1429,7 +1432,7 @@ static int

Re: [PATCH] powerpc: platform: cell: spufs: Change return type to vm_fault_t

2018-04-18 Thread Souptick Joarder
On Thu, Apr 19, 2018 at 12:57 AM, Matthew Wilcox <wi...@infradead.org> wrote:
> On Thu, Apr 19, 2018 at 12:34:15AM +0530, Souptick Joarder wrote:
>> > Re-reading spufs_ps_fault(), I wouldn't change anything inside it.  Just
>> > change its return type to vm_fault_t and call it done.
>>
>> In that case, return value of spufs_wait() has to changed
>> to VM_FAULT_ type and we end with changing all the
>> references where spufs_wait() is called. I think we shouldn't
>> go with that approach. That's the reason I introduce inline
>> vmf_handle_error() and convert err to VM_FAULT_ type.
>
> No, don't change the type of 'ret' or spufs_wait.  Just do this:
>
> -static int spufs_ps_fault(struct vm_fault *vmf,
> +static vm_fault_t spufs_ps_fault(struct vm_fault *vmf,
> unsigned long ps_offs,
> unsigned long ps_size)
>

Agreed, but vm_insert_pfn() should be replaced with the new
vmf_insert_pfn(), right?
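
That is the approach v2 of the patch ends up taking; reduced to a
sketch (the spufs specifics are replaced with stand-in helpers and
parameters), the err/ret split looks like this:

    static int demo_wait_runnable(void)  { return 0; } /* stand-in: spufs_wait() */
    static void demo_release(void)       { }           /* stand-in: spu_release() */

    static vm_fault_t demo_ps_fault(struct vm_fault *vmf, unsigned long pfn,
                                    bool ctx_saved)
    {
            int err = 0;                      /* errno space: spufs_wait()      */
            vm_fault_t ret = VM_FAULT_NOPAGE; /* fault-code space: return value */

            if (ctx_saved)
                    err = demo_wait_runnable();
            else
                    ret = vmf_insert_pfn(vmf->vma, vmf->address, pfn);

            if (!err)
                    demo_release();
            return ret;
    }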


[PATCH] kvm: Change return type to vm_fault_t

2018-04-18 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

commit 1c8f422059ae ("mm: change return type to vm_fault_t")
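
The shape of the change, as a sketch (file placement per the
diffstat below; this is not the literal diff): the prototype in the
shared header and every arch implementation move to vm_fault_t
together, and the arch hook is only the fallback for offsets the
generic kvm_vcpu_fault() does not handle:

    /* include/linux/kvm_host.h: the shared prototype */
    vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);

    /* typical arch implementation: nothing arch-specific to mmap here,
     * so any unhandled offset faults with SIGBUS */
    vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
    {
            return VM_FAULT_SIGBUS;
    }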

Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
Reviewed-by: Matthew Wilcox <mawil...@microsoft.com>
---
 arch/mips/kvm/mips.c   | 2 +-
 arch/powerpc/kvm/powerpc.c | 2 +-
 arch/s390/kvm/kvm-s390.c   | 2 +-
 arch/x86/kvm/x86.c | 2 +-
 include/linux/kvm_host.h   | 2 +-
 virt/kvm/arm/arm.c | 2 +-
 virt/kvm/kvm_main.c| 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 2549fdd..03e0e0f 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -1076,7 +1076,7 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, 
struct kvm_fpu *fpu)
return -ENOIOCTLCMD;
 }

-int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 {
return VM_FAULT_SIGBUS;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 403e642..3099dee 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1825,7 +1825,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r;
 }

-int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 {
return VM_FAULT_SIGBUS;
 }
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index ba4c709..24af487 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -3941,7 +3941,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r;
 }

-int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 {
 #ifdef CONFIG_KVM_S390_UCONTROL
if ((vmf->pgoff == KVM_S390_SIE_PAGE_OFFSET)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c8a0b54..95d8102 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3827,7 +3827,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
return r;
 }

-int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 {
return VM_FAULT_SIGBUS;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ac0062b..8eeb062 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -736,7 +736,7 @@ long kvm_arch_dev_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg);
 long kvm_arch_vcpu_ioctl(struct file *filp,
 unsigned int ioctl, unsigned long arg);
-int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf);

 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext);

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 86941f6..6c8cc31 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -163,7 +163,7 @@ int kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu)
return 0;
 }

-int kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
+vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)
 {
return VM_FAULT_SIGBUS;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4501e65..45eb54b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2341,7 +2341,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool 
yield_to_kernel_mode)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);

-static int kvm_vcpu_fault(struct vm_fault *vmf)
+static vm_fault_t kvm_vcpu_fault(struct vm_fault *vmf)
 {
struct kvm_vcpu *vcpu = vmf->vma->vm_file->private_data;
struct page *page;
--
1.9.1



Re: [PATCH] powerpc: platform: cell: spufs: Change return type to vm_fault_t

2018-04-18 Thread Souptick Joarder
On Wed, Apr 18, 2018 at 2:17 AM, Matthew Wilcox <wi...@infradead.org> wrote:
> On Wed, Apr 18, 2018 at 12:50:38AM +0530, Souptick Joarder wrote:
>> Use new return type vm_fault_t for fault handler. For
>> now, this is just documenting that the function returns
>> a VM_FAULT value rather than an errno. Once all instances
>> are converted, vm_fault_t will become a distinct type.
>>
>> Reference id -> 1c8f422059ae ("mm: change return type to
>> vm_fault_t")
>>
>> Previously vm_insert_pfn() returns err but driver returns
>> VM_FAULT_NOPAGE as default. The new function vmf_insert_pfn()
>> will replace this inefficiency by returning correct VM_FAULT_*
>> type.
>>
>> vmf_handle_error is a inline wrapper function which
>> will convert error number to vm_fault_t type err.
>
> I think you sent the wrong version of this one ...
>
> The commit message should mention that we're fixing a minor bug, that
> the error from vm_insert_pfn() was being ignored and the effect of this
> is likely to be only felt in OOM situations.

Ok, I will add this.
>
>> @@ -256,11 +257,11 @@ static ssize_t spufs_attr_write(struct file *file, 
>> const char __user *buf,
>>   vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
>>   pfn = (ctx->spu->local_store_phys + offset) >> PAGE_SHIFT;
>>   }
>> - vm_insert_pfn(vma, vmf->address, pfn);
>> + ret = vmf_insert_pfn(vma, vmf->address, pfn);
>>
>>   spu_release(ctx);
>>
>> - return VM_FAULT_NOPAGE;
>> + return ret;
>>  }
>
> I thought I said not to introduce vmf_handle_error(), because it's too
> trivial and obfuscates what's actually going on.
>
>> -static int spufs_ps_fault(struct vm_fault *vmf,
>> +static inline vm_fault_t vmf_handle_error(int err)
>> +{
>> + return VM_FAULT_NOPAGE;
>> +}
>> +
>
> Re-reading spufs_ps_fault(), I wouldn't change anything inside it.  Just
> change its return type to vm_fault_t and call it done.

In that case, the return value of spufs_wait() would have to be
changed to a VM_FAULT_ type, and we would end up changing all the
references where spufs_wait() is called. I think we shouldn't
go with that approach. That's the reason I introduced the inline
vmf_handle_error() to convert err to a VM_FAULT_ type.


[PATCH] powerpc: platform: cell: spufs: Change return type to vm_fault_t

2018-04-17 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

Reference id -> 1c8f422059ae ("mm: change return type to
vm_fault_t")

Previously vm_insert_pfn() returned an error, but the driver
returned VM_FAULT_NOPAGE by default. The new function
vmf_insert_pfn() removes this inefficiency by returning the
correct VM_FAULT_* code.

vmf_handle_error() is an inline wrapper function which
converts an error number into a vm_fault_t return value.

Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
Reviewed-by: Matthew Wilcox <mawil...@microsoft.com>
---
 arch/powerpc/platforms/cell/spufs/file.c | 37 
 1 file changed, 23 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/platforms/cell/spufs/file.c 
b/arch/powerpc/platforms/cell/spufs/file.c
index 469bdd0..a1dca9a 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -232,12 +232,13 @@ static ssize_t spufs_attr_write(struct file *file, const 
char __user *buf,
return size;
 }
 
-static int
+static vm_fault_t
 spufs_mem_mmap_fault(struct vm_fault *vmf)
 {
struct vm_area_struct *vma = vmf->vma;
struct spu_context *ctx = vma->vm_file->private_data;
unsigned long pfn, offset;
+   vm_fault_t ret;
 
offset = vmf->pgoff << PAGE_SHIFT;
if (offset >= LS_SIZE)
@@ -256,11 +257,11 @@ static ssize_t spufs_attr_write(struct file *file, const 
char __user *buf,
vma->vm_page_prot = pgprot_noncached_wc(vma->vm_page_prot);
pfn = (ctx->spu->local_store_phys + offset) >> PAGE_SHIFT;
}
-   vm_insert_pfn(vma, vmf->address, pfn);
+   ret = vmf_insert_pfn(vma, vmf->address, pfn);
 
spu_release(ctx);
 
-   return VM_FAULT_NOPAGE;
+   return ret;
 }
 
 static int spufs_mem_mmap_access(struct vm_area_struct *vma,
@@ -312,13 +313,19 @@ static int spufs_mem_mmap(struct file *file, struct 
vm_area_struct *vma)
.mmap   = spufs_mem_mmap,
 };
 
-static int spufs_ps_fault(struct vm_fault *vmf,
+static inline vm_fault_t vmf_handle_error(int err)
+{
+   return VM_FAULT_NOPAGE;
+}
+
+static vm_fault_t spufs_ps_fault(struct vm_fault *vmf,
unsigned long ps_offs,
unsigned long ps_size)
 {
struct spu_context *ctx = vmf->vma->vm_file->private_data;
unsigned long area, offset = vmf->pgoff << PAGE_SHIFT;
-   int ret = 0;
+   int err = 0;
+   vm_fault_t ret = VM_FAULT_NOPAGE;
 
spu_context_nospu_trace(spufs_ps_fault__enter, ctx);
 
@@ -349,12 +356,14 @@ static int spufs_ps_fault(struct vm_fault *vmf,
if (ctx->state == SPU_STATE_SAVED) {
up_read(&current->mm->mmap_sem);
spu_context_nospu_trace(spufs_ps_fault__sleep, ctx);
-   ret = spufs_wait(ctx->run_wq, ctx->state == SPU_STATE_RUNNABLE);
+   err = spufs_wait(ctx->run_wq, ctx->state == SPU_STATE_RUNNABLE);
+   ret = vmf_handle_error(err);
spu_context_trace(spufs_ps_fault__wake, ctx, ctx->spu);
down_read(&current->mm->mmap_sem);
} else {
area = ctx->spu->problem_phys + ps_offs;
-   vm_insert_pfn(vmf->vma, vmf->address, (area + offset) >> 
PAGE_SHIFT);
+   ret = vmf_insert_pfn(vmf->vma, vmf->address,
+   (area + offset) >> PAGE_SHIFT);
spu_context_trace(spufs_ps_fault__insert, ctx, ctx->spu);
}
 
@@ -363,11 +372,11 @@ static int spufs_ps_fault(struct vm_fault *vmf,
 
 refault:
put_spu_context(ctx);
-   return VM_FAULT_NOPAGE;
+   return ret;
 }
 
 #if SPUFS_MMAP_4K
-static int spufs_cntl_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t spufs_cntl_mmap_fault(struct vm_fault *vmf)
 {
return spufs_ps_fault(vmf, 0x4000, SPUFS_CNTL_MAP_SIZE);
 }
@@ -1040,7 +1049,7 @@ static ssize_t spufs_signal1_write(struct file *file, 
const char __user *buf,
return 4;
 }
 
-static int
+static vm_fault_t
 spufs_signal1_mmap_fault(struct vm_fault *vmf)
 {
 #if SPUFS_SIGNAL_MAP_SIZE == 0x1000
@@ -1178,7 +1187,7 @@ static ssize_t spufs_signal2_write(struct file *file, 
const char __user *buf,
 }
 
 #if SPUFS_MMAP_4K
-static int
+static vm_fault_t
 spufs_signal2_mmap_fault(struct vm_fault *vmf)
 {
 #if SPUFS_SIGNAL_MAP_SIZE == 0x1000
@@ -1307,7 +1316,7 @@ static u64 spufs_signal2_type_get(struct spu_context *ctx)
   spufs_signal2_type_set, "%llu\n", SPU_ATTR_ACQUIRE);
 
 #if SPUFS_MMAP_4K
-static int
+static vm_fault_t
 spufs_mss_mmap_fault(struct vm_fault *vmf)
 {
return spufs_ps_fault(vmf, 0x00

[PATCH] misc: cxl: Change return type to vm_fault_t

2018-04-17 Thread Souptick Joarder
Use new return type vm_fault_t for fault handler. For
now, this is just documenting that the function returns
a VM_FAULT value rather than an errno. Once all instances
are converted, vm_fault_t will become a distinct type.

Reference id -> 1c8f422059ae ("mm: change return type to
vm_fault_t")

Previously cxl_mmap_fault() returned VM_FAULT_NOPAGE by
default, irrespective of the vm_insert_pfn() return value.
This bug is fixed by the new vmf_insert_pfn(), which
returns a VM_FAULT_* code based on the error.
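
A sketch of the resulting pattern (hypothetical names): because the
insert happens while the context mutex is held, the vm_fault_t is
captured first and only returned after the unlock:

    static vm_fault_t demo_mmap_fault(struct vm_fault *vmf, struct mutex *lock,
                                      unsigned long pfn)
    {
            vm_fault_t ret;

            mutex_lock(lock);
            /* ... validate context state, compute the target pfn ... */
            ret = vmf_insert_pfn(vmf->vma, vmf->address, pfn);
            mutex_unlock(lock);

            return ret;     /* VM_FAULT_NOPAGE, VM_FAULT_OOM or VM_FAULT_SIGBUS */
    }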

Signed-off-by: Souptick Joarder <jrdr.li...@gmail.com>
---
 drivers/misc/cxl/context.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 7ff315a..c6ec872 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -128,11 +128,12 @@ void cxl_context_set_mapping(struct cxl_context *ctx,
mutex_unlock(>mapping_lock);
 }

-static int cxl_mmap_fault(struct vm_fault *vmf)
+static vm_fault_t cxl_mmap_fault(struct vm_fault *vmf)
 {
struct vm_area_struct *vma = vmf->vma;
struct cxl_context *ctx = vma->vm_file->private_data;
u64 area, offset;
+   vm_fault_t ret;

offset = vmf->pgoff << PAGE_SHIFT;

@@ -169,11 +170,11 @@ static int cxl_mmap_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
}

-   vm_insert_pfn(vma, vmf->address, (area + offset) >> PAGE_SHIFT);
+   ret = vmf_insert_pfn(vma, vmf->address, (area + offset) >> PAGE_SHIFT);

mutex_unlock(&ctx->status_mutex);

-   return VM_FAULT_NOPAGE;
+   return ret;
 }

 static const struct vm_operations_struct cxl_mmap_vmops = {
--
1.9.1