Re: [PATCH] powerpc: use $(origin ARCH) to select KBUILD_DEFCONFIG

2019-02-15 Thread Masahiro Yamada
On Sat, Feb 16, 2019 at 1:11 AM Mathieu Malaterre  wrote:
>
> On Fri, Feb 15, 2019 at 10:41 AM Masahiro Yamada
>  wrote:
> >
> > I often test all Kconfig commands for all architectures. To ease my
> > workflow, I want 'make defconfig' at least working without any cross
> > compiler.
> >
> > Currently, arch/powerpc/Makefile checks CROSS_COMPILE to decide the
> > default defconfig source.
> >
> > If CROSS_COMPILE is unset, it is likely to be the native build, so
> > 'uname -m' is useful to choose the defconfig. If CROSS_COMPILE is set,
> > the user is cross-building (i.e. 'uname -m' is probably x86_64), so
> > it falls back to ppc64_defconfig. Yup, makes sense.
> >
> > However, I want to run 'make ARCH=* defconfig' without setting
> > CROSS_COMPILE for each architecture.
> >
> > My suggestion is to check $(origin ARCH).
> >
> > When you cross-compile the kernel, you need to set ARCH from your
> > environment or from the command line.
> >
> > For the native build, you do not need to set ARCH. The default in
> > the top Makefile is used:
> >
> >   ARCH?= $(SUBARCH)
> >
> > Hence, $(origin ARCH) returns 'file'.
> >
> > Before this commit, 'make ARCH=powerpc defconfig' failed:
>
> In case you have not seen it, please check:
>
> http://patchwork.ozlabs.org/patch/1037835/


I did not know about it because I do not subscribe to the ppc ML.


Michael's patch looks good to me.
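(As a quick sanity check of the $(origin ARCH) behaviour quoted above, here is a throwaway demo makefile, not part of any patch:)

```shell
# Mimic the top-level "ARCH ?= $(SUBARCH)" default, then query $(origin).
printf 'ARCH ?= %s\nall:\n\t@echo origin=$(origin ARCH)\n' "$(uname -m)" > /tmp/origin-demo.mk

make -s -f /tmp/origin-demo.mk                 # prints origin=file (native build default)
make -s -f /tmp/origin-demo.mk ARCH=powerpc    # prints origin=command line
ARCH=powerpc make -s -f /tmp/origin-demo.mk    # prints origin=environment
```

So checking for 'file' distinguishes "ARCH was defaulted" from "the user asked for a specific ARCH", regardless of CROSS_COMPILE.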


If you mimic x86, the following will work:

diff --git a/Makefile b/Makefile
index 86cf35d..eb9552d 100644
--- a/Makefile
+++ b/Makefile
@@ -356,6 +356,11 @@ ifeq ($(ARCH),sh64)
SRCARCH := sh
 endif

+# Additional ARCH settings for powerpc
+ifneq ($(filter ppc%,$(ARCH)),)
+   SRCARCH := powerpc
+endif
+
 KCONFIG_CONFIG ?= .config
 export KCONFIG_CONFIG

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 488c9ed..ff01fef 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -34,10 +34,10 @@ ifdef CONFIG_PPC_BOOK3S_32
 KBUILD_CFLAGS  += -mcpu=powerpc
 endif

-ifeq ($(CROSS_COMPILE),)
-KBUILD_DEFCONFIG := $(shell uname -m)_defconfig
-else
+ifeq ($(ARCH),powerpc)
 KBUILD_DEFCONFIG := ppc64_defconfig
+else
+KBUILD_DEFCONFIG := $(ARCH)_defconfig
 endif

 ifdef CONFIG_PPC64
diff --git a/scripts/subarch.include b/scripts/subarch.include
index 6506828..c98323f 100644
--- a/scripts/subarch.include
+++ b/scripts/subarch.include
@@ -8,6 +8,6 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
  -e s/sun4u/sparc64/ \
  -e s/arm.*/arm/ -e s/sa110/arm/ \
  -e s/s390x/s390/ -e s/parisc64/parisc/ \
- -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
+ -e s/mips.*/mips/ \
  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
  -e s/riscv.*/riscv/)
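(For reference, the sed pipeline above maps 'uname -m' output to a SRCARCH name; the ppc rule being removed behaves like this, shown as throwaway shell, not part of the patch:)

```shell
# Before the patch, ppc machine names were canonicalised by subarch.include:
echo ppc64le | sed -e 's/ppc.*/powerpc/'    # prints powerpc
echo ppc64   | sed -e 's/ppc.*/powerpc/'    # prints powerpc
# After the patch, the $(filter ppc%,$(ARCH)) check added to the top-level
# Makefile takes over this mapping, so the sed rule is no longer needed.
```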

--
Best Regards
Masahiro Yamada


Re: [PATCH v5 0/3] Add NXP AUDMIX device and machine drivers

2019-02-15 Thread Nicolin Chen
On Fri, Feb 15, 2019 at 02:01:32PM +, Viorel Suman wrote:
> The patchset adds NXP Audio Mixer (AUDMIX) device and machine
> drivers and related DT bindings documentation.

For this series,

Acked-by: Nicolin Chen 

And Rob gave his at the previous version already.

Thanks.

> Changes since V4:
> 1. Removed "model" attribute from device driver DT bindings documentation
>as suggested by Nicolin.
> 
> Changes since V3:
> 1. Removed machine driver DT bindings documentation.
> 2. Trigger machine driver probe from device driver as suggested by Nicolin.
> 
> Changes since V2:
> 1. Moved "dais" node from machine driver DTS node to device driver DTS node
>   as suggested by Rob.
> 
> Changes since V1:
> 1. Original patch split into distinct patches for the device driver and
>   DT binding documentation.
> 2. Replaced AMIX with AUDMIX in both code and file names as it looks more
>   RM-compliant.
> 3. Removed polarity control from CPU DAI driver as suggested by Nicolin.
> 4. Added machine driver and related DT binding documentation.
> 
> Viorel Suman (3):
>   ASoC: fsl: Add Audio Mixer CPU DAI driver
>   ASoC: add fsl_audmix DT binding documentation
>   ASoC: fsl: Add Audio Mixer machine driver
> 
>  .../devicetree/bindings/sound/fsl,audmix.txt   |  50 ++
>  sound/soc/fsl/Kconfig  |  16 +
>  sound/soc/fsl/Makefile |   5 +
>  sound/soc/fsl/fsl_audmix.c | 578 
> +
>  sound/soc/fsl/fsl_audmix.h | 102 
>  sound/soc/fsl/imx-audmix.c | 327 
>  6 files changed, 1078 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/sound/fsl,audmix.txt
>  create mode 100644 sound/soc/fsl/fsl_audmix.c
>  create mode 100644 sound/soc/fsl/fsl_audmix.h
>  create mode 100644 sound/soc/fsl/imx-audmix.c
> 
> -- 
> 2.7.4
> 


Re: [PATCH v4 0/3] locking/rwsem: Rwsem rearchitecture part 0

2019-02-15 Thread Will Deacon
On Thu, Feb 14, 2019 at 11:37:15AM +0100, Peter Zijlstra wrote:
> On Wed, Feb 13, 2019 at 05:00:14PM -0500, Waiman Long wrote:
> > v4:
> >  - Remove rwsem-spinlock.c and make all archs use rwsem-xadd.c.
> > 
> > v3:
> >  - Optimize __down_read_trylock() for the uncontended case as suggested
> >by Linus.
> > 
> > v2:
> >  - Add patch 2 to optimize __down_read_trylock() as suggested by PeterZ.
> >  - Update performance test data in patch 1.
> > 
> > The goal of this patchset is to remove the architecture-specific files
> > for rwsem-xadd to make it easier to add enhancements in the later rwsem
> > patches. It also removes the legacy rwsem-spinlock.c file and makes all
> > the architectures use a single implementation of rwsem - rwsem-xadd.c.
> > 
> > Waiman Long (3):
> >   locking/rwsem: Remove arch specific rwsem files
> >   locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all
> > archs
> >   locking/rwsem: Optimize down_read_trylock()
> 
> Acked-by: Peter Zijlstra (Intel) 
> 
> with the caveat that I'm happy to exchange patch 3 back to my earlier
> suggestion in case Will expresses concerns wrt the ARM64 performance of
> Linus' suggestion.

Right, the current proposal doesn't work well for us, unfortunately. Which
was your earlier suggestion?

Will


Re: [PATCH v4 0/3] locking/rwsem: Rwsem rearchitecture part 0

2019-02-15 Thread Waiman Long
On 02/15/2019 01:40 PM, Will Deacon wrote:
> On Thu, Feb 14, 2019 at 11:37:15AM +0100, Peter Zijlstra wrote:
>> On Wed, Feb 13, 2019 at 05:00:14PM -0500, Waiman Long wrote:
>>> v4:
>>>  - Remove rwsem-spinlock.c and make all archs use rwsem-xadd.c.
>>>
>>> v3:
>>>  - Optimize __down_read_trylock() for the uncontended case as suggested
>>>by Linus.
>>>
>>> v2:
>>>  - Add patch 2 to optimize __down_read_trylock() as suggested by PeterZ.
>>>  - Update performance test data in patch 1.
>>>
>>> The goal of this patchset is to remove the architecture-specific files
>>> for rwsem-xadd to make it easier to add enhancements in the later rwsem
>>> patches. It also removes the legacy rwsem-spinlock.c file and makes all
>>> the architectures use a single implementation of rwsem - rwsem-xadd.c.
>>>
>>> Waiman Long (3):
>>>   locking/rwsem: Remove arch specific rwsem files
>>>   locking/rwsem: Remove rwsem-spinlock.c & use rwsem-xadd.c for all
>>> archs
>>>   locking/rwsem: Optimize down_read_trylock()
>> Acked-by: Peter Zijlstra (Intel) 
>>
>> with the caveat that I'm happy to exchange patch 3 back to my earlier
>> suggestion in case Will expresses concerns wrt the ARM64 performance of
>> Linus' suggestion.
> Right, the current proposal doesn't work well for us, unfortunately. Which
> was your earlier suggestion?
>
> Will

In my posting yesterday, I showed that most of the trylocks done were
actually uncontended. Assuming that pattern holds for most of the
workloads, it will not be that bad after all.

-Longman



Re: [PATCH] ASoC: fsl_esai: fix register setting issue in RIGHT_J mode

2019-02-15 Thread Nicolin Chen
On Fri, Feb 15, 2019 at 11:04:38AM +, S.j. Wang wrote:
> ESAI_xCR_xWA is an xCR bit, not an xCCR bit; the driver set it in the
> wrong register. Correct it.
> 
> Signed-off-by: Shengjiu Wang 

Would need this for stable kernel too.

Acked-by: Nicolin Chen 

Thanks.

> ---
>  sound/soc/fsl/fsl_esai.c | 7 ---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
> index 57b484768a58..afe67c865330 100644
> --- a/sound/soc/fsl/fsl_esai.c
> +++ b/sound/soc/fsl/fsl_esai.c
> @@ -398,7 +398,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
>   break;
>   case SND_SOC_DAIFMT_RIGHT_J:
>   /* Data on rising edge of bclk, frame high, right aligned */
> - xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
> + xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
> + xcr  |= ESAI_xCR_xWA;
>   break;
>   case SND_SOC_DAIFMT_DSP_A:
>   /* Data on rising edge of bclk, frame high, 1clk before data */
> @@ -455,12 +456,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
>   return -EINVAL;
>   }
>  
> - mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
> + mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
>   regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
>   regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
>  
>   mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
> - ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
> + ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
>   regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
>   regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
>  
> -- 
> 1.9.1
> 


Re: [PATCH v3 2/2] locking/rwsem: Optimize down_read_trylock()

2019-02-15 Thread Will Deacon
On Thu, Feb 14, 2019 at 10:09:44AM -0800, Linus Torvalds wrote:
> On Thu, Feb 14, 2019 at 9:51 AM Linus Torvalds
>  wrote:
> >
> > The arm64 numbers scaled horribly even before, and that's because
> > there is too much ping-pong, and it's probably because there is no
> > "stickiness" to the cacheline to the core, and thus adding the extra
> > loop can make the ping-pong issue even worse because now there is more
> > of it.
> 
> Actually, if it's using the ll/sc, then I don't see why arm64 should
> even change. It doesn't really even change the pattern: the initial
> load of the value is just replaced with a "ll" that gets a non-zero
> value, and then we re-try without even doing the "sc" part.

So our cmpxchg() has a prefetch-with-intent-to-modify instruction before the
'll' part, which will attempt to grab the line in the unique (exclusive)
state the first time round. The 'll' also has acquire semantics, so there's
the chance for the micro-architecture to handle that badly too.

I think that the problem with the proposed change is that whenever a
reader tries to acquire an rwsem that is already held for read, it will
always fail the first cmpxchg(), so in this situation the read path is
considerably slower than before.

> End result: exact same "load once, then do ll/sc to update". Just
> using a slightly different instruction pattern.
> 
> But maybe "ll" does something different to the cacheline than a regular "ld"?
> 
> Alternatively, the machine you used is using LSE, and the "swp" thing
> has some horrid behavior when it fails.

Depending on where the data is, the LSE instructions may execute outside of
the CPU (e.g. in a cache controller) and so could add latency to a failing
CAS.

Will
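For reference, the two trylock shapes being compared can be modelled with a toy C11 counter. This is a sketch of the access patterns only, not the kernel's rwsem code, and the function names are made up:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy reader count: >= 0 means no writer holds the lock; each reader adds 1. */

/* Optimistic shape: assume the lock is free and cmpxchg from 0.  The truly
 * uncontended case needs no separate load, but if another reader already
 * holds the lock the first cmpxchg is guaranteed to fail, which is the case
 * flagged as a problem on arm64. */
static bool trylock_read_optimistic(_Atomic long *count)
{
    long c = 0;
    do {
        if (atomic_compare_exchange_weak(count, &c, c + 1))
            return true;
        /* on failure, c now holds the observed value */
    } while (c >= 0);
    return false;
}

/* Load-first shape: read the counter, then cmpxchg from the observed value,
 * so an already-reader-held lock can succeed on the first cmpxchg. */
static bool trylock_read_loadfirst(_Atomic long *count)
{
    long c = atomic_load(count);
    while (c >= 0) {
        if (atomic_compare_exchange_weak(count, &c, c + 1))
            return true;
    }
    return false;
}
```

The optimistic shape saves the initial load when the lock is genuinely free; the load-first shape avoids the guaranteed first-CAS failure when readers already hold the lock.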


Re: [PATCH V2 0/7] Add FOLL_LONGTERM to GUP fast and use it

2019-02-15 Thread Ira Weiny
> NOTE: This series depends on my clean up patch to remove the write parameter
> from gup_fast_permitted()[1]
> 
> HFI1, qib, and mthca use get_user_pages_fast() due to its performance
> advantages.  These pages can be held for a significant time.  But
> get_user_pages_fast() does not protect against mapping of FS DAX pages.
> 
> Introduce FOLL_LONGTERM and use this flag in get_user_pages_fast() which
> retains the performance while also adding the FS DAX checks.  XDP has also
> shown interest in using this functionality.[2]
> 
> In addition we change get_user_pages() to use the new FOLL_LONGTERM flag and
> remove the specialized get_user_pages_longterm call.
> 
> [1] https://lkml.org/lkml/2019/2/11/237
> [2] https://lkml.org/lkml/2019/2/11/1789

Any comments on this series?  I've touched a lot of subsystems which I think
require review.

Thanks,
Ira

> 
> Ira Weiny (7):
>   mm/gup: Replace get_user_pages_longterm() with FOLL_LONGTERM
>   mm/gup: Change write parameter to flags in fast walk
>   mm/gup: Change GUP fast to use flags rather than a write 'bool'
>   mm/gup: Add FOLL_LONGTERM capability to GUP fast
>   IB/hfi1: Use the new FOLL_LONGTERM flag to get_user_pages_fast()
>   IB/qib: Use the new FOLL_LONGTERM flag to get_user_pages_fast()
>   IB/mthca: Use the new FOLL_LONGTERM flag to get_user_pages_fast()
> 
>  arch/mips/mm/gup.c  |  11 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c |   4 +-
>  arch/powerpc/kvm/e500_mmu.c |   2 +-
>  arch/powerpc/mm/mmu_context_iommu.c |   4 +-
>  arch/s390/kvm/interrupt.c   |   2 +-
>  arch/s390/mm/gup.c  |  12 +-
>  arch/sh/mm/gup.c|  11 +-
>  arch/sparc/mm/gup.c |   9 +-
>  arch/x86/kvm/paging_tmpl.h  |   2 +-
>  arch/x86/kvm/svm.c  |   2 +-
>  drivers/fpga/dfl-afu-dma-region.c   |   2 +-
>  drivers/gpu/drm/via/via_dmablit.c   |   3 +-
>  drivers/infiniband/core/umem.c  |   5 +-
>  drivers/infiniband/hw/hfi1/user_pages.c |   5 +-
>  drivers/infiniband/hw/mthca/mthca_memfree.c |   3 +-
>  drivers/infiniband/hw/qib/qib_user_pages.c  |   8 +-
>  drivers/infiniband/hw/qib/qib_user_sdma.c   |   2 +-
>  drivers/infiniband/hw/usnic/usnic_uiom.c|   9 +-
>  drivers/media/v4l2-core/videobuf-dma-sg.c   |   6 +-
>  drivers/misc/genwqe/card_utils.c|   2 +-
>  drivers/misc/vmw_vmci/vmci_host.c   |   2 +-
>  drivers/misc/vmw_vmci/vmci_queue_pair.c |   6 +-
>  drivers/platform/goldfish/goldfish_pipe.c   |   3 +-
>  drivers/rapidio/devices/rio_mport_cdev.c|   4 +-
>  drivers/sbus/char/oradax.c  |   2 +-
>  drivers/scsi/st.c   |   3 +-
>  drivers/staging/gasket/gasket_page_table.c  |   4 +-
>  drivers/tee/tee_shm.c   |   2 +-
>  drivers/vfio/vfio_iommu_spapr_tce.c |   3 +-
>  drivers/vfio/vfio_iommu_type1.c |   3 +-
>  drivers/vhost/vhost.c   |   2 +-
>  drivers/video/fbdev/pvr2fb.c|   2 +-
>  drivers/virt/fsl_hypervisor.c   |   2 +-
>  drivers/xen/gntdev.c|   2 +-
>  fs/orangefs/orangefs-bufmap.c   |   2 +-
>  include/linux/mm.h  |  17 +-
>  kernel/futex.c  |   2 +-
>  lib/iov_iter.c  |   7 +-
>  mm/gup.c| 220 
>  mm/gup_benchmark.c  |   5 +-
>  mm/util.c   |   8 +-
>  net/ceph/pagevec.c  |   2 +-
>  net/rds/info.c  |   2 +-
>  net/rds/rdma.c  |   3 +-
>  44 files changed, 232 insertions(+), 180 deletions(-)
> 
> -- 
> 2.20.1
> 


Re: [PATCH v3] hugetlb: allow to free gigantic pages regardless of the configuration

2019-02-15 Thread Dave Hansen
> -#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
> +#ifdef CONFIG_CONTIG_ALLOC
>  /* The below functions must be run on a range from a single zone. */
>  extern int alloc_contig_range(unsigned long start, unsigned long end,
> unsigned migratetype, gfp_t gfp_mask);
> -extern void free_contig_range(unsigned long pfn, unsigned nr_pages);
>  #endif
> +extern void free_contig_range(unsigned long pfn, unsigned int nr_pages);

There's a lot of stuff going on in this patch.  Adding/removing config
options.  Please get rid of these superfluous changes or at least break
them out.

>  #ifdef CONFIG_CMA
>  /* CMA stuff */
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 25c71eb8a7db..138a8df9b813 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -252,12 +252,17 @@ config MIGRATION
> pages as migration can relocate pages to satisfy a huge page
> allocation instead of reclaiming.
>  
> +
>  config ARCH_ENABLE_HUGEPAGE_MIGRATION
>   bool

Like this. :)

>  config ARCH_ENABLE_THP_MIGRATION
>   bool
>  
> +config CONTIG_ALLOC
> + def_bool y
> + depends on (MEMORY_ISOLATION && COMPACTION) || CMA
> +
>  config PHYS_ADDR_T_64BIT
>   def_bool 64BIT

Please think carefully though the Kconfig dependencies.  'select' is
*not* the same as 'depends on'.

This replaces a bunch of arch-specific "select ARCH_HAS_GIGANTIC_PAGE"
lines with a 'depends on'.  I *think* that ends up being OK, but the
changelog absolutely needs to explain why *you* think it is OK and why
it doesn't change the functionality of any of the patched architectures.
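For readers following along: 'select' and 'depends on' compose differently, which is why the swap needs justification (a hypothetical fragment, not from this patch):

```kconfig
# 'depends on' hides FOO until BAR is already enabled by something else;
# 'select' force-enables BAR whenever BAZ is chosen, without checking
# BAR's own dependencies.  Swapping one for the other can therefore
# silently change which configurations are reachable.
config FOO
        bool "feature gated on BAR"
        depends on BAR

config BAZ
        bool "feature that drags BAR in"
        select BAR
```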

> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index afef61656c1e..e686c92212e9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1035,7 +1035,6 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
>   ((node = hstate_next_node_to_free(hs, mask)) || 1); \
>   nr_nodes--)
>  
> -#ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
>  static void destroy_compound_gigantic_page(struct page *page,
>   unsigned int order)
>  {

Whats the result of this #ifdef removal?  A universally larger kernel
even for architectures that do not support runtime gigantic page
alloc/free?  That doesn't seem like a good thing.

> @@ -1058,6 +1057,12 @@ static void free_gigantic_page(struct page *page, unsigned int order)
>   free_contig_range(page_to_pfn(page), 1 << order);
>  }
>  
> +static inline bool gigantic_page_runtime_allocation_supported(void)
> +{
> + return IS_ENABLED(CONFIG_CONTIG_ALLOC);
> +}

Why bother having this function?  Why don't the callers just check the
config option directly?

> +#ifdef CONFIG_CONTIG_ALLOC
>  static int __alloc_gigantic_page(unsigned long start_pfn,
>   unsigned long nr_pages, gfp_t gfp_mask)
>  {
> @@ -1143,22 +1148,15 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
>  static void prep_compound_gigantic_page(struct page *page, unsigned int order);
>  
> -#else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */
> -static inline bool gigantic_page_supported(void) { return false; }
> +#else /* !CONFIG_CONTIG_ALLOC */
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>   int nid, nodemask_t *nodemask) { return NULL; }
> -static inline void free_gigantic_page(struct page *page, unsigned int order) { }
> -static inline void destroy_compound_gigantic_page(struct page *page,
> - unsigned int order) { }
>  #endif
>  
>  static void update_and_free_page(struct hstate *h, struct page *page)
>  {
>   int i;
>  
> - if (hstate_is_gigantic(h) && !gigantic_page_supported())
> - return;

I don't get the point of removing this check.  Logically, this reads as
checking if the architecture supports gigantic hstates and has nothing
to do with allocation.

>   h->nr_huge_pages--;
>   h->nr_huge_pages_node[page_to_nid(page)]--;
>   for (i = 0; i < pages_per_huge_page(h); i++) {
> @@ -2276,13 +2274,20 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
>  }
>  
>  #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> -static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
> +static int set_max_huge_pages(struct hstate *h, unsigned long count,
>   nodemask_t *nodes_allowed)
>  {
>   unsigned long min_count, ret;
>  
> - if (hstate_is_gigantic(h) && !gigantic_page_supported())
> - return h->max_huge_pages;
> + if (hstate_is_gigantic(h) &&
> + !gigantic_page_runtime_allocation_supported()) {

The indentation here is wrong and reduces readability.  Needs to be like
this:

if 

Re: [PATCH v3] hugetlb: allow to free gigantic pages regardless of the configuration

2019-02-15 Thread Vlastimil Babka
On 2/14/19 8:31 PM, Alexandre Ghiti wrote:
> On systems without CMA or (MEMORY_ISOLATION && COMPACTION) activated but
> that support gigantic pages, boottime-reserved gigantic pages cannot be
> freed at all. This patch simply enables the possibility of handing those
> pages back to the memory allocator.
> 
> This patch also renames:
> 
> - the triplet CMA or (MEMORY_ISOLATION && COMPACTION) into CONTIG_ALLOC,
> and gets rid of all use of it in architecture specific code (and then
> removes ARCH_HAS_GIGANTIC_PAGE config).
> - gigantic_page_supported to make it more accurate: this value being false
> does not mean that the system cannot use gigantic pages, it just means that
> runtime allocation of gigantic pages is not supported, one can still
> allocate boottime gigantic pages if the architecture supports it.
> 
> Signed-off-by: Alexandre Ghiti 

Acked-by: Vlastimil Babka 

Thanks!

...

> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -252,12 +252,17 @@ config MIGRATION
> pages as migration can relocate pages to satisfy a huge page
> allocation instead of reclaiming.
>  
> +

Stray newline? No need to resend, Andrew can fix up.
Ah, he wasn't in To:, adding.

>  config ARCH_ENABLE_HUGEPAGE_MIGRATION
>   bool
>  
>  config ARCH_ENABLE_THP_MIGRATION
>   bool
>  
> +config CONTIG_ALLOC
> + def_bool y
> + depends on (MEMORY_ISOLATION && COMPACTION) || CMA
> +
>  config PHYS_ADDR_T_64BIT
>   def_bool 64BIT
>  


Re: [RFC PATCH 0/5] powerpc: KASAN for 64-bit Book3E

2019-02-15 Thread Christophe Leroy

On 02/15/2019 12:04 AM, Daniel Axtens wrote:

Building on the work of Christophe, Aneesh and Balbir, I've ported
KASAN to the e6500, a 64-bit Book3E processor which doesn't have a
hashed page table. It applies on top of Christophe's series, v5.

It requires some changes to the KASAN core - please let me know if
these are problematic and we see if an alternative approach is
possible.

The KASAN shadow area is mapped into vmemmap space:
0x8000 0400   to 0x8000 0600  .
To do this we require that vmemmap be disabled. (This is the default
in the kernel config that QorIQ provides for the machine in their
SDK anyway - they use flat memory.)

Only outline instrumentation is supported and only KASAN_MINIMAL works.
Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
ioremap areas (also in 0x800...) are all mapped to a zero page. As
with the Book3S hash series, this requires overriding the memory <->
shadow mapping.

Also, as with both previous 64-bit series, early instrumentation is not
supported.

KVM, kexec and xmon have not been tested.

Thanks to those who have done the heavy lifting over the past several years:
  - Christophe's 32 bit series: 
https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
  - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
  - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/

While useful if you have a Book3E device, this is mostly intended
as a warm-up exercise for reviving Aneesh's series for book3s hash.
In particular, changes to the kasan core are going to be required
for hash and radix as well.

Regards,
Daniel


Hi Daniel,

I'll look into your series in more detail later; for now I just want to
let you know that I get a build failure:


  LD  vmlinux.o
lib/string.o: In function `memcmp':
/root/linux-powerpc/lib/string.c:857: multiple definition of `memcmp'
arch/powerpc/lib/memcmp_32.o:/root/linux-powerpc/arch/powerpc/lib/memcmp_32.S:16: first defined here

Christophe



Daniel Axtens (5):
   kasan: do not open-code addr_has_shadow
   kasan: allow architectures to manage the memory-to-shadow mapping
   kasan: allow architectures to provide an outline readiness check
   powerpc: move KASAN into its own subdirectory
   powerpc: KASAN for 64bit Book3E

  arch/powerpc/Kconfig  |  1 +
  arch/powerpc/Makefile |  2 +
  arch/powerpc/include/asm/kasan.h  | 77 +--
  arch/powerpc/include/asm/ppc_asm.h|  7 ++
  arch/powerpc/include/asm/string.h |  7 +-
  arch/powerpc/lib/mem_64.S |  6 +-
  arch/powerpc/lib/memcmp_64.S  |  5 +-
  arch/powerpc/lib/memcpy_64.S  |  3 +-
  arch/powerpc/lib/string.S | 15 ++--
  arch/powerpc/mm/Makefile  |  4 +-
  arch/powerpc/mm/kasan/Makefile|  6 ++
  .../{kasan_init.c => kasan/kasan_init_32.c}   |  0
  arch/powerpc/mm/kasan/kasan_init_book3e_64.c  | 53 +
  arch/powerpc/purgatory/Makefile   |  3 +
  arch/powerpc/xmon/Makefile|  1 +
  include/linux/kasan.h |  6 ++
  mm/kasan/generic.c|  5 +-
  mm/kasan/generic_report.c |  2 +-
  mm/kasan/kasan.h  |  6 +-
  mm/kasan/report.c |  6 +-
  mm/kasan/tags.c   |  3 +-
  21 files changed, 188 insertions(+), 30 deletions(-)
  create mode 100644 arch/powerpc/mm/kasan/Makefile
  rename arch/powerpc/mm/{kasan_init.c => kasan/kasan_init_32.c} (100%)
  create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c



Re: [PATCH 0/5] use pinned_vm instead of locked_vm to account pinned pages

2019-02-15 Thread Christopher Lameter
On Thu, 14 Feb 2019, Jason Gunthorpe wrote:

> On Thu, Feb 14, 2019 at 01:46:51PM -0800, Ira Weiny wrote:
>
> > > > > Really unclear how to fix this. The pinned/locked split with two
> > > > > buckets may be the right way.
> > > >
> > > > Are you suggesting that we have 2 user limits?
> > >
> > > This is what RDMA has done since CL's patch.
> >
> > I don't understand?  What is the other _user_ limit (other than
> > RLIMIT_MEMLOCK)?
>
> With todays implementation RLIMIT_MEMLOCK covers two user limits,
> total number of pinned pages and total number of mlocked pages. The
> two are different buckets and not summed.

Applications were failing at some point because they were effectively
summed up. If you mlocked/pinned a dataset of more than half the memory of
a system then things would get really weird.

Also there is the possibility of even more duplication because pages can
be pinned by multiple kernel subsystems. So you could get more than
doubling of the number.

The sane thing was to account them separately, so that mlocking and
pinning worked without apps failing, and then wait for another genius
to figure out how to improve the situation by getting the pinned-page
mess under control.

It is not even advisable to check pinned pages against any limit because
pages can be pinned by multiple subsystems.

The main problem here is that we only have a refcount to indicate pinning
and no way to clearly distinguish long term from short pins. In order to
really fix this issue we would need to have a list of subsystems that have
taken long term pins on a page. But doing so would waste a lot of memory
and cause a significant performance regression.

And the discussions here seem to be meandering around these issues.
Nothing really that convinces me that we have a clean solution at hand.



Re: [PATCH V2 3/10] KVM/MMU: Add last_level in the struct mmu_spte_page

2019-02-15 Thread Paolo Bonzini
On 15/02/19 16:05, Tianyu Lan wrote:
> Yes, you are right. Thanks for pointing that out; will fix. The last_level
> flag is to avoid adding middle page nodes (e.g. PGD, PMD) to the flush
> list. The address ranges would be duplicated if both leaf and middle
> nodes were added to the flush list.

Hmm, that's not easy to track.  One kvm_mmu_page could include both leaf
and non-leaf entries (for example a huge page for 0 to 2 MB and a page
table for 2 MB to 4 MB).

Is this really needed?  First, your benchmarks so far have been done
with sp->last_level always set to true.  Second, you would only
encounter this optimization in kvm_mmu_commit_zap_page when zapping a 1
GB region (which then would be invalidated twice, at both the PMD and
PGD level) or bigger.

Paolo


Re: [PATCH V2 3/10] KVM/MMU: Add last_level in the struct mmu_spte_page

2019-02-15 Thread Tianyu Lan
On Fri, Feb 15, 2019 at 12:32 AM Paolo Bonzini  wrote:
>
> On 02/02/19 02:38, lantianyu1...@gmail.com wrote:
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index ce770b446238..70cafd3f95ab 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -2918,6 +2918,9 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
> >
> >   if (level > PT_PAGE_TABLE_LEVEL)
> >   spte |= PT_PAGE_SIZE_MASK;
> > +
> > + sp->last_level = is_last_spte(spte, level);
>
> Wait, I wasn't thinking straight.  If a struct kvm_mmu_page exists, it
> is never the last level.  Page table entries for the last level do not
> have a struct kvm_mmu_page.
>
> Therefore you don't need the flag after all.  I suspect your
> calculations in patch 2 are off by one, and you actually need
>
> hlist_for_each_entry(sp, range->flush_list, flush_link) {
> int pages = KVM_PAGES_PER_HPAGE(sp->role.level + 1);
> ...
> }
>
> For example, if sp->role.level is 1 then the struct kvm_mmu_page is for
> a page containing PTEs and covers an area of 2 MiB.

Yes, you are right. Thanks for pointing that out; will fix. The last_level
flag is to avoid adding middle page nodes (e.g. PGD, PMD) to the flush
list. The address ranges would be duplicated if both leaf and middle
nodes were added to the flush list.
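Paolo's off-by-one can be sanity-checked against the 4K paging geometry (a toy calculation, not KVM code; the helper names are made up):

```c
#include <assert.h>

/* x86-64 4K paging: each table level holds 512 entries (9 bits), so a
 * huge page at 'level' covers 512^(level-1) base pages. */
#define PAGE_SHIFT 12

static unsigned long pages_per_hpage(int level)
{
    return 1UL << ((level - 1) * 9);
}

/* A struct kvm_mmu_page at role.level L holds the 512 entries of a
 * level-L table, so the region it maps is a level-(L+1) "huge page";
 * hence the '+ 1' in Paolo's suggested loop body. */
static unsigned long span_bytes(int role_level)
{
    return pages_per_hpage(role_level + 1) << PAGE_SHIFT;
}
```

So a role.level 1 page (a table of PTEs) spans 2 MiB, and a role.level 2 page (a table of PMDs) spans 1 GiB, matching the example in the quoted mail.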

>
> Thanks,
>
> Paolo
>
> >   if (tdp_enabled)
> >   spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
> >   kvm_is_mmio_pfn(pfn));
>


-- 
Best regards
Tianyu Lan


[PATCH v5 3/3] ASoC: fsl: Add Audio Mixer machine driver

2019-02-15 Thread Viorel Suman
This patch implements the Audio Mixer machine driver for NXP i.MX8 SoCs.
It connects the Audio Mixer and the related SAI instances.

Signed-off-by: Viorel Suman 
---
 sound/soc/fsl/Kconfig  |   9 ++
 sound/soc/fsl/Makefile |   2 +
 sound/soc/fsl/imx-audmix.c | 327 +
 3 files changed, 338 insertions(+)
 create mode 100644 sound/soc/fsl/imx-audmix.c

diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
index 0af2e056..d87c842 100644
--- a/sound/soc/fsl/Kconfig
+++ b/sound/soc/fsl/Kconfig
@@ -303,6 +303,15 @@ config SND_SOC_FSL_ASOC_CARD
 CS4271, CS4272 and SGTL5000.
 Say Y if you want to add support for Freescale Generic ASoC Sound Card.
 
+config SND_SOC_IMX_AUDMIX
+   tristate "SoC Audio support for i.MX boards with AUDMIX"
+   select SND_SOC_FSL_AUDMIX
+   select SND_SOC_FSL_SAI
+   help
+ SoC Audio support for i.MX boards with Audio Mixer
+ Say Y if you want to add support for SoC audio on an i.MX board with
+ an Audio Mixer.
+
 endif # SND_IMX_SOC
 
 endmenu
diff --git a/sound/soc/fsl/Makefile b/sound/soc/fsl/Makefile
index 4172d5a..c0dd044 100644
--- a/sound/soc/fsl/Makefile
+++ b/sound/soc/fsl/Makefile
@@ -62,6 +62,7 @@ snd-soc-imx-es8328-objs := imx-es8328.o
 snd-soc-imx-sgtl5000-objs := imx-sgtl5000.o
 snd-soc-imx-spdif-objs := imx-spdif.o
 snd-soc-imx-mc13783-objs := imx-mc13783.o
+snd-soc-imx-audmix-objs := imx-audmix.o
 
 obj-$(CONFIG_SND_SOC_EUKREA_TLV320) += snd-soc-eukrea-tlv320.o
 obj-$(CONFIG_SND_SOC_PHYCORE_AC97) += snd-soc-phycore-ac97.o
@@ -71,3 +72,4 @@ obj-$(CONFIG_SND_SOC_IMX_ES8328) += snd-soc-imx-es8328.o
 obj-$(CONFIG_SND_SOC_IMX_SGTL5000) += snd-soc-imx-sgtl5000.o
 obj-$(CONFIG_SND_SOC_IMX_SPDIF) += snd-soc-imx-spdif.o
 obj-$(CONFIG_SND_SOC_IMX_MC13783) += snd-soc-imx-mc13783.o
+obj-$(CONFIG_SND_SOC_IMX_AUDMIX) += snd-soc-imx-audmix.o
diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
new file mode 100644
index 000..72e37ca
--- /dev/null
+++ b/sound/soc/fsl/imx-audmix.c
@@ -0,0 +1,327 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2017 NXP
+ *
+ * The code contained herein is licensed under the GNU General Public
+ * License. You may obtain a copy of the GNU General Public License
+ * Version 2 or later at the following locations:
+ *
+ * http://www.opensource.org/licenses/gpl-license.html
+ * http://www.gnu.org/copyleft/gpl.html
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "fsl_sai.h"
+#include "fsl_audmix.h"
+
+struct imx_audmix {
+   struct platform_device *pdev;
+   struct snd_soc_card card;
+   struct platform_device *audmix_pdev;
+   struct platform_device *out_pdev;
+   struct clk *cpu_mclk;
+   int num_dai;
+   struct snd_soc_dai_link *dai;
+   int num_dai_conf;
+   struct snd_soc_codec_conf *dai_conf;
+   int num_dapm_routes;
+   struct snd_soc_dapm_route *dapm_routes;
+};
+
+static const u32 imx_audmix_rates[] = {
+   8000, 12000, 16000, 24000, 32000, 48000, 64000, 96000,
+};
+
+static const struct snd_pcm_hw_constraint_list imx_audmix_rate_constraints = {
+   .count = ARRAY_SIZE(imx_audmix_rates),
+   .list = imx_audmix_rates,
+};
+
+static int imx_audmix_fe_startup(struct snd_pcm_substream *substream)
+{
+   struct snd_soc_pcm_runtime *rtd = substream->private_data;
+   struct imx_audmix *priv = snd_soc_card_get_drvdata(rtd->card);
+   struct snd_pcm_runtime *runtime = substream->runtime;
+   struct device *dev = rtd->card->dev;
+   unsigned long clk_rate = clk_get_rate(priv->cpu_mclk);
+   int ret;
+
+   if (clk_rate % 24576000 == 0) {
+   ret = snd_pcm_hw_constraint_list(runtime, 0,
+SNDRV_PCM_HW_PARAM_RATE,
+_audmix_rate_constraints);
+   if (ret < 0)
+   return ret;
+   } else {
+   dev_warn(dev, "mclk may be not supported %lu\n", clk_rate);
+   }
+
+   ret = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_CHANNELS,
+  1, 8);
+   if (ret < 0)
+   return ret;
+
+   return snd_pcm_hw_constraint_mask64(runtime, SNDRV_PCM_HW_PARAM_FORMAT,
+   FSL_AUDMIX_FORMATS);
+}
+
+static int imx_audmix_fe_hw_params(struct snd_pcm_substream *substream,
+  struct snd_pcm_hw_params *params)
+{
+   struct snd_soc_pcm_runtime *rtd = substream->private_data;
+   struct device *dev = rtd->card->dev;
+   bool tx = substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
+   unsigned int fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_NB_NF;
+   u32 channels = params_channels(params);
+   int ret, dir;
+
+   /* For playback the AUDMIX is slave, and for record is master */
+   fmt |= tx ? SND_SOC_DAIFMT_CBS_CFS : 

[PATCH v5 2/3] ASoC: add fsl_audmix DT binding documentation

2019-02-15 Thread Viorel Suman
Add the DT binding documentation for NXP Audio Mixer
CPU DAI driver.

Signed-off-by: Viorel Suman 
---
 .../devicetree/bindings/sound/fsl,audmix.txt   | 50 ++
 1 file changed, 50 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/sound/fsl,audmix.txt

diff --git a/Documentation/devicetree/bindings/sound/fsl,audmix.txt 
b/Documentation/devicetree/bindings/sound/fsl,audmix.txt
new file mode 100644
index 000..840b7e0
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/fsl,audmix.txt
@@ -0,0 +1,50 @@
+NXP Audio Mixer (AUDMIX).
+
+The Audio Mixer is an on-chip functional module that allows mixing of two
+audio streams into a single audio stream. The Audio Mixer has two input serial
+audio interfaces. These are driven by two Synchronous Audio Interface
+modules (SAI). Each input serial interface carries 8 audio channels in its
+frame in TDM manner. The Mixer mixes audio samples of corresponding channels
+from two interfaces into a single sample. Before mixing, audio samples of
+two inputs can be attenuated based on configuration. The output of the
+Audio Mixer is also a serial audio interface. Like input interfaces it has
+the same TDM frame format. This output is used to drive the serial DAC TDM
+interface of audio codec and also sent to the external pins along with the
+receive path of normal audio SAI module for readback by the CPU.
+
+The output of Audio Mixer can be selected from any of the three streams
+ - serial audio input 1
+ - serial audio input 2
+ - mixed audio
+
+The mixing operation is independent of the audio sample rate, but the two
+audio input streams must have the same sample rate and the same number of
+channels in their TDM frames to be eligible for mixing.
+
+Device driver required properties:
+==================================
+  - compatible : Compatible list, contains "fsl,imx8qm-audmix"
+
+  - reg: Offset and length of the register set for the 
device.
+
+  - clocks : Must contain an entry for each entry in clock-names.
+
+  - clock-names: Must include the "ipg" for register access.
+
+  - power-domains  : Must contain the phandle to AUDMIX power domain node
+
+  - dais   : Must contain a list of phandles to AUDMIX connected
+ DAIs. The current implementation requires two phandles
+ to SAI interfaces to be provided, the first SAI in the
+ list being used to route the AUDMIX output.
+
+Device driver configuration example:
+====================================
+  audmix: audmix@5984 {
+compatible = "fsl,imx8qm-audmix";
+reg = <0x0 0x5984 0x0 0x1>;
+clocks = < IMX8QXP_AUD_AUDMIX_IPG>;
+clock-names = "ipg";
+power-domains = <_audmix>;
+dais = <>, <>;
+  };
-- 
2.7.4



[PATCH v5 1/3] ASoC: fsl: Add Audio Mixer CPU DAI driver

2019-02-15 Thread Viorel Suman
This patch implements Audio Mixer CPU DAI driver for NXP iMX8 SOCs.
The Audio Mixer is an on-chip functional module that allows mixing of
two audio streams into a single audio stream.

Audio Mixer datasheet is available here:
https://www.nxp.com/docs/en/reference-manual/IMX8DQXPRM.pdf

Signed-off-by: Viorel Suman 
---
 sound/soc/fsl/Kconfig  |   7 +
 sound/soc/fsl/Makefile |   3 +
 sound/soc/fsl/fsl_audmix.c | 578 +
 sound/soc/fsl/fsl_audmix.h | 102 
 4 files changed, 690 insertions(+)
 create mode 100644 sound/soc/fsl/fsl_audmix.c
 create mode 100644 sound/soc/fsl/fsl_audmix.h

diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
index 7b1d997..0af2e056 100644
--- a/sound/soc/fsl/Kconfig
+++ b/sound/soc/fsl/Kconfig
@@ -24,6 +24,13 @@ config SND_SOC_FSL_SAI
  This option is only useful for out-of-tree drivers since
  in-tree drivers select it automatically.
 
+config SND_SOC_FSL_AUDMIX
+   tristate "Audio Mixer (AUDMIX) module support"
+   select REGMAP_MMIO
+   help
+ Say Y if you want to add Audio Mixer (AUDMIX)
+ support for the NXP iMX CPUs.
+
 config SND_SOC_FSL_SSI
tristate "Synchronous Serial Interface module (SSI) support"
select SND_SOC_IMX_PCM_DMA if SND_IMX_SOC != n
diff --git a/sound/soc/fsl/Makefile b/sound/soc/fsl/Makefile
index 3c0ff31..4172d5a 100644
--- a/sound/soc/fsl/Makefile
+++ b/sound/soc/fsl/Makefile
@@ -12,6 +12,7 @@ snd-soc-p1022-rdk-objs := p1022_rdk.o
 obj-$(CONFIG_SND_SOC_P1022_RDK) += snd-soc-p1022-rdk.o
 
 # Freescale SSI/DMA/SAI/SPDIF Support
+snd-soc-fsl-audmix-objs := fsl_audmix.o
 snd-soc-fsl-asoc-card-objs := fsl-asoc-card.o
 snd-soc-fsl-asrc-objs := fsl_asrc.o fsl_asrc_dma.o
 snd-soc-fsl-sai-objs := fsl_sai.o
@@ -22,6 +23,8 @@ snd-soc-fsl-esai-objs := fsl_esai.o
 snd-soc-fsl-micfil-objs := fsl_micfil.o
 snd-soc-fsl-utils-objs := fsl_utils.o
 snd-soc-fsl-dma-objs := fsl_dma.o
+
+obj-$(CONFIG_SND_SOC_FSL_AUDMIX) += snd-soc-fsl-audmix.o
 obj-$(CONFIG_SND_SOC_FSL_ASOC_CARD) += snd-soc-fsl-asoc-card.o
 obj-$(CONFIG_SND_SOC_FSL_ASRC) += snd-soc-fsl-asrc.o
 obj-$(CONFIG_SND_SOC_FSL_SAI) += snd-soc-fsl-sai.o
diff --git a/sound/soc/fsl/fsl_audmix.c b/sound/soc/fsl/fsl_audmix.c
new file mode 100644
index 000..07b72a3
--- /dev/null
+++ b/sound/soc/fsl/fsl_audmix.c
@@ -0,0 +1,578 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NXP AUDMIX ALSA SoC Digital Audio Interface (DAI) driver
+ *
+ * Copyright 2017 NXP
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "fsl_audmix.h"
+
+#define SOC_ENUM_SINGLE_S(xreg, xshift, xtexts) \
+   SOC_ENUM_SINGLE(xreg, xshift, ARRAY_SIZE(xtexts), xtexts)
+
+static const char
+   *tdm_sel[] = { "TDM1", "TDM2", },
+   *mode_sel[] = { "Disabled", "TDM1", "TDM2", "Mixed", },
+   *width_sel[] = { "16b", "18b", "20b", "24b", "32b", },
+   *endis_sel[] = { "Disabled", "Enabled", },
+   *updn_sel[] = { "Downward", "Upward", },
+   *mask_sel[] = { "Unmask", "Mask", };
+
+static const struct soc_enum fsl_audmix_enum[] = {
+/* FSL_AUDMIX_CTR enums */
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_MIXCLK_SHIFT, tdm_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_OUTSRC_SHIFT, mode_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_OUTWIDTH_SHIFT, width_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_MASKRTDF_SHIFT, mask_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_MASKCKDF_SHIFT, mask_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_SYNCMODE_SHIFT, endis_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_CTR, FSL_AUDMIX_CTR_SYNCSRC_SHIFT, tdm_sel),
+/* FSL_AUDMIX_ATCR0 enums */
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_ATCR0, 0, endis_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_ATCR0, 1, updn_sel),
+/* FSL_AUDMIX_ATCR1 enums */
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_ATCR1, 0, endis_sel),
+SOC_ENUM_SINGLE_S(FSL_AUDMIX_ATCR1, 1, updn_sel),
+};
+
+struct fsl_audmix_state {
+   u8 tdms;
+   u8 clk;
+   char msg[64];
+};
+
+static const struct fsl_audmix_state prms[4][4] = {{
+   /* DIS->DIS, do nothing */
+   { .tdms = 0, .clk = 0, .msg = "" },
+   /* DIS->TDM1*/
+   { .tdms = 1, .clk = 1, .msg = "DIS->TDM1: TDM1 not started!\n" },
+   /* DIS->TDM2*/
+   { .tdms = 2, .clk = 2, .msg = "DIS->TDM2: TDM2 not started!\n" },
+   /* DIS->MIX */
+   { .tdms = 3, .clk = 0, .msg = "DIS->MIX: Please start both TDMs!\n" }
+}, {   /* TDM1->DIS */
+   { .tdms = 1, .clk = 0, .msg = "TDM1->DIS: TDM1 not started!\n" },
+   /* TDM1->TDM1, do nothing */
+   { .tdms = 0, .clk = 0, .msg = "" },
+   /* TDM1->TDM2 */
+   { .tdms = 3, .clk = 2, .msg = "TDM1->TDM2: Please start both TDMs!\n" },
+   /* TDM1->MIX */
+   { .tdms = 3, .clk = 0, .msg = "TDM1->MIX: Please start both TDMs!\n" }
+}, {   /* TDM2->DIS */
+   { .tdms = 2, .clk = 0, .msg = "TDM2->DIS: TDM2 not started!\n" },
+   /* TDM2->TDM1 */
+   { .tdms = 3, .clk 

[PATCH v5 0/3] Add NXP AUDMIX device and machine drivers

2019-02-15 Thread Viorel Suman
The patchset adds NXP Audio Mixer (AUDMIX) device and machine
drivers and related DT bindings documentation.

Changes since V4:
1. Removed "model" attribute from device driver DT bindings documentation
   as suggested by Nicolin.

Changes since V3:
1. Removed machine driver DT bindings documentation.
2. Trigger machine driver probe from device driver as suggested by Nicolin.

Changes since V2:
1. Moved "dais" node from machine driver DTS node to device driver DTS node
  as suggested by Rob.

Changes since V1:
1. Original patch split into distinct patches for the device driver and
  DT binding documentation.
2. Replaced AMIX with AUDMIX in both code and file names as it looks more
  RM-compliant.
3. Removed polarity control from CPU DAI driver as suggested by Nicolin.
4. Added machine driver and related DT binding documentation.

Viorel Suman (3):
  ASoC: fsl: Add Audio Mixer CPU DAI driver
  ASoC: add fsl_audmix DT binding documentation
  ASoC: fsl: Add Audio Mixer machine driver

 .../devicetree/bindings/sound/fsl,audmix.txt   |  50 ++
 sound/soc/fsl/Kconfig  |  16 +
 sound/soc/fsl/Makefile |   5 +
 sound/soc/fsl/fsl_audmix.c | 578 +
 sound/soc/fsl/fsl_audmix.h | 102 
 sound/soc/fsl/imx-audmix.c | 327 
 6 files changed, 1078 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/sound/fsl,audmix.txt
 create mode 100644 sound/soc/fsl/fsl_audmix.c
 create mode 100644 sound/soc/fsl/fsl_audmix.h
 create mode 100644 sound/soc/fsl/imx-audmix.c

-- 
2.7.4



[PATCH] ASoC: fsl_esai: fix register setting issue in RIGHT_J mode

2019-02-15 Thread S.j. Wang
ESAI_xCR_xWA is a bit of the xCR register, not of xCCR. The driver set it
in the wrong register; correct that.

Signed-off-by: Shengjiu Wang 
---
 sound/soc/fsl/fsl_esai.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
index 57b484768a58..afe67c865330 100644
--- a/sound/soc/fsl/fsl_esai.c
+++ b/sound/soc/fsl/fsl_esai.c
@@ -398,7 +398,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, 
unsigned int fmt)
break;
case SND_SOC_DAIFMT_RIGHT_J:
/* Data on rising edge of bclk, frame high, right aligned */
-   xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
+   xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
+   xcr  |= ESAI_xCR_xWA;
break;
case SND_SOC_DAIFMT_DSP_A:
/* Data on rising edge of bclk, frame high, 1clk before data */
@@ -455,12 +456,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, 
unsigned int fmt)
return -EINVAL;
}
 
-   mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
+   mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
 
mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
-   ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
+   ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
 
-- 
1.9.1
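As an aside, the fix above works because of the masking semantics of regmap_update_bits(): only the bits present in the mask are taken from the new value, everything else in the register is preserved. A tiny Python sketch of that behaviour (bit positions here are placeholders, not the real ESAI register layout):

```python
def update_bits(reg, mask, val):
    """Sketch of regmap_update_bits(): write only the bits in `mask`."""
    return (reg & ~mask) | (val & mask)

xWA = 1 << 7  # hypothetical bit position, NOT the real ESAI_xCR_xWA layout
reg = 0xFF

# If xWA is wrongly included in the xCCR mask (and absent from the value),
# the update clears that bit in the wrong register:
assert update_bits(reg, xWA, 0) == 0x7F

# With xWA dropped from the mask, the register bit is left untouched:
assert update_bits(reg, 0, 0) == 0xFF
```

This is why the patch both moves ESAI_xCR_xWA into the xCR mask/value and removes it from the xCCR mask.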



Re: Kernel panic when loading the IDE controller driver

2019-02-15 Thread sgosavi1
Hi,

> Hopefully there are examples of passing these values through ACPI.


Are you suggesting here to look at the ide-acpi.c source file available as
part of the driver code? In my original post I mentioned that I have
modified the ide-generic.c source file to use the IO port addresses and IRQ
number that we used in the older kernel. I also added code to the kernel
under arch/powerpc/mm/init_32.c to configure the addresses required
by the driver as IO ports. But the error I am getting continues to
suggest that the required address range is probably still not set up by the
kernel as IO ports.


How can we configure a set of virtual address as IO ports in the kernel
version 4.15.13?


Sachin 



--
Sent from: http://linuxppc.10917.n7.nabble.com/linuxppc-dev-f3.html


Re: [PATCH] powerpc: use $(origin ARCH) to select KBUILD_DEFCONFIG

2019-02-15 Thread Mathieu Malaterre
On Fri, Feb 15, 2019 at 10:41 AM Masahiro Yamada
 wrote:
>
> I often test all Kconfig commands for all architectures. To ease my
> workflow, I want 'make defconfig' at least working without any cross
> compiler.
>
> Currently, arch/powerpc/Makefile checks CROSS_COMPILE to decide the
> default defconfig source.
>
> If CROSS_COMPILE is unset, it is likely to be the native build, so
> 'uname -m' is useful to choose the defconfig. If CROSS_COMPILE is set,
> the user is cross-building (i.e. 'uname -m' is probably x86_64), so
> it falls back to ppc64_defconfig. Yup, makes sense.
>
> However, I want to run 'make ARCH=* defconfig' without setting
> CROSS_COMPILE for each architecture.
>
> My suggestion is to check $(origin ARCH).
>
> When you cross-compile the kernel, you need to set ARCH from your
> environment or from the command line.
>
> For the native build, you do not need to set ARCH. The default in
> the top Makefile is used:
>
>   ARCH?= $(SUBARCH)
>
> Hence, $(origin ARCH) returns 'file'.
>
> Before this commit, 'make ARCH=powerpc defconfig' failed:

In case you have not seen it, please check:

http://patchwork.ozlabs.org/patch/1037835/

>   $ make ARCH=powerpc defconfig
>   *** Default configuration is based on target 'x86_64_defconfig'
>   ***
>   *** Can't find default configuration 
> "arch/powerpc/configs/x86_64_defconfig"!
>   ***
>
> After this commit, it will succeed:
>
>   $ make ARCH=powerpc defconfig
>   *** Default configuration is based on 'ppc64_defconfig'
>   #
>   # configuration written to .config
>   #
>
> Signed-off-by: Masahiro Yamada 
> ---
>
>  arch/powerpc/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
> index ac03334..f989979 100644
> --- a/arch/powerpc/Makefile
> +++ b/arch/powerpc/Makefile
> @@ -34,7 +34,7 @@ ifdef CONFIG_PPC_BOOK3S_32
>  KBUILD_CFLAGS  += -mcpu=powerpc
>  endif
>
> -ifeq ($(CROSS_COMPILE),)
> +ifeq ($(origin ARCH), file)
>  KBUILD_DEFCONFIG := $(shell uname -m)_defconfig
>  else
>  KBUILD_DEFCONFIG := ppc64_defconfig
> --
> 2.7.4
>


Re: [PATCH v5 3/3] powerpc/32: Add KASAN support

2019-02-15 Thread Andrey Ryabinin



On 2/15/19 1:10 PM, Christophe Leroy wrote:
> 
> 
> Le 15/02/2019 à 11:01, Andrey Ryabinin a écrit :
>>
>>
>> On 2/15/19 11:41 AM, Christophe Leroy wrote:
>>>
>>>
>>> Le 14/02/2019 à 23:04, Daniel Axtens a écrit :
 Hi Christophe,

> --- a/arch/powerpc/include/asm/string.h
> +++ b/arch/powerpc/include/asm/string.h
> @@ -27,6 +27,20 @@ extern int memcmp(const void *,const void 
> *,__kernel_size_t);
>    extern void * memchr(const void *,int,__kernel_size_t);
>    extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
>    +void *__memset(void *s, int c, __kernel_size_t count);
> +void *__memcpy(void *to, const void *from, __kernel_size_t n);
> +void *__memmove(void *to, const void *from, __kernel_size_t n);
> +
> +#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
> +/*
> + * For files that are not instrumented (e.g. mm/slub.c) we
> + * should use not instrumented version of mem* functions.
> + */
> +#define memcpy(dst, src, len) __memcpy(dst, src, len)
> +#define memmove(dst, src, len) __memmove(dst, src, len)
> +#define memset(s, c, n) __memset(s, c, n)
> +#endif
> +

 I'm finding that I miss tests like 'kasan test: kasan_memcmp
 out-of-bounds in memcmp' because the uninstrumented asm version is used
 instead of an instrumented C version. I ended up guarding the relevant
 __HAVE_ARCH_x symbols behind a #ifndef CONFIG_KASAN and only exporting
 the arch versions if we're not compiled with KASAN.

 I find I need to guard and unexport strncpy, strncmp, memchr and
 memcmp. Do you need to do this on 32bit as well, or are those tests
 passing anyway for some reason?
>>>
>>> Indeed, I didn't try the KASAN test module recently, because my configs 
>>> don't have CONFIG_MODULE by default.
>>>
>>> Trying to test it now, I am discovering that module loading oopses with 
>>> latest version of my series, I need to figure out exactly why. Here below 
>>> the oops by modprobing test_module (the one supposed to just say hello to 
>>> the world).
>>>
>>> What we see is an access to the RO kasan zero area.
>>>
>>> The shadow mem is 0xf7c0..0xffc0
>>> Linear kernel memory is shadowed by 0xf7c0-0xf8bf
>>> 0xf8c0-0xffc0 is shadowed read only by the kasan zero page.
>>>
>>> Why is kasan trying to access that ? Isn't kasan supposed to not check 
>>> stuff in vmalloc area ?
>>
>> It tries to poison global variables in modules. If a module is in vmalloc, 
>> then it will try to poison vmalloc.
>> Given that the vmalloc area is not so big on 32bits, the easiest solution is 
>> to cover all vmalloc with RW shadow.
>>
> 
> Euh ... Not so big ?
> 
> Memory: 96448K/131072K available (8016K kernel code, 1680K rwdata
> , 2720K rodata, 624K init, 4678K bss, 34624K reserved, 0K cma-reserved)
> Kernel virtual memory layout:
>   * 0xffefc000..0xc000  : fixmap
>   * 0xf7c0..0xffc0  : kasan shadow mem
>   * 0xf7a0..0xf7c0  : consistent mem
>   * 0xf7a0..0xf7a0  : early ioremap
>   * 0xc900..0xf7a0  : vmalloc & ioremap
> 
> Here, the vmalloc area size is 0x2ea0, that is 746Mbytes. Shadow for this would 
> be 93Mbytes, and we are already using 16Mbytes to shadow the linear memory 
> area, while this poor board has 128Mbytes RAM in total.
> 
> So another solution is needed.
> 
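For reference, the shadow-size arithmetic Christophe quotes can be checked with a quick sketch. KASAN uses one shadow byte per 8 bytes of memory (shadow scale shift of 3); the addresses below assume the truncated values in the quoted layout expand to 0xc9000000 and 0xf7a00000, which is an assumption on my part:

```python
KASAN_SHADOW_SCALE_SHIFT = 3  # one shadow byte covers 8 bytes of memory

def shadow_size(region_bytes):
    return region_bytes >> KASAN_SHADOW_SCALE_SHIFT

MiB = 1024 * 1024
# Assumed expansion of the vmalloc & ioremap range quoted above:
vmalloc = 0xf7a00000 - 0xc9000000   # the "746Mbytes" figure
linear = 128 * MiB                  # total RAM on the board

print(vmalloc // MiB)               # 746
print(shadow_size(vmalloc) // MiB)  # 93
print(shadow_size(linear) // MiB)   # 16
```

which reproduces the 746MB / 93MB / 16MB figures in the message.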

Ok.
As a temporary workaround you can make __asan_register_globals() skip 
globals in vmalloc(). 
Obviously it means that out-of-bounds accesses to globals in modules will be missed.

A non-temporary solution would be making kasan fully support vmalloc, i.e. remove 
the RO shadow and allocate/free shadow on vmalloc()/vfree().
But this feels like a separate task, out of scope of this patch set.

It is also possible to follow some other arches - dedicate a separate address 
range for modules and allocate/free shadow in module_alloc/free.
But it doesn't seem worthwhile to implement this only for the sake of kasan, since 
vmalloc support needs to be done anyway.


[PATCH] powerpc: drop unused GENERIC_CSUM Kconfig item

2019-02-15 Thread Christophe Leroy
Commit d4fde568a34a ("powerpc/64: Use optimized checksum routines on
little-endian") converted last powerpc user of GENERIC_CSUM.

This patch does a final cleanup dropping the Kconfig GENERIC_CSUM
option which is always 'n', and associated piece of code in
asm/checksum.h

Fixes: d4fde568a34a ("powerpc/64: Use optimized checksum routines on 
little-endian")
Reported-by: Christoph Hellwig 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/Kconfig| 3 ---
 arch/powerpc/include/asm/checksum.h | 4 
 2 files changed, 7 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 08908219fba9..849b0d5ac3d1 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -251,9 +251,6 @@ config PPC_BARRIER_NOSPEC
 default y
 depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
 
-config GENERIC_CSUM
-   def_bool n
-
 config EARLY_PRINTK
bool
default y
diff --git a/arch/powerpc/include/asm/checksum.h 
b/arch/powerpc/include/asm/checksum.h
index a78a57e5058d..72a65d744a28 100644
--- a/arch/powerpc/include/asm/checksum.h
+++ b/arch/powerpc/include/asm/checksum.h
@@ -9,9 +9,6 @@
  * 2 of the License, or (at your option) any later version.
  */
 
-#ifdef CONFIG_GENERIC_CSUM
-#include 
-#else
 #include 
 #include 
 /*
@@ -217,6 +214,5 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum sum);
 
-#endif
 #endif /* __KERNEL__ */
 #endif
-- 
2.13.3



Re: [PATCH] powerpc/64s/hash: Fix assert_slb_presence() use of the slbfee. instruction

2019-02-15 Thread Aneesh Kumar K.V
Nicholas Piggin  writes:

> The slbfee. instruction must have bit 24 of RB clear, failure to do
> so can result in false negatives that result in incorrect assertions.
>
> This is not obvious from the ISA v3.0B document, which only says:
>
> The hardware ignores the contents of RB 36:38 40:63 -- p.1032
>
> This patch fixes the bug and also clears all other bits from PPC bit
> 36-63, which is good practice when dealing with reserved or ignored
> bits.
>

Reviewed-by: Aneesh Kumar K.V 

> Fixes: e15a4fea4d ("powerpc/64s/hash: Add some SLB debugging tests")
> Reported-by: Aneesh Kumar K.V 
> Tested-by: Aneesh Kumar K.V 
> Signed-off-by: Nicholas Piggin 
> ---
>  arch/powerpc/mm/slb.c | 5 +
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
> index bc3914d54e26..5986df48359b 100644
> --- a/arch/powerpc/mm/slb.c
> +++ b/arch/powerpc/mm/slb.c
> @@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned 
> long ea)
>   if (!cpu_has_feature(CPU_FTR_ARCH_206))
>   return;
>  
> + /*
> +  * slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware
> +  * ignores all other bits from 0-27, so just clear them all.
> +  */
> + ea &= ~((1UL << 28) - 1);

I guess this number '28' is derived from the size of the smallest
segment we support. If so, can we use ESID_MASK?


>   asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
>  
>   WARN_ON(present == (tmp == 0));
> -- 
> 2.18.0



[PATCH] powerpc/64s/hash: Fix assert_slb_presence() use of the slbfee. instruction

2019-02-15 Thread Nicholas Piggin
The slbfee. instruction must have bit 24 of RB clear, failure to do
so can result in false negatives that result in incorrect assertions.

This is not obvious from the ISA v3.0B document, which only says:

The hardware ignores the contents of RB 36:38 40:63 -- p.1032

This patch fixes the bug and also clears all other bits from PPC bit
36-63, which is good practice when dealing with reserved or ignored
bits.

Fixes: e15a4fea4d ("powerpc/64s/hash: Add some SLB debugging tests")
Reported-by: Aneesh Kumar K.V 
Tested-by: Aneesh Kumar K.V 
Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/mm/slb.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index bc3914d54e26..5986df48359b 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned long 
ea)
if (!cpu_has_feature(CPU_FTR_ARCH_206))
return;
 
+   /*
+* slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware
+* ignores all other bits from 0-27, so just clear them all.
+*/
+   ea &= ~((1UL << 28) - 1);
asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
 
WARN_ON(present == (tmp == 0));
-- 
2.18.0
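The relationship between the IBM/PPC bit numbering used in the patch comment and conventional LSB-0 numbering can be sketched as follows. This is a hypothetical illustration, not kernel code; the 28-bit clear corresponds to 256MB segment alignment, which is what Aneesh's ESID_MASK suggestion expresses:

```python
def ppc_bit(n, width=64):
    """IBM/PPC numbering counts from the MSB: PPC bit 39 == 1 << (63 - 39)."""
    return 1 << (width - 1 - n)

# Clearing the low 28 bits aligns the EA down to a 256MB segment boundary.
mask = ~((1 << 28) - 1) & ((1 << 64) - 1)

ea = 0xC000000012345678          # arbitrary example effective address
masked = ea & mask
assert masked == 0xC000000010000000
assert ppc_bit(39) == 1 << 24     # "bit 24 (PPC bit 39)" in the comment
assert masked & ppc_bit(39) == 0  # the slbfee. requirement is satisfied
```

So any EA masked this way is guaranteed to have bit 24 (PPC bit 39) clear, which is the property the assertion depends on.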



Re: [PATCH v5 3/3] powerpc/32: Add KASAN support

2019-02-15 Thread Christophe Leroy




Le 15/02/2019 à 11:01, Andrey Ryabinin a écrit :



On 2/15/19 11:41 AM, Christophe Leroy wrote:



Le 14/02/2019 à 23:04, Daniel Axtens a écrit :

Hi Christophe,


--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -27,6 +27,20 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
   extern void * memchr(const void *,int,__kernel_size_t);
   extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
   +void *__memset(void *s, int c, __kernel_size_t count);
+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+


I'm finding that I miss tests like 'kasan test: kasan_memcmp
out-of-bounds in memcmp' because the uninstrumented asm version is used
instead of an instrumented C version. I ended up guarding the relevant
__HAVE_ARCH_x symbols behind a #ifndef CONFIG_KASAN and only exporting
the arch versions if we're not compiled with KASAN.

I find I need to guard and unexport strncpy, strncmp, memchr and
memcmp. Do you need to do this on 32bit as well, or are those tests
passing anyway for some reason?


Indeed, I didn't try the KASAN test module recently, because my configs don't 
have CONFIG_MODULE by default.

Trying to test it now, I am discovering that module loading oopses with latest 
version of my series, I need to figure out exactly why. Here below the oops by 
modprobing test_module (the one supposed to just say hello to the world).

What we see is an access to the RO kasan zero area.

The shadow mem is 0xf7c0..0xffc0
Linear kernel memory is shadowed by 0xf7c0-0xf8bf
0xf8c0-0xffc0 is shadowed read only by the kasan zero page.

Why is kasan trying to access that ? Isn't kasan supposed to not check stuff in 
vmalloc area ?


It tries to poison global variables in modules. If a module is in vmalloc, then 
it will try to poison vmalloc.
Given that the vmalloc area is not so big on 32bits, the easiest solution is to 
cover all vmalloc with RW shadow.



Euh ... Not so big ?

Memory: 96448K/131072K available (8016K kernel code, 1680K rwdata
, 2720K rodata, 624K init, 4678K bss, 34624K reserved, 0K cma-reserved)
Kernel virtual memory layout:
  * 0xffefc000..0xc000  : fixmap
  * 0xf7c0..0xffc0  : kasan shadow mem
  * 0xf7a0..0xf7c0  : consistent mem
  * 0xf7a0..0xf7a0  : early ioremap
  * 0xc900..0xf7a0  : vmalloc & ioremap

Here, the vmalloc area size is 0x2ea0, that is 746Mbytes. Shadow for this 
would be 93Mbytes, and we are already using 16Mbytes to shadow the linear 
memory area, while this poor board has 128Mbytes RAM in total.


So another solution is needed.

Christophe


Re: [PATCH v5 3/3] powerpc/32: Add KASAN support

2019-02-15 Thread Andrey Ryabinin



On 2/15/19 11:41 AM, Christophe Leroy wrote:
> 
> 
> Le 14/02/2019 à 23:04, Daniel Axtens a écrit :
>> Hi Christophe,
>>
>>> --- a/arch/powerpc/include/asm/string.h
>>> +++ b/arch/powerpc/include/asm/string.h
>>> @@ -27,6 +27,20 @@ extern int memcmp(const void *,const void 
>>> *,__kernel_size_t);
>>>   extern void * memchr(const void *,int,__kernel_size_t);
>>>   extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
>>>   +void *__memset(void *s, int c, __kernel_size_t count);
>>> +void *__memcpy(void *to, const void *from, __kernel_size_t n);
>>> +void *__memmove(void *to, const void *from, __kernel_size_t n);
>>> +
>>> +#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
>>> +/*
>>> + * For files that are not instrumented (e.g. mm/slub.c) we
>>> + * should use not instrumented version of mem* functions.
>>> + */
>>> +#define memcpy(dst, src, len) __memcpy(dst, src, len)
>>> +#define memmove(dst, src, len) __memmove(dst, src, len)
>>> +#define memset(s, c, n) __memset(s, c, n)
>>> +#endif
>>> +
>>
>> I'm finding that I miss tests like 'kasan test: kasan_memcmp
>> out-of-bounds in memcmp' because the uninstrumented asm version is used
>> instead of an instrumented C version. I ended up guarding the relevant
>> __HAVE_ARCH_x symbols behind a #ifndef CONFIG_KASAN and only exporting
>> the arch versions if we're not compiled with KASAN.
>>
>> I find I need to guard and unexport strncpy, strncmp, memchr and
>> memcmp. Do you need to do this on 32bit as well, or are those tests
>> passing anyway for some reason?
> 
> Indeed, I didn't try the KASAN test module recently, because my configs don't 
> have CONFIG_MODULE by default.
> 
> Trying to test it now, I am discovering that module loading oopses with 
> latest version of my series, I need to figure out exactly why. Here below the 
> oops by modprobing test_module (the one supposed to just say hello to the 
> world).
> 
> What we see is an access to the RO kasan zero area.
> 
> The shadow mem is 0xf7c0..0xffc0
> Linear kernel memory is shadowed by 0xf7c0-0xf8bf
> 0xf8c0-0xffc0 is shadowed read only by the kasan zero page.
> 
> Why is kasan trying to access that ? Isn't kasan supposed to not check stuff 
> in vmalloc area ?

It tries to poison global variables in modules. If a module is in vmalloc, then 
it will try to poison vmalloc.
Given that the vmalloc area is not so big on 32bits, the easiest solution is to 
cover all vmalloc with RW shadow.





Re: [PATCH] powerpc/mm: Handle mmap_min_addr correctly in get_unmapped_area callback

2019-02-15 Thread Laurent Dufour

Le 15/02/2019 à 09:16, Aneesh Kumar K.V a écrit :

After we ALIGN up the address we need to make sure we didn't overflow
and end up with a zero address. In that case, we need to make sure that
the returned address is greater than mmap_min_addr.

Also, when doing a top-down search the low_limit is not PAGE_SIZE but rather
max(PAGE_SIZE, mmap_min_addr). This handles cases in which mmap_min_addr >
PAGE_SIZE.

This fixes selftest va_128TBswitch --run-hugetlb reporting failures when
run as non root user for

mmap(-1, MAP_HUGETLB)
mmap(-1, MAP_HUGETLB)

With this change we also avoid the first mmap(-1, MAP_HUGETLB) returning a NULL
address as the mmap address.
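The overflow described in the first paragraph can be sketched quickly. This is an illustrative model of the kernel's ALIGN() on a 64-bit address, with made-up numbers:

```python
def align_up(addr, size, bits=64):
    """ALIGN() as in the kernel, truncated to the register width."""
    return (addr + size - 1) & ~(size - 1) & ((1 << bits) - 1)

huge = 16 * 1024 * 1024            # a 16MB huge page size, for illustration
hint = (1 << 64) - huge // 2       # a hint near the top of the address space

# Aligning the hint up wraps past 2^64 and yields address 0, which would
# wrongly compare as valid unless addr >= mmap_min_addr is checked.
assert align_up(hint, huge) == 0

# An ordinary hint aligns as expected:
assert align_up(0x12340000, huge) == 0x13000000
```

which is why the patch adds the explicit `addr >= mmap_min_addr` test after alignment.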


FWIW:
Reviewed-by: Laurent Dufour 


CC: Laurent Dufour 
Signed-off-by: Aneesh Kumar K.V 
---
  arch/powerpc/mm/hugetlbpage-radix.c |  5 +++--
  arch/powerpc/mm/slice.c | 10 ++
  2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage-radix.c 
b/arch/powerpc/mm/hugetlbpage-radix.c
index 2486bee0f93e..97c7a39ebc00 100644
--- a/arch/powerpc/mm/hugetlbpage-radix.c
+++ b/arch/powerpc/mm/hugetlbpage-radix.c
@@ -1,6 +1,7 @@
  // SPDX-License-Identifier: GPL-2.0
  #include 
  #include 
+#include 
  #include 
  #include 
  #include 
@@ -73,7 +74,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned 
long addr,
if (addr) {
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
-   if (high_limit - len >= addr &&
+   if (high_limit - len >= addr && addr >= mmap_min_addr &&
(!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -83,7 +84,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned 
long addr,
 */
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
-   info.low_limit = PAGE_SIZE;
+   info.low_limit = max(PAGE_SIZE, mmap_min_addr);
info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 06898c13901d..aec91dbcdc0b 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -32,6 +32,7 @@
  #include 
  #include 
  #include 
+#include 
  #include 
  #include 
  #include 
@@ -377,6 +378,7 @@ static unsigned long slice_find_area_topdown(struct 
mm_struct *mm,
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
unsigned long addr, found, prev;
struct vm_unmapped_area_info info;
+   unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);

info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
@@ -393,7 +395,7 @@ static unsigned long slice_find_area_topdown(struct 
mm_struct *mm,
if (high_limit > DEFAULT_MAP_WINDOW)
addr += mm->context.slb_addr_limit - DEFAULT_MAP_WINDOW;

-   while (addr > PAGE_SIZE) {
+   while (addr > min_addr) {
info.high_limit = addr;
if (!slice_scan_available(addr - 1, available, 0, ))
continue;
@@ -405,8 +407,8 @@ static unsigned long slice_find_area_topdown(struct 
mm_struct *mm,
 * Check if we need to reduce the range, or if we can
 * extend it to cover the previous available slice.
 */
-   if (addr < PAGE_SIZE)
-   addr = PAGE_SIZE;
+   if (addr < min_addr)
+   addr = min_addr;
else if (slice_scan_available(addr - 1, available, 0, )) {
addr = prev;
goto prev_slice;
@@ -528,7 +530,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
addr = _ALIGN_UP(addr, page_size);
slice_dbg(" aligned addr=%lx\n", addr);
/* Ignore hint if it's too large or overlaps a VMA */
-   if (addr > high_limit - len ||
+   if (addr > high_limit - len || addr < mmap_min_addr ||
!slice_area_is_free(mm, addr, len))
addr = 0;
}





[PATCH] powerpc: use $(origin ARCH) to select KBUILD_DEFCONFIG

2019-02-15 Thread Masahiro Yamada
I often test all Kconfig commands for all architectures. To ease my
workflow, I want 'make defconfig' at least working without any cross
compiler.

Currently, arch/powerpc/Makefile checks CROSS_COMPILE to decide the
default defconfig source.

If CROSS_COMPILE is unset, it is likely to be the native build, so
'uname -m' is useful to choose the defconfig. If CROSS_COMPILE is set,
the user is cross-building (i.e. 'uname -m' is probably x86_64), so
it falls back to ppc64_defconfig. Yup, makes sense.

However, I want to run 'make ARCH=* defconfig' without setting
CROSS_COMPILE for each architecture.

My suggestion is to check $(origin ARCH).

When you cross-compile the kernel, you need to set ARCH from your
environment or from the command line.

For the native build, you do not need to set ARCH. The default in
the top Makefile is used:

  ARCH ?= $(SUBARCH)

Hence, $(origin ARCH) returns 'file'.

Before this commit, 'make ARCH=powerpc defconfig' failed:

  $ make ARCH=powerpc defconfig
  *** Default configuration is based on target 'x86_64_defconfig'
  ***
  *** Can't find default configuration "arch/powerpc/configs/x86_64_defconfig"!
  ***

After this commit, it will succeed:

  $ make ARCH=powerpc defconfig
  *** Default configuration is based on 'ppc64_defconfig'
  #
  # configuration written to .config
  #

Signed-off-by: Masahiro Yamada 
---

 arch/powerpc/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index ac03334..f989979 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -34,7 +34,7 @@ ifdef CONFIG_PPC_BOOK3S_32
 KBUILD_CFLAGS  += -mcpu=powerpc
 endif
 
-ifeq ($(CROSS_COMPILE),)
+ifeq ($(origin ARCH), file)
 KBUILD_DEFCONFIG := $(shell uname -m)_defconfig
 else
 KBUILD_DEFCONFIG := ppc64_defconfig
-- 
2.7.4



Re: [PATCH 02/11] riscv: remove the HAVE_KPROBES option

2019-02-15 Thread Masahiro Yamada
On Thu, Feb 14, 2019 at 2:40 AM Christoph Hellwig  wrote:
>
> HAVE_KPROBES is defined generically in arch/Kconfig and architectures
> should just select it if supported.
>
> Signed-off-by: Christoph Hellwig 

Do you want this patch picked up by me?

Or, by Palmer?



> ---
>  arch/riscv/Kconfig | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 515fc3cc9687..b60f4e3e36f4 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -94,9 +94,6 @@ config PGTABLE_LEVELS
> default 3 if 64BIT
> default 2
>
> -config HAVE_KPROBES
> -   def_bool n
> -
>  menu "Platform type"
>
>  choice
> --
> 2.20.1
>
>
> ___
> linux-riscv mailing list
> linux-ri...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv



-- 
Best Regards
Masahiro Yamada


Re: [PATCH 01/11] powerpc: remove dead ifdefs in

2019-02-15 Thread Masahiro Yamada
On Fri, Feb 15, 2019 at 5:18 PM Christophe Leroy
 wrote:
>
>
>
> Le 14/02/2019 à 18:05, Christoph Hellwig a écrit :
> > On Thu, Feb 14, 2019 at 09:26:19AM +0100, Christophe Leroy wrote:
> >> Could you also remove the 'config GENERIC_CSUM' item in
> >> arch/powerpc/Kconfig ?
> >
> > All the separate declarations go away later in this series.
> >
>
> I saw, but the purpose of the later patch is to replace arch defined
> GENERIC_CSUM by a common one that arches select. For the powerpc you are
> not in that case as the powerpc does not select GENERIC_CSUM.
>
> So I really believe that all stale bits of remaining GENERIC_CSUM in
> powerpc should go away as a single dedicated patch, as a fix of commit
> d4fde568a34a ("powerpc/64: Use optimized checksum routines on
> little-endian")
>
> Regarding the #ifdef __KERNEL__ , I think we should do a wide cleanup in
> arch/powerpc/include/asm, not only asm/checksum.h
>
> Christophe


Please send such cleanups to PowerPC ML
instead of to me (Kbuild).


Christoph,
I think this one is independent of the rest of this series.
How about separating it out, if you volunteer for the powerpc cleanups?


-- 
Best Regards
Masahiro Yamada


Re: [PATCH] powerpc/ptrace: Simplify vr_get/set() to avoid GCC warning

2019-02-15 Thread Michael Ellerman
Mathieu Malaterre  writes:
> On Fri, Feb 15, 2019 at 7:14 AM Michael Ellerman  wrote:
>>
>> GCC 8 warns about the logic in vr_get/set(), which with -Werror breaks
>> the build:
>>
>>   In function ‘user_regset_copyin’,
>>   inlined from ‘vr_set’ at arch/powerpc/kernel/ptrace.c:628:9:
>>   include/linux/regset.h:295:4: error: ‘memcpy’ offset [-527, -529] is
>>   out of the bounds [0, 16] of object ‘vrsave’ with type ‘union
>>   ’ [-Werror=array-bounds]
>>   arch/powerpc/kernel/ptrace.c: In function ‘vr_set’:
>>   arch/powerpc/kernel/ptrace.c:623:5: note: ‘vrsave’ declared here
>>  } vrsave;
>>
>> This has been identified as a regression in GCC, see GCC bug 88273.
>
Good point, it does not seem this will be backported.
>
>> However we can avoid the warning and also simplify the logic and make
>> it more robust.
>>
>> Currently we pass -1 as end_pos to user_regset_copyout(). This says
>> "copy up to the end of the regset".
>>
>> The definition of the regset is:
>> [REGSET_VMX] = {
>> .core_note_type = NT_PPC_VMX, .n = 34,
>> .size = sizeof(vector128), .align = sizeof(vector128),
>> .active = vr_active, .get = vr_get, .set = vr_set
>> },
>>
>> The end is calculated as (n * size), ie. 34 * sizeof(vector128).
>>
>> In vr_get/set() we pass start_pos as 33 * sizeof(vector128), meaning
>> we can copy up to sizeof(vector128) into/out-of vrsave.
>>
>> The on-stack vrsave is defined as:
>>   union {
>>   elf_vrreg_t reg;
>>   u32 word;
>>   } vrsave;
>>
>> And elf_vrreg_t is:
>>   typedef __vector128 elf_vrreg_t;
>>
>> So there is no bug, but we rely on all those sizes lining up,
>> otherwise we would have a kernel stack exposure/overwrite on our
>> hands.
>>
>> Rather than relying on that we can pass an explict end_pos based on
>> the sizeof(vrsave). The result should be exactly the same but it's
>> more obviously not over-reading/writing the stack and it avoids the
>> compiler warning.
>>
>
> maybe:
>
> Link: https://lkml.org/lkml/2018/8/16/117

Hmm, I prefer not to include links because they're unlikely to last
forever like the git history.

If we do include them the preferred form is a link to lkml.kernel.org
using the message id. That way the message is recorded and also the link
is "guaranteed" to work in future, eg:

http://lkml.kernel.org/r/alpine.lrh.2.21.1808161041350.16...@math.ut.ee

In this case I don't think the link adds much over what I have in the
change log, in particular I did credit Meelis as the reporter.

> In any case the warning is now gone:
>
> Tested-by: Mathieu Malaterre 

Thanks.

cheers

>> Reported-by: Meelis Roos 
>> Reported-by: Mathieu Malaterre 
>> Cc: sta...@vger.kernel.org
>> Signed-off-by: Michael Ellerman 


Re: [PATCH] mmap.2: describe the 5level paging hack

2019-02-15 Thread Michael Ellerman
Jann Horn  writes:

> The manpage is missing information about the compatibility hack for
> 5-level paging that went in in 4.14, around commit ee00f4a32a76 ("x86/mm:
> Allow userspace have mappings above 47-bit"). Add some information about
> that.

Thanks for doing this.

> While I don't think any hardware supporting this is shipping yet (?), I
> think it's useful to try to write a manpage for this API, partly to
> figure out how usable that API actually is, and partly because when this
> hardware does ship, it'd be nice if distro manpages had information about
> how to use it.
>
> Signed-off-by: Jann Horn 
> ---
> This patch goes on top of the patch "[PATCH] mmap.2: fix description of
> treatment of the hint" that I just sent, but I'm not sending them in a
> series because I want the first one to go in, and I think this one might
> be a bit more controversial.
>
> It would be nice if the architecture maintainers and mm folks could have
> a look at this and check that what I wrote is right - I only looked at
> the source for this, I haven't tried it.
>
>  man2/mmap.2 | 15 +++
>  1 file changed, 15 insertions(+)
>
> diff --git a/man2/mmap.2 b/man2/mmap.2
> index 8556bbfeb..977782fa8 100644
> --- a/man2/mmap.2
> +++ b/man2/mmap.2
> @@ -67,6 +67,8 @@ is NULL,
>  then the kernel chooses the (page-aligned) address
>  at which to create the mapping;
>  this is the most portable method of creating a new mapping.
> +On Linux, in this case, the kernel may limit the maximum address that can be
> +used for allocations to a legacy limit for compatibility reasons.
>  If
>  .I addr
>  is not NULL,
> @@ -77,6 +79,19 @@ or equal to the value specified by
>  and attempt to create the mapping there.
>  If another mapping already exists there, the kernel picks a new
>  address, independent of the hint.
> +However, if a hint above the architecture's legacy address limit is provided
> +(on x86-64: above 0x7000, on arm64: above 0x1, on ppc64 with
> +book3s: above 0x7fff or 0x3fff, depending on page size), the

It doesn't depend on page size for ppc64(le). With 4K pages the user VM
is always 64TB.

So the only boundary for us is at 128T when using 64K pages.

cheers


Re: [PATCH] powerpc/ptrace: Add prototype for function pt_regs_check

2019-02-15 Thread Mathieu Malaterre
On Fri, Feb 15, 2019 at 9:21 AM Christophe Leroy
 wrote:
>
>
>
> Le 15/02/2019 à 09:11, Mathieu Malaterre a écrit :
> > On Sat, Dec 8, 2018 at 4:46 PM Mathieu Malaterre  wrote:
> >>
> >> `pt_regs_check` is a dummy function, its purpose is to break the build
> >> if struct pt_regs and struct user_pt_regs don't match.
> >>
> >> This function has no functional purpose, and will get eliminated at
> >> link time or after init depending on CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
> >>
> >> This commit adds a prototype to fix warning at W=1:
> >>
> >>arch/powerpc/kernel/ptrace.c:3339:13: error: no previous prototype for 
> >> ‘pt_regs_check’ [-Werror=missing-prototypes]
> >>
> >> Suggested-by: Christophe Leroy 
> >> Signed-off-by: Mathieu Malaterre 
> >> ---
> >>   arch/powerpc/kernel/ptrace.c | 4 
> >>   1 file changed, 4 insertions(+)
> >>
> >> diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
> >> index a398999d0770..341c0060b4c8 100644
> >> --- a/arch/powerpc/kernel/ptrace.c
> >> +++ b/arch/powerpc/kernel/ptrace.c
> >> @@ -3338,6 +3338,10 @@ void do_syscall_trace_leave(struct pt_regs *regs)
> >>  user_enter();
> >>   }
> >>
> >> +void __init pt_regs_check(void);
> >> +/* dummy function, its purpose is to break the build if struct pt_regs and
> >> + * struct user_pt_regs don't match.
> >> + */
> >
> > Another trick which seems to work with GCC is:
> >
> > -void __init pt_regs_check(void)
> > +static inline void __init pt_regs_check(void)
>
> Does this really work? Did you test to ensure that the BUILD_BUG_ON
> still detects a mismatch between struct pt_regs and struct user_pt_regs?
>

My bad, I was unaware of GCC behavior for static inline in this case.
Sorry for the noise.
Original ugly patch does work though.
>
> >
> >>   void __init pt_regs_check(void)
> >>   {
> >>  BUILD_BUG_ON(offsetof(struct pt_regs, gpr) !=
> >> --
> >> 2.19.2
> >>


Re: [PATCH 08/11] lib: consolidate the GENERIC_BUG symbol

2019-02-15 Thread Masahiro Yamada
On Thu, Feb 14, 2019 at 2:40 AM Christoph Hellwig  wrote:
>
> And just let the architectures that want it select the symbol.
> Same for GENERIC_BUG_RELATIVE_POINTERS.
>
> Signed-off-by: Christoph Hellwig 


This slightly changes the behavior of GENERIC_BUG_RELATIVE_POINTERS
for arm64, riscv, x86.
Previously, GENERIC_BUG_RELATIVE_POINTERS was enabled only when BUG=y.


Having said that, this is not a big deal.
When CONFIG_GENERIC_BUG=n, CONFIG_GENERIC_BUG_RELATIVE_POINTERS
is effectively a don't-care.



If you change this,
could you add some comments to the commit description?


> ---

> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index c39dac831f08..913b2ca7ec22 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -85,6 +85,8 @@ config ARM64
> select FRAME_POINTER
> select GENERIC_ALLOCATOR
> select GENERIC_ARCH_TOPOLOGY
> +   select GENERIC_BUG if BUG
> +   select GENERIC_BUG_RELATIVE_POINTERS

Precisely,

  select GENERIC_BUG_RELATIVE_POINTERS if BUG




> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 732614eb3683..c410ed896567 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -19,6 +19,8 @@ config RISCV
> select ARCH_WANT_FRAME_POINTERS
> select CLONE_BACKWARDS
> select COMMON_CLK
> +   select GENERIC_BUG if BUG
> +   select GENERIC_BUG_RELATIVE_POINTERS if 64BIT

Precisely,

  select GENERIC_BUG_RELATIVE_POINTERS if 64BIT && BUG


> diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
> index 15ccdd04814e..2a5c12be633e 100644
> --- a/arch/s390/Kconfig
> +++ b/arch/s390/Kconfig
> @@ -17,12 +17,6 @@ config ARCH_HAS_ILOG2_U64
>  config GENERIC_HWEIGHT
> def_bool y
>
> -config GENERIC_BUG
> -   def_bool y if BUG
> -
> -config GENERIC_BUG_RELATIVE_POINTERS
> -   def_bool y
> -


Hmm, s390 enables GENERIC_BUG_RELATIVE_POINTERS
irrespective of BUG...





> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 1bd4f19b6b28..f4cb31174e1b 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -91,6 +91,8 @@ config X86
> select DCACHE_WORD_ACCESS
> select EDAC_ATOMIC_SCRUB
> select EDAC_SUPPORT
> +   select GENERIC_BUG  if BUG
> +   select GENERIC_BUG_RELATIVE_POINTERSif X86_64

Precisely,

  select GENERIC_BUG_RELATIVE_POINTERSif X86_64 && BUG






--
Best Regards
Masahiro Yamada


Re: [PATCH v5 3/3] powerpc/32: Add KASAN support

2019-02-15 Thread Christophe Leroy




Le 14/02/2019 à 23:04, Daniel Axtens a écrit :

Hi Christophe,


--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -27,6 +27,20 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
  extern void * memchr(const void *,int,__kernel_size_t);
  extern void * memcpy_flushcache(void *,const void *,__kernel_size_t);
  
+void *__memset(void *s, int c, __kernel_size_t count);

+void *__memcpy(void *to, const void *from, __kernel_size_t n);
+void *__memmove(void *to, const void *from, __kernel_size_t n);
+
+#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
+/*
+ * For files that are not instrumented (e.g. mm/slub.c) we
+ * should use not instrumented version of mem* functions.
+ */
+#define memcpy(dst, src, len) __memcpy(dst, src, len)
+#define memmove(dst, src, len) __memmove(dst, src, len)
+#define memset(s, c, n) __memset(s, c, n)
+#endif
+


I'm finding that I miss tests like 'kasan test: kasan_memcmp
out-of-bounds in memcmp' because the uninstrumented asm version is used
instead of an instrumented C version. I ended up guarding the relevant
__HAVE_ARCH_x symbols behind a #ifndef CONFIG_KASAN and only exporting
the arch versions if we're not compiled with KASAN.

I find I need to guard and unexport strncpy, strncmp, memchr and
memcmp. Do you need to do this on 32bit as well, or are those tests
passing anyway for some reason?


Indeed, I didn't try the KASAN test module recently, because my configs 
don't have CONFIG_MODULE by default.


Trying to test it now, I am discovering that module loading oopses with 
the latest version of my series; I need to figure out exactly why. Below 
is the oops from modprobing test_module (the one that is supposed to 
just say hello to the world).


What we see is an access to the RO kasan zero area.

The shadow mem is 0xf7c0..0xffc0
Linear kernel memory is shadowed by 0xf7c0-0xf8bf
0xf8c0-0xffc0 is shadowed read only by the kasan zero page.

Why is kasan trying to access that? Isn't kasan supposed to not check 
stuff in the vmalloc area?


[  189.087499] BUG: Unable to handle kernel data access at 0xf8eb7818
[  189.093455] Faulting instruction address: 0xc001ab60
[  189.098383] Oops: Kernel access of bad area, sig: 11 [#1]
[  189.103732] BE PAGE_SIZE=16K PREEMPT CMPC885
[  189.111414] Modules linked in: test_module(+)
[  189.115817] CPU: 0 PID: 514 Comm: modprobe Not tainted 
5.0.0-rc5-s3k-dev-00645-g1dd3acf23952 #1016

[  189.124627] NIP:  c001ab60 LR: c0194fe8 CTR: 0003
[  189.129641] REGS: c5645b90 TRAP: 0300   Not tainted 
(5.0.0-rc5-s3k-dev-00645-g1dd3acf23952)

[  189.137924] MSR:  9032   CR: 44002884  XER: 
[  189.144571] DAR: f8eb7818 DSISR: 8a00
[  189.144571] GPR00: c0196620 c5645c40 c5aac000 f8eb7818  
0003 f8eb7817 c01950d0
[  189.144571] GPR08: c00c2720 c95bc010  c95bc1a0 c01965ec 
100d7b30 c0802b80 c5ae0308
[  189.144571] GPR16: c5990480 0124 000f c00bcf7c c5ae0324 
c95bc32c 06b8 0001
[  189.144571] GPR24: c95bc364 c95bc360 c95bc2c0 c95bc1a0 0002 
 0018 f8eb781b

[  189.182611] NIP [c001ab60] __memset+0xb4/0xc0
[  189.186922] LR [c0194fe8] kasan_unpoison_shadow+0x34/0x54
[  189.192136] Call Trace:
[  189.194682] [c5645c50] [c0196620] __asan_register_globals+0x34/0x74
[  189.200900] [c5645c70] [c00c27a4] do_init_module+0xbc/0x5a4
[  189.206392] [c5645ca0] [c00c0d08] load_module+0x2b5c/0x3194
[  189.211901] [c5645e70] [c00c14c8] sys_init_module+0x188/0x1bc
[  189.217572] [c5645f40] [c001311c] ret_from_syscall+0x0/0x38
[  189.223049] --- interrupt: c01 at 0xfda2b50
[  189.223049] LR = 0x10014b18
[  189.230175] Instruction dump:
[  189.233117] 4200fffc 70a50003 4d820020 7ca903a6 38c60003 9c860001 
4200fffc 4e800020
[  189.240859] 2c05 4d820020 7ca903a6 38c3 <9c860001> 4200fffc 
4e800020 7c032040

[  189.248809] ---[ end trace 45cbb1b3215e5959 ]---

Christophe



Regards,
Daniel



  #ifdef CONFIG_PPC64
  #define __HAVE_ARCH_MEMSET32
  #define __HAVE_ARCH_MEMSET64
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 879b36602748..fc4c42262694 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -16,8 +16,9 @@ CFLAGS_prom_init.o  += -fPIC
  CFLAGS_btext.o+= -fPIC
  endif
  
-CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)

-CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING
+CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) -DDISABLE_BRANCH_PROFILING
+CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) -DDISABLE_BRANCH_PROFILING
  CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
  CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
  
@@ -31,6 +32,10 @@ CFLAGS_REMOVE_btext.o = $(CC_FLAGS_FTRACE)

  CFLAGS_REMOVE_prom.o = $(CC_FLAGS_FTRACE)
  endif
  
+KASAN_SANITIZE_early_32.o := n

+KASAN_SANITIZE_cputable.o := n
+KASAN_SANITIZE_prom_init.o := n
+
  obj-y := 

Re: [RFC PATCH 5/5] powerpc: KASAN for 64bit Book3E

2019-02-15 Thread Dmitry Vyukov
On Fri, Feb 15, 2019 at 1:05 AM Daniel Axtens  wrote:
>
> Wire up KASAN. Only outline instrumentation is supported.
>
> The KASAN shadow area is mapped into vmemmap space:
> 0x8000 0400   to 0x8000 0600  .
> To do this we require that vmemmap be disabled. (This is the default
> in the kernel config that QorIQ provides for the machine in their
> SDK anyway - they use flat memory.)
>
> Only the kernel linear mapping (0xc000...) is checked. The vmalloc and
> ioremap areas (also in 0x800...) are all mapped to a zero page. As
> with the Book3S hash series, this requires overriding the memory <->
> shadow mapping.
>
> Also, as with both previous 64-bit series, early instrumentation is not
> supported.  It would allow us to drop the check_return_arch_not_ready()
> hook in the KASAN core, but it's tricky to get it set up early enough:
> we need it setup before the first call to instrumented code like printk().
> Perhaps in the future.
>
> Only KASAN_MINIMAL works.
>
> Lightly tested on e6500. KVM, kexec and xmon have not been tested.

Hi Daniel,

This is great!

Not related to the patch, but if you booted a real devices and used it
to some degree, I wonder if you hit any KASAN reports?

Thanks

> The test_kasan module fires warnings as expected, except for the
> following tests:
>
>  - Expected/by design:
> kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
>
>  - Due to only supporting KASAN_MINIMAL:
> kasan test: kasan_stack_oob out-of-bounds on stack
> kasan test: kasan_global_oob out-of-bounds global variable
> kasan test: kasan_alloca_oob_left out-of-bounds to left on alloca
> kasan test: kasan_alloca_oob_right out-of-bounds to right on alloca
> kasan test: use_after_scope_test use-after-scope on int
> kasan test: use_after_scope_test use-after-scope on array
>
> Thanks to those who have done the heavy lifting over the past several years:
>  - Christophe's 32 bit series: 
> https://lists.ozlabs.org/pipermail/linuxppc-dev/2019-February/185379.html
>  - Aneesh's Book3S hash series: https://lwn.net/Articles/655642/
>  - Balbir's Book3S radix series: https://patchwork.ozlabs.org/patch/795211/
>
> Cc: Christophe Leroy 
> Cc: Aneesh Kumar K.V 
> Cc: Balbir Singh 
> Signed-off-by: Daniel Axtens 
>
> ---
>
> While useful if you have a book3e device, this is mostly intended
> as a warm-up exercise for reviving Aneesh's series for book3s hash.
> In particular, changes to the kasan core are going to be required
> for hash and radix as well.
> ---
>  arch/powerpc/Kconfig |  1 +
>  arch/powerpc/Makefile|  2 +
>  arch/powerpc/include/asm/kasan.h | 77 ++--
>  arch/powerpc/include/asm/ppc_asm.h   |  7 ++
>  arch/powerpc/include/asm/string.h|  7 +-
>  arch/powerpc/lib/mem_64.S|  6 +-
>  arch/powerpc/lib/memcmp_64.S |  5 +-
>  arch/powerpc/lib/memcpy_64.S |  3 +-
>  arch/powerpc/lib/string.S| 15 ++--
>  arch/powerpc/mm/Makefile |  2 +
>  arch/powerpc/mm/kasan/Makefile   |  1 +
>  arch/powerpc/mm/kasan/kasan_init_book3e_64.c | 53 ++
>  arch/powerpc/purgatory/Makefile  |  3 +
>  arch/powerpc/xmon/Makefile   |  1 +
>  14 files changed, 164 insertions(+), 19 deletions(-)
>  create mode 100644 arch/powerpc/mm/kasan/kasan_init_book3e_64.c
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 850b06def84f..2c7c20d52778 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -176,6 +176,7 @@ config PPC
> select HAVE_ARCH_AUDITSYSCALL
> select HAVE_ARCH_JUMP_LABEL
> select HAVE_ARCH_KASAN  if PPC32
> +   select HAVE_ARCH_KASAN  if PPC_BOOK3E_64 && !SPARSEMEM_VMEMMAP
> select HAVE_ARCH_KGDB
> select HAVE_ARCH_MMAP_RND_BITS
> select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if COMPAT
> diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
> index f0738099e31e..21c2dadf0315 100644
> --- a/arch/powerpc/Makefile
> +++ b/arch/powerpc/Makefile
> @@ -428,11 +428,13 @@ endif
>  endif
>
>  ifdef CONFIG_KASAN
> +ifdef CONFIG_PPC32
>  prepare: kasan_prepare
>
>  kasan_prepare: prepare0
> $(eval KASAN_SHADOW_OFFSET = $(shell awk '{if ($$2 == 
> "KASAN_SHADOW_OFFSET") print $$3;}' include/generated/asm-offsets.h))
>  endif
> +endif
>
>  # Check toolchain versions:
>  # - gcc-4.6 is the minimum kernel-wide version so nothing required.
> diff --git a/arch/powerpc/include/asm/kasan.h 
> b/arch/powerpc/include/asm/kasan.h
> index 5d0088429b62..c2f6f05dfaa3 100644
> --- a/arch/powerpc/include/asm/kasan.h
> +++ b/arch/powerpc/include/asm/kasan.h
> @@ -5,20 +5,85 @@
>  #ifndef __ASSEMBLY__
>
>  #include 
> +#include 
>  #include 
> -#include 
>
>  #define KASAN_SHADOW_SCALE_SHIFT   3
> -#define KASAN_SHADOW_SIZE  ((~0UL - PAGE_OFFSET + 1) >> 
> 

[PATCH] powerpc/mm: Convert slb presence warning check to WARN_ON_ONCE

2019-02-15 Thread Aneesh Kumar K.V
We are hitting a false positive in some cases. Until we root-cause
this, convert the WARN_ON to WARN_ON_ONCE.

A sample stack dump looks like
NIP [c007ac40] assert_slb_presence+0x90/0xa0
LR [c007b270] slb_flush_and_restore_bolted+0x90/0xc0
Call Trace:
 arch_send_call_function_ipi_mask+0xcc/0x110 (unreliable)
 0xc00f9f38f560
 slice_flush_segments+0x58/0xb0
 on_each_cpu+0x74/0xf0
 slice_get_unmapped_area+0x6d4/0x9e0
 hugetlb_get_unmapped_area+0x124/0x150
 get_unmapped_area+0xf0/0x1a0
 do_mmap+0x1a4/0x6b0
 vm_mmap_pgoff+0xbc/0x150
 ksys_mmap_pgoff+0x260/0x2f0
 sys_mmap+0x104/0x130
 system_call+0x5c/0x70

We are checking whether we were able to successfully insert
kernel stack SLB entries. If that is not the case, we will crash next,
so we are not losing much debug data.

Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/slb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index bc3914d54e26..dca0cbd71b60 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -71,7 +71,7 @@ static void assert_slb_presence(bool present, unsigned long ea)
 
asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
 
-   WARN_ON(present == (tmp == 0));
+   WARN_ON_ONCE(present == (tmp == 0));
 #endif
 }
 
-- 
2.20.1



Re: [PATCH] powerpc/ptrace: Add prototype for function pt_regs_check

2019-02-15 Thread Christophe Leroy




Le 15/02/2019 à 09:11, Mathieu Malaterre a écrit :

On Sat, Dec 8, 2018 at 4:46 PM Mathieu Malaterre  wrote:


`pt_regs_check` is a dummy function, its purpose is to break the build
if struct pt_regs and struct user_pt_regs don't match.

This function has no functional purpose, and will get eliminated at
link time or after init depending on CONFIG_LD_DEAD_CODE_DATA_ELIMINATION

This commit adds a prototype to fix warning at W=1:

   arch/powerpc/kernel/ptrace.c:3339:13: error: no previous prototype for 
‘pt_regs_check’ [-Werror=missing-prototypes]

Suggested-by: Christophe Leroy 
Signed-off-by: Mathieu Malaterre 
---
  arch/powerpc/kernel/ptrace.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index a398999d0770..341c0060b4c8 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -3338,6 +3338,10 @@ void do_syscall_trace_leave(struct pt_regs *regs)
 user_enter();
  }

+void __init pt_regs_check(void);
+/* dummy function, its purpose is to break the build if struct pt_regs and
+ * struct user_pt_regs don't match.
+ */


Another trick which seems to work with GCC is:

-void __init pt_regs_check(void)
+static inline void __init pt_regs_check(void)


Does this really work? Did you test to ensure that the BUILD_BUG_ON 
still detects a mismatch between struct pt_regs and struct user_pt_regs?


Christophe




  void __init pt_regs_check(void)
  {
 BUILD_BUG_ON(offsetof(struct pt_regs, gpr) !=
--
2.19.2



Re: [PATCH 01/11] powerpc: remove dead ifdefs in

2019-02-15 Thread Christophe Leroy




Le 14/02/2019 à 18:05, Christoph Hellwig a écrit :

On Thu, Feb 14, 2019 at 09:26:19AM +0100, Christophe Leroy wrote:

Could you also remove the 'config GENERIC_CSUM' item in
arch/powerpc/Kconfig ?


All the separate declarations go away later in this series.



I saw, but the purpose of the later patch is to replace arch defined 
GENERIC_CSUM by a common one that arches select. For the powerpc you are 
not in that case as the powerpc does not select GENERIC_CSUM.


So I really believe that all stale bits of remaining GENERIC_CSUM in 
powerpc should go away as a single dedicated patch, as a fix of commit 
d4fde568a34a ("powerpc/64: Use optimized checksum routines on 
little-endian")


Regarding the #ifdef __KERNEL__ , I think we should do a wide cleanup in 
arch/powerpc/include/asm, not only asm/checksum.h


Christophe


[PATCH] powerpc/mm: Handle mmap_min_addr correctly in get_unmapped_area callback

2019-02-15 Thread Aneesh Kumar K.V
After we ALIGN up the address, we need to make sure we didn't overflow
and end up with a zero address. In that case, we need to make sure that
the returned address is greater than mmap_min_addr.

Also, when doing a top-down search the low_limit is not PAGE_SIZE but rather
max(PAGE_SIZE, mmap_min_addr). This handles cases in which mmap_min_addr >
PAGE_SIZE.

This fixes selftest va_128TBswitch --run-hugetlb reporting failures when
run as a non-root user for

mmap(-1, MAP_HUGETLB)
mmap(-1, MAP_HUGETLB)

With this change we also avoid the first mmap(-1, MAP_HUGETLB) returning
a NULL address as the mmap address.

CC: Laurent Dufour 
Signed-off-by: Aneesh Kumar K.V 
---
 arch/powerpc/mm/hugetlbpage-radix.c |  5 +++--
 arch/powerpc/mm/slice.c | 10 ++
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage-radix.c 
b/arch/powerpc/mm/hugetlbpage-radix.c
index 2486bee0f93e..97c7a39ebc00 100644
--- a/arch/powerpc/mm/hugetlbpage-radix.c
+++ b/arch/powerpc/mm/hugetlbpage-radix.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -73,7 +74,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
if (addr) {
addr = ALIGN(addr, huge_page_size(h));
vma = find_vma(mm, addr);
-   if (high_limit - len >= addr &&
+   if (high_limit - len >= addr && addr >= mmap_min_addr &&
(!vma || addr + len <= vm_start_gap(vma)))
return addr;
}
@@ -83,7 +84,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 */
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
-   info.low_limit = PAGE_SIZE;
+   info.low_limit = max(PAGE_SIZE, mmap_min_addr);
info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
info.align_mask = PAGE_MASK & ~huge_page_mask(h);
info.align_offset = 0;
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 06898c13901d..aec91dbcdc0b 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -377,6 +378,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
unsigned long addr, found, prev;
struct vm_unmapped_area_info info;
+   unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
 
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
@@ -393,7 +395,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
if (high_limit > DEFAULT_MAP_WINDOW)
addr += mm->context.slb_addr_limit - DEFAULT_MAP_WINDOW;
 
-   while (addr > PAGE_SIZE) {
+   while (addr > min_addr) {
info.high_limit = addr;
if (!slice_scan_available(addr - 1, available, 0, ))
continue;
@@ -405,8 +407,8 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
 * Check if we need to reduce the range, or if we can
 * extend it to cover the previous available slice.
 */
-   if (addr < PAGE_SIZE)
-   addr = PAGE_SIZE;
+   if (addr < min_addr)
+   addr = min_addr;
else if (slice_scan_available(addr - 1, available, 0, )) {
addr = prev;
goto prev_slice;
@@ -528,7 +530,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
addr = _ALIGN_UP(addr, page_size);
slice_dbg(" aligned addr=%lx\n", addr);
/* Ignore hint if it's too large or overlaps a VMA */
-   if (addr > high_limit - len ||
+   if (addr > high_limit - len || addr < mmap_min_addr ||
!slice_area_is_free(mm, addr, len))
addr = 0;
}
-- 
2.20.1



Re: [PATCH] powerpc/ptrace: Add prototype for function pt_regs_check

2019-02-15 Thread Mathieu Malaterre
On Sat, Dec 8, 2018 at 4:46 PM Mathieu Malaterre  wrote:
>
> `pt_regs_check` is a dummy function, its purpose is to break the build
> if struct pt_regs and struct user_pt_regs don't match.
>
> This function has no functional purpose, and will get eliminated at
> link time or after init depending on CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
>
> This commit adds a prototype to fix warning at W=1:
>
>   arch/powerpc/kernel/ptrace.c:3339:13: error: no previous prototype for 
> ‘pt_regs_check’ [-Werror=missing-prototypes]
>
> Suggested-by: Christophe Leroy 
> Signed-off-by: Mathieu Malaterre 
> ---
>  arch/powerpc/kernel/ptrace.c | 4 
>  1 file changed, 4 insertions(+)
>
> diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
> index a398999d0770..341c0060b4c8 100644
> --- a/arch/powerpc/kernel/ptrace.c
> +++ b/arch/powerpc/kernel/ptrace.c
> @@ -3338,6 +3338,10 @@ void do_syscall_trace_leave(struct pt_regs *regs)
> user_enter();
>  }
>
> +void __init pt_regs_check(void);
> +/* dummy function, its purpose is to break the build if struct pt_regs and
> + * struct user_pt_regs don't match.
> + */

Another trick which seems to work with GCC is:

-void __init pt_regs_check(void)
+static inline void __init pt_regs_check(void)

>  void __init pt_regs_check(void)
>  {
> BUILD_BUG_ON(offsetof(struct pt_regs, gpr) !=
> --
> 2.19.2
>


Re: [PATCH] powerpc/ptrace: Simplify vr_get/set() to avoid GCC warning

2019-02-15 Thread Mathieu Malaterre
On Fri, Feb 15, 2019 at 7:14 AM Michael Ellerman  wrote:
>
> GCC 8 warns about the logic in vr_get/set(), which with -Werror breaks
> the build:
>
>   In function ‘user_regset_copyin’,
>   inlined from ‘vr_set’ at arch/powerpc/kernel/ptrace.c:628:9:
>   include/linux/regset.h:295:4: error: ‘memcpy’ offset [-527, -529] is
>   out of the bounds [0, 16] of object ‘vrsave’ with type ‘union
>   <anonymous>’ [-Werror=array-bounds]
>   arch/powerpc/kernel/ptrace.c: In function ‘vr_set’:
>   arch/powerpc/kernel/ptrace.c:623:5: note: ‘vrsave’ declared here
>  } vrsave;
>
> This has been identified as a regression in GCC, see GCC bug 88273.

Good point, it does not seem this will be backported.

> However we can avoid the warning and also simplify the logic and make
> it more robust.
>
> Currently we pass -1 as end_pos to user_regset_copyout(). This says
> "copy up to the end of the regset".
>
> The definition of the regset is:
> [REGSET_VMX] = {
> .core_note_type = NT_PPC_VMX, .n = 34,
> .size = sizeof(vector128), .align = sizeof(vector128),
> .active = vr_active, .get = vr_get, .set = vr_set
> },
>
> The end is calculated as (n * size), ie. 34 * sizeof(vector128).
>
> In vr_get/set() we pass start_pos as 33 * sizeof(vector128), meaning
> we can copy up to sizeof(vector128) into/out-of vrsave.
>
> The on-stack vrsave is defined as:
>   union {
>   elf_vrreg_t reg;
>   u32 word;
>   } vrsave;
>
> And elf_vrreg_t is:
>   typedef __vector128 elf_vrreg_t;
>
> So there is no bug, but we rely on all those sizes lining up,
> otherwise we would have a kernel stack exposure/overwrite on our
> hands.
>
> Rather than relying on that, we can pass an explicit end_pos based on
> the sizeof(vrsave). The result should be exactly the same but it's
> more obviously not over-reading/writing the stack and it avoids the
> compiler warning.
>

maybe:

Link: https://lkml.org/lkml/2018/8/16/117

In any case the warning is now gone:

Tested-by: Mathieu Malaterre 

> Reported-by: Meelis Roos 
> Reported-by: Mathieu Malaterre 
> Cc: sta...@vger.kernel.org
> Signed-off-by: Michael Ellerman 
> ---
>  arch/powerpc/kernel/ptrace.c | 10 --
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
> index 7535f89e08cd..d9ac7d94656e 100644
> --- a/arch/powerpc/kernel/ptrace.c
> +++ b/arch/powerpc/kernel/ptrace.c
> @@ -567,6 +567,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
> /*
>  * Copy out only the low-order word of vrsave.
>  */
> +   int start, end;
> union {
> elf_vrreg_t reg;
> u32 word;
> @@ -575,8 +576,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
>
> vrsave.word = target->thread.vrsave;
>
> +   start = 33 * sizeof(vector128);
> +   end = start + sizeof(vrsave);
> ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
> - 33 * sizeof(vector128), -1);
> + start, end);
> }
>
> return ret;
> @@ -614,6 +617,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
> /*
>  * We use only the first word of vrsave.
>  */
> +   int start, end;
> union {
> elf_vrreg_t reg;
> u32 word;
> @@ -622,8 +626,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
>
> vrsave.word = target->thread.vrsave;
>
> +   start = 33 * sizeof(vector128);
> +   end = start + sizeof(vrsave);
> ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
> -33 * sizeof(vector128), -1);
> +start, end);
> if (!ret)
> target->thread.vrsave = vrsave.word;
> }
> --
> 2.20.1
>


Re: [PATCH 09/11] lib: consolidate the GENERIC_CSUM symbol

2019-02-15 Thread Masahiro Yamada
On Thu, Feb 14, 2019 at 2:41 AM Christoph Hellwig  wrote:
>
> Add one definition to lib/Kconfig and let the architectures
> select it if supported.
>
> Signed-off-by: Christoph Hellwig 



> diff --git a/arch/unicore32/Kconfig b/arch/unicore32/Kconfig
> index 52b4d48e351a..9de1d983a99a 100644
> --- a/arch/unicore32/Kconfig
> +++ b/arch/unicore32/Kconfig
> @@ -29,9 +29,6 @@ config UNICORE32
>   designs licensed by PKUnity Ltd.
>   Please see web page at .
>
> -config GENERIC_CSUM
> -   def_bool y
> -
>  config NO_IOPORT_MAP
> bool
>



You forgot to add 'select GENERIC_CSUM' for unicore32.
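Presumably the fix is just to mirror what the other architectures gained in
this series, something like (untested sketch, surrounding selects elided):

```kconfig
config UNICORE32
	def_bool y
	# ... existing selects ...
	select GENERIC_CSUM
```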



-- 
Best Regards
Masahiro Yamada