Re: [PATCH v4 4/7] KVM: PPC: clean up redundant 'kvm_run' parameters

2020-05-25 Thread Paul Mackerras
On Mon, Apr 27, 2020 at 12:35:11PM +0800, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
> 
> Signed-off-by: Tianjia Zhang 

This looks OK, though possibly a little larger than it needs to be
because of variable name changes (kvm_run -> run) that aren't strictly
necessary.

Reviewed-by: Paul Mackerras 


Re: [PATCH v4 5/7] KVM: PPC: clean up redundant kvm_run parameters in assembly

2020-05-25 Thread Paul Mackerras
On Mon, Apr 27, 2020 at 12:35:12PM +0800, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.

Some of these changes don't look completely correct to me, see below.
If you're expecting these patches to go through my tree, I can fix up
the patch and commit it (with you as author), noting the changes I
made in the commit message.  Do you want me to do that?

> diff --git a/arch/powerpc/kvm/book3s_interrupts.S 
> b/arch/powerpc/kvm/book3s_interrupts.S
> index f7ad99d972ce..0eff749d8027 100644
> --- a/arch/powerpc/kvm/book3s_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_interrupts.S
> @@ -55,8 +55,7 @@
>   
> /
>  
>  /* Registers:
> - *  r3: kvm_run pointer
> - *  r4: vcpu pointer
> + *  r3: vcpu pointer
>   */
>  _GLOBAL(__kvmppc_vcpu_run)
>  
> @@ -68,8 +67,8 @@ kvm_start_entry:
>   /* Save host state to the stack */
>   PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
>  
> - /* Save r3 (kvm_run) and r4 (vcpu) */
> - SAVE_2GPRS(3, r1)
> + /* Save r3 (vcpu) */
> + SAVE_GPR(3, r1)
>  
>   /* Save non-volatile registers (r14 - r31) */
>   SAVE_NVGPRS(r1)
> @@ -82,11 +81,11 @@ kvm_start_entry:
>   PPC_STL r0, _LINK(r1)
>  
>   /* Load non-volatile guest state from the vcpu */
> - VCPU_LOAD_NVGPRS(r4)
> + VCPU_LOAD_NVGPRS(r3)
>  
>  kvm_start_lightweight:
>   /* Copy registers into shadow vcpu so we can access them in real mode */
> - mr  r3, r4
> + mr  r4, r3

This mr doesn't seem necessary.

>   bl  FUNC(kvmppc_copy_to_svcpu)
>   nop
>   REST_GPR(4, r1)

This should be loading r4 from GPR3(r1), not GPR4(r1) - which is what
REST_GPR(4, r1) will do.

Then, in the file but not in the patch context, there is this line:

PPC_LL  r3, GPR4(r1)/* vcpu pointer */

where once again GPR4 needs to be GPR3.

> @@ -191,10 +190,10 @@ after_sprg3_load:
>   PPC_STL r31, VCPU_GPR(R31)(r7)
>  
>   /* Pass the exit number as 3rd argument to kvmppc_handle_exit */

The comment should be modified to say "2nd" instead of "3rd",
otherwise it is confusing.

The rest of the patch looks OK.

Paul.


Re: [PATCH 2/2] kobject: send KOBJ_REMOVE uevent when the object is removed from sysfs

2020-05-25 Thread Greg Kroah-Hartman
On Mon, May 25, 2020 at 03:49:01PM -0700, Dmitry Torokhov wrote:
> On Sun, May 24, 2020 at 8:34 AM Greg Kroah-Hartman
>  wrote:
> >
> > It is possible for a KOBJ_REMOVE uevent to be sent to userspace way
> > after the files are actually gone from sysfs, due to how reference
> > counting for kobjects work.  This should not be a problem, but it would
> > be good to properly send the information when things are going away, not
> > at some later point in time in the future.
> >
> > Before this move, if a kobject's parent was torn down before the child,
> 
>  And this is the root of the problem and what has to be fixed.

I fixed that in patch one of this series.  Turns out the user of the
kobject was not even expecting that to happen.

> > when the call to kobject_uevent() happened, the parent walk to try to
> > reconstruct the full path of the kobject could be a total mess and cause
> > crashes.  It's not good to try to tear down a kobject tree from the top
> > down, but let's at least try not to crash if a user does so.
> 
> One can try, but if we keep proper reference counting then kobject
> core should take care of actually releasing objects in the right
> order. I do not think you should keep this patch, and instead see if
> we can push call to kobject_put(kobj->parent) into kobject_cleanup().

I tried that, but there were a _lot_ of underflow errors reported, so
there's something else happening.  Or my attempt was incorrect :)

thanks,

greg k-h
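Dmitry's argument — that proper reference counting makes teardown order a non-issue — rests on the rule that a child kobject holds a reference on its parent, so the parent's release is deferred until the last child is gone. A minimal userspace sketch of that rule (toy types and names, not the real kobject API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy object with a parent reference, mimicking kobject's
 * child-holds-parent-ref rule. All names here are hypothetical. */
struct obj {
	int refcount;
	int released;		/* set when the object is freed */
	struct obj *parent;
};

static void obj_get(struct obj *o)
{
	if (o)
		o->refcount++;
}

static void obj_put(struct obj *o)
{
	if (!o)
		return;
	if (--o->refcount == 0) {
		/* Release this object first, then drop the parent ref:
		 * the parent can never be released before its children. */
		o->released = 1;
		obj_put(o->parent);
	}
}

static void obj_init(struct obj *o, struct obj *parent)
{
	o->refcount = 1;
	o->released = 0;
	o->parent = parent;
	obj_get(parent);	/* child pins the parent */
}
```

Even if a caller drops the parent's own reference first (top-down teardown), the parent survives until the last child is put.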


[PATCH] coresight: Use devm_kcalloc() in coresight_alloc_conns()

2020-05-25 Thread Xu Wang
A multiplication for the size determination of a memory allocation
indicates that an array data structure is being allocated.
Thus use the corresponding function "devm_kcalloc", which also checks
the multiplication for overflow.

Signed-off-by: Xu Wang 
---
 drivers/hwtracing/coresight/coresight-platform.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-platform.c 
b/drivers/hwtracing/coresight/coresight-platform.c
index 43418a2126ff..6720049409f3 100644
--- a/drivers/hwtracing/coresight/coresight-platform.c
+++ b/drivers/hwtracing/coresight/coresight-platform.c
@@ -27,9 +27,8 @@ static int coresight_alloc_conns(struct device *dev,
 struct coresight_platform_data *pdata)
 {
if (pdata->nr_outport) {
-   pdata->conns = devm_kzalloc(dev, pdata->nr_outport *
-   sizeof(*pdata->conns),
-   GFP_KERNEL);
+   pdata->conns = devm_kcalloc(dev, pdata->nr_outport,
+   sizeof(*pdata->conns), GFP_KERNEL);
if (!pdata->conns)
return -ENOMEM;
}
-- 
2.17.1
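The functional difference is small but real: devm_kcalloc() checks the `nr_outport * sizeof(*pdata->conns)` multiplication for overflow before allocating, where the open-coded multiplication passed to devm_kzalloc() would silently wrap. A userspace sketch of that guard (hypothetical helper name; the kernel implements the check via check_mul_overflow()):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* calloc-style overflow check: return NULL instead of allocating
 * a too-small buffer when n * size would wrap around SIZE_MAX. */
static void *checked_alloc_array(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;		/* multiplication would overflow */
	return calloc(n, size);		/* zeroed, like devm_kzalloc */
}
```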



Re: [RFC PATCH V2 4/7] x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask

2020-05-25 Thread Lai Jiangshan
On Tue, May 26, 2020 at 12:39 PM Andy Lutomirski  wrote:
>
> On Mon, May 25, 2020 at 9:31 PM Lai Jiangshan
>  wrote:
> >
> > On Tue, May 26, 2020 at 12:21 PM Andy Lutomirski  wrote:
> > >
> > > On Mon, May 25, 2020 at 6:42 PM Lai Jiangshan  
> > > wrote:
> > > >
> > > > The percpu user_pcid_flush_mask is used for CPU entry.
> > > > If a data breakpoint is set on it, it will cause an unwanted #DB.
> > > > Protect the full cpu_tlbstate structure to be sure.
> > > >
> > > > There are some other percpu data used in CPU entry, but they are
> > > > either in already-protected cpu_tss_rw or are safe to trigger #DB
> > > > (espfix_waddr, espfix_stack).
> > >
> > > How hard would it be to rework this to have DECLARE_PERCPU_NODEBUG()
> > > and DEFINE_PERCPU_NODEBUG() or similar?
> >
> >
> > I don't know, but it is an excellent idea. Although the patchset
> > protects only 2 or 3 portions of percpu data, there is a lot of
> > percpu data used in tracing or kprobe code. That needs to be
> > protected too.
> >
> > Adds CC:
> > Steven Rostedt 
> > Masami Hiramatsu 
>
> PeterZ is moving things in the direction of more aggressively
> disabling hardware breakpoints in the nasty paths where we won't
> survive a hardware breakpoint.  Does the tracing code have portions
> that won't survive a limited amount of recursion?

Agree. After "aggressively disabling hardware breakpoints in the nasty
paths", only percpu data used by entry code needs to be protected;
even non-instrumentable percpu data used by nmi_enter() doesn't need
to be marked protected, because #DB is disabled there.

Only percpu data used by entry code in ranges where #DB is not disabled
needs to be protected. There are only a small number of such portions,
so I don't think we need DECLARE_PERCPU_NODEBUG() or the like for merely
2 or 3 portions of data. This patchset is sufficient.
(espfix_waddr and espfix_stack are not counted here; they need more
review besides mine.)

>
> I'm hoping that we can keep the number of no-breakpoint-here percpu
> variables low.  Maybe we could recruit objtool to help make sure we
> got all of them, but that would be a much larger project.
>
> Would we currently survive a breakpoint on the thread stack?  I don't
> see any extremely obvious reason that we wouldn't.  Blocking such a
> breakpoint would be annoying.


Re: linux-next: manual merge of the net-next tree with the bpf tree

2020-05-25 Thread Björn Töpel

On 2020-05-26 05:12, Stephen Rothwell wrote:

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.


The fix looks good!

I'll keep this in mind, and try not to repeat similar conflicts for 
future patches.


Thanks for the fixup, and for the clarification!


Cheers,
Björn


Re: [PATCH v4 3/7] KVM: PPC: Remove redundant kvm_run from vcpu_arch

2020-05-25 Thread Paul Mackerras
On Mon, Apr 27, 2020 at 12:35:10PM +0800, Tianjia Zhang wrote:
> The 'kvm_run' field already exists in the 'vcpu' structure, so the
> duplicate 'kvm_run' in 'vcpu_arch' is redundant and should be deleted.
> 
> Signed-off-by: Tianjia Zhang 

This looks fine.

I assume each architecture sub-maintainer is taking the relevant
patches from this series via their tree - is that right?

Reviewed-by: Paul Mackerras 


Re: [PATCH] power: reset: vexpress: fix build issue

2020-05-25 Thread Nathan Chancellor
On Mon, May 25, 2020 at 07:37:45PM -0400, Valdis Klētnieks wrote:
> On Sun, 24 May 2020 15:20:25 -0700, Nathan Chancellor said:
> 
> > arm-linux-gnueabi-ld: drivers/power/reset/vexpress-poweroff.o: in function 
> > `vexpress_reset_probe':
> > vexpress-poweroff.c:(.text+0x36c): undefined reference to 
> > `devm_regmap_init_vexpress_config'
> 
> The part I can't figure out is that git blame tells me there's already an
> export:
> 
> 3b9334ac835bb (Pawel Moll  2014-04-30 16:46:29 +0100 154)   return regmap;
> 3b9334ac835bb (Pawel Moll  2014-04-30 16:46:29 +0100 155) }
> b33cdd283bd91 (Arnd Bergmann   2014-05-26 17:25:22 +0200 156) EXPORT_SYMBOL_GPL(devm_regmap_init_vexpress_config);
> 3b9334ac835bb (Pawel Moll  2014-04-30 16:46:29 +0100 157)
> 
> but I can't figure out where or if drivers/power/reset/vexpress-poweroff.c 
> gets
> a MODULE_LICENSE from...

Correct, it is exported, but that file is being built as a module whereas
the file requiring it is being built in. As far as I understand, that
will not work, hence the error.

The issue with this patch is that ARCH_VEXPRESS still just selects
POWER_RESET_VEXPRESS, which ignores "depends on", hence the Kconfig
warning and the still-unfixed error.

I am not that much of a Kconfig guru to come up with a solution. I am
just reporting it because arm allmodconfig is broken on -next due to
this.

Cheers,
Nathan


[PATCH] vdpa: bypass waking up vhost_worker for vdpa vq kick

2020-05-25 Thread Zhu Lingshan
Standard vhost devices rely on waking up a vhost_worker to kick
a virtqueue. However, vdpa devices have hardware backends, so they
do not need this wakeup routine. With this commit, a vdpa device
will kick a virtqueue directly, reducing the performance overhead
caused by waking up a vhost_worker.

Signed-off-by: Zhu Lingshan 
Suggested-by: Jason Wang 
---
 drivers/vhost/vdpa.c | 100 +++
 1 file changed, 100 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 0968361..d3a2aca 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -287,6 +287,66 @@ static long vhost_vdpa_get_vring_num(struct vhost_vdpa *v, 
u16 __user *argp)
 
return 0;
 }
+void vhost_vdpa_poll_stop(struct vhost_virtqueue *vq)
+{
+   vhost_poll_stop(&vq->poll);
+}
+
+int vhost_vdpa_poll_start(struct vhost_virtqueue *vq)
+{
+   struct vhost_poll *poll = &vq->poll;
+   struct file *file = vq->kick;
+   __poll_t mask;
+
+
+   if (poll->wqh)
+   return 0;
+
+   mask = vfs_poll(file, &poll->table);
+   if (mask)
+   vq->handle_kick(&vq->poll.work);
+   if (mask & EPOLLERR) {
+   vhost_poll_stop(poll);
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
+static long vhost_vdpa_set_vring_kick(struct vhost_virtqueue *vq,
+ void __user *argp)
+{
+   bool pollstart = false, pollstop = false;
+   struct file *eventfp, *filep = NULL;
+   struct vhost_vring_file f;
+   long r;
+
+   if (copy_from_user(&f, argp, sizeof(f)))
+   return -EFAULT;
+
+   eventfp = f.fd == -1 ? NULL : eventfd_fget(f.fd);
+   if (IS_ERR(eventfp)) {
+   r = PTR_ERR(eventfp);
+   return r;
+   }
+
+   if (eventfp != vq->kick) {
+   pollstop = (filep = vq->kick) != NULL;
+   pollstart = (vq->kick = eventfp) != NULL;
+   } else
+   filep = eventfp;
+
+   if (pollstop && vq->handle_kick)
+   vhost_vdpa_poll_stop(vq);
+
+   if (filep)
+   fput(filep);
+
+   if (pollstart && vq->handle_kick)
+   r = vhost_vdpa_poll_start(vq);
+
+   return r;
+}
 
 static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
   void __user *argp)
@@ -316,6 +376,11 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, 
unsigned int cmd,
return 0;
}
 
+   if (cmd == VHOST_SET_VRING_KICK) {
+   r = vhost_vdpa_set_vring_kick(vq, argp);
+   return r;
+   }
+
if (cmd == VHOST_GET_VRING_BASE)
vq->last_avail_idx = ops->get_vq_state(v->vdpa, idx);
 
@@ -667,6 +732,39 @@ static void vhost_vdpa_free_domain(struct vhost_vdpa *v)
v->domain = NULL;
 }
 
+static int vhost_vdpa_poll_worker(wait_queue_entry_t *wait, unsigned int mode,
+ int sync, void *key)
+{
+   struct vhost_poll *poll = container_of(wait, struct vhost_poll, wait);
+   struct vhost_virtqueue *vq = container_of(poll, struct vhost_virtqueue,
+ poll);
+
+   if (!(key_to_poll(key) & poll->mask))
+   return 0;
+
+   vq->handle_kick(&vq->poll.work);
+
+   return 0;
+}
+
+void vhost_vdpa_poll_init(struct vhost_dev *dev)
+{
+   struct vhost_virtqueue *vq;
+   struct vhost_poll *poll;
+   int i;
+
+   for (i = 0; i < dev->nvqs; i++) {
+   vq = dev->vqs[i];
+   poll = &vq->poll;
+   if (vq->handle_kick) {
+   init_waitqueue_func_entry(&poll->wait,
+ vhost_vdpa_poll_worker);
+   poll->work.fn = vq->handle_kick;
+   }
+
+   }
+}
+
 static int vhost_vdpa_open(struct inode *inode, struct file *filep)
 {
struct vhost_vdpa *v;
@@ -697,6 +795,8 @@ static int vhost_vdpa_open(struct inode *inode, struct file 
*filep)
vhost_dev_init(dev, vqs, nvqs, 0, 0, 0,
   vhost_vdpa_process_iotlb_msg);
 
+   vhost_vdpa_poll_init(dev);
+
dev->iotlb = vhost_iotlb_alloc(0, 0);
if (!dev->iotlb) {
r = -ENOMEM;
-- 
1.8.3.1
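The patch's core idea — skip the vhost worker and run the kick handler inline from the eventfd wakeup — can be reduced to a toy model of the two dispatch strategies (hypothetical names; the real code calls vq->handle_kick from vhost_vdpa_poll_worker()):

```c
#include <assert.h>

/* Minimal model: a "kick" can either be queued for a worker
 * thread to pick up later, or handled inline in the wakeup path. */
struct vq_model {
	int pending;	/* work queued for the worker */
	int handled;	/* kicks actually processed */
};

static void handle_kick(struct vq_model *vq)
{
	vq->handled++;
}

/* Standard vhost path: defer to the worker thread. */
static void kick_via_worker(struct vq_model *vq)
{
	vq->pending++;	/* worker wakeup + context switch happen later */
}

static void worker_run(struct vq_model *vq)
{
	while (vq->pending) {
		vq->pending--;
		handle_kick(vq);
	}
}

/* vdpa path in this patch: call the handler directly. */
static void kick_direct(struct vq_model *vq)
{
	handle_kick(vq);	/* no worker wakeup needed */
}
```

The direct path trades the worker's serialization for lower latency, which is safe here because the handler only forwards the kick to the hardware backend.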



[PATCH v1] x86: Pin cr4 FSGSBASE

2020-05-25 Thread Andi Kleen
From: Andi Kleen 

Since there seem to be kernel modules floating around that set
FSGSBASE incorrectly, prevent this in the CR4 pinning. Currently
CR4 pinning just checks that bits are set, this also checks
that the FSGSBASE bit is not set, and if it is clears it again.

Note this patch will need to be undone when the full FSGSBASE
patches are merged. But it's a reasonable solution for v5.2+
stable at least. Sadly the older kernels don't have the necessary
infrastructure for this (although a simpler version of this
could be added there too)

Cc: sta...@vger.kernel.org # v5.2+
Signed-off-by: Andi Kleen 
---
 arch/x86/kernel/cpu/common.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index bed0cb83fe24..1f5b7871ae9a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -385,6 +385,11 @@ void native_write_cr4(unsigned long val)
/* Warn after we've set the missing bits. */
WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
  bits_missing);
+   if (val & X86_CR4_FSGSBASE) {
+   WARN_ONCE(1, "CR4 unexpectedly set FSGSBASE!?\n");
+   val &= ~X86_CR4_FSGSBASE;
+   goto set_register;
+   }
}
 }
 EXPORT_SYMBOL(native_write_cr4);
-- 
2.25.4
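The pinning logic now enforces two invariants: pinned bits must stay set, and FSGSBASE must stay clear. A userspace sketch of that enforcement (the mask values below are illustrative examples, not taken from this patch):

```c
#include <assert.h>

#define PIN_SET_BITS	0x00100000UL	/* example: a bit that must stay on */
#define PIN_CLEAR_BITS	0x00010000UL	/* example: FSGSBASE-style forbidden bit */

/* Return the value that would actually be written to CR4 after
 * pinning: re-assert missing pinned bits, strip forbidden ones.
 * (The kernel also WARNs in both cases.) */
static unsigned long pin_cr4(unsigned long val)
{
	unsigned long bits_missing = ~val & PIN_SET_BITS;

	if (bits_missing)
		val |= bits_missing;	/* restore pinned bits */
	if (val & PIN_CLEAR_BITS)
		val &= ~PIN_CLEAR_BITS;	/* the new FSGSBASE-style check */
	return val;
}
```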



linux-next: manual merge of the devicetree tree with the watchdog tree

2020-05-25 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the devicetree tree got a conflict in:

  Documentation/devicetree/bindings/watchdog/renesas,wdt.txt

between commit:

  ff1ee6fb276c ("dt-bindings: watchdog: renesas,wdt: Document r8a7742 support")

from the watchdog tree and commit:

  d0941cfb9fa8 ("dt-bindings: watchdog: renesas-wdt: Convert to json-schema")

from the devicetree tree.

I fixed it up (I removed the file and added the patch below) and can
carry the fix as necessary. This is now fixed as far as linux-next is
concerned, but any non trivial conflicts should be mentioned to your
upstream maintainer when your tree is submitted for merging.  You may
also want to consider cooperating with the maintainer of the conflicting
tree to minimise any particularly complex conflicts.

From: Stephen Rothwell 
Date: Tue, 26 May 2020 15:15:51 +1000
Subject: [PATCH] dt-bindings: watchdog: renesas-wdt: fix up for yaml conversion

Signed-off-by: Stephen Rothwell 
---
 Documentation/devicetree/bindings/watchdog/renesas,wdt.yaml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Documentation/devicetree/bindings/watchdog/renesas,wdt.yaml 
b/Documentation/devicetree/bindings/watchdog/renesas,wdt.yaml
index 27e8c4accd67..572f4c912fef 100644
--- a/Documentation/devicetree/bindings/watchdog/renesas,wdt.yaml
+++ b/Documentation/devicetree/bindings/watchdog/renesas,wdt.yaml
@@ -24,6 +24,7 @@ properties:
 
   - items:
   - enum:
+  - renesas,r8a7742-wdt  # RZ/G1H
   - renesas,r8a7743-wdt  # RZ/G1M
   - renesas,r8a7744-wdt  # RZ/G1N
   - renesas,r8a7745-wdt  # RZ/G1E
-- 
2.26.2

-- 
Cheers,
Stephen Rothwell




Re: [PATCH v2] iommu/iova: Retry from last rb tree node if iova search fails

2020-05-25 Thread Vijayanand Jitta



On 5/11/2020 4:34 PM, vji...@codeaurora.org wrote:
> From: Vijayanand Jitta 
> 
> Whenever a new iova alloc request comes in, the iova is always searched
> from the cached node and the nodes previous to the cached node. So,
> even if there is free iova space available in the nodes next to the
> cached node, iova allocation can still fail because of this approach.
> 
> Consider the following sequence of iova alloc and frees on
> 1GB of iova space
> 
> 1) alloc - 500MB
> 2) alloc - 12MB
> 3) alloc - 499MB
> 4) free -  12MB which was allocated in step 2
> 5) alloc - 13MB
> 
> After the above sequence we will have 12MB of free iova space, and the
> cached node will be pointing to the iova pfn of the last alloc of 13MB,
> which will be the lowest iova pfn of that iova space. Now if we get an
> alloc request of 2MB we just search from the cached node and then look
> at lower iova pfns for free iova, and as there aren't any, the iova
> alloc fails even though there is 12MB of free iova space.
> 
> To avoid such iova search failures, do a retry from the last rb tree node
> when the iova search fails; this will search the entire tree and get an
> iova if it is available.
> 
> Signed-off-by: Vijayanand Jitta 
> ---
>  drivers/iommu/iova.c | 19 +++
>  1 file changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 0e6a953..7d82afc 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -184,8 +184,9 @@ static int __alloc_and_insert_iova_range(struct 
> iova_domain *iovad,
>   struct rb_node *curr, *prev;
>   struct iova *curr_iova;
>   unsigned long flags;
> - unsigned long new_pfn;
> + unsigned long new_pfn, alloc_lo_new;
>   unsigned long align_mask = ~0UL;
> + unsigned long alloc_hi = limit_pfn, alloc_lo = iovad->start_pfn;
>  
>   if (size_aligned)
>   align_mask <<= fls_long(size - 1);
> @@ -198,15 +199,25 @@ static int __alloc_and_insert_iova_range(struct 
> iova_domain *iovad,
>  
>   curr = __get_cached_rbnode(iovad, limit_pfn);
>   curr_iova = rb_entry(curr, struct iova, node);
> + alloc_lo_new = curr_iova->pfn_hi;
> +
> +retry:
>   do {
> - limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
> - new_pfn = (limit_pfn - size) & align_mask;
> + alloc_hi = min(alloc_hi, curr_iova->pfn_lo);
> + new_pfn = (alloc_hi - size) & align_mask;
>   prev = curr;
>   curr = rb_prev(curr);
>   curr_iova = rb_entry(curr, struct iova, node);
>   } while (curr && new_pfn <= curr_iova->pfn_hi);
>  
> - if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> + if (alloc_hi < size || new_pfn < alloc_lo) {
> + if (alloc_lo == iovad->start_pfn && alloc_lo_new < limit_pfn) {
> + alloc_hi = limit_pfn;
> + alloc_lo = alloc_lo_new;
> + curr = &iovad->anchor.node;
> + curr_iova = rb_entry(curr, struct iova, node);
> + goto retry;
> + }
>   iovad->max32_alloc_size = size;
>   goto iova32_full;
>   }
> 

ping?
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of Code Aurora Forum, hosted by The Linux Foundation
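The failure mode and the fix can be reproduced with a toy top-down allocator: a cached cursor limits the search to lower addresses, and the patch's retry re-runs the search from the top of the space. A small simulation (illustrative only, not the rb-tree code):

```c
#include <stdbool.h>

/* Toy top-down allocator over SLOTS fixed-size units, with a cached
 * search cursor standing in for the iova cached rb-tree node. */
#define SLOTS 64

static bool used[SLOTS];
static int cached = SLOTS;	/* searches start below this point */

/* Find n contiguous free slots entirely below 'start'; return the
 * lowest index of the run, or -1 if none. */
static int search_below(int start, int n)
{
	for (int hi = start; hi - n >= 0; hi--) {
		bool free_run = true;

		for (int i = hi - n; i < hi; i++) {
			if (used[i]) {
				free_run = false;
				break;
			}
		}
		if (free_run)
			return hi - n;
	}
	return -1;
}

static int toy_alloc(int n)
{
	int lo = search_below(cached, n);

	/* The patch's addition: if the cursor-limited search fails,
	 * retry once over the whole space from the top. */
	if (lo < 0 && cached < SLOTS)
		lo = search_below(SLOTS, n);
	if (lo < 0)
		return -1;
	for (int i = lo; i < lo + n; i++)
		used[i] = true;
	cached = lo;		/* cache the new lowest allocation */
	return lo;
}

static void toy_free(int lo, int n)
{
	for (int i = lo; i < lo + n; i++)
		used[i] = false;
}
```

Scaled down, the commit message's sequence becomes: alloc 32, alloc 2, alloc 28, free the 2, alloc 2 — after which only the retry can find the two slots freed above the cursor.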


Re: [PATCH 3/3] hwrng: ba431-rng: add support for BA431 hwrng

2020-05-25 Thread Olivier Sobrie
On Mon, May 25, 2020 at 10:28:46PM +0200, Arnd Bergmann wrote:
> On Mon, May 25, 2020 at 10:07 PM Olivier Sobrie
>  wrote:
> >
> > Silex Insight BA431 is an IP designed to generate random numbers that
> > can be integrated in various FPGAs.
> > This driver adds support for it through the hwrng interface.
> >
> > This driver is used in Silex Insight Viper OEM boards.
> >
> > Signed-off-by: Olivier Sobrie 
> > Signed-off-by: Waleed Ziad 
> 
> The driver looks good to me.
> 
> Acked-by: Arnd Bergmann  
> 
> >  drivers/char/hw_random/Kconfig |  10 ++
> >  drivers/char/hw_random/Makefile|   1 +
> >  drivers/char/hw_random/ba431-rng.c | 240 +
> 
> I wonder if we should move drivers/char/hw_random to its own top-level drivers
> subsystem outside of drivers/char. It seems to be growing steadily and is 
> larger
> than a lot of other subsystems with currently 34 drivers in there.
> 
> Not your problem though.
> 
> > +   /* Wait until the state changed */
> > +   for (i = 0; i < BA431_RESET_READ_STATUS_RETRIES; ++i) {
> > +   state = ba431_trng_get_state(ba431);
> > +   if (state >= BA431_STATE_STARTUP)
> > +   break;
> > +
> > +   udelay(BA431_RESET_READ_STATUS_INTERVAL);
> > +   }
> 
> Looking for something to improve, I noticed that this loop can take over
> a millisecond to time out, and it always runs in non-atomic context.
> It may be better to use usleep_range() than udelay().

Ok I'll change that and send a v2 later this week.

Thank you,

Olivier
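Arnd's point: udelay() busy-waits, so BA431_RESET_READ_STATUS_RETRIES iterations can spin for over a millisecond, while usleep_range() yields the CPU and lets the scheduler coalesce timers in sleepable context. The bounded-polling pattern itself, sketched in userspace (hypothetical names; nanosleep stands in for usleep_range):

```c
#include <stdbool.h>
#include <time.h>

#define POLL_RETRIES		100
#define POLL_INTERVAL_US	10

/* Poll check() until it reports ready or the retry budget runs out.
 * Returns the number of polls used, or -1 on timeout. */
static int poll_until(bool (*check)(void *), void *ctx)
{
	for (int i = 0; i < POLL_RETRIES; i++) {
		if (check(ctx))
			return i;
		/* In the kernel this would be usleep_range(POLL_INTERVAL_US,
		 * 2 * POLL_INTERVAL_US) rather than udelay(). */
		struct timespec ts = { 0, POLL_INTERVAL_US * 1000L };
		nanosleep(&ts, NULL);
	}
	return -1;
}

/* Example readiness check: becomes true once *ctx reaches zero. */
static bool ready_after(void *ctx)
{
	int *remaining = ctx;

	return (*remaining)-- <= 0;
}
```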


Re: [PATCH v3 0/3] Add Qualcomm IPCC driver support

2020-05-25 Thread Manivannan Sadhasivam
Hi Jassi,

On Wed, May 20, 2020 at 02:18:51PM +0530, Manivannan Sadhasivam wrote:
> Hello,
> 
> This series adds mailbox driver support for Qualcomm Inter Processor
> Communications Controller (IPCC) block found in MSM chipsets. This block
> is used to route interrupts between modems, DSPs and APSS (Application
> Processor Subsystem).
> 
> The driver is modeled as a mailbox+irqchip driver. The irqchip part helps
> in receiving the interrupts from the IPCC clients such as modems, DSPs,
> PCI-E etc... and forwards them to respective entities in APSS.
> 
> On the other hand, the mailbox part is used to send interrupts to the IPCC
> clients from the entities of APSS.
> 
> This series is tested on SM8250-MTP board.
> 

Any update on this series?

Thanks,
Mani

> Thanks,
> Mani
> 
> Changes in v3:
> 
> * Added Bjorn's review tags
> * Few changes to DT binding as suggested by Rob
> 
> Changes in v2:
> 
> * Moved from soc/ to mailbox/
> * Switched to static mbox channels
> * Some misc cleanups
> 
> Manivannan Sadhasivam (3):
>   dt-bindings: mailbox: Add devicetree binding for Qcom IPCC
>   mailbox: Add support for Qualcomm IPCC
>   MAINTAINERS: Add entry for Qualcomm IPCC driver
> 
>  .../bindings/mailbox/qcom-ipcc.yaml   |  80 +
>  MAINTAINERS   |   8 +
>  drivers/mailbox/Kconfig   |  10 +
>  drivers/mailbox/Makefile  |   2 +
>  drivers/mailbox/qcom-ipcc.c   | 286 ++
>  include/dt-bindings/mailbox/qcom-ipcc.h   |  33 ++
>  6 files changed, 419 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/mailbox/qcom-ipcc.yaml
>  create mode 100644 drivers/mailbox/qcom-ipcc.c
>  create mode 100644 include/dt-bindings/mailbox/qcom-ipcc.h
> 
> -- 
> 2.26.GIT
> 


Re: [PATCH] drivers/virt/fsl_hypervisor: Correcting error handling path

2020-05-25 Thread Souptick Joarder
On Fri, May 22, 2020 at 6:24 PM Dan Carpenter  wrote:
>
> On Thu, May 14, 2020 at 01:53:16AM +0530, Souptick Joarder wrote:
> > First, when memory allocation for sg_list_unaligned fails, there
> > is no point in calling put_pages() as we haven't pinned any pages.
> >
> > Second, if get_user_pages_fast() fails we should unpin num_pinned
> > pages; there is no point in checking all num_pages.
> >
> > This patch addresses both.
> >
> > Signed-off-by: Souptick Joarder 
>
> If gup_flags were | FOLL_LONGTERM then this patch would fix a double
> free because of the put_page() in __gup_longterm_locked().
>
> mm/gup.c
>   1786  if (check_dax_vmas(vmas_tmp, rc)) {
>   1787  for (i = 0; i < rc; i++)
>   1788  put_page(pages[i]);
> ^^^
> put_page() here and also in the caller.
>
>   1789  rc = -EOPNOTSUPP;
>   1790  goto out;
>   1791  }
>
> But since this isn't FOLL_LONGTERM the patch is a nice cleanup which
> doesn't affect run time.
>
> Reviewed-by: Dan Carpenter 

Hi Andrew,
Is it fine to take this through the mm tree?


Re: Re: [PATCH] iio: magnetometer: ak8974: Fix runtime PM imbalance on error

2020-05-25 Thread dinghao . liu
Hi, Linus

> On Sun, May 24, 2020 at 4:51 AM Dinghao Liu  wrote:
> 
> > When devm_regmap_init_i2c() returns an error code, a pairing
> > runtime PM usage counter decrement is needed to keep the
> > counter balanced. For error paths after ak8974_set_power(),
> > ak8974_detect() and ak8974_reset(), things are the same.
> >
> > However, When iio_triggered_buffer_setup() returns an error
> > code, we don't need such a decrement because there is already
> > one before this call. Things are the same for other error paths
> > after it.
> >
> > Signed-off-by: Dinghao Liu 
> 
> > ak8974->map = devm_regmap_init_i2c(i2c, &ak8974_regmap_config);
> > if (IS_ERR(ak8974->map)) {
> > dev_err(&i2c->dev, "failed to allocate register map\n");
> > +   pm_runtime_put_noidle(&i2c->dev);
> > +   pm_runtime_disable(&i2c->dev);
> > return PTR_ERR(ak8974->map);
> 
> This is correct.
> 
> > ret = ak8974_set_power(ak8974, AK8974_PWR_ON);
> > if (ret) {
> > dev_err(>dev, "could not power on\n");
> > +   pm_runtime_put_noidle(>dev);
> > +   pm_runtime_disable(>dev);
> > goto power_off;
> 
> What about just changing this to goto disable_pm;
>
> > ret = ak8974_detect(ak8974);
> > if (ret) {
> > dev_err(>dev, "neither AK8974 nor AMI30x found\n");
> > +   pm_runtime_put_noidle(>dev);
> > +   pm_runtime_disable(>dev);
> > goto power_off;
> 
> goto disable_pm;
> 
> > @@ -786,6 +792,8 @@ static int ak8974_probe(struct i2c_client *i2c,
> > ret = ak8974_reset(ak8974);
> > if (ret) {
> > dev_err(>dev, "AK8974 reset failed\n");
> > +   pm_runtime_put_noidle(>dev);
> > +   pm_runtime_disable(>dev);
> 
> goto disable_pm;
> 
> >  disable_pm:
> > -   pm_runtime_put_noidle(&i2c->dev);
> > pm_runtime_disable(&i2c->dev);
> > ak8974_set_power(ak8974, AK8974_PWR_OFF);
> 
> Keep the top pm_runtime_put_noidle().

I found that there was already a pm_runtime_put() before 
iio_triggered_buffer_setup() (just after pm_runtime_use_autosuspend).
So if we keep the pm_runtime_put_noidle() here, we will have
two PM usage counter decrements. Do you think this is a bug?

Regards,
Dinghao

> 
> The ak8974_set_power() call is fine, the power on call does not
> need to happen in balance. Sure it will attempt to write a register
> but so will the power on call.
> 
> Yours,
> Linus Walleij


[PATCH v2 2/2] phy: intel: Add Keem Bay eMMC PHY support

2020-05-25 Thread Wan Ahmad Zainie
Add support for eMMC PHY on Intel Keem Bay SoC.

Signed-off-by: Wan Ahmad Zainie 
---
 drivers/phy/intel/Kconfig|   8 +
 drivers/phy/intel/Makefile   |   1 +
 drivers/phy/intel/phy-keembay-emmc.c | 321 +++
 3 files changed, 330 insertions(+)
 create mode 100644 drivers/phy/intel/phy-keembay-emmc.c

diff --git a/drivers/phy/intel/Kconfig b/drivers/phy/intel/Kconfig
index 7b47682a4e0e..5f5497d1624a 100644
--- a/drivers/phy/intel/Kconfig
+++ b/drivers/phy/intel/Kconfig
@@ -22,3 +22,11 @@ config PHY_INTEL_EMMC
select GENERIC_PHY
help
  Enable this to support the Intel EMMC PHY
+
+config PHY_KEEMBAY_EMMC
+   tristate "Intel Keem Bay EMMC PHY Driver"
+   depends on OF
+   select GENERIC_PHY
+   select REGMAP_MMIO
+   help
+ Enable this to support the Keem Bay EMMC PHY.
diff --git a/drivers/phy/intel/Makefile b/drivers/phy/intel/Makefile
index 233d530dadde..6566334e7b77 100644
--- a/drivers/phy/intel/Makefile
+++ b/drivers/phy/intel/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_PHY_INTEL_COMBO)  += phy-intel-combo.o
 obj-$(CONFIG_PHY_INTEL_EMMC)+= phy-intel-emmc.o
+obj-$(CONFIG_PHY_KEEMBAY_EMMC) += phy-keembay-emmc.o
diff --git a/drivers/phy/intel/phy-keembay-emmc.c 
b/drivers/phy/intel/phy-keembay-emmc.c
new file mode 100644
index ..546854cdbb0c
--- /dev/null
+++ b/drivers/phy/intel/phy-keembay-emmc.c
@@ -0,0 +1,321 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel Keem Bay eMMC PHY driver
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/* eMMC/SD/SDIO core/phy configuration registers */
+#define PHY_CFG_0  0x24
+#define  SEL_DLY_TXCLK_MASKBIT(29)
+#define  SEL_DLY_TXCLK(x)  (((x) << 29) & SEL_DLY_TXCLK_MASK)
+#define  OTAP_DLY_ENA_MASK BIT(27)
+#define  OTAP_DLY_ENA(x)   (((x) << 27) & OTAP_DLY_ENA_MASK)
+#define  OTAP_DLY_SEL_MASK GENMASK(26, 23)
+#define  OTAP_DLY_SEL(x)   (((x) << 23) & OTAP_DLY_SEL_MASK)
+#define  DLL_EN_MASK   BIT(10)
+#define  DLL_EN(x) (((x) << 10) & DLL_EN_MASK)
+#define  PWR_DOWN_MASK BIT(0)
+#define  PWR_DOWN(x)   (((x) << 0) & PWR_DOWN_MASK)
+
+#define PHY_CFG_2  0x2c
+#define  SEL_FREQ_MASK GENMASK(12, 10)
+#define  SEL_FREQ(x)   (((x) << 10) & SEL_FREQ_MASK)
+
+#define PHY_STAT   0x40
+#define  CAL_DONE_MASK BIT(6)
+#define  IS_CALDONE(x) ((x) & CAL_DONE_MASK)
+#define  DLL_RDY_MASK  BIT(5)
+#define  IS_DLLRDY(x)  ((x) & DLL_RDY_MASK)
+
+/* From ACS_eMMC51_16nFFC_RO1100_Userguide_v1p0.pdf p17 */
+#define FREQSEL_200M_170M  0x0
+#define FREQSEL_170M_140M  0x1
+#define FREQSEL_140M_110M  0x2
+#define FREQSEL_110M_80M   0x3
+#define FREQSEL_80M_50M0x4
+
+struct keembay_emmc_phy {
+   struct regmap *syscfg;
+   struct clk *emmcclk;
+};
+
+static const struct regmap_config keembay_regmap_config = {
+   .reg_bits = 32,
+   .val_bits = 32,
+   .reg_stride = 4,
+};
+
+static int keembay_emmc_phy_power(struct phy *phy, bool on_off)
+{
+   struct keembay_emmc_phy *priv = phy_get_drvdata(phy);
+   unsigned int caldone;
+   unsigned int dllrdy;
+   unsigned int freqsel;
+   unsigned int mhz;
+   int ret;
+
+   /*
+* Keep phyctrl_pdb and phyctrl_endll low to allow
+* initialization of CALIO state M/C DFFs
+*/
+   ret = regmap_update_bits(priv->syscfg, PHY_CFG_0, PWR_DOWN_MASK,
+PWR_DOWN(0));
+   if (ret) {
+   dev_err(>dev, "CALIO power down bar failed: %d\n", ret);
+   return ret;
+   }
+
+   ret = regmap_update_bits(priv->syscfg, PHY_CFG_0, DLL_EN_MASK,
+DLL_EN(0));
+   if (ret) {
+   dev_err(>dev, "turn off the dll failed: %d\n", ret);
+   return ret;
+   }
+
+   /* Already finish power off above */
+   if (!on_off)
+   return 0;
+
+   mhz = DIV_ROUND_CLOSEST(clk_get_rate(priv->emmcclk), 1000000);
+   if (mhz <= 200 && mhz >= 170)
+   freqsel = FREQSEL_200M_170M;
+   else if (mhz <= 170 && mhz >= 140)
+   freqsel = FREQSEL_170M_140M;
+   else if (mhz <= 140 && mhz >= 110)
+   freqsel = FREQSEL_140M_110M;
+   else if (mhz <= 110 && mhz >= 80)
+   freqsel = FREQSEL_110M_80M;
+   else if (mhz <= 80 && mhz >= 50)
+   freqsel = FREQSEL_80M_50M;
+   else
+   freqsel = 0x0;
+
+   if (mhz < 50 || mhz > 200)
+   dev_warn(&phy->dev, "Unsupported rate: %d MHz\n", mhz);
+
+   /*
+* According to the user manual, calpad calibration
+* cycle takes more than 2us without the minimal recommended
+* value, 

[PATCH 2/2] arm64: tegra: Add pwm-fan profile settings

2020-05-25 Thread Sandipan Patra
Add support for profiles in device tree to allow
different fan settings for trip point temp/hyst/pwm.

Signed-off-by: Sandipan Patra 
---
 arch/arm64/boot/dts/nvidia/tegra194-p2972-.dts | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/boot/dts/nvidia/tegra194-p2972-.dts 
b/arch/arm64/boot/dts/nvidia/tegra194-p2972-.dts
index e15d1ea..ff2b980 100644
--- a/arch/arm64/boot/dts/nvidia/tegra194-p2972-.dts
+++ b/arch/arm64/boot/dts/nvidia/tegra194-p2972-.dts
@@ -219,10 +219,19 @@
 
fan: fan {
compatible = "pwm-fan";
-   pwms = < 0 45334>;
-
-   cooling-levels = <0 64 128 255>;
#cooling-cells = <2>;
+   pwms = < 0 45334>;
+   profiles {
+   default = "quiet";
+   quiet {
+   state_cap = <4>;
+   cooling-levels = <0 77 120 160 255 255 255 255 255 255>;
+   };
+   cool {
+   state_cap = <4>;
+   cooling-levels = <0 77 120 160 255 255 255 255 255 255>;
+   };
+   };
};
 
gpio-keys {
-- 
2.7.4



[PATCH v2 1/2] dt-bindings: phy: intel: Add Keem Bay eMMC PHY bindings

2020-05-25 Thread Wan Ahmad Zainie
Binding description for Intel Keem Bay eMMC PHY.

Signed-off-by: Wan Ahmad Zainie 
---
 .../bindings/phy/intel,keembay-emmc-phy.yaml  | 45 +++
 1 file changed, 45 insertions(+)
 create mode 100644 
Documentation/devicetree/bindings/phy/intel,keembay-emmc-phy.yaml

diff --git a/Documentation/devicetree/bindings/phy/intel,keembay-emmc-phy.yaml 
b/Documentation/devicetree/bindings/phy/intel,keembay-emmc-phy.yaml
new file mode 100644
index ..d3e0f169eb0a
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/intel,keembay-emmc-phy.yaml
@@ -0,0 +1,45 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright 2020 Intel Corporation
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/phy/intel,keembay-emmc-phy.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay eMMC PHY bindings
+
+maintainers:
+  - Wan Ahmad Zainie 
+
+properties:
+  compatible:
+const: intel,keembay-emmc-phy
+
+  reg:
+maxItems: 1
+
+  clocks:
+maxItems: 1
+
+  clock-names:
+items:
+  - const: emmcclk
+
+  "#phy-cells":
+const: 0
+
+required:
+  - compatible
+  - reg
+  - "#phy-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+phy@2029 {
+  compatible = "intel,keembay-emmc-phy";
+  reg = <0x0 0x2029 0x0 0x54>;
+  clocks = <>;
+  clock-names = "emmcclk";
+  #phy-cells = <0>;
+};
-- 
2.17.1



[PATCH v2 0/2] phy: intel: Add Keem Bay eMMC PHY support

2020-05-25 Thread Wan Ahmad Zainie
Hi.

The first part is to document DT bindings for Keem Bay eMMC PHY.

The second is the driver file, loosely based on phy-rockchip-emmc.c
and phy-intel-emmc.c. The latter is not being reused as there are
quite a number of differences, i.e. register offsets, supported clock
rates, and bitfields to set.

The patch was tested with Keem Bay evaluation module board.

Thank you.

Best regards,
Zainie

Changes since v1:
- Rework phy-keembay-emmc.c to make it similar to phy-intel-emmc.c.
- Use regmap_mmio, and remove reference to intel,syscon.
- Use node name phy@
- Update license i.e. use dual license.


Wan Ahmad Zainie (2):
  dt-bindings: phy: intel: Add Keem Bay eMMC PHY bindings
  phy: intel: Add Keem Bay eMMC PHY support

 .../bindings/phy/intel,keembay-emmc-phy.yaml  |  45 +++
 drivers/phy/intel/Kconfig |   8 +
 drivers/phy/intel/Makefile|   1 +
 drivers/phy/intel/phy-keembay-emmc.c  | 321 ++
 4 files changed, 375 insertions(+)
 create mode 100644 
Documentation/devicetree/bindings/phy/intel,keembay-emmc-phy.yaml
 create mode 100644 drivers/phy/intel/phy-keembay-emmc.c

-- 
2.17.1



[PATCH 1/2] hwmon: pwm-fan: Add profile support and add remove module support

2020-05-25 Thread Sandipan Patra
This change has 2 parts:
1. Add support for profiles mode settings.
This allows different fan settings for trip point temp/hyst/pwm.
T194 has multiple fan-profiles support.

2. Add pwm-fan remove support. This is essential since the config is
tristate capable.

Signed-off-by: Sandipan Patra 
---
 drivers/hwmon/pwm-fan.c | 112 ++--
 1 file changed, 100 insertions(+), 12 deletions(-)

diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
index 30b7b3e..26db589 100644
--- a/drivers/hwmon/pwm-fan.c
+++ b/drivers/hwmon/pwm-fan.c
@@ -3,8 +3,10 @@
  * pwm-fan.c - Hwmon driver for fans connected to PWM lines.
  *
  * Copyright (c) 2014 Samsung Electronics Co., Ltd.
+ * Copyright (c) 2020, NVIDIA Corporation.
  *
  * Author: Kamil Debski 
+ * Author: Sandipan Patra 
  */
 
 #include 
@@ -21,6 +23,8 @@
 #include 
 
 #define MAX_PWM 255
+/* Based on OF max device tree node name length */
+#define MAX_PROFILE_NAME_LENGTH 31
 
 struct pwm_fan_ctx {
struct mutex lock;
@@ -38,6 +42,12 @@ struct pwm_fan_ctx {
unsigned int pwm_fan_state;
unsigned int pwm_fan_max_state;
unsigned int *pwm_fan_cooling_levels;
+
+   unsigned int pwm_fan_profiles;
+   const char **fan_profile_names;
+   unsigned int **fan_profile_cooling_levels;
+   unsigned int fan_current_profile;
+
struct thermal_cooling_device *cdev;
 };
 
@@ -227,28 +237,86 @@ static int pwm_fan_of_get_cooling_data(struct device *dev,
   struct pwm_fan_ctx *ctx)
 {
struct device_node *np = dev->of_node;
+   struct device_node *base_profile = NULL;
+   struct device_node *profile_np = NULL;
+   const char *default_profile = NULL;
int num, i, ret;
 
-   if (!of_find_property(np, "cooling-levels", NULL))
-   return 0;
+   num = of_property_count_u32_elems(np, "cooling-levels");
+   if (num <= 0) {
+   base_profile = of_get_child_by_name(np, "profiles");
+   if (!base_profile) {
+   dev_err(dev, "Wrong Data\n");
+   return -EINVAL;
+   }
+   }
+
+   if (base_profile) {
+   ctx->pwm_fan_profiles =
+   of_get_available_child_count(base_profile);
+
+   if (ctx->pwm_fan_profiles <= 0) {
+   dev_err(dev, "Profiles used but not defined\n");
+   return -EINVAL;
+   }
 
-   ret = of_property_count_u32_elems(np, "cooling-levels");
-   if (ret <= 0) {
-   dev_err(dev, "Wrong data!\n");
-   return ret ? : -EINVAL;
+   ctx->fan_profile_names = devm_kzalloc(dev,
+   sizeof(const char *) * ctx->pwm_fan_profiles,
+   GFP_KERNEL);
+   ctx->fan_profile_cooling_levels = devm_kzalloc(dev,
+   sizeof(int *) * ctx->pwm_fan_profiles,
+   GFP_KERNEL);
+
+   if (!ctx->fan_profile_names
+   || !ctx->fan_profile_cooling_levels)
+   return -ENOMEM;
+
+   ctx->fan_current_profile = 0;
+   i = 0;
+   for_each_available_child_of_node(base_profile, profile_np) {
+   num = of_property_count_u32_elems(profile_np,
+   "cooling-levels");
+   if (num <= 0) {
+   dev_err(dev, "No data in cooling-levels inside profile node!\n");
+   return -EINVAL;
+   }
+
+   of_property_read_string(profile_np, "name",
+   &ctx->fan_profile_names[i]);
+   if (default_profile &&
+   !strncmp(default_profile,
+   ctx->fan_profile_names[i],
+   MAX_PROFILE_NAME_LENGTH))
+   ctx->fan_current_profile = i;
+
+   ctx->fan_profile_cooling_levels[i] =
+   devm_kzalloc(dev, sizeof(int) * num,
+   GFP_KERNEL);
+   if (!ctx->fan_profile_cooling_levels[i])
+   return -ENOMEM;
+
+   of_property_read_u32_array(profile_np, "cooling-levels",
+   ctx->fan_profile_cooling_levels[i], num);
+   i++;
+   }
}
 
-   num = ret;
ctx->pwm_fan_cooling_levels = devm_kcalloc(dev, num, sizeof(u32),
   GFP_KERNEL);
if (!ctx->pwm_fan_cooling_levels)
return -ENOMEM;
 
-   ret = of_property_read_u32_array(np, 

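The loop in the hunk above records each profile node's name and compares it against the requested default, remembering the matching index. Pulled out of the OF helpers, the selection logic reduces to a bounded string compare (a standalone sketch; the helper name is hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Same bound the patch uses for node-name comparisons. */
#define MAX_PROFILE_NAME_LENGTH 31

/* Hypothetical helper: pick the index of the profile whose name
 * matches the requested default, falling back to index 0 just as the
 * driver initializes fan_current_profile to 0. */
static unsigned int pick_profile(const char *def,
				 const char *const names[], unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (def && !strncmp(def, names[i], MAX_PROFILE_NAME_LENGTH))
			return i;
	return 0;
}
```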
Re: [PATCH] kexec: Do not verify the signature without the lockdown or mandatory signature

2020-05-25 Thread Dave Young
On 05/25/20 at 01:23pm, Lianbo Jiang wrote:
> Signature verification is an important security feature, to protect
> system from being attacked with a kernel of unknown origin. Kexec
> rebooting is a way to replace the running kernel, hence need be
> secured carefully.
> 
> In the current code of handling signature verification of kexec kernel,
> the logic is very twisted. It mixes signature verification, IMA signature
> appraising and kexec lockdown.
> 
> If there is no KEXEC_SIG_FORCE, kexec kernel image doesn't have one of
> signature, the supported crypto, and key, we don't think this is wrong,
> Unless kexec lockdown is executed. IMA is considered as another kind of
> signature appraising method.
> 
> If kexec kernel image has signature/crypto/key, it has to go through the
> signature verification and pass. Otherwise it's seen as verification
> failure, and won't be loaded.
> 
> Seems kexec kernel image with an unqualified signature is even worse than
> those w/o signature at all, this sounds very unreasonable. E.g. If people
> get a unsigned kernel to load, or a kernel signed with expired key, which
> one is more dangerous?
> 
> So, here, let's simplify the logic to improve code readability. If the
> KEXEC_SIG_FORCE enabled or kexec lockdown enabled, signature verification
> is mandated. Otherwise, we lift the bar for any kernel image.
> 
> Signed-off-by: Lianbo Jiang 
> ---
>  kernel/kexec_file.c | 37 ++---
>  1 file changed, 6 insertions(+), 31 deletions(-)
> 
> diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
> index faa74d5f6941..e4bdf0c42f35 100644
> --- a/kernel/kexec_file.c
> +++ b/kernel/kexec_file.c
> @@ -181,52 +181,27 @@ void kimage_file_post_load_cleanup(struct kimage *image)
>  static int
>  kimage_validate_signature(struct kimage *image)
>  {
> - const char *reason;
>   int ret;
>  
>   ret = arch_kexec_kernel_verify_sig(image, image->kernel_buf,
>  image->kernel_buf_len);
> - switch (ret) {
> - case 0:
> - break;
> + if (ret) {
> + pr_debug("kernel signature verification failed (%d).\n", ret);
>  
> - /* Certain verification errors are non-fatal if we're not
> -  * checking errors, provided we aren't mandating that there
> -  * must be a valid signature.
> -  */
> - case -ENODATA:
> - reason = "kexec of unsigned image";
> - goto decide;
> - case -ENOPKG:
> - reason = "kexec of image with unsupported crypto";
> - goto decide;
> - case -ENOKEY:
> - reason = "kexec of image with unavailable key";
> - decide:
> - if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE)) {
> - pr_notice("%s rejected\n", reason);
> + if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE))
>   return ret;
> - }
>  
> - /* If IMA is guaranteed to appraise a signature on the kexec
> + /*
> +  * If IMA is guaranteed to appraise a signature on the kexec
>* image, permit it even if the kernel is otherwise locked
>* down.
>*/
>   if (!ima_appraise_signature(READING_KEXEC_IMAGE) &&
>   security_locked_down(LOCKDOWN_KEXEC))
>   return -EPERM;
> -
> - return 0;
> -
> - /* All other errors are fatal, including nomem, unparseable
> -  * signatures and signature check failures - even if signatures
> -  * aren't required.
> -  */
> - default:
> - pr_notice("kernel signature verification failed (%d).\n", ret);
>   }
>  
> - return ret;
> + return 0;
>  }
>  #endif
>  
> -- 
> 2.17.1
> 


Acked-by: Dave Young 

Thanks
Dave
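After the patch, the decision tree in kimage_validate_signature() is small: a verification failure is fatal only when KEXEC_SIG_FORCE is enabled, or when the kernel is locked down and IMA will not appraise the image. A standalone sketch of that flow, with the kernel helpers reduced to booleans (an illustration, not the kernel code itself):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Sketch of the patched logic.  The parameters stand in for
 * arch_kexec_kernel_verify_sig()'s return value, CONFIG_KEXEC_SIG_FORCE,
 * ima_appraise_signature(), and security_locked_down() respectively.
 */
static int validate(int verify_ret, bool sig_force,
		    bool ima_appraises, bool locked_down)
{
	if (verify_ret) {
		if (sig_force)
			return verify_ret;	/* signature is mandatory */
		if (!ima_appraises && locked_down)
			return -EPERM;		/* locked down, no IMA fallback */
	}
	return 0;				/* image accepted */
}
```

With this shape, an unqualified signature is treated the same as no signature at all — exactly the simplification the commit message argues for.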



drivers/mfd/sprd-sc27xx-spi.c:59:23: warning: no previous prototype for 'sprd_pmic_detect_charger_type'

2020-05-25 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   9cb1fd0efd195590b828b9b865421ad345a4a145
commit: 2a7e7274f3d43d2a072cab25c0035dc994903bb9 mfd: sc27xx: Add USB charger 
type detection support
date:   8 weeks ago
config: h8300-randconfig-r006-20200526 (attached as .config)
compiler: h8300-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout 2a7e7274f3d43d2a072cab25c0035dc994903bb9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=h8300

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All warnings (new ones prefixed by >>, old ones prefixed by <<):

>> drivers/mfd/sprd-sc27xx-spi.c:59:23: warning: no previous prototype for 
>> 'sprd_pmic_detect_charger_type' [-Wmissing-prototypes]
59 | enum usb_charger_type sprd_pmic_detect_charger_type(struct device *dev)
|   ^

vim +/sprd_pmic_detect_charger_type +59 drivers/mfd/sprd-sc27xx-spi.c

58  
  > 59  enum usb_charger_type sprd_pmic_detect_charger_type(struct device *dev)
60  {
61  struct spi_device *spi = to_spi_device(dev);
62  struct sprd_pmic *ddata = spi_get_drvdata(spi);
63  const struct sprd_pmic_data *pdata = ddata->pdata;
64  enum usb_charger_type type;
65  u32 val;
66  int ret;
67  
68  ret = regmap_read_poll_timeout(ddata->regmap, 
pdata->charger_det, val,
69 (val & SPRD_PMIC_CHG_DET_DONE),
70 SPRD_PMIC_CHG_DET_DELAY_US,
71 SPRD_PMIC_CHG_DET_TIMEOUT);
72  if (ret) {
73  dev_err(&spi->dev, "failed to detect charger type\n");
74  return UNKNOWN_TYPE;
75  }
76  
77  switch (val & SPRD_PMIC_CHG_TYPE_MASK) {
78  case SPRD_PMIC_CDP_TYPE:
79  type = CDP_TYPE;
80  break;
81  case SPRD_PMIC_DCP_TYPE:
82  type = DCP_TYPE;
83  break;
84  case SPRD_PMIC_SDP_TYPE:
85  type = SDP_TYPE;
86  break;
87  default:
88  type = UNKNOWN_TYPE;
89  break;
90  }
91  
92  return type;
93  }
94  EXPORT_SYMBOL_GPL(sprd_pmic_detect_charger_type);
95  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
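The -Wmissing-prototypes warning fires because the exported function is defined without any prior declaration in scope; the conventional fix is a prototype in a header that the defining file includes. A minimal standalone illustration (file split shown in comments; the single void-pointer argument is a simplification of the real signature):

```c
#include <assert.h>
#include <stddef.h>

/*
 * sprd-pmic.h (hypothetical header): the prototype that both the
 * definition and its callers see, which is what -Wmissing-prototypes
 * asks for.
 */
enum usb_charger_type { UNKNOWN_TYPE, SDP_TYPE, DCP_TYPE, CDP_TYPE };
enum usb_charger_type sprd_pmic_detect_charger_type(void *dev);

/*
 * sprd-sc27xx-spi.c: the definition now matches a visible prototype,
 * so the warning is gone.  Body stubbed out for the sketch.
 */
enum usb_charger_type sprd_pmic_detect_charger_type(void *dev)
{
	(void)dev;
	return UNKNOWN_TYPE;
}
```

Marking the function static would silence the warning too, but not here: the symbol is exported with EXPORT_SYMBOL_GPL, so a shared prototype is the right fix.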




Re: [PATCH v2 1/4] gpio: gpiolib: Allow GPIO IRQs to lazy disable

2020-05-25 Thread Maulik Shah

Hi,

On 5/25/2020 5:52 PM, Hans Verkuil wrote:

On 25/05/2020 13:55, Linus Walleij wrote:

On Sat, May 23, 2020 at 7:11 PM Maulik Shah  wrote:


With 'commit 461c1a7d4733 ("gpiolib: override irq_enable/disable")' gpiolib
overrides irqchip's irq_enable and irq_disable callbacks. If irq_disable
callback is implemented then genirq takes unlazy path to disable irq.

An underlying irqchip may not want to implement the irq_disable callback,
so that the irq is lazily disabled when client drivers invoke
disable_irq(). By overriding the irq_disable callback, gpiolib ends up
always unlazily disabling the IRQ.

Allow gpiolib to lazy disable IRQs by overriding irq_disable callback only
if irqchip implemented irq_disable. In cases where irq_disable is not
implemented irq_mask is overridden. Similarly override irq_enable callback
only if irqchip implemented irq_enable otherwise irq_unmask is overridden.

Fixes: 461c1a7d47 (gpiolib: override irq_enable/disable)
Signed-off-by: Maulik Shah 

I definitely want Hans Verkuil's test and review on this, since it
is a usecase that he is really dependent on.

Maulik, since I am no longer subscribed to linux-gpio, can you mail the
series to me?

I have two use-cases, but I can only test one (I don't have access to the
SBC I need to test the other use-case for the next few months).

Once I have the whole series I'll try to test the first use-case and at
least look into the code if this series could affect the second use-case.

Regards,

Hans


Hi Hans,

Mailed you the entire series.

Thanks,
Maulik



Preferably the irqchip people too.

But it does seem to mop up my mistakes and fix this up properly!

So with some testing I'll be happy to merge it, even this one
patch separately if Hans can verify that it works.

Yours,
Linus Walleij


--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum, hosted by The Linux Foundation
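The selection rule the patch describes — wrap irq_enable when the irqchip provides it, otherwise wrap irq_unmask (and symmetrically for irq_disable/irq_mask) — can be sketched with plain function pointers. This is a standalone illustration of the rule, not the actual gpiolib code:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for struct irq_chip: only the relevant callbacks. */
struct chip {
	void (*irq_enable)(void);
	void (*irq_disable)(void);
	void (*irq_mask)(void);
	void (*irq_unmask)(void);
};

/* Which callback should the override wrap, per the patch's rule? */
enum target { WRAP_ENABLE, WRAP_UNMASK };

static enum target pick_enable_override(const struct chip *c)
{
	/* Only override irq_enable if the chip implements it;
	 * otherwise fall back to overriding irq_unmask, so genirq
	 * can still take the lazy-disable path. */
	return c->irq_enable ? WRAP_ENABLE : WRAP_UNMASK;
}

static void noop(void) {}
static const struct chip with_enable = { .irq_enable = noop, .irq_unmask = noop };
static const struct chip mask_only   = { .irq_mask = noop, .irq_unmask = noop };
```

The irq_disable/irq_mask pair follows the same pattern: no irq_disable callback means genirq keeps the IRQ lazily disabled.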



Re: linux-next: manual merge of the akpm-current tree with the tip tree

2020-05-25 Thread Singh, Balbir
On Mon, 2020-05-25 at 21:04 +1000, Stephen Rothwell wrote:
> Hi all,
> 
> Today's linux-next merge of the akpm-current tree got a conflict in:
> 
>   arch/x86/mm/tlb.c
> 
> between commit:
> 
>   83ce56f712af ("x86/mm: Refactor cond_ibpb() to support other use cases")
> 
> from the tip tree and commit:
> 
>   36c8e34d03a1 ("x86/mm: remove vmalloc faulting")
> 
> from the akpm-current tree.
> 
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging.  You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
> 

The changes look reasonable to me (in terms of the merge resolution).

Acked-by: Balbir Singh 



Re: [RFC PATCH V2 4/7] x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask

2020-05-25 Thread Andy Lutomirski
On Mon, May 25, 2020 at 9:31 PM Lai Jiangshan
 wrote:
>
> On Tue, May 26, 2020 at 12:21 PM Andy Lutomirski  wrote:
> >
> > On Mon, May 25, 2020 at 6:42 PM Lai Jiangshan  
> > wrote:
> > >
> > > The percpu user_pcid_flush_mask is used for CPU entry.
> > > If a data breakpoint is set on it, it will cause an unwanted #DB.
> > > Protect the full cpu_tlbstate structure to be sure.
> > >
> > > There are some other percpu data used in CPU entry, but they are
> > > either in already-protected cpu_tss_rw or are safe to trigger #DB
> > > (espfix_waddr, espfix_stack).
> >
> > How hard would it be to rework this to have DECLARE_PERCPU_NODEBUG()
> > and DEFINE_PERCPU_NODEBUG() or similar?
>
>
> I don't know, but it is an excellent idea. Although the patchset
> protects only 2 or 3 portions of percpu data, there is a lot of
> percpu data used in tracing or kprobe code that needs to be
> protected too.
>
> Adds CC:
> Steven Rostedt 
> Masami Hiramatsu 

PeterZ is moving things in the direction of more aggressively
disabling hardware breakpoints in the nasty paths where we won't
survive a hardware breakpoint.  Does the tracing code have portions
that won't survive a limited amount of recursion?

I'm hoping that we can keep the number of no-breakpoint-here percpu
variables low.  Maybe we could recruit objtool to help make sure we
got all of them, but that would be a much larger project.

Would we currently survive a breakpoint on the thread stack?  I don't
see any extremely obvious reason that we wouldn't.  Blocking such a
breakpoint would be annoying.


Re: linux-next: build failure after merge of the block tree

2020-05-25 Thread Stephen Rothwell
Hi all,

On Mon, 25 May 2020 13:03:44 -0600 Jens Axboe  wrote:
>
> On 5/24/20 11:08 PM, Stephen Rothwell wrote:
> > 
> > After merging the block tree, today's linux-next build (arm
> > multi_v7_defconfig) failed like this:
> > 
> > mm/filemap.c: In function 'generic_file_buffered_read':
> > mm/filemap.c:2075:9: error: 'written' undeclared (first use in this 
> > function); did you mean 'writeb'?
> >  2075 | if (written) {
> >   | ^~~
> >   | writeb
> > 
> > Caused by commit
> > 
> >   23d513106fd8 ("mm: support async buffered reads in 
> > generic_file_buffered_read()")
> > 
> > from the block tree interacting with commit
> > 
> >   6e66f10f2cac ("fs: export generic_file_buffered_read()")
> > 
> > from the btrfs tree.
> > 
> > [Aside: that btrfs tree commit talks about "correct the comments and 
> > variable
> > names", but changes "written" to "copied" in the function definition
> > but to "already_read" in the header file declaration ...]
> > 
> > I have applied the following merge fix patch:
> 
> Looks like a frivolous change... Thanks for fixing this up Stephen.

The variable name change has been removed from the btrfs tree.

-- 
Cheers,
Stephen Rothwell




[PATCH 0/2] x86/entry: simplify RESTORE_CR3

2020-05-25 Thread Lai Jiangshan
While searching for percpu data touched by entry code for #DB
protection[1], it seemed to me that RESTORE_CR3() does too much work;
this patchset simplifies it.

Patch 1 enhances 21e944591102("x86/mm: Optimize RESTORE_CR3") for
kernel CR3.

Patch 2 *reverts* 21e944591102("x86/mm: Optimize RESTORE_CR3") for
User CR3.

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Link: 
https://lore.kernel.org/lkml/20200525145102.122557-1-la...@linux.alibaba.com
Lai Jiangshan (2):
  x86/entry: Don't write to CR3 when restoring to kernel CR3
  x86/entry: always flush user CR3 in RESTORE_CR3

 arch/x86/entry/calling.h  | 36 
 arch/x86/entry/entry_64.S |  6 +++---
 2 files changed, 11 insertions(+), 31 deletions(-)

-- 
2.20.1



[PATCH 2/2] x86/entry: always flush user CR3 in RESTORE_CR3

2020-05-25 Thread Lai Jiangshan
RESTORE_CR3 is called when CPL==0 or on #DF; CPL==0 with the user CR3
is unlikely, and #DF itself is an unlikely case. There is not much
overhead in always flushing the user CR3.

Signed-off-by: Lai Jiangshan 
---
 arch/x86/entry/calling.h  | 27 ++-
 arch/x86/entry/entry_64.S |  6 +++---
 2 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 505246185624..ff26e4eb7063 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -265,33 +265,18 @@ For 32-bit we have the following conventions - kernel is 
built with
 .Ldone_\@:
 .endm
 
-.macro RESTORE_CR3 scratch_reg:req save_reg:req
+.macro RESTORE_CR3 save_reg:req
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
 
/*
 * Skip resuming KERNEL pages since it is already KERNEL CR3.
+*
+* RESTORE_CR3 is called when CPL==0 or on #DF; CPL==0 with the
+* user CR3 is unlikely, and #DF itself is an unlikely case.
+* There is not much overhead in always flushing the user CR3.
 */
bt  $PTI_USER_PGTABLE_BIT, \save_reg
jnc .Lend_\@
-
-   ALTERNATIVE "jmp .Lwrcr3_\@", "", X86_FEATURE_PCID
-
-   /*
-* Check if there's a pending flush for the user ASID we're
-* about to set.
-*/
-   movq\save_reg, \scratch_reg
-   andq$(0x7FF), \scratch_reg
-   bt  \scratch_reg, THIS_CPU_user_pcid_flush_mask
-   jnc .Lnoflush_\@
-
-   btr \scratch_reg, THIS_CPU_user_pcid_flush_mask
-   jmp .Lwrcr3_\@
-
-.Lnoflush_\@:
-   SET_NOFLUSH_BIT \save_reg
-
-.Lwrcr3_\@:
movq\save_reg, %cr3
 .Lend_\@:
 .endm
@@ -306,7 +291,7 @@ For 32-bit we have the following conventions - kernel is 
built with
 .endm
 .macro SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg:req save_reg:req
 .endm
-.macro RESTORE_CR3 scratch_reg:req save_reg:req
+.macro RESTORE_CR3 save_reg:req
 .endm
 
 #endif
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d983a0d4bc73..46efa842a45e 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1283,13 +1283,13 @@ SYM_CODE_START_LOCAL(paranoid_exit)
jnz .Lparanoid_exit_no_swapgs
TRACE_IRQS_IRETQ
/* Always restore stashed CR3 value (see paranoid_entry) */
-   RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
+   RESTORE_CR3 save_reg=%r14
SWAPGS_UNSAFE_STACK
jmp restore_regs_and_return_to_kernel
 .Lparanoid_exit_no_swapgs:
TRACE_IRQS_IRETQ_DEBUG
/* Always restore stashed CR3 value (see paranoid_entry) */
-   RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
+   RESTORE_CR3 save_reg=%r14
jmp restore_regs_and_return_to_kernel
 SYM_CODE_END(paranoid_exit)
 
@@ -1703,7 +1703,7 @@ end_repeat_nmi:
callexc_nmi
 
/* Always restore stashed CR3 value (see paranoid_entry) */
-   RESTORE_CR3 scratch_reg=%r15 save_reg=%r14
+   RESTORE_CR3 save_reg=%r14
 
testl   %ebx, %ebx  /* swapgs needed? */
jnz nmi_restore
-- 
2.20.1

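What the deleted assembly did: extract the user ASID from the saved CR3 (the asm masks with $0x7FF), test-and-clear that ASID's bit in the per-CPU pending-flush mask, and set CR3's NOFLUSH bit (bit 63) when no flush was pending. A C rendering of that removed logic — a sketch only: the mask here is a plain 64-bit word standing in for user_pcid_flush_mask, and ASIDs are assumed to be < 64 (true for the kernel's handful of dynamic ASIDs):

```c
#include <assert.h>
#include <stdint.h>

#define CR3_NOFLUSH   (1ULL << 63)	/* X86_CR3_PCID_NOFLUSH */
#define CR3_ASID_MASK 0x7ffULL		/* low bits hold the PCID/ASID */

/* Stand-in for the per-CPU user_pcid_flush_mask. */
static uint64_t pending_flush_mask;

/* Returns the value the removed code would have written to %cr3. */
static uint64_t restore_user_cr3(uint64_t saved_cr3)
{
	uint64_t asid = saved_cr3 & CR3_ASID_MASK;	/* assumed < 64 here */

	if (pending_flush_mask & (1ULL << asid)) {
		pending_flush_mask &= ~(1ULL << asid);	/* btr: consume flush */
		return saved_cr3;			/* write flushes the ASID */
	}
	return saved_cr3 | CR3_NOFLUSH;			/* skip the TLB flush */
}
```

Patch 2 argues this bookkeeping is not worth carrying on such a rare path and simply writes the saved CR3 unconditionally, i.e. always flushes.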


[PATCH 1/2] x86/entry: Don't write to CR3 when restoring to kernel CR3

2020-05-25 Thread Lai Jiangshan
Skip resuming KERNEL pages since it is already KERNEL CR3

Signed-off-by: Lai Jiangshan 
---
 arch/x86/entry/calling.h | 13 -
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 1c7f13bb6728..505246185624 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -268,14 +268,13 @@ For 32-bit we have the following conventions - kernel is 
built with
 .macro RESTORE_CR3 scratch_reg:req save_reg:req
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
 
-   ALTERNATIVE "jmp .Lwrcr3_\@", "", X86_FEATURE_PCID
-
/*
-* KERNEL pages can always resume with NOFLUSH as we do
-* explicit flushes.
+* Skip resuming KERNEL pages since it is already KERNEL CR3.
 */
bt  $PTI_USER_PGTABLE_BIT, \save_reg
-   jnc .Lnoflush_\@
+   jnc .Lend_\@
+
+   ALTERNATIVE "jmp .Lwrcr3_\@", "", X86_FEATURE_PCID
 
/*
 * Check if there's a pending flush for the user ASID we're
@@ -293,10 +292,6 @@ For 32-bit we have the following conventions - kernel is 
built with
SET_NOFLUSH_BIT \save_reg
 
 .Lwrcr3_\@:
-   /*
-* The CR3 write could be avoided when not changing its value,
-* but would require a CR3 read *and* a scratch register.
-*/
movq\save_reg, %cr3
 .Lend_\@:
 .endm
-- 
2.20.1



drivers/usb/cdns3/drd.c:43:31: sparse: expected void const volatile [noderef] *

2020-05-25 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   9cb1fd0efd195590b828b9b865421ad345a4a145
commit: 70d8b9e5e63d212019ba3f6823c8ec3d2df87645 usb: cdns3: make signed 1 bit 
bitfields unsigned
date:   9 weeks ago
config: sh-randconfig-s032-20200526 (attached as .config)
compiler: sh4-linux-gcc (GCC) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.1-240-gf0fe1cd9-dirty
git checkout 70d8b9e5e63d212019ba3f6823c8ec3d2df87645
# save the attached .config to linux build tree
make W=1 C=1 ARCH=sh CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 


sparse warnings: (new ones prefixed by >>)

   ./arch/sh/include/generated/uapi/asm/unistd_32.h:411:37: sparse: sparse: no 
newline at end of file
   drivers/usb/cdns3/drd.c:43:31: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
>> drivers/usb/cdns3/drd.c:43:31: sparse:expected void const volatile 
>> [noderef]  *
   drivers/usb/cdns3/drd.c:43:31: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:45:25: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:45:25: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:45:25: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:47:31: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:47:31: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:47:31: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:49:25: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:49:25: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:49:25: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:71:14: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:71:14: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:71:14: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:81:19: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:81:19: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:81:19: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:114:9: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:114:9: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:114:9: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:123:9: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:123:9: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:123:9: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:141:17: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:141:17: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:141:17: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:144:23: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:144:23: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:144:23: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:144:23: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:144:23: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:144:23: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:152:17: sparse: sparse: incorrect type in argument 1 
(different address spaces) @@expected void const volatile [noderef]  
* @@got [noderef]  * @@
   drivers/usb/cdns3/drd.c:152:17: sparse:expected void const volatile 
[noderef]  *
   drivers/usb/cdns3/drd.c:152:17: sparse:got restricted __le32 *
   drivers/usb/cdns3/drd.c:156:17: 

Re: [patch V9 00/39] x86/entry: Rework leftovers (was part V)

2020-05-25 Thread Andy Lutomirski
On Thu, May 21, 2020 at 1:31 PM Thomas Gleixner  wrote:
>
> Folks!
>
> This is V9 of the rework series. V7 and V8 were never posted but I used the
> version numbers for tags while fixing up 0day complaints. The last posted
> version was V6 which can be found here:

The whole pile is Acked-by: Andy Lutomirski 

Go test on Linus' new AMD laptop!

--Andy


drivers/spi/spi-meson-spicc.c:363:6: warning: variable 'data' set but not used

2020-05-25 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   9cb1fd0efd195590b828b9b865421ad345a4a145
commit: 0eb707ac7dd7a4329d93d47feada6c9bb5ea8ee9 spi: meson-spicc: adapt burst 
handling for G12A support
date:   2 months ago
config: h8300-randconfig-r006-20200526 (attached as .config)
compiler: h8300-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout 0eb707ac7dd7a4329d93d47feada6c9bb5ea8ee9
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=h8300

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All warnings (new ones prefixed by >>, old ones prefixed by <<):

drivers/spi/spi-meson-spicc.c: In function 'meson_spicc_reset_fifo':
>> drivers/spi/spi-meson-spicc.c:363:6: warning: variable 'data' set but not 
>> used [-Wunused-but-set-variable]
363 |  u32 data;
|  ^~~~

vim +/data +363 drivers/spi/spi-meson-spicc.c

   360  
   361  static void meson_spicc_reset_fifo(struct meson_spicc_device *spicc)
   362  {
 > 363  u32 data;
   364  
   365  if (spicc->data->has_oen)
   366  writel_bits_relaxed(SPICC_ENH_MAIN_CLK_AO,
   367  SPICC_ENH_MAIN_CLK_AO,
   368  spicc->base + SPICC_ENH_CTL0);
   369  
   370  writel_bits_relaxed(SPICC_FIFORST_W1_MASK, 
SPICC_FIFORST_W1_MASK,
   371  spicc->base + SPICC_TESTREG);
   372  
   373  while (meson_spicc_rxready(spicc))
   374  data = readl_relaxed(spicc->base + SPICC_RXDATA);
   375  
   376  if (spicc->data->has_oen)
   377  writel_bits_relaxed(SPICC_ENH_MAIN_CLK_AO, 0,
   378  spicc->base + SPICC_ENH_CTL0);
   379  }
   380  

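Since the read in meson_spicc_reset_fifo() is done purely to drain the RX FIFO, the usual fix for the warning is to drop the variable and cast the read's result to void. A standalone sketch of the idiom, with the hardware accessors stubbed so it runs anywhere:

```c
#include <assert.h>
#include <stdint.h>

/* Stubs standing in for the SPICC RX FIFO: each read pops one word. */
static int fifo_depth = 3;
static int rxready(void) { return fifo_depth > 0; }
static uint32_t readl_relaxed_stub(void)
{
	return fifo_depth > 0 ? (uint32_t)fifo_depth-- : 0;
}

static void drain_rx_fifo(void)
{
	/* Discard the data explicitly: no variable is assigned, so
	 * -Wunused-but-set-variable has nothing to complain about. */
	while (rxready())
		(void)readl_relaxed_stub();
}
```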




Re: [RFC PATCH V2 4/7] x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask

2020-05-25 Thread Lai Jiangshan
On Tue, May 26, 2020 at 12:21 PM Andy Lutomirski  wrote:
>
> On Mon, May 25, 2020 at 6:42 PM Lai Jiangshan  wrote:
> >
> > The percpu user_pcid_flush_mask is used for CPU entry.
> > If a data breakpoint is set on it, it will cause an unwanted #DB.
> > Protect the full cpu_tlbstate structure to be sure.
> >
> > There are some other percpu data used in CPU entry, but they are
> > either in already-protected cpu_tss_rw or are safe to trigger #DB
> > (espfix_waddr, espfix_stack).
>
> How hard would it be to rework this to have DECLARE_PERCPU_NODEBUG()
> and DEFINE_PERCPU_NODEBUG() or similar?


I don't know, but it is an excellent idea. Although the patchset
protects only 2 or 3 portions of percpu data, there is a lot of
percpu data used in tracing or kprobe code that needs to be
protected too.

Adds CC:
Steven Rostedt 
Masami Hiramatsu 


Re: mmotm 2020-05-25-16-56 uploaded (drm/nouveau)

2020-05-25 Thread Randy Dunlap
On 5/25/20 9:23 PM, Dave Airlie wrote:
> On Tue, 26 May 2020 at 13:50, Randy Dunlap  wrote:
>>
>> On 5/25/20 4:57 PM, Andrew Morton wrote:
>>> The mm-of-the-moment snapshot 2020-05-25-16-56 has been uploaded to
>>>
>>>http://www.ozlabs.org/~akpm/mmotm/
>>>
>>> mmotm-readme.txt says
>>>
>>> README for mm-of-the-moment:
>>>
>>> http://www.ozlabs.org/~akpm/mmotm/
>>>
>>> This is a snapshot of my -mm patch queue.  Uploaded at random hopefully
>>> more than once a week.
>>>
>>> You will need quilt to apply these patches to the latest Linus release (5.x
>>> or 5.x-rcY).  The series file is in broken-out.tar.gz and is duplicated in
>>> http://ozlabs.org/~akpm/mmotm/series
>>>
>>> The file broken-out.tar.gz contains two datestamp files: .DATE and
>>> .DATE--mm-dd-hh-mm-ss.  Both contain the string -mm-dd-hh-mm-ss,
>>> followed by the base kernel version against which this patch series is to
>>> be applied.
>>>
>>
>> on x86_64:
>>
>> when CONFIG_DRM_NOUVEAU=y and CONFIG_FB=m:
>>
>> ld: drivers/gpu/drm/nouveau/nouveau_drm.o: in function `nouveau_drm_probe':
>> nouveau_drm.c:(.text+0x1d67): undefined reference to `remove_conflicting_pci_framebuffers'
> 
> I've pushed the fix for this to drm-next.
> 
> Ben just used the wrong API.

That patch is
Acked-by: Randy Dunlap  # build-tested

thanks.
-- 
~Randy


[PATCH 1/1] nvme-fcloop: verify wwnn and wwpn format

2020-05-25 Thread Dongli Zhang
The nvme host and target verify the wwnn and wwpn format via
nvme_fc_parse_traddr(). For instance, it is required that the length of
the wwnn be either 21 ("nn-0x") or 19 (nn-).

Add this verification to nvme-fcloop so that the input is always in hex
and its length is always 18.

Otherwise, the user may use e.g. 0x2 to create an fcloop local port, while
0x0002 is required for the nvme host and target. This makes the
format requirement confusing.

Signed-off-by: Dongli Zhang 
---
 drivers/nvme/target/fcloop.c | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
index f69ce66e2d44..14124e6d4bf2 100644
--- a/drivers/nvme/target/fcloop.c
+++ b/drivers/nvme/target/fcloop.c
@@ -43,6 +43,17 @@ static const match_table_t opt_tokens = {
{ NVMF_OPT_ERR, NULL}
 };
 
+static int fcloop_verify_addr(substring_t *s)
+{
+   size_t blen = s->to - s->from + 1;
+
+   if (strnlen(s->from, blen) != NVME_FC_TRADDR_HEXNAMELEN + 2 ||
+   strncmp(s->from, "0x", 2))
+   return -EINVAL;
+
+   return 0;
+}
+
 static int
 fcloop_parse_options(struct fcloop_ctrl_options *opts,
const char *buf)
@@ -64,14 +75,16 @@ fcloop_parse_options(struct fcloop_ctrl_options *opts,
opts->mask |= token;
switch (token) {
case NVMF_OPT_WWNN:
-   if (match_u64(args, &token64)) {
+   if (fcloop_verify_addr(args) ||
+   match_u64(args, &token64)) {
ret = -EINVAL;
goto out_free_options;
}
opts->wwnn = token64;
break;
case NVMF_OPT_WWPN:
-   if (match_u64(args, &token64)) {
+   if (fcloop_verify_addr(args) ||
+   match_u64(args, &token64)) {
ret = -EINVAL;
goto out_free_options;
}
@@ -92,14 +105,16 @@ fcloop_parse_options(struct fcloop_ctrl_options *opts,
opts->fcaddr = token;
break;
case NVMF_OPT_LPWWNN:
-   if (match_u64(args, &token64)) {
+   if (fcloop_verify_addr(args) ||
+   match_u64(args, &token64)) {
ret = -EINVAL;
goto out_free_options;
}
opts->lpwwnn = token64;
break;
case NVMF_OPT_LPWWPN:
-   if (match_u64(args, &token64)) {
+   if (fcloop_verify_addr(args) ||
+   match_u64(args, &token64)) {
ret = -EINVAL;
goto out_free_options;
}
@@ -141,14 +156,16 @@ fcloop_parse_nm_options(struct device *dev, u64 *nname, u64 *pname,
token = match_token(p, opt_tokens, args);
switch (token) {
case NVMF_OPT_WWNN:
-   if (match_u64(args, &token64)) {
+   if (fcloop_verify_addr(args) ||
+   match_u64(args, &token64)) {
ret = -EINVAL;
goto out_free_options;
}
*nname = token64;
break;
case NVMF_OPT_WWPN:
-   if (match_u64(args, &token64)) {
+   if (fcloop_verify_addr(args) ||
+   match_u64(args, &token64)) {
ret = -EINVAL;
goto out_free_options;
}
-- 
2.17.1



Re: mmotm 2020-05-25-16-56 uploaded (drm/nouveau)

2020-05-25 Thread Dave Airlie
On Tue, 26 May 2020 at 13:50, Randy Dunlap  wrote:
>
> On 5/25/20 4:57 PM, Andrew Morton wrote:
> > The mm-of-the-moment snapshot 2020-05-25-16-56 has been uploaded to
> >
> >http://www.ozlabs.org/~akpm/mmotm/
> >
> > mmotm-readme.txt says
> >
> > README for mm-of-the-moment:
> >
> > http://www.ozlabs.org/~akpm/mmotm/
> >
> > This is a snapshot of my -mm patch queue.  Uploaded at random hopefully
> > more than once a week.
> >
> > You will need quilt to apply these patches to the latest Linus release (5.x
> > or 5.x-rcY).  The series file is in broken-out.tar.gz and is duplicated in
> > http://ozlabs.org/~akpm/mmotm/series
> >
> > The file broken-out.tar.gz contains two datestamp files: .DATE and
> > .DATE--mm-dd-hh-mm-ss.  Both contain the string -mm-dd-hh-mm-ss,
> > followed by the base kernel version against which this patch series is to
> > be applied.
> >
>
> on x86_64:
>
> when CONFIG_DRM_NOUVEAU=y and CONFIG_FB=m:
>
> ld: drivers/gpu/drm/nouveau/nouveau_drm.o: in function `nouveau_drm_probe':
> nouveau_drm.c:(.text+0x1d67): undefined reference to `remove_conflicting_pci_framebuffers'

I've pushed the fix for this to drm-next.

Ben just used the wrong API.

Dave.


Re: [RFC PATCH V2 4/7] x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask

2020-05-25 Thread Andy Lutomirski
On Mon, May 25, 2020 at 6:42 PM Lai Jiangshan  wrote:
>
> The percpu user_pcid_flush_mask is used for CPU entry.
> If a data breakpoint is set on it, it will cause an unwanted #DB.
> Protect the full cpu_tlbstate structure to be sure.
>
> There are some other percpu data used in CPU entry, but they are
> either in already-protected cpu_tss_rw or are safe to trigger #DB
> (espfix_waddr, espfix_stack).

How hard would it be to rework this to have DECLARE_PERCPU_NODEBUG()
and DEFINE_PERCPU_NODEBUG() or similar?


Re: [PATCH] Input: elantech - Remove read/write registers in attr.

2020-05-25 Thread Dmitry Torokhov
Hi Jingle,

On Tue, May 26, 2020 at 10:22:46AM +0800, Jingle.Wu wrote:
> New Elan IC would not be accessed with the specific registers.

What about older Elan parts? We can't simply drop compatibility with
older chips in newer kernels.

Thanks.

-- 
Dmitry


Re: linux-next: build failure after merge of the drm-msm tree

2020-05-25 Thread Stephen Rothwell
Hi all,

On Tue, 19 May 2020 15:09:55 +1000 Stephen Rothwell  
wrote:
>
> Hi all,
> 
> After merging the drm-msm tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
> 
> ERROR: modpost: "__aeabi_ldivmod" [drivers/gpu/drm/msm/msm.ko] undefined!
> ERROR: modpost: "__aeabi_uldivmod" [drivers/gpu/drm/msm/msm.ko] undefined!
> 
> Caused by commit
> 
>   04d9044f6c57 ("drm/msm/dpu: add support for clk and bw scaling for display")
> 
> I applied the following patch for today (this is mechanical, there may
> be a better way):
> 
> From: Stephen Rothwell 
> Date: Tue, 19 May 2020 14:12:39 +1000
> Subject: [PATCH] drm/msm/dpu: fix up u64/u32 division for 32 bit architectures
> 
> Signed-off-by: Stephen Rothwell 
> ---
>  drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c | 23 ++-
>  drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 15 
>  2 files changed, 28 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
> index 9697abcbec3f..85c2a4190840 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
> @@ -10,6 +10,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "dpu_kms.h"
>  #include "dpu_trace.h"
> @@ -53,8 +54,11 @@ static u64 _dpu_core_perf_calc_bw(struct dpu_kms *kms,
>   }
>  
>   bw_factor = kms->catalog->perf.bw_inefficiency_factor;
> - if (bw_factor)
> - crtc_plane_bw = mult_frac(crtc_plane_bw, bw_factor, 100);
> + if (bw_factor) {
> + u64 quot = crtc_plane_bw;
> + u32 rem = do_div(quot, 100);
> + crtc_plane_bw = (quot * bw_factor) + ((rem * bw_factor) / 100);
> + }
>  
>   return crtc_plane_bw;
>  }
> @@ -89,8 +93,11 @@ static u64 _dpu_core_perf_calc_clk(struct dpu_kms *kms,
>   }
>  
>   clk_factor = kms->catalog->perf.clk_inefficiency_factor;
> - if (clk_factor)
> - crtc_clk = mult_frac(crtc_clk, clk_factor, 100);
> + if (clk_factor) {
> + u64 quot = crtc_clk;
> + u32 rem = do_div(quot, 100);
> + crtc_clk = (quot * clk_factor) + ((rem * clk_factor) / 100);
> + }
>  
>   return crtc_clk;
>  }
> @@ -234,8 +241,12 @@ static int _dpu_core_perf_crtc_update_bus(struct dpu_kms 
> *kms,
>   }
>   }
>  
> - avg_bw = kms->num_paths ?
> - perf.bw_ctl / kms->num_paths : 0;
> + if (kms->num_paths) {
> + avg_bw = perf.bw_ctl;
> + do_div(avg_bw, kms->num_paths);
> + } else {
> + avg_bw = 0;
> + }
>  
>   for (i = 0; i < kms->num_paths; i++)
>   icc_set_bw(kms->path[i],
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c 
> b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
> index c2a6e3dacd68..ad95f32eac13 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
> @@ -9,6 +9,7 @@
>  
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  #include 
> @@ -174,7 +175,11 @@ static void _dpu_plane_calc_bw(struct drm_plane *plane,
>   plane_prefill_bw =
>   src_width * hw_latency_lines * fps * fmt->bpp * scale_factor;
>  
> - plane_prefill_bw = mult_frac(plane_prefill_bw, mode->vtotal, (vbp+vpw));
> + {
> + u64 quot = plane_prefill_bw;
> + u32 rem = do_div(quot, vbp + vpw);
> + plane_prefill_bw = quot * mode->vtotal + rem * mode->vtotal / (vbp + vpw);
> + }
>  
>   pstate->plane_fetch_bw = max(plane_bw, plane_prefill_bw);
>  }
> @@ -204,9 +209,11 @@ static void _dpu_plane_calc_clk(struct drm_plane *plane)
>   pstate->plane_clk =
>   dst_width * mode->vtotal * fps;
>  
> - if (src_height > dst_height)
> - pstate->plane_clk = mult_frac(pstate->plane_clk,
> - src_height, dst_height);
> + if (src_height > dst_height) {
> + u64 quot = pstate->plane_clk;
> + u32 rem = do_div(quot, dst_height);
> + pstate->plane_clk = quot * src_height + rem * src_height / dst_height;
> + }
>  }
>  
>  /**
> -- 
> 2.26.2

I am still applying the above ...

-- 
Cheers,
Stephen Rothwell




WARNING: suspicious RCU usage in idtentry_exit

2020-05-25 Thread syzbot
Hello,

syzbot found the following crash on:

HEAD commit:7b4cb0a4 Add linux-next specific files for 20200525
git tree:   linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=1335601610
kernel config:  https://syzkaller.appspot.com/x/.config?x=47b0740d89299c10
dashboard link: https://syzkaller.appspot.com/bug?extid=3ae5eaae0809ee311e75
compiler:   gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+3ae5eaae0809ee311...@syzkaller.appspotmail.com

=
WARNING: suspicious RCU usage
5.7.0-rc7-next-20200525-syzkaller #0 Not tainted
-
kernel/rcu/tree.c:715 RCU dynticks_nesting counter underflow/zero!!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 2, debug_locks = 1
RCU used illegally from extended quiescent state!
no locks held by syz-executor.5/24641.

stack backtrace:
CPU: 1 PID: 24641 Comm: syz-executor.5 Not tainted 
5.7.0-rc7-next-20200525-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 
01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x18f/0x20d lib/dump_stack.c:118
 rcu_irq_exit_preempt+0x1fa/0x250 kernel/rcu/tree.c:715
 idtentry_exit+0x9e/0xc0 arch/x86/entry/common.c:583
 exc_general_protection+0x23d/0x520 arch/x86/kernel/traps.c:506
 asm_exc_general_protection+0x1e/0x30 arch/x86/include/asm/idtentry.h:353
RIP: 0010:kvm_fastop_exception+0xb68/0xfe8
Code: f2 ff ff ff 48 31 db e9 fb c9 2a f9 b8 f2 ff ff ff 48 31 f6 e9 ff c9 2a 
f9 31 c0 e9 ec 2c 2b f9 b8 fb ff ff ff e9 13 a9 31 f9  fb ff ff ff 31 c0 31 
d2 e9 33 a9 31 f9 31 db e9 2a 0b 42 f9 31
RSP: 0018:c90004a87a30 EFLAGS: 00010212
RAX: 0004 RBX: 88809cca4080 RCX: 0122
RDX: 63ff RSI: c90004a87a98 RDI: 0122
RBP: 0122 R08: 888058486480 R09: fbfff131f481
R10: 898fa403 R11: fbfff131f480 R12: 0122
R13: 0078 R14: 0006 R15: 88244b5c
 paravirt_read_msr_safe arch/x86/include/asm/paravirt.h:178 [inline]
 vmx_create_vcpu+0x184/0x2b40 arch/x86/kvm/vmx/vmx.c:6827
 kvm_arch_vcpu_create+0x6a8/0xb30 arch/x86/kvm/x86.c:9427
 kvm_vm_ioctl_create_vcpu arch/x86/kvm/../../../virt/kvm/kvm_main.c:3043 
[inline]
 kvm_vm_ioctl+0x15b7/0x2460 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3603
 vfs_ioctl fs/ioctl.c:48 [inline]
 ksys_ioctl+0x11a/0x180 fs/ioctl.c:753
 __do_sys_ioctl fs/ioctl.c:762 [inline]
 __se_sys_ioctl fs/ioctl.c:760 [inline]
 __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:760
 do_syscall_64+0x60/0xe0 arch/x86/entry/common.c:353
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45ca29
Code: 0d b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 
89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 
db b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:7f2c93b11c78 EFLAGS: 0246 ORIG_RAX: 0010
RAX: ffda RBX: 004e73c0 RCX: 0045ca29
RDX:  RSI: ae41 RDI: 0004
RBP: 0078bf00 R08:  R09: 
R10:  R11: 0246 R12: 
R13: 0396 R14: 004c62c6 R15: 7f2c93b126d4

=
WARNING: suspicious RCU usage
5.7.0-rc7-next-20200525-syzkaller #0 Not tainted
-
kernel/rcu/tree.c:717 RCU in extended quiescent state!!

other info that might help us debug this:


RCU used illegally from idle CPU!
rcu_scheduler_active = 2, debug_locks = 1
RCU used illegally from extended quiescent state!
no locks held by syz-executor.5/24641.

stack backtrace:
CPU: 1 PID: 24641 Comm: syz-executor.5 Not tainted 
5.7.0-rc7-next-20200525-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 
01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x18f/0x20d lib/dump_stack.c:118
 idtentry_exit+0x9e/0xc0 arch/x86/entry/common.c:583
 exc_general_protection+0x23d/0x520 arch/x86/kernel/traps.c:506
 asm_exc_general_protection+0x1e/0x30 arch/x86/include/asm/idtentry.h:353
RIP: 0010:kvm_fastop_exception+0xb68/0xfe8
Code: f2 ff ff ff 48 31 db e9 fb c9 2a f9 b8 f2 ff ff ff 48 31 f6 e9 ff c9 2a 
f9 31 c0 e9 ec 2c 2b f9 b8 fb ff ff ff e9 13 a9 31 f9  fb ff ff ff 31 c0 31 
d2 e9 33 a9 31 f9 31 db e9 2a 0b 42 f9 31
RSP: 0018:c90004a87a30 EFLAGS: 00010212
RAX: 0004 RBX: 88809cca4080 RCX: 0122
RDX: 63ff RSI: c90004a87a98 RDI: 0122
RBP: 0122 R08: 888058486480 R09: fbfff131f481
R10: 898fa403 R11: fbfff131f480 R12: 0122
R13: 0078 R14: 0006 R15: 88244b5c
 paravirt_read_ms

drivers/soc/fsl/dpio/qbman-portal.c:661:11: warning: variable 'addr_cena' set but not used

2020-05-25 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   9cb1fd0efd195590b828b9b865421ad345a4a145
commit: 3b2abda7d28c69f564c1157b9b9c21ef40092ee9 soc: fsl: dpio: Replace QMAN 
array mode with ring mode enqueue
date:   3 months ago
config: i386-randconfig-r004-20200526 (attached as .config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce (this is a W=1 build):
git checkout 3b2abda7d28c69f564c1157b9b9c21ef40092ee9
# save the attached .config to linux build tree
make W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All warnings (new ones prefixed by >>, old ones prefixed by <<):

drivers/soc/fsl/dpio/qbman-portal.c: In function 'qbman_swp_enqueue_multiple_direct':
>> drivers/soc/fsl/dpio/qbman-portal.c:661:11: warning: variable 'addr_cena' set but not used [-Wunused-but-set-variable]
uint64_t addr_cena;
^
drivers/soc/fsl/dpio/qbman-portal.c: In function 'qbman_swp_enqueue_multiple_desc_direct':
drivers/soc/fsl/dpio/qbman-portal.c:869:14: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
addr_cena = (uint64_t)s->addr_cena;
^
drivers/soc/fsl/dpio/qbman-portal.c:825:11: warning: variable 'addr_cena' set but not used [-Wunused-but-set-variable]
uint64_t addr_cena;
^

vim +/addr_cena +661 drivers/soc/fsl/dpio/qbman-portal.c

   638  
   639  /**
   640   * qbman_swp_enqueue_multiple_direct() - Issue a multi enqueue command
   641   * using one enqueue descriptor
   642   * @s:  the software portal used for enqueue
   643   * @d:  the enqueue descriptor
   644   * @fd: table pointer of frame descriptor table to be enqueued
   645   * @flags: table pointer of QBMAN_ENQUEUE_FLAG_DCA flags, not used if 
NULL
   646   * @num_frames: number of fd to be enqueued
   647   *
   648   * Return the number of fd enqueued, or a negative error number.
   649   */
   650  static
   651  int qbman_swp_enqueue_multiple_direct(struct qbman_swp *s,
   652const struct qbman_eq_desc *d,
   653const struct dpaa2_fd *fd,
   654uint32_t *flags,
   655int num_frames)
   656  {
   657  uint32_t *p = NULL;
   658  const uint32_t *cl = (uint32_t *)d;
   659  uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
   660  int i, num_enqueued = 0;
 > 661  uint64_t addr_cena;
   662  
   663  spin_lock(&s->access_spinlock);
   664  half_mask = (s->eqcr.pi_ci_mask>>1);
   665  full_mask = s->eqcr.pi_ci_mask;
   666  
   667  if (!s->eqcr.available) {
   668  eqcr_ci = s->eqcr.ci;
   669  p = s->addr_cena + QBMAN_CENA_SWP_EQCR_CI;
   670  s->eqcr.ci = qbman_read_register(s, QBMAN_CINH_SWP_EQCR_CI);
   671  
   672  s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
   673  eqcr_ci, s->eqcr.ci);
   674  if (!s->eqcr.available) {
   675  spin_unlock(&s->access_spinlock);
   676  return 0;
   677  }
   678  }
   679  
   680  eqcr_pi = s->eqcr.pi;
   681  num_enqueued = (s->eqcr.available < num_frames) ?
   682  s->eqcr.available : num_frames;
   683  s->eqcr.available -= num_enqueued;
   684  /* Fill in the EQCR ring */
   685  for (i = 0; i < num_enqueued; i++) {
   686  p = (s->addr_cena + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
   687  /* Skip copying the verb */
   688  memcpy(&p[1], &cl[1], EQ_DESC_SIZE_WITHOUT_FD - 1);
   689  memcpy(&p[EQ_DESC_SIZE_FD_START/sizeof(uint32_t)],
   690 &fd[i], sizeof(*fd));
   691  eqcr_pi++;
   692  }
   693  
   694  dma_wmb();
   695  
   696  /* Set the verb byte, have to substitute in the valid-bit */
   697  eqcr_pi = s->eqcr.pi;
   698  for (i = 0; i < num_enqueued; i++) {
   699  p = (s->addr_cena + QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
   700  p[0] = cl[0] | s->eqcr.pi_vb;
   701  if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
   702  struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
   703  
   704  d->dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
   705  ((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
   706  }
   707  eqcr_pi++;
   708  if (!(eqcr_pi & half_mask))
   709  s->eqcr.pi_vb ^= QB_VALID_BIT;
   710  }
   711  
   712  /* Flush all the cacheline without load/store in between */
   713  

general protection fault in tomoyo_check_acl

2020-05-25 Thread syzbot
Hello,

syzbot found the following crash on:

HEAD commit:d2f8825a Merge tag 'for_linus' of git://git.kernel.org/pub..
git tree:   upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13c5592210
kernel config:  https://syzkaller.appspot.com/x/.config?x=b3368ce0cc5f5ace
dashboard link: https://syzkaller.appspot.com/bug?extid=cff8c4c75acd8c6fb842
compiler:   gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+cff8c4c75acd8c6fb...@syzkaller.appspotmail.com

general protection fault, probably for non-canonical address 
0xe2666003:  [#1] PREEMPT SMP KASAN
KASAN: probably user-memory-access in range 
[0x0018-0x001f]
CPU: 0 PID: 12489 Comm: systemd-rfkill Not tainted 5.7.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 
01/01/2011
RIP: 0010:tomoyo_check_acl+0xa9/0x3e0 security/tomoyo/domain.c:173
Code: 00 0f 85 2d 03 00 00 49 8b 1c 24 49 39 dc 0f 84 bd 01 00 00 e8 28 65 14 
fe 48 8d 7b 18 48 89 f8 48 89 fa 48 c1 e8 03 83 e2 07 <0f> b6 04 28 38 d0 7f 08 
84 c0 0f 85 a7 02 00 00 44 0f b6 73 18 31
RSP: 0018:c90016987bc8 EFLAGS: 00010246
RAX: 0003 RBX:  RCX: 835ed028
RDX:  RSI: 835ecff8 RDI: 0018
RBP: dc00 R08: 8880a1cce1c0 R09: 
R10:  R11:  R12: 8880a72db990
R13: c90016987c80 R14: 0033 R15: 0002
FS:  7f9e4b6ba8c0() GS:8880ae60() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: 7f9e4b399e30 CR3: 4df8d000 CR4: 001426f0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400
Call Trace:
 tomoyo_path_number_perm+0x314/0x4d0 security/tomoyo/file.c:733
 security_file_ioctl+0x6c/0xb0 security/security.c:1460
 ksys_ioctl+0x50/0x180 fs/ioctl.c:765
 __do_sys_ioctl fs/ioctl.c:780 [inline]
 __se_sys_ioctl fs/ioctl.c:778 [inline]
 __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:778
 do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
 entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x7f9e4adab80a
Code: ff e9 62 fe ff ff 66 2e 0f 1f 84 00 00 00 00 00 53 49 89 f0 48 63 ff be 
01 54 00 00 b8 10 00 00 00 48 83 ec 30 48 89 e2 0f 05 <48> 3d 00 f0 ff ff 77 6e 
85 c0 89 c3 75 5c 8b 04 24 8b 54 24 0c 4c
RSP: 002b:7fffcc16ea20 EFLAGS: 0202 ORIG_RAX: 0010
RAX: ffda RBX: 0007 RCX: 7f9e4adab80a
RDX: 7fffcc16ea20 RSI: 5401 RDI: 0002
RBP: 0007 R08: 7fffcc16ea60 R09: 
R10: 00020d50 R11: 0202 R12: 56153471ef90
R13: 7fffcc16ec30 R14:  R15: 
Modules linked in:
---[ end trace 1f05c0d7f6671379 ]---
RIP: 0010:tomoyo_check_acl+0xa9/0x3e0 security/tomoyo/domain.c:173
Code: 00 0f 85 2d 03 00 00 49 8b 1c 24 49 39 dc 0f 84 bd 01 00 00 e8 28 65 14 
fe 48 8d 7b 18 48 89 f8 48 89 fa 48 c1 e8 03 83 e2 07 <0f> b6 04 28 38 d0 7f 08 
84 c0 0f 85 a7 02 00 00 44 0f b6 73 18 31
RSP: 0018:c90016987bc8 EFLAGS: 00010246
RAX: 0003 RBX:  RCX: 835ed028
RDX:  RSI: 835ecff8 RDI: 0018
RBP: dc00 R08: 8880a1cce1c0 R09: 
R10:  R11:  R12: 8880a72db990
R13: c90016987c80 R14: 0033 R15: 0002
FS:  7f9e4b6ba8c0() GS:8880ae60() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: 55f7b8031310 CR3: 4df8d000 CR4: 001426f0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkal...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.


[PATCH v1] Bluetooth: hci_qca: Improve controller ID info log level

2020-05-25 Thread Zijun Hu
Controller ID info obtained by VSC EDL_PATCH_GETVER is very
important, so raise its log level from DEBUG to INFO.

Signed-off-by: Zijun Hu 
---
 drivers/bluetooth/btqca.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 3ea866d..49e5aeb 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -74,10 +74,10 @@ int qca_read_soc_version(struct hci_dev *hdev, u32 
*soc_version,
 
ver = (struct qca_btsoc_version *)(edl->data);
 
-   BT_DBG("%s: Product:0x%08x", hdev->name, le32_to_cpu(ver->product_id));
-   BT_DBG("%s: Patch  :0x%08x", hdev->name, le16_to_cpu(ver->patch_ver));
-   BT_DBG("%s: ROM:0x%08x", hdev->name, le16_to_cpu(ver->rom_ver));
-   BT_DBG("%s: SOC:0x%08x", hdev->name, le32_to_cpu(ver->soc_id));
+   bt_dev_info(hdev, "QCA Product:0x%08x", le32_to_cpu(ver->product_id));
+   bt_dev_info(hdev, "QCA Patch  :0x%08x", le16_to_cpu(ver->patch_ver));
+   bt_dev_info(hdev, "QCA ROM:0x%08x", le16_to_cpu(ver->rom_ver));
+   bt_dev_info(hdev, "QCA SOC:0x%08x", le32_to_cpu(ver->soc_id));
 
/* QCA chipset version can be decided by patch and SoC
 * version, combination with upper 2 bytes from SoC
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum, a 
Linux Foundation Collaborative Project



Re: [PATCH v2] relay: handle alloc_percpu returning NULL in relay_open

2020-05-25 Thread Michael Ellerman
[ + akpm ]

Daniel Axtens  writes:
>>> > Check if alloc_percpu returns NULL.
>>> > 
>>> > This was found by syzkaller both on x86 and powerpc, and the reproducer
>>> > it found on powerpc is capable of hitting the issue as an unprivileged
>>> > user.
>>> > 
>>> > Fixes: 017c59c042d0 ("relay: Use per CPU constructs for the relay channel 
>>> > buffer pointers")
>>> > Reported-by: syzbot+1e925b4b836afe85a...@syzkaller-ppc64.appspotmail.com
>>> > Reported-by: syzbot+587b2421926808309...@syzkaller-ppc64.appspotmail.com
>>> > Reported-by: syzbot+58320b7171734bf79...@syzkaller.appspotmail.com
>>> > Reported-by: syzbot+d6074fb08bdb2e010...@syzkaller.appspotmail.com
>>> > Cc: Akash Goel 
>>> > Cc: Andrew Donnellan  # syzkaller-ppc64
>>> > Reviewed-by: Michael Ellerman 
>>> > Reviewed-by: Andrew Donnellan 
>>> > Cc: sta...@vger.kernel.org # v4.10+
>>> > Signed-off-by: Daniel Axtens 
>>> 
>>> Acked-by: David Rientjes 
>>
>> It looks this one was never applied (which relates to CVE-2019-19462,
>> as pointed by Guenter in 20191223163610.ga32...@roeck-us.net).
>>
>> Whas this lost or are there any issues pending?
>
> I'm not aware of any pending issues.
>
> (But, if anyone does have any objections I'm happy to revise the patch.)

It looks like kernel/relay.c is lacking a maintainer?

Andrew are you able to pick this up for v5.8? It's pretty obviously
correct, and has David's ack.

Original is here if that helps:
  https://lore.kernel.org/lkml/20191219121256.26480-1-...@axtens.net/


cheers


[PATCH] drm/msm/a6xx: set ubwc config for A640 and A650

2020-05-25 Thread Jonathan Marek
This is required for A640 and A650 to be able to share UBWC-compressed
images with other HW such as display, which expect this configuration.

Signed-off-by: Jonathan Marek 
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 38 ++-
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c 
b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 6f335ae179c8..aa004a261277 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -289,6 +289,37 @@ static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state)
gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? 0x8aa8aa02 : 0);
 }
 
+static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
+{
+   struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
+   u32 lower_bit = 2;
+   u32 amsbc = 0;
+   u32 rgb565_predicator = 0;
+   u32 uavflagprd_inv = 0;
+
+   /* a618 is using the hw default values */
+   if (adreno_is_a618(adreno_gpu))
+   return;
+
+   if (adreno_is_a640(adreno_gpu))
+   amsbc = 1;
+
+   if (adreno_is_a650(adreno_gpu)) {
+   /* TODO: get ddr type from bootloader and use 2 for LPDDR4 */
+   lower_bit = 3;
+   amsbc = 1;
+   rgb565_predicator = 1;
+   uavflagprd_inv = 2;
+   }
+
+   gpu_write(gpu, REG_A6XX_RB_NC_MODE_CNTL,
+   rgb565_predicator << 11 | amsbc << 4 | lower_bit << 1);
+   gpu_write(gpu, REG_A6XX_TPL1_NC_MODE_CNTL, lower_bit << 1);
+   gpu_write(gpu, REG_A6XX_SP_NC_MODE_CNTL,
+   uavflagprd_inv >> 4 | lower_bit << 1);
+   gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, lower_bit << 21);
+}
+
 static int a6xx_cp_init(struct msm_gpu *gpu)
 {
struct msm_ringbuffer *ring = gpu->rb[0];
@@ -478,12 +509,7 @@ static int a6xx_hw_init(struct msm_gpu *gpu)
/* Select CP0 to always count cycles */
gpu_write(gpu, REG_A6XX_CP_PERFCTR_CP_SEL_0, PERF_CP_ALWAYS_COUNT);
 
-   if (adreno_is_a630(adreno_gpu)) {
-   gpu_write(gpu, REG_A6XX_RB_NC_MODE_CNTL, 2 << 1);
-   gpu_write(gpu, REG_A6XX_TPL1_NC_MODE_CNTL, 2 << 1);
-   gpu_write(gpu, REG_A6XX_SP_NC_MODE_CNTL, 2 << 1);
-   gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, 2 << 21);
-   }
+   a6xx_set_ubwc_config(gpu);
 
/* Enable fault detection */
gpu_write(gpu, REG_A6XX_RBBM_INTERFACE_HANG_INT_CNTL,
-- 
2.26.1



[PATCH 6/8] drm/msm/dpu: intf timing path for displayport

2020-05-25 Thread Jonathan Marek
Calculate the correct timings for displayport, from downstream driver.

Signed-off-by: Jonathan Marek 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c | 20 +++-
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
index 64f556d693dd..6f0f54588124 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
@@ -107,11 +107,6 @@ static void dpu_hw_intf_setup_timing_engine(struct 
dpu_hw_intf *ctx,
display_v_end = ((vsync_period - p->v_front_porch) * hsync_period) +
p->hsync_skew - 1;
 
-   if (ctx->cap->type == INTF_EDP || ctx->cap->type == INTF_DP) {
-   display_v_start += p->hsync_pulse_width + p->h_back_porch;
-   display_v_end -= p->h_front_porch;
-   }
-
hsync_start_x = p->h_back_porch + p->hsync_pulse_width;
hsync_end_x = hsync_period - p->h_front_porch - 1;
 
@@ -144,10 +139,25 @@ static void dpu_hw_intf_setup_timing_engine(struct 
dpu_hw_intf *ctx,
hsync_ctl = (hsync_period << 16) | p->hsync_pulse_width;
display_hctl = (hsync_end_x << 16) | hsync_start_x;
 
+   if (ctx->cap->type == INTF_EDP || ctx->cap->type == INTF_DP) {
+   active_h_start = hsync_start_x;
+   active_h_end = active_h_start + p->xres - 1;
+   active_v_start = display_v_start;
+   active_v_end = active_v_start + (p->yres * hsync_period) - 1;
+
+   display_v_start += p->hsync_pulse_width + p->h_back_porch;
+
+   active_hctl = (active_h_end << 16) | active_h_start;
+   display_hctl = active_hctl;
+   }
+
den_polarity = 0;
if (ctx->cap->type == INTF_HDMI) {
hsync_polarity = p->yres >= 720 ? 0 : 1;
vsync_polarity = p->yres >= 720 ? 0 : 1;
+   } else if (ctx->cap->type == INTF_DP) {
+   hsync_polarity = p->hsync_polarity;
+   vsync_polarity = p->vsync_polarity;
} else {
hsync_polarity = 0;
vsync_polarity = 0;
-- 
2.26.1



[PATCH 4/8] drm/msm/dpu: don't use INTF_INPUT_CTRL feature on sdm845

2020-05-25 Thread Jonathan Marek
The INTF_INPUT_CTRL feature is not available on sdm845, so don't set it.

This also adds separate feature bits for INTF (based on downstream) instead
of using CTL feature bit for it, and removes the unnecessary NULL check in
the added bind_pingpong_blk function.

Fixes: 73bfb790ac786ca55fa2786a06f59 ("msm:disp:dpu1: setup display datapath 
for SC7180 target")

Signed-off-by: Jonathan Marek 
---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 20 +++
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h| 13 
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c   |  9 ++---
 3 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 496407f1cd08..1e64fa08c219 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -41,6 +41,10 @@
 #define PINGPONG_SDM845_SPLIT_MASK \
(PINGPONG_SDM845_MASK | BIT(DPU_PINGPONG_TE2))
 
+#define INTF_SDM845_MASK (0)
+
+#define INTF_SC7180_MASK BIT(DPU_INTF_INPUT_CTRL) | BIT(DPU_INTF_TE)
+
 #define DEFAULT_PIXEL_RAM_SIZE (50 * 1024)
 #define DEFAULT_DPU_LINE_WIDTH 2048
 #define DEFAULT_DPU_OUTPUT_LINE_WIDTH  2560
@@ -376,26 +380,26 @@ static struct dpu_pingpong_cfg sc7180_pp[] = {
 /*
  * INTF sub blocks config
  */
-#define INTF_BLK(_name, _id, _base, _type, _ctrl_id) \
+#define INTF_BLK(_name, _id, _base, _type, _ctrl_id, _features) \
{\
.name = _name, .id = _id, \
.base = _base, .len = 0x280, \
-   .features = BIT(DPU_CTL_ACTIVE_CFG), \
+   .features = _features, \
.type = _type, \
.controller_id = _ctrl_id, \
.prog_fetch_lines_worst_case = 24 \
}
 
 static const struct dpu_intf_cfg sdm845_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0),
-   INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0),
-   INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1),
-   INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1),
+   INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, INTF_SDM845_MASK),
+   INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, INTF_SDM845_MASK),
+   INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, INTF_SDM845_MASK),
+   INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1, INTF_SDM845_MASK),
 };
 
 static const struct dpu_intf_cfg sc7180_intf[] = {
-   INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0),
-   INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0),
+   INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, INTF_SC7180_MASK),
+   INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, INTF_SC7180_MASK),
 };
 
 /*
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
index 7a8d1c6658d2..31ddb2be9c57 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
@@ -175,6 +175,19 @@ enum {
DPU_CTL_MAX
 };
 
+/**
+ * INTF sub-blocks
+ * @DPU_INTF_INPUT_CTRL Supports the setting of pp block from which
+ *  pixel data arrives to this INTF
+ * @DPU_INTF_TE INTF block has TE configuration support
+ * @DPU_INTF_MAX
+ */
+enum {
+   DPU_INTF_INPUT_CTRL = 0x1,
+   DPU_INTF_TE,
+   DPU_INTF_MAX
+};
+
 /**
  * VBIF sub-blocks and features
  * @DPU_VBIF_QOS_OTLIMVBIF supports OT Limit
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
index efe9a5719c6b..64f556d693dd 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
@@ -225,14 +225,9 @@ static void dpu_hw_intf_bind_pingpong_blk(
bool enable,
const enum dpu_pingpong pp)
 {
-   struct dpu_hw_blk_reg_map *c;
+   struct dpu_hw_blk_reg_map *c = &intf->hw;
u32 mux_cfg;
 
-   if (!intf)
-   return;
-
-   c = &intf->hw;
-
mux_cfg = DPU_REG_READ(c, INTF_MUX);
mux_cfg &= ~0xf;
 
@@ -280,7 +275,7 @@ static void _setup_intf_ops(struct dpu_hw_intf_ops *ops,
ops->get_status = dpu_hw_intf_get_status;
ops->enable_timing = dpu_hw_intf_enable_timing_engine;
ops->get_line_count = dpu_hw_intf_get_line_count;
-   if (cap & BIT(DPU_CTL_ACTIVE_CFG))
+   if (cap & BIT(DPU_INTF_INPUT_CTRL))
ops->bind_pingpong_blk = dpu_hw_intf_bind_pingpong_blk;
 }
 
-- 
2.26.1



[PATCH 7/8] drm/msm/dpu: add SM8150 to hw catalog

2020-05-25 Thread Jonathan Marek
This brings up basic video mode functionality for SM8150 DPU. Command mode
and dual mixer/intf configurations are not working, future patches will
address this. Scaler functionality and multiple planes are also untested.

Signed-off-by: Jonathan Marek 
---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 147 ++
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h   |   2 +
 2 files changed, 149 insertions(+)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 1e64fa08c219..f99622870676 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -90,6 +90,23 @@ static const struct dpu_caps sc7180_dpu_caps = {
.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
 };
 
+static const struct dpu_caps sm8150_dpu_caps = {
+   .max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+   .max_mixer_blendstages = 0xb,
+   .max_linewidth = 4096,
+   .qseed_type = DPU_SSPP_SCALER_QSEED3,
+   .smart_dma_rev = DPU_SSPP_SMART_DMA_V2_5,
+   .ubwc_version = DPU_HW_UBWC_VER_30,
+   .has_src_split = true,
+   .has_dim_layer = true,
+   .has_idle_pc = true,
+   .has_3d_merge = true,
+   .max_linewidth = 4096,
+   .pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
+   .max_hdeci_exp = MAX_HORZ_DECIMATION,
+   .max_vdeci_exp = MAX_VERT_DECIMATION,
+};
+
 static const struct dpu_mdp_cfg sdm845_mdp[] = {
{
.name = "top_0", .id = MDP_TOP,
@@ -181,6 +198,39 @@ static const struct dpu_ctl_cfg sc7180_ctl[] = {
},
 };
 
+static const struct dpu_ctl_cfg sm8150_ctl[] = {
+   {
+   .name = "ctl_0", .id = CTL_0,
+   .base = 0x1000, .len = 0x1e0,
+   .features = BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_SPLIT_DISPLAY)
+   },
+   {
+   .name = "ctl_1", .id = CTL_1,
+   .base = 0x1200, .len = 0x1e0,
+   .features = BIT(DPU_CTL_ACTIVE_CFG) | BIT(DPU_CTL_SPLIT_DISPLAY)
+   },
+   {
+   .name = "ctl_2", .id = CTL_2,
+   .base = 0x1400, .len = 0x1e0,
+   .features = BIT(DPU_CTL_ACTIVE_CFG)
+   },
+   {
+   .name = "ctl_3", .id = CTL_3,
+   .base = 0x1600, .len = 0x1e0,
+   .features = BIT(DPU_CTL_ACTIVE_CFG)
+   },
+   {
+   .name = "ctl_4", .id = CTL_4,
+   .base = 0x1800, .len = 0x1e0,
+   .features = BIT(DPU_CTL_ACTIVE_CFG)
+   },
+   {
+   .name = "ctl_5", .id = CTL_5,
+   .base = 0x1a00, .len = 0x1e0,
+   .features = BIT(DPU_CTL_ACTIVE_CFG)
+   },
+};
+
 /*
  * SSPP sub blocks config
  */
@@ -335,6 +385,23 @@ static const struct dpu_lm_cfg sc7180_lm[] = {
&sc7180_lm_sblk, PINGPONG_1, LM_0),
 };
 
+/* SM8150 */
+
+static const struct dpu_lm_cfg sm8150_lm[] = {
+   LM_BLK("lm_0", LM_0, 0x44000, MIXER_SDM845_MASK,
+   &sdm845_lm_sblk, PINGPONG_0, LM_1),
+   LM_BLK("lm_1", LM_1, 0x45000, MIXER_SDM845_MASK,
+   &sdm845_lm_sblk, PINGPONG_1, LM_0),
+   LM_BLK("lm_2", LM_2, 0x46000, MIXER_SDM845_MASK,
+   &sdm845_lm_sblk, PINGPONG_2, LM_3),
+   LM_BLK("lm_3", LM_3, 0x47000, MIXER_SDM845_MASK,
+   &sdm845_lm_sblk, PINGPONG_3, LM_2),
+   LM_BLK("lm_4", LM_4, 0x48000, MIXER_SDM845_MASK,
+   &sdm845_lm_sblk, PINGPONG_4, LM_5),
+   LM_BLK("lm_5", LM_5, 0x49000, MIXER_SDM845_MASK,
+   &sdm845_lm_sblk, PINGPONG_5, LM_4),
+};
+
 /*
  * PINGPONG sub blocks config
  */
@@ -377,6 +444,15 @@ static struct dpu_pingpong_cfg sc7180_pp[] = {
PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800),
 };
 
+static const struct dpu_pingpong_cfg sm8150_pp[] = {
+   PP_BLK_TE("pingpong_0", PINGPONG_0, 0x70000),
+   PP_BLK_TE("pingpong_1", PINGPONG_1, 0x70800),
+   PP_BLK("pingpong_2", PINGPONG_2, 0x71000),
+   PP_BLK("pingpong_3", PINGPONG_3, 0x71800),
+   PP_BLK("pingpong_4", PINGPONG_4, 0x72000),
+   PP_BLK("pingpong_5", PINGPONG_5, 0x72800),
+};
+
 /*
  * INTF sub blocks config
  */
@@ -402,6 +478,13 @@ static const struct dpu_intf_cfg sc7180_intf[] = {
INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, INTF_SC7180_MASK),
 };
 
+static const struct dpu_intf_cfg sm8150_intf[] = {
+   INTF_BLK("intf_0", INTF_0, 0x6A000, INTF_DP, 0, INTF_SC7180_MASK),
+   INTF_BLK("intf_1", INTF_1, 0x6A800, INTF_DSI, 0, INTF_SC7180_MASK),
+   INTF_BLK("intf_2", INTF_2, 0x6B000, INTF_DSI, 1, INTF_SC7180_MASK),
+   INTF_BLK("intf_3", INTF_3, 0x6B800, INTF_DP, 1, INTF_SC7180_MASK),
+};
+
 /*
  * VBIF sub blocks config
  

[PATCH 5/8] drm/msm/dpu: set missing flush bits for INTF_2 and INTF_3

2020-05-25 Thread Jonathan Marek
This fixes flushing of INTF_2 and INTF_3 on SM8150 and SM8250 hardware.

Signed-off-by: Jonathan Marek 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c | 20 ++--
 1 file changed, 2 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
index 831e5f7a9b7f..99afdd66 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
@@ -245,30 +245,14 @@ static int dpu_hw_ctl_get_bitmask_intf(struct dpu_hw_ctl 
*ctx,
 static int dpu_hw_ctl_get_bitmask_intf_v1(struct dpu_hw_ctl *ctx,
u32 *flushbits, enum dpu_intf intf)
 {
-   switch (intf) {
-   case INTF_0:
-   case INTF_1:
-   *flushbits |= BIT(31);
-   break;
-   default:
-   return 0;
-   }
+   *flushbits |= BIT(31);
return 0;
 }
 
 static int dpu_hw_ctl_active_get_bitmask_intf(struct dpu_hw_ctl *ctx,
u32 *flushbits, enum dpu_intf intf)
 {
-   switch (intf) {
-   case INTF_0:
-   *flushbits |= BIT(0);
-   break;
-   case INTF_1:
-   *flushbits |= BIT(1);
-   break;
-   default:
-   return 0;
-   }
+   *flushbits |= BIT(intf - INTF_0);
return 0;
 }
 
-- 
2.26.1



[PATCH 8/8] drm/msm/dpu: add SM8250 to hw catalog

2020-05-25 Thread Jonathan Marek
This brings up basic video mode functionality for SM8250 DPU. Command mode
and dual mixer/intf configurations are not working, future patches will
address this. Scaler functionality and multiple planes is also untested.

Signed-off-by: Jonathan Marek 
---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 106 ++
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h|   3 +
 2 files changed, 109 insertions(+)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index f99622870676..711ec1e6a543 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -107,6 +107,21 @@ static const struct dpu_caps sm8150_dpu_caps = {
.max_vdeci_exp = MAX_VERT_DECIMATION,
 };
 
+static const struct dpu_caps sm8250_dpu_caps = {
+   .max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+   .max_mixer_blendstages = 0xb,
+   .max_linewidth = 4096,
+   .qseed_type = DPU_SSPP_SCALER_QSEED3, /* TODO: qseed3 lite */
+   .smart_dma_rev = DPU_SSPP_SMART_DMA_V2_5,
+   .ubwc_version = DPU_HW_UBWC_VER_40,
+   .has_src_split = true,
+   .has_dim_layer = true,
+   .has_idle_pc = true,
+   .has_3d_merge = true,
+   .max_linewidth = 4096,
+   .pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
+};
+
 static const struct dpu_mdp_cfg sdm845_mdp[] = {
{
.name = "top_0", .id = MDP_TOP,
@@ -149,6 +164,33 @@ static const struct dpu_mdp_cfg sc7180_mdp[] = {
},
 };
 
+static const struct dpu_mdp_cfg sm8250_mdp[] = {
+   {
+   .name = "top_0", .id = MDP_TOP,
+   .base = 0x0, .len = 0x45C,
+   .features = 0,
+   .highest_bank_bit = 0x3, /* TODO: 2 for LP_DDR4 */
+   .clk_ctrls[DPU_CLK_CTRL_VIG0] = {
+   .reg_off = 0x2AC, .bit_off = 0},
+   .clk_ctrls[DPU_CLK_CTRL_VIG1] = {
+   .reg_off = 0x2B4, .bit_off = 0},
+   .clk_ctrls[DPU_CLK_CTRL_VIG2] = {
+   .reg_off = 0x2BC, .bit_off = 0},
+   .clk_ctrls[DPU_CLK_CTRL_VIG3] = {
+   .reg_off = 0x2C4, .bit_off = 0},
+   .clk_ctrls[DPU_CLK_CTRL_DMA0] = {
+   .reg_off = 0x2AC, .bit_off = 8},
+   .clk_ctrls[DPU_CLK_CTRL_DMA1] = {
+   .reg_off = 0x2B4, .bit_off = 8},
+   .clk_ctrls[DPU_CLK_CTRL_CURSOR0] = {
+   .reg_off = 0x2BC, .bit_off = 8},
+   .clk_ctrls[DPU_CLK_CTRL_CURSOR1] = {
+   .reg_off = 0x2C4, .bit_off = 8},
+   .clk_ctrls[DPU_CLK_CTRL_REG_DMA] = {
+   .reg_off = 0x2BC, .bit_off = 20},
+   },
+};
+
 /*
  * CTL sub blocks config
  */
@@ -519,6 +561,14 @@ static const struct dpu_reg_dma_cfg sm8150_regdma = {
.base = 0x0, .version = 0x00010001, .trigger_sel_off = 0x119c
 };
 
+static const struct dpu_reg_dma_cfg sm8250_regdma = {
+   .base = 0x0,
+   .version = 0x00010002,
+   .trigger_sel_off = 0x119c,
+   .xin_id = 7,
+   .clk_ctrl = DPU_CLK_CTRL_REG_DMA,
+};
+
 /*
  * PERF data config
  */
@@ -656,6 +706,31 @@ static const struct dpu_perf_cfg sm8150_perf_data = {
},
 };
 
+static const struct dpu_perf_cfg sm8250_perf_data = {
+   .max_bw_low = 1370,
+   .max_bw_high = 1660,
+   .min_core_ib = 480,
+   .min_llcc_ib = 0,
+   .min_dram_ib = 80,
+   .danger_lut_tbl = {0xf, 0xffff, 0x0},
+   .qos_lut_tbl = {
+   {.nentry = ARRAY_SIZE(sc7180_qos_linear),
+   .entries = sc7180_qos_linear
+   },
+   {.nentry = ARRAY_SIZE(sc7180_qos_macrotile),
+   .entries = sc7180_qos_macrotile
+   },
+   {.nentry = ARRAY_SIZE(sc7180_qos_nrt),
+   .entries = sc7180_qos_nrt
+   },
+   /* TODO: macrotile-qseed is different from macrotile */
+   },
+   .cdp_cfg = {
+   {.rd_enable = 1, .wr_enable = 1},
+   {.rd_enable = 1, .wr_enable = 0}
+   },
+};
+
 /*
  * Hardware catalog init
  */
@@ -747,11 +822,42 @@ static void sm8150_cfg_init(struct dpu_mdss_cfg *dpu_cfg)
};
 }
 
+/*
+ * sm8250_cfg_init(): populate sm8250 dpu sub-blocks reg offsets
+ * and instance counts.
+ */
+static void sm8250_cfg_init(struct dpu_mdss_cfg *dpu_cfg)
+{
+   *dpu_cfg = (struct dpu_mdss_cfg){
+   .caps = &sm8250_dpu_caps,
+   .mdp_count = ARRAY_SIZE(sm8250_mdp),
+   .mdp = sm8250_mdp,
+   .ctl_count = ARRAY_SIZE(sm8150_ctl),
+   .ctl = sm8150_ctl,
+   /* TODO: sspp qseed 

[PATCH 2/8] drm/msm/dpu: update UBWC config for sm8150 and sm8250

2020-05-25 Thread Jonathan Marek
Update the UBWC registers to the right values for sm8150 and sm8250.

This removes broken dpu_hw_reset_ubwc, which doesn't work because the
"force blk offset to zero to access beginning of register region" hack is
copied from downstream, where the mapped region starts 0x1000 below what is
used in the upstream driver.

Also simplifies the overly complicated change that was introduced in
e4f9bbe9f8beab9a1ce4 to work around dpu_hw_reset_ubwc being broken.

Signed-off-by: Jonathan Marek 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c   |  6 --
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h|  8 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c   | 16 +++-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c| 18 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h|  7 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c  | 75 ++-
 6 files changed, 42 insertions(+), 88 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 1b960d9d1b33..3b48257886c6 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -1090,12 +1090,6 @@ static void _dpu_encoder_virt_enable_helper(struct 
drm_encoder *drm_enc)
return;
}
 
-   if (dpu_enc->cur_master->hw_mdptop &&
-   dpu_enc->cur_master->hw_mdptop->ops.reset_ubwc)
-   dpu_enc->cur_master->hw_mdptop->ops.reset_ubwc(
-   dpu_enc->cur_master->hw_mdptop,
-   dpu_kms->catalog);
-
_dpu_encoder_update_vsync_source(dpu_enc, &dpu_enc->disp_info);
 }
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
index 09df7d87dd43..f45f031a3a05 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
@@ -37,7 +37,9 @@
 #define DPU_HW_VER_400 DPU_HW_VER(4, 0, 0) /* sdm845 v1.0 */
 #define DPU_HW_VER_401 DPU_HW_VER(4, 0, 1) /* sdm845 v2.0 */
 #define DPU_HW_VER_410 DPU_HW_VER(4, 1, 0) /* sdm670 v1.0 */
-#define DPU_HW_VER_500 DPU_HW_VER(5, 0, 0) /* sdm855 v1.0 */
+#define DPU_HW_VER_500 DPU_HW_VER(5, 0, 0) /* sm8150 v1.0 */
+#define DPU_HW_VER_501 DPU_HW_VER(5, 0, 1) /* sm8150 v2.0 */
+#define DPU_HW_VER_600 DPU_HW_VER(6, 0, 0) /* sm8250 */
 #define DPU_HW_VER_620 DPU_HW_VER(6, 2, 0) /* sc7180 v1.0 */
 
 
@@ -65,10 +67,9 @@ enum {
DPU_HW_UBWC_VER_10 = 0x100,
DPU_HW_UBWC_VER_20 = 0x200,
DPU_HW_UBWC_VER_30 = 0x300,
+   DPU_HW_UBWC_VER_40 = 0x400,
 };
 
-#define IS_UBWC_20_SUPPORTED(rev)   ((rev) >= DPU_HW_UBWC_VER_20)
-
 /**
  * MDP TOP BLOCK features
 * @DPU_MDP_PANIC_PER_PIPE Panic configuration needs to be done per pipe
@@ -426,7 +427,6 @@ struct dpu_clk_ctrl_reg {
 struct dpu_mdp_cfg {
DPU_HW_BLK_INFO;
u32 highest_bank_bit;
-   u32 ubwc_static;
u32 ubwc_swizzle;
struct dpu_clk_ctrl_reg clk_ctrls[DPU_CLK_CTRL_MAX];
 };
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
index 82c5dbfdabc7..c940b69435e1 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c
@@ -303,11 +303,25 @@ static void dpu_hw_sspp_setup_format(struct dpu_hw_pipe 
*ctx,
DPU_REG_WRITE(c, SSPP_FETCH_CONFIG,
DPU_FETCH_CONFIG_RESET_VALUE |
ctx->mdp->highest_bank_bit << 18);
-   if (IS_UBWC_20_SUPPORTED(ctx->catalog->caps->ubwc_version)) {
+   switch (ctx->catalog->caps->ubwc_version) {
+   case DPU_HW_UBWC_VER_10:
+   /* TODO: UBWC v1 case */
+   break;
+   case DPU_HW_UBWC_VER_20:
fast_clear = fmt->alpha_enable ? BIT(31) : 0;
DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
fast_clear | (ctx->mdp->ubwc_swizzle) |
(ctx->mdp->highest_bank_bit << 4));
+   break;
+   case DPU_HW_UBWC_VER_30:
+   DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
+   BIT(30) | (ctx->mdp->ubwc_swizzle) |
+   (ctx->mdp->highest_bank_bit << 4));
+   break;
+   case DPU_HW_UBWC_VER_40:
+   DPU_REG_WRITE(c, SSPP_UBWC_STATIC_CTRL,
+   DPU_FORMAT_IS_YUV(fmt) ? 0 : BIT(30));
+   break;
}
}
 
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
index f9af52ae9f3e..01b76766a9a8 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
@@ -8,7 +8,6 @@
 #include "dpu_kms.h"
 
 #define SSPP_SPARE0x28
-#define 

[PATCH 3/8] drm/msm/dpu: move some sspp caps to dpu_caps

2020-05-25 Thread Jonathan Marek
This isn't something that ever changes between planes, so move it to
dpu_caps struct. Making this change will allow more re-use in the
"SSPP sub blocks config" part of the catalog, in particular when adding
support for SM8150 and SM8250 which have different max_linewidth.

This also sets max_hdeci_exp/max_vdeci_exp to 0 for sc7180, as decimation
is not supported on the newest DPU versions. (note that decimation is not
implemented, so this changes nothing)

Signed-off-by: Jonathan Marek 
---
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 14 +--
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h| 24 +++
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c |  6 ++---
 3 files changed, 17 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index c567917541e8..496407f1cd08 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -68,6 +68,10 @@ static const struct dpu_caps sdm845_dpu_caps = {
.has_dim_layer = true,
.has_idle_pc = true,
.has_3d_merge = true,
+   .max_linewidth = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+   .pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
+   .max_hdeci_exp = MAX_HORZ_DECIMATION,
+   .max_vdeci_exp = MAX_VERT_DECIMATION,
 };
 
 static const struct dpu_caps sc7180_dpu_caps = {
@@ -78,6 +82,8 @@ static const struct dpu_caps sc7180_dpu_caps = {
.ubwc_version = DPU_HW_UBWC_VER_20,
.has_dim_layer = true,
.has_idle_pc = true,
+   .max_linewidth = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
+   .pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
 };
 
 static const struct dpu_mdp_cfg sdm845_mdp[] = {
@@ -176,16 +182,9 @@ static const struct dpu_ctl_cfg sc7180_ctl[] = {
  */
 
 /* SSPP common configuration */
-static const struct dpu_sspp_blks_common sdm845_sspp_common = {
-   .maxlinewidth = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
-   .pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
-   .maxhdeciexp = MAX_HORZ_DECIMATION,
-   .maxvdeciexp = MAX_VERT_DECIMATION,
-};
 
 #define _VIG_SBLK(num, sdma_pri, qseed_ver) \
{ \
-   .common = &sdm845_sspp_common, \
.maxdwnscale = MAX_DOWNSCALE_RATIO, \
.maxupscale = MAX_UPSCALE_RATIO, \
.smart_dma_priority = sdma_pri, \
@@ -205,7 +204,6 @@ static const struct dpu_sspp_blks_common sdm845_sspp_common 
= {
 
 #define _DMA_SBLK(num, sdma_pri) \
{ \
-   .common = &sdm845_sspp_common, \
.maxdwnscale = SSPP_UNITY_SCALE, \
.maxupscale = SSPP_UNITY_SCALE, \
.smart_dma_priority = sdma_pri, \
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
index f45f031a3a05..7a8d1c6658d2 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
@@ -290,6 +290,10 @@ struct dpu_qos_lut_tbl {
  * @has_dim_layer  dim layer feature status
  * @has_idle_pcindicate if idle power collapse feature is supported
  * @has_3d_merge   indicate if 3D merge is supported
+ * @max_linewidth  max linewidth for sspp
+ * @pixel_ram_size size of latency hiding and de-tiling buffer in bytes
+ * @max_hdeci_exp  max horizontal decimation supported (max is 2^value)
+ * @max_vdeci_exp  max vertical decimation supported (max is 2^value)
  */
 struct dpu_caps {
u32 max_mixer_width;
@@ -301,22 +305,11 @@ struct dpu_caps {
bool has_dim_layer;
bool has_idle_pc;
bool has_3d_merge;
-};
-
-/**
- * struct dpu_sspp_blks_common : SSPP sub-blocks common configuration
- * @maxwidth: max pixelwidth supported by this pipe
- * @pixel_ram_size: size of latency hiding and de-tiling buffer in bytes
- * @maxhdeciexp: max horizontal decimation supported by this pipe
- * (max is 2^value)
- * @maxvdeciexp: max vertical decimation supported by this pipe
- * (max is 2^value)
- */
-struct dpu_sspp_blks_common {
-   u32 maxlinewidth;
+   /* SSPP limits */
+   u32 max_linewidth;
u32 pixel_ram_size;
-   u32 maxhdeciexp;
-   u32 maxvdeciexp;
+   u32 max_hdeci_exp;
+   u32 max_vdeci_exp;
 };
 
 /**
@@ -342,7 +335,6 @@ struct dpu_sspp_blks_common {
  * @virt_num_formats: Number of supported formats for virtual planes
  */
 struct dpu_sspp_sub_blks {
-   const struct dpu_sspp_blks_common *common;
u32 creq_vblank;
u32 danger_vblank;
u32 maxdwnscale;
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index 3b9c33e694bf..33f6c56f01ed 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -153,7 +153,7 @@ static int _dpu_plane_calc_fill_level(struct drm_plane 
*plane,
 
pdpu = to_dpu_plane(plane);
pstate = 

[PATCH 1/8] drm/msm/dpu: use right setup_blend_config for sm8150 and sm8250

2020-05-25 Thread Jonathan Marek
All DPU versions starting from 4.0 use the sdm845 version, so check for
that instead of checking each version individually. This chooses the right
function for sm8150 and sm8250.

Signed-off-by: Jonathan Marek 
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c 
b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
index 37becd43bd54..4b8baf71423f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
@@ -152,14 +152,13 @@ static void _setup_mixer_ops(const struct dpu_mdss_cfg *m,
unsigned long features)
 {
ops->setup_mixer_out = dpu_hw_lm_setup_out;
-   if (IS_SDM845_TARGET(m->hwversion) || IS_SDM670_TARGET(m->hwversion)
-   || IS_SC7180_TARGET(m->hwversion))
+   if (m->hwversion >= DPU_HW_VER_400)
ops->setup_blend_config = dpu_hw_lm_setup_blend_config_sdm845;
else
ops->setup_blend_config = dpu_hw_lm_setup_blend_config;
ops->setup_alpha_out = dpu_hw_lm_setup_color3;
ops->setup_border_color = dpu_hw_lm_setup_border_color;
-};
+}
 
 static struct dpu_hw_blk_ops dpu_hw_ops;
 
-- 
2.26.1



[PATCH 0/8] Initial SM8150 and SM8250 DPU bringup

2020-05-25 Thread Jonathan Marek
These patches bring up SM8150 and SM8250 with basic functionality.

Tested with displayport output (single mixer, video mode case).

I will send patches later to add support for merge3d and dual DSI
configurations, and possibly also patches to fix command mode on
these SoCs (note it is also currently broken for SC7180).

Jonathan Marek (8):
  drm/msm/dpu: use right setup_blend_config for sm8150 and sm8250
  drm/msm/dpu: update UBWC config for sm8150 and sm8250
  drm/msm/dpu: move some sspp caps to dpu_caps
  drm/msm/dpu: don't use INTF_INPUT_CTRL feature on sdm845
  drm/msm/dpu: set missing flush bits for INTF_2 and INTF_3
  drm/msm/dpu: intf timing path for displayport
  drm/msm/dpu: add SM8150 to hw catalog
  drm/msm/dpu: add SM8250 to hw catalog

 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c   |   6 -
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c| 287 +-
 .../gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h|  48 +--
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c|  20 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c   |  29 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c |   5 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h   |   2 +
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp.c   |  16 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c|  18 --
 drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.h|   7 -
 drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c  |  75 ++---
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c |   6 +-
 12 files changed, 363 insertions(+), 156 deletions(-)

-- 
2.26.1



[PATCH v2 2/2] crypto: virtio: Fix use-after-free in virtio_crypto_skcipher_finalize_req()

2020-05-25 Thread Longpeng(Mike)
The system will crash when users insmod crypto/tcrypt.ko with mode=155
( testing "authenc(hmac(sha1),cbc(aes))" ). It is caused by reuse of the
request structure's memory.

In crypto_authenc_init_tfm(), the reqsize is set to:
  [PART 1] sizeof(authenc_request_ctx) +
  [PART 2] ictx->reqoff +
  [PART 3] MAX(ahash part, skcipher part)
and the 'PART 3' is used by both ahash and skcipher in turn.

When the virtio_crypto driver finishes the skcipher req, it will call the
->complete callback (in crypto_finalize_skcipher_request) and then free its
resources, whose pointers are recorded in the 'skcipher part'.

However, the ->complete callback is 'crypto_authenc_encrypt_done' in this
case; it will use the 'ahash part' of the request and change its content,
so the virtio_crypto driver will get a wrong pointer after ->complete
finishes and mistakenly free someone else's memory. The system will then
crash when that memory is used again.

The resources which need to be cleaned up are not used any more. But the
pointers of these resources may be changed in the function
"crypto_finalize_skcipher_request". Thus release specific resources before
calling this function.

Fixes: dbaf0624ffa5 ("crypto: add virtio-crypto driver")
Reported-by: LABBE Corentin 
Cc: Gonglei 
Cc: Herbert Xu 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: "David S. Miller" 
Cc: Markus Elfring 
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Cc: sta...@vger.kernel.org
Message-Id: <20200123101000.GB24255@Red>
Signed-off-by: Longpeng(Mike) 
---
 drivers/crypto/virtio/virtio_crypto_algs.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c 
b/drivers/crypto/virtio/virtio_crypto_algs.c
index 5f8243563009..52261b6c247e 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -582,10 +582,11 @@ static void virtio_crypto_skcipher_finalize_req(
scatterwalk_map_and_copy(req->iv, req->dst,
 req->cryptlen - AES_BLOCK_SIZE,
 AES_BLOCK_SIZE, 0);
-   crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
-  req, err);
kzfree(vc_sym_req->iv);
virtcrypto_clear_request(_sym_req->base);
+
+   crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
+  req, err);
 }
 
 static struct virtio_crypto_algo virtio_crypto_algs[] = { {
-- 
2.23.0



[PATCH v2 1/2] crypto: virtio: Fix src/dst scatterlist calculation in __virtio_crypto_skcipher_do_req()

2020-05-25 Thread Longpeng(Mike)
The system will crash when users insmod crypto/tcrypt.ko with mode=38
( testing "cts(cbc(aes))" ).

Usually the next entry of one sg will be @sg@ + 1, but if this sg element
is part of a chained scatterlist, it could jump to the start of a new
scatterlist array. Fix it by using sg_next() when walking the src/dst
scatterlists.

Fixes: dbaf0624ffa5 ("crypto: add virtio-crypto driver")
Reported-by: LABBE Corentin 
Cc: Herbert Xu 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: "David S. Miller" 
Cc: Markus Elfring 
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Cc: sta...@vger.kernel.org
Message-Id: <20200123101000.GB24255@Red>
Signed-off-by: Gonglei 
Signed-off-by: Longpeng(Mike) 
---
 drivers/crypto/virtio/virtio_crypto_algs.c | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c 
b/drivers/crypto/virtio/virtio_crypto_algs.c
index fd045e64972a..5f8243563009 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.c
+++ b/drivers/crypto/virtio/virtio_crypto_algs.c
@@ -350,13 +350,18 @@ __virtio_crypto_skcipher_do_req(struct 
virtio_crypto_sym_request *vc_sym_req,
int err;
unsigned long flags;
struct scatterlist outhdr, iv_sg, status_sg, **sgs;
-   int i;
u64 dst_len;
unsigned int num_out = 0, num_in = 0;
int sg_total;
uint8_t *iv;
+   struct scatterlist *sg;
 
src_nents = sg_nents_for_len(req->src, req->cryptlen);
+   if (src_nents < 0) {
+   pr_err("Invalid number of src SG.\n");
+   return src_nents;
+   }
+
dst_nents = sg_nents(req->dst);
 
pr_debug("virtio_crypto: Number of sgs (src_nents: %d, dst_nents: 
%d)\n",
@@ -442,12 +447,12 @@ __virtio_crypto_skcipher_do_req(struct 
virtio_crypto_sym_request *vc_sym_req,
vc_sym_req->iv = iv;
 
/* Source data */
-   for (i = 0; i < src_nents; i++)
-   sgs[num_out++] = &req->src[i];
+   for (sg = req->src; src_nents; sg = sg_next(sg), src_nents--)
+   sgs[num_out++] = sg;
 
/* Destination data */
-   for (i = 0; i < dst_nents; i++)
-   sgs[num_out + num_in++] = &req->dst[i];
+   for (sg = req->dst; sg; sg = sg_next(sg))
+   sgs[num_out + num_in++] = sg;
 
/* Status */
sg_init_one(&status_sg, &vc_req->status, sizeof(vc_req->status));
-- 
2.23.0



[PATCH v2 0/2] crypto: virtio: Fix two crash issue

2020-05-25 Thread Longpeng(Mike)
Link: https://lkml.org/lkml/2020/1/23/205

Changes since v1:
 - remove some redundant checks [Jason]
 - normalize the commit message [Markus]

Cc: Gonglei 
Cc: Herbert Xu 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: "David S. Miller" 
Cc: Markus Elfring 
Cc: virtualizat...@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Cc: sta...@vger.kernel.org

Longpeng(Mike) (2):
  crypto: virtio: Fix src/dst scatterlist calculation in
__virtio_crypto_skcipher_do_req()
  crypto: virtio: Fix use-after-free in
virtio_crypto_skcipher_finalize_req()

 drivers/crypto/virtio/virtio_crypto_algs.c | 20 +---
 1 file changed, 13 insertions(+), 7 deletions(-)

-- 
2.23.0



INFO: trying to register non-static key in calculate_sigpending

2020-05-25 Thread syzbot
Hello,

syzbot found the following crash on:

HEAD commit:d2f8825a Merge tag 'for_linus' of git://git.kernel.org/pub..
git tree:   upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=12470c2610
kernel config:  https://syzkaller.appspot.com/x/.config?x=b3368ce0cc5f5ace
dashboard link: https://syzkaller.appspot.com/bug?extid=eb1b67ef4194d8f9ebff
compiler:   gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+eb1b67ef4194d8f9e...@syzkaller.appspotmail.com

INFO: trying to register non-static key.
the code is fine but needs lockdep annotation.
turning off the locking correctness validator.
CPU: 0 PID: 1080 Comm: syz-executor.4 Not tainted 5.7.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 
01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x188/0x20d lib/dump_stack.c:118
 assign_lock_key kernel/locking/lockdep.c:913 [inline]
 register_lock_class+0x1664/0x1760 kernel/locking/lockdep.c:1225
 __lock_acquire+0x104/0x4c50 kernel/locking/lockdep.c:4234
 lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4934
 __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
 _raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
 spin_lock_irq include/linux/spinlock.h:378 [inline]
 calculate_sigpending+0x42/0xa0 kernel/signal.c:196
 ret_from_fork+0x8/0x30 arch/x86/entry/entry_64.S:335


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkal...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.


Lieber Freund (Assalamu Alaikum),?

2020-05-25 Thread AISHA GADDAFI
-- 
Dear friend (Assalamu Alaikum),

I came across your e-mail contact during a private search for your help.
My name is Aisha Al-Qaddafi, a single mother and a widow with three
children. I am the only biological daughter of the late Libyan President
(the late Colonel Muammar Gaddafi).

I have investment funds worth twenty-seven million five hundred thousand
United States Dollars ($27,500,000.00) and I need a trustworthy investment
manager/partner because of my current refugee status. You may, however, be
interested in assisting with investment projects in your country, from
where we can build a business relationship in the near future.

I am willing to negotiate with you the ratio of investment to company
profit as the basis for earning profits on the future investment.

If you are willing to handle this project on my behalf, please reply
urgently so that I can give you more information about the investment
funds.

Your urgent reply will be appreciated. Write to me at this e-mail address
( ayishagdda...@mail.ru ) for further discussion.

Kind regards
Mrs. Aisha Al-Qaddafi


linux-next: manual merge of the net-next tree with the bpf tree

2020-05-25 Thread Stephen Rothwell
Hi all,

Today's linux-next merge of the net-next tree got a conflict in:

  net/xdp/xdp_umem.c

between commit:

  b16a87d0aef7 ("xsk: Add overflow check for u64 division, stored into u32")

from the bpf tree and commit:

  2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")

from the net-next tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.

-- 
Cheers,
Stephen Rothwell

diff --cc net/xdp/xdp_umem.c
index 3889bd9aec46,19e59d1a5e9f..
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@@ -389,13 -349,10 +353,10 @@@ static int xdp_umem_reg(struct xdp_ume
if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
return -EINVAL;
  
-   umem->address = (unsigned long)addr;
-   umem->chunk_mask = unaligned_chunks ? XSK_UNALIGNED_BUF_ADDR_MASK
-   : ~((u64)chunk_size - 1);
umem->size = size;
umem->headroom = headroom;
-   umem->chunk_size_nohr = chunk_size - headroom;
+   umem->chunk_size = chunk_size;
 -  umem->npgs = size / PAGE_SIZE;
 +  umem->npgs = (u32)npgs;
umem->pgs = NULL;
umem->user = NULL;
umem->flags = mr->flags;


pgpIcBMmTWURI.pgp
Description: OpenPGP digital signature


[PATCH v2] bluetooth: hci_qca: Fix qca6390 enable failure after warm reboot

2020-05-25 Thread Zijun Hu
Warm reboot cannot restore the qca6390 controller baudrate
to its default due to the lack of a controllable BT_EN pin or power
supply, so firmware download fails after a warm reboot.

Fix this by sending an EDL_SOC_RESET VSC to reset the controller
from the newly added device shutdown implementation.

Signed-off-by: Zijun Hu 
---
 drivers/bluetooth/hci_qca.c | 27 +++
 1 file changed, 27 insertions(+)

diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
index e4a6823..b479e51 100644
--- a/drivers/bluetooth/hci_qca.c
+++ b/drivers/bluetooth/hci_qca.c
@@ -1975,6 +1975,32 @@ static void qca_serdev_remove(struct serdev_device 
*serdev)
	hci_uart_unregister_device(&qcadev->serdev_hu);
 }
 
+static void qca_serdev_shutdown(struct device *dev)
+{
+   int res;
+   int timeout = msecs_to_jiffies(CMD_TRANS_TIMEOUT_MS);
+   struct serdev_device *serdev = to_serdev_device(dev);
+   struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
+   const u8 ibs_wake_cmd[] = { 0xFD };
+   const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 };
+
+   if (qcadev->btsoc_type == QCA_QCA6390) {
+   serdev_device_write_flush(serdev);
+   res = serdev_device_write_buf(serdev,
+   ibs_wake_cmd, sizeof(ibs_wake_cmd));
+   BT_DBG("%s: send ibs_wake_cmd res = %d", __func__, res);
+   serdev_device_wait_until_sent(serdev, timeout);
+   usleep_range(8000, 10000);
+
+   serdev_device_write_flush(serdev);
+   res = serdev_device_write_buf(serdev,
+   edl_reset_soc_cmd, sizeof(edl_reset_soc_cmd));
+   BT_DBG("%s: send edl_reset_soc_cmd res = %d", __func__, res);
+   serdev_device_wait_until_sent(serdev, timeout);
+   usleep_range(8000, 10000);
+   }
+}
+
 static int __maybe_unused qca_suspend(struct device *dev)
 {
struct hci_dev *hdev = container_of(dev, struct hci_dev, dev);
@@ -2100,6 +2126,7 @@ static struct serdev_device_driver qca_serdev_driver = {
.name = "hci_uart_qca",
.of_match_table = of_match_ptr(qca_bluetooth_of_match),
.acpi_match_table = ACPI_PTR(qca_bluetooth_acpi_match),
+   .shutdown = qca_serdev_shutdown,
+   .pm = &qca_pm_ops,
},
 };
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum, a 
Linux Foundation Collaborative Project



Re: [PATCH][V3] arm64: perf: Get the wrong PC value in REGS_ABI_32 mode

2020-05-25 Thread Jiping Ma

Hi, Will

Please help to review the change.

Thanks,
Jiping

On 05/11/2020 10:52 AM, Jiping Ma wrote:

Modified the patch subject and the change description.

The PC value is taken from regs[15] in REGS_ABI_32 mode, but the correct
PC is regs->pc (regs[PERF_REG_ARM64_PC]) in the arm64 kernel. This caused
perf to fail to parse the backtrace of an application in dwarf mode on a
32-bit userspace with a 64-bit kernel.

Signed-off-by: Jiping Ma 
---
  arch/arm64/kernel/perf_regs.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kernel/perf_regs.c b/arch/arm64/kernel/perf_regs.c
index 0bbac61..0ef2880 100644
--- a/arch/arm64/kernel/perf_regs.c
+++ b/arch/arm64/kernel/perf_regs.c
@@ -32,6 +32,10 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
if ((u32)idx == PERF_REG_ARM64_PC)
return regs->pc;
  
+	if (perf_reg_abi(current) == PERF_SAMPLE_REGS_ABI_32
+	    && idx == 15)
+		return regs->pc;
+
return regs->regs[idx];
  }
  




RE: [EXT] Re: [PATCH] arm64: dts: ls1028a: add one more thermal zone support

2020-05-25 Thread Andy Tang


-Original Message-
From: Daniel Lezcano  
Sent: May 25, 2020, 19:08
To: Andy Tang ; shawn...@kernel.org; robh...@kernel.org; 
mark.rutl...@arm.com; catalin.mari...@arm.com; will.dea...@arm.com
Cc: devicet...@vger.kernel.org; linux-arm-ker...@lists.infradead.org; 
linux-kernel@vger.kernel.org
Subject: [EXT] Re: [PATCH] arm64: dts: ls1028a: add one more thermal zone 
support

Caution: EXT Email

On 25/05/2020 09:38, Yuantian Tang wrote:
> There are 2 thermal zones in ls1028a soc. Current dts only includes 
> one. This patch adds the other thermal zone node in dts to enable it.

For my personal information, is there a cooling device for the DDR?

A: There is only one cooling device which is used by core-cluster sensor zone.
So there is no cooling device for DDR.

BR,
Andy 

> Signed-off-by: Yuantian Tang 
> ---
>  .../arm64/boot/dts/freescale/fsl-ls1028a.dtsi | 22 
> ++-
>  1 file changed, 21 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi 
> b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
> index 055f114cf848..bc6f0c0f85da 100644
> --- a/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
> +++ b/arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
> @@ -129,11 +129,31 @@
>   };
>
>   thermal-zones {
> - core-cluster {
> + ddr-controller {
>   polling-delay-passive = <1000>;
>   polling-delay = <5000>;
>   thermal-sensors = <&tmu 0>;
>
> + trips {
> + ddr-ctrler-alert {
> + temperature = <85000>;
> + hysteresis = <2000>;
> + type = "passive";
> + };
> +
> + ddr-ctrler-crit {
> + temperature = <95000>;
> + hysteresis = <2000>;
> + type = "critical";
> + };
> + };
> + };
> +
> + core-cluster {
> + polling-delay-passive = <1000>;
> + polling-delay = <5000>;
> + thermal-sensors = <&tmu 1>;
> +
>   trips {
>   core_cluster_alert: core-cluster-alert {
>   temperature = <85000>;
>


--

 Linaro.org │ Open source software for ARM SoCs



Re: [PATCH v1] clk: mediatek: assign the initial value to clk_init_data of mtk_mux

2020-05-25 Thread Weiyi Lu
On Mon, 2020-05-25 at 11:08 +0200, Matthias Brugger wrote:
> 
> On 25/05/2020 08:41, Weiyi Lu wrote:
> > It'd be dangerous when struct clk_core gains new members.
> > Add the missing initial value to clk_init_data.
> > 
> 
> Sorry I don't really understand this commit message, can please explain.
> In any case if this is a problem, then we probably we should fix it for all 
> drivers.
> Apart from drivers/clk/mediatek/clk-cpumux.c
> 

Actually, we were looking into an android kernel patch "ANDROID: GKI:
clk: Add support for voltage voting" [1]

In this patch, a new member, struct clk_vdd_class *vdd_class, is added
to struct clk_init_data and struct clk_core.

And then in clk_register(...):
core->vdd_class = hw->init->vdd_class;

Many clock APIs check core->vdd_class to select the correct control
flow. So, if we don't assign an initial value to the clk_init_data of the
mtk_mux clock type, something might go wrong. Assigning an initial value
is the easiest way to avoid such problems if any new clock support is
added in the future.

[1] https://android-review.googlesource.com/c/kernel/common/+/1278046

> It's a widely used pattern:
> $ git grep "struct clk_init_data init;"| wc -l
> 235
> 
> Regards,
> Matthias
> 
> > Fixes: a3ae549917f1 ("clk: mediatek: Add new clkmux register API")
> > Cc: 
> > Signed-off-by: Weiyi Lu 
> > ---
> >  drivers/clk/mediatek/clk-mux.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/clk/mediatek/clk-mux.c b/drivers/clk/mediatek/clk-mux.c
> > index 76f9cd0..14e127e 100644
> > --- a/drivers/clk/mediatek/clk-mux.c
> > +++ b/drivers/clk/mediatek/clk-mux.c
> > @@ -160,7 +160,7 @@ struct clk *mtk_clk_register_mux(const struct mtk_mux 
> > *mux,
> >  spinlock_t *lock)
> >  {
> > struct mtk_clk_mux *clk_mux;
> > -   struct clk_init_data init;
> > +   struct clk_init_data init = {};
> > struct clk *clk;
> >  
> > clk_mux = kzalloc(sizeof(*clk_mux), GFP_KERNEL);
> > 



Re: [PATCH v3 1/6] arm64: Detect the ARMv8.4 TTL feature

2020-05-25 Thread Anshuman Khandual
Hello Zhenyu,

On 05/25/2020 06:22 PM, Zhenyu Ye wrote:
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index c4ac0ac25a00..477d84ba1056 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -725,6 +725,7 @@
>  
>  /* id_aa64mmfr2 */
>  #define ID_AA64MMFR2_E0PD_SHIFT  60
> +#define ID_AA64MMFR2_TTL_SHIFT   48
>  #define ID_AA64MMFR2_FWB_SHIFT   40
>  #define ID_AA64MMFR2_AT_SHIFT32
>  #define ID_AA64MMFR2_LVA_SHIFT   16
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 9fac745aa7bb..d993dc6dc7d5 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -244,6 +244,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>  
>  static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>   ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 
> ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
> ID_AA64MMFR2_TTL_SHIFT, 4, 0),
>   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
> ID_AA64MMFR2_FWB_SHIFT, 4, 0),
>   ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
> ID_AA64MMFR2_AT_SHIFT, 4, 0),
>   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
> ID_AA64MMFR2_LVA_SHIFT, 4, 0),
> @@ -1622,6 +1623,16 @@ static const struct arm64_cpu_capabilities 
> arm64_features[] = {
>   .matches = has_cpuid_feature,
>   .cpu_enable = cpu_has_fwb,
>   },

This patch (https://patchwork.kernel.org/patch/11557359/) is adding some
more ID_AA64MMFR2 features including the TTL. I am going to respin parts
of the V4 series patches along with the above mentioned patch. So please
rebase this series accordingly, probably on latest next.

- Anshuman


[PATCH] PCI: qcom: fix several error-handling problems

2020-05-25 Thread wu000273
From: Qiushi Wu 

In function qcom_pcie_probe(), there are several error-handling problems.
1. pm_runtime_put() should be called after pm_runtime_get_sync() fails,
because the refcount is increased even when pm_runtime_get_sync() returns
an error.
2. pm_runtime_disable() is called twice, after the calls to phy_init() and
dw_pcie_host_init() fail.
Fix these problems by calling pm_runtime_put() after pm_runtime_get_sync()
fails, and by removing the redundant pm_runtime_disable() calls.

Fixes: 6e5da6f7d824 ("PCI: qcom: Fix error handling in runtime PM support")
Signed-off-by: Qiushi Wu 
---
 drivers/pci/controller/dwc/pcie-qcom.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c 
b/drivers/pci/controller/dwc/pcie-qcom.c
index 138e1a2d21cc..10393ab607bf 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -1340,8 +1340,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
-   pm_runtime_disable(dev);
-   return ret;
+   goto err_pm_runtime_put;
}
 
pci->dev = dev;
@@ -1401,7 +1400,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 
ret = phy_init(pcie->phy);
if (ret) {
-   pm_runtime_disable(>dev);
goto err_pm_runtime_put;
}
 
@@ -1410,7 +1408,6 @@ static int qcom_pcie_probe(struct platform_device *pdev)
ret = dw_pcie_host_init(pp);
if (ret) {
dev_err(dev, "cannot initialize host\n");
-   pm_runtime_disable(>dev);
goto err_pm_runtime_put;
}
 
-- 
2.17.1



RE: [PATCH] exfat: optimize dir-cache

2020-05-25 Thread Sungjong Seo
> Optimize directory access based on exfat_entry_set_cache.
>  - Hold bh instead of copied d-entry.
>  - Modify bh->data directly instead of the copied d-entry.
>  - Write back the retained bh instead of rescanning the d-entry-set.
> And
>  - Remove unused cache related definitions.
> 
> Signed-off-by: Tetsuhiro Kohada
> 
> ---
>  fs/exfat/dir.c  | 197 +---
>  fs/exfat/exfat_fs.h |  27 +++---
>  fs/exfat/file.c |  15 ++--
>  fs/exfat/inode.c|  53 +---
>  fs/exfat/namei.c|  14 ++--
>  5 files changed, 124 insertions(+), 182 deletions(-)
[snip]
> 
> - es->entries[0].dentry.file.checksum = cpu_to_le16(chksum);
> +void exfat_free_dentry_set(struct exfat_entry_set_cache *es, int sync)
> +{
> + int i;
> 
> - while (num_entries) {
> - /* write per sector base */
> - remaining_byte_in_sector = (1 << sb->s_blocksize_bits) -
off;
> - copy_entries = min_t(int,
> - EXFAT_B_TO_DEN(remaining_byte_in_sector),
> - num_entries);
> - bh = sb_bread(sb, sec);
> - if (!bh)
> - goto err_out;
> - memcpy(bh->b_data + off,
> - (unsigned char *)>entries[0] + buf_off,
> - EXFAT_DEN_TO_B(copy_entries));
> - exfat_update_bh(sb, bh, sync);
> - brelse(bh);
> - num_entries -= copy_entries;
> -
> - if (num_entries) {
> - /* get next sector */
> - if (exfat_is_last_sector_in_cluster(sbi, sec)) {
> - clu = exfat_sector_to_cluster(sbi, sec);
> - if (es->alloc_flag == ALLOC_NO_FAT_CHAIN)
> - clu++;
> - else if (exfat_get_next_cluster(sb, &clu))
> - goto err_out;
> - sec = exfat_cluster_to_sector(sbi, clu);
> - } else {
> - sec++;
> - }
> - off = 0;
> - buf_off += EXFAT_DEN_TO_B(copy_entries);
> - }
> + for (i = 0; i < es->num_bh; i++) {
> + if (es->modified)
> + exfat_update_bh(es->sb, es->bh[i], sync);

Overall, it looks good to me.
However, if "sync" is set, it looks better to return the result of
exfat_update_bh().
Of course, a tiny modification for exfat_update_bh() is also required.

> + brelse(es->bh[i]);
>   }
> -
> - return 0;
> -err_out:
> - return -EIO;
> + kfree(es);
>  }
> 
>  static int exfat_walk_fat_chain(struct super_block *sb, @@ -820,34
> +786,40 @@ static bool exfat_validate_entry(unsigned int type,
>   }
>  }
> 
> +struct exfat_dentry *exfat_get_dentry_cached(
> + struct exfat_entry_set_cache *es, int num) {
> + int off = es->start_off + num * DENTRY_SIZE;
> + struct buffer_head *bh = es->bh[EXFAT_B_TO_BLK(off, es->sb)];
> + char *p = bh->b_data + EXFAT_BLK_OFFSET(off, es->sb);

In order to prevent illegal accesses to bh and dentries, it would be better
to validate num and bh.

> +
> + return (struct exfat_dentry *)p;
> +}
> +
>  /*
>   * Returns a set of dentries for a file or dir.
>   *
> - * Note that this is a copy (dump) of dentries so that user should
> - * call write_entry_set() to apply changes made in this entry set
> - * to the real device.
> + * Note It provides a direct pointer to bh->data via
> exfat_get_dentry_cached().
> + * User should call exfat_get_dentry_set() after setting 'modified' to
> + apply
> + * changes made in this entry set to the real device.
>   *
>   * in:
[snip]
>   /* check if the given file ID is opened */ @@ -153,12 +151,15 @@
> int __exfat_truncate(struct inode *inode, loff_t new_size)
>   /* update the directory entry */
>   if (!evict) {
>   struct timespec64 ts;
> + struct exfat_dentry *ep, *ep2;
> + struct exfat_entry_set_cache *es;
> 
>   es = exfat_get_dentry_set(sb, &(ei->dir), ei->entry,
> - ES_ALL_ENTRIES, &ep);
> + ES_ALL_ENTRIES);
>   if (!es)
>   return -EIO;
> - ep2 = ep + 1;
> + ep = exfat_get_dentry_cached(es, 0);
> + ep2 = exfat_get_dentry_cached(es, 1);
> 
>   ts = current_time(inode);
>   exfat_set_entry_time(sbi, ,
> @@ -185,10 +186,8 @@ int __exfat_truncate(struct inode *inode, loff_t
> new_size)
>   ep2->dentry.stream.start_clu = EXFAT_FREE_CLUSTER;
>   }
> 
> - if (exfat_update_dir_chksum_with_entry_set(sb, es,
> - inode_needs_sync(inode)))
> - return -EIO;
> - kfree(es);
> + exfat_update_dir_chksum_with_entry_set(es);
> + 

Re: [PATCH v8 0/5] support reserving crashkernel above 4G on arm64 kdump

2020-05-25 Thread chenzhou
Hi Baoquan,


Thanks for your suggestions.

You are right, some details should be made in the commit log.


Thanks,

Chen Zhou


On 2020/5/26 9:42, Baoquan He wrote:
> On 05/21/20 at 05:38pm, Chen Zhou wrote:
>> This patch series enable reserving crashkernel above 4G in arm64.
>>
>> There are following issues in arm64 kdump:
>> 1. We use crashkernel=X to reserve crashkernel below 4G, which will fail
>> when there is not enough low memory.
>> 2. Currently, crashkernel=Y@X can be used to reserve crashkernel above 4G;
>> in this case, if swiotlb or DMA buffers are required, the crash dump kernel
>> will fail to boot because there is no low memory available for allocation.
>>
>> To solve these issues, introduce crashkernel=X,low to reserve specified
>> size low memory.
>> Crashkernel=X tries to reserve memory for the crash dump kernel under
>> 4G. If crashkernel=Y,low is specified simultaneously, reserve the specified
>> size of low memory for crash dump kernel devices first and then reserve
>> memory above 4G.
>>
>> When crashkernel is reserved above 4G in memory, that is, crashkernel=X,low
>> is specified simultaneously, kernel should reserve specified size low memory
>> for crash dump kernel devices. So there may be two crash kernel regions, one
>> is below 4G, the other is above 4G.
>> In order to distinguish it from the high region and avoid affecting the use of
>> kexec-tools, rename the low region as "Crash kernel (low)", and add DT 
>> property
>> "linux,low-memory-range" to crash dump kernel's dtb to pass the low region.
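Assuming the scheme described above, the reserved regions and the DT property might look like the following; the addresses, sizes, and cell layout here are illustrative guesses, not values taken from this series:

```
# kernel command line
crashkernel=1G crashkernel=256M,low

# resulting /proc/iomem excerpt (hypothetical addresses)
a0000000-afffffff : Crash kernel (low)
2080000000-20bfffffff : Crash kernel

/* chosen node passed to the crash dump kernel's dtb */
chosen {
        linux,low-memory-range = <0x0 0xa0000000 0x0 0x10000000>;
};
```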
>>
>> Besides, we need to modify kexec-tools:
>> arm64: kdump: add another DT property to crash dump kernel's dtb(see [1])
>>
>> The previous changes and discussions can be retrieved from:
>>
>> Changes since [v7]
>> - Move x86 CRASH_ALIGN to 2M
>> Suggested by Dave and do some test, move x86 CRASH_ALIGN to 2M.
> OK, moving x86 CRASH_ALIGN to 2M is suggested by Dave, because
> CONFIG_PHYSICAL_ALIGN can be selected from 2M to 16M, so 2M seems good.
> But, anyway, we should state the reason why it needs to be changed in the
> commit log.
>
>
> arch/x86/Kconfig:
> config PHYSICAL_ALIGN
> hex "Alignment value to which kernel should be aligned"
> default "0x200000"
> range 0x2000 0x1000000 if X86_32
> range 0x200000 0x1000000 if X86_64
>
>> - Update Documentation/devicetree/bindings/chosen.txt 
>> Add corresponding documentation to 
>> Documentation/devicetree/bindings/chosen.txt suggested by Arnd.
>> - Add Tested-by from Jhon and pk
>>
>> Changes since [v6]
>> - Fix build errors reported by kbuild test robot.
>>
>> Changes since [v5]
>> - Move reserve_crashkernel_low() into kernel/crash_core.c.
>> - Delete crashkernel=X,high.
> And the deletion of crashkernel=X,high needs to be explained too. Otherwise
> people reading the commit have to check why themselves. I didn't follow
> the old version, so I can't see why ,high can't be specified explicitly.
>
>> - Modify crashkernel=X,low.
>> If crashkernel=X,low is specified simultaneously, reserve the specified size of
>> low memory for crash dump kernel devices first and then reserve memory above 
>> 4G.
>> In addition, rename crashk_low_res as "Crash kernel (low)" for arm64, and 
>> then
>> pass to crash dump kernel by DT property "linux,low-memory-range".
>> - Update Documentation/admin-guide/kdump/kdump.rst.
>>
>> Changes since [v4]
>> - Reimplement memblock_cap_memory_ranges for multiple ranges by Mike.
>>
>> Changes since [v3]
>> - Add memblock_cap_memory_ranges back for multiple ranges.
>> - Fix some compiling warnings.
>>
>> Changes since [v2]
>> - Split patch "arm64: kdump: support reserving crashkernel above 4G" as
>> two. Put "move reserve_crashkernel_low() into kexec_core.c" in a separate
>> patch.
>>
>> Changes since [v1]:
>> - Move common reserve_crashkernel_low() code into kernel/kexec_core.c.
>> - Remove memblock_cap_memory_ranges() i added in v1 and implement that
>> in fdt_enforce_memory_region().
>> There are at most two crash kernel regions, for two crash kernel regions
>> case, we cap the memory range [min(regs[*].start), max(regs[*].end)]
>> and then remove the memory range in the middle.
>>
>> [1]: http://lists.infradead.org/pipermail/kexec/2020-May/025128.html
>> [v1]: https://lkml.org/lkml/2019/4/2/1174
>> [v2]: https://lkml.org/lkml/2019/4/9/86
>> [v3]: https://lkml.org/lkml/2019/4/9/306
>> [v4]: https://lkml.org/lkml/2019/4/15/273
>> [v5]: https://lkml.org/lkml/2019/5/6/1360
>> [v6]: https://lkml.org/lkml/2019/8/30/142
>> [v7]: https://lkml.org/lkml/2019/12/23/411
>>
>> Chen Zhou (5):
>>   x86: kdump: move reserve_crashkernel_low() into crash_core.c
>>   arm64: kdump: reserve crashkenel above 4G for crash dump kernel
>>   arm64: kdump: add memory for devices by DT property, low-memory-range
>>   kdump: update Documentation about crashkernel on arm64
>>   dt-bindings: chosen: Document linux,low-memory-range for arm64 kdump
>>
>>  Documentation/admin-guide/kdump/kdump.rst | 13 ++-
>>  .../admin-guide/kernel-parameters.txt   

Re: [RESEND] kunit: use --build_dir=.kunit as default

2020-05-25 Thread Vitor Massaru Iha
On Mon, 2020-05-25 at 22:52 -0300, Vitor Massaru Iha wrote:
> Hi Shuah,
> 
> On Fri, 2020-05-22 at 16:40 -0600, shuah wrote:
> > On 4/16/20 5:11 PM, Brendan Higgins wrote:
> > > On Tue, Apr 14, 2020 at 4:09 PM Vitor Massaru Iha <
> > > vi...@massaru.org> wrote:
> > > > To make KUnit easier to use, and to avoid overwriting object
> > > > and
> > > > .config files, the default KUnit build directory is set to
> > > > .kunit
> > > > 
> > > >   * Related bug: 
> > > > https://bugzilla.kernel.org/show_bug.cgi?id=205221
> > > > 
> > > > Signed-off-by: Vitor Massaru Iha 
> > > 
> > > Reviewed-by: Brendan Higgins 
> > > 
> > 
> > Applied the patch to kselftest/kunit on top of
> > 
> > 45ba7a893ad89114e773b3dc32f6431354c465d6
> > kunit: kunit_tool: Separate out config/build/exec/parse
> > 
> > from David's work resolving merge conflicts. Please check if it is
> > sane.
> > 
> > thanks,
> > -- Shuah
> 
> The kunit branch had some problems related to indentation. KUnit's code
> has mixed indentation, and because of that, the conflict resolution ended
> up breaking the Python code.
> 
> In addition, I found a bug related to the creation of the .kunitconfig
> file inside the default build directory.

This is actually related to the other patch "kunit: use KUnit defconfig
by default"


>  Should I send the patch again?
> Or should I make a bugfix patch?
> 
> BR,
> Vitor
> 



[PATCH] Input: elantech - Remove read/write registers in attr.

2020-05-25 Thread Jingle.Wu
New Elan ICs are not accessed through these specific registers.

Signed-off-by: Jingle Wu 
---
 drivers/input/mouse/elantech.c | 20 
 1 file changed, 20 deletions(-)

diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
index 2d8434b7b623..fa1aa5f441f5 100644
--- a/drivers/input/mouse/elantech.c
+++ b/drivers/input/mouse/elantech.c
@@ -1280,31 +1280,11 @@ static ssize_t elantech_set_int_attr(struct psmouse 
*psmouse,
elantech_show_int_attr,\
elantech_set_int_attr)
 
-ELANTECH_INT_ATTR(reg_07, 0x07);
-ELANTECH_INT_ATTR(reg_10, 0x10);
-ELANTECH_INT_ATTR(reg_11, 0x11);
-ELANTECH_INT_ATTR(reg_20, 0x20);
-ELANTECH_INT_ATTR(reg_21, 0x21);
-ELANTECH_INT_ATTR(reg_22, 0x22);
-ELANTECH_INT_ATTR(reg_23, 0x23);
-ELANTECH_INT_ATTR(reg_24, 0x24);
-ELANTECH_INT_ATTR(reg_25, 0x25);
-ELANTECH_INT_ATTR(reg_26, 0x26);
 ELANTECH_INFO_ATTR(debug);
 ELANTECH_INFO_ATTR(paritycheck);
 ELANTECH_INFO_ATTR(crc_enabled);
 
 static struct attribute *elantech_attrs[] = {
-	&elantech_attr_reg_07.dattr.attr,
-	&elantech_attr_reg_10.dattr.attr,
-	&elantech_attr_reg_11.dattr.attr,
-	&elantech_attr_reg_20.dattr.attr,
-	&elantech_attr_reg_21.dattr.attr,
-	&elantech_attr_reg_22.dattr.attr,
-	&elantech_attr_reg_23.dattr.attr,
-	&elantech_attr_reg_24.dattr.attr,
-	&elantech_attr_reg_25.dattr.attr,
-	&elantech_attr_reg_26.dattr.attr,
	&elantech_attr_debug.dattr.attr,
	&elantech_attr_paritycheck.dattr.attr,
	&elantech_attr_crc_enabled.dattr.attr,
-- 
2.17.1



[PATCH] Input: elantech - Remove read registers in attr

2020-05-25 Thread Jingle.Wu
Signed-off-by: Jingle Wu
---
 drivers/input/mouse/elantech.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
index 2d8434b7b623..5bdf2b19118e 100644
--- a/drivers/input/mouse/elantech.c
+++ b/drivers/input/mouse/elantech.c
@@ -1280,7 +1280,7 @@ static ssize_t elantech_set_int_attr(struct psmouse 
*psmouse,
elantech_show_int_attr,\
elantech_set_int_attr)
 
-ELANTECH_INT_ATTR(reg_07, 0x07);
+/*ELANTECH_INT_ATTR(reg_07, 0x07);
 ELANTECH_INT_ATTR(reg_10, 0x10);
 ELANTECH_INT_ATTR(reg_11, 0x11);
 ELANTECH_INT_ATTR(reg_20, 0x20);
@@ -1289,13 +1289,13 @@ ELANTECH_INT_ATTR(reg_22, 0x22);
 ELANTECH_INT_ATTR(reg_23, 0x23);
 ELANTECH_INT_ATTR(reg_24, 0x24);
 ELANTECH_INT_ATTR(reg_25, 0x25);
-ELANTECH_INT_ATTR(reg_26, 0x26);
+ELANTECH_INT_ATTR(reg_26, 0x26);*/
 ELANTECH_INFO_ATTR(debug);
 ELANTECH_INFO_ATTR(paritycheck);
 ELANTECH_INFO_ATTR(crc_enabled);
 
 static struct attribute *elantech_attrs[] = {
-	&elantech_attr_reg_07.dattr.attr,
+	/*&elantech_attr_reg_07.dattr.attr,
	&elantech_attr_reg_10.dattr.attr,
	&elantech_attr_reg_11.dattr.attr,
	&elantech_attr_reg_20.dattr.attr,
@@ -1304,7 +1304,7 @@ static struct attribute *elantech_attrs[] = {
	&elantech_attr_reg_23.dattr.attr,
	&elantech_attr_reg_24.dattr.attr,
	&elantech_attr_reg_25.dattr.attr,
-	&elantech_attr_reg_26.dattr.attr,
+	&elantech_attr_reg_26.dattr.attr,*/
	&elantech_attr_debug.dattr.attr,
	&elantech_attr_paritycheck.dattr.attr,
	&elantech_attr_crc_enabled.dattr.attr,
-- 
2.17.1



Re: [RFC PATCH 0/7] kvm: arm64: Support stage2 hardware DBM

2020-05-25 Thread zhukeqian
Hi Marc,

On 2020/5/25 23:44, Marc Zyngier wrote:
> On 2020-05-25 12:23, Keqian Zhu wrote:
>> This patch series add support for stage2 hardware DBM, and it is only
>> used for dirty log for now.
>>
>> It works well under some migration test cases, including VM with 4K
>> pages or 2M THP. I checked the SHA256 hash digest of all memory and
>> they keep same for source VM and destination VM, which means no dirty
>> pages is missed under hardware DBM.
>>
>> However, there are some known issues not solved.
>>
>> 1. Some mechanisms that rely on "write permission fault" become invalid,
>>such as kvm_set_pfn_dirty and "mmap page sharing".
>>
>>kvm_set_pfn_dirty is called in user_mem_abort when guest issues write
>>fault. This guarantees physical page will not be dropped directly when
>>host kernel recycle memory. After using hardware dirty management, we
>>have no chance to call kvm_set_pfn_dirty.
> 
> Then you will end-up with memory corruption under memory pressure.
> This also breaks things like CoW, which we depend on.
>
Yes, these problems look knotty. But I think x86 PML support will face these
problems too. I believe there must be some methods to solve them.
>>
>>For "mmap page sharing" mechanism, host kernel will allocate a new
>>physical page when guest writes a page that is shared with other page
>>table entries. After using hardware dirty management, we have no chance
>>to do this too.
>>
>>I need to do some survey on how stage1 hardware DBM solve these problems.
>>It helps if anyone can figure it out.
>>
>> 2. Page Table Modification Races: Though I have found and solved some data
>>races when kernel changes page table entries, I still doubt that there
>>are data races I am not aware of. It's great if anyone can figure them 
>> out.
>>
>> 3. Performance: Under Kunpeng 920 platform, for every 64GB memory, KVM
>>consumes about 40ms to traverse all PTEs to collect dirty log. It will
>>cause unbearable downtime for migration if memory size is too big. I will
>>try to solve this problem in Patch v1.
> 
> This, in my opinion, is why Stage-2 DBM is fairly useless.
> From a performance perspective, this is the worse possible
> situation. You end up continuously scanning page tables, at
> an arbitrary rate, without a way to evaluate the fault rate.
> 
> One thing S2-DBM would be useful for is SVA, where a device
> write would mark the S2 PTs dirty as they are shared between
> CPU and SMMU. Another thing is SPE, which is essentially a DMA
> agent using the CPU's PTs.
> 
> But on its own, and just to log the dirty pages, S2-DBM is
> pretty rubbish. I wish arm64 had something like Intel's PML,
> which looks far more interesting for the purpose of tracking
> accesses.

Sure, PML is a better solution for hardware management of dirty state.
However, compared to optimizing hardware, optimizing software has a
shorter cycle time.

Here I have an optimization in mind to solve it. Scanning page tables
can be done in parallel, which can greatly reduce time consumption. As there
is no communication between the parallel CPUs, we can achieve a high speedup
ratio.


> 
> Thanks,
> 
> M.
Thanks,
Keqian


Re: [v2] workqueue: Fix double kfree for rescuer

2020-05-25 Thread qzhang2

Thanks for your advice.
Is the rescuer null pointer intentionally passed by a data structure?
Also, I read the workqueue code again: when destroy_workqueue() is
called, after "wq->rescuer = NULL" is executed, the scenario described
below does not happen:


"if non-null pointers (according to valid rescuer objects) are
occasionally passed by the corresponding data structure member

for the callback function "rcu_free_wq"."


On 5/25/20 6:40 PM, Markus Elfring wrote:

I see, kfree does nothing with null pointers and direct return.
but again kfree is not a good suggestion.


I have got the impression that the implementation detail is important here
if non-null pointers (according to valid rescuer objects) are occasionally
passed by the corresponding data structure member for the callback
function “rcu_free_wq”.
Can another clarification attempt reduce unwanted confusion for this patch 
review?

Regards,
Markus



Re: [f2fs-dev] [PATCH v3] f2fs: avoid infinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Jaegeuk Kim
On 05/26, Chao Yu wrote:
> On 2020/5/26 9:11, Chao Yu wrote:
> > On 2020/5/25 23:06, Jaegeuk Kim wrote:
> >> On 05/25, Chao Yu wrote:
> >>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
>  Shutdown test sometimes hangs, since it keeps trying to flush dirty 
>  node pages
> >>>
> >>> IMO, for umount case, we should drop dirty reference and dirty pages on 
> >>> meta/data
> >>> pages like we change for node pages to avoid potential dead loop...
> >>
> >> I believe we're doing for them. :P
> > 
> > Actually, I mean do we need to drop dirty meta/data pages explicitly as 
> > below:
> > 
> > diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> > index 3dc3ac6fe143..4c08fd0a680a 100644
> > --- a/fs/f2fs/checkpoint.c
> > +++ b/fs/f2fs/checkpoint.c
> > @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
> > 
> > trace_f2fs_writepage(page, META);
> > 
> > -   if (unlikely(f2fs_cp_error(sbi)))
> > +   if (unlikely(f2fs_cp_error(sbi))) {
> > +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> > +   ClearPageUptodate(page);
> > +   dec_page_count(sbi, F2FS_DIRTY_META);
> > +   unlock_page(page);
> > +   return 0;
> > +   }
> > goto redirty_out;
> > +   }
> > if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
> > goto redirty_out;
> > if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
> > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> > index 48a622b95b76..94b342802513 100644
> > --- a/fs/f2fs/data.c
> > +++ b/fs/f2fs/data.c
> > @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, 
> > int *submitted,
> > 
> > /* we should bypass data pages to proceed the kworker jobs */
> > if (unlikely(f2fs_cp_error(sbi))) {
> > +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> > +   ClearPageUptodate(page);
> > +   inode_dec_dirty_pages(inode);
> > +   unlock_page(page);
> > +   return 0;
> > +   }
> 
> Oh, I noticed that previously we drop non-directory inodes' dirty pages
> directly;
> however, during umount, we'd better drop directory inodes' dirty pages as
> well, right?

Hmm, I remember I dropped them before. Need to double check.

> 
> > mapping_set_error(page->mapping, -EIO);
> > /*
> >  * don't drop any dirty dentry pages for keeping latest
> > 
> >>
> >>>
> >>> Thanks,
> >>>
>  in an infinite loop. Let's drop dirty pages at umount in that case.
> 
>  Signed-off-by: Jaegeuk Kim 
>  ---
>  v3:
>   - fix wrong unlock
> 
>  v2:
>   - fix typos
> 
>   fs/f2fs/node.c | 9 -
>   1 file changed, 8 insertions(+), 1 deletion(-)
> 
>  diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>  index e632de10aedab..e0bb0f7e0506e 100644
>  --- a/fs/f2fs/node.c
>  +++ b/fs/f2fs/node.c
>  @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, 
>  bool atomic, bool *submitted,
>   
>   trace_f2fs_writepage(page, NODE);
>   
>  -if (unlikely(f2fs_cp_error(sbi)))
>  +if (unlikely(f2fs_cp_error(sbi))) {
>  +if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>  +ClearPageUptodate(page);
>  +dec_page_count(sbi, F2FS_DIRTY_NODES);
>  +unlock_page(page);
>  +return 0;
>  +}
>   goto redirty_out;
>  +}
>   
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto redirty_out;
> 
> >> .
> >>
> > 
> > 
> > ___
> > Linux-f2fs-devel mailing list
> > linux-f2fs-de...@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
> > .
> > 


[PATCH] f2fs: compress: don't compress any datas after cp stop

2020-05-25 Thread Chao Yu
During compressed data writeback, we need to drop dirty pages like we do
for non-compressed pages if cp stops; however, it is not necessary to
compress any data in that case, so let's detect the cp stop condition in
cluster_may_compress() to avoid redundant compression and let the
following f2fs_write_raw_pages() drop dirty pages correctly.

Signed-off-by: Chao Yu 
---
 fs/f2fs/compress.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index bf152c0d79fe..a53578a89211 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -849,6 +849,8 @@ static bool cluster_may_compress(struct compress_ctx *cc)
return false;
if (!f2fs_cluster_is_full(cc))
return false;
+   if (unlikely(f2fs_cp_error(F2FS_I_SB(cc->inode))))
+   return false;
return __cluster_may_compress(cc);
 }
 
-- 
2.18.0.rc1



Re: [RESEND] kunit: use --build_dir=.kunit as default

2020-05-25 Thread Vitor Massaru Iha
Hi Shuah,

On Fri, 2020-05-22 at 16:40 -0600, shuah wrote:
> On 4/16/20 5:11 PM, Brendan Higgins wrote:
> > On Tue, Apr 14, 2020 at 4:09 PM Vitor Massaru Iha <
> > vi...@massaru.org> wrote:
> > > To make KUnit easier to use, and to avoid overwriting object and
> > > .config files, the default KUnit build directory is set to .kunit
> > > 
> > >   * Related bug: 
> > > https://bugzilla.kernel.org/show_bug.cgi?id=205221
> > > 
> > > Signed-off-by: Vitor Massaru Iha 
> > 
> > Reviewed-by: Brendan Higgins 
> > 
> 
> Applied the patch to kselftest/kunit on top of
> 
> 45ba7a893ad89114e773b3dc32f6431354c465d6
> kunit: kunit_tool: Separate out config/build/exec/parse
> 
> from David's work resolving merge conflicts. Please check if it is
> sane.
> 
> thanks,
> -- Shuah

The kunit branch had some problems related to indentation. KUnit's code
has mixed indentation, and with that, the conflict resolution ended up
breaking the Python code.

In addition, I found a bug related to the creation of the .kunitconfig
file inside the default build directory. Should I send the patch again?
Or do I make a bugfix patch?

BR,
Vitor



Re: [RFC PATCH 0/5] x86/hw_breakpoint: protects more cpu entry data

2020-05-25 Thread Lai Jiangshan
On Mon, May 25, 2020 at 11:27 PM Peter Zijlstra  wrote:
>
> On Mon, May 25, 2020 at 02:50:57PM +, Lai Jiangshan wrote:
> > Hello
> >
> > The patchset is based on (tag: entry-v9-the-rest, tglx-devel/x86/entry).
> > And it is complement of 3ea11ac991d
> > ("x86/hw_breakpoint: Prevent data breakpoints on cpu_entry_area").
> >
> > After reading the code, we can see that more data needs to be protected
> > against hw_breakpoint, otherwise it may cause
> > dangerous/recursive/unwanted #DB.
> >
> >
> > Lai Jiangshan (5):
> >   x86/hw_breakpoint: add within_area() to check data breakpoints
> >   x86/hw_breakpoint: Prevent data breakpoints on direct GDT
> >   x86/hw_breakpoint: Prevent data breakpoints on per_cpu cpu_tss_rw
>
> I think we can actually get rid of that #DB IST stack frobbing, also see
> patches linked below.

Hi, Peter

I reviewed that patchset before. It is everything I want, but it still
didn't remove IST-shifting. I removed it in V2.

>
> >   x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask
>
> Should we disallow the full structure just to be sure?

Sure, just did it as you suggested, thanks!

>
> >   x86/hw_breakpoint: Prevent data breakpoints on debug_idt_table
>
> That's going away, see:

Yes, so I added a note in the patch, saying "Please drop this patch
when Peter's work to remove debug_idt_table is merged."

I directly drop the patch in V2.

Thank you.
Lai


>
> https://lkml.kernel.org/r/20200522204738.645043...@infradead.org
>
> But yes, nice!
>


Re: [PATCH v8 0/5] support reserving crashkernel above 4G on arm64 kdump

2020-05-25 Thread Baoquan He
On 05/21/20 at 05:38pm, Chen Zhou wrote:
> This patch series enable reserving crashkernel above 4G in arm64.
> 
> There are the following issues in arm64 kdump:
> 1. We use crashkernel=X to reserve crashkernel below 4G, which will fail
> when there is not enough low memory.
> 2. Currently, crashkernel=Y@X can be used to reserve crashkernel above 4G,
> in this case, if swiotlb or DMA buffers are required, crash dump kernel
> will fail to boot because there is no low memory available for allocation.
> 
> To solve these issues, introduce crashkernel=X,low to reserve a specified
> amount of low memory.
> crashkernel=X tries to reserve memory for the crash dump kernel below
> 4G. If crashkernel=Y,low is specified simultaneously, reserve the
> specified amount of low memory for crash dump kernel devices first and
> then reserve memory above 4G.
> 
> When crashkernel is reserved above 4G in memory, that is, when crashkernel=X,low
> is specified simultaneously, the kernel should reserve the specified amount of
> low memory for crash dump kernel devices. So there may be two crash kernel
> regions: one below 4G, the other above 4G.
> In order to distinguish it from the high region and make no difference to the
> use of kexec-tools, rename the low region as "Crash kernel (low)", and add the
> DT property "linux,low-memory-range" to the crash dump kernel's dtb to pass the
> low region.
> 
> Besides, we need to modify kexec-tools:
> arm64: kdump: add another DT property to crash dump kernel's dtb(see [1])
> 
> The previous changes and discussions can be retrieved from:
> 
> Changes since [v7]
> - Move x86 CRASH_ALIGN to 2M
> As suggested by Dave, and after some testing, move x86 CRASH_ALIGN to 2M.

OK, moving x86 CRASH_ALIGN to 2M is suggested by Dave. Because
CONFIG_PHYSICAL_ALIGN can be selected from 2M to 16M, 2M seems good.
But, anyway, we should state the reason why it needs to be changed in the
commit log.


arch/x86/Kconfig:
config PHYSICAL_ALIGN
hex "Alignment value to which kernel should be aligned"
default "0x20"
range 0x2000 0x100 if X86_32
range 0x20 0x100 if X86_64

> - Update Documentation/devicetree/bindings/chosen.txt 
> Add corresponding documentation to 
> Documentation/devicetree/bindings/chosen.txt suggested by Arnd.
> - Add Tested-by from Jhon and pk
> 
> Changes since [v6]
> - Fix build errors reported by kbuild test robot.
> 
> Changes since [v5]
> - Move reserve_crashkernel_low() into kernel/crash_core.c.
> - Delete crashkernel=X,high.

And the deletion of crashkernel=X,high needs to be mentioned too. Otherwise
people reading the commit have to find out why themselves. I didn't follow
the old version, so I can't see why ,high can't be specified explicitly.

> - Modify crashkernel=X,low.
> If crashkernel=X,low is specified simultaneously, reserve the specified amount
> of low memory for crash dump kernel devices first and then reserve memory above 
> 4G.
> In addition, rename crashk_low_res as "Crash kernel (low)" for arm64, and then
> pass to crash dump kernel by DT property "linux,low-memory-range".
> - Update Documentation/admin-guide/kdump/kdump.rst.
> 
> Changes since [v4]
> - Reimplement memblock_cap_memory_ranges for multiple ranges by Mike.
> 
> Changes since [v3]
> - Add memblock_cap_memory_ranges back for multiple ranges.
> - Fix some compiling warnings.
> 
> Changes since [v2]
> - Split patch "arm64: kdump: support reserving crashkernel above 4G" as
> two. Put "move reserve_crashkernel_low() into kexec_core.c" in a separate
> patch.
> 
> Changes since [v1]:
> - Move common reserve_crashkernel_low() code into kernel/kexec_core.c.
> - Remove memblock_cap_memory_ranges() i added in v1 and implement that
> in fdt_enforce_memory_region().
> There are at most two crash kernel regions, for two crash kernel regions
> case, we cap the memory range [min(regs[*].start), max(regs[*].end)]
> and then remove the memory range in the middle.
> 
> [1]: http://lists.infradead.org/pipermail/kexec/2020-May/025128.html
> [v1]: https://lkml.org/lkml/2019/4/2/1174
> [v2]: https://lkml.org/lkml/2019/4/9/86
> [v3]: https://lkml.org/lkml/2019/4/9/306
> [v4]: https://lkml.org/lkml/2019/4/15/273
> [v5]: https://lkml.org/lkml/2019/5/6/1360
> [v6]: https://lkml.org/lkml/2019/8/30/142
> [v7]: https://lkml.org/lkml/2019/12/23/411
> 
> Chen Zhou (5):
>   x86: kdump: move reserve_crashkernel_low() into crash_core.c
>   arm64: kdump: reserve crashkenel above 4G for crash dump kernel
>   arm64: kdump: add memory for devices by DT property, low-memory-range
>   kdump: update Documentation about crashkernel on arm64
>   dt-bindings: chosen: Document linux,low-memory-range for arm64 kdump
> 
>  Documentation/admin-guide/kdump/kdump.rst | 13 ++-
>  .../admin-guide/kernel-parameters.txt | 12 ++-
>  Documentation/devicetree/bindings/chosen.txt  | 25 ++
>  arch/arm64/kernel/setup.c |  8 +-
>  arch/arm64/mm/init.c  | 61 -
>  arch/x86/kernel/setup.c   | 66 ++
>  

[RFC PATCH V2 4/7] x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask

2020-05-25 Thread Lai Jiangshan
The percpu user_pcid_flush_mask is used for CPU entry.
If a data breakpoint is set on it, it will cause an unwanted #DB.
Protect the full cpu_tlbstate structure to be sure.

There are some other percpu data used in CPU entry, but they are
either in already-protected cpu_tss_rw or are safe to trigger #DB
(espfix_waddr, espfix_stack).

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kernel/hw_breakpoint.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index 7d3966b9aa12..67ef8e24af6a 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* Per cpu debug control register value */
 DEFINE_PER_CPU(unsigned long, cpu_dr7);
@@ -268,6 +269,16 @@ static inline bool within_cpu_entry(unsigned long addr, 
unsigned long end)
(unsigned long)&per_cpu(cpu_tss_rw, cpu),
sizeof(struct tss_struct)))
return true;
+
+   /*
+* cpu_tlbstate.user_pcid_flush_mask is used for CPU entry.
+* If a data breakpoint on it, it will cause an unwanted #DB.
+* Protect the full cpu_tlbstate structure to be sure.
+*/
+   if (within_area(addr, end,
+   (unsigned long)&per_cpu(cpu_tlbstate, cpu),
+   sizeof(struct tlb_state)))
+   return true;
}
 
return false;
-- 
2.20.1



[RFC PATCH V2 2/7] x86/hw_breakpoint: Prevent data breakpoints on direct GDT

2020-05-25 Thread Lai Jiangshan
A data breakpoint on the GDT is terrifying and should be avoided.
The GDT in the CPU entry area is already protected. The direct GDT
should also be protected, although it is seldom used and only
used for a short time.

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kernel/hw_breakpoint.c | 30 ++
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index c149c7b29ac3..f859095c1b6c 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* Per cpu debug control register value */
 DEFINE_PER_CPU(unsigned long, cpu_dr7);
@@ -237,13 +238,26 @@ static inline bool within_area(unsigned long addr, 
unsigned long end,
 }
 
 /*
- * Checks whether the range from addr to end, inclusive, overlaps the CPU
- * entry area range.
+ * Checks whether the range from addr to end, inclusive, overlaps the fixed
+ * mapped CPU entry area range or other ranges used for CPU entry.
  */
-static inline bool within_cpu_entry_area(unsigned long addr, unsigned long end)
+static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
 {
-   return within_area(addr, end, CPU_ENTRY_AREA_BASE,
-  CPU_ENTRY_AREA_TOTAL_SIZE);
+   int cpu;
+
+   /* CPU entry area is always used for CPU entry */
+   if (within_area(addr, end, CPU_ENTRY_AREA_BASE,
+   CPU_ENTRY_AREA_TOTAL_SIZE))
+   return true;
+
+   for_each_possible_cpu(cpu) {
+   /* The original rw GDT is being used after load_direct_gdt() */
+   if (within_area(addr, end, (unsigned long)get_cpu_gdt_rw(cpu),
+   GDT_SIZE))
+   return true;
+   }
+
+   return false;
 }
 
 static int arch_build_bp_info(struct perf_event *bp,
@@ -257,12 +271,12 @@ static int arch_build_bp_info(struct perf_event *bp,
return -EINVAL;
 
/*
-* Prevent any breakpoint of any type that overlaps the
-* cpu_entry_area.  This protects the IST stacks and also
+* Prevent any breakpoint of any type that overlaps the CPU
+* entry area and data.  This protects the IST stacks and also
 * reduces the chance that we ever find out what happens if
 * there's a data breakpoint on the GDT, IDT, or TSS.
 */
-   if (within_cpu_entry_area(attr->bp_addr, bp_end))
+   if (within_cpu_entry(attr->bp_addr, bp_end))
return -EINVAL;
 
hw->address = attr->bp_addr;
-- 
2.20.1



[RFC PATCH V2 7/7] x86/entry: remove DB1 stack and DB2 hole from cpu entry area

2020-05-25 Thread Lai Jiangshan
The IST-shift code has been removed from the entry code, so #DB will
stick to the DB stack only. Thus we remove the DB1 stack and the DB2 hole.

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/include/asm/cpu_entry_area.h | 12 +++-
 arch/x86/kernel/asm-offsets_64.c  |  4 
 arch/x86/kernel/dumpstack_64.c| 10 +++---
 arch/x86/mm/cpu_entry_area.c  |  4 +---
 4 files changed, 7 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/cpu_entry_area.h 
b/arch/x86/include/asm/cpu_entry_area.h
index 02c0078d3787..8902fdb7de13 100644
--- a/arch/x86/include/asm/cpu_entry_area.h
+++ b/arch/x86/include/asm/cpu_entry_area.h
@@ -11,15 +11,11 @@
 #ifdef CONFIG_X86_64
 
 /* Macro to enforce the same ordering and stack sizes */
-#define ESTACKS_MEMBERS(guardsize, db2_holesize)\
+#define ESTACKS_MEMBERS(guardsize) \
charDF_stack_guard[guardsize];  \
charDF_stack[EXCEPTION_STKSZ];  \
charNMI_stack_guard[guardsize]; \
charNMI_stack[EXCEPTION_STKSZ]; \
-   charDB2_stack_guard[guardsize]; \
-   charDB2_stack[db2_holesize];\
-   charDB1_stack_guard[guardsize]; \
-   charDB1_stack[EXCEPTION_STKSZ]; \
charDB_stack_guard[guardsize];  \
charDB_stack[EXCEPTION_STKSZ];  \
charMCE_stack_guard[guardsize]; \
@@ -28,12 +24,12 @@
 
 /* The exception stacks' physical storage. No guard pages required */
 struct exception_stacks {
-   ESTACKS_MEMBERS(0, 0)
+   ESTACKS_MEMBERS(0)
 };
 
 /* The effective cpu entry area mapping with guard pages. */
 struct cea_exception_stacks {
-   ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ)
+   ESTACKS_MEMBERS(PAGE_SIZE)
 };
 
 /*
@@ -42,8 +38,6 @@ struct cea_exception_stacks {
 enum exception_stack_ordering {
ESTACK_DF,
ESTACK_NMI,
-   ESTACK_DB2,
-   ESTACK_DB1,
ESTACK_DB,
ESTACK_MCE,
N_EXCEPTION_STACKS
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index 472378330169..4b4974d91d90 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -57,10 +57,6 @@ int main(void)
BLANK();
 #undef ENTRY
 
-   DEFINE(DB_STACK_OFFSET, offsetof(struct cea_exception_stacks, DB_stack) -
-  offsetof(struct cea_exception_stacks, DB1_stack));
-   BLANK();
-
 #ifdef CONFIG_STACKPROTECTOR
DEFINE(stack_canary_offset, offsetof(struct fixed_percpu_data, 
stack_canary));
BLANK();
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 460ae7f66818..6b7051fa3669 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -22,15 +22,13 @@
 static const char * const exception_stack_names[] = {
[ ESTACK_DF ]   = "#DF",
[ ESTACK_NMI]   = "NMI",
-   [ ESTACK_DB2]   = "#DB2",
-   [ ESTACK_DB1]   = "#DB1",
[ ESTACK_DB ]   = "#DB",
[ ESTACK_MCE]   = "#MC",
 };
 
 const char *stack_type_name(enum stack_type type)
 {
-   BUILD_BUG_ON(N_EXCEPTION_STACKS != 6);
+   BUILD_BUG_ON(N_EXCEPTION_STACKS != 4);
 
if (type == STACK_TYPE_IRQ)
return "IRQ";
@@ -72,14 +70,12 @@ struct estack_pages {
 /*
  * Array of exception stack page descriptors. If the stack is larger than
  * PAGE_SIZE, all pages covering a particular stack will have the same
- * info. The guard pages including the not mapped DB2 stack are zeroed
- * out.
+ * info. The guard pages are zeroed out.
  */
 static const
 struct estack_pages estack_pages[CEA_ESTACK_PAGES] cacheline_aligned = {
EPAGERANGE(DF),
EPAGERANGE(NMI),
-   EPAGERANGE(DB1),
EPAGERANGE(DB),
EPAGERANGE(MCE),
 };
@@ -91,7 +87,7 @@ static bool in_exception_stack(unsigned long *stack, struct 
stack_info *info)
struct pt_regs *regs;
unsigned int k;
 
-   BUILD_BUG_ON(N_EXCEPTION_STACKS != 6);
+   BUILD_BUG_ON(N_EXCEPTION_STACKS != 4);
 
begin = (unsigned long)__this_cpu_read(cea_exception_stacks);
/*
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 5199d8a1daf1..686af163be20 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -102,12 +102,10 @@ static void __init percpu_setup_exception_stacks(unsigned 
int cpu)
 
/*
 * The exceptions stack mappings in the per cpu area are protected
-* by guard pages so each stack must be mapped separately. DB2 is
-* not mapped; it just exists to catch triple nesting of #DB.
+* by guard pages so each stack must be mapped separately.
 */
cea_map_stack(DF);
cea_map_stack(NMI);
-   cea_map_stack(DB1);
cea_map_stack(DB);
  

[RFC PATCH V2 6/7] x86/entry: is_debug_stack() don't check of DB1 stack

2020-05-25 Thread Lai Jiangshan
The IST-shift code has been removed from the entry code, so #DB will not
run on the DB1 stack. Thus we remove the check of the DB1 stack in
is_debug_stack().

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kernel/nmi.c | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 1c58454ac5fb..2f463f5880c6 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -500,15 +500,10 @@ static noinstr bool is_debug_stack(unsigned long addr)
 {
struct cea_exception_stacks *cs = __this_cpu_read(cea_exception_stacks);
unsigned long top = CEA_ESTACK_TOP(cs, DB);
-   unsigned long bot = CEA_ESTACK_BOT(cs, DB1);
+   unsigned long bot = CEA_ESTACK_BOT(cs, DB);
 
if (__this_cpu_read(debug_stack_usage))
return true;
-   /*
-* Note, this covers the guard page between DB and DB1 as well to
-* avoid two checks. But by all means @addr can never point into
-* the guard page.
-*/
return addr >= bot && addr < top;
 }
 #endif
-- 
2.20.1



[RFC PATCH V2 1/7] x86/hw_breakpoint: add within_area() to check data breakpoints

2020-05-25 Thread Lai Jiangshan
within_area() is added to check whether the data breakpoints overlap
with cpu_entry_area; it will also be used to check whether the data
breakpoints overlap with the GDT, IDT, or TSS in places other than
cpu_entry_area in the next patches.

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kernel/hw_breakpoint.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index 9ddf441ccaa8..c149c7b29ac3 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -227,14 +227,23 @@ int arch_check_bp_in_kernelspace(struct 
arch_hw_breakpoint *hw)
return (va >= TASK_SIZE_MAX) || ((va + len - 1) >= TASK_SIZE_MAX);
 }
 
+/*
+ * Checks whether the range [addr, end], overlaps the area [base, base + size).
+ */
+static inline bool within_area(unsigned long addr, unsigned long end,
+  unsigned long base, unsigned long size)
+{
+   return end >= base && addr < (base + size);
+}
+
 /*
  * Checks whether the range from addr to end, inclusive, overlaps the CPU
  * entry area range.
  */
 static inline bool within_cpu_entry_area(unsigned long addr, unsigned long end)
 {
-   return end >= CPU_ENTRY_AREA_BASE &&
-  addr < (CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_TOTAL_SIZE);
+   return within_area(addr, end, CPU_ENTRY_AREA_BASE,
+  CPU_ENTRY_AREA_TOTAL_SIZE);
 }
 
 static int arch_build_bp_info(struct perf_event *bp,
-- 
2.20.1



[RFC PATCH V2 5/7] x86/entry: don't shift stack on #DB

2020-05-25 Thread Lai Jiangshan
debug_enter() disables #DB, so there should be no recursive #DB.

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/entry/entry_64.S| 17 -
 arch/x86/kernel/asm-offsets_64.c |  1 -
 2 files changed, 18 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 265ff97b3961..8ecaeee53653 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -396,11 +396,6 @@ SYM_CODE_END(\asmsym)
idtentry \vector asm_\cfunc \cfunc has_error_code=0
 .endm
 
-/*
- * MCE and DB exceptions
- */
-#define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss_rw) + (TSS_ist + (x) * 8)
-
 /**
  * idtentry_mce_db - Macro to generate entry stubs for #MC and #DB
  * @vector:Vector number
@@ -416,10 +411,6 @@ SYM_CODE_END(\asmsym)
  * If hits in kernel mode then it needs to go through the paranoid
  * entry as the exception can hit any random state. No preemption
  * check on exit to keep the paranoid path simple.
- *
- * If the trap is #DB then the interrupt stack entry in the IST is
- * moved to the second stack, so a potential recursion will have a
- * fresh IST.
  */
 .macro idtentry_mce_db vector asmsym cfunc
 SYM_CODE_START(\asmsym)
@@ -445,16 +436,8 @@ SYM_CODE_START(\asmsym)
 
movq%rsp, %rdi  /* pt_regs pointer */
 
-   .if \vector == X86_TRAP_DB
-   subq$DB_STACK_OFFSET, CPU_TSS_IST(IST_INDEX_DB)
-   .endif
-
call\cfunc
 
-   .if \vector == X86_TRAP_DB
-   addq$DB_STACK_OFFSET, CPU_TSS_IST(IST_INDEX_DB)
-   .endif
-
jmp paranoid_exit
 
/* Switch to the regular task stack and use the noist entry point */
diff --git a/arch/x86/kernel/asm-offsets_64.c b/arch/x86/kernel/asm-offsets_64.c
index c2a47016f243..472378330169 100644
--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -57,7 +57,6 @@ int main(void)
BLANK();
 #undef ENTRY
 
-   OFFSET(TSS_ist, tss_struct, x86_tss.ist);
    DEFINE(DB_STACK_OFFSET, offsetof(struct cea_exception_stacks, DB_stack) -
       offsetof(struct cea_exception_stacks, DB1_stack));
BLANK();
-- 
2.20.1



[RFC PATCH V2 0/7] x86/DB: protects more cpu entry data and

2020-05-25 Thread Lai Jiangshan
Hello

The patchset is based on (tag: entry-v9-the-rest, tglx-devel/x86/entry).
And it is complement of 3ea11ac991d
("x86/hw_breakpoint: Prevent data breakpoints on cpu_entry_area").

After reading the code, we can see that more data needs to be protected
against hw_breakpoint, otherwise it may cause
dangerous/recursive/unwanted #DB.

This patchset also removes IST-shifting (patches 5-7), because tglx's work
includes debug_enter(), which disables nested #DB.
Patches 5-7 depend only on tglx's work for now; they don't depend on Peter's
patchset[3], but patch 6 should be discarded when they are merged
with Peter's work.

Actually, I beg/hope Peter will incorporate this V2 patchset into his
patchset, which will be incorporated into tglx's work, because this V2
patchset doesn't protect debug_idt_table and patch 6 conflicts with
Peter's work.

Changes from V1:
  Protect the full cpu_tlbstate structure to be sure. Suggested by Peter.
  Drop the last patch of V1 because debug_idt_table is removed in
Peter's patchset[3].
  Remove IST-shifting.

Lai Jiangshan (7):
  x86/hw_breakpoint: add within_area() to check data breakpoints
  x86/hw_breakpoint: Prevent data breakpoints on direct GDT
  x86/hw_breakpoint: Prevent data breakpoints on per_cpu cpu_tss_rw
  x86/hw_breakpoint: Prevent data breakpoints on user_pcid_flush_mask
  x86/entry: don't shift stack on #DB
  x86/entry: is_debug_stack() don't check of DB1 stack
  x86/entry: remove DB1 stack and DB2 hole from cpu entry area

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Link: https://lkml.kernel.org/r/20200505134058.272448...@linutronix.de
Link: https://lore.kernel.org/lkml/20200521200513.656533...@linutronix.de
Link: https://lkml.kernel.org/r/20200522204738.645043...@infradead.org

 arch/x86/entry/entry_64.S | 17 
 arch/x86/include/asm/cpu_entry_area.h | 12 ++---
 arch/x86/kernel/asm-offsets_64.c  |  5 ---
 arch/x86/kernel/dumpstack_64.c| 10 ++---
 arch/x86/kernel/hw_breakpoint.c   | 63 +++
 arch/x86/kernel/nmi.c |  7 +--
 arch/x86/mm/cpu_entry_area.c  |  4 +-
 7 files changed, 63 insertions(+), 55 deletions(-)

-- 
2.20.1



[RFC PATCH V2 3/7] x86/hw_breakpoint: Prevent data breakpoints on per_cpu cpu_tss_rw

2020-05-25 Thread Lai Jiangshan
cpu_tss_rw is not directly referenced by hardware, but
cpu_tss_rw is also used in CPU entry code, especially
when #DB shifts its stacks. If a data breakpoint is on
the cpu_tss_rw.x86_tss.ist[IST_INDEX_DB], it will cause
recursive #DB (and then #DF soon for #DB is generated
after the access, IST-shifting, is done).

Cc: Andy Lutomirski 
Cc: Peter Zijlstra (Intel) 
Cc: Thomas Gleixner 
Cc: x...@kernel.org
Signed-off-by: Lai Jiangshan 
---
 arch/x86/kernel/hw_breakpoint.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index f859095c1b6c..7d3966b9aa12 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -255,6 +255,19 @@ static inline bool within_cpu_entry(unsigned long addr, 
unsigned long end)
if (within_area(addr, end, (unsigned long)get_cpu_gdt_rw(cpu),
GDT_SIZE))
return true;
+
+   /*
+* cpu_tss_rw is not directly referenced by hardware, but
+* cpu_tss_rw is also used in CPU entry code, especially
+* when #DB shifts its stacks. If a data breakpoint is on
+* the cpu_tss_rw.x86_tss.ist[IST_INDEX_DB], it will cause
+* recursive #DB (and then #DF soon for #DB is generated
+* after the access, IST-shifting, is done).
+*/
+   if (within_area(addr, end,
+   (unsigned long)&per_cpu(cpu_tss_rw, cpu),
+   sizeof(struct tss_struct)))
+   return true;
}
 
return false;
-- 
2.20.1



Re: [PATCH v6 1/4] rcu/kasan: record and print call_rcu() call stack

2020-05-25 Thread Walter Wu
On Mon, 2020-05-25 at 11:56 +0200, Dmitry Vyukov wrote:
> On Fri, May 22, 2020 at 4:01 AM Walter Wu  wrote:
> >
> > This feature will record the last two call_rcu() call stacks and
> > print up to two call_rcu() call stacks in the KASAN report.
> >
> > When call_rcu() is called, we store the call_rcu() call stack into
> > slub alloc meta-data, so that the KASAN report can print rcu stack.
> >
> > [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437
> > [2]https://groups.google.com/forum/#!searchin/kasan-dev/better$20stack$20traces$20for$20rcu%7Csort:date/kasan-dev/KQsjT_88hDE/7rNUZprRBgAJ
> 
> Hi Walter,
> 
> The series look good to me. Thanks for bearing with me. I am eager to
> see this in syzbot reports.
> 
> Reviewed-and-tested-by: Dmitry Vyukov 
> 

Hi Dmitry,

I appreciate your response. These patches make the KASAN report
better and I learned a lot. Thank you for the good suggestions and
detailed explanations.

Walter

> > Signed-off-by: Walter Wu 
> > Suggested-by: Dmitry Vyukov 
> > Acked-by: Paul E. McKenney 
> > Cc: Andrey Ryabinin 
> > Cc: Dmitry Vyukov 
> > Cc: Alexander Potapenko 
> > Cc: Andrew Morton 
> > Cc: Josh Triplett 
> > Cc: Mathieu Desnoyers 
> > Cc: Lai Jiangshan 
> > Cc: Joel Fernandes 
> > Cc: Andrey Konovalov 
> > ---
> >  include/linux/kasan.h |  2 ++
> >  kernel/rcu/tree.c |  2 ++
> >  mm/kasan/common.c |  4 ++--
> >  mm/kasan/generic.c| 21 +
> >  mm/kasan/kasan.h  | 10 ++
> >  mm/kasan/report.c | 28 +++-
> >  6 files changed, 60 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index 31314ca7c635..23b7ee00572d 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -174,11 +174,13 @@ static inline size_t kasan_metadata_size(struct 
> > kmem_cache *cache) { return 0; }
> >
> >  void kasan_cache_shrink(struct kmem_cache *cache);
> >  void kasan_cache_shutdown(struct kmem_cache *cache);
> > +void kasan_record_aux_stack(void *ptr);
> >
> >  #else /* CONFIG_KASAN_GENERIC */
> >
> >  static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
> >  static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
> > +static inline void kasan_record_aux_stack(void *ptr) {}
> >
> >  #endif /* CONFIG_KASAN_GENERIC */
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 06548e2ebb72..36a4ff7f320b 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -57,6 +57,7 @@
> >  #include 
> >  #include 
> >  #include 
> > +#include 
> >  #include "../time/tick-internal.h"
> >
> >  #include "tree.h"
> > @@ -2668,6 +2669,7 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
> > head->func = func;
> > head->next = NULL;
> > local_irq_save(flags);
> > +   kasan_record_aux_stack(head);
> > rdp = this_cpu_ptr(&rcu_data);
> >
> > /* Add the callback to our list. */
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 2906358e42f0..8bc618289bb1 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -41,7 +41,7 @@
> >  #include "kasan.h"
> >  #include "../slab.h"
> >
> > -static inline depot_stack_handle_t save_stack(gfp_t flags)
> > +depot_stack_handle_t kasan_save_stack(gfp_t flags)
> >  {
> > unsigned long entries[KASAN_STACK_DEPTH];
> > unsigned int nr_entries;
> > @@ -54,7 +54,7 @@ static inline depot_stack_handle_t save_stack(gfp_t flags)
> >  static inline void set_track(struct kasan_track *track, gfp_t flags)
> >  {
> > track->pid = current->pid;
> > -   track->stack = save_stack(flags);
> > +   track->stack = kasan_save_stack(flags);
> >  }
> >
> >  void kasan_enable_current(void)
> > diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> > index 56ff8885fe2e..8acf48882ba2 100644
> > --- a/mm/kasan/generic.c
> > +++ b/mm/kasan/generic.c
> > @@ -325,3 +325,24 @@ DEFINE_ASAN_SET_SHADOW(f2);
> >  DEFINE_ASAN_SET_SHADOW(f3);
> >  DEFINE_ASAN_SET_SHADOW(f5);
> >  DEFINE_ASAN_SET_SHADOW(f8);
> > +
> > +void kasan_record_aux_stack(void *addr)
> > +{
> > +   struct page *page = kasan_addr_to_page(addr);
> > +   struct kmem_cache *cache;
> > +   struct kasan_alloc_meta *alloc_info;
> > +   void *object;
> > +
> > +   if (!(page && PageSlab(page)))
> > +   return;
> > +
> > +   cache = page->slab_cache;
> > +   object = nearest_obj(cache, page, addr);
> > +   alloc_info = get_alloc_info(cache, object);
> > +
> > +   /*
> > +* record the last two call_rcu() call stacks.
> > +*/
> > +   alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
> > +   alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
> > +}
> > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> > index e8f37199d885..a7391bc83070 100644
> > --- a/mm/kasan/kasan.h
> > +++ b/mm/kasan/kasan.h
> > @@ -104,7 +104,15 @@ struct kasan_track {
> >
> >  struct kasan_alloc_meta {
> > 

Re: [f2fs-dev] [PATCH v3] f2fs: avoid inifinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Chao Yu
On 2020/5/26 9:11, Chao Yu wrote:
> On 2020/5/25 23:06, Jaegeuk Kim wrote:
>> On 05/25, Chao Yu wrote:
>>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
Shutdown test is sometimes hung, since it keeps trying to flush dirty node 
 pages
>>>
>>> IMO, for umount case, we should drop dirty reference and dirty pages on 
>>> meta/data
>>> pages like we change for node pages to avoid potential dead loop...
>>
>> I believe we're doing for them. :P
> 
> Actually, I mean do we need to drop dirty meta/data pages explicitly as below:
> 
> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> index 3dc3ac6fe143..4c08fd0a680a 100644
> --- a/fs/f2fs/checkpoint.c
> +++ b/fs/f2fs/checkpoint.c
> @@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,
> 
>   trace_f2fs_writepage(page, META);
> 
> - if (unlikely(f2fs_cp_error(sbi)))
> + if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + dec_page_count(sbi, F2FS_DIRTY_META);
> + unlock_page(page);
> + return 0;
> + }
>   goto redirty_out;
> + }
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto redirty_out;
>   if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 48a622b95b76..94b342802513 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int 
> *submitted,
> 
>   /* we should bypass data pages to proceed the kworkder jobs */
>   if (unlikely(f2fs_cp_error(sbi))) {
> + if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
> + ClearPageUptodate(page);
> + inode_dec_dirty_pages(inode);
> + unlock_page(page);
> + return 0;
> + }

Oh, I notice that previously we would drop a non-directory inode's dirty pages
directly; however, during umount we'd better drop directory inodes' dirty pages
as well, right?

>   mapping_set_error(page->mapping, -EIO);
>   /*
>* don't drop any dirty dentry pages for keeping lastest
> 
>>
>>>
>>> Thanks,
>>>
in an infinite loop. Let's drop dirty pages at umount in that case.

 Signed-off-by: Jaegeuk Kim 
 ---
 v3:
  - fix wrong unlock

 v2:
  - fix typos

  fs/f2fs/node.c | 9 -
  1 file changed, 8 insertions(+), 1 deletion(-)

 diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
 index e632de10aedab..e0bb0f7e0506e 100644
 --- a/fs/f2fs/node.c
 +++ b/fs/f2fs/node.c
 @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, 
 bool atomic, bool *submitted,
  
trace_f2fs_writepage(page, NODE);
  
 -  if (unlikely(f2fs_cp_error(sbi)))
 +  if (unlikely(f2fs_cp_error(sbi))) {
 +  if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
 +  ClearPageUptodate(page);
 +  dec_page_count(sbi, F2FS_DIRTY_NODES);
 +  unlock_page(page);
 +  return 0;
 +  }
goto redirty_out;
 +  }
  
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
goto redirty_out;

>> .
>>
> 
> 
> ___
> Linux-f2fs-devel mailing list
> linux-f2fs-de...@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
> .
> 


Re: [PATCH v4 00/36] Large pages in the page cache

2020-05-25 Thread Matthew Wilcox
On Tue, May 26, 2020 at 09:07:51AM +1000, Dave Chinner wrote:
> On Thu, May 21, 2020 at 08:05:53PM -0700, Matthew Wilcox wrote:
> > On Fri, May 22, 2020 at 12:57:51PM +1000, Dave Chinner wrote:
> > > Again, why is this dependent on THP? We can allocate compound pages
> > > without using THP, so why only allow the page cache to use larger
> > > pages when THP is configured?
> > 
> > We have too many CONFIG options.  My brain can't cope with adding
> > CONFIG_LARGE_PAGES because then we might have neither THP nor LP, LP and
> > not THP, THP and not LP or both THP and LP.  And of course HUGETLBFS,
> > which has its own special set of issues that one has to think about when
> > dealing with the page cache.
> 
> That sounds like something that should be fixed. :/

If I have to fix hugetlbfs before doing large pages in the page cache,
we'll be five years away and at least two mental breakdowns.  Honestly,
I'd rather work on almost anything else.  Some of the work I'm doing
will help make hugetlbfs more similar to everything else, eventually,
but ... no, not going to put all this on hold to fix hugetlbfs.  Sorry.

> Really, I don't care about the historical mechanisms that people can
> configure large pages with. If the mm subsystem does not have a
> unified abstraction and API for working with large pages, then that
> is the first problem that needs to be addressed before other
> subsystems start trying to use large pages. 

I think you're reading too quickly.  Let me try again.

Historically, Transparent Huge Pages have been PMD sized.  They have also
had a complicated interface to use.  I am changing both those things;
THPs may now be arbitrary order, and I'm adding interfaces to make THPs
easier to work with.

Now, if you want to contend that THPs are inextricably linked with
PMD sizes and I need to choose a different name, I've been thinking
about other options a bit.  One would be 'lpage' for 'large page'.
Another would be 'mop' for 'multi-order page'.

We should not be seeing 'compound_order' in any filesystem code.
Compound pages are an mm concept.  They happen to be how THPs are
implemented, but it'd be a layering violation to use them directly.


Re: [f2fs-dev] [PATCH v3] f2fs: avoid inifinite loop to wait for flushing node pages at cp_error

2020-05-25 Thread Chao Yu
On 2020/5/25 23:06, Jaegeuk Kim wrote:
> On 05/25, Chao Yu wrote:
>> On 2020/5/25 11:56, Jaegeuk Kim wrote:
>>> Shutdown test is sometimes hung, since it keeps trying to flush dirty node 
>>> pages
>>
>> IMO, for umount case, we should drop dirty reference and dirty pages on 
>> meta/data
>> pages like we change for node pages to avoid potential dead loop...
> 
> I believe we're doing for them. :P

Actually, I mean do we need to drop dirty meta/data pages explicitly as below:

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 3dc3ac6fe143..4c08fd0a680a 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -299,8 +299,15 @@ static int __f2fs_write_meta_page(struct page *page,

trace_f2fs_writepage(page, META);

-   if (unlikely(f2fs_cp_error(sbi)))
+   if (unlikely(f2fs_cp_error(sbi))) {
+   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
+   ClearPageUptodate(page);
+   dec_page_count(sbi, F2FS_DIRTY_META);
+   unlock_page(page);
+   return 0;
+   }
goto redirty_out;
+   }
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
goto redirty_out;
if (wbc->for_reclaim && page->index < GET_SUM_BLOCK(sbi, 0))
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 48a622b95b76..94b342802513 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2682,6 +2682,12 @@ int f2fs_write_single_data_page(struct page *page, int 
*submitted,

/* we should bypass data pages to proceed the kworkder jobs */
if (unlikely(f2fs_cp_error(sbi))) {
+   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
+   ClearPageUptodate(page);
+   inode_dec_dirty_pages(inode);
+   unlock_page(page);
+   return 0;
+   }
mapping_set_error(page->mapping, -EIO);
/*
 * don't drop any dirty dentry pages for keeping lastest

> 
>>
>> Thanks,
>>
>>> in an infinite loop. Let's drop dirty pages at umount in that case.
>>>
>>> Signed-off-by: Jaegeuk Kim 
>>> ---
>>> v3:
>>>  - fix wrong unlock
>>>
>>> v2:
>>>  - fix typos
>>>
>>>  fs/f2fs/node.c | 9 -
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
>>> index e632de10aedab..e0bb0f7e0506e 100644
>>> --- a/fs/f2fs/node.c
>>> +++ b/fs/f2fs/node.c
>>> @@ -1520,8 +1520,15 @@ static int __write_node_page(struct page *page, bool 
>>> atomic, bool *submitted,
>>>  
>>> trace_f2fs_writepage(page, NODE);
>>>  
>>> -   if (unlikely(f2fs_cp_error(sbi)))
>>> +   if (unlikely(f2fs_cp_error(sbi))) {
>>> +   if (is_sbi_flag_set(sbi, SBI_IS_CLOSE)) {
>>> +   ClearPageUptodate(page);
>>> +   dec_page_count(sbi, F2FS_DIRTY_NODES);
>>> +   unlock_page(page);
>>> +   return 0;
>>> +   }
>>> goto redirty_out;
>>> +   }
>>>  
>>> if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>>> goto redirty_out;
>>>
> .
> 


Re: [PATCH] bridge: mrp: Fix out-of-bounds read in br_mrp_parse

2020-05-25 Thread David Miller
From: Horatiu Vultur 
Date: Mon, 25 May 2020 09:55:41 +

> The issue was reported by syzbot. When the function br_mrp_parse was
> called with a valid net_bridge_port, the net_bridge was an invalid
> pointer. Therefore the check br->stp_enabled could pass/fail
> depending where it was pointing in memory.
> The fix consists of setting the net_bridge pointer if the port is a
> valid pointer.
> 
> Reported-by: syzbot+9c6f0f1f8e32223df...@syzkaller.appspotmail.com
> Fixes: 6536993371fa ("bridge: mrp: Integrate MRP into the bridge")
> Signed-off-by: Horatiu Vultur 

Applied to net-next, thanks.


[ANNOUNCE] Reiser5: Data Tiering. Burst Buffers. Speedup synchronous modifications

2020-05-25 Thread Edward Shishkin

 Reiser5: Data Tiering. Burst Buffers
  Speedup synchronous modifications


 Dumping peaks of IO load to a proxy device


Now you can add a small high-performance block device to your large
logical volume composed of relatively slow commodity disks and get
the impression that your whole volume has throughput as high as that
of the "proxy" device!

This is based on the simple observation that in real life IO load
comes in peaks, and the idea is to dump those peaks to a high-
performance "proxy" device. Usually there is enough time between peaks
to flush the proxy device, that is, to migrate the "hot" data from the
proxy device to the slow media in the background, so that the proxy
device is always ready to accept the next portion of "peaks".

This technique, also known as "Burst Buffers", originated in the area
of HPC. Nevertheless, it is also important for ordinary applications.
In particular, it speeds up applications that perform so-called
"atomic updates".


   Speedup "atomic updates" in user-space


There is a whole class of applications with strict data-integrity
requirements. Such applications (typically databases) want to be sure
that any data modification either completes fully or not at all, and
never appears as partially applied. Some applications have weaker
requirements: with certain restrictions they also accept partially
applied modifications.

Atomic updates in user space are performed via a sequence of 3 steps.
Suppose you need to modify data of some file "foo" in an atomic way.
For this you need to:

1. write a new temporary file "foo.tmp" with modified data
2. issue fsync(2) against "foo.tmp"
3. rename "foo.tmp" to "foo".

At step 1 the file system populates the page cache with the new data.
At step 2 the file system allocates disk addresses for all logical
blocks of the file foo.tmp and writes that file to disk. At step 3 all
blocks containing the old data are released.

Note that steps 2 and 3 cause a significant performance drop on slow
media. The situation improves when all dirty data are written to a
dedicated high-performance proxy disk, which is exactly what happens
in a file system with Burst Buffers support.


  Speedup all synchronous modifications (TODO)
  Burst Buffers and transaction manager


Not only dirty data pages, but also dirty meta-data pages can be
dumped to the proxy-device, so that step (3) above also won't
contribute to the performance drop.

Moreover, not only new logical data blocks can be dumped to the proxy
disk. All dirty data pages, including ones that already have a
location on the main (slow) storage, can also be relocated to the
proxy disk, thus speeding up synchronous modification of files in
_all_ cases (not only atomic updates via the write-fsync-rename
sequence described above).

Indeed, recall that any modified page is always written to disk in
the context of committing some transaction. Depending on the commit
strategy (there are two: "relocate" and "overwrite"), for each such
modified dirty page there are only two possibilities:

a) to be written right away to a new location,
b) to be written first to a temporary location (the journal), then to
   be written back to its permanent location.

With Burst Buffers support, in case (a) the file system writes the
dirty page right away to the proxy device. The user should then take
care to migrate it back to permanent storage (see the section
"Flushing the proxy device" below). In case (b) the modified copy is
written to the proxy device (wandering logs); then at checkpoint time
(when the transaction is played) the reiser4 transaction manager
writes it to its permanent location (on the commodity disks). In this
case the user doesn't need to worry about flushing the proxy device;
however, the commit procedure takes more time, as the user must also
wait for checkpoint completion.

So from a performance standpoint the "write-anywhere" transaction
model (reiser4 mount option "txmod=wa") is preferable to the
journalling model (txmod=journal), or even the hybrid model
(txmod=hybrid).


Predictable and non-predictable migration
  Meta-data migration


As we already mentioned, not only dirty data pages but also dirty
meta-data pages can be dumped to the proxy device. Note, however,
that non-predictable meta-data migration is not possible, because of
a chicken-and-egg problem. Indeed, non-predictable migration means
that nobody knows in advance on which device of your logical volume a
stripe of data will be relocated. Such migration requires recording
the locations of data stripes, and such records are themselves always
part of the meta-data. Hence meta-data cannot be migrated in a
non-predictable way.

However, it is perfectly possible to distribute/migrate meta-data in a
predictable way (this will be supported in so-called "symmetric"
logical volumes, currently not implemented). Classic example of 

Re: [PATCH] qlcnic: fix missing release in qlcnic_83xx_interrupt_test.

2020-05-25 Thread David Miller
From: wu000...@umn.edu
Date: Mon, 25 May 2020 03:24:39 -0500

> From: Qiushi Wu 
> 
> In function qlcnic_83xx_interrupt_test(), function
> qlcnic_83xx_diag_alloc_res() is not handled by function
> qlcnic_83xx_diag_free_res() after a call of the function
> qlcnic_alloc_mbx_args() failed. Fix this issue by adding
> a jump target "fail_mbx_args", and jump to this new target
> when qlcnic_alloc_mbx_args() failed.
> 
> Fixes: b6b4316c8b2f ("qlcnic: Handle qlcnic_alloc_mbx_args() failure")
> Signed-off-by: Qiushi Wu 

Applied, thank you.


Re: [PATCH] drivers: ipa: print dev_err info accurately

2020-05-25 Thread David Miller
From: Wang Wenhu 
Date: Sun, 24 May 2020 23:29:51 -0700

> Print the specific name string instead of the hard-coded "memory" in the
> dev_err output, which is more accurate and helpful for debugging.
> 
> Signed-off-by: Wang Wenhu 

Applied to net-next, thanks.


Re: [PATCH v8 2/5] arm64: kdump: reserve crashkenel above 4G for crash dump kernel

2020-05-25 Thread Baoquan He
On 05/21/20 at 05:38pm, Chen Zhou wrote:
> Crashkernel=X tries to reserve memory for the crash dump kernel under
> 4G. If crashkernel=X,low is specified simultaneously, first reserve the
> specified amount of low memory for crash dump kernel devices, and then
> reserve memory above 4G.

Wondering why crashkernel=,high is not introduced to arm64 to be
consistent with x86_64, so that the behaviour is the same on all
architectures. 

> 
> Signed-off-by: Chen Zhou 
> Tested-by: John Donnelly 
> Tested-by: Prabhakar Kushwaha 
> ---
>  arch/arm64/kernel/setup.c |  8 +++-
>  arch/arm64/mm/init.c  | 31 +--
>  2 files changed, 36 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 3fd2c11c09fc..a8487e4d3e5a 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -238,7 +238,13 @@ static void __init request_standard_resources(void)
>   kernel_data.end <= res->end)
>   request_resource(res, _data);
>  #ifdef CONFIG_KEXEC_CORE
> - /* Userspace will find "Crash kernel" region in /proc/iomem. */
> + /*
> +  * Userspace will find "Crash kernel" region in /proc/iomem.
> +  * Note: the low region is renamed as Crash kernel (low).
> +  */
> + if (crashk_low_res.end && crashk_low_res.start >= res->start &&
> + crashk_low_res.end <= res->end)
> + request_resource(res, _low_res);
>   if (crashk_res.end && crashk_res.start >= res->start &&
>   crashk_res.end <= res->end)
>   request_resource(res, _res);
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index e42727e3568e..71498acf0cd8 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -81,6 +81,7 @@ static void __init reserve_crashkernel(void)
>  {
>   unsigned long long crash_base, crash_size;
>   int ret;
> + phys_addr_t crash_max = arm64_dma32_phys_limit;
>  
>   ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
>   _size, _base);
> @@ -88,12 +89,38 @@ static void __init reserve_crashkernel(void)
>   if (ret || !crash_size)
>   return;
>  
> + ret = reserve_crashkernel_low();
> + if (!ret && crashk_low_res.end) {
> + /*
> +  * If crashkernel=X,low specified, there may be two regions,
> +  * we need to make some changes as follows:
> +  *
> +  * 1. rename the low region as "Crash kernel (low)"
> +  * In order to distinct from the high region and make no effect
> +  * to the use of existing kexec-tools, rename the low region as
> +  * "Crash kernel (low)".
> +  *
> +  * 2. change the upper bound for crash memory
> +  * Set MEMBLOCK_ALLOC_ACCESSIBLE upper bound for crash memory.
> +  *
> +  * 3. mark the low region as "nomap"
> +  * The low region is intended to be used for crash dump kernel
> +  * devices, just mark the low region as "nomap" simply.
> +  */
> + const char *rename = "Crash kernel (low)";
> +
> + crashk_low_res.name = rename;
> + crash_max = MEMBLOCK_ALLOC_ACCESSIBLE;
> + memblock_mark_nomap(crashk_low_res.start,
> + resource_size(_low_res));
> + }
> +
>   crash_size = PAGE_ALIGN(crash_size);
>  
>   if (crash_base == 0) {
>   /* Current arm64 boot protocol requires 2MB alignment */
> - crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
> - crash_size, SZ_2M);
> + crash_base = memblock_find_in_range(0, crash_max, crash_size,
> + SZ_2M);
>   if (crash_base == 0) {
>   pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>   crash_size);
> -- 
> 2.20.1
> 
> 
> ___
> kexec mailing list
> ke...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
> 



Re: [PATCH v8 1/5] x86: kdump: move reserve_crashkernel_low() into crash_core.c

2020-05-25 Thread Baoquan He
On 05/21/20 at 05:38pm, Chen Zhou wrote:
> In preparation for supporting reserve_crashkernel_low in arm64 as
> x86_64 does, move reserve_crashkernel_low() into kernel/crash_core.c.



> BTW, move x86 CRASH_ALIGN to 2M.

The reason is?

> 
> Note, in arm64, we reserve low memory if and only if crashkernel=X,low
> is specified. Different with x86_64, don't set low memory automatically.
> 
> Reported-by: kbuild test robot 
> Signed-off-by: Chen Zhou 
> Tested-by: John Donnelly 
> Tested-by: Prabhakar Kushwaha 
> ---
>  arch/x86/kernel/setup.c| 66 -
>  include/linux/crash_core.h |  3 ++
>  include/linux/kexec.h  |  2 -
>  kernel/crash_core.c| 85 ++
>  kernel/kexec_core.c| 17 
>  5 files changed, 96 insertions(+), 77 deletions(-)
> 
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 4b3fa6cd3106..de75fec73d47 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -395,8 +395,8 @@ static void __init 
> memblock_x86_reserve_range_setup_data(void)
>  
>  #ifdef CONFIG_KEXEC_CORE
>  
> -/* 16M alignment for crash kernel regions */
> -#define CRASH_ALIGN  SZ_16M
> +/* 2M alignment for crash kernel regions */
> +#define CRASH_ALIGN  SZ_2M
>  
>  /*
>   * Keep the crash kernel below this limit.
> @@ -419,59 +419,6 @@ static void __init 
> memblock_x86_reserve_range_setup_data(void)
>  # define CRASH_ADDR_HIGH_MAX SZ_64T
>  #endif
>  
> -static int __init reserve_crashkernel_low(void)
> -{
> -#ifdef CONFIG_X86_64
> - unsigned long long base, low_base = 0, low_size = 0;
> - unsigned long total_low_mem;
> - int ret;
> -
> - total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
> -
> - /* crashkernel=Y,low */
> - ret = parse_crashkernel_low(boot_command_line, total_low_mem, 
> _size, );
> - if (ret) {
> - /*
> -  * two parts from kernel/dma/swiotlb.c:
> -  * -swiotlb size: user-specified with swiotlb= or default.
> -  *
> -  * -swiotlb overflow buffer: now hardcoded to 32k. We round it
> -  * to 8M for other buffers that may need to stay low too. Also
> -  * make sure we allocate enough extra low memory so that we
> -  * don't run out of DMA buffers for 32-bit devices.
> -  */
> - low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL 
> << 20);
> - } else {
> - /* passed with crashkernel=0,low ? */
> - if (!low_size)
> - return 0;
> - }
> -
> - low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
> - if (!low_base) {
> - pr_err("Cannot reserve %ldMB crashkernel low memory, please try 
> smaller size.\n",
> -(unsigned long)(low_size >> 20));
> - return -ENOMEM;
> - }
> -
> - ret = memblock_reserve(low_base, low_size);
> - if (ret) {
> - pr_err("%s: Error reserving crashkernel low memblock.\n", 
> __func__);
> - return ret;
> - }
> -
> - pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System 
> low RAM: %ldMB)\n",
> - (unsigned long)(low_size >> 20),
> - (unsigned long)(low_base >> 20),
> - (unsigned long)(total_low_mem >> 20));
> -
> - crashk_low_res.start = low_base;
> - crashk_low_res.end   = low_base + low_size - 1;
> - insert_resource(_resource, _low_res);
> -#endif
> - return 0;
> -}
> -
>  static void __init reserve_crashkernel(void)
>  {
>   unsigned long long crash_size, crash_base, total_mem;
> @@ -535,9 +482,12 @@ static void __init reserve_crashkernel(void)
>   return;
>   }
>  
> - if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
> - memblock_free(crash_base, crash_size);
> - return;
> + if (crash_base >= (1ULL << 32)) {
> + if (reserve_crashkernel_low()) {
> + memblock_free(crash_base, crash_size);
> + return;
> + }
> + insert_resource(_resource, _low_res);
>   }
>  
>   pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System 
> RAM: %ldMB)\n",
> diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
> index 525510a9f965..4df8c0bff03e 100644
> --- a/include/linux/crash_core.h
> +++ b/include/linux/crash_core.h
> @@ -63,6 +63,8 @@ phys_addr_t paddr_vmcoreinfo_note(void);
>  extern unsigned char *vmcoreinfo_data;
>  extern size_t vmcoreinfo_size;
>  extern u32 *vmcoreinfo_note;
> +extern struct resource crashk_res;
> +extern struct resource crashk_low_res;
>  
>  Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
> void *data, size_t data_len);
> @@ -74,5 +76,6 @@ int parse_crashkernel_high(char *cmdline, unsigned long 
> long system_ram,
>   

Re: [PATCH v1 07/25] lockdep: Add preemption disabled assertion API

2020-05-25 Thread Ahmed S. Darwish
Peter Zijlstra  wrote:
> On Sun, May 24, 2020 at 12:41:32AM +0200, Peter Zijlstra wrote:
> > On Sat, May 23, 2020 at 04:59:42PM +0200, Sebastian A. Siewior wrote:
> > >
> > > Any "static inline" in the header file using
> > > lockdep_assert_preemption_disabled() will tro to complain about missing
> > > current-> define. But yes, it will work otherwise.
> >
> > Because...? /me rummages around.. Ah you're proposing sticking this in
> > seqcount itself and then header hell.
> >
> > Moo.. ok I'll go have another look on Monday.
>
> How's this?
>

This will work for my case as current-> is no longer referenced by the
lockdep macros. Please continue below though.

...

> -#define lockdep_assert_irqs_enabled()do {
> \
> - WARN_ONCE(debug_locks && !current->lockdep_recursion && \
> -   !current->hardirqs_enabled,   \
> -   "IRQs not enabled as expected\n");\
> - } while (0)
> +DECLARE_PER_CPU(int, hardirqs_enabled);
> +DECLARE_PER_CPU(int, hardirq_context);
>
> -#define lockdep_assert_irqs_disabled()   do {
> \
> - WARN_ONCE(debug_locks && !current->lockdep_recursion && \
> -   current->hardirqs_enabled,\
> -   "IRQs not disabled as expected\n");   \
> - } while (0)
> +#define lockdep_assert_irqs_enabled()
> \
> +do { \
> + WARN_ON_ONCE(debug_locks && !this_cpu_read(hardirqs_enabled));  \
> +} while (0)
>

Given that lockdep_off() is defined at lockdep.c as:

  void lockdep_off(void)
  {
current->lockdep_recursion += LOCKDEP_OFF;
  }

This would imply that all of the macros:

  - lockdep_assert_irqs_enabled()
  - lockdep_assert_irqs_disabled()
  - lockdep_assert_in_irq()
  - lockdep_assert_preemption_disabled()
  - lockdep_assert_preemption_enabled()

will do the lockdep checks *even if* lockdep_off() was called.

This doesn't sound right. Even if none of the above macros' call sites
cared about lockdep_off()/on(), it is semantically incoherent.

Thanks,

--
Ahmed S. Darwish
Linutronix GmbH


Re: [PATCH] drivers: ipa: print dev_err info accurately

2020-05-25 Thread Alex Elder

On 5/25/20 1:29 AM, Wang Wenhu wrote:

Print the specific name string instead of the hard-coded "memory" in the
dev_err output, which is more accurate and helpful for debugging.

Signed-off-by: Wang Wenhu 
Cc: Alex Elder 


Good idea.

Reviewed-by: Alex Elder 


---
  drivers/net/ipa/ipa_clock.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c
index ddbd687fe64b..749ff5668e37 100644
--- a/drivers/net/ipa/ipa_clock.c
+++ b/drivers/net/ipa/ipa_clock.c
@@ -66,8 +66,8 @@ ipa_interconnect_init_one(struct device *dev, const char 
*name)
  
  	path = of_icc_get(dev, name);

if (IS_ERR(path))
-   dev_err(dev, "error %ld getting memory interconnect\n",
-   PTR_ERR(path));
+   dev_err(dev, "error %ld getting %s interconnect\n",
+   PTR_ERR(path), name);
  
  	return path;

  }





arch/powerpc/boot/decompress.c:137: undefined reference to `__decompress'

2020-05-25 Thread kbuild test robot
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 
master
head:   b85051e755b0e9d6dd8f17ef1da083851b83287d
commit: 1cc9a21b0bb36debdf96dbcc4b139d6639373018 powerpc/boot: Add lzma support 
for uImage
config: powerpc-randconfig-r012-20200520 (attached as .config)
compiler: powerpc-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget 
https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O 
~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout 1cc9a21b0bb36debdf96dbcc4b139d6639373018
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross 
ARCH=powerpc 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kbuild test robot 

All errors (new ones prefixed by >>, old ones prefixed by <<):

powerpc-linux-ld: arch/powerpc/boot/wrapper.a(decompress.o): in function 
`partial_decompress':
>> arch/powerpc/boot/decompress.c:137: undefined reference to `__decompress'

# 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1cc9a21b0bb36debdf96dbcc4b139d6639373018
git remote add linus 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
git remote update linus
git checkout 1cc9a21b0bb36debdf96dbcc4b139d6639373018
vim +137 arch/powerpc/boot/decompress.c

1b7898ee276b39 Oliver O'Halloran 2016-09-22  102  
1b7898ee276b39 Oliver O'Halloran 2016-09-22  103  /**
1b7898ee276b39 Oliver O'Halloran 2016-09-22  104   * partial_decompress - 
decompresses part or all of a compressed buffer
1b7898ee276b39 Oliver O'Halloran 2016-09-22  105   * @inbuf:   input buffer
1b7898ee276b39 Oliver O'Halloran 2016-09-22  106   * @input_size:  length of 
the input buffer
1b7898ee276b39 Oliver O'Halloran 2016-09-22  107   * @outbuf:  input buffer
1b7898ee276b39 Oliver O'Halloran 2016-09-22  108   * @output_size: length of 
the input buffer
1b7898ee276b39 Oliver O'Halloran 2016-09-22  109   * @skip number of 
output bytes to ignore
1b7898ee276b39 Oliver O'Halloran 2016-09-22  110   *
1b7898ee276b39 Oliver O'Halloran 2016-09-22  111   * This function takes 
compressed data from inbuf, decompresses and write it to
1b7898ee276b39 Oliver O'Halloran 2016-09-22  112   * outbuf. Once output_size 
bytes are written to the output buffer, or the
1b7898ee276b39 Oliver O'Halloran 2016-09-22  113   * stream is exhausted the 
function will return the number of bytes that were
1b7898ee276b39 Oliver O'Halloran 2016-09-22  114   * decompressed. Otherwise it 
will return whatever error code the decompressor
1b7898ee276b39 Oliver O'Halloran 2016-09-22  115   * reported (NB: This is 
specific to each decompressor type).
1b7898ee276b39 Oliver O'Halloran 2016-09-22  116   *
1b7898ee276b39 Oliver O'Halloran 2016-09-22  117   * The skip functionality is 
mainly there so the program can discover
1b7898ee276b39 Oliver O'Halloran 2016-09-22  118   * the size of the compressed 
image so that it can ask firmware (if present)
1b7898ee276b39 Oliver O'Halloran 2016-09-22  119   * for an appropriately sized 
buffer.
1b7898ee276b39 Oliver O'Halloran 2016-09-22  120   */
1b7898ee276b39 Oliver O'Halloran 2016-09-22  121  long partial_decompress(void 
*inbuf, unsigned long input_size,
1b7898ee276b39 Oliver O'Halloran 2016-09-22  122void *outbuf, unsigned 
long output_size, unsigned long _skip)
1b7898ee276b39 Oliver O'Halloran 2016-09-22  123  {
1b7898ee276b39 Oliver O'Halloran 2016-09-22  124int ret;
1b7898ee276b39 Oliver O'Halloran 2016-09-22  125  
1b7898ee276b39 Oliver O'Halloran 2016-09-22  126/*
1b7898ee276b39 Oliver O'Halloran 2016-09-22  127 * The skipped bytes 
needs to be included in the size of data we want
1b7898ee276b39 Oliver O'Halloran 2016-09-22  128 * to decompress.
1b7898ee276b39 Oliver O'Halloran 2016-09-22  129 */
1b7898ee276b39 Oliver O'Halloran 2016-09-22  130output_size += _skip;
1b7898ee276b39 Oliver O'Halloran 2016-09-22  131  
1b7898ee276b39 Oliver O'Halloran 2016-09-22  132decompressed_bytes = 0;
1b7898ee276b39 Oliver O'Halloran 2016-09-22  133output_buffer = outbuf;
1b7898ee276b39 Oliver O'Halloran 2016-09-22  134limit = output_size;
1b7898ee276b39 Oliver O'Halloran 2016-09-22  135skip = _skip;
1b7898ee276b39 Oliver O'Halloran 2016-09-22  136  
1b7898ee276b39 Oliver O'Halloran 2016-09-22 @137ret = 
__decompress(inbuf, input_size, NULL, flush, outbuf,

:: The code at line 137 was first introduced by commit
:: 1b7898ee276b39e54d870dc4ef3374f663d0b426 powerpc/boot: Use the pre-boot 
decompression API

:: TO: Oliver O'Halloran 
:: CC: Michael Ellerman 

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org



