Re: [f2fs-dev] [PATCH] f2fs: ignore when len of range in f2fs_sec_trim_file is zero

2020-07-09 Thread Daeho Jeong
I can add it~

On Thu, Jul 9, 2020 at 2:39 PM, Jaegeuk Kim wrote:
>
> On 07/09, Chao Yu wrote:
> > On 2020/7/9 9:57, Daeho Jeong wrote:
> > > From: Daeho Jeong 
> > >
> > > When end_addr comes to zero, it'll trigger different behaviour.
> > > To prevent this, we need to ignore the case where range.len is
> > > zero in the function.
> > >
> > > Signed-off-by: Daeho Jeong 
> > > ---
> > >  fs/f2fs/file.c | 7 +++
> > >  1 file changed, 3 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > > index 368c80f8e2a1..98b0a8dbf669 100644
> > > --- a/fs/f2fs/file.c
> > > +++ b/fs/f2fs/file.c
> > > @@ -3813,15 +3813,14 @@ static int f2fs_sec_trim_file(struct file *filp, unsigned long arg)
> > > file_start_write(filp);
> > > inode_lock(inode);
> > >
> > > -   if (f2fs_is_atomic_file(inode) || f2fs_compressed_file(inode)) {
> > > +   if (f2fs_is_atomic_file(inode) || f2fs_compressed_file(inode) ||
> > > +   range.start >= inode->i_size) {
> > > ret = -EINVAL;
> > > goto err;
> > > }
> > >
> > > -   if (range.start >= inode->i_size) {
> > > -   ret = -EINVAL;
> > > +   if (range.len == 0)
> > > goto err;
> > > -   }
> > >
> > > if (inode->i_size - range.start < range.len) {
> > > ret = -E2BIG;
> >
> > How about the case of trimming the last partially written block?
> >
> > i_size = 8000
> > range.start = 4096
> > range.len = 4096
> >
> > Do we need to roundup(isize, PAGE_SIZE) before comparison?
>
> If we want to trim whole file, do we need to give the exact i_size?
> Wouldn't it be better to give trim(0, -1)?
>
> >
> > Thanks,
> >
> > >
> >
> >
> > ___
> > Linux-f2fs-devel mailing list
> > linux-f2fs-de...@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
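
For illustration, a stand-alone sketch (not f2fs code; the block size and helper below are only illustrative) of the boundary case Chao Yu raises above: with i_size = 8000, range.start = 4096 and range.len = 4096, the existing "i_size - range.start < range.len" check returns -E2BIG even though only the last, partially written block is being trimmed, while rounding i_size up to the block size first would accept it.

  #include <errno.h>
  #include <stdio.h>

  #define BLKSIZE 4096UL
  #define ROUND_UP(x, a) ((((x) + (a) - 1) / (a)) * (a))

  /* Mimics the order of checks discussed above; not the kernel function. */
  static int check_range(unsigned long i_size, unsigned long start,
                         unsigned long len, int round_isize)
  {
          unsigned long end = round_isize ? ROUND_UP(i_size, BLKSIZE) : i_size;

          if (start >= i_size)
                  return -EINVAL;
          if (len == 0)
                  return 0;       /* the "ignore empty range" case from the patch */
          if (end - start < len)
                  return -E2BIG;
          return 0;
  }

  int main(void)
  {
          /* i_size = 8000, start = 4096, len = 4096 */
          printf("as-is:   %d\n", check_range(8000, 4096, 4096, 0)); /* -E2BIG */
          printf("rounded: %d\n", check_range(8000, 4096, 4096, 1)); /* 0 */
          return 0;
  }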


[PATCH v4 0/2] Add Intel LGM soc DMA support

2020-07-09 Thread Amireddy Mallikarjuna reddy
Add a DMA controller driver for the Lightning Mountain (LGM) family of SoCs.

The main function of the DMA controller is to transfer data between memory
and any DPlus-compliant peripheral. A memory-to-memory copy capability can
also be configured.
This ldma driver is used to configure the device and channels for the data
and control paths.

These controllers provide DMA capabilities for a variety of on-chip
devices such as SSC, HSNAND and GSWIP.

-
Future Plans:
-
LGM SOC also supports Hardware Memory Copy engine.
The role of the HW Memory copy engine is to offload memory copy operations
from the CPU.

Amireddy Mallikarjuna reddy (2):
  dt-bindings: dma: Add bindings for intel LGM SOC
  Add Intel LGM soc DMA support.

 .../devicetree/bindings/dma/intel,ldma.yaml|  416 +
 drivers/dma/Kconfig|2 +
 drivers/dma/Makefile   |1 +
 drivers/dma/lgm/Kconfig|9 +
 drivers/dma/lgm/Makefile   |2 +
 drivers/dma/lgm/lgm-dma.c  | 1941 
 include/linux/dma/lgm_dma.h|   27 +
 7 files changed, 2398 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml
 create mode 100644 drivers/dma/lgm/Kconfig
 create mode 100644 drivers/dma/lgm/Makefile
 create mode 100644 drivers/dma/lgm/lgm-dma.c
 create mode 100644 include/linux/dma/lgm_dma.h

-- 
2.11.0



[PATCH v4 1/2] dt-bindings: dma: Add bindings for intel LGM SOC

2020-07-09 Thread Amireddy Mallikarjuna reddy
Add a DT bindings YAML schema for the DMA controller driver
of the Lightning Mountain (LGM) SoC.

Signed-off-by: Amireddy Mallikarjuna reddy 
---
v1:
- Initial version.

v2:
- Fix bot errors.

v3:
- No change.

v4:
- Address Thomas Langer's comments.
---
 .../devicetree/bindings/dma/intel,ldma.yaml| 416 +
 1 file changed, 416 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/dma/intel,ldma.yaml

diff --git a/Documentation/devicetree/bindings/dma/intel,ldma.yaml b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
new file mode 100644
index ..7f666b9812e4
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/intel,ldma.yaml
@@ -0,0 +1,416 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/dma/intel,ldma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Lightning Mountain centralized low speed DMA and high speed DMA controllers.
+
+maintainers:
+  - chuanhua@intel.com
+  - mallikarjunax.re...@intel.com
+
+properties:
+ $nodename:
+   pattern: "^dma(@.*)?$"
+
+ "#dma-cells":
+   const: 1
+
+ compatible:
+  anyOf:
+   - const: intel,lgm-cdma
+   - const: intel,lgm-dma2tx
+   - const: intel,lgm-dma1rx
+   - const: intel,lgm-dma1tx
+   - const: intel,lgm-dma0tx
+   - const: intel,lgm-dma3
+   - const: intel,lgm-toe-dma30
+   - const: intel,lgm-toe-dma31
+
+ reg:
+  maxItems: 1
+
+ clocks:
+  maxItems: 1
+
+ resets:
+  maxItems: 1
+
+ interrupts:
+  maxItems: 1
+
+ intel,dma-poll-cnt:
+   $ref: /schemas/types.yaml#/definitions/uint32
+   description:
+ DMA descriptor polling counter. It may need fine tuning according
+ to the system application scenario.
+
+ intel,dma-byte-en:
+   type: boolean
+   description:
+ DMA byte enable is only valid for DMA write(RX).
+ Byte enable(1) means DMA write will be based on the number of dwords
+ instead of the whole burst.
+
+ intel,dma-drb:
+type: boolean
+description:
+  DMA descriptor read back to make sure data and desc synchronization.
+
+ intel,dma-burst:
+$ref: /schemas/types.yaml#/definitions/uint32
+description:
+   Specify the DMA burst size (in dwords); the valid values are 8, 16, 32.
+   Default is 16 for data path DMA, 32 for memcopy DMA.
+
+ intel,dma-polling-cnt:
+$ref: /schemas/types.yaml#/definitions/uint32
+description:
+   DMA descriptor polling counter. It may need fine tuning according to
+   the system application scenario.
+
+ intel,dma-desc-in-sram:
+type: boolean
+description:
+   DMA descriptors in SRAM or not. On some old controllers, descriptors
+   can be in DRAM or SRAM; on the new ones they are all in SRAM.
+
+ intel,dma-orrc:
+$ref: /schemas/types.yaml#/definitions/uint32
+description:
+   DMA outstanding read counter. The maximum value is 16, and it may
+   need fine tuning according to the system application scenarios.
+
+ intel,dma-dburst-wr:
+type: boolean
+description:
+   Enable RX dynamic burst write. It only applies to RX DMA and memcopy DMA.
+
+
+ dma-ports:
+type: object
+description:
+   This sub-node must contain a sub-node for each DMA port.
+properties:
+  '#address-cells':
+const: 1
+  '#size-cells':
+const: 0
+
+patternProperties:
+  "^dma-ports@[0-9]+$":
+  type: object
+
+  properties:
+reg:
+  items:
+- enum: [0, 1, 2, 3, 4, 5]
+  description:
+ Which port this node refers to.
+
+intel,name:
+  $ref: /schemas/types.yaml#/definitions/string-array
+  description:
+ Port name of each DMA port.
+
+intel,chans:
+  $ref: /schemas/types.yaml#/definitions/uint32-array
+  description:
+ The channels included on this port. Format is channel start
+ number and how many channels on this port.
+
+intel,burst:
+  $ref: /schemas/types.yaml#/definitions/uint32
+  description:
+ Specify the DMA port burst size; the valid values are
+ 2, 4, 8. Default is 2 for data path DMA.
+
+intel,txwgt:
+  $ref: /schemas/types.yaml#/definitions/uint32
+  description:
+ Specify the port transmit weight for QoS purpose. The valid
+ value is 1~7. Default value is 1.
+
+intel,endian:
+  $ref: /schemas/types.yaml#/definitions/uint32
+  description:
+ Specify the DMA port endianness conversion due to SoC endianness difference.
+
+  required:
+- reg
+- intel,name
+- intel,chans
+
+
+ dma-channels:
+type: object
+description:
+   This sub-node must contain a sub-node for each DMA channel.
+properties:
+  '#address-cells':
+

Re: [PATCH] bcache: writeback: Remove unneeded variable i

2020-07-09 Thread Coly Li
On 2020/7/9 13:53, Xu Wang wrote:
> Remove unneeded variable i in bch_dirty_init_thread().
> 
> Signed-off-by: Xu Wang 

Add it into my testing queue. Thanks.

Coly Li

> ---
>  drivers/md/bcache/writeback.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
> index 1cf1e5016cb9..71801c086b82 100644
> --- a/drivers/md/bcache/writeback.c
> +++ b/drivers/md/bcache/writeback.c
> @@ -825,10 +825,8 @@ static int bch_dirty_init_thread(void *arg)
>   struct btree_iter iter;
>   struct bkey *k, *p;
>   int cur_idx, prev_idx, skip_nr;
> - int i;
>  
>   k = p = NULL;
> - i = 0;
>   cur_idx = prev_idx = 0;
>  
>   bch_btree_iter_init(&c->root->keys, &iter, NULL);
> 



Re: [PATCH] omapfb: dss: Fix max fclk divider for omap36xx

2020-07-09 Thread Greg KH
On Wed, Jul 08, 2020 at 06:37:51PM -0500, Adam Ford wrote:
> On Mon, Jul 6, 2020 at 6:18 AM Adam Ford  wrote:
> >
> > On Mon, Jul 6, 2020 at 1:02 AM Tomi Valkeinen  wrote:
> > >
> > > Hi,
> > >
> > > On 03/07/2020 22:36, Sam Ravnborg wrote:
> > > > Hi Tomi.
> > > >
> > > > On Fri, Jul 03, 2020 at 10:17:29AM +0300, Tomi Valkeinen wrote:
> > > >> On 30/06/2020 21:26, Adam Ford wrote:
> > > >>> The drm/omap driver was fixed to correct an issue where using a
> > > >>> divider of 32 breaks the DSS despite the TRM stating 32 is a valid
> > > >>> number.  Through experimentation, it appears that 31 works, and
> > > >>> it is consistent with the value used by the drm/omap driver.
> > > >>>
> > > >>> This patch fixes the divider for fbdev driver instead of the drm.
> > > >>>
> > > >>> Fixes: f76ee892a99e ("omapfb: copy omapdss & displays for omapfb")
> > > >>>
> > > >>> Cc:  #4.9+
> > > >>> Signed-off-by: Adam Ford 
> > > >>> ---
> > > >>> Linux 4.4 will need a similar patch, but it doesn't apply cleanly.
> > > >>>
> > > >>> The DRM version of this same fix is:
> > > >>> e2c4ed148cf3 ("drm/omap: fix max fclk divider for omap36xx")
> > > >>>
> > > >>>
> > > >>> diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dss.c b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
> > > >>> index 7252d22dd117..bfc5c4c5a26a 100644
> > > >>> --- a/drivers/video/fbdev/omap2/omapfb/dss/dss.c
> > > >>> +++ b/drivers/video/fbdev/omap2/omapfb/dss/dss.c
> > > >>> @@ -833,7 +833,7 @@ static const struct dss_features omap34xx_dss_feats = {
> > > >>>};
> > > >>>static const struct dss_features omap3630_dss_feats = {
> > > >>> -   .fck_div_max=   32,
> > > >>> +   .fck_div_max=   31,
> > > >>> .dss_fck_multiplier =   1,
> > > >>> .parent_clk_name=   "dpll4_ck",
> > > >>> .dpi_select_source  =   
> > > >>> _dpi_select_source_omap2_omap3,
> > > >>>
> > > >>
> > > >> Reviewed-by: Tomi Valkeinen 
> > > > Will you apply to drm-misc?
> > >
> > > This is for fbdev, so I presume Bartlomiej will pick this one.
> > >
> > > > Note  following output from "dim fixes":
> > > > $ dim fixes f76ee892a99e
> > > > Fixes: f76ee892a99e ("omapfb: copy omapdss & displays for omapfb")
> > > > Cc: Tomi Valkeinen 
> > > > Cc: Dave Airlie 
> > > > Cc: Rob Clark 
> > > > Cc: Laurent Pinchart 
> > > > Cc: Sam Ravnborg 
> > > > Cc: Bartlomiej Zolnierkiewicz 
> > > > Cc: Jason Yan 
> > > > Cc: "Andrew F. Davis" 
> > > > Cc: YueHaibing 
> > > > Cc:  # v4.5+
> > > >
> > > > Here it says the fix is valid from v4.5 onwards.
> > >
> > > Hmm... Adam, you marked the fix to apply to v4.9+, and then you said
> > > v4.4 needs a new patch (that's before the big copy/rename). Did you
> > > check the versions between 4.4 and 4.9? I would guess this one applies
> > > to v4.5+.
> >
> > I only tried 4.9 because it's listed as an LTS kernel.  The kernels
> > between 4.4 and 4.9 are EOL, so I didn't go back further.  The 4.5+
> > is probably more accurate.  I would like to do the same thing for the
> > 4.4 kernel, but I am not sure of the proper way to do that.
> 
> What is the correct protocol for patching 4.4?  I'd like to do that,
> but the patch would be unique to the 4.4.  Should I just submit the
> patch directly to stable and cc Tomi?

Yes, and document the heck out of why this is a 4.4-only patch, and why
we can't take whatever happened in newer kernels instead.

thanks,

greg k-h


Re: [PATCH v6 6/7] seccomp: Introduce addfd ioctl to seccomp user notifier

2020-07-09 Thread Kees Cook
On Tue, Jul 07, 2020 at 03:30:49PM +0200, Christian Brauner wrote:
> Hm, maybe change that description to sm like:
> 
> [...]

Cool, yeah. Thanks! I've tweaked it a little more

> > +   /* 24 is original sizeof(struct seccomp_notif_addfd) */
> > +   if (size < 24 || size >= PAGE_SIZE)
> > +   return -EINVAL;
> 
> Hm, so maybe add the following:
> 
> #define SECCOMP_NOTIFY_ADDFD_VER0 24
> #define SECCOMP_NOTIFY_ADDFD_LATEST SECCOMP_NOTIFY_ADDFD_VER0
> 
> and then place:
> 
> BUILD_BUG_ON(sizeof(struct seccomp_notify_addfd) < SECCOMP_NOTIFY_ADDFD_VER0);
> BUILD_BUG_ON(sizeof(struct open_how) != SECCOMP_NOTIFY_ADDFD_LATEST);

Yes, good idea (BTW, did the EA syscall docs land?)

I've made these SECCOMP_NOTIFY_ADDFD_SIZE_* to match your examples below
(i.e.  I added "SIZE" to what you suggested above).

> somewhere which is what we do for clone3(), openat2() and others to
> catch build-time nonsense.
> 
> include/uapi/linux/perf_event.h:#define PERF_ATTR_SIZE_VER0 64  /* sizeof first published struct */
> include/uapi/linux/sched.h:#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
> include/uapi/linux/sched/types.h:#define SCHED_ATTR_SIZE_VER0   48  /* sizeof first published struct */
> include/linux/fcntl.h:#define OPEN_HOW_SIZE_VER0 24 /* sizeof first published struct */
> include/linux/fcntl.h:#define OPEN_HOW_SIZE_LATEST  OPEN_HOW_SIZE_VER0

The ..._SIZE_VER0 and ...LATEST stuff doesn't seem useful to export via
UAPI. Above, 2 of the 3 export to uapi. Is there a specific rationale
for which should and which shouldn't?

> > +#undef EA_IOCTL
> 
> Why is this undefed? :)

It was defined "in" a function, so I like to mimic function visibility.
But you're right; there's no reason to undef it.

-- 
Kees Cook
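
For reference, a user-space analogue of the build-time check discussed above; C11 _Static_assert stands in for the kernel's BUILD_BUG_ON(), and the field layout matches the 24-byte seccomp_notif_addfd proposed in this series.

  #include <stdint.h>

  #define SECCOMP_NOTIFY_ADDFD_SIZE_VER0   24  /* sizeof first published struct */
  #define SECCOMP_NOTIFY_ADDFD_SIZE_LATEST SECCOMP_NOTIFY_ADDFD_SIZE_VER0

  struct seccomp_notif_addfd {
          uint64_t id;
          uint32_t flags;
          uint32_t srcfd;
          uint32_t newfd;
          uint32_t newfd_flags;
  };

  /* Fails the build if the struct silently changes size without the
   * size constants being bumped. */
  _Static_assert(sizeof(struct seccomp_notif_addfd) ==
                 SECCOMP_NOTIFY_ADDFD_SIZE_LATEST,
                 "seccomp_notif_addfd size changed without a new _SIZE_ constant");

  int main(void) { return 0; }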


Re: [PATCH] Replace HTTP links with HTTPS ones: DISKQUOTA

2020-07-09 Thread Jan Kara
On Wed 08-07-20 19:19:05, Alexander A. Klimov wrote:
> Rationale:
> Reduces attack surface on kernel devs opening the links for MITM
> as HTTPS traffic is much harder to manipulate.
> 
> Deterministic algorithm:
> For each file:
>   If not .svg:
> For each line:
>   If doesn't contain `\bxmlns\b`:
> For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
> If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
> If both the HTTP and HTTPS versions
> return 200 OK and serve the same content:
>   Replace HTTP with HTTPS.
> 
> Signed-off-by: Alexander A. Klimov 

Thanks. I've applied the patch. I'll also note that somehow your script
missed converting the sourceforge.net link in quota.rst to https. I did
that myself, together with replacing the link to the libnl doc with a working one...

Honza

> ---
>  Continuing my work started at 93431e0607e5.
>  See also: git log --oneline '--author=Alexander A. Klimov 
> ' v5.7..master
>  (Actually letting a shell for loop submit all this stuff for me.)
> 
>  If there are any URLs to be removed completely or at least not HTTPSified:
>  Just clearly say so and I'll *undo my change*.
>  See also: https://lkml.org/lkml/2020/6/27/64
> 
>  If there are any valid, but yet not changed URLs:
>  See: https://lkml.org/lkml/2020/6/26/837
> 
>  If you apply the patch, please let me know.
> 
> 
>  Documentation/filesystems/quota.rst | 2 +-
>  fs/quota/Kconfig| 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/filesystems/quota.rst b/Documentation/filesystems/quota.rst
> index a30cdd47c652..6508c4520ba5 100644
> --- a/Documentation/filesystems/quota.rst
> +++ b/Documentation/filesystems/quota.rst
> @@ -31,7 +31,7 @@ the above events to userspace. There they can be captured by an application
>  and processed accordingly.
>  
>  The interface uses generic netlink framework (see
> -http://lwn.net/Articles/208755/ and http://people.suug.ch/~tgr/libnl/ for more
> +https://lwn.net/Articles/208755/ and http://people.suug.ch/~tgr/libnl/ for more
>  details about this layer). The name of the quota generic netlink interface
>  is "VFS_DQUOT". Definitions of constants below are in .
>  Since the quota netlink protocol is not namespace aware, quota netlink messages
> diff --git a/fs/quota/Kconfig b/fs/quota/Kconfig
> index 7218314ca13f..d1ceb76adb71 100644
> --- a/fs/quota/Kconfig
> +++ b/fs/quota/Kconfig
> @@ -15,7 +15,7 @@ config QUOTA
> Ext3, ext4 and reiserfs also support journaled quotas for which
> you don't need to run quotacheck(8) after an unclean shutdown.
> For further details, read the Quota mini-HOWTO, available from
> -   , or the documentation provided
> +   , or the documentation provided
> with the quota tools. Probably the quota support is only useful for
> multi user systems. If unsure, say N.
>  
> -- 
> 2.27.0
> 
-- 
Jan Kara 
SUSE Labs, CR


[mm/debug_vm_pgtable] a97a171093: BUG:unable_to_handle_page_fault_for_address

2020-07-09 Thread kernel test robot
Greeting,

FYI, we noticed the following commit (built with gcc-9):

commit: a97a17109332c3a9e361553adfa383c1e5205f3b ("[PATCH V4 2/4] mm/debug_vm_pgtable: Add tests validating advanced arch page table helpers")
url: https://github.com/0day-ci/linux/commits/Anshuman-Khandual/mm-debug_vm_pgtable-Add-some-more-tests/20200706-085212
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git dcb7fd82c75ee2d6e6f9d8cc71c52519ed52e258

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):


+----------------------------------------------+------------+------------+
|                                              | 30011bfca7 | a97a171093 |
+----------------------------------------------+------------+------------+
| boot_successes                               | 4          | 0          |
| boot_failures                                | 0          | 4          |
| BUG:unable_to_handle_page_fault_for_address  | 0          | 4          |
| Oops:#[##]                                   | 0          | 4          |
| RIP:hugetlb_advanced_tests                   | 0          | 4          |
| Kernel_panic-not_syncing:Fatal_exception     | 0          | 4          |
+----------------------------------------------+------------+------------+


If you fix the issue, kindly add following tag
Reported-by: kernel test robot 


[   94.349598] BUG: unable to handle page fault for address: ed10a7ffddff
[   94.351039] #PF: supervisor read access in kernel mode
[   94.352172] #PF: error_code(0x) - not-present page
[   94.353256] PGD 43ffed067 P4D 43ffed067 PUD 43fdee067 PMD 0 
[   94.354484] Oops:  [#1] SMP KASAN
[   94.355238] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc4-2-ga97a17109332c #1
[   94.360456] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[   94.361950] RIP: 0010:hugetlb_advanced_tests+0x137/0x699
[   94.363026] Code: 8b 13 4d 85 f6 75 0b 48 ff 05 2c e4 6a 01 31 ed eb 41 bf f8 ff ff ff ba ff ff 37 00 4c 01 f7 48 c1 e2 2a 48 89 f9 48 c1 e9 03 <80> 3c 11 00 74 05 e8 cd c0 67 fa ba f8 ff ff ff 49 8b 2c 16 48 85
[   94.366592] RSP: :c9047d30 EFLAGS: 00010a06
[   94.367693] RAX: 11049b80 RBX: 888380525308 RCX: 1110a7ffddff
[   94.369215] RDX: dc00 RSI: 111087ffdc00 RDI: 88853ffeeff8
[   94.370693] RBP: 0018e510 R08: 0025 R09: 0001
[   94.372165] R10: 888380523c07 R11: ed10700a4780 R12: 88843208e510
[   94.373674] R13: 0025 R14: 88843ffef000 R15: 31e01ae61000
[   94.375147] FS:  () GS:8883a380() knlGS:
[   94.376883] CS:  0010 DS:  ES:  CR0: 80050033
[   94.378051] CR2: ed10a7ffddff CR3: 04e15000 CR4: 000406a0
[   94.379522] Call Trace:
[   94.380073]  debug_vm_pgtable+0xd81/0x2029
[   94.380871]  ? pmd_advanced_tests+0x621/0x621
[   94.381819]  do_one_initcall+0x1eb/0xbd0
[   94.382551]  ? trace_event_raw_event_initcall_finish+0x240/0x240
[   94.383634]  ? rcu_read_lock_sched_held+0xb9/0x110
[   94.388727]  ? rcu_read_lock_held+0xd0/0xd0
[   94.389604]  ? __kasan_check_read+0x1d/0x30
[   94.390485]  kernel_init_freeable+0x430/0x4f8
[   94.391416]  ? rest_init+0x3f8/0x3f8
[   94.392185]  kernel_init+0x14/0x1e8
[   94.392918]  ret_from_fork+0x22/0x30
[   94.393662] Modules linked in:
[   94.394289] CR2: ed10a7ffddff
[   94.395000] ---[ end trace 8ca5a1655dfb8c39 ]---


To reproduce:

# build kernel
cd linux
cp config-5.8.0-rc4-2-ga97a17109332c .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage

git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k  job-script # job-script is attached in this email



Thanks,
lkp

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 5.8.0-rc4 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-9 (Debian 9.3.0-14) 9.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=90300
CONFIG_LD_VERSION=23400
CONFIG_CLANG_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_CONSTRUCTORS=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
# CONFIG_KERNEL_GZIP is not set
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
CONFIG_KERNEL_XZ=y
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
# CONFIG_SYSVIPC is not set
# 

Re: [PATCH] Replace HTTP links with HTTPS ones: USB MASS STORAGE DRIVER

2020-07-09 Thread Greg KH
On Wed, Jul 08, 2020 at 08:41:54PM +0200, Alexander A. Klimov wrote:
> 
> 
> Am 08.07.20 um 12:39 schrieb Greg KH:
> > On Wed, Jul 08, 2020 at 11:55:00AM +0200, Alexander A. Klimov wrote:
> > > Rationale:
> > > Reduces attack surface on kernel devs opening the links for MITM
> > > as HTTPS traffic is much harder to manipulate.
> > > 
> > > Deterministic algorithm:
> > > For each file:
> > >If not .svg:
> > >  For each line:
> > >If doesn't contain `\bxmlns\b`:
> > >  For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
> > > If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
> > >  If both the HTTP and HTTPS versions
> > >  return 200 OK and serve the same content:
> > >Replace HTTP with HTTPS.
> > > 
> > > Signed-off-by: Alexander A. Klimov 
> > 
> > Your subject lines are very odd compared to all patches for this
> > subsystem, as well as all other kernel subsystems.  Any reason you are
> > doing it this way and not the normal and standard method of:
> > USB: storage: replace http links with https
> > 
> > That would look more uniform as well as not shout at anyone.
> > 
> > thanks,
> > 
> > greg k-h
> > 
> Hi,
> 
> I'm very sorry.
> 
> As Torvalds has merged 93431e0607e5 and many of you devs (including big
> maintainers like David Miller) just applied this stuff, I assumed that's OK.
> 
> And now I've rolled out tens of patches via shell loop... *sigh*
> 
> As this is the third (I think) change request like this, I assume this rule
> applies to all subsystems – right?

Yes, you should try to emulate what the subsystem does, look at other
patches for the same files, but the format I suggested is almost always
the correct one.  If not, I'm sure maintainers will be glad to tell you
otherwise :)

thanks,

greg k-h


[PATCH v6.1 6/7] seccomp: Introduce addfd ioctl to seccomp user notifier

2020-07-09 Thread Kees Cook
From: Sargun Dhillon 

The current SECCOMP_RET_USER_NOTIF API allows for syscall supervision over
an fd. It is often used in settings where a supervising task emulates
syscalls on behalf of a supervised task in userspace, either to further
restrict the supervisee's syscall abilities or to circumvent kernel
enforced restrictions the supervisor deems safe to lift (e.g. actually
performing a mount(2) for an unprivileged container).

While SECCOMP_RET_USER_NOTIF allows for the interception of any syscall,
only a certain subset of syscalls could be correctly emulated. Over the
last few development cycles, the set of syscalls which can't be emulated
has been reduced due to the addition of pidfd_getfd(2). With this we are
now able to, for example, intercept syscalls that require the supervisor
to operate on file descriptors of the supervisee such as connect(2).

However, syscalls that cause new file descriptors to be installed can not
currently be correctly emulated since there is no way for the supervisor
to inject file descriptors into the supervisee. This patch adds a
new addfd ioctl to remove this restriction by allowing the supervisor to
install file descriptors into the intercepted task. By implementing this
feature via seccomp the supervisor effectively instructs the supervisee
to install a set of file descriptors into its own file descriptor table
during the intercepted syscall. This way it is possible to intercept
syscalls such as open() or accept(), and install (or replace, like
dup2(2)) the supervisor's resulting fd into the supervisee. One
replacement use-case would be to redirect the stdout and stderr of a
supervisee into log file descriptors opened by the supervisor.

The ioctl handling is based on the discussions[1] of how Extensible
Arguments should interact with ioctls. Instead of building size into
the addfd structure, make it a function of the ioctl command (which
is how sizes are normally passed to ioctls). To support forward and
backward compatibility, just mask out the direction and size, and match
everything. The size (and any future direction) checks are done along
with copy_struct_from_user() logic.
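
A minimal user-space model of that size handling is sketched below; it only mimics the semantics of copy_struct_from_user() and the _IOC_SIZE() masking, and is not the kernel implementation:

  #include <errno.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define ADDFD_SIZE_VER0 24              /* sizeof() the first published struct */

  struct notif_addfd {                    /* stand-in for seccomp_notif_addfd */
          uint64_t id;
          uint32_t flags;
          uint32_t srcfd;
          uint32_t newfd;
          uint32_t newfd_flags;
  };

  /* Same semantics as copy_struct_from_user(): accept shorter (older) and
   * longer (newer) user structs, as long as the unknown tail is all zero. */
  static int copy_struct_compat(void *dst, size_t ksize, const void *src,
                                size_t usize)
  {
          const unsigned char *u = src;
          size_t i;

          for (i = ksize; i < usize; i++)         /* newer user space */
                  if (u[i] != 0)
                          return -E2BIG;
          memset(dst, 0, ksize);                  /* older user space */
          memcpy(dst, src, usize < ksize ? usize : ksize);
          return 0;
  }

  /* 'usize' would come from _IOC_SIZE(cmd) in the real ioctl handler. */
  static int notify_addfd(const void *ubuf, size_t usize)
  {
          struct notif_addfd addfd;

          if (usize < ADDFD_SIZE_VER0 || usize >= 4096)
                  return -EINVAL;
          return copy_struct_compat(&addfd, sizeof(addfd), ubuf, usize);
  }

  int main(void)
  {
          struct notif_addfd a = { .id = 1, .srcfd = 3 };

          printf("%d\n", notify_addfd(&a, sizeof(a)));    /* 0 */
          printf("%d\n", notify_addfd(&a, 8));            /* -EINVAL: too small */
          return 0;
  }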

As a note, the seccomp_notif_addfd structure is laid out based on 8-byte
alignment without requiring packing as there have been packing issues
with uapi highlighted before[2][3]. Although we could overload the
newfd field and use -1 to indicate that it is not to be used, doing
so requires changing the size of the fd field, and introduces struct
packing complexity.

[1]: https://lore.kernel.org/lkml/87o8w9bcaf@mid.deneb.enyo.de/
[2]: https://lore.kernel.org/lkml/a328b91d-fd8f-4f27-b3c2-91a9c45f1...@rasmusvillemoes.dk/
[3]: https://lore.kernel.org/lkml/20200612104629.GA15814@ircssh-2.c.rugged-nimbus-611.internal

Cc: Christoph Hellwig 
Cc: Christian Brauner 
Cc: Tycho Andersen 
Cc: Jann Horn 
Cc: Robert Sesek 
Cc: Chris Palmer 
Cc: Al Viro 
Cc: linux-fsde...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-...@vger.kernel.org
Suggested-by: Matt Denton 
Link: https://lore.kernel.org/r/20200603011044.7972-4-sar...@sargun.me
Signed-off-by: Sargun Dhillon 
Co-developed-by: Kees Cook 
Signed-off-by: Kees Cook 
---
v6.1:
- clarify commit log (christian)
- add ..._SIZE_{VER0,LATEST} and BUILD_BUG_ON()s (christian)
- remove undef (christian)
- fix embedded URL reference numbers
v6: https://lore.kernel.org/lkml/20200707133049.nfxc6vz6vcs26m3b@wittgenstein
---
 include/linux/seccomp.h  |   4 +
 include/uapi/linux/seccomp.h |  22 +
 kernel/seccomp.c | 173 ++-
 3 files changed, 198 insertions(+), 1 deletion(-)

diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index babcd6c02d09..881c90b6aa25 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -10,6 +10,10 @@
 SECCOMP_FILTER_FLAG_NEW_LISTENER | \
 SECCOMP_FILTER_FLAG_TSYNC_ESRCH)
 
+/* sizeof() the first published struct seccomp_notif_addfd */
+#define SECCOMP_NOTIFY_ADDFD_SIZE_VER0 24
+#define SECCOMP_NOTIFY_ADDFD_SIZE_LATEST SECCOMP_NOTIFY_ADDFD_SIZE_VER0
+
 #ifdef CONFIG_SECCOMP
 
 #include 
diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
index 965290f7dcc2..6ba18b82a02e 100644
--- a/include/uapi/linux/seccomp.h
+++ b/include/uapi/linux/seccomp.h
@@ -113,6 +113,25 @@ struct seccomp_notif_resp {
__u32 flags;
 };
 
+/* valid flags for seccomp_notif_addfd */
+#define SECCOMP_ADDFD_FLAG_SETFD   (1UL << 0) /* Specify remote fd */
+
+/**
+ * struct seccomp_notif_addfd
+ * @id: The ID of the seccomp notification
+ * @flags: SECCOMP_ADDFD_FLAG_*
+ * @srcfd: The local fd number
+ * @newfd: Optional remote FD number if SETFD option is set, otherwise 0.
+ * @newfd_flags: The O_* flags the remote FD should have applied
+ */
+struct seccomp_notif_addfd {
+   __u64 id;
+   __u32 flags;
+   __u32 srcfd;
+   __u32 newfd;
+   __u32 

[PATCH v3 02/12] ima: Free the entire rule when deleting a list of rules

2020-07-09 Thread Tyler Hicks
Create a function, ima_free_rule(), to free all memory associated with
an ima_rule_entry. Use the new function to fix memory leaks of allocated
ima_rule_entry members, such as .fsname and .keyrings, when deleting a
list of rules.

Make the existing ima_lsm_free_rule() function specific to the LSM
audit rule array of an ima_rule_entry and require that callers make an
additional call to kfree to free the ima_rule_entry itself.

This fixes a memory leak seen when loading a valid rule that contains
an additional piece of allocated memory, such as an fsname, followed by
an invalid rule that triggers a policy load failure:

 # echo -e "dont_measure fsname=securityfs\nbad syntax" > \
/sys/kernel/security/ima/policy
 -bash: echo: write error: Invalid argument
 # echo scan > /sys/kernel/debug/kmemleak
 # cat /sys/kernel/debug/kmemleak
 unreferenced object 0x9bab67ca12c0 (size 16):
   comm "bash", pid 684, jiffies 4295212803 (age 252.344s)
   hex dump (first 16 bytes):
 73 65 63 75 72 69 74 79 66 73 00 6b 6b 6b 6b a5  securityfs..
   backtrace:
 [] kstrdup+0x2e/0x60
 [] ima_parse_add_rule+0x7d4/0x1020
 [<444825ac>] ima_write_policy+0xab/0x1d0
 [<2b7f0d6c>] vfs_write+0xde/0x1d0
 [<96feedcf>] ksys_write+0x68/0xe0
 [<52b544a2>] do_syscall_64+0x56/0xa0
 [<7ead1ba7>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: f1b08bbcbdaf ("ima: define a new policy condition based on the filesystem name")
Fixes: 2b60c0ecedf8 ("IMA: Read keyrings= option from the IMA policy")
Signed-off-by: Tyler Hicks 
---

* v3
  - No change
* v2
  - Collapsed patch #2 from v1 of this series, into this patch. This
patch now introduces ima_free_rule().
  - Existing callers of ima_lsm_free_rule() are doing so to free rules
after a successful or failed ima_lsm_copy_rule() and those callers
continue to directly call ima_lsm_copy_rule() rather than doing
explicit reference ownership and calling ima_free_rule().
  - The kfree(entry) of ima_lsm_free_rule() was removed from that
function to make it focused on freeing the LSM references. Direct
callers of ima_lsm_free_rule() must now call kfree(entry) after
ima_lsm_free_rule().
  - A comment was added in ima_lsm_update_rule() to clarify why
ima_free_rule() isn't being used.

 security/integrity/ima/ima_policy.c | 29 -
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index d7c268c2b0ce..bf00b966e87f 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -261,6 +261,21 @@ static void ima_lsm_free_rule(struct ima_rule_entry *entry)
security_filter_rule_free(entry->lsm[i].rule);
kfree(entry->lsm[i].args_p);
}
+}
+
+static void ima_free_rule(struct ima_rule_entry *entry)
+{
+   if (!entry)
+   return;
+
+   /*
+* entry->template->fields may be allocated in ima_parse_rule() but that
+* reference is owned by the corresponding ima_template_desc element in
+* the defined_templates list and cannot be freed here
+*/
+   kfree(entry->fsname);
+   kfree(entry->keyrings);
+   ima_lsm_free_rule(entry);
kfree(entry);
 }
 
@@ -302,6 +317,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
 
 out_err:
ima_lsm_free_rule(nentry);
+   kfree(nentry);
return NULL;
 }
 
@@ -315,7 +331,14 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
 
list_replace_rcu(>list, >list);
synchronize_rcu();
+   /*
+* ima_lsm_copy_rule() shallow copied all references, except for the
+* LSM references, from entry to nentry so we only want to free the LSM
+* references and the entry itself. All other memory references will now
+* be owned by nentry.
+*/
ima_lsm_free_rule(entry);
+   kfree(entry);
 
return 0;
 }
@@ -1402,15 +1425,11 @@ ssize_t ima_parse_add_rule(char *rule)
 void ima_delete_rules(void)
 {
struct ima_rule_entry *entry, *tmp;
-   int i;
 
temp_ima_appraise = 0;
list_for_each_entry_safe(entry, tmp, _temp_rules, list) {
-   for (i = 0; i < MAX_LSM_RULES; i++)
-   kfree(entry->lsm[i].args_p);
-
list_del(>list);
-   kfree(entry);
+   ima_free_rule(entry);
}
 }
 
-- 
2.25.1



[PATCH v3 00/12] ima: Fix rule parsing bugs and extend KEXEC_CMDLINE rule support

2020-07-09 Thread Tyler Hicks
This series ultimately extends the supported IMA rule conditionals for
the KEXEC_CMDLINE hook function. As of today, there's an imbalance in
IMA language conditional support for KEXEC_CMDLINE rules in comparison
to KEXEC_KERNEL_CHECK and KEXEC_INITRAMFS_CHECK rules. The KEXEC_CMDLINE
rules do not support *any* conditionals so you cannot have a sequence of
rules like this:

 dont_measure func=KEXEC_KERNEL_CHECK obj_type=foo_t
 dont_measure func=KEXEC_INITRAMFS_CHECK obj_type=foo_t
 dont_measure func=KEXEC_CMDLINE obj_type=foo_t
 measure func=KEXEC_KERNEL_CHECK
 measure func=KEXEC_INITRAMFS_CHECK
 measure func=KEXEC_CMDLINE

Instead, KEXEC_CMDLINE rules can only be measured or not measured and
there's no additional flexibility in today's implementation of the
KEXEC_CMDLINE hook function.

With this series, the above sequence of rules becomes valid and any
calls to kexec_file_load() with a kernel and initramfs inode type of
foo_t will not be measured (that includes the kernel cmdline buffer)
while all other objects given to a kexec_file_load() syscall will be
measured. There's obviously not an inode directly associated with the
kernel cmdline buffer but this patch series ties the inode based
decision making for KEXEC_CMDLINE to the kernel's inode. I think this
will be intuitive to policy authors.

While reading IMA code and preparing to make this change, I realized
that the buffer based hook functions (KEXEC_CMDLINE and KEY_CHECK) are
quite special in comparison to longer standing hook functions. These
buffer based hook functions can only support measure actions and there
are some restrictions on the conditionals that they support. However,
the rule parser isn't enforcing any of those restrictions and IMA policy
authors wouldn't have any immediate way of knowing that the policy that
they wrote is invalid. For example, the sequence of rules above parses
successfully in today's kernel but the
"dont_measure func=KEXEC_CMDLINE ..." rule is incorrectly handled in
ima_match_rules(). The dont_measure rule is *always* considered to be a
match so, surprisingly, no KEXEC_CMDLINE measurements are made.

While making the rule parser more strict, I realized that the parser
does not correctly free all of the allocated memory associated with an
ima_rule_entry when going down some error paths. Invalid policy loaded
by the policy administrator could result in small memory leaks.

I envision patches 1-7 going to stable. The series is ordered in a way
that has all the fixes up front, followed by cleanups, followed by the
feature patch. The breakdown of patches looks like so:

 Memory leak fixes: 1-3
 Parser strictness fixes: 4-7
 Code cleanups made possible by the fixes: 8-11
 Extend KEXEC_CMDLINE rule support: 12

Perhaps the most logical ordering for code review is:

 1, 2, 3, 8, 9, 4, 5, 6, 7, 10, 11, 12

If you'd like me to re-order or split up the series, just let me know.
Thanks for considering these patches!

* Series-wide v3 changes
  - Indentation changes in patch #4 which caused some churn
  - Added patch #7
  - Significant changes to patch #10 to address Mimi's requests
* Series-wide v2 changes
  - Rebased onto next-integrity-testing
  - Squashed patches 2 and 3 from v1
+ Updated this cover letter to account for patch index changes
+ See patch 2 for specific code changes

Tyler

Tyler Hicks (12):
  ima: Have the LSM free its audit rule
  ima: Free the entire rule when deleting a list of rules
  ima: Free the entire rule if it fails to parse
  ima: Fail rule parsing when buffer hook functions have an invalid
action
  ima: Fail rule parsing when the KEXEC_CMDLINE hook is combined with an
invalid cond
  ima: Fail rule parsing when the KEY_CHECK hook is combined with an
invalid cond
  ima: Fail rule parsing when appraise_flag=blacklist is unsupportable
  ima: Shallow copy the args_p member of ima_rule_entry.lsm elements
  ima: Use correct type for the args_p member of ima_rule_entry.lsm
elements
  ima: Move comprehensive rule validation checks out of the token parser
  ima: Use the common function to detect LSM conditionals in a rule
  ima: Support additional conditionals in the KEXEC_CMDLINE hook
function

 include/linux/ima.h  |   4 +-
 kernel/kexec_file.c  |   2 +-
 security/integrity/ima/ima.h |  13 +-
 security/integrity/ima/ima_api.c |   2 +-
 security/integrity/ima/ima_appraise.c|   2 +-
 security/integrity/ima/ima_asymmetric_keys.c |   2 +-
 security/integrity/ima/ima_main.c|  23 ++-
 security/integrity/ima/ima_modsig.c  |  20 --
 security/integrity/ima/ima_policy.c  | 206 ++-
 security/integrity/ima/ima_queue_keys.c  |   2 +-
 10 files changed, 182 insertions(+), 94 deletions(-)

-- 
2.25.1



[PATCH v2] EDAC-I7300: Replace HTTP links with HTTPS ones

2020-07-09 Thread Alexander A. Klimov
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
For each line:
  If doesn't contain `\bxmlns\b`:
For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
  If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
If both the HTTP and HTTPS versions
return 200 OK and serve the same content:
  Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov 
---
 drivers/edac/i7300_edac.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/edac/i7300_edac.c b/drivers/edac/i7300_edac.c
index 2e9bbe56cde9..4f28b8c8d378 100644
--- a/drivers/edac/i7300_edac.c
+++ b/drivers/edac/i7300_edac.c
@@ -5,7 +5,7 @@
  * Copyright (c) 2010 by:
  *  Mauro Carvalho Chehab
  *
- * Red Hat Inc. http://www.redhat.com
+ * Red Hat Inc. https://www.redhat.com
  *
  * Intel 7300 Chipset Memory Controller Hub (MCH) - Datasheet
  * http://www.intel.com/Assets/PDF/datasheet/318082.pdf
@@ -1206,7 +1206,7 @@ module_exit(i7300_exit);
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Mauro Carvalho Chehab");
-MODULE_AUTHOR("Red Hat Inc. (http://www.redhat.com)");
+MODULE_AUTHOR("Red Hat Inc. (https://www.redhat.com)");
 MODULE_DESCRIPTION("MC Driver for Intel I7300 memory controllers - "
   I7300_REVISION);
 
-- 
2.27.0



[PATCH v3 05/12] ima: Fail rule parsing when the KEXEC_CMDLINE hook is combined with an invalid cond

2020-07-09 Thread Tyler Hicks
The KEXEC_CMDLINE hook function only supports the pcr conditional. Make
this clear at policy load so that IMA policy authors don't assume that
other conditionals are supported.

Since KEXEC_CMDLINE's inception, ima_match_rules() has always returned
true on any loaded KEXEC_CMDLINE rule without any consideration for
other conditionals present in the rule. Make it clear that pcr is the
only supported KEXEC_CMDLINE conditional by returning an error during
policy load.

The following rule illustrates the problem:

 dont_measure func=KEXEC_CMDLINE obj_type=foo_t

An IMA policy author would have assumed that rule is valid because the
parser accepted it but the result was that measurements for all
KEXEC_CMDLINE operations would be disabled.

Fixes: b0935123a183 ("IMA: Define a new hook to measure the kexec boot command line arguments")
Signed-off-by: Tyler Hicks 
Reviewed-by: Mimi Zohar 
Reviewed-by: Lakshmi Ramasubramanian 
---

* v3
  - Adjust for the indentation change introduced in patch #4
  - Added Lakshmi's Reviewed-by
* v2
  - Added Mimi's Reviewed-by

 security/integrity/ima/ima_policy.c | 21 +
 1 file changed, 21 insertions(+)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index 40c28f1a6a5a..1c64bd6f1728 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -343,6 +343,17 @@ static int ima_lsm_update_rule(struct ima_rule_entry *entry)
return 0;
 }
 
+static bool ima_rule_contains_lsm_cond(struct ima_rule_entry *entry)
+{
+   int i;
+
+   for (i = 0; i < MAX_LSM_RULES; i++)
+   if (entry->lsm[i].args_p)
+   return true;
+
+   return false;
+}
+
 /*
  * The LSM policy can be reloaded, leaving the IMA LSM based rules referring
  * to the old, stale LSM policy.  Update the IMA LSM based rules to reflect
@@ -998,6 +1009,16 @@ static bool ima_validate_rule(struct ima_rule_entry *entry)
/* Validation of these hook functions is in ima_parse_rule() */
break;
case KEXEC_CMDLINE:
+   if (entry->action & ~(MEASURE | DONT_MEASURE))
+   return false;
+
+   if (entry->flags & ~(IMA_FUNC | IMA_PCR))
+   return false;
+
+   if (ima_rule_contains_lsm_cond(entry))
+   return false;
+
+   break;
case KEY_CHECK:
if (entry->action & ~(MEASURE | DONT_MEASURE))
return false;
-- 
2.25.1



[PATCH v3 03/12] ima: Free the entire rule if it fails to parse

2020-07-09 Thread Tyler Hicks
Use ima_free_rule() to fix memory leaks of allocated ima_rule_entry
members, such as .fsname and .keyrings, when an error is encountered
during rule parsing.

Set the args_p pointer to NULL after freeing it in the error path of
ima_lsm_rule_init() so that it isn't freed twice.

This fixes a memory leak seen when loading a rule that contains an
additional piece of allocated memory, such as an fsname, followed by an
invalid conditional:

 # echo "measure fsname=tmpfs bad=cond" > /sys/kernel/security/ima/policy
 -bash: echo: write error: Invalid argument
 # echo scan > /sys/kernel/debug/kmemleak
 # cat /sys/kernel/debug/kmemleak
 unreferenced object 0x98e7e4ece6c0 (size 8):
   comm "bash", pid 672, jiffies 4294791843 (age 21.855s)
   hex dump (first 8 bytes):
 74 6d 70 66 73 00 6b a5  tmpfs.k.
   backtrace:
 [] kstrdup+0x2e/0x60
 [] ima_parse_add_rule+0x7d4/0x1020
 [] ima_write_policy+0xab/0x1d0
 [] vfs_write+0xde/0x1d0
 [] ksys_write+0x68/0xe0
 [] do_syscall_64+0x56/0xa0
 [<89ea7b98>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: f1b08bbcbdaf ("ima: define a new policy condition based on the filesystem name")
Fixes: 2b60c0ecedf8 ("IMA: Read keyrings= option from the IMA policy")
Signed-off-by: Tyler Hicks 
---

* v3
  - No change
* v2
  - No change

 security/integrity/ima/ima_policy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index bf00b966e87f..e458cd47c099 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -913,6 +913,7 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
 
if (ima_rules == _default_rules) {
kfree(entry->lsm[lsm_rule].args_p);
+   entry->lsm[lsm_rule].args_p = NULL;
result = -EINVAL;
} else
result = 0;
@@ -1404,7 +1405,7 @@ ssize_t ima_parse_add_rule(char *rule)
 
result = ima_parse_rule(p, entry);
if (result) {
-   kfree(entry);
+   ima_free_rule(entry);
integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL,
NULL, op, "invalid-policy", result,
audit_info);
-- 
2.25.1



[PATCH v3 01/12] ima: Have the LSM free its audit rule

2020-07-09 Thread Tyler Hicks
Ask the LSM to free its audit rule rather than directly calling kfree().
Both AppArmor and SELinux do additional work in their audit_rule_free()
hooks. Fix memory leaks by allowing the LSMs to perform necessary work.

Fixes: b16942455193 ("ima: use the lsm policy update notifier")
Signed-off-by: Tyler Hicks 
Cc: Janne Karhunen 
Cc: Casey Schaufler 
Reviewed-by: Mimi Zohar 
---

* v3
  - No change
* v2
  - Fixed build warning by dropping the 'return -EINVAL' from
the stubbed out security_filter_rule_free() since it has a void
return type
  - Added Mimi's Reviewed-by
  - Developed a follow-on patch to rename security_filter_rule_*()
functions, to address Casey's request, but I'll submit it
independently of this patch series since it is somewhat unrelated

 security/integrity/ima/ima.h| 5 +
 security/integrity/ima/ima_policy.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index 4515975cc540..59ec28f5c117 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -420,6 +420,7 @@ static inline void ima_free_modsig(struct modsig *modsig)
 #ifdef CONFIG_IMA_LSM_RULES
 
 #define security_filter_rule_init security_audit_rule_init
+#define security_filter_rule_free security_audit_rule_free
 #define security_filter_rule_match security_audit_rule_match
 
 #else
@@ -430,6 +431,10 @@ static inline int security_filter_rule_init(u32 field, u32 op, char *rulestr,
return -EINVAL;
 }
 
+static inline void security_filter_rule_free(void *lsmrule)
+{
+}
+
 static inline int security_filter_rule_match(u32 secid, u32 field, u32 op,
 void *lsmrule)
 {
diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index 66aa3e17a888..d7c268c2b0ce 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -258,7 +258,7 @@ static void ima_lsm_free_rule(struct ima_rule_entry *entry)
int i;
 
for (i = 0; i < MAX_LSM_RULES; i++) {
-   kfree(entry->lsm[i].rule);
+   security_filter_rule_free(entry->lsm[i].rule);
kfree(entry->lsm[i].args_p);
}
kfree(entry);
-- 
2.25.1



[PATCH v3 09/12] ima: Use correct type for the args_p member of ima_rule_entry.lsm elements

2020-07-09 Thread Tyler Hicks
Make args_p be of the char pointer type rather than have it be a void
pointer that gets cast to a char pointer when it is used. It is a simple
NUL-terminated string as returned by match_strdup().

Signed-off-by: Tyler Hicks 
---

* v3
  - No change
* v2
  - No change

 security/integrity/ima/ima_policy.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index b02e1ffd10c9..13a178c70b44 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -74,7 +74,7 @@ struct ima_rule_entry {
int pcr;
struct {
void *rule; /* LSM file metadata specific */
-   void *args_p;   /* audit value */
+   char *args_p;   /* audit value */
int type;   /* audit type */
} lsm[MAX_LSM_RULES];
char *fsname;
@@ -314,7 +314,7 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry)
  >lsm[i].rule);
if (!nentry->lsm[i].rule)
pr_warn("rule for LSM \'%s\' is undefined\n",
-   (char *)entry->lsm[i].args_p);
+   entry->lsm[i].args_p);
}
return nentry;
 }
@@ -918,7 +918,7 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
   >lsm[lsm_rule].rule);
if (!entry->lsm[lsm_rule].rule) {
pr_warn("rule for LSM \'%s\' is undefined\n",
-   (char *)entry->lsm[lsm_rule].args_p);
+   entry->lsm[lsm_rule].args_p);
 
if (ima_rules == _default_rules) {
kfree(entry->lsm[lsm_rule].args_p);
@@ -1682,27 +1682,27 @@ int ima_policy_show(struct seq_file *m, void *v)
switch (i) {
case LSM_OBJ_USER:
seq_printf(m, pt(Opt_obj_user),
-  (char *)entry->lsm[i].args_p);
+  entry->lsm[i].args_p);
break;
case LSM_OBJ_ROLE:
seq_printf(m, pt(Opt_obj_role),
-  (char *)entry->lsm[i].args_p);
+  entry->lsm[i].args_p);
break;
case LSM_OBJ_TYPE:
seq_printf(m, pt(Opt_obj_type),
-  (char *)entry->lsm[i].args_p);
+  entry->lsm[i].args_p);
break;
case LSM_SUBJ_USER:
seq_printf(m, pt(Opt_subj_user),
-  (char *)entry->lsm[i].args_p);
+  entry->lsm[i].args_p);
break;
case LSM_SUBJ_ROLE:
seq_printf(m, pt(Opt_subj_role),
-  (char *)entry->lsm[i].args_p);
+  entry->lsm[i].args_p);
break;
case LSM_SUBJ_TYPE:
seq_printf(m, pt(Opt_subj_type),
-  (char *)entry->lsm[i].args_p);
+  entry->lsm[i].args_p);
break;
}
seq_puts(m, " ");
-- 
2.25.1



[PATCH v3 06/12] ima: Fail rule parsing when the KEY_CHECK hook is combined with an invalid cond

2020-07-09 Thread Tyler Hicks
The KEY_CHECK function only supports the uid, pcr, and keyrings
conditionals. Make this clear at policy load so that IMA policy authors
don't assume that other conditionals are supported.

Fixes: 5808611cccb2 ("IMA: Add KEY_CHECK func to measure keys")
Signed-off-by: Tyler Hicks 
Reviewed-by: Lakshmi Ramasubramanian 
---

* v3
  - Added Lakshmi's Reviewed-by
  - Adjust for the indentation change introduced in patch #4
* v2
  - No change

 security/integrity/ima/ima_policy.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
index 1c64bd6f1728..81da02071d41 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -1023,6 +1023,13 @@ static bool ima_validate_rule(struct ima_rule_entry *entry)
if (entry->action & ~(MEASURE | DONT_MEASURE))
return false;
 
+   if (entry->flags & ~(IMA_FUNC | IMA_UID | IMA_PCR |
+IMA_KEYRINGS))
+   return false;
+
+   if (ima_rule_contains_lsm_cond(entry))
+   return false;
+
break;
default:
return false;
-- 
2.25.1



[PATCH v3 12/12] ima: Support additional conditionals in the KEXEC_CMDLINE hook function

2020-07-09 Thread Tyler Hicks
Take the properties of the kexec kernel's inode and the current task
ownership into consideration when matching a KEXEC_CMDLINE operation to
the rules in the IMA policy. This allows for some uniformity when
writing IMA policy rules for KEXEC_KERNEL_CHECK, KEXEC_INITRAMFS_CHECK,
and KEXEC_CMDLINE operations.

Prior to this patch, it was not possible to write a set of rules like
this:

 dont_measure func=KEXEC_KERNEL_CHECK obj_type=foo_t
 dont_measure func=KEXEC_INITRAMFS_CHECK obj_type=foo_t
 dont_measure func=KEXEC_CMDLINE obj_type=foo_t
 measure func=KEXEC_KERNEL_CHECK
 measure func=KEXEC_INITRAMFS_CHECK
 measure func=KEXEC_CMDLINE

The inode information associated with the kernel being loaded by a
kexec_file_load(2) syscall can now be included in the decision to
measure or not.

Additionally, the uid, euid, and subj_* conditionals can also now be
used in KEXEC_CMDLINE rules. There was no technical reason as to why
those conditionals weren't being considered previously other than
ima_match_rules() didn't have a valid inode to use so it immediately
bailed out for KEXEC_CMDLINE operations rather than going through the
full list of conditional comparisons.
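
A sketch of how the kernel fd can be resolved to that inode on the IMA side; fdget(), file_inode(), and fdput() are existing kernel helpers, but the exact body of ima_kexec_cmdline() in this patch may differ:

  void ima_kexec_cmdline(int kernel_fd, const void *buf, int size)
  {
          struct fd f;

          if (!buf || !size)
                  return;

          f = fdget(kernel_fd);
          if (!f.file)
                  return;

          /* The kexec kernel's inode now drives the policy match for the
           * cmdline buffer, alongside the current task's credentials. */
          process_buffer_measurement(file_inode(f.file), buf, size,
                                     "kexec-cmdline", KEXEC_CMDLINE, 0, NULL);
          fdput(f);
  }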

Signed-off-by: Tyler Hicks 
Cc: Eric Biederman 
Cc: ke...@lists.infradead.org
Reviewed-by: Lakshmi Ramasubramanian 
---

* v3
  - Added Lakshmi's Reviewed-by
  - Adjust for the indentation change introduced in patch #4
* v2
  - Moved the inode parameter of process_buffer_measurement() to be the
first parameter so that it more closely matches process_measurement()

 include/linux/ima.h  |  4 ++--
 kernel/kexec_file.c  |  2 +-
 security/integrity/ima/ima.h |  2 +-
 security/integrity/ima/ima_api.c |  2 +-
 security/integrity/ima/ima_appraise.c|  2 +-
 security/integrity/ima/ima_asymmetric_keys.c |  2 +-
 security/integrity/ima/ima_main.c| 23 +++-
 security/integrity/ima/ima_policy.c  | 17 +--
 security/integrity/ima/ima_queue_keys.c  |  2 +-
 9 files changed, 31 insertions(+), 25 deletions(-)

diff --git a/include/linux/ima.h b/include/linux/ima.h
index 9164e1534ec9..d15100de6cdd 100644
--- a/include/linux/ima.h
+++ b/include/linux/ima.h
@@ -25,7 +25,7 @@ extern int ima_post_read_file(struct file *file, void *buf, 
loff_t size,
  enum kernel_read_file_id id);
 extern void ima_post_path_mknod(struct dentry *dentry);
 extern int ima_file_hash(struct file *file, char *buf, size_t buf_size);
-extern void ima_kexec_cmdline(const void *buf, int size);
+extern void ima_kexec_cmdline(int kernel_fd, const void *buf, int size);
 
 #ifdef CONFIG_IMA_KEXEC
 extern void ima_add_kexec_buffer(struct kimage *image);
@@ -103,7 +103,7 @@ static inline int ima_file_hash(struct file *file, char *buf, size_t buf_size)
return -EOPNOTSUPP;
 }
 
-static inline void ima_kexec_cmdline(const void *buf, int size) {}
+static inline void ima_kexec_cmdline(int kernel_fd, const void *buf, int size) {}
 #endif /* CONFIG_IMA */
 
 #ifndef CONFIG_IMA_KEXEC
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index bb05fd52de85..07df431c1f21 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -287,7 +287,7 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
goto out;
}
 
-   ima_kexec_cmdline(image->cmdline_buf,
+   ima_kexec_cmdline(kernel_fd, image->cmdline_buf,
  image->cmdline_buf_len - 1);
}
 
diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index ea7e77536f3c..576ae2c6d418 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -265,7 +265,7 @@ void ima_store_measurement(struct integrity_iint_cache *iint, struct file *file,
   struct evm_ima_xattr_data *xattr_value,
   int xattr_len, const struct modsig *modsig, int pcr,
   struct ima_template_desc *template_desc);
-void process_buffer_measurement(const void *buf, int size,
+void process_buffer_measurement(struct inode *inode, const void *buf, int size,
const char *eventname, enum ima_hooks func,
int pcr, const char *keyring);
 void ima_audit_measurement(struct integrity_iint_cache *iint,
diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c
index bf22de8b7ce0..4f39fb93f278 100644
--- a/security/integrity/ima/ima_api.c
+++ b/security/integrity/ima/ima_api.c
@@ -162,7 +162,7 @@ void ima_add_violation(struct file *file, const unsigned char *filename,
 
 /**
  * ima_get_action - appraise & measure decision based on policy.
- * @inode: pointer to inode to measure
+ * @inode: pointer to the inode associated with the object being validated
  * @cred: pointer to credentials structure to validate
  * @secid: secid 

[PATCH v3 10/12] ima: Move comprehensive rule validation checks out of the token parser

2020-07-09 Thread Tyler Hicks
Use ima_validate_rule(), at the end of the token parsing stage, to
verify combinations of actions, hooks, and flags. This is useful to
increase readability by consolidating such checks into a single function
and also because rule conditionals can be specified in arbitrary order
making it difficult to do comprehensive rule validation until the entire
rule has been parsed.

This allows for the check that ties together the "keyrings" conditional
with the KEY_CHECK function hook to be moved into the final rule
validation.

The modsig check no longer needs to be compiled conditionally because the
token parser will ensure that modsig support is enabled before accepting
"imasig|modsig" appraise type values. The final rule validation will
ensure that appraise_type and appraise_flag options are only present in
appraise rules.

Finally, this allows for the check that ties together the "pcr"
conditional with the measure action to be moved into the final rule
validation.

Signed-off-by: Tyler Hicks 
---

* v3
  - Significant broadening of the patch's scope along with renaming and
re-describing the patch. ima_validate_rule() is now the consolidated
location for checking combinations of
actions/functions/conditionals and the existing checks are now
removed from the token parsing code.
+ Ensure that the IMA_FUNC flag is only set when a function hook is
  specified, and vice versa, which allows us to use the NONE case in
  the switch statement to enforce that "keyrings=",
  "appraise_type=imasig|modsig", and "appraise_flag=blacklist"
  cannot be specified on a rule without an appropriate hook function
  - Adjust for the indentation change introduced in patch #4
  - Adjust for new fixes introduced in patch #7
* v2
  - Allowed IMA_DIGSIG_REQUIRED, IMA_PERMIT_DIRECTIO,
IMA_MODSIG_ALLOWED, and IMA_CHECK_BLACKLIST conditionals to be
present in the rule entry flags for non-buffer hook functions.

 security/integrity/ima/ima.h|  6 ---
 security/integrity/ima/ima_modsig.c | 20 --
 security/integrity/ima/ima_policy.c | 57 +++--
 3 files changed, 37 insertions(+), 46 deletions(-)

diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index 59ec28f5c117..ea7e77536f3c 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -372,7 +372,6 @@ static inline int ima_read_xattr(struct dentry *dentry,
 #endif /* CONFIG_IMA_APPRAISE */
 
 #ifdef CONFIG_IMA_APPRAISE_MODSIG
-bool ima_hook_supports_modsig(enum ima_hooks func);
 int ima_read_modsig(enum ima_hooks func, const void *buf, loff_t buf_len,
struct modsig **modsig);
 void ima_collect_modsig(struct modsig *modsig, const void *buf, loff_t size);
@@ -382,11 +381,6 @@ int ima_get_raw_modsig(const struct modsig *modsig, const void **data,
   u32 *data_len);
 void ima_free_modsig(struct modsig *modsig);
 #else
-static inline bool ima_hook_supports_modsig(enum ima_hooks func)
-{
-   return false;
-}
-
 static inline int ima_read_modsig(enum ima_hooks func, const void *buf,
  loff_t buf_len, struct modsig **modsig)
 {
diff --git a/security/integrity/ima/ima_modsig.c b/security/integrity/ima/ima_modsig.c
index d106885cc495..fb25723c65bc 100644
--- a/security/integrity/ima/ima_modsig.c
+++ b/security/integrity/ima/ima_modsig.c
@@ -32,26 +32,6 @@ struct modsig {
u8 raw_pkcs7[];
 };
 
-/**
- * ima_hook_supports_modsig - can the policy allow modsig for this hook?
- *
- * modsig is only supported by hooks using ima_post_read_file(), because only
- * they preload the contents of the file in a buffer. FILE_CHECK does that in
- * some cases, but not when reached from vfs_open(). POLICY_CHECK can support
- * it, but it's not useful in practice because it's a text file so deny.
- */
-bool ima_hook_supports_modsig(enum ima_hooks func)
-{
-   switch (func) {
-   case KEXEC_KERNEL_CHECK:
-   case KEXEC_INITRAMFS_CHECK:
-   case MODULE_CHECK:
-   return true;
-   default:
-   return false;
-   }
-}
-
 /*
  * ima_read_modsig - Read modsig from buf.
  *
diff --git a/security/integrity/ima/ima_policy.c 
b/security/integrity/ima/ima_policy.c
index 13a178c70b44..c4d0a0c1f896 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -984,10 +984,27 @@ static void check_template_modsig(const struct 
ima_template_desc *template)
 
 static bool ima_validate_rule(struct ima_rule_entry *entry)
 {
-   /* Ensure that the action is set */
+   /* Ensure that the action is set and is compatible with the flags */
if (entry->action == UNKNOWN)
return false;
 
+   if (entry->action != MEASURE && entry->flags & IMA_PCR)
+   return false;
+
+   if (entry->action != APPRAISE &&
+   entry->flags & (IMA_DIGSIG_REQUIRED | IMA_MODSIG_ALLOWED | 
IMA_CHECK_BLACKLIST))
+   

[PATCH v3 08/12] ima: Shallow copy the args_p member of ima_rule_entry.lsm elements

2020-07-09 Thread Tyler Hicks
The args_p member is a simple string that is allocated by
ima_rule_init(). Shallow copy it like other non-LSM references in
ima_rule_entry structs.

There are no longer any necessary error path cleanups to do in
ima_lsm_copy_rule().

Signed-off-by: Tyler Hicks 
---

* v3
  - No change
* v2
  - Adjusted context to account for ima_lsm_copy_rule() directly calling
ima_lsm_free_rule() and the lack of explicit reference ownership
transfers
  - Added comment to ima_lsm_copy_rule() to document the args_p
reference ownership transfer

 security/integrity/ima/ima_policy.c | 16 +++-
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c 
b/security/integrity/ima/ima_policy.c
index 9842e2e0bc6d..b02e1ffd10c9 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -300,10 +300,13 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct 
ima_rule_entry *entry)
continue;
 
nentry->lsm[i].type = entry->lsm[i].type;
-   nentry->lsm[i].args_p = kstrdup(entry->lsm[i].args_p,
-   GFP_KERNEL);
-   if (!nentry->lsm[i].args_p)
-   goto out_err;
+   nentry->lsm[i].args_p = entry->lsm[i].args_p;
+   /*
+* Remove the reference from entry so that the associated
+* memory will not be freed during a later call to
+* ima_lsm_free_rule(entry).
+*/
+   entry->lsm[i].args_p = NULL;
 
security_filter_rule_init(nentry->lsm[i].type,
  Audit_equal,
@@ -314,11 +317,6 @@ static struct ima_rule_entry *ima_lsm_copy_rule(struct 
ima_rule_entry *entry)
(char *)entry->lsm[i].args_p);
}
return nentry;
-
-out_err:
-   ima_lsm_free_rule(nentry);
-   kfree(nentry);
-   return NULL;
 }
 
 static int ima_lsm_update_rule(struct ima_rule_entry *entry)
-- 
2.25.1



[PATCH v3 07/12] ima: Fail rule parsing when appraise_flag=blacklist is unsupportable

2020-07-09 Thread Tyler Hicks
The "appraise_flag" option is only appropriate for appraise actions
and its "blacklist" value is only appropriate when
CONFIG_IMA_APPRAISE_MODSIG is enabled and "appraise_flag=blacklist" is
only appropriate when "appraise_type=imasig|modsig" is also present.
Make this clear at policy load so that IMA policy authors don't assume
that other uses of "appraise_flag=blacklist" are supported.

Fixes: 273df864cf74 ("ima: Check against blacklisted hashes for files with 
modsig")
Signed-off-by: Tyler Hicks 
Cc: Nayna Jain 
---

* v3
  - New patch

 security/integrity/ima/ima_policy.c | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/security/integrity/ima/ima_policy.c 
b/security/integrity/ima/ima_policy.c
index 81da02071d41..9842e2e0bc6d 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -1035,6 +1035,11 @@ static bool ima_validate_rule(struct ima_rule_entry 
*entry)
return false;
}
 
+   /* Ensure that combinations of flags are compatible with each other */
+   if (entry->flags & IMA_CHECK_BLACKLIST &&
+   !(entry->flags & IMA_MODSIG_ALLOWED))
+   return false;
+
return true;
 }
 
@@ -1371,8 +1376,14 @@ static int ima_parse_rule(char *rule, struct 
ima_rule_entry *entry)
result = -EINVAL;
break;
case Opt_appraise_flag:
+   if (entry->action != APPRAISE) {
+   result = -EINVAL;
+   break;
+   }
+
ima_log_string(ab, "appraise_flag", args[0].from);
-   if (strstr(args[0].from, "blacklist"))
+   if (IS_ENABLED(CONFIG_IMA_APPRAISE_MODSIG) &&
+   strstr(args[0].from, "blacklist"))
entry->flags |= IMA_CHECK_BLACKLIST;
break;
case Opt_permit_directio:
-- 
2.25.1



[PATCH v3 11/12] ima: Use the common function to detect LSM conditionals in a rule

2020-07-09 Thread Tyler Hicks
Make broader use of ima_rule_contains_lsm_cond() to check if a given
rule contains an LSM conditional. This is a code cleanup and has no
user-facing change.

Signed-off-by: Tyler Hicks 
Reviewed-by: Mimi Zohar 
---

* v3
  - No change
* v2
  - No change

 security/integrity/ima/ima_policy.c | 11 ++-
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c 
b/security/integrity/ima/ima_policy.c
index c4d0a0c1f896..81ee8fd1d83a 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -360,17 +360,10 @@ static bool ima_rule_contains_lsm_cond(struct 
ima_rule_entry *entry)
 static void ima_lsm_update_rules(void)
 {
struct ima_rule_entry *entry, *e;
-   int i, result, needs_update;
+   int result;
 
	list_for_each_entry_safe(entry, e, &ima_policy_rules, list) {
-   needs_update = 0;
-   for (i = 0; i < MAX_LSM_RULES; i++) {
-   if (entry->lsm[i].args_p) {
-   needs_update = 1;
-   break;
-   }
-   }
-   if (!needs_update)
+   if (!ima_rule_contains_lsm_cond(entry))
continue;
 
result = ima_lsm_update_rule(entry);
-- 
2.25.1



[PATCH v3 04/12] ima: Fail rule parsing when buffer hook functions have an invalid action

2020-07-09 Thread Tyler Hicks
Buffer based hook functions, such as KEXEC_CMDLINE and KEY_CHECK, can
only measure. The process_buffer_measurement() function quietly ignores
all actions except measure so make this behavior clear at the time of
policy load.

The parsing of the keyrings conditional had a check to ensure that it
was only specified with measure actions but the check should be on the
hook function and not the keyrings conditional since
"appraise func=KEY_CHECK" is not a valid rule.

Fixes: b0935123a183 ("IMA: Define a new hook to measure the kexec boot command 
line arguments")
Fixes: 5808611cccb2 ("IMA: Add KEY_CHECK func to measure keys")
Signed-off-by: Tyler Hicks 
---

* v3
  - Add comments to ima_validate_rule() to separate/explain the types of
validation checks (section for action checks, section for hook
function checks, soon to be a section for combination of options
checks, etc.)
  - Removed the "if (entry->flags & IMA_FUNC)" conditional around the
switch statement in ima_validate_rule() which reduced the overall indention
by a tab. This could be removed because entry->func is NONE when the
IMA_FUNC flag is not set. We'll explicitly enforce and then leverage
that property in a later patch when we start validating all hook
functions in ima_validate_rule().
  - Add comment explicitly stating that all hook functions except
KEXEC_CMDLINE and KEY_CHECK are still being validated in
ima_parse_rule().
* v2
  - No change

 security/integrity/ima/ima_policy.c | 40 +++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/security/integrity/ima/ima_policy.c 
b/security/integrity/ima/ima_policy.c
index e458cd47c099..40c28f1a6a5a 100644
--- a/security/integrity/ima/ima_policy.c
+++ b/security/integrity/ima/ima_policy.c
@@ -973,6 +973,43 @@ static void check_template_modsig(const struct 
ima_template_desc *template)
 #undef MSG
 }
 
+static bool ima_validate_rule(struct ima_rule_entry *entry)
+{
+   /* Ensure that the action is set */
+   if (entry->action == UNKNOWN)
+   return false;
+
+   /*
+* Ensure that the hook function is compatible with the other
+* components of the rule
+*/
+   switch (entry->func) {
+   case NONE:
+   case FILE_CHECK:
+   case MMAP_CHECK:
+   case BPRM_CHECK:
+   case CREDS_CHECK:
+   case POST_SETATTR:
+   case MODULE_CHECK:
+   case FIRMWARE_CHECK:
+   case KEXEC_KERNEL_CHECK:
+   case KEXEC_INITRAMFS_CHECK:
+   case POLICY_CHECK:
+   /* Validation of these hook functions is in ima_parse_rule() */
+   break;
+   case KEXEC_CMDLINE:
+   case KEY_CHECK:
+   if (entry->action & ~(MEASURE | DONT_MEASURE))
+   return false;
+
+   break;
+   default:
+   return false;
+   }
+
+   return true;
+}
+
 static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
 {
struct audit_buffer *ab;
@@ -1150,7 +1187,6 @@ static int ima_parse_rule(char *rule, struct 
ima_rule_entry *entry)
keyrings_len = strlen(args[0].from) + 1;
 
if ((entry->keyrings) ||
-   (entry->action != MEASURE) ||
(entry->func != KEY_CHECK) ||
(keyrings_len < 2)) {
result = -EINVAL;
@@ -1356,7 +1392,7 @@ static int ima_parse_rule(char *rule, struct 
ima_rule_entry *entry)
break;
}
}
-   if (!result && (entry->action == UNKNOWN))
+   if (!result && !ima_validate_rule(entry))
result = -EINVAL;
else if (entry->action == APPRAISE)
temp_ima_appraise |= ima_appraise_flag(entry->func);
-- 
2.25.1



Re: [PATCH v2 1/2] riscv: Support R_RISCV_ADD64 and R_RISCV_SUB64 relocs

2020-07-09 Thread Björn Töpel
On Wed, 8 Jul 2020 at 23:10, Emil Renner Berthing  wrote:
>
> These are needed for the __jump_table in modules using
> static keys/jump-labels with the layout from
> HAVE_ARCH_JUMP_LABEL_RELATIVE on 64bit kernels.
>
> Signed-off-by: Emil Renner Berthing 

Reviewed-by: Björn Töpel 
Tested-by: Björn Töpel 

> ---
>
> Tested on the HiFive Unleashed board.
>
> This patch is new in v2. It fixes an error loading modules
> containing static keys found by Björn Töpel.
>
>  arch/riscv/kernel/module.c | 16 
>  1 file changed, 16 insertions(+)
>
> diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
> index 7191342c54da..104fba889cf7 100644
> --- a/arch/riscv/kernel/module.c
> +++ b/arch/riscv/kernel/module.c
> @@ -263,6 +263,13 @@ static int apply_r_riscv_add32_rela(struct module *me, 
> u32 *location,
> return 0;
>  }
>
> +static int apply_r_riscv_add64_rela(struct module *me, u32 *location,
> +   Elf_Addr v)
> +{
> +   *(u64 *)location += (u64)v;
> +   return 0;
> +}
> +
>  static int apply_r_riscv_sub32_rela(struct module *me, u32 *location,
> Elf_Addr v)
>  {
> @@ -270,6 +277,13 @@ static int apply_r_riscv_sub32_rela(struct module *me, 
> u32 *location,
> return 0;
>  }
>
> +static int apply_r_riscv_sub64_rela(struct module *me, u32 *location,
> +   Elf_Addr v)
> +{
> +   *(u64 *)location -= (u64)v;
> +   return 0;
> +}
> +
>  static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
> Elf_Addr v) = {
> [R_RISCV_32]= apply_r_riscv_32_rela,
> @@ -290,7 +304,9 @@ static int (*reloc_handlers_rela[]) (struct module *me, 
> u32 *location,
> [R_RISCV_RELAX] = apply_r_riscv_relax_rela,
> [R_RISCV_ALIGN] = apply_r_riscv_align_rela,
> [R_RISCV_ADD32] = apply_r_riscv_add32_rela,
> +   [R_RISCV_ADD64] = apply_r_riscv_add64_rela,
> [R_RISCV_SUB32] = apply_r_riscv_sub32_rela,
> +   [R_RISCV_SUB64] = apply_r_riscv_sub64_rela,
>  };
>
>  int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
> --
> 2.27.0
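
For context, the __jump_table entries that need these ADD64/SUB64
relocations are emitted whenever module code uses the static-key API. A
minimal, hypothetical module that would exercise this path could look
like this (sketch only; the key name and messages are made up):

#include <linux/module.h>
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(demo_key);

static int __init demo_init(void)
{
	/* This branch site becomes a __jump_table entry in the module. */
	if (static_branch_unlikely(&demo_key))
		pr_info("demo: key already enabled\n");

	static_branch_enable(&demo_key);	/* patch the branch at runtime */
	return 0;
}

static void __exit demo_exit(void)
{
	static_branch_disable(&demo_key);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");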
>


Re: [PATCH v6.1 6/7] seccomp: Introduce addfd ioctl to seccomp user notifier

2020-07-09 Thread Kees Cook
On Wed, Jul 08, 2020 at 11:17:27PM -0700, Kees Cook wrote:
> +static long seccomp_notify_addfd(struct seccomp_filter *filter,
> +  struct seccomp_notif_addfd __user *uaddfd,
> +  unsigned int size)
> +{
> + struct seccomp_notif_addfd addfd;
> + struct seccomp_knotif *knotif;
> + struct seccomp_kaddfd kaddfd;
> + int ret;
> +
> + BUILD_BUG_ON(sizeof(struct seccomp_notify_addfd) < 
> SECCOMP_NOTIFY_ADDFD_SIZE_VER0);
> + BUILD_BUG_ON(sizeof(struct seccomp_notify_addfd) != 
> SECCOMP_NOTIFY_ADDFD_SIZE_LATEST);

*brown paper bag* I built the wrong tree! This is a typo:
seccomp_notify_addfd should be seccomp_notif_addfd (no "y").

-- 
Kees Cook


Re: [PATCH v2 2/2] riscv: Add jump-label implementation

2020-07-09 Thread Björn Töpel
On Wed, 8 Jul 2020 at 23:10, Emil Renner Berthing  wrote:
>
> Add jump-label implementation based on the ARM64 version
> and add CONFIG_JUMP_LABEL=y to the defconfigs.
>
> Signed-off-by: Emil Renner Berthing 
> Reviewed-by: Björn Töpel 

Tested-by: Björn Töpel 

> ---
>
> Tested on the HiFive Unleashed board.
>
> Changes since v1:
> - WARN and give up gracefully if the jump offset cannot be
>   represented in a JAL instruction.
> - Add missing braces.
> - Add CONFIG_JUMP_LABEL=y to defconfigs.
>
> All suggested by Björn Töpel.
>
> Changes since RFC:
> - Use RISCV_PTR and RISCV_LGPTR macros to match struct jump_table
>   also in 32bit kernels.
> - Remove unneeded branch ? 1 : 0, thanks Björn
> - Fix \n\n instead of \n\t mistake
>
>  .../core/jump-labels/arch-support.txt |  2 +-
>  arch/riscv/Kconfig|  2 +
>  arch/riscv/configs/defconfig  |  1 +
>  arch/riscv/configs/nommu_k210_defconfig   |  1 +
>  arch/riscv/configs/nommu_virt_defconfig   |  1 +
>  arch/riscv/configs/rv32_defconfig |  1 +
>  arch/riscv/include/asm/jump_label.h   | 59 +++
>  arch/riscv/kernel/Makefile|  2 +
>  arch/riscv/kernel/jump_label.c| 49 +++
>  9 files changed, 117 insertions(+), 1 deletion(-)
>  create mode 100644 arch/riscv/include/asm/jump_label.h
>  create mode 100644 arch/riscv/kernel/jump_label.c
>
> diff --git a/Documentation/features/core/jump-labels/arch-support.txt 
> b/Documentation/features/core/jump-labels/arch-support.txt
> index 632a1c7aefa2..760243d18ed7 100644
> --- a/Documentation/features/core/jump-labels/arch-support.txt
> +++ b/Documentation/features/core/jump-labels/arch-support.txt
> @@ -23,7 +23,7 @@
>  |openrisc: | TODO |
>  |  parisc: |  ok  |
>  | powerpc: |  ok  |
> -|   riscv: | TODO |
> +|   riscv: |  ok  |
>  |s390: |  ok  |
>  |  sh: | TODO |
>  |   sparc: |  ok  |
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index fd639937e251..d2f5c53fdc19 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -46,6 +46,8 @@ config RISCV
> select GENERIC_TIME_VSYSCALL if MMU && 64BIT
> select HANDLE_DOMAIN_IRQ
> select HAVE_ARCH_AUDITSYSCALL
> +   select HAVE_ARCH_JUMP_LABEL
> +   select HAVE_ARCH_JUMP_LABEL_RELATIVE
> select HAVE_ARCH_KASAN if MMU && 64BIT
> select HAVE_ARCH_KGDB
> select HAVE_ARCH_KGDB_QXFER_PKT
> diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
> index 4da4886246a4..d58c93efb603 100644
> --- a/arch/riscv/configs/defconfig
> +++ b/arch/riscv/configs/defconfig
> @@ -17,6 +17,7 @@ CONFIG_BPF_SYSCALL=y
>  CONFIG_SOC_SIFIVE=y
>  CONFIG_SOC_VIRT=y
>  CONFIG_SMP=y
> +CONFIG_JUMP_LABEL=y
>  CONFIG_MODULES=y
>  CONFIG_MODULE_UNLOAD=y
>  CONFIG_NET=y
> diff --git a/arch/riscv/configs/nommu_k210_defconfig 
> b/arch/riscv/configs/nommu_k210_defconfig
> index b48138e329ea..cd1df62b13c7 100644
> --- a/arch/riscv/configs/nommu_k210_defconfig
> +++ b/arch/riscv/configs/nommu_k210_defconfig
> @@ -33,6 +33,7 @@ CONFIG_SMP=y
>  CONFIG_NR_CPUS=2
>  CONFIG_CMDLINE="earlycon console=ttySIF0"
>  CONFIG_CMDLINE_FORCE=y
> +CONFIG_JUMP_LABEL=y
>  # CONFIG_BLOCK is not set
>  CONFIG_BINFMT_FLAT=y
>  # CONFIG_COREDUMP is not set
> diff --git a/arch/riscv/configs/nommu_virt_defconfig 
> b/arch/riscv/configs/nommu_virt_defconfig
> index cf74e179bf90..f27596e9663e 100644
> --- a/arch/riscv/configs/nommu_virt_defconfig
> +++ b/arch/riscv/configs/nommu_virt_defconfig
> @@ -30,6 +30,7 @@ CONFIG_MAXPHYSMEM_2GB=y
>  CONFIG_SMP=y
>  CONFIG_CMDLINE="root=/dev/vda rw earlycon=uart8250,mmio,0x1000,115200n8 
> console=ttyS0"
>  CONFIG_CMDLINE_FORCE=y
> +CONFIG_JUMP_LABEL=y
>  # CONFIG_BLK_DEV_BSG is not set
>  CONFIG_PARTITION_ADVANCED=y
>  # CONFIG_MSDOS_PARTITION is not set
> diff --git a/arch/riscv/configs/rv32_defconfig 
> b/arch/riscv/configs/rv32_defconfig
> index 05bbf5240569..3a55f0e00d6c 100644
> --- a/arch/riscv/configs/rv32_defconfig
> +++ b/arch/riscv/configs/rv32_defconfig
> @@ -17,6 +17,7 @@ CONFIG_BPF_SYSCALL=y
>  CONFIG_SOC_VIRT=y
>  CONFIG_ARCH_RV32I=y
>  CONFIG_SMP=y
> +CONFIG_JUMP_LABEL=y
>  CONFIG_MODULES=y
>  CONFIG_MODULE_UNLOAD=y
>  CONFIG_NET=y
> diff --git a/arch/riscv/include/asm/jump_label.h 
> b/arch/riscv/include/asm/jump_label.h
> new file mode 100644
> index ..d5fb342bfccf
> --- /dev/null
> +++ b/arch/riscv/include/asm/jump_label.h
> @@ -0,0 +1,59 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Copyright (C) 2020 Emil Renner Berthing
> + *
> + * Based on arch/arm64/include/asm/jump_label.h
> + */
> +#ifndef __ASM_JUMP_LABEL_H
> +#define __ASM_JUMP_LABEL_H
> +
> +#ifndef __ASSEMBLY__
> +
> +#include 
> +
> +#define JUMP_LABEL_NOP_SIZE 4
> +
> +static __always_inline bool arch_static_branch(struct static_key *key,
> +  

[PATCH 2/2] doc, mm: clarify /proc/<pid>/oom_score value range

2020-07-09 Thread Michal Hocko
From: Michal Hocko 

The exported value includes oom_score_adj, so the range is not [0, 1000]
as described in the previous section but rather [0, 2000]. Mention that
fact explicitly.
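
As a quick way to see that range in practice, a tiny (hypothetical)
userspace reader of both files could be:

#include <stdio.h>

static long read_long(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* oom_score already folds in oom_score_adj, so it can reach 2000. */
	printf("oom_score     = %ld\n", read_long("/proc/self/oom_score"));
	printf("oom_score_adj = %ld\n", read_long("/proc/self/oom_score_adj"));
	return 0;
}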

Signed-off-by: Michal Hocko 
---
 Documentation/filesystems/proc.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Documentation/filesystems/proc.rst 
b/Documentation/filesystems/proc.rst
index 8e3b5dffcfa8..78a0dec323a3 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -1673,6 +1673,9 @@ requires CAP_SYS_RESOURCE.
 3.2 /proc/<pid>/oom_score - Display current oom-killer score
 -
 
+Please note that the exported value includes oom_score_adj so it is effectively
+in range [0,2000].
+
 This file can be used to check the current score used by the oom-killer is for
 any given <pid>. Use it together with /proc/<pid>/oom_score_adj to tune which
 process should be killed in an out-of-memory situation.
-- 
2.27.0



[PATCH 1/2] doc, mm: sync up oom_score_adj documentation

2020-07-09 Thread Michal Hocko
From: Michal Hocko 

There are at least two notes in the oom section. The 3% discount for
root processes is gone since d46078b28889 ("mm, oom: remove 3% bonus for
CAP_SYS_ADMIN processes").

Likewise children of the selected oom victim are not sacrificed since
bbbe48029720 ("mm, oom: remove 'prefer children over parent' heuristic")

Drop both of them.

Signed-off-by: Michal Hocko 
---
 Documentation/filesystems/proc.rst | 8 
 1 file changed, 8 deletions(-)

diff --git a/Documentation/filesystems/proc.rst 
b/Documentation/filesystems/proc.rst
index 996f3cfe7030..8e3b5dffcfa8 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -1634,9 +1634,6 @@ may allocate from based on an estimation of its current 
memory and swap use.
 For example, if a task is using all allowed memory, its badness score will be
 1000.  If it is using half of its allowed memory, its score will be 500.
 
-There is an additional factor included in the badness score: the current memory
-and swap usage is discounted by 3% for root processes.
-
 The amount of "allowed" memory depends on the context in which the oom killer
 was called.  If it is due to the memory assigned to the allocating task's 
cpuset
 being exhausted, the allowed memory represents the set of mems assigned to that
@@ -1672,11 +1669,6 @@ The value of /proc/<pid>/oom_score_adj may be reduced no 
lower than the last
 value set by a CAP_SYS_RESOURCE process. To reduce the value any lower
 requires CAP_SYS_RESOURCE.
 
-Caveat: when a parent task is selected, the oom killer will sacrifice any first
-generation children with separate address spaces instead, if possible.  This
-avoids servers and important system daemons from being killed and loses the
-minimal amount of work.
-
 
 3.2 /proc/<pid>/oom_score - Display current oom-killer score
 -
-- 
2.27.0



Re: [RFC PATCH 00/22] Enhance VHOST to enable SoC-to-SoC communication

2020-07-09 Thread Jason Wang



On 2020/7/8 下午9:13, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/8/2020 4:52 PM, Jason Wang wrote:

On 2020/7/7 下午10:45, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/7/2020 3:17 PM, Jason Wang wrote:

On 2020/7/6 下午5:32, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/3/2020 12:46 PM, Jason Wang wrote:

On 2020/7/2 下午9:35, Kishon Vijay Abraham I wrote:

Hi Jason,

On 7/2/2020 3:40 PM, Jason Wang wrote:

On 2020/7/2 下午5:51, Michael S. Tsirkin wrote:

On Thu, Jul 02, 2020 at 01:51:21PM +0530, Kishon Vijay Abraham I wrote:

This series enhances Linux Vhost support to enable SoC-to-SoC
communication over MMIO. This series enables rpmsg communication between
two SoCs using both PCIe RC<->EP and HOST1-NTB-HOST2

1) Modify vhost to use standard Linux driver model
2) Add support in vring to access virtqueue over MMIO
3) Add vhost client driver for rpmsg
4) Add PCIe RC driver (uses virtio) and PCIe EP driver (uses vhost) for
    rpmsg communication between two SoCs connected to each other
5) Add NTB Virtio driver and NTB Vhost driver for rpmsg communication
    between two SoCs connected via NTB
6) Add configfs to configure the components

UseCase1 :

  VHOST RPMSG VIRTIO RPMSG
   +   +
   |   |
   |   |
   |   |
   |   |
+-v--+ +--v---+
|   Linux    | | Linux    |
|  Endpoint  | | Root Complex |
|    <->  |
|    | |  |
|    SOC1    | | SOC2 |
++ +--+

UseCase 2:

  VHOST RPMSG  VIRTIO RPMSG
   + +
   | |
   | |
   | |
   | |
    +--v--+   +--v--+
    | |   | |
    |    HOST1    |   |    HOST2    |
    | |   | |
    +--^--+   +--^--+
   | |
   | |
+-+
|  +--v--+   +--v--+  |
|  | |   | |  |
|  | EP  |   | EP  |  |
|  | CONTROLLER1 |   | CONTROLLER2 |  |
|  | <---> |  |
|  | |   | |  |
|  | |   | |  |
|  | |  SoC With Multiple EP Instances   | |  |
|  | |  (Configured using NTB Function)  | |  |
|  +-+   +-+  |
+-+

Software Layering:

The high-level SW layering should look something like below. This series
adds support only for RPMSG VHOST, however something similar should be
done for net and scsi. With that any vhost device (PCI, NTB, Platform
device, user) can use any of the vhost client driver.


     ++  +---+  ++  +--+
     |  RPMSG VHOST   |  | NET VHOST |  | SCSI VHOST |  |    X |
     +---^+  +-^-+  +-^--+  +^-+
     | |  |  |
     | |  |  |
     | |  |  |
+---v-v--v--v--+
|    VHOST CORE    |
+^---^^--^-+
  |   |    |  |
  |   |    |  |
  |   |    |  |
+v---+  +v--+  +--v--+  +v-+
|  PCI EPF VHOST |  | NTB VHOST |  |PLATFORM DEVICE VHOST|  |    X |
++  +---+  +-+  +--+

This was initially proposed here [1]

[1] ->

[PATCH] SONICS SILICON BACKPLANE DRIVER (SSB): Replace HTTP links with HTTPS ones

2020-07-09 Thread Alexander A. Klimov
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
For each line:
  If doesn't contain `\bxmlns\b`:
For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
  If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
If both the HTTP and HTTPS versions
return 200 OK and serve the same content:
  Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov 
---
 Continuing my work started at 93431e0607e5.
 See also: git log --oneline '--author=Alexander A. Klimov 
' v5.7..master
 (Actually letting a shell for loop submit all this stuff for me.)

 If there are any URLs to be removed completely or at least not HTTPSified:
 Just clearly say so and I'll *undo my change*.
 See also: https://lkml.org/lkml/2020/6/27/64

 If there are any valid, but yet not changed URLs:
 See: https://lkml.org/lkml/2020/6/26/837

 If you apply the patch, please let me know.


 drivers/ssb/driver_chipcommon.c | 4 ++--
 drivers/ssb/driver_chipcommon_pmu.c | 2 +-
 drivers/ssb/sprom.c | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/ssb/driver_chipcommon.c b/drivers/ssb/driver_chipcommon.c
index 3861cb659cb9..85542bfd7715 100644
--- a/drivers/ssb/driver_chipcommon.c
+++ b/drivers/ssb/driver_chipcommon.c
@@ -238,7 +238,7 @@ static void chipco_powercontrol_init(struct ssb_chipcommon 
*cc)
}
 }
 
-/* http://bcm-v4.sipsolutions.net/802.11/PmuFastPwrupDelay */
+/* https://bcm-v4.sipsolutions.net/802.11/PmuFastPwrupDelay */
 static u16 pmu_fast_powerup_delay(struct ssb_chipcommon *cc)
 {
struct ssb_bus *bus = cc->dev->bus;
@@ -255,7 +255,7 @@ static u16 pmu_fast_powerup_delay(struct ssb_chipcommon *cc)
}
 }
 
-/* http://bcm-v4.sipsolutions.net/802.11/ClkctlFastPwrupDelay */
+/* https://bcm-v4.sipsolutions.net/802.11/ClkctlFastPwrupDelay */
 static void calc_fast_powerup_delay(struct ssb_chipcommon *cc)
 {
struct ssb_bus *bus = cc->dev->bus;
diff --git a/drivers/ssb/driver_chipcommon_pmu.c 
b/drivers/ssb/driver_chipcommon_pmu.c
index 0f60e90ded26..888069e10659 100644
--- a/drivers/ssb/driver_chipcommon_pmu.c
+++ b/drivers/ssb/driver_chipcommon_pmu.c
@@ -513,7 +513,7 @@ static void ssb_pmu_resources_init(struct ssb_chipcommon 
*cc)
chipco_write32(cc, SSB_CHIPCO_PMU_MAXRES_MSK, max_msk);
 }
 
-/* http://bcm-v4.sipsolutions.net/802.11/SSB/PmuInit */
+/* https://bcm-v4.sipsolutions.net/802.11/SSB/PmuInit */
 void ssb_pmu_init(struct ssb_chipcommon *cc)
 {
u32 pmucap;
diff --git a/drivers/ssb/sprom.c b/drivers/ssb/sprom.c
index 42d620cee8a9..7cd553127861 100644
--- a/drivers/ssb/sprom.c
+++ b/drivers/ssb/sprom.c
@@ -186,7 +186,7 @@ int ssb_fill_sprom_with_fallback(struct ssb_bus *bus, 
struct ssb_sprom *out)
return get_fallback_sprom(bus, out);
 }
 
-/* http://bcm-v4.sipsolutions.net/802.11/IsSpromAvailable */
+/* https://bcm-v4.sipsolutions.net/802.11/IsSpromAvailable */
 bool ssb_is_sprom_available(struct ssb_bus *bus)
 {
/* status register only exists on chipcomon rev >= 11 and we need check
-- 
2.27.0



[PATCH v4 2/2] Add Intel LGM soc DMA support.

2020-07-09 Thread Amireddy Mallikarjuna reddy
Add DMA controller driver for Lightning Mountain(LGM) family of SoCs.

The main function of the DMA controller is the transfer of data from/to any
DPlus compliant peripheral to/from the memory. A memory to memory copy
capability can also be configured.

This ldma driver is used to configure the device and channels for data
and control paths.
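
As a rough, hypothetical sketch (not part of this series) of how an
on-chip peripheral driver could consume such a channel via the generic
dmaengine API together with the helper exported below, assuming a DT
channel named "rx" and a 16-entry descriptor ring:

#include <linux/dmaengine.h>
#include <linux/dma/lgm_dma.h>

static int demo_setup_rx_chan(struct device *dev, dma_addr_t desc_base)
{
	struct dma_chan *chan;
	int ret;

	chan = dma_request_chan(dev, "rx");	/* channel from the DT binding */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	/* Hand the pre-allocated hardware descriptor ring to the controller. */
	ret = intel_dma_chan_desc_cfg(chan, desc_base, 16);
	if (ret)
		dma_release_channel(chan);

	return ret;
}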

Signed-off-by: Amireddy Mallikarjuna reddy 
---
v1:
- Initial version.

v2:
- Fix device tree bot issues; corresponding driver changes done.
- Fix kernel test robot warnings.
  
  >> drivers/dma/lgm/lgm-dma.c:729:5: warning: no previous prototype for 
function 'intel_dma_chan_desc_cfg' [-Wmissing-prototypes]
  int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
  ^
  drivers/dma/lgm/lgm-dma.c:729:1: note: declare 'static' if the function is 
not intended to be used outside of this translation unit
  int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
  ^
  static
  1 warning generated.

  vim +/intel_dma_chan_desc_cfg +729 drivers/dma/lgm/lgm-dma.c

728
  > 729 int intel_dma_chan_desc_cfg(struct dma_chan *chan, dma_addr_t desc_base,
730 int desc_num)
731 {
732 return ldma_chan_desc_cfg(to_ldma_chan(chan), desc_base, 
desc_num);
733 }
734 EXPORT_SYMBOL_GPL(intel_dma_chan_desc_cfg);
735

   Reported-by: kernel test robot 
   ---

v3:
- Fix smatch warning.
  
  smatch warnings:
  drivers/dma/lgm/lgm-dma.c:1306 ldma_cfg_init() error: uninitialized symbol 
'ret'.

  Reported-by: kernel test robot 
  Reported-by: Dan Carpenter 
  

v4:
- Address Thomas Langer's comments on the DT binding and make the
corresponding driver-side changes.
---
 drivers/dma/Kconfig |2 +
 drivers/dma/Makefile|1 +
 drivers/dma/lgm/Kconfig |9 +
 drivers/dma/lgm/Makefile|2 +
 drivers/dma/lgm/lgm-dma.c   | 1941 +++
 include/linux/dma/lgm_dma.h |   27 +
 6 files changed, 1982 insertions(+)
 create mode 100644 drivers/dma/lgm/Kconfig
 create mode 100644 drivers/dma/lgm/Makefile
 create mode 100644 drivers/dma/lgm/lgm-dma.c
 create mode 100644 include/linux/dma/lgm_dma.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index de41d7928bff..caeaf12fd524 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -737,6 +737,8 @@ source "drivers/dma/ti/Kconfig"
 
 source "drivers/dma/fsl-dpaa2-qdma/Kconfig"
 
+source "drivers/dma/lgm/Kconfig"
+
 # clients
 comment "DMA Clients"
depends on DMA_ENGINE
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index e60f81331d4c..0b899b076f4e 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx_dma.o
 obj-$(CONFIG_ST_FDMA) += st_fdma.o
 obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
+obj-$(CONFIG_INTEL_LDMA) += lgm/
 
 obj-y += mediatek/
 obj-y += qcom/
diff --git a/drivers/dma/lgm/Kconfig b/drivers/dma/lgm/Kconfig
new file mode 100644
index ..bdb5b0d91afb
--- /dev/null
+++ b/drivers/dma/lgm/Kconfig
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config INTEL_LDMA
+   bool "Lightning Mountain centralized low speed DMA and high speed DMA 
controllers"
+   select DMA_ENGINE
+   select DMA_VIRTUAL_CHANNELS
+   help
+ Enable support for intel Lightning Mountain SOC DMA controllers.
+ These controllers provide DMA capabilities for a variety of on-chip
+ devices such as SSC, HSNAND and GSWIP.
diff --git a/drivers/dma/lgm/Makefile b/drivers/dma/lgm/Makefile
new file mode 100644
index ..f318a8eff464
--- /dev/null
+++ b/drivers/dma/lgm/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_INTEL_LDMA)   += lgm-dma.o
diff --git a/drivers/dma/lgm/lgm-dma.c b/drivers/dma/lgm/lgm-dma.c
new file mode 100644
index ..91c0a28fe8fb
--- /dev/null
+++ b/drivers/dma/lgm/lgm-dma.c
@@ -0,0 +1,1941 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Lightning Mountain centralized low speed and high speed DMA controller 
driver
+ *
+ * Copyright (c) 2016 ~ 2020 Intel Corporation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../dmaengine.h"
+#include "../virt-dma.h"
+
+#define DRIVER_NAME"lgm-ldma"
+
+#define DMA_ID 0x0008
+#define DMA_ID_REV GENMASK(7, 0)
+#define DMA_ID_PNR GENMASK(19, 16)
+#define DMA_ID_CHNR		GENMASK(26, 20)
+#define DMA_ID_DW_128B BIT(27)
+#define DMA_ID_AW_36B  BIT(28)
+#define 

Re: Re: [PATCH] [v2] PCI: rcar: Fix runtime PM imbalance on error

2020-07-09 Thread dinghao . liu
> On Sun, Jun 07, 2020 at 05:31:33PM +0800, Dinghao Liu wrote:
> > pm_runtime_get_sync() increments the runtime PM usage counter even
> > the call returns an error code. Thus a corresponding decrement is
> > needed on the error handling path to keep the counter balanced.
> > 
> > Signed-off-by: Dinghao Liu 
> > ---
> > 
> > Changelog:
> > 
> > v2: - Remove unnecessary 'err_pm_put' label.
> >   Refine commit message.
> > ---
> >  drivers/pci/controller/pcie-rcar.c | 6 ++
> >  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> Can you rebase it on top of v5.8-rc1 and resend it with Yoshihiro's tags
> please ?
> 

Sure, I will resend it soon.

Regards,
Dinghao


[PATCH v4] rtc: rtc-ds1374: wdt: Use watchdog core for watchdog part

2020-07-09 Thread 陳昭勳
Let the ds1374 watchdog use the watchdog core functions. This also
improves the watchdog timeout setting and nowayout handling, and relies
on the watchdog core's ioctl() instead of a private implementation.
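
For readers unfamiliar with the watchdog core, the registration pattern
the driver is converted to looks roughly like this simplified sketch
(illustrative names, not the exact code from this patch):

#include <linux/module.h>
#include <linux/watchdog.h>

static int demo_wdt_start(struct watchdog_device *wdd) { return 0; }
static int demo_wdt_stop(struct watchdog_device *wdd) { return 0; }

static int demo_wdt_settimeout(struct watchdog_device *wdd, unsigned int t)
{
	wdd->timeout = t;	/* a real driver programs the hardware here */
	return 0;
}

static const struct watchdog_info demo_wdt_info = {
	.identity = "demo watchdog",
	.options  = WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING | WDIOF_MAGICCLOSE,
};

static const struct watchdog_ops demo_wdt_ops = {
	.owner       = THIS_MODULE,
	.start       = demo_wdt_start,
	.stop        = demo_wdt_stop,
	.set_timeout = demo_wdt_settimeout,
};

static int demo_wdt_register(struct device *dev, struct watchdog_device *wdd,
			     void *drvdata)
{
	wdd->info        = &demo_wdt_info;
	wdd->ops         = &demo_wdt_ops;
	wdd->timeout     = 32;
	wdd->min_timeout = 1;
	wdd->max_timeout = 4095;

	watchdog_init_timeout(wdd, 0, dev);	/* module param / DT override */
	watchdog_set_nowayout(wdd, WATCHDOG_NOWAYOUT);
	watchdog_set_drvdata(wdd, drvdata);

	/* The core now provides the chardev, ioctl() and keepalive handling. */
	return devm_watchdog_register_device(dev, wdd);
}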

Signed-off-by: Johnson Chen 
---
v3->v4:
- Fix coding styles 
- Remove dev_info() in ds1374_wdt_settimeout()
- Fix missing error check

v2->v3:
- Fix a problem reported by WATCHDOG_CORE if WATCHDOG
- Remove save_client
- Let wdt_margin be 0 for watchdog_init_timeout()
- Use dev_info() rather than pr_info()
- Avoid more strings in this driver

v1->v2:
- Use ds1374_wdt_settimeout() before registering the watchdog
- Remove watchdog_unregister_device() because devm_watchdog_register_device() 
is used
- Remove ds1374_wdt_ping()
- TIMER_MARGIN_MAX to 4095 for 24-bit value
- Keep wdt_margin
- Fix coding styles

 drivers/rtc/Kconfig  |   1 +
 drivers/rtc/rtc-ds1374.c | 258 +--
 2 files changed, 62 insertions(+), 197 deletions(-)

diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
index b54d87d45c89..c25d51f35f0c 100644
--- a/drivers/rtc/Kconfig
+++ b/drivers/rtc/Kconfig
@@ -282,6 +282,7 @@ config RTC_DRV_DS1374
 config RTC_DRV_DS1374_WDT
bool "Dallas/Maxim DS1374 watchdog timer"
depends on RTC_DRV_DS1374
+   select WATCHDOG_CORE if WATCHDOG
help
  If you say Y here you will get support for the
  watchdog timer in the Dallas Semiconductor DS1374
diff --git a/drivers/rtc/rtc-ds1374.c b/drivers/rtc/rtc-ds1374.c
index 9c51a12cf70f..c71065d26cd2 100644
--- a/drivers/rtc/rtc-ds1374.c
+++ b/drivers/rtc/rtc-ds1374.c
@@ -46,6 +46,7 @@
 #define DS1374_REG_WDALM2  0x06
 #define DS1374_REG_CR  0x07 /* Control */
 #define DS1374_REG_CR_AIE  0x01 /* Alarm Int. Enable */
+#define DS1374_REG_CR_WDSTR	0x08 /* 1=INT, 0=RST */
 #define DS1374_REG_CR_WDALM	0x20 /* 1=Watchdog, 0=Alarm */
 #define DS1374_REG_CR_WACE 0x40 /* WD/Alarm counter enable */
 #define DS1374_REG_SR  0x08 /* Status */
@@ -71,7 +72,9 @@ struct ds1374 {
struct i2c_client *client;
struct rtc_device *rtc;
struct work_struct work;
-
+#ifdef CONFIG_RTC_DRV_DS1374_WDT
+   struct watchdog_device wdt;
+#endif
/* The mutex protects alarm operations, and prevents a race
 * between the enable_irq() in the workqueue and the free_irq()
 * in the remove function.
@@ -369,238 +372,98 @@ static const struct rtc_class_ops ds1374_rtc_ops = {
  *
  *
  */
-static struct i2c_client *save_client;
 /* Default margin */
-#define WD_TIMO 131762
+#define TIMER_MARGIN_DEFAULT   32
+#define TIMER_MARGIN_MIN   1
+#define TIMER_MARGIN_MAX   4095 /* 24-bit value */
 
 #define DRV_NAME "DS1374 Watchdog"
 
-static int wdt_margin = WD_TIMO;
-static unsigned long wdt_is_open;
+static int wdt_margin;
 module_param(wdt_margin, int, 0);
 MODULE_PARM_DESC(wdt_margin, "Watchdog timeout in seconds (default 32s)");
 
+static bool nowayout = WATCHDOG_NOWAYOUT;
+module_param(nowayout, bool, 0);
+MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default ="
+   __MODULE_STRING(WATCHDOG_NOWAYOUT)")");
+
 static const struct watchdog_info ds1374_wdt_info = {
.identity   = "DS1374 WTD",
.options= WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING |
WDIOF_MAGICCLOSE,
 };
 
-static int ds1374_wdt_settimeout(unsigned int timeout)
+static int ds1374_wdt_settimeout(struct watchdog_device *wdt, unsigned int 
timeout)
 {
-   int ret = -ENOIOCTLCMD;
-   int cr;
+   struct ds1374 *ds1374 = watchdog_get_drvdata(wdt);
+   struct i2c_client *client = ds1374->client;
+   int ret, cr;
 
-   ret = cr = i2c_smbus_read_byte_data(save_client, DS1374_REG_CR);
-   if (ret < 0)
-   goto out;
+   wdt->timeout = timeout;
+
+   cr = i2c_smbus_read_byte_data(client, DS1374_REG_CR);
+   if (cr < 0)
+   return cr;
 
/* Disable any existing watchdog/alarm before setting the new one */
cr &= ~DS1374_REG_CR_WACE;
 
-   ret = i2c_smbus_write_byte_data(save_client, DS1374_REG_CR, cr);
+   ret = i2c_smbus_write_byte_data(client, DS1374_REG_CR, cr);
if (ret < 0)
-   goto out;
+   return ret;
 
/* Set new watchdog time */
-   ret = ds1374_write_rtc(save_client, timeout, DS1374_REG_WDALM0, 3);
-   if (ret) {
-   pr_info("couldn't set new watchdog time\n");
-   goto out;
-   }
+   timeout = timeout * 4096;
+   ret = ds1374_write_rtc(client, timeout, DS1374_REG_WDALM0, 3);
+   if (ret)
+   return ret;
 
/* Enable watchdog timer */
cr |= DS1374_REG_CR_WACE | DS1374_REG_CR_WDALM;
+   cr &= ~DS1374_REG_CR_WDSTR;/* for RST PIN */
cr &= ~DS1374_REG_CR_AIE;
 
-   ret = i2c_smbus_write_byte_data(save_client, DS1374_REG_CR, cr);
+   ret 

Re: [PATCH v6 4/7] pidfd: Replace open-coded partial receive_fd()

2020-07-09 Thread Kees Cook
On Tue, Jul 07, 2020 at 02:22:20PM +0200, Christian Brauner wrote:
> So while the patch is correct it leaves 5.6 and 5.7 with a bug in the
> pidfd_getfd() implementation and that just doesn't seem right. I'm
> wondering whether we should introduce:
> 
> void sock_update(struct file *file)
> {
>   sock = sock_from_file(file, &error);
>   int error;
> 
>   sock = sock_from_file(file, );
>   if (sock) {
>   sock_update_netprioidx(&sock->sk->sk_cgrp_data);
>   sock_update_classid(&sock->sk->sk_cgrp_data);
>   }
> }
> 
> and switch pidfd_getfd() over to:
> 
> diff --git a/kernel/pid.c b/kernel/pid.c
> index f1496b757162..c26bba822be3 100644
> --- a/kernel/pid.c
> +++ b/kernel/pid.c
> @@ -642,10 +642,12 @@ static int pidfd_getfd(struct pid *pid, int fd)
> }
> 
> ret = get_unused_fd_flags(O_CLOEXEC);
> -   if (ret < 0)
> +   if (ret < 0) {
> fput(file);
> -   else
> +   } else {
> +   sock_update(file);
> fd_install(ret, file);
> +   }
> 
> return ret;
>  }
> 
> first thing in the series and then all of the other patches on top of it
> so that we can Cc stable for this and that can get it backported to 5.6,
> 5.7, and 5.8.
> 
> Alternatively, I can make this a separate bugfix patch series which I'll
> send upstream soonish. Or we have specific patches just for 5.6, 5.7,
> and 5.8. Thoughts?

Okay, I looked at hch's clean-ups again and I'm reminded why they
don't make great -stable material. :) The compat bug (also missing the
sock_update()) needs a similar fix (going back to 3.6...), so, yeah,
for ease of backport, probably an explicit sock_update() implementation
(with compat and native scm using it), and a second patch for pidfd.

Let me see what looks best...

-- 
Kees Cook


Re: [V2 PATCH] usb: mtu3: fix NULL pointer dereference

2020-07-09 Thread Felipe Balbi
Chunfeng Yun  writes:

> Some pointers are dereferenced before successful checks.
>
> Reported-by: Markus Elfring 
> Signed-off-by: Chunfeng Yun 

do you need a Fixes tag here? Perhaps a Cc stable too?

-- 
balbi


signature.asc
Description: PGP signature


Re: [V2 PATCH] usb: mtu3: fix NULL pointer dereference

2020-07-09 Thread Felipe Balbi

Hi,

Chunfeng Yun  writes:
>> > @@ -373,8 +380,8 @@ static int mtu3_gadget_dequeue(struct usb_ep *ep, 
>> > struct usb_request *req)
>> >   */
>> >  static int mtu3_gadget_ep_set_halt(struct usb_ep *ep, int value)
>> >  {
>> > -  struct mtu3_ep *mep = to_mtu3_ep(ep);
>> > -  struct mtu3 *mtu = mep->mtu;
>> > +  struct mtu3_ep *mep;
>> > +  struct mtu3 *mtu;
>> >struct mtu3_request *mreq;
>> >unsigned long flags;
>> >int ret = 0;
>> > @@ -382,6 +389,9 @@ static int mtu3_gadget_ep_set_halt(struct usb_ep *ep, 
>> > int value)
>> >if (!ep)
>> >return -EINVAL;
>> 
>> Same here, how can that ever happen?
> Maybe when the class driver has something wrong:)
>
> You mean it's better to remove these unnecessary checks?

if we need those checks, I'd rather have them at a central location,
such as udc/core.c. But, as Greg mentioned, the kernel doesn't call
these with NULL pointers.

-- 
balbi


signature.asc
Description: PGP signature


general protection fault in khugepaged

2020-07-09 Thread syzbot
Hello,

syzbot found the following crash on:

HEAD commit:e44f65fd xen-netfront: remove redundant assignment to vari..
git tree:   net-next
console output: https://syzkaller.appspot.com/x/log.txt?x=15de54a710
kernel config:  https://syzkaller.appspot.com/x/.config?x=829871134ca5e230
dashboard link: https://syzkaller.appspot.com/bug?extid=ed318e8b790ca72c5ad0
compiler:   gcc (GCC) 10.1.0-syz 20200507
syz repro:  https://syzkaller.appspot.com/x/repro.syz?x=113406a710
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=175597d310

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+ed318e8b790ca72c5...@syzkaller.appspotmail.com

general protection fault, probably for non-canonical address 
0xdc00:  [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x-0x0007]
CPU: 1 PID: 1155 Comm: khugepaged Not tainted 5.8.0-rc2-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 
01/01/2011
RIP: 0010:anon_vma_lock_write include/linux/rmap.h:120 [inline]
RIP: 0010:collapse_huge_page mm/khugepaged.c:1110 [inline]
RIP: 0010:khugepaged_scan_pmd mm/khugepaged.c:1349 [inline]
RIP: 0010:khugepaged_scan_mm_slot mm/khugepaged.c:2110 [inline]
RIP: 0010:khugepaged_do_scan mm/khugepaged.c:2193 [inline]
RIP: 0010:khugepaged+0x3bba/0x5a10 mm/khugepaged.c:2238
Code: 01 00 00 48 8d bb 88 00 00 00 48 89 f8 48 c1 e8 03 42 80 3c 30 00 0f 85 
fa 0f 00 00 48 8b 9b 88 00 00 00 48 89 d8 48 c1 e8 03 <42> 80 3c 30 00 0f 85 d4 
0f 00 00 48 8b 3b 48 83 c7 08 e8 9f ff 30
RSP: 0018:c90004be7c80 EFLAGS: 00010246
RAX:  RBX:  RCX: 81a72d8b
RDX: 8880a69d8100 RSI: 81b7606b RDI: 88809f0577c0
RBP:  R08:  R09: 8881ff213a7f
R10: 0080 R11:  R12: 8aae6110
R13: c90004be7de0 R14: dc00 R15: 2000
FS:  () GS:8880ae70() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2:  CR3: 0001fe0cf000 CR4: 001406e0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400
Call Trace:
 kthread+0x3b5/0x4a0 kernel/kthread.c:291
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:293
Modules linked in:
---[ end trace f1f03dbd2ea0777e ]---
RIP: 0010:anon_vma_lock_write include/linux/rmap.h:120 [inline]
RIP: 0010:collapse_huge_page mm/khugepaged.c:1110 [inline]
RIP: 0010:khugepaged_scan_pmd mm/khugepaged.c:1349 [inline]
RIP: 0010:khugepaged_scan_mm_slot mm/khugepaged.c:2110 [inline]
RIP: 0010:khugepaged_do_scan mm/khugepaged.c:2193 [inline]
RIP: 0010:khugepaged+0x3bba/0x5a10 mm/khugepaged.c:2238
Code: 01 00 00 48 8d bb 88 00 00 00 48 89 f8 48 c1 e8 03 42 80 3c 30 00 0f 85 
fa 0f 00 00 48 8b 9b 88 00 00 00 48 89 d8 48 c1 e8 03 <42> 80 3c 30 00 0f 85 d4 
0f 00 00 48 8b 3b 48 83 c7 08 e8 9f ff 30
RSP: 0018:c90004be7c80 EFLAGS: 00010246
RAX:  RBX:  RCX: 81a72d8b
RDX: 8880a69d8100 RSI: 81b7606b RDI: 88809f0577c0
RBP:  R08:  R09: 8881ff213a7f
R10: 0080 R11:  R12: 8aae6110
R13: c90004be7de0 R14: dc00 R15: 2000
FS:  () GS:8880ae60() knlGS:
CS:  0010 DS:  ES:  CR0: 80050033
CR2: 004c00c8 CR3: 0001f7ac5000 CR4: 001406f0
DR0:  DR1:  DR2: 
DR3:  DR6: fffe0ff0 DR7: 0400


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkal...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches


Re: [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-09 Thread Michal Hocko
On Wed 08-07-20 09:41:06, Michal Hocko wrote:
> On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
> > On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
> > > On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> > > > From: Joonsoo Kim 
> > > > 
> > > > new_non_cma_page() in gup.c which try to allocate migration target page
> > > > requires to allocate the new page that is not on the CMA area.
> > > > new_non_cma_page() implements it by removing __GFP_MOVABLE flag.  This 
> > > > way
> > > > works well for THP page or normal page but not for hugetlb page.
> > > > 
> > > > hugetlb page allocation process consists of two steps.  First is 
> > > > dequeing
> > > > from the pool.  Second is, if there is no available page on the queue,
> > > > allocating from the page allocator.
> > > > 
> > > > new_non_cma_page() can control allocation from the page allocator by
> > > > specifying correct gfp flag.  However, dequeing cannot be controlled 
> > > > until
> > > > now, so, new_non_cma_page() skips dequeing completely.  It is a 
> > > > suboptimal
> > > > since new_non_cma_page() cannot utilize hugetlb pages on the queue so 
> > > > this
> > > > patch tries to fix this situation.
> > > > 
> > > > This patch makes the deque function on hugetlb CMA aware and skip CMA
> > > > pages if newly added skip_cma argument is passed as true.
> > > 
> > > Hmm, can't you instead change dequeue_huge_page_node_exact() to test the 
> > > PF_
> > > flag and avoid adding bool skip_cma everywhere?
> > 
> > Okay! Please check following patch.
> > > 
> > > I think that's what Michal suggested [1] except he said "the code already 
> > > does
> > > by memalloc_nocma_{save,restore} API". It needs extending a bit though, 
> > > AFAICS.
> > > __gup_longterm_locked() indeed does the save/restore, but restore comes 
> > > before
> > > check_and_migrate_cma_pages() and thus new_non_cma_page() is called, so an
> > > adjustment is needed there, but that's all?
> > > 
> > > Hm the adjustment should be also done because save/restore is done around
> > > __get_user_pages_locked(), but check_and_migrate_cma_pages() also calls
> > > __get_user_pages_locked(), and that call not being between nocma save and
> > > restore is thus also a correctness issue?
> > 
> > Simply, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It
> > would not cause any problem.
> 
> I believe a proper fix is the following. The scope is really defined for
> FOLL_LONGTERM pins and pushing it inside check_and_migrate_cma_pages
> will solve the problem as well but it imho makes more sense to do it in
> the caller the same way we do for any others. 
> 
> Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages 
> allocated from CMA region")
> 
> I am not sure this is worth backporting to stable yet.

Should I post it as a separate patch, or do you plan to include this in your
next version?

> 
> diff --git a/mm/gup.c b/mm/gup.c
> index de9e36262ccb..75980dd5a2fc 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1794,7 +1794,6 @@ static long __gup_longterm_locked(struct task_struct 
> *tsk,
>vmas_tmp, NULL, gup_flags);
>  
>   if (gup_flags & FOLL_LONGTERM) {
> - memalloc_nocma_restore(flags);
>   if (rc < 0)
>   goto out;
>  
> @@ -1802,11 +1801,13 @@ static long __gup_longterm_locked(struct task_struct 
> *tsk,
>   for (i = 0; i < rc; i++)
>   put_page(pages[i]);
>   rc = -EOPNOTSUPP;
> + memalloc_nocma_restore(flags);
>   goto out;
>   }
>  
>   rc = check_and_migrate_cma_pages(tsk, mm, start, rc, pages,
>vmas_tmp, gup_flags);
> + memalloc_nocma_restore(flags);
>   }
>  
>  out:
> -- 
> Michal Hocko
> SUSE Labs

-- 
Michal Hocko
SUSE Labs
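
The scope API being discussed, reduced to its bare shape (illustrative
sketch only):

#include <linux/sched/mm.h>

static void demo_longterm_pin_scope(void)
{
	unsigned int flags;

	flags = memalloc_nocma_save();	/* begin the no-CMA allocation scope */

	/*
	 * Everything that allocates pages on behalf of the FOLL_LONGTERM
	 * pin (__get_user_pages_locked() as well as the follow-up CMA
	 * migration) has to run inside this scope.
	 */

	memalloc_nocma_restore(flags);	/* end of scope */
}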


[PATCH] [v3] PCI: rcar: Fix runtime PM imbalance on error

2020-07-09 Thread Dinghao Liu
pm_runtime_get_sync() increments the runtime PM usage counter even when
the call returns an error code. Thus a corresponding decrement is
needed on the error handling path to keep the counter balanced.
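
The general shape of the pattern being fixed, as a hedged sketch rather
than this driver's exact code:

#include <linux/device.h>
#include <linux/pm_runtime.h>

static int demo_probe_pm(struct device *dev)
{
	int err;

	pm_runtime_enable(dev);

	err = pm_runtime_get_sync(dev);
	if (err < 0)
		goto err_put;	/* the usage counter was incremented anyway */

	/* ... resource setup, controller bringup ... */

	return 0;

err_put:
	pm_runtime_put(dev);	/* balance the failed get */
	pm_runtime_disable(dev);
	return err;
}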

Fixes: 0df6150e7ceb ("PCI: rcar: Use runtime PM to control controller
clock")

Signed-off-by: Dinghao Liu 
---

Changelog:

v2: - Remove unnecessary 'err_pm_put' label.
  Refine commit message.

v3: - Add Fixes tag.
  Rebase the patch on top of the latest kernel.
---
 drivers/pci/controller/pcie-rcar-host.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/pci/controller/pcie-rcar-host.c 
b/drivers/pci/controller/pcie-rcar-host.c
index d210a36561be..060c24f5221e 100644
--- a/drivers/pci/controller/pcie-rcar-host.c
+++ b/drivers/pci/controller/pcie-rcar-host.c
@@ -986,7 +986,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
err = pm_runtime_get_sync(pcie->dev);
if (err < 0) {
dev_err(pcie->dev, "pm_runtime_get_sync failed\n");
-   goto err_pm_disable;
+   goto err_pm_put;
}
 
err = rcar_pcie_get_resources(host);
@@ -1057,8 +1057,6 @@ static int rcar_pcie_probe(struct platform_device *pdev)
 
 err_pm_put:
pm_runtime_put(dev);
-
-err_pm_disable:
pm_runtime_disable(dev);
pci_free_resource_list(>resources);
 
-- 
2.17.1



Re: [PATCH] efi/libstub: EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER should not default to yes

2020-07-09 Thread Ard Biesheuvel
On Thu, 25 Jun 2020 at 19:11, Ard Biesheuvel  wrote:
>
> On Tue, 23 Jun 2020 at 17:09, Geert Uytterhoeven
>  wrote:
> >
> > EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER is deprecated, so it should not
> > be enabled by default.
> >
> > In light of commit 4da0b2b7e67524cc ("efi/libstub: Re-enable command
> > line initrd loading for x86"), keep the default for X86.
> >
> > Fixes: cf6b83664895a5c7 ("efi/libstub: Make initrd file loader 
> > configurable")
> > Signed-off-by: Geert Uytterhoeven 
>
> Queued as a fix, thanks.
>

I am going to have to postpone this one - it appears kernelCI uses
QEMU firmware that does not implement the new initrd loading protocol
yet, so I will need to get that fixed first.


> > ---
> >  drivers/firmware/efi/Kconfig | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
> > index e6fc022bc87e03ab..56055c61904e49f4 100644
> > --- a/drivers/firmware/efi/Kconfig
> > +++ b/drivers/firmware/efi/Kconfig
> > @@ -127,7 +127,7 @@ config EFI_ARMSTUB_DTB_LOADER
> >  config EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER
> > bool "Enable the command line initrd loader" if !X86
> > depends on EFI_STUB && (EFI_GENERIC_STUB || X86)
> > -   default y
> > +   default y if X86
> > help
> >   Select this config option to add support for the initrd= command
> >   line parameter, allowing an initrd that resides on the same volume
> > --
> > 2.17.1
> >


Re: [PATCH] Replace HTTP links with HTTPS ones: YAMA SECURITY MODULE

2020-07-09 Thread Alexander A. Klimov




Am 09.07.20 um 00:54 schrieb Kees Cook:

On Wed, Jul 08, 2020 at 08:22:03PM +0200, Alexander A. Klimov wrote:



Am 08.07.20 um 10:05 schrieb Kees Cook:

On Wed, Jul 08, 2020 at 09:33:46AM +0200, Alexander A. Klimov wrote:

Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
If not .svg:
  For each line:
If doesn't contain `\bxmlns\b`:
  For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
  If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
  If both the HTTP and HTTPS versions
  return 200 OK and serve the same content:
Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov 
---
   Continuing my work started at 93431e0607e5.
   See also: git log --oneline '--author=Alexander A. Klimov 
' v5.7..master
   (Actually letting a shell for loop submit all this stuff for me.)

   If there are any URLs to be removed completely or at least not HTTPSified:
   Just clearly say so and I'll *undo my change*.

As written here...


I interpreted that as "any URLs [changed by this patch]". I wanted no
URLs you changed to be removed nor not HTTPSified.


   See also: https://lkml.org/lkml/2020/6/27/64


(You seem to be saying "any URLs [in the file]".)


   If there are any valid, but yet not changed URLs:
   See: https://lkml.org/lkml/2020/6/26/837


The URL I commented on was not valid and not changed by your patch.



   If you apply the patch, please let me know.


   Documentation/admin-guide/LSM/Yama.rst | 2 +-
   1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/LSM/Yama.rst 
b/Documentation/admin-guide/LSM/Yama.rst
index d0a060de3973..64fd62507ae5 100644
--- a/Documentation/admin-guide/LSM/Yama.rst
+++ b/Documentation/admin-guide/LSM/Yama.rst
@@ -21,7 +21,7 @@ of their attack without resorting to user-assisted phishing.
   This is not a theoretical problem. SSH session hijacking
   (http://www.storm.net.nz/projects/7) and arbitrary code injection


This link is dead. It is likely best replaced by:

... I'd undo this change.


You sent me a patch to update URLs, gave me (seemingly) explicit
instructions about which things would cause you to undo individual
changes, none of which seemed to trigger, so I offered an improvement
that would add another HTTPS URL -- which is entirely within your stated
desires to have "[one] commit ... per one thing [you've] done" for
a patch where the Subject is literally "Replace HTTP links with HTTPS
ones", for which I suggested an improvement.


But as it's the only one here, just forget this patch.


You seem hostile to accepting feedback on how this patch could be
improved. It's one thing to use automation to help generate patches,
and I understand your apparent desires to keep it automated, but that
is not always how patch development turns out.

Your instructions appear to take a long way to just say "here's a patch,
take it or leave it" which seems pretty anti-collaborative to me.

No, no and no.

If you look up other discussions (especially the very first one) on such 
patches, you'll see that *I react to change requests with improved* 
(shortened) patches.


*I do them manually* and I've no problem with doing things manually, I 
just automate everything that is possible.


*I don't demand that my patches be accepted as-is.* The only thing I demand
is to be allowed to focus on one thing at a time.

https://lkml.org/lkml/2020/6/27/64
You requested reanimating a dead link. That's a legit thing, but it's
*another* thing, different from (my not-yet-finished task of) just
HTTPSifying URLs.


And as it's the only URL here, of course the whole patch makes no sense 
anymore. If I'd replace the URL as you said, I'd make a *new patch* with 
a *new title* and just send it --in-reply-to here. And my statement 
"just forget [the old] patch" would still stand.


Also IMAO in this particular case *I don't deserve* to be the author of 
the new patch as *you did all the work* for it – i.e. figured out the 
replacement URL.






[PATCH] VIRTIO CORE AND NET DRIVERS: Replace HTTP links with HTTPS ones

2020-07-09 Thread Alexander A. Klimov
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
For each line:
  If doesn't contain `\bxmlns\b`:
For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
  If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
If both the HTTP and HTTPS versions
return 200 OK and serve the same content:
  Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov 
---
 Continuing my work started at 93431e0607e5.
 See also: git log --oneline '--author=Alexander A. Klimov 
' v5.7..master
 (Actually letting a shell for loop submit all this stuff for me.)

 If there are any URLs to be removed completely or at least not HTTPSified:
 Just clearly say so and I'll *undo my change*.
 See also: https://lkml.org/lkml/2020/6/27/64

 If there are any valid, but yet not changed URLs:
 See: https://lkml.org/lkml/2020/6/26/837

 If you apply the patch, please let me know.


 Documentation/devicetree/bindings/virtio/mmio.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/virtio/mmio.txt 
b/Documentation/devicetree/bindings/virtio/mmio.txt
index 21af30fbb81f..0a575f329f6e 100644
--- a/Documentation/devicetree/bindings/virtio/mmio.txt
+++ b/Documentation/devicetree/bindings/virtio/mmio.txt
@@ -1,6 +1,6 @@
 * virtio memory mapped device
 
-See http://ozlabs.org/~rusty/virtio-spec/ for more details.
+See https://ozlabs.org/~rusty/virtio-spec/ for more details.
 
 Required properties:
 
-- 
2.27.0



[PATCH] net: enetc: use eth_broadcast_addr() to assign broadcast

2020-07-09 Thread Xu Wang
This patch uses eth_broadcast_addr() to assign the broadcast address
instead of memset().

Signed-off-by: Xu Wang 
---
 drivers/net/ethernet/freescale/enetc/enetc_qos.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c 
b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
index fd3df19eaa32..6fc0c275306f 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
@@ -487,7 +487,7 @@ static int enetc_streamid_hw_set(struct enetc_ndev_priv 
*priv,
 
cbd.addr[0] = lower_32_bits(dma);
cbd.addr[1] = upper_32_bits(dma);
-   memset(si_data->dmac, 0xff, ETH_ALEN);
+   eth_broadcast_addr(si_data->dmac);
si_data->vid_vidm_tg =
cpu_to_le16(ENETC_CBDR_SID_VID_MASK
+ ((0x3 << 14) | ENETC_CBDR_SID_VIDM));
-- 
2.17.1



Re:[PATCH] arm64/module-plts: Consider the special case where plt_max_entries is 0

2020-07-09 Thread Richard
On Wed, 8 Jul 2020 at 13:03, 彭浩(Richard)  wrote:
>>
>>
>> On Tue, Jul 07, 2020 at 07:46:08AM -0400, Peng Hao wrote:
>> >> If plt_max_entries is 0, a warning is triggered.
>> >> WARNING: CPU: 200 PID: 3000 at arch/arm64/kernel/module-plts.c:97 
>> >> module_emit_plt_entry+0xa4/0x150
>> >
>> > Which kernel are you seeing this with? There is a PLT-related change in
>> > for-next/core, and I'd like to rule if out if possible.
>> >
>> 5.6.0-rc3+
>> >> Signed-off-by: Peng Hao 
>> >> ---
>> >>  arch/arm64/kernel/module-plts.c | 3 ++-
>> >>  1 file changed, 2 insertions(+), 1 deletion(-)
>> >>
>> >> diff --git a/arch/arm64/kernel/module-plts.c 
>> >> b/arch/arm64/kernel/module-plts.c
>> >> index 65b08a74aec6..1868c9ac13f2 100644
>> >> --- a/arch/arm64/kernel/module-plts.c
>> >> +++ b/arch/arm64/kernel/module-plts.c
>> >> @@ -79,7 +79,8 @@ u64 module_emit_plt_entry(struct module *mod, 
>> >> Elf64_Shdr *sechdrs,
>> >>  int i = pltsec->plt_num_entries;
>> >>  int j = i - 1;
>> >>  u64 val = sym->st_value + rela->r_addend;
>> >> -
>> >> +if (pltsec->plt_max_entries == 0)
>> >> +return 0;
>> >
>> >Hmm, but if there aren't any PLTs then how do we end up here?
>> >
>> We also returned 0 when warning was triggered.
>
>That doesn't really answer the question.
>
>Apparently, you are hitting a R_AARCH64_JUMP26 or R_AARCH64_CALL26
>relocation that operates on a b or bl instruction that is more than
>128 megabytes away from its target.
>
My understanding is that a module that calls functions that are not part of the
module will use the PLT.
plt_max_entries = 0 may occur if a module does not depend on functions from
other modules.

>In module_frob_arch_sections(), we count all such relocations that
>point to other sections, and allocate a PLT slot for each (and update
>plt_max_entries) accordingly. So this means that the relocation in
>question was disregarded, and this could happen for only two reasons:
>- the branch instruction and its target are both in the same section,
>in which case this section is *really* large,
>- CONFIG_RANDOMIZE_BASE is disabled, but you are still ending up in a
>situation where the modules are really far away from the core kernel
>or from other modules.
>
>Do you have a lot of [large] modules loaded when this happens?
I don’t think I have [large] modules.  I'll trace which module caused this 
warning.
Thanks.


[PATCH v3] spi: use kthread_create_worker() helper

2020-07-09 Thread Marek Szyprowski
Use kthread_create_worker() helper to simplify the code. It uses
the kthread worker API the right way. It will eventually allow
to remove the FIXME in kthread_worker_fn() and add more consistency
checks in the future.

Reviewed-by: Petr Mladek 
Signed-off-by: Marek Szyprowski 
---
v3:
- rebased onto latest spi-next branch
---
 drivers/spi/spi.c   | 26 --
 include/linux/spi/spi.h |  6 ++
 2 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
index d4ba723a30da..1d7bba434225 100644
--- a/drivers/spi/spi.c
+++ b/drivers/spi/spi.c
@@ -1368,7 +1368,7 @@ static void __spi_pump_messages(struct spi_controller 
*ctlr, bool in_kthread)
 
/* If another context is idling the device then defer */
if (ctlr->idling) {
-   kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+   kthread_queue_work(ctlr->kworker, &ctlr->pump_messages);
spin_unlock_irqrestore(&ctlr->queue_lock, flags);
return;
}
@@ -1382,7 +1382,7 @@ static void __spi_pump_messages(struct spi_controller 
*ctlr, bool in_kthread)
 
/* Only do teardown in the thread */
if (!in_kthread) {
-   kthread_queue_work(&ctlr->kworker,
+   kthread_queue_work(ctlr->kworker,
   &ctlr->pump_messages);
spin_unlock_irqrestore(&ctlr->queue_lock, flags);
return;
@@ -1618,7 +1618,7 @@ static void spi_set_thread_rt(struct spi_controller *ctlr)
 
dev_info(&ctlr->dev,
"will run message pump with realtime priority\n");
-   sched_setscheduler(ctlr->kworker_task, SCHED_FIFO, &param);
+   sched_setscheduler(ctlr->kworker->task, SCHED_FIFO, &param);
 }
 
 static int spi_init_queue(struct spi_controller *ctlr)
@@ -1626,13 +1626,12 @@ static int spi_init_queue(struct spi_controller *ctlr)
ctlr->running = false;
ctlr->busy = false;
 
-   kthread_init_worker(&ctlr->kworker);
-   ctlr->kworker_task = kthread_run(kthread_worker_fn, &ctlr->kworker,
-"%s", dev_name(&ctlr->dev));
-   if (IS_ERR(ctlr->kworker_task)) {
-   dev_err(&ctlr->dev, "failed to create message pump task\n");
-   return PTR_ERR(ctlr->kworker_task);
+   ctlr->kworker = kthread_create_worker(0, dev_name(&ctlr->dev));
+   if (IS_ERR(ctlr->kworker)) {
+   dev_err(&ctlr->dev, "failed to create message pump kworker\n");
+   return PTR_ERR(ctlr->kworker);
}
+
kthread_init_work(&ctlr->pump_messages, spi_pump_messages);
 
/*
@@ -1716,7 +1715,7 @@ void spi_finalize_current_message(struct spi_controller 
*ctlr)
ctlr->cur_msg = NULL;
ctlr->cur_msg_prepared = false;
ctlr->fallback = false;
-   kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+   kthread_queue_work(ctlr->kworker, &ctlr->pump_messages);
spin_unlock_irqrestore(&ctlr->queue_lock, flags);
 
trace_spi_message_done(mesg);
@@ -1742,7 +1741,7 @@ static int spi_start_queue(struct spi_controller *ctlr)
ctlr->cur_msg = NULL;
spin_unlock_irqrestore(&ctlr->queue_lock, flags);

-   kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+   kthread_queue_work(ctlr->kworker, &ctlr->pump_messages);
 
return 0;
 }
@@ -1798,8 +1797,7 @@ static int spi_destroy_queue(struct spi_controller *ctlr)
return ret;
}
 
-   kthread_flush_worker(&ctlr->kworker);
-   kthread_stop(ctlr->kworker_task);
+   kthread_destroy_worker(ctlr->kworker);
 
return 0;
 }
@@ -1822,7 +1820,7 @@ static int __spi_queued_transfer(struct spi_device *spi,
 
list_add_tail(&msg->queue, &ctlr->queue);
if (!ctlr->busy && need_pump)
-   kthread_queue_work(&ctlr->kworker, &ctlr->pump_messages);
+   kthread_queue_work(ctlr->kworker, &ctlr->pump_messages);

spin_unlock_irqrestore(&ctlr->queue_lock, flags);
return 0;
diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index 0e67a9a3a1d3..5fcf5da13fdb 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -358,8 +358,7 @@ static inline void spi_unregister_driver(struct spi_driver 
*sdrv)
  * @cleanup: frees controller-specific state
  * @can_dma: determine whether this controller supports DMA
  * @queued: whether this controller is providing an internal message queue
- * @kworker: thread struct for message pump
- * @kworker_task: pointer to task for message pump kworker thread
+ * @kworker: pointer to thread struct for message pump
  * @pump_messages: work struct for scheduling work to the message pump
  * @queue_lock: spinlock to syncronise access to message queue
  * @queue: message queue
@@ -593,8 +592,7 @@ struct spi_controller {
 * Over time we expect SPI drivers to be phased over to this API.
 */
boolqueued;
-   struct kthread_worker   kworker;
-   struct task_struct  

Re: [RFC PATCH v5 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64

2020-07-09 Thread Zhenyu Ye
On 2020/7/9 2:24, Catalin Marinas wrote:
> On Wed, Jul 08, 2020 at 08:40:31PM +0800, Zhenyu Ye wrote:
>> Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
>>
>> In this patch, we only use the TLBI RANGE feature if the stride == PAGE_SIZE,
>> because when stride > PAGE_SIZE, usually only a small number of pages need
>> to be flushed and classic tlbi instructions are more effective.
> 
> Why are they more effective? I guess a range op would work on this as
> well, say unmapping a large THP range. If we ignore this stride ==
> PAGE_SIZE, it could make the code easier to read.
> 

OK, I will remove the stride == PAGE_SIZE here.

>> We can also use 'end - start < threshold number' to decide which way
>> to go, however, different hardware may have different thresholds, so
>> I'm not sure if this is feasible.
>>
>> Signed-off-by: Zhenyu Ye 
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 104 ++
>>  1 file changed, 90 insertions(+), 14 deletions(-)
> 
> Could you please rebase these patches on top of the arm64 for-next/tlbi
> branch:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/tlbi
> 

OK, I will send a formal version patch of this series soon.

>>  
>> -if ((end - start) >= (MAX_TLBI_OPS * stride)) {
>> +if ((!cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
>> +(end - start) >= (MAX_TLBI_OPS * stride)) ||
>> +range_pages >= MAX_TLBI_RANGE_PAGES) {
>>  flush_tlb_mm(vma->vm_mm);
>>  return;
>>  }
> 
> Is there any value in this range_pages check here? What's the value of
> MAX_TLBI_RANGE_PAGES? If we have TLBI range ops, we make a decision here
> but without including the stride. Further down we use the stride to skip
> the TLBI range ops.
> 

MAX_TLBI_RANGE_PAGES is defined as __TLBI_RANGE_PAGES(31, 3), which is
dictated by the ARMv8.4 spec. The address range is determined by the
formula below:

[BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)

which has nothing to do with the stride.  After the stride == PAGE_SIZE
special case below is removed, this will be clearer.
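
To make that bound concrete, here is a rough sketch of the arithmetic
(illustration only, assuming __TLBI_RANGE_PAGES keeps the
((num + 1) << (5 * scale + 1)) form implied by the formula above, with
4K pages):

	#define __TLBI_RANGE_PAGES(num, scale)	(((num) + 1) << (5 * (scale) + 1))
	#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)
	/* (31 + 1) << (5 * 3 + 1) = 32 << 16 = 2097152 pages = 8GB with 4K pages */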


>>  }
> 
> I think the algorithm is correct, though I need to work it out on a
> piece of paper.
> 
> The code could benefit from some comments (above the loop) on how the
> range is built and the right scale found.
> 

OK.

Thanks,
Zhenyu



Re: [PATCH] drm/aspeed: Call drm_fbdev_generic_setup after drm_dev_register

2020-07-09 Thread Joel Stanley
On Wed, 1 Jul 2020 at 09:10, Sam Ravnborg  wrote:
>
> Hi Guenter.
>
> On Tue, Jun 30, 2020 at 05:10:02PM -0700, Guenter Roeck wrote:
> > The following backtrace is seen when running aspeed G5 kernels.
> >
> > WARNING: CPU: 0 PID: 1 at drivers/gpu/drm/drm_fb_helper.c:2233 
> > drm_fbdev_generic_setup+0x138/0x198
> > aspeed_gfx 1e6e6000.display: Device has not been registered.
> > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3 #1
> > Hardware name: Generic DT based system
> > Backtrace:
> > [<8010d6d0>] (dump_backtrace) from [<8010d9b8>] (show_stack+0x20/0x24)
> > r7:0009 r6:6153 r5: r4:8119fa94
> > [<8010d998>] (show_stack) from [<80b8cb98>] (dump_stack+0xcc/0xec)
> > [<80b8cacc>] (dump_stack) from [<80123ef0>] (__warn+0xd8/0xfc)
> > r7:0009 r6:80e62ed0 r5: r4:974c3ccc
> > [<80123e18>] (__warn) from [<80123f98>] (warn_slowpath_fmt+0x84/0xc4)
> > r9:0009 r8:806a0140 r7:08b9 r6:80e62ed0 r5:80e631f8 r4:974c2000
> > [<80123f18>] (warn_slowpath_fmt) from [<806a0140>] 
> > (drm_fbdev_generic_setup+0x138/0x198)
> > r9:0001 r8:9758fc10 r7:9758fc00 r6: r5:0020 r4:9768a000
> > [<806a0008>] (drm_fbdev_generic_setup) from [<806d4558>] 
> > (aspeed_gfx_probe+0x204/0x32c)
> > r7:9758fc00 r6: r5: r4:9768a000
> > [<806d4354>] (aspeed_gfx_probe) from [<806dfca0>] 
> > (platform_drv_probe+0x58/0xa8)
> >
> > Since commit 1aed9509b29a6 ("drm/fb-helper: Remove return value from
> > drm_fbdev_generic_setup()"), drm_fbdev_generic_setup() must be called
> > after drm_dev_register() to avoid the warning. Do that.
> >
> > Fixes: 1aed9509b29a6 ("drm/fb-helper: Remove return value from 
> > drm_fbdev_generic_setup()")
> > Signed-off-by: Guenter Roeck 
>
> I thought we had this fixed already - but could not find the patch.
> Must have been another driver then.
>
> Acked-by: Sam Ravnborg 
>
> I assume Joel Stanley will pick up this patch.

I do not have the drm maintainer tools set up at the moment. Could one
of the other maintainers put this in the drm-misc tree?

Acked-by: Joel Stanley 

Cheers,

Joel

>
> Sam
>
> > ---
> >  drivers/gpu/drm/aspeed/aspeed_gfx_drv.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c 
> > b/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
> > index 6b27242b9ee3..bca3fcff16ec 100644
> > --- a/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
> > +++ b/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
> > @@ -173,8 +173,6 @@ static int aspeed_gfx_load(struct drm_device *drm)
> >
> >   drm_mode_config_reset(drm);
> >
> > - drm_fbdev_generic_setup(drm, 32);
> > -
> >   return 0;
> >  }
> >
> > @@ -225,6 +223,7 @@ static int aspeed_gfx_probe(struct platform_device 
> > *pdev)
> >   if (ret)
> >   goto err_unload;
> >
> > + drm_fbdev_generic_setup(&priv->drm, 32);
> >   return 0;
> >
> >  err_unload:
> > --
> > 2.17.1
> >
> > ___
> > dri-devel mailing list
> > dri-de...@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH v2 1/2] iommu: iommu_aux_at(de)tach_device() extension

2020-07-09 Thread Lu Baolu

On 2020/7/7 9:39, Lu Baolu wrote:

The hardware-assisted vfio mediated device is a use case of iommu
aux-domain. The interactions between vfio/mdev and iommu during mdev
creation and passthrough are:

- Create a group for mdev with iommu_group_alloc();
- Add the device to the group with
 group = iommu_group_alloc();
 if (IS_ERR(group))
 return PTR_ERR(group);

 ret = iommu_group_add_device(group, &mdev->dev);
 if (!ret)
 dev_info(&mdev->dev, "MDEV: group_id = %d\n",
  iommu_group_id(group));
- Allocate an aux-domain
 iommu_domain_alloc()
- Attach the aux-domain to the physical device from which the mdev is
   created.
 iommu_aux_attach_device()

In the whole process, an iommu group was allocated for the mdev and an
iommu domain was attached to the group, but group->domain is left
NULL. As a result, iommu_get_domain_for_dev() doesn't work anymore.

The iommu_get_domain_for_dev() is a necessary interface for device
drivers that want to support aux-domain. For example,

 struct iommu_domain *domain;
 struct device *dev = mdev_dev(mdev);
 unsigned long pasid;

 domain = iommu_get_domain_for_dev(dev);
 if (!domain)
 return -ENODEV;

 pasid = iommu_aux_get_pasid(domain, dev->parent);
 if (pasid == IOASID_INVALID)
 return -EINVAL;

  /* Program the device context with the PASID value */
  

This extends iommu_aux_at(de)tach_device() so that users can pass
in an optional device pointer (the struct device of the vfio/mdev, for
example) and the necessary checks and data linking can be done.

Fixes: a3a195929d40b ("iommu: Add APIs for multiple domains per device")
Cc: Robin Murphy 
Cc: Alex Williamson 
Signed-off-by: Lu Baolu 
---
  drivers/iommu/iommu.c   | 86 +
  drivers/vfio/vfio_iommu_type1.c |  5 +-
  include/linux/iommu.h   | 12 +++--
  3 files changed, 87 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 1ed1e14a1f0c..435835058209 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2723,26 +2723,92 @@ EXPORT_SYMBOL_GPL(iommu_dev_feature_enabled);
   * This should make us safe against a device being attached to a guest as a
   * whole while there are still pasid users on it (aux and sva).
   */
-int iommu_aux_attach_device(struct iommu_domain *domain, struct device *dev)
+int iommu_aux_attach_device(struct iommu_domain *domain,
+   struct device *phys_dev, struct device *dev)


I hit a lock issue during internal test. Will fix it in the next
version.

Best regards,
baolu


  {
-   int ret = -ENODEV;
+   struct iommu_group *group;
+   int ret;
  
-	if (domain->ops->aux_attach_dev)

-   ret = domain->ops->aux_attach_dev(domain, dev);
+   if (!domain->ops->aux_attach_dev ||
+   !iommu_dev_feature_enabled(phys_dev, IOMMU_DEV_FEAT_AUX))
+   return -ENODEV;
  
-	if (!ret)

-   trace_attach_device_to_domain(dev);
+   /* Bare use only. */
+   if (!dev) {
+   ret = domain->ops->aux_attach_dev(domain, phys_dev);
+   if (!ret)
+   trace_attach_device_to_domain(phys_dev);
+
+   return ret;
+   }
+
+   /*
+* The caller has created a made-up device (for example, vfio/mdev)
+* and allocated an iommu_group for user level direct assignment.
+* Make sure that the group has only single device and hasn't been
+* attached by any other domain.
+*/
+   group = iommu_group_get(dev);
+   if (!group)
+   return -ENODEV;
+
+   /*
+* Lock the group to make sure the device-count doesn't change while
+* we are attaching.
+*/
+   mutex_lock(&group->mutex);
+   ret = -EINVAL;
+   if ((iommu_group_device_count(group) != 1) || group->domain)
+   goto out_unlock;
+
+   ret = -EBUSY;
+   if (group->default_domain && group->domain != group->default_domain)
+   goto out_unlock;
+
+   ret = domain->ops->aux_attach_dev(domain, phys_dev);
+   if (!ret) {
+   trace_attach_device_to_domain(phys_dev);
+   group->domain = domain;
+   }
+
+out_unlock:
+   mutex_unlock(&group->mutex);
+   iommu_group_put(group);
  
  	return ret;

  }
  EXPORT_SYMBOL_GPL(iommu_aux_attach_device);
  
-void iommu_aux_detach_device(struct iommu_domain *domain, struct device *dev)

+void iommu_aux_detach_device(struct iommu_domain *domain,
+struct device *phys_dev, struct device *dev)
  {
-   if (domain->ops->aux_detach_dev) {
-   domain->ops->aux_detach_dev(domain, dev);
-   trace_detach_device_from_domain(dev);
+   struct iommu_group *group;
+
+   if (WARN_ON_ONCE(!domain->ops->aux_detach_dev))
+ 

[PATCH] arm64: topology: Don't support AMU without cpufreq

2020-07-09 Thread Viresh Kumar
The commit cd0ed03a8903 ("arm64: use activity monitors for frequency
invariance"), mentions that:

  "if CONFIG_CPU_FREQ is not enabled, the use of counters is
   enabled on all CPUs only if all possible CPUs correctly support
   the necessary counters"

But that's not really true as validate_cpu_freq_invariance_counters()
fails if max_freq_hz is returned as 0 (in case there is no policy for
the CPU). And the AMUs won't be supported in that case.

Make the code reflect this reality.

Signed-off-by: Viresh Kumar 
---
 arch/arm64/kernel/topology.c | 19 +++
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 0801a0f3c156..b7da372819fc 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -187,14 +187,13 @@ static int validate_cpu_freq_invariance_counters(int cpu)
return 0;
 }
 
-static inline bool
-enable_policy_freq_counters(int cpu, cpumask_var_t valid_cpus)
+static inline void update_amu_fie_cpus(int cpu, cpumask_var_t valid_cpus)
 {
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
 
if (!policy) {
pr_debug("CPU%d: No cpufreq policy found.\n", cpu);
-   return false;
+   return;
}
 
if (cpumask_subset(policy->related_cpus, valid_cpus))
@@ -202,8 +201,6 @@ enable_policy_freq_counters(int cpu, cpumask_var_t 
valid_cpus)
   amu_fie_cpus);
 
cpufreq_cpu_put(policy);
-
-   return true;
 }
 
 static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
@@ -212,7 +209,6 @@ static DEFINE_STATIC_KEY_FALSE(amu_fie_key);
 static int __init init_amu_fie(void)
 {
cpumask_var_t valid_cpus;
-   bool have_policy = false;
int ret = 0;
int cpu;
 
@@ -228,18 +224,9 @@ static int __init init_amu_fie(void)
if (validate_cpu_freq_invariance_counters(cpu))
continue;
cpumask_set_cpu(cpu, valid_cpus);
-   have_policy |= enable_policy_freq_counters(cpu, valid_cpus);
+   update_amu_fie_cpus(cpu, valid_cpus);
}
 
-   /*
-* If we are not restricted by cpufreq policies, we only enable
-* the use of the AMU feature for FIE if all CPUs support AMU.
-* Otherwise, enable_policy_freq_counters has already enabled
-* policy cpus.
-*/
-   if (!have_policy && cpumask_equal(valid_cpus, cpu_present_mask))
-   cpumask_or(amu_fie_cpus, amu_fie_cpus, valid_cpus);
-
if (!cpumask_empty(amu_fie_cpus)) {
pr_info("CPUs[%*pbl]: counters will be used for FIE.",
cpumask_pr_args(amu_fie_cpus));
-- 
2.25.0.rc1.19.g042ed3e048af



Re: [PATCH 0/2] spi: spi-qcom-qspi: Avoid some per-transfer overhead

2020-07-09 Thread Mukesh, Savaliya



On 7/8/2020 1:46 AM, Douglas Anderson wrote:

Not to be confused with the similar series I posed for the _other_
Qualcomm SPI controller (spi-geni-qcom) [1], this one avoids the
overhead on the Quad SPI controller.

It's based atop the current Qualcomm tree including Rajendra's ("spi:
spi-qcom-qspi: Use OPP API to set clk/perf state").  As discussed in
individual patches, these could ideally land through the Qualcomm tree
with Mark's Ack.

Measuring:
* Before OPP / Interconnect patches reading all flash takes: ~3.4 seconds
* After OPP / Interconnect patches reading all flash takes: ~4.7 seconds
* After this patch reading all flash takes: ~3.3 seconds

[1] https://lore.kernel.org/r/20200702004509.2333554-1-diand...@chromium.org
[2] 
https://lore.kernel.org/r/1593769293-6354-2-git-send-email-rna...@codeaurora.org


Douglas Anderson (2):
   spi: spi-qcom-qspi: Avoid clock setting if not needed
   spi: spi-qcom-qspi: Set an autosuspend delay of 250 ms

  drivers/spi/spi-qcom-qspi.c | 45 -
  1 file changed, 35 insertions(+), 10 deletions(-)


Reviewed-by: Mukesh Kumar Savaliya 


Re: [PATCH] arm64/module-plts: Consider the special case where plt_max_entries is 0

2020-07-09 Thread Ard Biesheuvel
On Thu, 9 Jul 2020 at 09:50, 彭浩(Richard)  wrote:
>
> On Wed, 8 Jul 2020 at 13:03, 彭浩(Richard)  wrote:
> >>
> >>
> >> On Tue, Jul 07, 2020 at 07:46:08AM -0400, Peng Hao wrote:
> >> >> If plt_max_entries is 0, a warning is triggered.
> >> >> WARNING: CPU: 200 PID: 3000 at arch/arm64/kernel/module-plts.c:97 
> >> >> module_emit_plt_entry+0xa4/0x150
> >> >
> >> > Which kernel are you seeing this with? There is a PLT-related change in
> >> > for-next/core, and I'd like to rule if out if possible.
> >> >
> >> 5.6.0-rc3+
> >> >> Signed-off-by: Peng Hao 
> >> >> ---
> >> >>  arch/arm64/kernel/module-plts.c | 3 ++-
> >> >>  1 file changed, 2 insertions(+), 1 deletion(-)
> >> >>
> >> >> diff --git a/arch/arm64/kernel/module-plts.c 
> >> >> b/arch/arm64/kernel/module-plts.c
> >> >> index 65b08a74aec6..1868c9ac13f2 100644
> >> >> --- a/arch/arm64/kernel/module-plts.c
> >> >> +++ b/arch/arm64/kernel/module-plts.c
> >> >> @@ -79,7 +79,8 @@ u64 module_emit_plt_entry(struct module *mod, 
> >> >> Elf64_Shdr *sechdrs,
> >> >>  int i = pltsec->plt_num_entries;
> >> >>  int j = i - 1;
> >> >>  u64 val = sym->st_value + rela->r_addend;
> >> >> -
> >> >> +if (pltsec->plt_max_entries == 0)
> >> >> +return 0;
> >> >
> >> >Hmm, but if there aren't any PLTs then how do we end up here?
> >> >
> >> We also returned 0 when warning was triggered.
> >
> >That doesn't really answer the question.
> >
> >Apparently, you are hitting a R_AARCH64_JUMP26 or R_AARCH64_CALL26
> >relocation that operates on a b or bl instruction that is more than
> >128 megabytes away from its target.
> >
> My understanding is that a module that calls functions that are not part of 
> the module will use PLT.
> Plt_max_entries =0 May occur if a module does not depend on other module 
> functions.
>

A PLT slot is allocated for each b or bl instruction that refers to a
symbol that lives in a different section, either of the same module
(e.g., bl in .init calling into .text), of another module, or of the
core kernel.
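
Roughly, the counting done by module_frob_arch_sections()/count_plts() boils
down to the sketch below (simplified, not the actual kernel code; it ignores
the CONFIG_RANDOMIZE_BASE handling and the duplicate-branch filtering):

	/*
	 * Reserve one PLT slot per b/bl relocation whose target symbol
	 * lives in a different section than the one being relocated.
	 */
	static unsigned int count_plts_sketch(Elf64_Sym *syms, Elf64_Rela *rela,
					      int num, Elf64_Word dstidx)
	{
		unsigned int ret = 0;
		int i;

		for (i = 0; i < num; i++) {
			switch (ELF64_R_TYPE(rela[i].r_info)) {
			case R_AARCH64_JUMP26:
			case R_AARCH64_CALL26:
				if (syms[ELF64_R_SYM(rela[i].r_info)].st_shndx != dstidx)
					ret++;
				break;
			}
		}

		return ret;
	}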

I don't see how you end up with plt_max_entries == 0 in this case, though.
Are you sure you have CONFIG_RANDOMIZE_BASE enabled?

> >In module_frob_arch_sections(), we count all such relocations that
> >point to other sections, and allocate a PLT slot for each (and update
> >plt_max_entries) accordingly. So this means that the relocation in
> >question was disregarded, and this could happen for only two reasons:
> >- the branch instruction and its target are both in the same section,
> >in which case this section is *really* large,
> >- CONFIG_RANDOMIZE_BASE is disabled, but you are still ending up in a
> >situation where the modules are really far away from the core kernel
> >or from other modules.
> >
> >Do you have a lot of [large] modules loaded when this happens?
> I don’t think I have [large] modules.  I'll trace which module caused this 
> warning.

Yes please.


[PATCH v2 3/3] misc: cxl: flash: Remove unused variable 'drc_index'

2020-07-09 Thread Lee Jones
Keeping the pointer increment though.

Fixes the following W=1 kernel build warning:

 drivers/misc/cxl/flash.c: In function ‘update_devicetree’:
 drivers/misc/cxl/flash.c:178:16: warning: variable ‘drc_index’ set but not 
used [-Wunused-but-set-variable]
 178 | __be32 *data, drc_index, phandle;
 | ^

Cc: Frederic Barrat 
Cc: Andrew Donnellan 
Cc: linuxppc-...@lists.ozlabs.org
Signed-off-by: Lee Jones 
---
Changelog:

v1 => v2:
 - Fix "flash.c:216:6: error: value computed is not used [-Werror=unused-value]"
   - ... as reported by Intel's Kernel Test Robot

drivers/misc/cxl/flash.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/misc/cxl/flash.c b/drivers/misc/cxl/flash.c
index cb9cca35a2263..5b93ff51d82a5 100644
--- a/drivers/misc/cxl/flash.c
+++ b/drivers/misc/cxl/flash.c
@@ -175,7 +175,7 @@ static int update_devicetree(struct cxl *adapter, s32 scope)
struct update_nodes_workarea *unwa;
u32 action, node_count;
int token, rc, i;
-   __be32 *data, drc_index, phandle;
+   __be32 *data, phandle;
char *buf;
 
token = rtas_token("ibm,update-nodes");
@@ -213,7 +213,7 @@ static int update_devicetree(struct cxl *adapter, s32 scope)
break;
case OPCODE_ADD:
/* nothing to do, just move pointer */
-   drc_index = *data++;
+   data++;
break;
}
}
-- 
2.25.1


Re: [PATCH 4.14 105/136] usb/ehci-platform: Set PM runtime as active on resume

2020-07-09 Thread Eugeniu Rosca
Hello everyone,

Cc: linux-renesas-soc
Cc: linux-pm

On Tue, Jun 23, 2020 at 09:59:21PM +0200, Greg Kroah-Hartman wrote:
> From: Qais Yousef 
> 
> [ Upstream commit 16bdc04cc98ab0c74392ceef2475ecc5e73fcf49 ]
> 
> Follow suit of ohci-platform.c and perform pm_runtime_set_active() on
> resume.
> 
> ohci-platform.c had a warning reported due to the missing
> pm_runtime_set_active() [1].
> 
> [1] 
> https://lore.kernel.org/lkml/20200323143857.db5zphxhq4hz3...@e107158-lin.cambridge.arm.com/
> 
> Acked-by: Alan Stern 
> Signed-off-by: Qais Yousef 
> CC: Tony Prisk 
> CC: Greg Kroah-Hartman 
> CC: Mathias Nyman 
> CC: Oliver Neukum 
> CC: linux-arm-ker...@lists.infradead.org
> CC: linux-...@vger.kernel.org
> CC: linux-kernel@vger.kernel.org
> Link: https://lore.kernel.org/r/20200518154931.6144-3-qais.you...@arm.com
> Signed-off-by: Greg Kroah-Hartman 
> Signed-off-by: Sasha Levin 
> ---
>  drivers/usb/host/ehci-platform.c | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/usb/host/ehci-platform.c 
> b/drivers/usb/host/ehci-platform.c
> index f1908ea9fbd86..6fcd332880143 100644
> --- a/drivers/usb/host/ehci-platform.c
> +++ b/drivers/usb/host/ehci-platform.c
> @@ -390,6 +390,11 @@ static int ehci_platform_resume(struct device *dev)
>   }
>  
>   ehci_resume(hcd, priv->reset_on_resume);
> +
> + pm_runtime_disable(dev);
> + pm_runtime_set_active(dev);
> + pm_runtime_enable(dev);
> +
>   return 0;

After integrating v4.14.186 commit 5410d158ca2a50 ("usb/ehci-platform:
Set PM runtime as active on resume") into downstream v4.14.x, we started
to consistently experience the panic [1] below on every second s2ram of
the R-Car H3 Salvator-X Renesas reference board.

After some investigations, we concluded the following:
 - the issue does not exist in vanilla v5.8-rc4+
 - [bisecting shows that] the panic on v4.14.186 is caused by the lack
   of v5.6-rc1 commit 987351e1ea7772 ("phy: core: Add consumer device
   link support"). Getting evidence for that is easy. Reverting
   987351e1ea7772 in vanilla leads to a similar backtrace [2].

Questions:
 - Backporting 987351e1ea7772 ("phy: core: Add consumer device
   link support") to v4.14.187 looks challenging enough, so probably not
   worth it. Anybody to contradict this?
 - Assuming no plans to backport the missing mainline commit to v4.14.x,
   should the following three v4.14.186 commits be reverted on v4.14.x?
   * baef809ea497a4 ("usb/ohci-platform: Fix a warning when hibernating")
   * 9f33eff4958885 ("usb/xhci-plat: Set PM runtime as active on resume")
   * 5410d158ca2a50 ("usb/ehci-platform: Set PM runtime as active on resume")

[1] https://elinux.org/R-Car/Boards/Salvator-X#Suspend-to-RAM
root@rcar-gen3:~# cat s2ram.sh 
modprobe i2c-dev
echo 9 > /proc/sys/kernel/printk
i2cset -f -y 7 0x30 0x20 0x0F
echo 0 > /sys/module/suspend/parameters/pm_test_delay
echo core  > /sys/power/pm_test
echo deep > /sys/power/mem_sleep
echo 1 > /sys/power/pm_debug_messages
echo 0 > /sys/power/pm_print_times
echo mem > /sys/power/state;
root@rcar-gen3:~#
root@rcar-gen3:~# sh s2ram.sh 
[   65.853020] PM: suspend entry (deep)
[   65.858395] PM: Syncing filesystems ... done.
[   65.895890] PM: Preparing system for sleep (deep)
[   65.906272] Freezing user space processes ... (elapsed 0.004 seconds) done.
[   65.918350] OOM killer disabled.
[   65.921827] Freezing remaining freezable tasks ... (elapsed 0.005 seconds) 
done.
[   65.935063] PM: Suspending system (deep)
[   66.094910] PM: suspend of devices complete after 143.807 msecs
[   66.101282] PM: suspend devices took 0.162 seconds
[   66.133020] PM: late suspend of devices complete after 26.225 msecs
[   66.166806] PM: noirq suspend of devices complete after 24.050 msecs
[   66.173518] Disabling non-boot CPUs ...
[   66.199539] CPU1: shutdown
[   66.202563] psci: CPU1 killed (polled 0 ms)
[   66.230911] CPU2: shutdown
[   66.233923] psci: CPU2 killed (polled 0 ms)
[   66.261940] CPU3: shutdown
[   66.265351] psci: CPU3 killed (polled 0 ms)
[   66.300596] CPU4: shutdown
[   66.303837] psci: CPU4 killed (polled 0 ms)
[   66.340455] NOHZ: local_softirq_pending 202
[   66.346818] CPU5: shutdown
[   66.349811] psci: CPU5 killed (polled 0 ms)
[   66.388761] CPU6: shutdown
[   66.391732] psci: CPU6 killed (polled 0 ms)
[   66.442557] CPU7: shutdown
[   66.445659] psci: CPU7 killed (polled 0 ms)
[   66.452730] PM: suspend debug: Waiting for 0 second(s).
[   66.452730] PM: Timekeeping suspended for 0.005 seconds
[   66.452898] Enabling non-boot CPUs ...
[   66.470705] CPU1 is up
[   66.480825] CPU2 is up
[   66.491482] CPU3 is up
[   66.517818] CPU4 is up
[   66.537699] CPU5 is up
[   66.558622] CPU6 is up
[   66.580985] CPU7 is up
[   66.597724] PM: noirq resume of devices complete after 13.979 msecs
[   66.689793] PM: early resume of devices complete after 83.851 msecs
[   66.700577] Bad mode in Error handler detected on CPU3, code 0xbf02 -- 
SError
[   66.700610] Bad mode in Error handler detected on CPU2, 

Re: [PATCH 0/9] timer: Reduce timers softirq v2

2020-07-09 Thread Juri Lelli
Hi,

On 07/07/20 03:32, Frederic Weisbecker wrote:
> Hi,
> 
> No huge change here, just addressed reviews and fixed warnings:
> 
> * Reposted patch 1 separately with appropriate "Fixes:" tag and stable Cc'ed:
>   https://lore.kernel.org/lkml/20200703010657.2302-1-frede...@kernel.org/
> 
> * Fix missing initialization of next_expiry in 4/9 (thanks Juri)
> 
> * Dropped "timer: Simplify LVL_START() and calc_index()" and added comments
>   to explain current layout instead in 2/9 (thanks Thomas)
> 
> * Rewrote changelog of 9/9 (Thanks Thomas)
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
>   timers/softirq-v2
> 
> HEAD: 5545d80b7b9bd69ede1c17fda194ac6620e7063f
> 
> Thanks,
>   Frederic
> ---

Testing of this set looks good (even with RT). Feel free to add

Tested-by: Juri Lelli 

to all the patches and to the patch you posted separately (old 01).

Thanks!

Juri



RE: [PATCH v2 0/2] Add support to get/set PHY attributes using PHY framework

2020-07-09 Thread Swapnil Kashinath Jakhade
Ping requesting review comments.
https://lkml.org/lkml/2020/5/26/507

Thanks & regards,
Swapnil

> -Original Message-
> From: Yuti Amonkar 
> Sent: Tuesday, May 26, 2020 8:05 PM
> To: linux-kernel@vger.kernel.org; kis...@ti.com; robh...@kernel.org;
> mark.rutl...@arm.com; max...@cerno.tech
> Cc: nsek...@ti.com; jsa...@ti.com; tomi.valkei...@ti.com;
> prane...@ti.com; Milind Parab ; Swapnil Kashinath
> Jakhade ; Yuti Suresh Amonkar
> 
> Subject: [PATCH v2 0/2] Add support to get/set PHY attributes using PHY
> framework
> 
> This patch series adds support to use kernel PHY subsystem APIs to get/set
> PHY attributes like number of lanes and maximum link rate.
> 
> It includes following patches:
> 
> 1. v2-0001-phy-Add-max_link_rate-as-a-PHY-attribute-and-APIs.patch
> This patch adds max_link_rate as a PHY attribute along with a pair of APIs
> that allow the generic PHY subsystem to get/set PHY attributes supported by
> the PHY.
> The PHY provider driver may use phy_set_attrs() API to set the values that
> PHY supports.
> The controller driver may then use phy_get_attrs() API to fetch the PHY
> attributes in order to properly configure the controller.
> 
> 2. v2-0002-phy-phy-cadence-torrent-Use-kernel-PHY-API-to-set.patch
> This patch uses kernel PHY API phy_set_attrs to set corresponding PHY
> properties in Cadence Torrent PHY driver. This will enable drivers using this
> PHY to read these properties using PHY framework.
> 
> The phy_get_attrs() API will be used in the DRM bridge driver [1] which is in
> process of upstreaming.
> 
> [1]
> 
> https://lkml.org/lkml/2020/2/26/263
> 
> Version History:
> 
> v2:
> - Implemented single pair of functions to get/set all PHY attributes
> 
> Swapnil Jakhade (1):
>   phy: phy-cadence-torrent: Use kernel PHY API to set PHY attributes
> 
> Yuti Amonkar (1):
>   phy: Add max_link_rate as a PHY attribute and APIs to get/set
> phy_attrs
> 
>  drivers/phy/cadence/phy-cadence-torrent.c |  7 +++
>  include/linux/phy/phy.h   | 25 +++
>  2 files changed, 32 insertions(+)
> 
> --
> 2.17.1



Re: [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-09 Thread Joonsoo Kim
On Thu, 9 Jul 2020 at 15:43, Michal Hocko wrote:
>
> On Wed 08-07-20 09:41:06, Michal Hocko wrote:
> > On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
> > > On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
> > > > On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> > > > > From: Joonsoo Kim 
> > > > >
> > > > > new_non_cma_page() in gup.c which try to allocate migration target 
> > > > > page
> > > > > requires to allocate the new page that is not on the CMA area.
> > > > > new_non_cma_page() implements it by removing __GFP_MOVABLE flag.  
> > > > > This way
> > > > > works well for THP page or normal page but not for hugetlb page.
> > > > >
> > > > > hugetlb page allocation process consists of two steps.  First is 
> > > > > dequeing
> > > > > from the pool.  Second is, if there is no available page on the queue,
> > > > > allocating from the page allocator.
> > > > >
> > > > > new_non_cma_page() can control allocation from the page allocator by
> > > > > specifying correct gfp flag.  However, dequeing cannot be controlled 
> > > > > until
> > > > > now, so, new_non_cma_page() skips dequeing completely.  It is a 
> > > > > suboptimal
> > > > > since new_non_cma_page() cannot utilize hugetlb pages on the queue so 
> > > > > this
> > > > > patch tries to fix this situation.
> > > > >
> > > > > This patch makes the deque function on hugetlb CMA aware and skip CMA
> > > > > pages if newly added skip_cma argument is passed as true.
> > > >
> > > > Hmm, can't you instead change dequeue_huge_page_node_exact() to test 
> > > > the PF_
> > > > flag and avoid adding bool skip_cma everywhere?
> > >
> > > Okay! Please check following patch.
> > > >
> > > > I think that's what Michal suggested [1] except he said "the code 
> > > > already does
> > > > by memalloc_nocma_{save,restore} API". It needs extending a bit though, 
> > > > AFAICS.
> > > > __gup_longterm_locked() indeed does the save/restore, but restore comes 
> > > > before
> > > > check_and_migrate_cma_pages() and thus new_non_cma_page() is called, so 
> > > > an
> > > > adjustment is needed there, but that's all?
> > > >
> > > > Hm the adjustment should be also done because save/restore is done 
> > > > around
> > > > __get_user_pages_locked(), but check_and_migrate_cma_pages() also calls
> > > > __get_user_pages_locked(), and that call not being between nocma save 
> > > > and
> > > > restore is thus also a correctness issue?
> > >
> > > Simply, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It
> > > would not cause any problem.
> >
> > I believe a proper fix is the following. The scope is really defined for
> > FOLL_LONGTERM pins and pushing it inside check_and_migrate_cma_pages
> > will solve the problem as well but it imho makes more sense to do it in
> > the caller the same way we do for any others.
> >
> > Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages 
> > allocated from CMA region")
> >
> > I am not sure this is worth backporting to stable yet.
>
> Should I post it as a separate patch, or do you plan to include this into your
> next version?

It's better to include it in my next version, since this patch would
cause a conflict with the -next tree that includes my v3 of this
patchset.

Thanks.


Re: [PATCH v4 0/4] printk: replace ringbuffer

2020-07-09 Thread John Ogness
On 2020-07-08, Petr Mladek  wrote:
> OK, I think that we are ready to try this in linux-next.
> I am going to push it there via printk/linux.git.
>
> [...]
> 
> Of course, there are still many potential problems. The following comes
> to my mind:
>
> [...]
>
>+ Debugging tools accessing the buffer directly would need to
>  understand the new structure. Fortunately John provided
>  patches for the most prominent ones.

The next series in the printk-rework (move LOG_CONT handling from
writers to readers) makes some further changes that, while not
incompatible, could affect the output of existing tools. It may be a
good idea to let the new ringbuffer sit in linux-next until the next
series has been discussed/reviewed/merged. After the next series,
everything will be in place (with regard to userspace tools) to finish
the rework.

As reminder, here are all the steps planned for the full rework:

1. replace ringbuffer
2. implement NMI-safe LOG_CONT (i.e. move handling to readers)
3. remove logbuf_lock
4. remove safe buffers
5. implement per-console printing kthreads
6. implement emergency mode and write_atomic() support
7. implement write_atomic() for 8250 UART

Some of the steps may be combined into a single series if the changes
are not too dramatic.

John Ogness


Re: [V2 PATCH] usb: mtu3: fix NULL pointer dereference

2020-07-09 Thread Chunfeng Yun
On Thu, 2020-07-09 at 09:40 +0300, Felipe Balbi wrote:
> Chunfeng Yun  writes:
> 
> > Some pointers are dereferenced before successful checks.
> >
> > Reported-by: Markus Elfring 
> > Signed-off-by: Chunfeng Yun 
> 
> do you need a Fixes tag here? Perhaps a Cc stable too?
It will not cause any issues, so I think there is no need to add it.

According to Greg's comment, I guess he means there is no need to check these
pointers at all, so I'll send a new version to remove the checks.

Thank you

> 



Re: [PATCH v2 3/3] misc: cxl: flash: Remove unused variable 'drc_index'

2020-07-09 Thread Andrew Donnellan

On 9/7/20 4:56 pm, Lee Jones wrote:

Keeping the pointer increment though.

Fixes the following W=1 kernel build warning:

  drivers/misc/cxl/flash.c: In function ‘update_devicetree’:
  drivers/misc/cxl/flash.c:178:16: warning: variable ‘drc_index’ set but not 
used [-Wunused-but-set-variable]
  178 | __be32 *data, drc_index, phandle;
  | ^

Cc: Frederic Barrat 
Cc: Andrew Donnellan 
Cc: linuxppc-...@lists.ozlabs.org
Signed-off-by: Lee Jones 


Acked-by: Andrew Donnellan 

--
Andrew Donnellan  OzLabs, ADL Canberra
a...@linux.ibm.com IBM Australia Limited


[PATCH] EDAC-GHES: Replace HTTP links with HTTPS ones

2020-07-09 Thread Alexander A. Klimov
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
For each line:
  If doesn't contain `\bxmlns\b`:
For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
  If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
If both the HTTP and HTTPS versions
return 200 OK and serve the same content:
  Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov 
---
 Continuing my work started at 93431e0607e5.
 See also: git log --oneline '--author=Alexander A. Klimov 
' v5.7..master
 (Actually letting a shell for loop submit all this stuff for me.)

 If there are any URLs to be removed completely or at least not HTTPSified:
 Just clearly say so and I'll *undo my change*.
 See also: https://lkml.org/lkml/2020/6/27/64

 If there are any valid, but yet not changed URLs:
 See: https://lkml.org/lkml/2020/6/26/837

 If you apply the patch, please let me know.


 drivers/edac/ghes_edac.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c
index cb3dab56a875..e019319d7c2d 100644
--- a/drivers/edac/ghes_edac.c
+++ b/drivers/edac/ghes_edac.c
@@ -4,7 +4,7 @@
  *
  * Copyright (c) 2013 by Mauro Carvalho Chehab
  *
- * Red Hat Inc. http://www.redhat.com
+ * Red Hat Inc. https://www.redhat.com
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-- 
2.27.0



Re: linux-next: build warning after merge of the scmi tree

2020-07-09 Thread Sudeep Holla
On Thu, Jul 09, 2020 at 09:54:12AM +1000, Stephen Rothwell wrote:
> Hi all,
> 
> After merging the scmi tree, today's linux-next build (x86_64
> allmodconfig) produced this warning:
> 
> drivers/firmware/arm_scmi/clock.c: In function 'rate_cmp_func':
> drivers/firmware/arm_scmi/clock.c:128:12: warning: initialization discards 
> 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
>   128 |  u64 *r1 = _r1, *r2 = _r2;
>   |^~~
> drivers/firmware/arm_scmi/clock.c:128:23: warning: initialization discards 
> 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
>   128 |  u64 *r1 = _r1, *r2 = _r2;
>   |   ^~~
> 
> Introduced by commit
> 
>   f0a2500a2a05 ("firmware: arm_scmi: Keep the discrete clock rates sorted")
> 

Sorry for both the issues, I will update the tree with proper patch.

-- 
Regards,
Sudeep


Re: [PATCH] drm/aspeed: Call drm_fbdev_generic_setup after drm_dev_register

2020-07-09 Thread Thomas Zimmermann


Am 09.07.20 um 08:51 schrieb Joel Stanley:
> On Wed, 1 Jul 2020 at 09:10, Sam Ravnborg  wrote:
>>
>> Hi Guenter.
>>
>> On Tue, Jun 30, 2020 at 05:10:02PM -0700, Guenter Roeck wrote:
>>> The following backtrace is seen when running aspeed G5 kernels.
>>>
>>> WARNING: CPU: 0 PID: 1 at drivers/gpu/drm/drm_fb_helper.c:2233 
>>> drm_fbdev_generic_setup+0x138/0x198
>>> aspeed_gfx 1e6e6000.display: Device has not been registered.
>>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3 #1
>>> Hardware name: Generic DT based system
>>> Backtrace:
>>> [<8010d6d0>] (dump_backtrace) from [<8010d9b8>] (show_stack+0x20/0x24)
>>> r7:0009 r6:6153 r5: r4:8119fa94
>>> [<8010d998>] (show_stack) from [<80b8cb98>] (dump_stack+0xcc/0xec)
>>> [<80b8cacc>] (dump_stack) from [<80123ef0>] (__warn+0xd8/0xfc)
>>> r7:0009 r6:80e62ed0 r5: r4:974c3ccc
>>> [<80123e18>] (__warn) from [<80123f98>] (warn_slowpath_fmt+0x84/0xc4)
>>> r9:0009 r8:806a0140 r7:08b9 r6:80e62ed0 r5:80e631f8 r4:974c2000
>>> [<80123f18>] (warn_slowpath_fmt) from [<806a0140>] 
>>> (drm_fbdev_generic_setup+0x138/0x198)
>>> r9:0001 r8:9758fc10 r7:9758fc00 r6: r5:0020 r4:9768a000
>>> [<806a0008>] (drm_fbdev_generic_setup) from [<806d4558>] 
>>> (aspeed_gfx_probe+0x204/0x32c)
>>> r7:9758fc00 r6: r5: r4:9768a000
>>> [<806d4354>] (aspeed_gfx_probe) from [<806dfca0>] 
>>> (platform_drv_probe+0x58/0xa8)
>>>
>>> Since commit 1aed9509b29a6 ("drm/fb-helper: Remove return value from
>>> drm_fbdev_generic_setup()"), drm_fbdev_generic_setup() must be called
>>> after drm_dev_register() to avoid the warning. Do that.
>>>
>>> Fixes: 1aed9509b29a6 ("drm/fb-helper: Remove return value from 
>>> drm_fbdev_generic_setup()")
>>> Signed-off-by: Guenter Roeck 
>>
>> I thought we had this fixed already - but could not find the patch.
>> Must have been another driver then.
>>
>> Acked-by: Sam Ravnborg 
>>
>> I assume Joel Stanley will pick up this patch.
> 
> I do not have the drm maintainer tools set up at the moment. Could one
> of the other maintainers put this in the drm-misc tree?

Added to drm-misc-fixes

Best regards
Thomas

> 
> Acked-by: Joel Stanley 
> 
> Cheers,
> 
> Joel
> 
>>
>> Sam
>>
>>> ---
>>>  drivers/gpu/drm/aspeed/aspeed_gfx_drv.c | 3 +--
>>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c 
>>> b/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
>>> index 6b27242b9ee3..bca3fcff16ec 100644
>>> --- a/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
>>> +++ b/drivers/gpu/drm/aspeed/aspeed_gfx_drv.c
>>> @@ -173,8 +173,6 @@ static int aspeed_gfx_load(struct drm_device *drm)
>>>
>>>   drm_mode_config_reset(drm);
>>>
>>> - drm_fbdev_generic_setup(drm, 32);
>>> -
>>>   return 0;
>>>  }
>>>
>>> @@ -225,6 +223,7 @@ static int aspeed_gfx_probe(struct platform_device 
>>> *pdev)
>>>   if (ret)
>>>   goto err_unload;
>>>
>>> + drm_fbdev_generic_setup(&priv->drm, 32);
>>>   return 0;
>>>
>>>  err_unload:
>>> --
>>> 2.17.1
>>>
>>> ___
>>> dri-devel mailing list
>>> dri-de...@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer



signature.asc
Description: OpenPGP digital signature


[v3 PATCH] usb: mtu3: remove unnecessary NULL pointer checks

2020-07-09 Thread Chunfeng Yun
The class driver will ensure the parameters are not NULL
pointers before calling the hook functions of usb_ep_ops,
so there is no need to check them again.

Reported-by: Markus Elfring 
Signed-off-by: Chunfeng Yun 
---
v3: remove unnecessary NULL pointer checks rather than adding more checks.

v2: nothing changed, but abandon another patch.
---
 drivers/usb/mtu3/mtu3_gadget.c | 25 ++---
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/drivers/usb/mtu3/mtu3_gadget.c b/drivers/usb/mtu3/mtu3_gadget.c
index f93732e..6b26cb8 100644
--- a/drivers/usb/mtu3/mtu3_gadget.c
+++ b/drivers/usb/mtu3/mtu3_gadget.c
@@ -263,23 +263,15 @@ void mtu3_free_request(struct usb_ep *ep, struct 
usb_request *req)
 static int mtu3_gadget_queue(struct usb_ep *ep,
struct usb_request *req, gfp_t gfp_flags)
 {
-   struct mtu3_ep *mep;
-   struct mtu3_request *mreq;
-   struct mtu3 *mtu;
+   struct mtu3_ep *mep = to_mtu3_ep(ep);
+   struct mtu3_request *mreq = to_mtu3_request(req);
+   struct mtu3 *mtu = mep->mtu;
unsigned long flags;
int ret = 0;
 
-   if (!ep || !req)
-   return -EINVAL;
-
if (!req->buf)
return -ENODATA;
 
-   mep = to_mtu3_ep(ep);
-   mtu = mep->mtu;
-   mreq = to_mtu3_request(req);
-   mreq->mtu = mtu;
-
if (mreq->mep != mep)
return -EINVAL;
 
@@ -303,6 +295,7 @@ static int mtu3_gadget_queue(struct usb_ep *ep,
return -ESHUTDOWN;
}
 
+   mreq->mtu = mtu;
mreq->request.actual = 0;
mreq->request.status = -EINPROGRESS;
 
@@ -335,11 +328,11 @@ static int mtu3_gadget_dequeue(struct usb_ep *ep, struct 
usb_request *req)
struct mtu3_ep *mep = to_mtu3_ep(ep);
struct mtu3_request *mreq = to_mtu3_request(req);
struct mtu3_request *r;
+   struct mtu3 *mtu = mep->mtu;
unsigned long flags;
int ret = 0;
-   struct mtu3 *mtu = mep->mtu;
 
-   if (!ep || !req || mreq->mep != mep)
+   if (mreq->mep != mep)
return -EINVAL;
 
dev_dbg(mtu->dev, "%s : req=%p\n", __func__, req);
@@ -379,9 +372,6 @@ static int mtu3_gadget_ep_set_halt(struct usb_ep *ep, int 
value)
unsigned long flags;
int ret = 0;
 
-   if (!ep)
-   return -EINVAL;
-
dev_dbg(mtu->dev, "%s : %s...", __func__, ep->name);
 
spin_lock_irqsave(&mtu->lock, flags);
@@ -424,9 +414,6 @@ static int mtu3_gadget_ep_set_wedge(struct usb_ep *ep)
 {
struct mtu3_ep *mep = to_mtu3_ep(ep);
 
-   if (!ep)
-   return -EINVAL;
-
mep->wedged = 1;
 
return usb_ep_set_halt(ep);
-- 
1.9.1


[PATCH v3 1/4] iommu/vt-d: Refactor device_to_iommu() helper

2020-07-09 Thread Lu Baolu
It is refactored in two ways:

- Make it global so that it could be used in other files.

- Make bus/devfn optional so that callers can ignore these two returned
values when they only want to get the corresponding iommu pointer.

Signed-off-by: Lu Baolu 
Reviewed-by: Kevin Tian 
---
 drivers/iommu/intel/iommu.c | 55 +++--
 drivers/iommu/intel/svm.c   |  8 +++---
 include/linux/intel-iommu.h |  3 +-
 3 files changed, 21 insertions(+), 45 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 2ce490c2eab8..4a6b6960fc32 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -778,16 +778,16 @@ is_downstream_to_pci_bridge(struct device *dev, struct 
device *bridge)
return false;
 }
 
-static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 
*devfn)
+struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn)
 {
struct dmar_drhd_unit *drhd = NULL;
+   struct pci_dev *pdev = NULL;
struct intel_iommu *iommu;
struct device *tmp;
-   struct pci_dev *pdev = NULL;
u16 segment = 0;
int i;
 
-   if (iommu_dummy(dev))
+   if (!dev || iommu_dummy(dev))
return NULL;
 
if (dev_is_pci(dev)) {
@@ -818,8 +818,10 @@ static struct intel_iommu *device_to_iommu(struct device 
*dev, u8 *bus, u8 *devf
if (pdev && pdev->is_virtfn)
goto got_pdev;
 
-   *bus = drhd->devices[i].bus;
-   *devfn = drhd->devices[i].devfn;
+   if (bus && devfn) {
+   *bus = drhd->devices[i].bus;
+   *devfn = drhd->devices[i].devfn;
+   }
goto out;
}
 
@@ -829,8 +831,10 @@ static struct intel_iommu *device_to_iommu(struct device 
*dev, u8 *bus, u8 *devf
 
if (pdev && drhd->include_all) {
got_pdev:
-   *bus = pdev->bus->number;
-   *devfn = pdev->devfn;
+   if (bus && devfn) {
+   *bus = pdev->bus->number;
+   *devfn = pdev->devfn;
+   }
goto out;
}
}
@@ -5146,11 +5150,10 @@ static int aux_domain_add_dev(struct dmar_domain 
*domain,
  struct device *dev)
 {
int ret;
-   u8 bus, devfn;
unsigned long flags;
struct intel_iommu *iommu;
 
-   iommu = device_to_iommu(dev, , );
+   iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return -ENODEV;
 
@@ -5236,9 +5239,8 @@ static int prepare_domain_attach_device(struct 
iommu_domain *domain,
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
struct intel_iommu *iommu;
int addr_width;
-   u8 bus, devfn;
 
-   iommu = device_to_iommu(dev, &bus, &devfn);
+   iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return -ENODEV;
 
@@ -5658,9 +5660,8 @@ static bool intel_iommu_capable(enum iommu_cap cap)
 static struct iommu_device *intel_iommu_probe_device(struct device *dev)
 {
struct intel_iommu *iommu;
-   u8 bus, devfn;
 
-   iommu = device_to_iommu(dev, &bus, &devfn);
+   iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return ERR_PTR(-ENODEV);
 
@@ -5673,9 +5674,8 @@ static struct iommu_device 
*intel_iommu_probe_device(struct device *dev)
 static void intel_iommu_release_device(struct device *dev)
 {
struct intel_iommu *iommu;
-   u8 bus, devfn;
 
-   iommu = device_to_iommu(dev, &bus, &devfn);
+   iommu = device_to_iommu(dev, NULL, NULL);
if (!iommu)
return;
 
@@ -5825,37 +5825,14 @@ static struct iommu_group 
*intel_iommu_device_group(struct device *dev)
return generic_device_group(dev);
 }
 
-#ifdef CONFIG_INTEL_IOMMU_SVM
-struct intel_iommu *intel_svm_device_to_iommu(struct device *dev)
-{
-   struct intel_iommu *iommu;
-   u8 bus, devfn;
-
-   if (iommu_dummy(dev)) {
-   dev_warn(dev,
-"No IOMMU translation for device; cannot enable 
SVM\n");
-   return NULL;
-   }
-
-   iommu = device_to_iommu(dev, &bus, &devfn);
-   if ((!iommu)) {
-   dev_err(dev, "No IOMMU for device; cannot enable SVM\n");
-   return NULL;
-   }
-
-   return iommu;
-}
-#endif /* CONFIG_INTEL_IOMMU_SVM */
-
 static int intel_iommu_enable_auxd(struct device *dev)
 {
struct device_domain_info *info;
struct intel_iommu *iommu;
unsigned long flags;
-   u8 bus, devfn;
int ret;
 
-   iommu = device_to_iommu(dev, &bus, &devfn);
+   iommu = device_to_iommu(dev, NULL, 

[PATCH v3 0/4] iommu/vt-d: Add prq report and response support

2020-07-09 Thread Lu Baolu
Hi,

This series adds page request event reporting and response support to
the Intel IOMMU driver. This is necessary when the page requests must
be processed by any component other than the vendor IOMMU driver. For
example, when a guest page table was bound to a PASID through the
iommu_ops->sva_bind_gpasid() api, the page requests should be routed to
the guest, and after the page is served, a response with the result should
be sent back to the device.
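
For reference, a consumer of these reported faults is expected to look
roughly like the sketch below. The handler name and the way the fault is
forwarded are made up for illustration; the registration call is the
existing iommu_register_device_fault_handler() API:

	/* Rough sketch only, not part of this series. */
	static int guest_prq_handler(struct iommu_fault *fault, void *data)
	{
		if (fault->type != IOMMU_FAULT_PAGE_REQ)
			return -EOPNOTSUPP;

		/*
		 * Forward fault->prm (pasid, addr, grpid, private data ...)
		 * to the guest. The guest's answer later comes back through
		 * iommu_page_response(), i.e. the page_response ops added
		 * in patch 4.
		 */
		return 0;
	}

	static int guest_bind_example(struct device *dev)
	{
		return iommu_register_device_fault_handler(dev, guest_prq_handler,
							   NULL);
	}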

Your review comments are very appreciated.

Best regards,
baolu

Change log:
v2->v3:
  - Adress Kevin's review comments

https://lore.kernel.org/linux-iommu/20200706002535.9381-1-baolu...@linux.intel.com/T/#t
  - Set IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID flag

https://lore.kernel.org/linux-iommu/20200706002535.9381-1-baolu...@linux.intel.com/T/#m0190af2f6cf967217e9def6fa0fed4e0fe5a477e

v1->v2:
  - v1 posted at https://lkml.org/lkml/2020/6/27/387
  - Remove unnecessary pci_get_domain_bus_and_slot()
  - Return error when sdev == NULL in intel_svm_page_response()

Lu Baolu (4):
  iommu/vt-d: Refactor device_to_iommu() helper
  iommu/vt-d: Add a helper to get svm and sdev for pasid
  iommu/vt-d: Report page request faults for guest SVA
  iommu/vt-d: Add page response ops support

 drivers/iommu/intel/iommu.c |  56 ++
 drivers/iommu/intel/svm.c   | 332 
 include/linux/intel-iommu.h |   6 +-
 3 files changed, 278 insertions(+), 116 deletions(-)

-- 
2.17.1



[PATCH v3 2/4] iommu/vt-d: Add a helper to get svm and sdev for pasid

2020-07-09 Thread Lu Baolu
There are several places in the code that need to get the pointers of
svm and sdev according to a pasid and device. Add a helper to achieve
this for code consolidation and readability.

Signed-off-by: Lu Baolu 
Reviewed-by: Kevin Tian 
---
 drivers/iommu/intel/svm.c | 121 +-
 1 file changed, 68 insertions(+), 53 deletions(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 25dd74f27252..c23167877b2b 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -228,6 +228,50 @@ static LIST_HEAD(global_svm_list);
list_for_each_entry((sdev), &(svm)->devs, list) \
if ((d) != (sdev)->dev) {} else
 
+static int pasid_to_svm_sdev(struct device *dev, unsigned int pasid,
+struct intel_svm **rsvm,
+struct intel_svm_dev **rsdev)
+{
+   struct intel_svm_dev *d, *sdev = NULL;
+   struct intel_svm *svm;
+
+   /* The caller should hold the pasid_mutex lock */
+   if (WARN_ON(!mutex_is_locked(&pasid_mutex)))
+   return -EINVAL;
+
+   if (pasid == INVALID_IOASID || pasid >= PASID_MAX)
+   return -EINVAL;
+
+   svm = ioasid_find(NULL, pasid, NULL);
+   if (IS_ERR(svm))
+   return PTR_ERR(svm);
+
+   if (!svm)
+   goto out;
+
+   /*
+* If we found svm for the PASID, there must be at least one device
+* bond.
+*/
+   if (WARN_ON(list_empty(&svm->devs)))
+   return -EINVAL;
+
+   rcu_read_lock();
+   list_for_each_entry_rcu(d, &svm->devs, list) {
+   if (d->dev == dev) {
+   sdev = d;
+   break;
+   }
+   }
+   rcu_read_unlock();
+
+out:
+   *rsvm = svm;
+   *rsdev = sdev;
+
+   return 0;
+}
+
 int intel_svm_bind_gpasid(struct iommu_domain *domain, struct device *dev,
  struct iommu_gpasid_bind_data *data)
 {
@@ -261,39 +305,27 @@ int intel_svm_bind_gpasid(struct iommu_domain *domain, 
struct device *dev,
dmar_domain = to_dmar_domain(domain);
 
mutex_lock(&pasid_mutex);
-   svm = ioasid_find(NULL, data->hpasid, NULL);
-   if (IS_ERR(svm)) {
-   ret = PTR_ERR(svm);
+   ret = pasid_to_svm_sdev(dev, data->hpasid, &svm, &sdev);
+   if (ret)
goto out;
-   }
 
-   if (svm) {
+   if (sdev) {
/*
-* If we found svm for the PASID, there must be at
-* least one device bond, otherwise svm should be freed.
+* For devices with aux domains, we should allow
+* multiple bind calls with the same PASID and pdev.
 */
-   if (WARN_ON(list_empty(&svm->devs))) {
-   ret = -EINVAL;
-   goto out;
+   if (iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)) {
+   sdev->users++;
+   } else {
+   dev_warn_ratelimited(dev,
+"Already bound with PASID %u\n",
+svm->pasid);
+   ret = -EBUSY;
}
+   goto out;
+   }
 
-   for_each_svm_dev(sdev, svm, dev) {
-   /*
-* For devices with aux domains, we should allow
-* multiple bind calls with the same PASID and pdev.
-*/
-   if (iommu_dev_feature_enabled(dev,
- IOMMU_DEV_FEAT_AUX)) {
-   sdev->users++;
-   } else {
-   dev_warn_ratelimited(dev,
-"Already bound with PASID 
%u\n",
-svm->pasid);
-   ret = -EBUSY;
-   }
-   goto out;
-   }
-   } else {
+   if (!svm) {
/* We come here when PASID has never been bond to a device. */
svm = kzalloc(sizeof(*svm), GFP_KERNEL);
if (!svm) {
@@ -376,25 +408,17 @@ int intel_svm_unbind_gpasid(struct device *dev, int pasid)
struct intel_iommu *iommu = device_to_iommu(dev, NULL, NULL);
struct intel_svm_dev *sdev;
struct intel_svm *svm;
-   int ret = -EINVAL;
+   int ret;
 
if (WARN_ON(!iommu))
return -EINVAL;
 
mutex_lock(&pasid_mutex);
-   svm = ioasid_find(NULL, pasid, NULL);
-   if (!svm) {
-   ret = -EINVAL;
-   goto out;
-   }
-
-   if (IS_ERR(svm)) {
-   ret = PTR_ERR(svm);
+   ret = pasid_to_svm_sdev(dev, pasid, &svm, &sdev);
+   if (ret)
goto out;
-   }
 
-   for_each_svm_dev(sdev, svm, dev) {
-  

[PATCH v3 4/4] iommu/vt-d: Add page response ops support

2020-07-09 Thread Lu Baolu
After page requests are handled, software must respond to the device
which raised the page request with the result. This is done through
the iommu ops.page_response if the request was reported to outside of
vendor iommu driver through iommu_report_device_fault(). This adds the
VT-d implementation of page_response ops.

Co-developed-by: Jacob Pan 
Signed-off-by: Jacob Pan 
Co-developed-by: Liu Yi L 
Signed-off-by: Liu Yi L 
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/iommu.c |   1 +
 drivers/iommu/intel/svm.c   | 100 
 include/linux/intel-iommu.h |   3 ++
 3 files changed, 104 insertions(+)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 4a6b6960fc32..98390a6d8113 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -6057,6 +6057,7 @@ const struct iommu_ops intel_iommu_ops = {
.sva_bind   = intel_svm_bind,
.sva_unbind = intel_svm_unbind,
.sva_get_pasid  = intel_svm_get_pasid,
+   .page_response  = intel_svm_page_response,
 #endif
 };
 
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index d24e71bac8db..839d2af377b6 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -1082,3 +1082,103 @@ int intel_svm_get_pasid(struct iommu_sva *sva)
 
return pasid;
 }
+
+int intel_svm_page_response(struct device *dev,
+   struct iommu_fault_event *evt,
+   struct iommu_page_response *msg)
+{
+   struct iommu_fault_page_request *prm;
+   struct intel_svm_dev *sdev = NULL;
+   struct intel_svm *svm = NULL;
+   struct intel_iommu *iommu;
+   bool private_present;
+   bool pasid_present;
+   bool last_page;
+   u8 bus, devfn;
+   int ret = 0;
+   u16 sid;
+
+   if (!dev || !dev_is_pci(dev))
+   return -ENODEV;
+
+   iommu = device_to_iommu(dev, , );
+   if (!iommu)
+   return -ENODEV;
+
+   if (!msg || !evt)
+   return -EINVAL;
+
+   mutex_lock(&pasid_mutex);
+
+   prm = &evt->fault.prm;
+   sid = PCI_DEVID(bus, devfn);
+   pasid_present = prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+   private_present = prm->flags & IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA;
+   last_page = prm->flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+
+   if (pasid_present) {
+   if (prm->pasid == 0 || prm->pasid >= PASID_MAX) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   ret = pasid_to_svm_sdev(dev, prm->pasid, &svm, &sdev);
+   if (ret || !sdev) {
+   ret = -ENODEV;
+   goto out;
+   }
+
+   /*
+* For responses from userspace, need to make sure that the
+* pasid has been bound to its mm.
+   */
+   if (svm->flags & SVM_FLAG_GUEST_MODE) {
+   struct mm_struct *mm;
+
+   mm = get_task_mm(current);
+   if (!mm) {
+   ret = -EINVAL;
+   goto out;
+   }
+
+   if (mm != svm->mm) {
+   ret = -ENODEV;
+   mmput(mm);
+   goto out;
+   }
+
+   mmput(mm);
+   }
+   } else {
+   pr_err_ratelimited("Invalid page response: no pasid\n");
+   ret = -EINVAL;
+   goto out;
+   }
+
+   /*
+* Per VT-d spec. v3.0 ch7.7, system software must respond
+* with page group response if private data is present (PDP)
+* or last page in group (LPIG) bit is set. This is an
+* additional VT-d requirement beyond PCI ATS spec.
+*/
+   if (last_page || private_present) {
+   struct qi_desc desc;
+
+   desc.qw0 = QI_PGRP_PASID(prm->pasid) | QI_PGRP_DID(sid) |
+   QI_PGRP_PASID_P(pasid_present) |
+   QI_PGRP_PDP(private_present) |
+   QI_PGRP_RESP_CODE(msg->code) |
+   QI_PGRP_RESP_TYPE;
+   desc.qw1 = QI_PGRP_IDX(prm->grpid) | QI_PGRP_LPIG(last_page);
+   desc.qw2 = 0;
+   desc.qw3 = 0;
+   if (private_present)
+   memcpy(&desc.qw2, prm->private_data,
+  sizeof(prm->private_data));
+
+   qi_submit_sync(iommu, &desc, 1, 0);
+   }
+out:
+   mutex_unlock(&pasid_mutex);
+   return ret;
+}
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index fc2cfc3db6e1..bf6009a344f5 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -741,6 +741,9 @@ struct iommu_sva *intel_svm_bind(struct 

[PATCH v3 3/4] iommu/vt-d: Report page request faults for guest SVA

2020-07-09 Thread Lu Baolu
A pasid might be bound to a page table from a VM guest via the iommu
ops.sva_bind_gpasid. In this case, when a DMA page fault is detected
on the physical IOMMU, we need to inject the page fault request into
the guest. After the guest completes handling the page fault, a page
response needs to be sent back via the iommu ops.page_response().

This adds support to report a page request fault. Any external module
which is interested in handling this fault should register a notifier
with iommu_register_device_fault_handler().

Co-developed-by: Jacob Pan 
Signed-off-by: Jacob Pan 
Co-developed-by: Liu Yi L 
Signed-off-by: Liu Yi L 
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/svm.c | 103 +++---
 1 file changed, 85 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index c23167877b2b..d24e71bac8db 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -815,8 +815,63 @@ static void intel_svm_drain_prq(struct device *dev, int 
pasid)
}
 }
 
+static int prq_to_iommu_prot(struct page_req_dsc *req)
+{
+   int prot = 0;
+
+   if (req->rd_req)
+   prot |= IOMMU_FAULT_PERM_READ;
+   if (req->wr_req)
+   prot |= IOMMU_FAULT_PERM_WRITE;
+   if (req->exe_req)
+   prot |= IOMMU_FAULT_PERM_EXEC;
+   if (req->pm_req)
+   prot |= IOMMU_FAULT_PERM_PRIV;
+
+   return prot;
+}
+
+static int
+intel_svm_prq_report(struct device *dev, struct page_req_dsc *desc)
+{
+   struct iommu_fault_event event;
+
+   /* Fill in event data for device specific processing */
+   memset(&event, 0, sizeof(struct iommu_fault_event));
+   event.fault.type = IOMMU_FAULT_PAGE_REQ;
+   event.fault.prm.addr = desc->addr;
+   event.fault.prm.pasid = desc->pasid;
+   event.fault.prm.grpid = desc->prg_index;
+   event.fault.prm.perm = prq_to_iommu_prot(desc);
+
+   if (!dev || !dev_is_pci(dev))
+   return -ENODEV;
+
+   if (desc->lpig)
+   event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+   if (desc->pasid_present) {
+   event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+   event.fault.prm.flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
+   }
+   if (desc->priv_data_present) {
+   /*
+* Set last page in group bit if private data is present,
+* page response is required as it does for LPIG.
+* iommu_report_device_fault() doesn't understand this vendor
+* specific requirement thus we set last_page as a workaround.
+*/
+   event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+   event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA;
+   memcpy(event.fault.prm.private_data, desc->priv_data,
+  sizeof(desc->priv_data));
+   }
+
+   return iommu_report_device_fault(dev, &event);
+}
+
 static irqreturn_t prq_event_thread(int irq, void *d)
 {
+   struct intel_svm_dev *sdev = NULL;
struct intel_iommu *iommu = d;
struct intel_svm *svm = NULL;
int head, tail, handled = 0;
@@ -828,7 +883,6 @@ static irqreturn_t prq_event_thread(int irq, void *d)
tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
while (head != tail) {
-   struct intel_svm_dev *sdev;
struct vm_area_struct *vma;
struct page_req_dsc *req;
struct qi_desc resp;
@@ -864,6 +918,20 @@ static irqreturn_t prq_event_thread(int irq, void *d)
}
}
 
+   if (!sdev || sdev->sid != req->rid) {
+   struct intel_svm_dev *t;
+
+   sdev = NULL;
+   rcu_read_lock();
+   list_for_each_entry_rcu(t, &svm->devs, list) {
+   if (t->sid == req->rid) {
+   sdev = t;
+   break;
+   }
+   }
+   rcu_read_unlock();
+   }
+
result = QI_RESP_INVALID;
/* Since we're using init_mm.pgd directly, we should never take
 * any faults on kernel addresses. */
@@ -874,6 +942,17 @@ static irqreturn_t prq_event_thread(int irq, void *d)
if (!is_canonical_address(address))
goto bad_req;
 
+   /*
+* If prq is to be handled outside iommu driver via receiver of
+* the fault notifiers, we skip the page response here.
+*/
+   if (svm->flags & SVM_FLAG_GUEST_MODE) {
+   if (sdev && !intel_svm_prq_report(sdev->dev, req))
+   

Re: [f2fs-dev] [PATCH] f2fs: don't skip writeback of quota data

2020-07-09 Thread Chao Yu
On 2020/7/9 13:30, Jaegeuk Kim wrote:
> It doesn't need to bypass flushing quota data in background.

The condition is used to flush quota data in batches to avoid random
small-sized updates. Did you hit any problem here?

Thanks,

> 
> Signed-off-by: Jaegeuk Kim 
> ---
>  fs/f2fs/data.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 44645f4f914b6..72e8b50e588c1 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -3148,7 +3148,7 @@ static int __f2fs_write_data_pages(struct address_space 
> *mapping,
>   if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
>   goto skip_write;
>  
> - if ((S_ISDIR(inode->i_mode) || IS_NOQUOTA(inode)) &&
> + if (S_ISDIR(inode->i_mode) &&
>   wbc->sync_mode == WB_SYNC_NONE &&
>   get_dirty_pages(inode) < nr_pages_to_skip(sbi, DATA) &&
>   f2fs_available_free_memory(sbi, DIRTY_DENTS))
> 


Re: [PATCH 5/6] phy: exynos5-usbdrd: use correct format for structure description

2020-07-09 Thread Marek Szyprowski


On 08.07.2020 15:28, Vinod Koul wrote:
> We get warning with W=1 build:
> drivers/phy/samsung/phy-exynos5-usbdrd.c:211: warning: Function
> parameter or member 'phys' not described in 'exynos5_usbdrd_phy'
> drivers/phy/samsung/phy-exynos5-usbdrd.c:211: warning: Function
> parameter or member 'vbus' not described in 'exynos5_usbdrd_phy'
> drivers/phy/samsung/phy-exynos5-usbdrd.c:211: warning: Function
> parameter or member 'vbus_boost' not described in 'exynos5_usbdrd_phy'
>
> These members are provided with description but format is not quite
> right resulting in above warnings
>
> Cc: Marek Szyprowski 
> Signed-off-by: Vinod Koul 
Acked-by: Marek Szyprowski 
> ---
>   drivers/phy/samsung/phy-exynos5-usbdrd.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/phy/samsung/phy-exynos5-usbdrd.c 
> b/drivers/phy/samsung/phy-exynos5-usbdrd.c
> index eb06ce9f748f..bfb0e8914103 100644
> --- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
> +++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
> @@ -180,14 +180,14 @@ struct exynos5_usbdrd_phy_drvdata {
>* @utmiclk: clock for utmi+ phy
>* @itpclk: clock for ITP generation
>* @drv_data: pointer to SoC level driver data structure
> - * @phys[]: array for 'EXYNOS5_DRDPHYS_NUM' number of PHY
> + * @phys: array for 'EXYNOS5_DRDPHYS_NUM' number of PHY
>*  instances each with its 'phy' and 'phy_cfg'.
>* @extrefclk: frequency select settings when using 'separate
>* reference clocks' for SS and HS operations
>* @ref_clk: reference clock to PHY block from which PHY's
>*   operational clocks are derived
> - * vbus: VBUS regulator for phy
> - * vbus_boost: Boost regulator for VBUS present on few Exynos boards
> + * @vbus: VBUS regulator for phy
> + * @vbus_boost: Boost regulator for VBUS present on few Exynos boards
>*/
>   struct exynos5_usbdrd_phy {
>   struct device *dev;

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



Re: [Proposal] DRM: AMD: Convert logging to drm_* functions.

2020-07-09 Thread Christian König

On 08.07.20 at 18:11, Suraj Upadhyay wrote:

Hi AMD Maintainers,
I plan to convert logging of information, errors and warnings
inside the AMD driver(s) to drm_* functions and macros for logging,
as described by the TODO list in the DRM documentation[1].

I need your approval for the change before sending any patches, to make
sure that this is a good idea and that the patches will be merged.

The patches will essentially convert all the dev_info(), dev_warn(),
dev_err() and dev_err_once() to drm_info(), drm_warn(), drm_err() and
drm_err_once() respectively.


Well, to be honest I would rather like to see the conversion done in the 
other direction.


I think the drm_* functions are just an unnecessary extra layer on top 
of the core kernel functions and should probably be removed sooner or 
later because of midlayering.


Regards,
Christian.



Thank You,

Suraj Upadhyay.

[1] 
https://dri.freedesktop.org/docs/drm/gpu/todo.html#convert-logging-to-drm-functions-with-drm-device-paramater





Re: [PATCH v4 06/11] mm/migrate: make a standard migration target allocation function

2020-07-09 Thread Joonsoo Kim
On Wed, 8 Jul 2020 at 04:00, Michal Hocko wrote:
>
> On Tue 07-07-20 16:49:51, Vlastimil Babka wrote:
> > On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> > > From: Joonsoo Kim 
> > >
> > > There are some similar functions for migration target allocation.  Since
> > > there is no fundamental difference, it's better to keep just one rather
> > > than keeping all variants.  This patch implements base migration target
> > > allocation function.  In the following patches, variants will be converted
> > > to use this function.
> > >
> > > Changes should be mechanical but there are some differences. First, some
> > > callers' nodemask is assigned to NULL since a NULL nodemask will be
> > > considered as all available nodes, that is, &node_states[N_MEMORY].
> > > Second, for hugetlb page allocation, gfp_mask is ORed since a user could
> > > provide a gfp_mask from now on.
> >
> > I think that's wrong. See how htlb_alloc_mask() determines between
> > GFP_HIGHUSER_MOVABLE and GFP_HIGHUSER, but then you OR it with 
> > __GFP_MOVABLE so
> > it's always GFP_HIGHUSER_MOVABLE.

Indeed.

> Right you are! Not that it would make any real difference because only
> migrateable hugetlb pages will get __GFP_MOVABLE and so we shouldn't
> really end up here for !movable pages in the first place (not sure about
> soft offlining at this moment). But yeah it would be simply better to
> override gfp mask for hugetlb which we have been doing anyway.

Overriding the gfp mask doesn't work since some users will call this function with
__GFP_THISNODE.  I will use hugepage_movable_supported() here and
clear __GFP_MOVABLE if needed.

Thanks.

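For illustration, a minimal sketch of the hugetlb branch described above, assuming
the gfp_mask parameter that this series adds to alloc_huge_page_nodemask() and the
nid/nodemask variables of new_page_nodemask(); the exact placement is a guess, not
the final patch:

	if (PageHuge(page)) {
		struct hstate *h = page_hstate(compound_head(page));
		gfp_t gfp = gfp_mask;

		/* don't force movable placement for hstates that cannot migrate */
		if (!hugepage_movable_supported(h))
			gfp &= ~__GFP_MOVABLE;

		return alloc_huge_page_nodemask(h, nid, nodemask, gfp);
	}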


RE: [PATCH v3 06/14] vfio/type1: Add VFIO_IOMMU_PASID_REQUEST (alloc/free)

2020-07-09 Thread Liu, Yi L
Hi Alex,

After more thinking, it looks like adding an r-b tree is still not enough to
solve the potential problem of freeing a range of PASIDs in one ioctl. If the
caller gives [0, MAX_UINT] in the free request, the kernel still has to loop
over all the PASIDs and search the r-b tree. Even if VFIO can track the
smallest/largest allocated PASID and limit the free range to an accurate
range, it is still not efficient. For example, if the user has allocated two
PASIDs (1 and 999) and gives the [0, MAX_UINT] range in the free request, VFIO
will limit the free range to [1, 999], but still needs to loop over PASIDs
1 - 999 and search the r-b tree.

So I'm wondering whether we can fall back to the prior proposal which only
frees one PASID per free request. What is your opinion?

https://lore.kernel.org/linux-iommu/20200416084031.7266a...@w520.home/

Regards,
Yi Liu
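
For reference, a rough sketch of the single-PASID fallback proposed above; the
helper name and locking are illustrative assumptions, not the actual uAPI or
implementation:

	/* free exactly one PASID per free request instead of a [min, max] range */
	static int vfio_iommu_type1_pasid_free_one(struct vfio_iommu *iommu,
						   ioasid_t pasid)
	{
		mutex_lock(&iommu->lock);
		/* assumes the ioasid layer validates that @pasid belongs to this set */
		ioasid_free(pasid);
		mutex_unlock(&iommu->lock);

		return 0;
	}

This keeps the ioctl O(1) per call, which is the point of the fallback: no loop
over an arbitrary user-supplied range and no r-b tree walk.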

> From: Liu, Yi L 
> Sent: Thursday, July 9, 2020 10:26 AM
> 
> Hi Kevin,
> 
> > From: Tian, Kevin 
> > Sent: Thursday, July 9, 2020 10:18 AM
> >
> > > From: Liu, Yi L 
> > > Sent: Thursday, July 9, 2020 10:08 AM
> > >
> > > Hi Kevin,
> > >
> > > > From: Tian, Kevin 
> > > > Sent: Thursday, July 9, 2020 9:57 AM
> > > >
> > > > > From: Liu, Yi L 
> > > > > Sent: Thursday, July 9, 2020 8:32 AM
> > > > >
> > > > > Hi Alex,
> > > > >
> > > > > > Alex Williamson 
> > > > > > Sent: Thursday, July 9, 2020 3:55 AM
> > > > > >
> > > > > > On Wed, 8 Jul 2020 08:16:16 + "Liu, Yi L"
> > > > > >  wrote:
> > > > > >
> > > > > > > Hi Alex,
> > > > > > >
> > > > > > > > From: Liu, Yi L < yi.l@intel.com>
> > > > > > > > Sent: Friday, July 3, 2020 2:28 PM
> > > > > > > >
> > > > > > > > Hi Alex,
> > > > > > > >
> > > > > > > > > From: Alex Williamson 
> > > > > > > > > Sent: Friday, July 3, 2020 5:19 AM
> > > > > > > > >
> > > > > > > > > On Wed, 24 Jun 2020 01:55:19 -0700 Liu Yi L
> > > > > > > > >  wrote:
> > > > > > > > >
> > > > > > > > > > This patch allows user space to request PASID
> > > > > > > > > > allocation/free,
> > > e.g.
> > > > > > > > > > when serving the request from the guest.
> > > > > > > > > >
> > > > > > > > > > PASIDs that are not freed by userspace are
> > > > > > > > > > automatically freed
> > > > > when
> > > > > > > > > > the IOASID set is destroyed when process exits.
> > > > > > > [...]
> > > > > > > > > > +static int vfio_iommu_type1_pasid_request(struct
> > > > > > > > > > +vfio_iommu
> > > > > *iommu,
> > > > > > > > > > + unsigned long arg) {
> > > > > > > > > > +   struct vfio_iommu_type1_pasid_request req;
> > > > > > > > > > +   unsigned long minsz;
> > > > > > > > > > +
> > > > > > > > > > +   minsz = offsetofend(struct
> > vfio_iommu_type1_pasid_request,
> > > > > > range);
> > > > > > > > > > +
> > > > > > > > > > +   if (copy_from_user(, (void __user *)arg, minsz))
> > > > > > > > > > +   return -EFAULT;
> > > > > > > > > > +
> > > > > > > > > > +   if (req.argsz < minsz || (req.flags &
> > > > > > ~VFIO_PASID_REQUEST_MASK))
> > > > > > > > > > +   return -EINVAL;
> > > > > > > > > > +
> > > > > > > > > > +   if (req.range.min > req.range.max)
> > > > > > > > >
> > > > > > > > > Is it exploitable that a user can spin the kernel for a
> > > > > > > > > long time in the case of a free by calling this with [0,
> > > > > > > > > MAX_UINT] regardless of their
> > > > > > actual
> > > > > > > > allocations?
> > > > > > > >
> > > > > > > > IOASID can ensure that user can only free the PASIDs
> > > > > > > > allocated to the
> > > > > user.
> > > > > > but
> > > > > > > > it's true, kernel needs to loop all the PASIDs within the
> > > > > > > > range provided by user.
> > > > > > it
> > > > > > > > may take a long time. is there anything we can do? one
> > > > > > > > thing may limit
> > > > > the
> > > > > > range
> > > > > > > > provided by user?
> > > > > > >
> > > > > > > thought about it more, we have per-VM pasid quota (say
> > > > > > > 1000), so even if user passed down [0, MAX_UNIT], kernel
> > > > > > > will only loop the
> > > > > > > 1000 pasids at most. do you think we still need to do something 
> > > > > > > on it?
> > > > > >
> > > > > > How do you figure that?  vfio_iommu_type1_pasid_request()
> > > > > > accepts the user's min/max so long as (max > min) and passes
> > > > > > that to vfio_iommu_type1_pasid_free(), then to
> > > > > > vfio_pasid_free_range() which loops as:
> > > > > >
> > > > > > ioasid_t pasid = min;
> > > > > > for (; pasid <= max; pasid++)
> > > > > > ioasid_free(pasid);
> > > > > >
> > > > > > A user might only be able to allocate 1000 pasids, but
> > > > > > apparently they can ask to free all they want.
> > > > > >
> > > > > > It's also not obvious to me that calling ioasid_free() is only
> > > > > > allowing the user to free their own passid.  Does it?  It
> > > > > > would be a pretty
> > > >
> > > > Agree. I thought ioasid_free should at least carry a token since
> > > > the user
> > > space is
> > > > only allowed to manage PASIDs in its own 

Re: [PATCH v4 05/11] mm/migrate: clear __GFP_RECLAIM for THP allocation for migration

2020-07-09 Thread Joonsoo Kim
On Tue, 7 Jul 2020 at 21:17, Vlastimil Babka wrote:
>
> On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> > From: Joonsoo Kim 
> >
> > In mm/migrate.c, THP allocation for migration is called with the provided
> > gfp_mask | GFP_TRANSHUGE. This gfp_mask contains __GFP_RECLAIM and it
> > would conflict with the intention of GFP_TRANSHUGE.
> >
> > GFP_TRANSHUGE/GFP_TRANSHUGE_LIGHT is introduced to control the reclaim
> > behaviour by well defined manner since overhead of THP allocation is
> > quite large and the whole system could suffer from it. So, they deal
> > with __GFP_RECLAIM mask deliberately. If gfp_mask contains __GFP_RECLAIM
> > and uses gfp_mask | GFP_TRANSHUGE(_LIGHT) for THP allocation, it means
> > that it breaks the purpose of the GFP_TRANSHUGE(_LIGHT).
> >
> > This patch fixes this situation by clearing __GFP_RECLAIM in provided
> > gfp_mask. Note that there are some other THP allocations for migration
> > and they just uses GFP_TRANSHUGE(_LIGHT) directly. This patch would make
> > all THP allocation for migration consistent.
> >
> > Signed-off-by: Joonsoo Kim 
> > ---
> >  mm/migrate.c | 5 +
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 02b31fe..ecd7615 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -1547,6 +1547,11 @@ struct page *new_page_nodemask(struct page *page,
> >   }
> >
> >   if (PageTransHuge(page)) {
> > + /*
> > +  * clear __GFP_RECLAIM since GFP_TRANSHUGE is the gfp_mask
> > +  * that chooses the reclaim masks deliberately.
> > +  */
> > + gfp_mask &= ~__GFP_RECLAIM;
> >   gfp_mask |= GFP_TRANSHUGE;
>
> In addition to what Michal said...
>
> The mask is not passed to this function, so I would just redefine it, as is 
> done
> in the hugetlb case. We probably don't even need the __GFP_RETRY_MAYFAIL for 
> the
> THP case as it's just there to prevent OOM kill (per commit 0f55685627d6d ) 
> and
> the costly order of THP is enough for that.

As I said in another reply, the provided __GFP_THISNODE should be handled,
so just redefining it would not work.

Thanks.
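
A minimal sketch of the compromise being discussed, using the names already in
mm/migrate.c; the exact form is an assumption: redefine the THP mask instead of
OR-ing into the caller's mask, but carry the caller's __GFP_THISNODE over so
node-bound callers keep their restriction.

	if (PageTransHuge(page)) {
		/*
		 * GFP_TRANSHUGE already encodes the intended reclaim
		 * behaviour; only __GFP_THISNODE is taken from the caller.
		 */
		gfp_mask = GFP_TRANSHUGE | (gfp_mask & __GFP_THISNODE);
		order = HPAGE_PMD_ORDER;
	}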


Re: [PATCH v4] mm/zswap: move to use crypto_acomp API for hardware acceleration

2020-07-09 Thread Sebastian Andrzej Siewior
On 2020-07-08 21:45:47 [+], Song Bao Hua (Barry Song) wrote:
> > On 2020-07-08 00:52:10 [+1200], Barry Song wrote:
> > > @@ -127,9 +129,17 @@
> > > +struct crypto_acomp_ctx {
> > > + struct crypto_acomp *acomp;
> > > + struct acomp_req *req;
> > > + struct crypto_wait wait;
> > > + u8 *dstmem;
> > > + struct mutex mutex;
> > > +};
> > …
> > > @@ -1074,12 +1138,32 @@ static int zswap_frontswap_store(unsigned
> > type, pgoff_t offset,
> > >   }
> > >
> > >   /* compress */
> > > - dst = get_cpu_var(zswap_dstmem);
> > > - tfm = *get_cpu_ptr(entry->pool->tfm);
> > > - src = kmap_atomic(page);
> > > - ret = crypto_comp_compress(tfm, src, PAGE_SIZE, dst, &dlen);
> > > - kunmap_atomic(src);
> > > - put_cpu_ptr(entry->pool->tfm);
> > > + acomp_ctx = *this_cpu_ptr(entry->pool->acomp_ctx);
> > > +
> > > + mutex_lock(&acomp_ctx->mutex);
> > > +
> > > + src = kmap(page);
> > > + dst = acomp_ctx->dstmem;
> > 
> > that mutex is per-CPU, per-context. The dstmem pointer is per-CPU. So if
> > I read this right, you can get preempted after crypto_wait_req() and
> > another context in this CPU writes its data to the same dstmem and then…
> > 
> 
> This isn't true. Another thread in this cpu will be blocked by the mutex.
> It is impossible for two threads to write the same dstmem.
> If thread1 ran on cpu1, it held cpu1's mutex; if another thread wants to run 
> on cpu1, it is blocked.
> If thread1 ran on cpu1 first, it held cpu1's mutex, then it migrated to cpu2 
> (with very rare chance)
>   a. if another thread wants to run on cpu1, it is blocked;

How is it blocked? That "struct crypto_acomp_ctx" is
"this_cpu_ptr(entry->pool->acomp_ctx)" - which is per-CPU of a pool,
of which you can have multiple. But `dstmem' you have only one per CPU,
no matter how many pools you have.
So pool1 on CPU1 uses the same `dstmem' as pool2 on CPU1. But pool1 and
pool2 on CPU1 use a different mutex for protection of this `dstmem'.
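
One possible way to close that hole, sketched purely as an assumption rather
than as the actual fix: give each per-CPU, per-pool context its own scratch
buffer, so the per-context mutex really covers the memory it guards.

	/* hypothetical cpuhp "prepare" step for one pool's per-CPU context */
	static int zswap_acomp_ctx_alloc_dstmem(struct crypto_acomp_ctx *acomp_ctx)
	{
		/* two pages, matching what the shared zswap_dstmem provides today */
		acomp_ctx->dstmem = (u8 *)__get_free_pages(GFP_KERNEL, 1);
		if (!acomp_ctx->dstmem)
			return -ENOMEM;

		mutex_init(&acomp_ctx->mutex);
		return 0;
	}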

> Thanks
> Barry

Sebastian


Re:[PATCH] arm64/module-plts: Consider the special case where plt_max_entries is 0

2020-07-09 Thread Richard
On Thu, 9 Jul 2020 at 09:50, 彭浩(Richard)  wrote:
>> >Apparently, you are hitting a R_AARCH64_JUMP26 or R_AARCH64_CALL26
>> >relocation that operates on a b or bl instruction that is more than
>> >128 megabytes away from its target.
>> >
>> My understanding is that a module that calls functions that are not part of 
>> the module will use PLT.
>> Plt_max_entries =0 May occur if a module does not depend on other module 
>> functions.
>>
>
>A PLT slot is allocated for each b or bl instruction that refers to a
>symbol that lives in a different section, either of the same module
> (e.g., bl in .init calling into .text), of another module, or of the
>core kernel.
>
>I don't see how you end up with plt_max_entries in this case, though.
if a module does not depend on other module functions, PLT entries in the 
module is equal to 0.
If this is the case I don't think I need to do anything, just return 0.What do 
you think should be 
done about this situation? Any Suggestions?
Thanks.
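
For clarity, a sketch of the early-out that seems to be meant here, assuming it
sits next to the existing WARN_ON() check in arch/arm64/kernel/module-plts.c;
this is a guess at the patch, not its actual content:

	/* pltsec is the struct mod_plt_sec chosen for this relocation section */
	if (!pltsec->plt_max_entries)
		return 0;	/* no PLT slots were reserved for this module */

Whether returning 0 is enough still depends on the question quoted above: if a
b/bl relocation genuinely needs a PLT slot but none was counted in
module_frob_arch_sections(), bailing out only hides the accounting bug.
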
>Are you sure you have CONFIG_RANDOMIZE_BASE enabled?
>
I see this warning with both CONFIG_RANDOMIZE_BASE=y and CONFIG_RANDOMIZE_BASE=n
(two servers, with different kernel versions).

>> >In module_frob_arch_sections(), we count all such relocations that
>> >point to other sections, and allocate a PLT slot for each (and update
>> >plt_max_entries) accordingly. So this means that the relocation in
>> >question was disregarded, and this could happen for only two reasons:
>> >- the branch instruction and its target are both in the same section,
>> >in which case this section is *really* large,
>> >- CONFIG_RANDOMIZE_BASE is disabled, but you are still ending up in a
>> >situation where the modules are really far away from the core kernel
>> >or from other modules.
>> >
>> >Do you have a lot of [large] modules loaded when this happens?
>> I don’t think I have [large] modules.  I'll trace which module caused this 
>> warning.
>
>Yes please.
I can't collect debug output until no one else is using the server.


Re: [PATCH] bfq: fix blkio cgroup leakage

2020-07-09 Thread Paolo Valente



> On 8 Jul 2020, at 19:48, Dmitry Monakhov wrote:
> 
> Paolo Valente  writes:
> 
>> Hi,
>> sorry for the delay.  The commit you propose to drop fix the issues
>> reported in [1].
>> 
>> Such a commit does introduce the leak that you report (thank you for
>> spotting it).  Yet, according to the threads mentioned in [1],
>> dropping that commit would take us back to those issues.
>> 
>> Maybe the solution is to fix the unbalance that you spotted?
> I'm not quite sure that I understand which bug was addressed by commit 
> db37a34c563b.
> AFAIU both bugs mentioned in the original patchset were fixed by:
> 478de3380 ("block, bfq: deschedule empty bfq_queues not referred by any 
> proces")
> f718b0932 ("block, bfq: do not plug I/O for bfq_queues with no proc refs")
> 
> So I review commit db37a34c563b as an independent one.
> It introduces an extra reference for bfq_groups via bfqg_and_blkg_get(),
> but do we actually need it here?
> 
> #IF CONFIG_BFQ_GROUP_IOSCHED is enabled:
> bfqd->root_group is held by bfqd from bfq_init_queue()
> other bfq_queue objects are owned by corresponding blkcg from bfq_pd_alloc()
> So bfq_queue can not disappear under us.
> 

You are right, but incomplete.  No extra ref is needed for an entity
that represents a bfq_queue.  And this consideration misled me before
I realized that that commit was needed.  The problem is that an entity
may also represent a group of entities.  In that case no reference is
taken through any bfq_queue.  The commit you want to remove takes this
missing reference.

Paolo
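
For readers following along, this is roughly the shape of the hunk being
defended here: the reference is taken only for entities that back a group,
because queue entities are already pinned through their bfq_queue. Treat it as
a sketch reconstructed from the description above, not as the verbatim commit.

	static void bfq_get_entity(struct bfq_entity *entity)
	{
		struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);

		if (bfqq) {
			/* entity backs a queue: the queue reference pins it */
			bfqq->ref++;
		} else {
			/* entity backs a group: take the group/blkg reference */
			bfqg_and_blkg_get(container_of(entity, struct bfq_group,
						       entity));
		}
	}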

> #IF CONFIG_BFQ_GROUP_IOSCHED is disabled:
> we have only one bfqd->root_group object which is allocated from 
> bfq_create_group_hierarchy()
> and bfqg_and_blkg_get() bfqg_and_blkg_put() are noop
> 
> To sum up: in both cases the extra reference is not required, so I continue to
> insist that we should revert commit db37a34c563b because it tries to
> solve a non-existing issue, but introduces a real one.
> 
> Please correct me if I'm wrong.
>> 
>> I'll check it ASAP, unless you do it before me.
>> 
>> Thanks,
>> Paolo
>> 
>> [1] https://lkml.org/lkml/2020/1/31/94
>> 
>>> Il giorno 2 lug 2020, alle ore 12:57, Dmitry Monakhov  
>>> ha scritto:
>>> 
>>> commit db37a34c563b ("block, bfq: get a ref to a group when adding it to a 
>>> service tree")
>>> introduces a leak for bfq_group and blkcg_gq objects because of a get/put
>>> imbalance. See the trace below:
>>> -> blkg_alloc
>>>  -> bfq_pq_alloc
>>>-> bfqg_get (+1)
>>> ->bfq_activate_bfqq
>>> ->bfq_activate_requeue_entity
>>>   -> __bfq_activate_entity
>>>  ->bfq_get_entity
>>>  ->bfqg_and_blkg_get (+1)  <=== : Note1
>>> ->bfq_del_bfqq_busy
>>> ->bfq_deactivate_entity+0x53/0xc0 [bfq]
>>>   ->__bfq_deactivate_entity+0x1b8/0x210 [bfq]
>>> -> bfq_forget_entity(is_in_service = true)
>>>  entity->on_st_or_in_serv = false   <=== :Note2
>>>  if (is_in_service)
>>>  return;  ==> do not touch reference
>>> -> blkcg_css_offline
>>> -> blkcg_destroy_blkgs
>>> -> blkg_destroy
>>>  -> bfq_pd_offline
>>>   -> __bfq_deactivate_entity
>>>if (!entity->on_st_or_in_serv) /* true, because (Note2)
>>> return false;
>>> -> bfq_pd_free
>>> -> bfqg_put() (-1, but bfqg->ref == 2) because of (Note2)
>>> So bfq_group and blkcg_gq  will leak forever, see test-case below.
>>> In fact bfq_group object reference counting is quite different
>>> from bfq_queue. bfq_group objects are referenced by blkcg_gq via the
>>> blkg_policy_data pointer, so neither blkg_get() nor bfqg_get() is
>>> required here.
>>> 
>>> 
>>> This patch drops commit db37a34c563b ("block, bfq: get a ref to a group when 
>>> adding it to a service tree")
>>> and adds a corresponding comment.
>>> 
>>> ##TESTCASE_BEGIN:
>>> #!/bin/bash
>>> 
>>> max_iters=${1:-100}
>>> #prep cgroup mounts
>>> mount -t tmpfs cgroup_root /sys/fs/cgroup
>>> mkdir /sys/fs/cgroup/blkio
>>> mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
>>> 
>>> # Prepare blkdev
>>> grep blkio /proc/cgroups
>>> truncate -s 1M img
>>> losetup /dev/loop0 img
>>> echo bfq > /sys/block/loop0/queue/scheduler
>>> 
>>> grep blkio /proc/cgroups
>>> for ((i=0;i<$max_iters;i++))
>>> do
>>>   mkdir -p /sys/fs/cgroup/blkio/a
>>>   echo 0 > /sys/fs/cgroup/blkio/a/cgroup.procs
>>>   dd if=/dev/loop0 bs=4k count=1 of=/dev/null iflag=direct 2> /dev/null
>>>   echo 0 > /sys/fs/cgroup/blkio/cgroup.procs
>>>   rmdir /sys/fs/cgroup/blkio/a
>>>   grep blkio /proc/cgroups
>>> done
>>> ##TESTCASE_END:
>>> 
>>> Signed-off-by: Dmitry Monakhov 
>>> ---
>>> block/bfq-cgroup.c  |  2 +-
>>> block/bfq-iosched.h |  1 -
>>> block/bfq-wf2q.c| 15 +--
>>> 3 files changed, 6 insertions(+), 12 deletions(-)
>>> 
>>> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
>>> index 68882b9..b791e20 100644
>>> --- a/block/bfq-cgroup.c
>>> +++ b/block/bfq-cgroup.c
>>> @@ -332,7 +332,7 @@ static void bfqg_put(struct bfq_group *bfqg)
>>> kfree(bfqg);
>>> }
>>> 
>>> -void bfqg_and_blkg_get(struct bfq_group *bfqg)

Re: [PATCH] Replace HTTP links with HTTPS ones: PWM SUBSYSTEM

2020-07-09 Thread Uwe Kleine-König
On Wed, Jul 08, 2020 at 07:59:24PM +0200, Alexander A. Klimov wrote:
> Rationale:
> Reduces attack surface on kernel devs opening the links for MITM
> as HTTPS traffic is much harder to manipulate.
> 
> Deterministic algorithm:
> For each file:
>   If not .svg:
> For each line:
>   If doesn't contain `\bxmlns\b`:
> For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
> If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
> If both the HTTP and HTTPS versions
> return 200 OK and serve the same content:
>   Replace HTTP with HTTPS.
> 
> Signed-off-by: Alexander A. Klimov 

LGTM:

Acked-by: Uwe Kleine-König 

Thanks
Uwe

-- 
Pengutronix e.K.   | Uwe Kleine-König|
Industrial Linux Solutions | https://www.pengutronix.de/ |


signature.asc
Description: PGP signature


Re: [PATCH] scsi: fcoe: add missed kfree() in an error path

2020-07-09 Thread Markus Elfring
>>> fcoe_fdmi_info() misses to call kfree() in an error path.
>>> Add the missed function call to fix it.
>>
>> I suggest to use an additional jump target for the completion
>> of the desired exception handling.
>>
>>
>> …
>>> +++ b/drivers/scsi/fcoe/fcoe.c
>>> @@ -830,6 +830,7 @@ static void fcoe_fdmi_info(struct fc_lport *lport, 
>>> struct net_device *netdev)
>>>   if (rc) {
>>>   printk(KERN_INFO "fcoe: Failed to retrieve FDMI "
>>>   "information from netdev.\n");
>>> +    kfree(fdmi);
>>>   return;
>>>   }
>>
>> -    return;
>> +    goto free_fdmi;
>>
>>
>> How do you think about to apply any further coding style adjustments?
>
> The local variable "fdmi" is invisible to the function.

I have difficulties understanding this information.
The function call “kfree(fdmi)” is already used at the end of this if branch.
Thus I propose to add a label there.
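
A sketch of that variant, with a hypothetical label name, so the error branch
reuses the kfree() at the end of fcoe_fdmi_info() instead of duplicating it:

	if (rc) {
		printk(KERN_INFO "fcoe: Failed to retrieve FDMI "
				"information from netdev.\n");
		goto free_fdmi;
	}
	/* ... rest of fcoe_fdmi_info() ... */
free_fdmi:
	kfree(fdmi);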

Do you notice any additional improvement possibilities for this software module?

Regards,
Markus


Re: [Ksummit-discuss] [PATCH v3] CodingStyle: Inclusive Terminology

2020-07-09 Thread Daniel Vetter
On Wed, Jul 8, 2020 at 8:30 PM Dan Williams  wrote:
>
> Linux maintains a coding-style and its own idiomatic set of terminology.
> Update the style guidelines to recommend replacements for the terms
> master/slave and blacklist/whitelist.
>
> Link: 
> http://lore.kernel.org/r/159389297140.2210796.13590142254668787525.st...@dwillia2-desk3.amr.corp.intel.com
> Acked-by: Randy Dunlap 
> Acked-by: Dave Airlie 
> Acked-by: SeongJae Park 
> Acked-by: Christian Brauner 
> Acked-by: James Bottomley 
> Reviewed-by: Mark Brown 
> Signed-off-by: Theodore Ts'o 
> Signed-off-by: Shuah Khan 
> Signed-off-by: Dan Carpenter 
> Signed-off-by: Kees Cook 
> Signed-off-by: Olof Johansson 
> Signed-off-by: Jonathan Corbet 
> Signed-off-by: Chris Mason 
> Signed-off-by: Greg Kroah-Hartman 
> Signed-off-by: Dan Williams 

Replied to the old version, once more here so it's not lost.

Acked-by: Daniel Vetter 

> ---
> Changes since v2 [1]:
> - Pick up missed sign-offs and acks from Jon, Shuah, and Christian
>   (sorry about missing those earlier).
>
> - Reformat the replacement list to make it easier to read.
>
> - Add 'controller' as a suggested replacement (Kees and Mark)
>
> - Fix up the paired term for 'performer' to be 'director' (Kees)
>
> - Collect some new acks, reviewed-by's, and sign-offs for v2.
>
> - Fix up Chris's email
>
> [1]: 
> http://lore.kernel.org/r/159419296487.2464622.863943877093636532.st...@dwillia2-desk3.amr.corp.intel.com
>
>
>  Documentation/process/coding-style.rst |   20 
>  1 file changed, 20 insertions(+)
>
> diff --git a/Documentation/process/coding-style.rst 
> b/Documentation/process/coding-style.rst
> index 2657a55c6f12..1bee6f8affdb 100644
> --- a/Documentation/process/coding-style.rst
> +++ b/Documentation/process/coding-style.rst
> @@ -319,6 +319,26 @@ If you are afraid to mix up your local variable names, 
> you have another
>  problem, which is called the function-growth-hormone-imbalance syndrome.
>  See chapter 6 (Functions).
>
> +For symbol names and documentation, avoid introducing new usage of
> +'master / slave' (or 'slave' independent of 'master') and 'blacklist /
> +whitelist'.
> +
> +Recommended replacements for 'master / slave' are:
> +'{primary,main} / {secondary,replica,subordinate}'
> +'{initiator,requester} / {target,responder}'
> +'{controller,host} / {device,worker,proxy}'
> +'leader / follower'
> +'director / performer'
> +
> +Recommended replacements for 'blacklist/whitelist' are:
> +'denylist / allowlist'
> +'blocklist / passlist'
> +
> +Exceptions for introducing new usage is to maintain a userspace ABI/API,
> +or when updating code for an existing (as of 2020) hardware or protocol
> +specification that mandates those terms. For new specifications
> +translate specification usage of the terminology to the kernel coding
> +standard where possible.
>
>  5) Typedefs
>  ---
>
> ___
> Ksummit-discuss mailing list
> ksummit-disc...@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/ksummit-discuss



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH v1] platform/chrome: cros_ec_debugfs: conditionally create uptime node

2020-07-09 Thread Enric Balletbo i Serra
Hi Eizan,

Thank you for your patch

On 8/7/20 6:53, Eizan Miyamoto wrote:
> Before creating an 'uptime' node in debugfs, this change adds a check to
> see if an EC_CMD_GET_UPTIME_INFO command can be successfully run.
> 
> If the uptime node is created, userspace programs may periodically poll
> it (e.g., timberslide), causing commands to be sent to the EC each time.
> If the EC doesn't support EC_CMD_GET_UPTIME_INFO, an error will be
> emitted in the EC console, producing noise.
> 

A similar patch with the same purpose sent by Gwendal was already accepted and
queued for 5.9. See [1].


Thanks,
 Enric

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/chrome-platform/linux.git/commit/?h=for-next=d378cdd0113878e3860f954d16dd3e91defb1492



> Signed-off-by: Eizan Miyamoto 
> ---
> 
>  drivers/platform/chrome/cros_ec_debugfs.c | 35 +--
>  1 file changed, 26 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/platform/chrome/cros_ec_debugfs.c 
> b/drivers/platform/chrome/cros_ec_debugfs.c
> index ecfada00e6c51..8708fe12f8ca8 100644
> --- a/drivers/platform/chrome/cros_ec_debugfs.c
> +++ b/drivers/platform/chrome/cros_ec_debugfs.c
> @@ -242,17 +242,14 @@ static ssize_t cros_ec_pdinfo_read(struct file *file,
>  read_buf, p - read_buf);
>  }
>  
> -static ssize_t cros_ec_uptime_read(struct file *file, char __user *user_buf,
> -size_t count, loff_t *ppos)
> +static int cros_ec_get_uptime(struct cros_ec_device *ec_dev,
> +   uint32_t *uptime)
>  {
> - struct cros_ec_debugfs *debug_info = file->private_data;
> - struct cros_ec_device *ec_dev = debug_info->ec->ec_dev;
>   struct {
>   struct cros_ec_command cmd;
>   struct ec_response_uptime_info resp;
>   } __packed msg = {};
>   struct ec_response_uptime_info *resp;
> - char read_buf[32];
>   int ret;
>  
>   resp = (struct ec_response_uptime_info *)
> @@ -264,8 +261,24 @@ static ssize_t cros_ec_uptime_read(struct file *file, 
> char __user *user_buf,
>   if (ret < 0)
>   return ret;
>  
> - ret = scnprintf(read_buf, sizeof(read_buf), "%u\n",
> - resp->time_since_ec_boot_ms);
> + *uptime = resp->time_since_ec_boot_ms;
> + return 0;
> +}
> +
> +static ssize_t cros_ec_uptime_read(struct file *file, char __user *user_buf,
> +size_t count, loff_t *ppos)
> +{
> + struct cros_ec_debugfs *debug_info = file->private_data;
> + struct cros_ec_device *ec_dev = debug_info->ec->ec_dev;
> + char read_buf[32];
> + int ret;
> + uint32_t uptime;
> +
> + ret = cros_ec_get_uptime(ec_dev, &uptime);
> + if (ret < 0)
> + return ret;
> +
> + ret = scnprintf(read_buf, sizeof(read_buf), "%u\n", uptime);
>  
>   return simple_read_from_buffer(user_buf, count, ppos, read_buf, ret);
>  }
> @@ -425,6 +438,7 @@ static int cros_ec_debugfs_probe(struct platform_device 
> *pd)
>   const char *name = ec_platform->ec_name;
>   struct cros_ec_debugfs *debug_info;
>   int ret;
> + uint32_t uptime;
>  
>   debug_info = devm_kzalloc(ec->dev, sizeof(*debug_info), GFP_KERNEL);
>   if (!debug_info)
> @@ -444,8 +458,11 @@ static int cros_ec_debugfs_probe(struct platform_device 
> *pd)
>   debugfs_create_file("pdinfo", 0444, debug_info->dir, debug_info,
> &cros_ec_pdinfo_fops);
>  
> - debugfs_create_file("uptime", 0444, debug_info->dir, debug_info,
> - &cros_ec_uptime_fops);
> + if (cros_ec_get_uptime(debug_info->ec->ec_dev, &uptime) >= 0)
> + debugfs_create_file("uptime", 0444, debug_info->dir, debug_info,
> + &cros_ec_uptime_fops);
> + else
> + dev_dbg(ec->dev, "EC does not provide uptime");
>  
>   debugfs_create_x32("last_resume_result", 0444, debug_info->dir,
> &ec->ec_dev->last_resume_result);
> 


[PATCH] PCI: Replace kmalloc with kzalloc in the comment/message

2020-07-09 Thread Yi Wang
From: Liao Pingfang 

Use kzalloc instead of kmalloc in the comment/message according to
the previous kzalloc() call.

Signed-off-by: Liao Pingfang 
Signed-off-by: Yi Wang 
---
 drivers/pci/hotplug/ibmphp_pci.c | 2 +-
 drivers/pci/setup-bus.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/hotplug/ibmphp_pci.c b/drivers/pci/hotplug/ibmphp_pci.c
index e22d023..2d36992 100644
--- a/drivers/pci/hotplug/ibmphp_pci.c
+++ b/drivers/pci/hotplug/ibmphp_pci.c
@@ -205,7 +205,7 @@ int ibmphp_configure_card(struct pci_func *func, u8 slotno)
cur_func->next 
= newfunc;
 
rc = 
ibmphp_configure_card(newfunc, slotno);
-   /* This could only 
happen if kmalloc failed */
+   /* This could only 
happen if kzalloc failed */
if (rc) {
/* We need to 
do this in case bridge itself got configured properly, but devices behind it 
failed */
func->bus = 1; 
/* To indicate to the unconfigure function that this is a PPB */
diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
index bbcef1a..13c5a44 100644
--- a/drivers/pci/setup-bus.c
+++ b/drivers/pci/setup-bus.c
@@ -151,7 +151,7 @@ static void pdev_sort_resources(struct pci_dev *dev, struct 
list_head *head)
 
tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
if (!tmp)
-   panic("pdev_sort_resources(): kmalloc() failed!\n");
+   panic("%s: kzalloc() failed!\n", __func__);
tmp->res = r;
tmp->dev = dev;
 
-- 
2.9.5



[PATCH v3 1/4] iomap: Constify ioreadX() iomem argument (as in generic implementation)

2020-07-09 Thread Krzysztof Kozlowski
The ioreadX() and ioreadX_rep() helpers have an inconsistent interface.  On
some architectures void *__iomem address argument is a pointer to const,
on some not.

Implementations of ioreadX() do not modify the memory under the address
so they can be converted to a "const" version for const-safety and
consistency among architectures.

Suggested-by: Geert Uytterhoeven 
Signed-off-by: Krzysztof Kozlowski 
Reviewed-by: Geert Uytterhoeven 
Reviewed-by: Arnd Bergmann 
---
 arch/alpha/include/asm/core_apecs.h   |  6 +--
 arch/alpha/include/asm/core_cia.h |  6 +--
 arch/alpha/include/asm/core_lca.h |  6 +--
 arch/alpha/include/asm/core_marvel.h  |  4 +-
 arch/alpha/include/asm/core_mcpcia.h  |  6 +--
 arch/alpha/include/asm/core_t2.h  |  2 +-
 arch/alpha/include/asm/io.h   | 12 ++---
 arch/alpha/include/asm/io_trivial.h   | 16 +++---
 arch/alpha/include/asm/jensen.h   |  2 +-
 arch/alpha/include/asm/machvec.h  |  6 +--
 arch/alpha/kernel/core_marvel.c   |  2 +-
 arch/alpha/kernel/io.c| 12 ++---
 arch/parisc/include/asm/io.h  |  4 +-
 arch/parisc/lib/iomap.c   | 72 +--
 arch/powerpc/kernel/iomap.c   | 28 +--
 arch/sh/kernel/iomap.c| 22 
 drivers/sh/clk/cpg.c  |  2 +-
 include/asm-generic/iomap.h   | 28 +--
 include/linux/io-64-nonatomic-hi-lo.h |  4 +-
 include/linux/io-64-nonatomic-lo-hi.h |  4 +-
 lib/iomap.c   | 30 +--
 21 files changed, 137 insertions(+), 137 deletions(-)

diff --git a/arch/alpha/include/asm/core_apecs.h 
b/arch/alpha/include/asm/core_apecs.h
index 0a07055bc0fe..2d9726fc02ef 100644
--- a/arch/alpha/include/asm/core_apecs.h
+++ b/arch/alpha/include/asm/core_apecs.h
@@ -384,7 +384,7 @@ struct el_apecs_procdata
}   \
} while (0)
 
-__EXTERN_INLINE unsigned int apecs_ioread8(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int apecs_ioread8(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
unsigned long result, base_and_type;
@@ -420,7 +420,7 @@ __EXTERN_INLINE void apecs_iowrite8(u8 b, void __iomem 
*xaddr)
*(vuip) ((addr << 5) + base_and_type) = w;
 }
 
-__EXTERN_INLINE unsigned int apecs_ioread16(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int apecs_ioread16(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
unsigned long result, base_and_type;
@@ -456,7 +456,7 @@ __EXTERN_INLINE void apecs_iowrite16(u16 b, void __iomem 
*xaddr)
*(vuip) ((addr << 5) + base_and_type) = w;
 }
 
-__EXTERN_INLINE unsigned int apecs_ioread32(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int apecs_ioread32(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
if (addr < APECS_DENSE_MEM)
diff --git a/arch/alpha/include/asm/core_cia.h 
b/arch/alpha/include/asm/core_cia.h
index c706a7f2b061..cb22991f6761 100644
--- a/arch/alpha/include/asm/core_cia.h
+++ b/arch/alpha/include/asm/core_cia.h
@@ -342,7 +342,7 @@ struct el_CIA_sysdata_mcheck {
 #define vuip   volatile unsigned int __force *
 #define vulp   volatile unsigned long __force *
 
-__EXTERN_INLINE unsigned int cia_ioread8(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int cia_ioread8(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
unsigned long result, base_and_type;
@@ -374,7 +374,7 @@ __EXTERN_INLINE void cia_iowrite8(u8 b, void __iomem *xaddr)
*(vuip) ((addr << 5) + base_and_type) = w;
 }
 
-__EXTERN_INLINE unsigned int cia_ioread16(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int cia_ioread16(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
unsigned long result, base_and_type;
@@ -404,7 +404,7 @@ __EXTERN_INLINE void cia_iowrite16(u16 b, void __iomem 
*xaddr)
*(vuip) ((addr << 5) + base_and_type) = w;
 }
 
-__EXTERN_INLINE unsigned int cia_ioread32(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int cia_ioread32(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
if (addr < CIA_DENSE_MEM)
diff --git a/arch/alpha/include/asm/core_lca.h 
b/arch/alpha/include/asm/core_lca.h
index 84d5e5b84f4f..ec86314418cb 100644
--- a/arch/alpha/include/asm/core_lca.h
+++ b/arch/alpha/include/asm/core_lca.h
@@ -230,7 +230,7 @@ union el_lca {
} while (0)
 
 
-__EXTERN_INLINE unsigned int lca_ioread8(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int lca_ioread8(const void __iomem *xaddr)
 {
unsigned long addr = (unsigned long) xaddr;
unsigned long result, base_and_type;
@@ -266,7 +266,7 @@ __EXTERN_INLINE void lca_iowrite8(u8 b, void __iomem *xaddr)
*(vuip) ((addr << 5) + base_and_type) = w;
 }
 
-__EXTERN_INLINE unsigned int lca_ioread16(void __iomem *xaddr)
+__EXTERN_INLINE unsigned int lca_ioread16(const void __iomem 

[PATCH] TI DAVINCI SERIES MEDIA DRIVER: Replace HTTP links with HTTPS ones

2020-07-09 Thread Alexander A. Klimov
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.

Deterministic algorithm:
For each file:
  If not .svg:
For each line:
  If doesn't contain `\bxmlns\b`:
For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
  If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
If both the HTTP and HTTPS versions
return 200 OK and serve the same content:
  Replace HTTP with HTTPS.

Signed-off-by: Alexander A. Klimov 
---
 Continuing my work started at 93431e0607e5.
 See also: git log --oneline '--author=Alexander A. Klimov 
' v5.7..master
 (Actually letting a shell for loop submit all this stuff for me.)

 If there are any URLs to be removed completely or at least not HTTPSified:
 Just clearly say so and I'll *undo my change*.
 See also: https://lkml.org/lkml/2020/6/27/64

 If there are any valid, but yet not changed URLs:
 See: https://lkml.org/lkml/2020/6/26/837

 If you apply the patch, please let me know.


 drivers/media/platform/davinci/vpbe_display.c | 2 +-
 drivers/media/platform/davinci/vpif.c | 2 +-
 drivers/media/platform/davinci/vpif.h | 2 +-
 drivers/media/platform/davinci/vpif_display.c | 2 +-
 drivers/media/platform/davinci/vpif_display.h | 2 +-
 include/media/davinci/vpbe_display.h  | 2 +-
 6 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/media/platform/davinci/vpbe_display.c 
b/drivers/media/platform/davinci/vpbe_display.c
index 7ab13eb7527d..d19bad997f30 100644
--- a/drivers/media/platform/davinci/vpbe_display.c
+++ b/drivers/media/platform/davinci/vpbe_display.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * Copyright (C) 2010 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2010 Texas Instruments Incorporated - https://www.ti.com/
  */
 #include 
 #include 
diff --git a/drivers/media/platform/davinci/vpif.c 
b/drivers/media/platform/davinci/vpif.c
index df66461f5d4f..e9794c9fc7fe 100644
--- a/drivers/media/platform/davinci/vpif.c
+++ b/drivers/media/platform/davinci/vpif.c
@@ -5,7 +5,7 @@
  * The hardware supports SDTV, HDTV formats, raw data capture.
  * Currently, the driver supports NTSC and PAL standards.
  *
- * Copyright (C) 2009 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2009 Texas Instruments Incorporated - https://www.ti.com/
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License as
diff --git a/drivers/media/platform/davinci/vpif.h 
b/drivers/media/platform/davinci/vpif.h
index 2466c7c77deb..c6d1d890478a 100644
--- a/drivers/media/platform/davinci/vpif.h
+++ b/drivers/media/platform/davinci/vpif.h
@@ -1,7 +1,7 @@
 /*
  * VPIF header file
  *
- * Copyright (C) 2009 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2009 Texas Instruments Incorporated - https://www.ti.com/
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License as
diff --git a/drivers/media/platform/davinci/vpif_display.c 
b/drivers/media/platform/davinci/vpif_display.c
index 7d55fd45240e..46afc029138f 100644
--- a/drivers/media/platform/davinci/vpif_display.c
+++ b/drivers/media/platform/davinci/vpif_display.c
@@ -2,7 +2,7 @@
  * vpif-display - VPIF display driver
  * Display driver for TI DaVinci VPIF
  *
- * Copyright (C) 2009 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2009 Texas Instruments Incorporated - https://www.ti.com/
  * Copyright (C) 2014 Lad, Prabhakar 
  *
  * This program is free software; you can redistribute it and/or
diff --git a/drivers/media/platform/davinci/vpif_display.h 
b/drivers/media/platform/davinci/vpif_display.h
index af2765fdcea8..f731a65eefd6 100644
--- a/drivers/media/platform/davinci/vpif_display.h
+++ b/drivers/media/platform/davinci/vpif_display.h
@@ -1,7 +1,7 @@
 /*
  * VPIF display header file
  *
- * Copyright (C) 2009 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2009 Texas Instruments Incorporated - https://www.ti.com/
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License as
diff --git a/include/media/davinci/vpbe_display.h 
b/include/media/davinci/vpbe_display.h
index 56d05a855140..6d2a93740130 100644
--- a/include/media/davinci/vpbe_display.h
+++ b/include/media/davinci/vpbe_display.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (C) 2010 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2010 Texas Instruments Incorporated - https://www.ti.com/
  */
 #ifndef VPBE_DISPLAY_H
 #define VPBE_DISPLAY_H
-- 
2.27.0



[PATCH v3 0/4] iomap: Constify ioreadX() iomem argument

2020-07-09 Thread Krzysztof Kozlowski
Hi,

Multiple architectures are affected in the first patch and all further
patches depend on the first.

Maybe this could go in through Andrew Morton's tree?


Changes since v2

1. Drop all non-essential patches (cleanups),
2. Update also drivers/sh/clk/cpg.c .


Changes since v1

https://lore.kernel.org/lkml/1578415992-24054-1-git-send-email-k...@kernel.org/
1. Constify also ioreadX_rep() and mmio_insX(),
2. Squash lib+alpha+powerpc+parisc+sh into one patch for bisectability,
3. Add acks and reviews,
4. Re-order patches so all optional driver changes are at the end.


Description
===
The ioread8/16/32() and others have an inconsistent interface among the
architectures: some taking address as const, some not.

It seems there is nothing really stopping all of them from taking a
pointer to const.

Patchset was only compile tested on affected architectures.  No real
testing.
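
As a small illustration (not taken from the series) of what the const-qualified
prototypes buy a caller: a driver can keep read-only MMIO cookies const and
still use the generic accessors without casts or -Wdiscarded-qualifiers
warnings.

	/* hypothetical driver helper; 0x04 is an assumed register offset */
	static u32 mydrv_read_status(const void __iomem *regs)
	{
		return ioread32(regs + 0x04);	/* fine once ioread32() takes const */
	}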


volatile

There is still interface inconsistency between architectures around
"volatile" qualifier:
 - include/asm-generic/io.h:static inline u32 ioread32(const volatile void 
__iomem *addr)
 - include/asm-generic/iomap.h:extern unsigned int ioread32(const void __iomem 
*);

This is still discussed and out of scope of this patchset.


Best regards,
Krzysztof


Krzysztof Kozlowski (4):
  iomap: Constify ioreadX() iomem argument (as in generic
implementation)
  rtl818x: Constify ioreadX() iomem argument (as in generic
implementation)
  ntb: intel: Constify ioreadX() iomem argument (as in generic
implementation)
  virtio: pci: Constify ioreadX() iomem argument (as in generic
implementation)

 arch/alpha/include/asm/core_apecs.h   |  6 +-
 arch/alpha/include/asm/core_cia.h |  6 +-
 arch/alpha/include/asm/core_lca.h |  6 +-
 arch/alpha/include/asm/core_marvel.h  |  4 +-
 arch/alpha/include/asm/core_mcpcia.h  |  6 +-
 arch/alpha/include/asm/core_t2.h  |  2 +-
 arch/alpha/include/asm/io.h   | 12 ++--
 arch/alpha/include/asm/io_trivial.h   | 16 ++---
 arch/alpha/include/asm/jensen.h   |  2 +-
 arch/alpha/include/asm/machvec.h  |  6 +-
 arch/alpha/kernel/core_marvel.c   |  2 +-
 arch/alpha/kernel/io.c| 12 ++--
 arch/parisc/include/asm/io.h  |  4 +-
 arch/parisc/lib/iomap.c   | 72 +--
 arch/powerpc/kernel/iomap.c   | 28 
 arch/sh/kernel/iomap.c| 22 +++---
 .../realtek/rtl818x/rtl8180/rtl8180.h |  6 +-
 drivers/ntb/hw/intel/ntb_hw_gen1.c|  2 +-
 drivers/ntb/hw/intel/ntb_hw_gen3.h|  2 +-
 drivers/ntb/hw/intel/ntb_hw_intel.h   |  2 +-
 drivers/sh/clk/cpg.c  |  2 +-
 drivers/virtio/virtio_pci_modern.c|  6 +-
 include/asm-generic/iomap.h   | 28 
 include/linux/io-64-nonatomic-hi-lo.h |  4 +-
 include/linux/io-64-nonatomic-lo-hi.h |  4 +-
 lib/iomap.c   | 30 
 26 files changed, 146 insertions(+), 146 deletions(-)

-- 
2.17.1



[PATCH v3 2/4] rtl818x: Constify ioreadX() iomem argument (as in generic implementation)

2020-07-09 Thread Krzysztof Kozlowski
The ioreadX() helpers have an inconsistent interface.  On some architectures
void *__iomem address argument is a pointer to const, on some not.

Implementations of ioreadX() do not modify the memory under the address
so they can be converted to a "const" version for const-safety and
consistency among architectures.

Signed-off-by: Krzysztof Kozlowski 
Reviewed-by: Geert Uytterhoeven 
Acked-by: Kalle Valo 
---
 drivers/net/wireless/realtek/rtl818x/rtl8180/rtl8180.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8180/rtl8180.h 
b/drivers/net/wireless/realtek/rtl818x/rtl8180/rtl8180.h
index 7948a2da195a..2ff00800d45b 100644
--- a/drivers/net/wireless/realtek/rtl818x/rtl8180/rtl8180.h
+++ b/drivers/net/wireless/realtek/rtl818x/rtl8180/rtl8180.h
@@ -150,17 +150,17 @@ void rtl8180_write_phy(struct ieee80211_hw *dev, u8 addr, 
u32 data);
 void rtl8180_set_anaparam(struct rtl8180_priv *priv, u32 anaparam);
 void rtl8180_set_anaparam2(struct rtl8180_priv *priv, u32 anaparam2);
 
-static inline u8 rtl818x_ioread8(struct rtl8180_priv *priv, u8 __iomem *addr)
+static inline u8 rtl818x_ioread8(struct rtl8180_priv *priv, const u8 __iomem 
*addr)
 {
return ioread8(addr);
 }
 
-static inline u16 rtl818x_ioread16(struct rtl8180_priv *priv, __le16 __iomem 
*addr)
+static inline u16 rtl818x_ioread16(struct rtl8180_priv *priv, const __le16 
__iomem *addr)
 {
return ioread16(addr);
 }
 
-static inline u32 rtl818x_ioread32(struct rtl8180_priv *priv, __le32 __iomem 
*addr)
+static inline u32 rtl818x_ioread32(struct rtl8180_priv *priv, const __le32 
__iomem *addr)
 {
return ioread32(addr);
 }
-- 
2.17.1



[PATCH v3 3/4] ntb: intel: Constify ioreadX() iomem argument (as in generic implementation)

2020-07-09 Thread Krzysztof Kozlowski
The ioreadX() helpers have an inconsistent interface.  On some architectures
void *__iomem address argument is a pointer to const, on some not.

Implementations of ioreadX() do not modify the memory under the address
so they can be converted to a "const" version for const-safety and
consistency among architectures.

Signed-off-by: Krzysztof Kozlowski 
Reviewed-by: Geert Uytterhoeven 
Acked-by: Dave Jiang 
---
 drivers/ntb/hw/intel/ntb_hw_gen1.c  | 2 +-
 drivers/ntb/hw/intel/ntb_hw_gen3.h  | 2 +-
 drivers/ntb/hw/intel/ntb_hw_intel.h | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/ntb/hw/intel/ntb_hw_gen1.c 
b/drivers/ntb/hw/intel/ntb_hw_gen1.c
index 423f9b8fbbcf..3185efeab487 100644
--- a/drivers/ntb/hw/intel/ntb_hw_gen1.c
+++ b/drivers/ntb/hw/intel/ntb_hw_gen1.c
@@ -1205,7 +1205,7 @@ int intel_ntb_peer_spad_write(struct ntb_dev *ntb, int 
pidx, int sidx,
   ndev->peer_reg->spad);
 }
 
-static u64 xeon_db_ioread(void __iomem *mmio)
+static u64 xeon_db_ioread(const void __iomem *mmio)
 {
return (u64)ioread16(mmio);
 }
diff --git a/drivers/ntb/hw/intel/ntb_hw_gen3.h 
b/drivers/ntb/hw/intel/ntb_hw_gen3.h
index 2bc5d8356045..dea93989942d 100644
--- a/drivers/ntb/hw/intel/ntb_hw_gen3.h
+++ b/drivers/ntb/hw/intel/ntb_hw_gen3.h
@@ -91,7 +91,7 @@
 #define GEN3_DB_TOTAL_SHIFT33
 #define GEN3_SPAD_COUNT16
 
-static inline u64 gen3_db_ioread(void __iomem *mmio)
+static inline u64 gen3_db_ioread(const void __iomem *mmio)
 {
return ioread64(mmio);
 }
diff --git a/drivers/ntb/hw/intel/ntb_hw_intel.h 
b/drivers/ntb/hw/intel/ntb_hw_intel.h
index d61fcd91714b..05e2335c9596 100644
--- a/drivers/ntb/hw/intel/ntb_hw_intel.h
+++ b/drivers/ntb/hw/intel/ntb_hw_intel.h
@@ -103,7 +103,7 @@ struct intel_ntb_dev;
 struct intel_ntb_reg {
int (*poll_link)(struct intel_ntb_dev *ndev);
int (*link_is_up)(struct intel_ntb_dev *ndev);
-   u64 (*db_ioread)(void __iomem *mmio);
+   u64 (*db_ioread)(const void __iomem *mmio);
void (*db_iowrite)(u64 db_bits, void __iomem *mmio);
unsigned long   ntb_ctl;
resource_size_t db_size;
-- 
2.17.1



Re: [Intel-gfx] [PATCH 03/18] dma-fence: basic lockdep annotations

2020-07-09 Thread Daniel Stone
Hi,
Jumping in after a couple of weeks where I've paged most everything
out of my brain ...

On Fri, 19 Jun 2020 at 10:43, Daniel Vetter  wrote:
> On Fri, Jun 19, 2020 at 10:13:35AM +0100, Chris Wilson wrote:
> > > The proposed patches might very well encode the wrong contract, that's
> > > all up for discussion. But fundamentally questioning that we need one
> > > is missing what upstream is all about.
> >
> > Then I have not clearly communicated, as my opinion is not that
> > validation is worthless, but that the implementation is enshrining a
> > global property on a low level primitive that prevents it from being
> > used elsewhere. And I want to replace completion [chains] with fences, and
> > bio with fences, and closures with fences, and what other equivalencies
> > there are in the kernel. The fence is as central a locking construct as
> > struct completion and deserves to be a foundational primitive provided
> > by kernel/ used throughout all drivers for discrete problem domains.
> >
> > This is narrowing dma_fence whereby adding
> >   struct lockdep_map *dma_fence::wait_map
> > and annotating linkage, allows you to continue to specify that all
> > dma_fence used for a particular purpose must follow common rules,
> > without restricting the primitive for uses outside of this scope.
>
> Somewhere else in this thread I had discussions with Jason Gunthorpe about
> this topic. It might maybe change somewhat depending upon exact rules, but
> his take is very much "I don't want dma_fence in rdma". Or pretty close to
> that at least.
>
> Similar discussions with habanalabs, they're using dma_fence internally
> without any of the uapi. Discussion there has also now concluded that it's
> best if they remove them, and simply switch over to a wait_queue or
> completion like every other driver does.
>
> The next round of the patches already have a paragraph to at least
> somewhat limit how non-gpu drivers use dma_fence. And I guess actual
> consensus might be pointing even more strongly at dma_fence being solely
> something for gpus and closely related subsystem (maybe media) for syncing
> dma-buf access.
>
> So dma_fence as general replacement for completion chains I think just
> wont happen.
>
> What might make sense is if e.g. the lockdep annotations could be reused,
> at least in design, for wait_queue or completion or anything else
> really. I do think that has a fair chance compared to the automagic
> cross-release annotations approach, which relied way too heavily on
> guessing where barriers are. My experience from just a bit of playing
> around with these patches here and discussing them with other driver
> maintainers is that accurately deciding where critical sections start and
> end is a job for humans only. And if you get it wrong, you will have a
> false positive.
>
> And you're indeed correct that if we'd do annotations for completions and
> wait queues, then that would need to have a class per semantically
> equivalent user, like we have lockdep classes for mutexes, not just one
> overall.
>
> But dma_fence otoh is something very specific, which comes with very
> specific rules attached - it's not a generic wait_queue at all. Originally
> it did start out as one even, but it is a very specialized wait_queue.
>
> So there's imo two cases:
>
> - Your completion is entirely orthogonal of dma_fences, and can never ever
>   block a dma_fence. Don't use dma_fence for this, and no problem. It's
>   just another wait_queue somewhere.
>
> - Your completion can eventually, maybe through lots of convolutions and
>   depdencies, block a dma_fence. In that case full dma_fence rules apply,
>   and the only thing you can do with a custom annotation is make the rules
>   even stricter. E.g. if a sub-timeline in the scheduler isn't allowed to
>   take certain scheduler locks. But the userspace visible/published fence
>   do take them, maybe as part of command submission or retirement.
>   Entirely hypothetical, no idea any driver actually needs this.

I don't claim to understand the implementation of i915's scheduler and
GEM handling, and it seems like there's some public context missing
here. But to me, the above is a good statement of what I (and a lot of
other userspace) have been relying on - that dma-fence is a very
tightly scoped thing which is very predictable but in extremis.

It would be great to have something like this enshrined in dma-fence
documentation, visible to both kernel and external users. The
properties we've so far been assuming for the graphics pipeline -
covering production & execution of vertex/fragment workloads on the
GPU, framebuffer display, and to the extent this is necessary
involving compute - are something like this:

A single dma-fence with no dependencies represents (the tail of) a
unit of work, which has been all but committed to the hardware. Once
committed to the hardware, this work will complete (successfully or in
error) in bounded time. The unit of work referred to 

[PATCH v2 1/2] riscv: Add STACKPROTECTOR supported

2020-07-09 Thread guoren
From: Guo Ren 

The -fstack-protector & -fstack-protector-strong features come from
gcc. This patch only adds basic kernel support for the stack-protector
feature; an arch may also have its own solution, such as
ARM64_PTR_AUTH.

After enabling STACKPROTECTOR and STACKPROTECTOR_STRONG, the .text
size expands from 0x7de066 to 0x81fb32 (only 5%) to accommodate the
canary checking code.
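
Conceptually, the guard defined by this patch is used roughly as follows;
this is only an illustration of what gcc emits around a protected function,
not kernel source:

extern unsigned long __stack_chk_guard;
extern void __stack_chk_fail(void);

void protected_function(void)
{
	unsigned long canary_slot = __stack_chk_guard;	/* stored next to the saved registers */
	char buf[64];

	buf[0] = '\0';	/* ... body that could overflow buf into canary_slot ... */

	if (canary_slot != __stack_chk_guard)
		__stack_chk_fail();	/* corruption detected, never returns */
}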

Signed-off-by: Guo Ren 
Reviewed-by: Kees Cook 
Cc: Paul Walmsley 
Cc: Palmer Dabbelt 
Cc: Albert Ou 
Cc: Masami Hiramatsu 
Cc: Björn Töpel 
Cc: Greentime Hu 
Cc: Atish Patra 
---
 arch/riscv/Kconfig  |  1 +
 arch/riscv/include/asm/stackprotector.h | 33 +
 arch/riscv/kernel/process.c |  6 ++
 3 files changed, 40 insertions(+)
 create mode 100644 arch/riscv/include/asm/stackprotector.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index f927a91..4b0e308 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -63,6 +63,7 @@ config RISCV
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
+   select HAVE_STACKPROTECTOR
select HAVE_SYSCALL_TRACEPOINTS
select IRQ_DOMAIN
select MODULES_USE_ELF_RELA if MODULES
diff --git a/arch/riscv/include/asm/stackprotector.h 
b/arch/riscv/include/asm/stackprotector.h
new file mode 100644
index ..8e1ef2c
--- /dev/null
+++ b/arch/riscv/include/asm/stackprotector.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_STACKPROTECTOR_H
+#define _ASM_RISCV_STACKPROTECTOR_H
+
+#include 
+#include 
+#include 
+
+extern unsigned long __stack_chk_guard;
+
+/*
+ * Initialize the stackprotector canary value.
+ *
+ * NOTE: this must only be called from functions that never return,
+ * and it must always be inlined.
+ */
+static __always_inline void boot_init_stack_canary(void)
+{
+   unsigned long canary;
+   unsigned long tsc;
+
+   /* Try to get a semi random initial value. */
+   get_random_bytes(&canary, sizeof(canary));
+   tsc = get_cycles();
+   canary += tsc + (tsc << 32UL);
+   canary ^= LINUX_VERSION_CODE;
+   canary &= CANARY_MASK;
+
+   current->stack_canary = canary;
+   __stack_chk_guard = current->stack_canary;
+}
+#endif /* _ASM_RISCV_STACKPROTECTOR_H */
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 824d117..6548929 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -24,6 +24,12 @@
 
 register unsigned long gp_in_global __asm__("gp");
 
+#ifdef CONFIG_STACKPROTECTOR
+#include 
+unsigned long __stack_chk_guard __read_mostly;
+EXPORT_SYMBOL(__stack_chk_guard);
+#endif
+
 extern asmlinkage void ret_from_fork(void);
 extern asmlinkage void ret_from_kernel_thread(void);
 
-- 
2.7.4



[PATCH v3 4/4] virtio: pci: Constify ioreadX() iomem argument (as in generic implementation)

2020-07-09 Thread Krzysztof Kozlowski
The ioreadX() helpers have inconsistent interface.  On some architectures
void *__iomem address argument is a pointer to const, on some not.

Implementations of ioreadX() do not modify the memory under the address
so they can be converted to a "const" version for const-safety and
consistency among architectures.

Signed-off-by: Krzysztof Kozlowski 
Reviewed-by: Geert Uytterhoeven 
---
 drivers/virtio/virtio_pci_modern.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_pci_modern.c 
b/drivers/virtio/virtio_pci_modern.c
index db93cedd262f..90eff165a719 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -27,16 +27,16 @@
  * method, i.e. 32-bit accesses for 32-bit fields, 16-bit accesses
  * for 16-bit fields and 8-bit accesses for 8-bit fields.
  */
-static inline u8 vp_ioread8(u8 __iomem *addr)
+static inline u8 vp_ioread8(const u8 __iomem *addr)
 {
return ioread8(addr);
 }
-static inline u16 vp_ioread16 (__le16 __iomem *addr)
+static inline u16 vp_ioread16 (const __le16 __iomem *addr)
 {
return ioread16(addr);
 }
 
-static inline u32 vp_ioread32(__le32 __iomem *addr)
+static inline u32 vp_ioread32(const __le32 __iomem *addr)
 {
return ioread32(addr);
 }
-- 
2.17.1



[PATCH v2 2/2] riscv: Enable per-task stack canaries

2020-07-09 Thread guoren
From: Guo Ren 

This enables the use of per-task stack canary values if GCC has
support for emitting the stack canary reference relative to the
value of tp, which holds the task struct pointer in the riscv
kernel.

After comparing the arm64 and x86 implementations, arm64's seems more
flexible and readable. The key point is how gcc gets the offset of
stack_canary from gs/sp_el0.

x86: Uses a fixed offset from gs, which is not flexible.

struct fixed_percpu_data {
/*
 * GCC hardcodes the stack canary as %gs:40.  Since the
 * irq_stack is the object at %gs:0, we reserve the bottom
 * 48 bytes of the irq stack for the canary.
 */
char gs_base[40]; // :(
unsigned long   stack_canary;
};

arm64: Use -mstack-protector-guard-offset & guard-reg
gcc options:
-mstack-protector-guard=sysreg
-mstack-protector-guard-reg=sp_el0
-mstack-protector-guard-offset=xxx

riscv: Use -mstack-protector-guard-offset & guard-reg
gcc options:
-mstack-protector-guard=tls
-mstack-protector-guard-reg=tp
-mstack-protector-guard-offset=xxx

Here is riscv gcc's work [1].

[1] https://gcc.gnu.org/pipermail/gcc-patches/2020-July/549583.html

In the end, these codes are inserted by gcc before return:

*  0xffe00020b396 <+120>:   ld  a5,1008(tp) # 0x3f0
*  0xffe00020b39a <+124>:   xor a5,a5,a4
*  0xffe00020b39c <+126>:   mv  a0,s5
*  0xffe00020b39e <+128>:   bnez a5,0xffe00020b61c <_do_fork+766>
   0xffe00020b3a2 <+132>:   ld  ra,136(sp)
   0xffe00020b3a4 <+134>:   ld  s0,128(sp)
   0xffe00020b3a6 <+136>:   ld  s1,120(sp)
   0xffe00020b3a8 <+138>:   ld  s2,112(sp)
   0xffe00020b3aa <+140>:   ld  s3,104(sp)
   0xffe00020b3ac <+142>:   ld  s4,96(sp)
   0xffe00020b3ae <+144>:   ld  s5,88(sp)
   0xffe00020b3b0 <+146>:   ld  s6,80(sp)
   0xffe00020b3b2 <+148>:   ld  s7,72(sp)
   0xffe00020b3b4 <+150>:   addi sp,sp,144
   0xffe00020b3b6 <+152>:   ret
   ...
*  0xffe00020b61c <+766>:   auipc   ra,0x7f8
*  0xffe00020b620 <+770>:   jalr -1764(ra) # 0xffe000a02f38 <__stack_chk_fail>

Signed-off-by: Guo Ren 
Signed-off-by: cooper 
Cc: cooper 
Cc: Kees Cook 
---
Change v2:
 - Change to -mstack-protector-guard=tls for gcc final define
 - Solve compile error by changing position of KBUILD_CFLAGS in
   Makefile

Signed-off-by: Guo Ren 
---
 arch/riscv/Kconfig  |  7 +++
 arch/riscv/Makefile | 10 ++
 arch/riscv/include/asm/stackprotector.h |  3 ++-
 arch/riscv/kernel/asm-offsets.c |  3 +++
 arch/riscv/kernel/process.c |  2 +-
 5 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4b0e308..d98ce29 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -394,6 +394,13 @@ config CMDLINE_FORCE
 
 endchoice
 
+config CC_HAVE_STACKPROTECTOR_TLS
+   def_bool $(cc-option,-mstack-protector-guard=tls 
-mstack-protector-guard-reg=tp -mstack-protector-guard-offset=0)
+
+config STACKPROTECTOR_PER_TASK
+   def_bool y
+   depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS
+
 endmenu
 
 config BUILTIN_DTB
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index fb6e37d..f5f8ee9 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -68,6 +68,16 @@ KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
 # architectures.  It's faster to have GCC emit only aligned accesses.
 KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
 
+ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y)
+prepare: stack_protector_prepare
+stack_protector_prepare: prepare0
+   $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls   \
+   -mstack-protector-guard-reg=tp\
+   -mstack-protector-guard-offset=$(shell\
+   awk '{if ($$2 == "TSK_STACK_CANARY") print $$3;}' \
+   include/generated/asm-offsets.h))
+endif
+
 # arch specific predefines for sparse
 CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS)
 
diff --git a/arch/riscv/include/asm/stackprotector.h 
b/arch/riscv/include/asm/stackprotector.h
index 8e1ef2c..bda4d83 100644
--- a/arch/riscv/include/asm/stackprotector.h
+++ b/arch/riscv/include/asm/stackprotector.h
@@ -28,6 +28,7 @@ static __always_inline void boot_init_stack_canary(void)
canary &= CANARY_MASK;
 
current->stack_canary = canary;
-   __stack_chk_guard = current->stack_canary;
+   if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
+   __stack_chk_guard = current->stack_canary;
 }
 #endif /* _ASM_RISCV_STACKPROTECTOR_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 07cb9c1..999b465 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -29,6 +29,9 @@ void 
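
The TSK_STACK_CANARY value that the Makefile rule extracts with awk is
conventionally emitted from asm-offsets.c with the OFFSET() helper; a sketch
of what such a definition looks like (assumed form, not quoted from this
patch):

/* inside asm_offsets() in arch/riscv/kernel/asm-offsets.c */
OFFSET(TSK_STACK_CANARY, task_struct, stack_canary);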

Re: [PATCH] spi: spi-geni-qcom: Set the clock properly at runtime resume

2020-07-09 Thread Akash Asthana

Hi Doug,

  
@@ -670,7 +674,13 @@ static int __maybe_unused spi_geni_runtime_resume(struct device *dev)

if (ret)
return ret;
  
-	return geni_se_resources_on(&mas->se);

+   ret = geni_se_resources_on(&mas->se);
+   if (ret)
+   return ret;
+
+   dev_pm_opp_set_rate(mas->dev, mas->cur_sclk_hz);
+
+   return 0;
  }


Should we fail to resume if an error is returned from 'opp_set_rate'?

'spi_geni_prepare_message' used to fail for any error from
'opp_set_rate' before the patch series "Avoid clock setting if not needed".


But now it's possible that 'prepare_message' returns success even
when the OPP is not at the desired state (from the previous resume call).

Regards,

Akash

  
  static int __maybe_unused spi_geni_suspend(struct device *dev)


--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project



Re: [PATCH] arm64/module-plts: Consider the special case where plt_max_entries is 0

2020-07-09 Thread Will Deacon
On Thu, Jul 09, 2020 at 07:18:01AM +, 彭浩(Richard) wrote:
> On Thu, 9 Jul 2020 at 09:50, 彭浩(Richard)  wrote:
> >> >Apparently, you are hitting a R_AARCH64_JUMP26 or R_AARCH64_CALL26
> >> >relocation that operates on a b or bl instruction that is more than
> >> >128 megabytes away from its target.
> >> >
> >> My understanding is that a module that calls functions that are not part
> >> of the module will use the PLT.
> >> plt_max_entries = 0 may occur if a module does not depend on other module
> >> functions.
> >>
> >
> >A PLT slot is allocated for each b or bl instruction that refers to a
> >symbol that lives in a different section, either of the same module
> > (e.g., bl in .init calling into .text), of another module, or of the
> >core kernel.
> >
> >I don't see how you end up with plt_max_entries in this case, though.
> If a module does not depend on other module functions, the number of PLT
> entries in the module is equal to 0.

This brings me back to my earlier question: if there are no PLT entries in
the module, then count_plts() will not find any R_AARCH64_JUMP26 or
R_AARCH64_CALL26 relocations that require PLTs and will therefore return 0.
The absence of these relocations means that module_emit_plt_entry() will not
be called by apply_relocate_add(), and so your patch should have no effect.

You seem to be saying that module_emit_plt_entry() _is_ being called,
despite count_plts() returning 0. One way that can happen is if PLTs are
needed for branches within a single, very large text section, but you also
say that's not the case.

So I think we need more information from you so that we can either reproduce
this ourselves, or better understand where things are going wrong.

Finally, you said that your kernel is "5.6.0-rc3+". Are you able to
reproduce with mainline (5.8-rc4)?

Will

P.S. whenever you reply, the mail threading breaks :(


Re: [PATCH 4/5] iommu/arm-smmu-qcom: Consstently initialize stream mappings

2020-07-09 Thread Vinod Koul
On 08-07-20, 22:01, Bjorn Andersson wrote:
> Firmware that traps writes to S2CR to translate BYPASS into FAULT also
> ignores writes of type FAULT. As such booting with "disable_bypass" set
> will result in all S2CR registers left as configured by the bootloader.
> 
> This has been seen to result in indeterministic results, as these
> mappings might linger and reference context banks that Linux is
> reconfiguring.
> 
> Use the fact that BYPASS writes result in FAULT type to force all stream
> mappings to FAULT.

s/Consstently/Consistently in patch subject

-- 
~Vinod


Re: [PATCH v3 1/4] iomap: Constify ioreadX() iomem argument (as in generic implementation)

2020-07-09 Thread Krzysztof Kozlowski
On Thu, Jul 09, 2020 at 09:28:34AM +0200, Krzysztof Kozlowski wrote:
> The ioreadX() and ioreadX_rep() helpers have inconsistent interface.  On
> some architectures void *__iomem address argument is a pointer to const,
> on some not.
> 
> Implementations of ioreadX() do not modify the memory under the address
> so they can be converted to a "const" version for const-safety and
> consistency among architectures.
> 
> Suggested-by: Geert Uytterhoeven 
> Signed-off-by: Krzysztof Kozlowski 
> Reviewed-by: Geert Uytterhoeven 
> Reviewed-by: Arnd Bergmann 

I forgot to put here one more Ack, for PowerPC:
Acked-by: Michael Ellerman  (powerpc)

https://lore.kernel.org/lkml/87ftedj0zz@mpe.ellerman.id.au/

Best regards,
Krzysztof



Re: [PATCH 0/5] iommu/arm-smmu: Support maintaining bootloader mappings

2020-07-09 Thread Vinod Koul
On 08-07-20, 22:01, Bjorn Andersson wrote:
> Based on previous attempts and discussions this is the latest attempt at
> inheriting stream mappings set up by the bootloader, for e.g. boot splash or
> efifb.
> 
> The first patch is an implementation of Robin's suggestion that we should just
> mark the relevant stream mappings as BYPASS. Relying on something else to set
> up the stream mappings wanted - e.g. by reading it back in platform specific
> implementation code.
> 
> The series then tackles the problem seen in most versions of Qualcomm 
> firmware,
> that the hypervisor intercepts BYPASS writes and turn them into FAULTs. It 
> does
> this by allocating context banks for identity domains as well, with 
> translation
> disabled.
> 
> Lastly it amends the stream mapping initialization code to allocate a specific
> identity domain that is used for any mappings inherited from the bootloader, 
> if
> above Qualcomm quirk is required.
> 
> 
> The series has been tested and shown to allow booting SDM845, SDM850, SM8150,
> SM8250 with boot splash screen setup by the bootloader. Specifically it also
> allows the Lenovo Yoga C630 to boot with SMMU and efifb enabled.

This resolves issue on RB3 for me so:

Tested-by: Vinod Koul 

-- 
~Vinod


Re: [PATCH] drm/vc4: dsi: Only register our component once a DSI device is attached

2020-07-09 Thread Maxime Ripard
Hi Eric,

On Tue, Jul 07, 2020 at 09:48:45AM -0700, Eric Anholt wrote:
> On Tue, Jul 7, 2020 at 3:26 AM Maxime Ripard  wrote:
> >
> > If the DSI driver is the last to probe, component_add will try to run all
> > the bind callbacks straight away and return the error code.
> >
> > However, since we depend on a power domain, we're pretty much guaranteed to
> > be in that case on the BCM2711, and are just lucky on the previous SoCs
> > since the v3d also depends on that power domain and is further in the probe
> > order.
> >
> > In that case, the DSI host will not stick around in the system: the DSI
> > bind callback will be executed, will not find any DSI device attached and
> > will return EPROBE_DEFER, and we will then remove the DSI host and ask to
> > be probed later on.
> >
> > But since that host doesn't stick around, DSI devices like the RaspberryPi
> > touchscreen whose probe is not linked to the DSI host (unlike the usual DSI
> > devices that will be probed through the call to mipi_dsi_host_register)
> > cannot attach to the DSI host, and we thus end up in a situation where the
> > DSI host cannot probe because the panel hasn't probed yet, and the panel
> > cannot probe because the DSI host hasn't yet.
> >
> > In order to break this cycle, let's wait until there's a DSI device that
> > attaches to the DSI host to register the component and allow to progress
> > further.
> >
> > Suggested-by: Andrzej Hajda 
> > Signed-off-by: Maxime Ripard 
> 
> I feel like I've written this patch before, but I've thankfully
> forgotten most of my battle with DSI probing.  As long as this still
> lets vc4 probe in the absence of a DSI panel in the DT as well, then
> this is enthusiastically acked.

I'm not really sure what you mean by that: did you mean vc4 has to probe
when the DSI controller is enabled but there's no panel described, or that
it has to probe when the DSI controller is disabled?

Maxime
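
For reference, a rough sketch of the approach the patch description outlines --
registering the component only once a DSI device has attached to the host
(simplified; names as used in the vc4 DSI driver):

static int vc4_dsi_host_attach(struct mipi_dsi_host *host,
			       struct mipi_dsi_device *device)
{
	struct vc4_dsi *dsi = host_to_dsi(host);

	/* ... record device->lanes, format and mode flags as before ... */

	/* Only now expose the component, so the bind callback finds an
	 * attached DSI device instead of returning -EPROBE_DEFER. */
	return component_add(&dsi->pdev->dev, &vc4_dsi_ops);
}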


Re: [PATCH v2 2/3] media: rockchip: Introduce driver for Rockhip's camera interface

2020-07-09 Thread Maxime Chevallier
Hi Ezequiel,

Sorry for the late reply, some answers to your very useful comments
below :)

On Sun, 31 May 2020 10:40:14 -0300
Ezequiel Garcia  wrote:

>Hi Maxime,
>
>Thanks for posting this patch. I think you can still improve it,
>but it's a neat first try! :-)
>
>On Fri, 29 May 2020 at 10:05, Maxime Chevallier
> wrote:
>>
>> Introduce a driver for the camera interface on some Rockchip platforms.
>>
>> This controller supports CSI2, Parallel and BT656 interfaces, but for
>> now only the parallel interface could be tested, hence it's the only one
>> that's supported in the first version of this driver.
>>  
>
>I'm confused: you mention parallel as the only tested interface,
>but the cover letter mentions PAL. Doesn't PAL mean BT.656,
>or am I completely lost?

No you are correct, this is a misunderstanding on my part about the
various formats and naming schemes.

The main point I wanted to outline is that the hardware supports a CSI2
interface, which this version of the driver doesn't implement.

>(I am not super familiar with parallel sensors).
>
>> This controller can be found on PX30, RK1808, RK3128, RK3288 and RK3288,
>> but for now it has only been tested on PX30.
>>  
>
>My RK3288 and RK3326 (i.e. PX30) refer to this IP block as "Video
>Input interface".
>I am wondering if it won't be clearer for developers / users if we
>rename the driver
>to rockchip-vip (and of course s/cif/vip and s/CIF/VIP).

After looking into the datasheets for these SoCs, it's clear that the
denomination should indeed be "VIP" and not "CIF", thanks !

>> Most of this driver was written following the BSP driver from rockchip,
>> removing the parts that either didn't fit the guidelines correctly, or
>> that couldn't be tested.
>>
>> This basic version doesn't support cropping or scaling, and is only
>> designed with one sensor attached to it at any time.
>>
>> Signed-off-by: Maxime Chevallier 
>> ---
>>
>> Changes since V1 :
>>
>>  - Convert to the bulk APIs for clocks and resets  
>
>Note that the bulk API clock conversion was not
>properly done.
>
>>  - remove useless references to priv data
>>  - Move around some init functions at probe time
>>  - Upate some helpers to more suitable ones
>>
>> Here is the output from v4l2-compliance. There are no fails in the final
>> summary, but there is one in the output that I didn't catch previously.
>>
>> Still, here's the V2 in the meantime, if you have any further reviews.
>>
>> v4l2-compliance SHA: not available, 64 bits
>>
>> Compliance test for rkcif device /dev/video0:
>>
>> Driver Info:
>> Driver name  : rkcif
>> Card type: rkcif
>> Bus info : platform:ff49.cif
>> Driver version   : 5.7.0
>> Capabilities : 0x84201000
>> Video Capture Multiplanar
>> Streaming
>> Extended Pix Format
>> Device Capabilities
>> Device Caps  : 0x04201000
>> Video Capture Multiplanar
>> Streaming
>> Extended Pix Format
>> Media Driver Info:
>> Driver name  : rkcif
>> Model: rkcif
>> Serial   :
>> Bus info :
>> Media version: 5.7.0
>> Hardware revision: 0x (0)
>> Driver version   : 5.7.0
>> Interface Info:
>> ID   : 0x0302
>> Type : V4L Video
>> Entity Info:
>> ID   : 0x0001 (1)
>> Name : video_rkcif
>> Function : V4L2 I/O
>> Pad 0x0104   : 0: Sink
>>   Link 0x0207: from remote pad 0x106 of entity 'tw9900 
>> 2-0044': Data, Enabled
>>
>> Required ioctls:
>> test MC information (see 'Media Driver Info' above): OK
>> test VIDIOC_QUERYCAP: OK
>>
>> Allow for multiple opens:
>> test second /dev/video0 open: OK
>> test VIDIOC_QUERYCAP: OK
>> test VIDIOC_G/S_PRIORITY: OK
>> test for unlimited opens: OK
>>
>> Debug ioctls:
>> test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
>> test VIDIOC_LOG_STATUS: OK (Not Supported)
>>
>> Input ioctls:
>> test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
>> test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
>> test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
>> test VIDIOC_ENUMAUDIO: OK (Not Supported)
>> test VIDIOC_G/S/ENUMINPUT: OK
>> test VIDIOC_G/S_AUDIO: OK (Not Supported)
>> Inputs: 1 Audio Inputs: 0 Tuners: 0
>>
>> Output ioctls:
>> test VIDIOC_G/S_MODULATOR: OK (Not Supported)
>> test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
>> test VIDIOC_ENUMAUDOUT: OK (Not Supported)
>> test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
>> test VIDIOC_G/S_AUDOUT: OK (Not Supported)
>> Outputs: 0 Audio Outputs: 0 Modulators: 0
>>
>> Input/Output configuration ioctls:
>> test VIDIOC_ENUM/G/S/QUERY_STD: 

Re: [PATCH v4 11/18] nitro_enclaves: Add logic for enclave memory region set

2020-07-09 Thread Paraschiv, Andra-Irina




On 06/07/2020 13:46, Alexander Graf wrote:



On 22.06.20 22:03, Andra Paraschiv wrote:

Another resource that is set for an enclave is memory. User space
memory regions, which need to be backed by contiguous memory regions,
are associated with the enclave.

One solution for allocating / reserving contiguous memory regions,
used here for integration, is hugetlbfs. The user space process that
is associated with the enclave passes these memory regions to the
driver.

The enclave memory regions need to be from the same NUMA node as the
enclave CPUs.

Add ioctl command logic for setting user space memory region for an
enclave.
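
For illustration, a hypothetical user space sketch of backing one region with
a 2 MiB hugepage and handing it to the enclave (ioctl and structure names as
used in this series; error handling trimmed):

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/nitro_enclaves.h>

static int ne_add_mem_region(int enclave_fd)
{
	struct ne_user_memory_region mem_region = {};
	void *addr;

	mem_region.memory_size = 2 * 1024 * 1024;	/* one 2 MiB hugepage */
	addr = mmap(NULL, mem_region.memory_size, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED)
		return -1;
	mem_region.userspace_addr = (__u64)addr;

	return ioctl(enclave_fd, NE_SET_USER_MEMORY_REGION, &mem_region);
}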

Signed-off-by: Alexandru Vasile 
Signed-off-by: Andra Paraschiv 
---
Changelog

v3 -> v4

* Check enclave memory regions are from the same NUMA node as the
   enclave CPUs.
* Use dev_err instead of custom NE log pattern.
* Update the NE ioctl call to match the decoupling from the KVM API.

v2 -> v3

* Remove the WARN_ON calls.
* Update static calls sanity checks.
* Update kzfree() calls to kfree().

v1 -> v2

* Add log pattern for NE.
* Update goto labels to match their purpose.
* Remove the BUG_ON calls.
* Check if enclave max memory regions is reached when setting an enclave
   memory region.
* Check if enclave state is init when setting an enclave memory region.
---
  drivers/virt/nitro_enclaves/ne_misc_dev.c | 257 ++
  1 file changed, 257 insertions(+)

diff --git a/drivers/virt/nitro_enclaves/ne_misc_dev.c 
b/drivers/virt/nitro_enclaves/ne_misc_dev.c

index cfdefa52ed2a..17ccb6cdbd75 100644
--- a/drivers/virt/nitro_enclaves/ne_misc_dev.c
+++ b/drivers/virt/nitro_enclaves/ne_misc_dev.c
@@ -476,6 +476,233 @@ static int ne_create_vcpu_ioctl(struct 
ne_enclave *ne_enclave, u32 vcpu_id)

  return rc;
  }
  +/**
+ * ne_sanity_check_user_mem_region - Sanity check the userspace memory
+ * region received during the set user memory region ioctl call.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @mem_region: user space memory region to be sanity checked.
+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_sanity_check_user_mem_region(struct ne_enclave 
*ne_enclave,

+    struct ne_user_memory_region *mem_region)
+{
+    if (ne_enclave->mm != current->mm)
+    return -EIO;
+
+    if ((mem_region->memory_size % NE_MIN_MEM_REGION_SIZE) != 0) {
+    dev_err_ratelimited(ne_misc_dev.this_device,
+    "Mem size not multiple of 2 MiB\n");
+
+    return -EINVAL;


Can we make this an error that gets propagated to user space 
explicitly? I'd rather have a clear error return value of this 
function than a random message in dmesg.


We can make this change and will add NE error codes specific to the memory
checks, as for the other call paths in the series, e.g. enclave CPU(s) setup.





+    }
+
+    if ((mem_region->userspace_addr & (NE_MIN_MEM_REGION_SIZE - 1)) ||


This logic already relies on the fact that NE_MIN_MEM_REGION_SIZE is a 
power of two. Can you do the same above on the memory_size check?


Done.



+    !access_ok((void __user *)(unsigned 
long)mem_region->userspace_addr,

+   mem_region->memory_size)) {
+    dev_err_ratelimited(ne_misc_dev.this_device,
+    "Invalid user space addr range\n");
+
+    return -EINVAL;


Same comment again. Return different errors for different conditions, 
so that user space has a chance to print proper errors to its users.


Also, don't we have to check alignment of userspace_addr as well?



Would need an alignment check for 2 MiB at least, yes.
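
A sketch of the checks being agreed on here, relying on
NE_MIN_MEM_REGION_SIZE being a power of two (the error code is illustrative;
dedicated NE error codes are planned per the earlier comment):

	if (!IS_ALIGNED(mem_region->memory_size, NE_MIN_MEM_REGION_SIZE) ||
	    !IS_ALIGNED(mem_region->userspace_addr, NE_MIN_MEM_REGION_SIZE)) {
		dev_err_ratelimited(ne_misc_dev.this_device,
				    "Mem size / addr not 2 MiB aligned\n");

		return -EINVAL;
	}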


+    }
+
+    return 0;
+}
+
+/**
+ * ne_set_user_memory_region_ioctl - Add user space memory region to 
the slot

+ * associated with the current enclave.
+ *
+ * This function gets called with the ne_enclave mutex held.
+ *
+ * @ne_enclave: private data associated with the current enclave.
+ * @mem_region: user space memory region to be associated with the 
given slot.

+ *
+ * @returns: 0 on success, negative return value on failure.
+ */
+static int ne_set_user_memory_region_ioctl(struct ne_enclave 
*ne_enclave,

+    struct ne_user_memory_region *mem_region)
+{
+    struct ne_pci_dev_cmd_reply cmd_reply = {};
+    long gup_rc = 0;
+    unsigned long i = 0;
+    struct ne_mem_region *ne_mem_region = NULL;
+    unsigned long nr_phys_contig_mem_regions = 0;
+    unsigned long nr_pinned_pages = 0;
+    struct page **phys_contig_mem_regions = NULL;
+    int rc = -EINVAL;
+    struct slot_add_mem_req slot_add_mem_req = {};
+
+    rc = ne_sanity_check_user_mem_region(ne_enclave, mem_region);
+    if (rc < 0)
+    return rc;
+
+    ne_mem_region = kzalloc(sizeof(*ne_mem_region), GFP_KERNEL);
+    if (!ne_mem_region)
+    return -ENOMEM;
+
+    /*
+ * TODO: Update nr_pages value to handle contiguous virtual address
+ * ranges mapped to non-contiguous physical regions. Hugetlbfs 
can 
