Re: Live patching on ARM64

2021-03-18 Thread Singh, Balbir
On 15/1/21 11:33 pm, Mark Rutland wrote: > On Thu, Jan 14, 2021 at 04:07:55PM -0600, Madhavan T. Venkataraman wrote: >> Hi all, >> >> My name is Madhavan Venkataraman. > > Hi Madhavan, > >> Microsoft is very interested in Live Patching support for ARM64. >> On behalf of Microsoft, I would like

Re: [PATCH] mm: memcontrol: switch to rstat fix

2021-03-15 Thread Singh, Balbir
On 16/3/21 10:41 am, Johannes Weiner wrote: > Fix a sleep in atomic section problem: wb_writeback() takes a spinlock > and calls wb_over_bg_thresh() -> mem_cgroup_wb_stats, but the regular > rstat flushing function called from in there does lockbreaking and may > sleep. Switch to the atomic
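The bug class Johannes describes lends itself to a sketch. Below is a self-contained userspace analogy, not the kernel's code (flush_stats()/flush_stats_atomic() are invented names): a flush routine that can block must never run under a spinlock, so callers in atomic context get a variant that bails out instead of sleeping.

#include <pthread.h>

static pthread_mutex_t stats_mutex = PTHREAD_MUTEX_INITIALIZER;
static long stats_delta;	/* pending, not yet folded in */
static long stats_total;	/* flushed, globally visible value */

/* Regular flush: may block on the mutex, i.e. it can "sleep".
 * Only safe from contexts that are allowed to block. */
static void flush_stats(void)
{
	pthread_mutex_lock(&stats_mutex);
	stats_total += stats_delta;
	stats_delta = 0;
	pthread_mutex_unlock(&stats_mutex);
}

/* Atomic variant: never blocks. If the lock is contended, skip the
 * flush and tolerate slightly stale stats -- the analogue of the
 * switch made in the fix above. */
static void flush_stats_atomic(void)
{
	if (pthread_mutex_trylock(&stats_mutex))
		return;
	stats_total += stats_delta;
	stats_delta = 0;
	pthread_mutex_unlock(&stats_mutex);
}

int main(void)
{
	stats_delta = 42;
	flush_stats_atomic();	/* safe even where blocking is forbidden */
	flush_stats();
	return stats_total == 42 ? 0 : 1;
}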

Re: [PATCH v2 1/2] mm/memcg: rename mem_cgroup_split_huge_fixup to split_page_memcg

2021-03-10 Thread Singh, Balbir
On 11/3/21 9:00 am, Hugh Dickins wrote: > On Thu, 11 Mar 2021, Singh, Balbir wrote: >> On 9/3/21 7:28 pm, Michal Hocko wrote: >>> On Tue 09-03-21 09:37:29, Balbir Singh wrote: >>>> On 4/3/21 6:40 pm, Zhou Guanghui wrote: >>> [...]

Re: [PATCH v2 1/2] mm/memcg: rename mem_cgroup_split_huge_fixup to split_page_memcg

2021-03-10 Thread Singh, Balbir
On 9/3/21 7:28 pm, Michal Hocko wrote: > On Tue 09-03-21 09:37:29, Balbir Singh wrote: >> On 4/3/21 6:40 pm, Zhou Guanghui wrote: > [...] >>> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE >>> /* >>> - * Because page_memcg(head) is not set on compound tails, set it now. >>> + * Because page_memcg(head) is

Re: [PATCH v2 1/2] mm/memcg: rename mem_cgroup_split_huge_fixup to split_page_memcg

2021-03-08 Thread Singh, Balbir
On 4/3/21 6:40 pm, Zhou Guanghui wrote: > Rename mem_cgroup_split_huge_fixup to split_page_memcg and explicitly > pass in page number argument. > > In this way, the interface name is more common and can be used by > potential users. In addition, the complete info(memcg and flag) of > the memcg
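As a reading aid, here is a rough model of what the renamed helper does, written against a stand-in type rather than the kernel's struct page (the real signature and fields differ):

#include <stddef.h>

struct mem_cgroup;			/* opaque stand-in */

struct fake_page {
	struct mem_cgroup *memcg_data;	/* set on the head page only */
};

/*
 * Model of split_page_memcg(): page_memcg() is not set on compound
 * tail pages, so when a compound page is split, copy the head's memcg
 * to every tail. The explicit page-count argument is what makes the
 * interface usable beyond THP splitting.
 */
void split_page_memcg(struct fake_page *head, unsigned int nr)
{
	for (unsigned int i = 1; i < nr; i++)
		head[i].memcg_data = head[0].memcg_data;
}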

Re: [PATCH v17 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page

2021-03-04 Thread Singh, Balbir
On 26/2/21 12:21 am, Muchun Song wrote: > Every HugeTLB has more than one struct page structure. We __know__ that > we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures > to store metadata associated with each HugeTLB. > > There are a lot of struct page structures associated
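To make the overhead concrete, the arithmetic below assumes the common x86-64 configuration (4 KiB base pages, 64-byte struct page, 2 MiB HugeTLB pages); these numbers are illustrative assumptions, not taken from the mail:

#include <stdio.h>

int main(void)
{
	const long base_page = 4096;		 /* assumed base page size */
	const long huge_page = 2L * 1024 * 1024; /* assumed HugeTLB size */
	const long struct_page_size = 64;	 /* assumed sizeof(struct page) */

	long nr_struct_pages = huge_page / base_page;		  /* 512 */
	long vmemmap_bytes = nr_struct_pages * struct_page_size; /* 32 KiB */
	long vmemmap_pages = vmemmap_bytes / base_page;		  /* 8 */

	/* The series remaps the tail vmemmap pages onto a shared frame
	 * and frees the remainder -- all but one or two of the eight,
	 * depending on the series version. */
	printf("struct pages per 2M hugepage:  %ld\n", nr_struct_pages);
	printf("vmemmap pages per 2M hugepage: %ld\n", vmemmap_pages);
	return 0;
}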

Re: [PATCH v17 0/9] Free some vmemmap pages of HugeTLB page

2021-03-03 Thread Singh, Balbir
On 26/2/21 12:21 am, Muchun Song wrote: > Hi all, > > This patch series will free some vmemmap pages(struct page structures) > associated with each hugetlbpage when preallocated to save memory. > > In order to reduce the difficulty of the first version of code review. > From this version, we

Re: [PATCH v17 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c

2021-03-03 Thread Singh, Balbir
On 26/2/21 12:21 am, Muchun Song wrote: > Move bootmem info registration common API to individual bootmem_info.c. > And we will use {get,put}_page_bootmem() to initialize the page for the > vmemmap pages or free the vmemmap pages to buddy in the later patch. > So move them out of

Re: [PATCH v4 0/5] Next revision of the L1D flush patches

2021-01-25 Thread Singh, Balbir
On Fri, 2021-01-08 at 23:10 +1100, Balbir Singh wrote: > Implement a mechanism that allows tasks to conditionally flush > their L1D cache (mitigation mechanism suggested in [2]). The previous > posts of these patches were sent for inclusion (see [3]) and were not > included due to the concern for

Re: [tip:x86/pti 4/5] arch/x86/mm/tlb.c:319:6: warning: variable 'cpu' set but not used

2021-01-20 Thread Singh, Balbir
On Sat, 2021-01-16 at 11:21 +0800, kernel test robot wrote: > > tree: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/pti > head: 767d46ab566dd489733666efe48732d523c8c332 > commit: b6724f118d44606fddde391ba7527526b3cad211 [4/5] prctl: Hook L1D > flushing in via prctl >

Re: [PATCH v3 1/5] x86/mm: change l1d flush runtime prctl behaviour

2020-12-04 Thread Singh, Balbir
On Fri, 2020-12-04 at 22:07 +0100, Thomas Gleixner wrote: > On Fri, Nov 27 2020 at 17:59, Balbir Singh wrote: >

Re: [PATCH v3 3/5] x86/mm: Optionally flush L1D on context switch

2020-12-04 Thread Singh, Balbir
On Fri, 2020-12-04 at 22:21 +0100, Thomas Gleixner wrote: > On Fri, Nov 27 2020 at 17:59, Balbir Singh wrote: > >

Re: [PATCH -tip 03/32] sched/fair: Fix pick_task_fair crashes due to empty rbtree

2020-11-20 Thread Singh, Balbir
On 18/11/20 10:19 am, Joel Fernandes (Google) wrote: > From: Peter Zijlstra > > pick_next_entity() is passed curr == NULL during core-scheduling. Due to > this, if the rbtree is empty, the 'left' variable is set to NULL within > the function. This can cause crashes within the function. > > This
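The shape of the fix is easy to sketch. The following is illustrative only, with a stand-in type instead of the scheduler's real structures: with curr == NULL and an empty rbtree there is simply nothing to pick, so the function must return NULL rather than dereference it.

struct entity_stub {
	int weight;	/* placeholder field */
};

/*
 * Illustrative guard: under core scheduling, curr may be NULL, and an
 * empty rbtree means the leftmost entity is NULL too. Without the
 * early return, the pick logic below would dereference NULL.
 */
struct entity_stub *pick_next_entity_sketch(struct entity_stub *left,
					    struct entity_stub *curr)
{
	if (!left)
		return curr;	/* may itself be NULL; callers must cope */
	/* ... the normal "leftmost vs curr" pick logic would run here ... */
	return left;
}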

Re: [PATCH -tip 02/32] sched: Introduce sched_class::pick_task()

2020-11-19 Thread Singh, Balbir
On 18/11/20 10:19 am, Joel Fernandes (Google) wrote: > From: Peter Zijlstra > > Because sched_class::pick_next_task() also implies > sched_class::set_next_task() (and possibly put_prev_task() and > newidle_balance) it is not state invariant. This makes it unsuitable > for remote task selection.

Re: [PATCH -tip 01/32] sched: Wrap rq::lock access

2020-11-19 Thread Singh, Balbir
On 18/11/20 10:19 am, Joel Fernandes (Google) wrote: > From: Peter Zijlstra > > In preparation of playing games with rq->lock, abstract the thing > using an accessor. > Could you clarify "games"? I presume the intention is to redefine the scope of the lock based on whether core sched is enabled
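For context, the accessor pattern under discussion looks roughly like the sketch below; rq_lockp() is the name used in the series, while the types here are stand-ins, not the kernel's. The point is a single indirection for every rq->lock access, so core scheduling can later redirect all SMT siblings of a core to one shared lock:

typedef struct { int locked; } lock_stub_t;	/* stand-in for raw_spinlock_t */

struct rq_stub {
	lock_stub_t lock;	/* private per-runqueue lock */
	lock_stub_t *core_lock;	/* non-NULL when core scheduling is active */
};

static inline lock_stub_t *rq_lockp(struct rq_stub *rq)
{
	/* One indirection point: hand out the shared core-wide lock when
	 * core scheduling is enabled, the private lock otherwise. */
	return rq->core_lock ? rq->core_lock : &rq->lock;
}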

Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()

2020-09-30 Thread Singh, Balbir
On 1/10/20 9:49 am, Singh, Balbir wrote: > On 1/10/20 7:38 am, Thomas Gleixner wrote: > >> >> >> >> On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote: >>> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote: >>>> On Wed, Sep 30 202

Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()

2020-09-30 Thread Singh, Balbir
On 1/10/20 7:38 am, Thomas Gleixner wrote: > On Wed, Sep 30 2020 at 20:35, Peter Zijlstra wrote: >> On Wed, Sep 30, 2020 at 08:00:59PM +0200, Thomas Gleixner wrote: >>> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote: On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner

Re: [PATCH -next for tip:x86/pti] x86/tlb: drop unneeded local vars in enable_l1d_flush_for_task()

2020-09-30 Thread Singh, Balbir
On 1/10/20 4:00 am, Thomas Gleixner wrote: > On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote: >> On Wed, Sep

Re: [PATCH v2 4/5] prctl: Hook L1D flushing in via prctl

2020-07-29 Thread Singh, Balbir
On 29/7/20 11:14 pm, Tom Lendacky wrote: > > > On 7/28/20 7:11 PM, Balbir Singh wrote: >> Use the existing PR_GET/SET_SPECULATION_CTRL API to expose the L1D >> flush capability. For L1D flushing PR_SPEC_FORCE_DISABLE and >> PR_SPEC_DISABLE_NOEXEC are not supported. >> >> There is also no seccomp
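For readers wondering what the opt-in looks like from userspace: the control eventually shipped in mainline as PR_SPEC_L1D_FLUSH under PR_SET_SPECULATION_CTRL. A minimal sketch, with the constants defined as fallbacks in case the toolchain headers predate them:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
#define PR_SET_SPECULATION_CTRL	53
#endif
#ifndef PR_SPEC_ENABLE
#define PR_SPEC_ENABLE		(1UL << 1)
#endif
#ifndef PR_SPEC_L1D_FLUSH
#define PR_SPEC_L1D_FLUSH	2	/* value from later mainline headers */
#endif

int main(void)
{
	/* Opt this task in to an L1D cache flush on context switch.
	 * Kernels without the feature fail with EINVAL. */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH,
		  PR_SPEC_ENABLE, 0, 0) != 0)
		perror("prctl(PR_SPEC_L1D_FLUSH)");
	return 0;
}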

Re: [GIT PULL] x86/mm changes for v5.8

2020-06-02 Thread Singh, Balbir
On Tue, 2020-06-02 at 16:28 -0700, Linus Torvalds wrote: > On Tue, Jun 2, 2020 at 4:01 PM

Re: [GIT PULL] x86/mm changes for v5.8

2020-06-02 Thread Singh, Balbir
On Tue, 2020-06-02 at 12:14 -0700, Linus Torvalds wrote: > > On Tue, Jun 2, 2020 at 11:29 AM Thomas Gleixner wrote: > > > > It's trivial enough to fix. We have a static key already which is > > telling us whether SMT scheduling is active. > > .. but should we do it here, in switch_mm() in the

Re: [GIT PULL] x86/mm changes for v5.8

2020-06-02 Thread Singh, Balbir
On Mon, 2020-06-01 at 19:35 -0700, Linus Torvalds wrote: > > On Mon, Jun 1, 2020 at 6:06 PM Balbir Singh wrote: > > > > I think apps can do this independently today as in do the flush > > via software fallback in the app themselves. > > Sure, but they can't force the kernel to do crazy things
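The software fallback mentioned here is the classic trick of touching a buffer at least as large as the L1D so its previous contents are evicted. A best-effort userspace sketch; the cache geometry is an assumption and nothing architecturally guarantees full eviction:

#include <stddef.h>

#define L1D_SIZE	(32 * 1024)	/* assumption: typical x86 L1D */
#define CACHE_LINE	64		/* assumption: 64-byte lines */

/* Write across 2x the L1D size, one store per cache line; volatile
 * keeps the compiler from optimizing the stores away. */
static volatile unsigned char flush_buf[2 * L1D_SIZE];

void l1d_flush_sw(void)
{
	for (size_t i = 0; i < sizeof(flush_buf); i += CACHE_LINE)
		flush_buf[i]++;
}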

Re: linux-next: manual merge of the akpm-current tree with the tip tree

2020-05-25 Thread Singh, Balbir
On Mon, 2020-05-25 at 21:04 +1000, Stephen Rothwell wrote: > Hi all, > > Today's linux-next merge of the akpm-current tree got a conflict in: > > arch/x86/mm/tlb.c > > between commit: > > 83ce56f712af ("x86/mm: Refactor cond_ibpb() to support other use cases") > > from the tip tree and

Re: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and hibernation

2020-05-21 Thread Singh, Balbir
> @@ -1057,7 +1063,7 @@ static int xen_translate_vdev(int vdevice, int *minor, > unsigned int *offset) > case XEN_SCSI_DISK5_MAJOR: > case XEN_SCSI_DISK6_MAJOR: > case XEN_SCSI_DISK7_MAJOR: > - *offset = (*minor / PARTS_PER_DISK) + >

Re: [PATCH v2 4/4] arch/x86: Add L1D flushing Documentation

2020-05-19 Thread Singh, Balbir
On Tue, 2020-05-19 at 08:39 -0700, Randy Dunlap wrote: > > Hi-- > > Comments below. Sorry about the delay. > > On 4/5/20 8:19 PM, Balbir Singh wrote: > > Add documentation of l1d flushing, explain the need for the > > feature and how it can be used. > > > > Signed-off-by: Balbir Singh > > ---

Re: [PATCH v2 3/4] arch/x86: Optionally flush L1D on context switch

2020-05-19 Thread Singh, Balbir
On Tue, 2020-04-07 at 11:26 -0700, Kees Cook wrote: > > > On Mon, Apr 06, 2020 at 01:19:45PM +1000, Balbir Singh wrote: > > Implement a mechanism to selectively flush the L1D cache. The goal is to > > allow tasks that are paranoid due to the recent snoop assisted data sampling > >

Re: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume during hibernation

2020-05-19 Thread Singh, Balbir
On Tue, 2020-05-19 at 23:26, Anchal Agarwal wrote: > Signed-off--by: Thomas Gleixner The Signed-off-by line needs to be fixed (hint: you have --). Balbir Singh

Re: [PATCH v6 5/6] Optionally flush L1D on context switch

2020-05-14 Thread Singh, Balbir
On Wed, 2020-05-13 at 17:27 +0200, Thomas Gleixner wrote: > Balbir Singh writes: > > Implement a mechanism to

Re: [PATCH v6 1/6] arch/x86/kvm: Refactor l1d flush lifecycle management

2020-05-14 Thread Singh, Balbir
On Wed, 2020-05-13 at 15:53 +0200, Thomas Gleixner wrote: > > > Balbir Singh writes: > > +++ b/arch/x86/kernel/l1d_flush.c > > @@ -0,0 +1,36 @@ > > Lacks > > +// SPDX-License-Identifier: GPL-2.0-only > Agreed, it should match the license in arch/x86/kvm/vmx/vmx.c Thanks, Balbir
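For reference, the agreed fix is a one-line header. Per Documentation/process/license-rules.rst, C source files carry the identifier in // style on the first line:

// SPDX-License-Identifier: GPL-2.0-only
/* First line of the new arch/x86/kernel/l1d_flush.c, matching the
 * license of the vmx.c code it was split out of. */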

Re: [PATCH v6 1/6] arch/x86/kvm: Refactor l1d flush lifecycle management

2020-05-14 Thread Singh, Balbir
On Wed, 2020-05-13 at 15:35 +0200, Thomas Gleixner wrote: > Balbir Singh writes: > > Subject: [PATCH v6 1/6]

Re: [PATCH v6 5/6] Optionally flush L1D on context switch

2020-05-14 Thread Singh, Balbir
On Wed, 2020-05-13 at 17:04 +0200, Thomas Gleixner wrote: > > > Balbir Singh writes: > > > > + if (prev_mm & LAST_USER_MM_L1D_FLUSH) > > + arch_l1d_flush(0); /* Just flush, don't populate the > > TLB */ > > Bah. I fundamentally hate tail comments. They are just disturbing the

Re: [PATCH v6 5/6] Optionally flush L1D on context switch

2020-05-14 Thread Singh, Balbir
On Wed, 2020-05-13 at 18:16 +0200, Thomas Gleixner wrote: > Balbir Singh writes: > > This part: > > > --- a/include/uapi/linux/prctl.h > > +++ b/include/uapi/linux/prctl.h > > @@ -238,4 +238,8 @@ struct prctl_mm_map { > > #define PR_SET_IO_FLUSHER 57 > > #define PR_GET_IO_FLUSHER

Re: [PATCH v6 6/6] Documentation: Add L1D flushing Documentation

2020-05-13 Thread Singh, Balbir
On Wed, 2020-05-13 at 15:33 +0200, Thomas Gleixner wrote: > > > Balbir Singh writes: > > +With an increasing number of vulnerabilities being reported around > > data > > +leaks from L1D, a new user space mechanism to flush the L1D cache > > on > > +context switch is added to the kernel. This

Re: [PATCH v5 5/6] Optionally flush L1D on context switch

2020-05-04 Thread Singh, Balbir
On Mon, 2020-05-04 at 11:39 -0700, Kees Cook wrote: > > On Mon, May 04, 2020 at 02:13:42PM +1000, Balbir Singh wrote: > > Implement a mechanism to selectively flush the L1D cache. The goal > > is to > > allow tasks that are paranoid due to the recent snoop assisted data > > sampling > >

Re: [PATCH v4 1/6] arch/x86/kvm: Refactor l1d flush lifecycle management

2020-04-30 Thread Singh, Balbir
On Sat, 2020-04-25 at 11:49 +1000, Balbir Singh wrote: > On Fri, 2020-04-24 at 13:59 -0500, Tom Lendacky wrote: > > > > On 4/23/20 9:01 AM, Balbir Singh wrote: > > > Split out the allocation and free routines to be used in a follow > > > up set of patches (to reuse for L1D flushing). > > > > > >

Re: [PATCH] nvme-pci: Shutdown when removing dead controller

2019-10-07 Thread Singh, Balbir
On Thu, 2019-10-03 at 15:13 -0400, Tyler Ramer wrote: > Always shut down the controller when nvme_remove_dead_controller is > reached. > > It's possible for nvme_remove_dead_controller to be called as part of a > failed reset, when there is a bad NVME_CSTS. The controller won't > be coming back

Re: [PATCH] nvme-pci: Shutdown when removing dead controller

2019-10-04 Thread Singh, Balbir
On Fri, 2019-10-04 at 11:36 -0400, Tyler Ramer wrote: > Here's a failure we had which represents the issue the patch is > intended to solve: > > Aug 26 15:00:56 testhost kernel: nvme nvme4: async event result 00010300 > Aug 26 15:01:27 testhost kernel: nvme nvme4: controller is down; will >

Re: [RFC 1/1] Add dm verity root hash pkcs7 sig validation.

2019-05-20 Thread Singh, Balbir
On 5/21/19 7:54 AM, Jaskaran Khurana wrote: > Adds in-kernel pkcs7 signature checking for the roothash of > the dm-verity hash tree. > > The verification is to support cases where the roothash is not secured by > Trusted Boot, UEFI Secureboot or similar technologies. > One of the use cases for

Re: [PATCH v7 2/3] arm64: implement ftrace with regs

2019-01-23 Thread Singh, Balbir
On 1/23/19 2:09 AM, Torsten Duwe wrote: > Hi Balbir! > Hi, Torsten! > On Tue, Jan 22, 2019 at 02:39:32PM +1300, Singh, Balbir wrote: >> >> On 1/19/19 5:39 AM, Torsten Duwe wrote: >>> + */ >>> +ftrace_common_return: >>> + /* restore function

Re: [PATCH v7 2/3] arm64: implement ftrace with regs

2019-01-21 Thread Singh, Balbir
On 1/19/19 5:39 AM, Torsten Duwe wrote: > Once gcc8 adds 2 NOPs at the beginning of each function, replace the > first NOP thus generated with a quick LR saver (move it to scratch reg > x9), so the 2nd replacement insn, the call to ftrace, does not clobber > the value. Ftrace will then generate
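The two-NOP scheme is easy to reproduce outside the kernel (which drives it via -fpatchable-function-entry=2 in its CFLAGS rather than a per-function attribute). Compiling the hypothetical function below for arm64 with GCC 8+ and disassembling it shows the pair of NOPs at the function entry; with ftrace active, the kernel rewrites them to 'mov x9, x30' followed by a call, precisely so the branch-and-link cannot clobber the live return address in x30:

/* Build and inspect with:  gcc -O2 -c demo.c && objdump -d demo.o */
__attribute__((patchable_function_entry(2)))
int traced_demo(int x)
{
	return x + 1;
}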

Re: [PATCH 0/10] psi: pressure stall information for CPU, memory, and IO v2

2018-07-25 Thread Singh, Balbir
On 7/25/18 1:15 AM, Johannes Weiner wrote: > Hi Balbir, > > On Tue, Jul 24, 2018 at 07:14:02AM +1000, Balbir Singh wrote: >> Does the mechanism scale? I am a little concerned about how frequently >> this infrastructure is monitored/read/acted upon. > > I expect most users to poll in the

Re: Showing /sys/fs/cgroup/memory/memory.stat very slow on some machines

2018-07-25 Thread Singh, Balbir
On 7/19/18 3:40 AM, Bruce Merry wrote: > On 18 July 2018 at 17:49, Shakeel Butt wrote: >> On Wed, Jul 18, 2018 at 8:37 AM Bruce Merry wrote: >>> That sounds promising. Is there any way to tell how many zombies there >>> are, and is there any way to deliberately create zombies? If I can >>>

Reliability of serial console driver

2001-02-21 Thread Singh Balbir
Hello All, I am not on the list, so please reply to me and the list with your comments. I was going through some code in serial.c and noticed that there are page allocations/deallocations in rs_open and startup (serial.c). These allocations could fail. This affects reliability in some
