Re: [PATCH] mm: Expose lazy vfree pages to control via sysctl

2019-01-21 Thread Ashish Mhetre
The issue is not seen on the new kernel, so this patch won't be needed. Thanks. On 06/01/19 2:12 PM, Ashish Mhetre wrote: Matthew, this issue was last reported in September 2018 on K4.9. I verified that the optimization patches mentioned by you were not present in our downstream kernel when we faced …

Re: [PATCH] mm: Expose lazy vfree pages to control via sysctl

2019-01-06 Thread Ashish Mhetre
…/19 11:33 PM, Matthew Wilcox wrote: On Fri, Jan 04, 2019 at 09:05:41PM +0530, Ashish Mhetre wrote: From: Hiroshi Doyu The purpose of lazy_max_pages is to gather virtual address space till it reaches the lazy_max_pages limit and then purge with a TLB flush, and hence reduce the number of global TLB …

[PATCH] mm: Expose lazy vfree pages to control via sysctl

2019-01-04 Thread Ashish Mhetre
…r possible solution would be to configure lazy_vfree_pages through the kernel cmdline. Signed-off-by: Hiroshi Doyu Signed-off-by: Ashish Mhetre --- kernel/sysctl.c | 8 mm/vmalloc.c | 5 - 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/kernel/sysctl.c b/kernel/sysc…

Re: [PATCH] scatterlist: Update size type to support greater than 4GB size.

2018-12-24 Thread Ashish Mhetre
We don't have a real use-case for this. Understanding the consequences, we are questioning the patch in downstream itself. Please ignore this patch for now. On 12/12/18 1:36 PM, Christoph Hellwig wrote: scatterlist elements longer than 4GB sound odd. Please submit it in a series with your …

Re: [PATCH] scatterlist: Update size type to support greater than 4GB size.

2018-12-11 Thread Ashish Mhetre
On 12/12/18 12:19 PM, Sagi Grimberg wrote: struct nvme_sgl_desc { __le64 addr; - __le32 length; + __le64 length; __u8 rsvd[3]; __u8 type; }; Isn't this a device or protocol defined data structure? You can't just change it like this. You're …

[PATCH] scatterlist: Update size type to support greater than 4GB size.

2018-12-11 Thread Ashish Mhetre
…(struct scatterlist) has changed from 28 bytes to 40 bytes, so updating NVME_MAX_SEGS from 127 to 88 to correspond to the original nvme alloc size value. Signed-off-by: Krishna Reddy Signed-off-by: Ashish Mhetre --- crypto/shash.c | 2 +- drivers/ata/libata-sff.c | 2 +- drivers/mmc/ho…

[PATCH] kernfs: Add check for NULL pointer before writing to it.

2018-11-20 Thread Ashish Mhetre
…the strlcpy would lead to illegal memory access. This issue is reported by coverity, as strlcpy might end up using a NULL buffer with a non-zero buf_length value. To avoid this, add a check and return -EINVAL in this case. Signed-off-by: Bo Yan Signed-off-by: Ashish Mhetre --- fs/kernfs/dir.c | 3 +++ 1 …

[PATCH V3] arm64: Don't flush tlb while clearing the accessed bit

2018-10-29 Thread Ashish Mhetre
…Signed-off-by: Alex Van Brunt Signed-off-by: Ashish Mhetre --- arch/arm64/include/asm/pgtable.h | 20 1 file changed, 20 insertions(+) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 2ab2031..080d842 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/…

[PATCH V2] arm64: Don't flush tlb while clearing the accessed bit

2018-10-29 Thread Ashish Mhetre
…% ~ 40%. So, for performance optimisation, don't flush the TLB when clearing the accessed bit on ARM64. x86 made the same optimization even though its TLB invalidate is much faster, as it doesn't broadcast to other CPUs. Signed-off-by: Alex Van Brunt Signed-off-by: Ashish Mhetre --- v2: Added comments …

[PATCH] arm64: Don't flush tlb while clearing the accessed bit

2018-10-25 Thread Ashish Mhetre
…bit on ARM64. x86 made the same optimization even though its TLB invalidate is much faster, as it doesn't broadcast to other CPUs. Signed-off-by: Alex Van Brunt Signed-off-by: Ashish Mhetre --- arch/arm64/include/asm/pgtable.h | 7 +++ 1 file changed, 7 insertions(+) diff --git a/arch…
