The issue is not seen on the new kernel. This patch won't be needed. Thanks.
On 06/01/19 2:12 PM, Ashish Mhetre wrote:
Matthew, this issue was last reported in September 2018 on K4.9.
I verified that the optimization patches mentioned by you were not
present in our downstream kernel when we faced this issue.
/19 11:33 PM, Matthew Wilcox wrote:
On Fri, Jan 04, 2019 at 09:05:41PM +0530, Ashish Mhetre wrote:
From: Hiroshi Doyu
The purpose of lazy_max_pages is to gather virtual address space till it
reaches the lazy_max_pages limit and then purge with a TLB flush, and hence
reduce the number of global TLB flushes. Another possible solution would be
to configure lazy_vfree_pages through the kernel cmdline.
Signed-off-by: Hiroshi Doyu
Signed-off-by: Ashish Mhetre
---
 kernel/sysctl.c | 8 ++++++++
 mm/vmalloc.c    | 5 ++++-
 2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
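For context, the purge threshold scales with the number of online CPUs. A user-space sketch modeled on the lazy_max_pages() logic in mm/vmalloc.c at the time (the 32 MB constant and the 4 KB PAGE_SIZE here are assumptions for illustration, not a definitive copy of the kernel code):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/* 1-based position of the highest set bit, like the kernel's fls(). */
static unsigned long fls_u(unsigned int x)
{
    unsigned long log = 0;
    while (x) {
        log++;
        x >>= 1;
    }
    return log;
}

/* Sketch of the lazy purge threshold: roughly 32 MB worth of pages
 * per log2(online CPUs), so more CPUs allow more lazily-freed vmap
 * space to accumulate before a global TLB flush is issued. */
static unsigned long lazy_max_pages_sketch(unsigned int online_cpus)
{
    return fls_u(online_cpus) * (32UL * 1024 * 1024 / PAGE_SIZE);
}
```

Making this a tunable (via sysctl or cmdline, as the patch proposes) trades TLB-flush frequency against how much dead virtual address space may sit unpurged.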
We don't have a real use-case for this. Understanding the consequences,
we are questioning the patch in downstream itself.
Please ignore this patch for now.
On 12/12/18 1:36 PM, Christoph Hellwig wrote:
scatterlist elements longer than 4GB sound odd. Please submit it
in a series with your
On 12/12/18 12:19 PM, Sagi Grimberg wrote:
struct nvme_sgl_desc {
__le64 addr;
- __le32 length;
+ __le64 length;
__u8 rsvd[3];
__u8 type;
};
Isn't this a device or protocol defined datastructure? You can't just
change it like this.
You're
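To illustrate the objection: the SGL Data Block descriptor is a fixed 16-byte wire format defined by the NVMe specification, so widening the length field changes the struct size and the controller would misparse it. A minimal sketch (the struct names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* NVMe SGL descriptor as defined by the NVMe spec: fixed 16 bytes. */
struct nvme_sgl_desc_spec {
    uint64_t addr;     /* __le64 addr   */
    uint32_t length;   /* __le32 length */
    uint8_t  rsvd[3];
    uint8_t  type;
} __attribute__((packed));

/* The proposed change: widening length to 64 bits grows the
 * descriptor past the spec-defined 16 bytes on the wire. */
struct nvme_sgl_desc_widened {
    uint64_t addr;
    uint64_t length;   /* proposed __le64: breaks the wire layout */
    uint8_t  rsvd[3];
    uint8_t  type;
} __attribute__((packed));
```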
sizeof(struct scatterlist) has changed from 28 bytes to 40 bytes, so
NVME_MAX_SEGS is updated from 127 to 88 to correspond to the original
nvme alloc size value.
Signed-off-by: Krishna Reddy
Signed-off-by: Ashish Mhetre
---
 crypto/shash.c           | 2 +-
 drivers/ata/libata-sff.c | 2 +-
 drivers/mmc/ho
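The 127 → 88 figure is consistent with keeping the per-command allocation at roughly its original byte budget once each scatterlist entry grows from 28 to 40 bytes. A back-of-envelope sketch (the helper name is illustrative, not the driver's):

```c
#include <assert.h>

/* Original budget: 127 segments * 28 bytes = 3556 bytes.
 * With 40-byte entries, 88 segments * 40 = 3520 bytes still fits
 * that budget, while 89 * 40 = 3560 would exceed it. */
static unsigned int max_segs_for_budget(unsigned int budget_bytes,
                                        unsigned int entry_size)
{
    return budget_bytes / entry_size;
}
```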
Passing a NULL buffer to strlcpy would lead to illegal memory access.
This issue is reported by Coverity, as strlcpy might end up using a NULL
buffer with a non-zero buf_length value.
To avoid this, add a check and return -EINVAL in this case.
Signed-off-by: Bo Yan
Signed-off-by: Ashish Mhetre
---
 fs/kernfs/dir.c | 3 +++
 1 file changed, 3 insertions(+)
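A minimal user-space sketch of the guard being proposed (the function and parameter names are illustrative, not the exact kernfs ones, and strncpy stands in for the kernel's strlcpy):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Illustrative guard: reject a NULL destination with a non-zero
 * length before the copy would dereference it. */
static int copy_name(char *buf, size_t buf_len, const char *name)
{
    if (!buf && buf_len)
        return -EINVAL;   /* the added check from the patch */
    if (buf && buf_len) {
        strncpy(buf, name, buf_len - 1);
        buf[buf_len - 1] = '\0';
    }
    return (int)strlen(name);  /* strlcpy-style: full source length */
}
```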
Signed-off-by: Alex Van Brunt
Signed-off-by: Ashish Mhetre
---
 arch/arm64/include/asm/pgtable.h | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2ab2031..080d842 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
% ~ 40%.
So as a performance optimisation, don't flush the TLB when clearing the
accessed bit on ARM64.
x86 made the same optimization even though its TLB invalidate is much
faster, as it doesn't broadcast to other CPUs.
Signed-off-by: Alex Van Brunt
Signed-off-by: Ashish Mhetre
---
v2: Added comments
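A user-space model of the optimisation being described: clear the young (Access Flag) bit without issuing the broadcast TLB invalidation. All names here are illustrative; the actual change presumably lands in ptep_clear_flush_young() in pgtable.h, and the cost being avoided is the DSB plus broadcast TLBI:

```c
#include <assert.h>
#include <stdint.h>

#define PTE_AF (1ULL << 10)   /* arm64 Access Flag bit */

static int tlb_flushes;       /* counts simulated broadcast invalidations */

/* Test-and-clear the accessed (young) bit in a software PTE. */
static int test_and_clear_young(uint64_t *pte)
{
    int young = !!(*pte & PTE_AF);
    *pte &= ~PTE_AF;
    return young;
}

/* Old behaviour: clear the bit, then broadcast a TLB invalidation. */
static int clear_flush_young_old(uint64_t *pte)
{
    int young = test_and_clear_young(pte);
    if (young)
        tlb_flushes++;        /* expensive: broadcast to all CPUs on arm64 */
    return young;
}

/* Optimised behaviour: skip the flush. A stale AF in a TLB entry only
 * means the page may look young slightly longer than it should, which
 * is harmless for page aging. */
static int clear_flush_young_new(uint64_t *pte)
{
    return test_and_clear_young(pte);
}
```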
bit on ARM64.
x86 made the same optimization even though its TLB invalidate is much
faster, as it doesn't broadcast to other CPUs.
Signed-off-by: Alex Van Brunt
Signed-off-by: Ashish Mhetre
---
 arch/arm64/include/asm/pgtable.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h