[PATCH 1/2 v2] arm, arm64: change_memory_common with numpages == 0 should be no-op.

2016-01-25 Thread mika.penttila
From: Mika Penttilä 

This makes the set_memory_xx() callers consistent with x86, where a request covering zero pages succeeds as a no-op.

arm64 part is rebased on 4.5.0-rc1 with Ard's patch
 lkml.kernel.org/g/<1453125665-26627-1-git-send-email-ard.biesheu...@linaro.org>
applied.

Signed-off-by: Mika Penttilä <mika.pentt...@nextfour.com>
Reviewed-by: Laura Abbott 
Acked-by: David Rientjes 

---
 arch/arm/mm/pageattr.c   | 3 +++
 arch/arm64/mm/pageattr.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
index cf30daf..d19b1ad 100644
--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -49,6 +49,9 @@ static int change_memory_common(unsigned long addr, int numpages,
WARN_ON_ONCE(1);
}
 
+   if (!numpages)
+   return 0;
+
if (start < MODULES_VADDR || start >= MODULES_END)
return -EINVAL;
 
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 1360a02..b582fc2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -53,6 +53,9 @@ static int change_memory_common(unsigned long addr, int numpages,
WARN_ON_ONCE(1);
}
 
+   if (!numpages)
+   return 0;
+
/*
 * Kernel VA mappings are always live, and splitting live section
 * mappings into page mappings may cause TLB conflicts. This means
-- 
1.9.1
