Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-08 Thread Pasha Tatashin

On 2017-08-08 09:15, David Laight wrote:

> From: Pasha Tatashin
>> Sent: 08 August 2017 12:49
>> Thank you for looking at this change. What you described was in my
>> previous iterations of this project.
>>
>> See for example here: https://lkml.org/lkml/2017/5/5/369
>>
>> I was asked to remove that flag, and only zero memory in place when
>> needed. Overall the current approach is better everywhere else in the
>> kernel, but it adds a little extra code to kasan initialization.
>
> Perhaps you could #define the function prototype(s?) so that the flags
> are not passed unless it is a kasan build?



Hi David,

Thank you for the suggestion. I think a kasan-specific vmemmap (what I
described in the previous e-mail) would be a better solution than having
different prototypes for different builds. It would be cleaner to have
all kasan-specific code in one place.


Pasha


RE: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-08 Thread David Laight
From: Pasha Tatashin
> Sent: 08 August 2017 12:49
> Thank you for looking at this change. What you described was in my
> previous iterations of this project.
> 
> See for example here: https://lkml.org/lkml/2017/5/5/369
> 
> I was asked to remove that flag, and only zero memory in place when
> needed. Overall the current approach is better everywhere else in the
> kernel, but it adds a little extra code to kasan initialization.

Perhaps you could #define the function prototype(s?) so that the flags
are not passed unless it is a kasan build?
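
A minimal sketch of that idea, assuming a hypothetical __vmemmap_alloc_block
(the in-tree vmemmap_alloc_block() takes only a size and a node):

/*
 * Hypothetical sketch: hide the gfp argument behind a macro so that
 * only KASAN builds ask the allocator for zeroed memory.
 */
#ifdef CONFIG_KASAN
void * __meminit __vmemmap_alloc_block(unsigned long size, int node,
                                       gfp_t gfp_mask);
#define vmemmap_alloc_block(size, node) \
        __vmemmap_alloc_block(size, node, GFP_KERNEL | __GFP_ZERO)
#else
void * __meminit vmemmap_alloc_block(unsigned long size, int node);
#endif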

David



Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-08 Thread Pasha Tatashin

Hi Will,

> Damn, I actually prefer the flag :)
>
> But actually, if you look at our implementation of vmemmap_populate, then we
> have our own version of vmemmap_populate_basepages that terminates at the
> pmd level anyway if ARM64_SWAPPER_USES_SECTION_MAPS. If there's resistance
> to do this in the core code, then I'd be inclined to replace our
> vmemmap_populate implementation in the arm64 code with a single version that
> can terminate at either the PMD or the PTE level, and do zeroing if
> required. We're already special-casing it, so we don't really lose anything
> imo.

Another approach is to create a new mapping interface for kasan only, as
Ard Biesheuvel wrote:


> KASAN uses vmemmap_populate as a convenience: kasan has nothing to do
> with vmemmap, but the function already existed and happened to do what
> KASAN requires.
>
> Given that that will no longer be the case, it would be far better to
> stop using vmemmap_populate altogether, and clone it into a KASAN
> specific version (with an appropriate name) with the zeroing folded
> into it.

I agree with this statement, but I think it should not be part of this 
project.
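
For illustration, the allocation helper of such a kasan-specific clone
might look like the sketch below (kasan_alloc_block is a hypothetical
name; the point is that the zeroing is folded into the allocation rather
than done as a second pass):

/* Hypothetical sketch: allocator for a KASAN-only vmemmap_populate clone. */
static void * __init kasan_alloc_block(unsigned long size, int node)
{
        void *p = vmemmap_alloc_block(size, node);

        if (p)
                memset(p, 0, size);     /* shadow must never hold stale data */
        return p;
}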


Pasha


Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-08 Thread Will Deacon
On Tue, Aug 08, 2017 at 07:49:22AM -0400, Pasha Tatashin wrote:
> Hi Will,
> 
> Thank you for looking at this change. What you described was in my previous
> iterations of this project.
> 
> See for example here: https://lkml.org/lkml/2017/5/5/369
> 
> I was asked to remove that flag, and only zero memory in place when needed.
> Overall the current approach is better everywhere else in the kernel, but it
> adds a little extra code to kasan initialization.

Damn, I actually prefer the flag :)

But actually, if you look at our implementation of vmemmap_populate, then we
have our own version of vmemmap_populate_basepages that terminates at the
pmd level anyway if ARM64_SWAPPER_USES_SECTION_MAPS. If there's resistance
to do this in the core code, then I'd be inclined to replace our
vmemmap_populate implementation in the arm64 code with a single version that
can terminate at either the PMD or the PTE level, and do zeroing if
required. We're already special-casing it, so we don't really lose anything
imo.
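
A sketch of that shape, for illustration (populate_one_pmd is a
hypothetical helper; the PTE leg and the page-table walk around it are
omitted):

/*
 * Hypothetical sketch of the PMD-level leg: map a section-sized block
 * and zero it in the same step when the caller asks for it.
 */
static int __meminit populate_one_pmd(pmd_t *pmd, int node, bool zero)
{
        void *p = vmemmap_alloc_block(PMD_SIZE, node);

        if (!p)
                return -ENOMEM;
        if (zero)
                memset(p, 0, PMD_SIZE);
        set_pmd(pmd, __pmd(__pa(p) | PROT_SECT_NORMAL));
        return 0;
}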

Will


Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-08 Thread Pasha Tatashin

Hi Will,

Thank you for looking at this change. What you described was in my 
previous iterations of this project.


See for example here: https://lkml.org/lkml/2017/5/5/369

I was asked to remove that flag, and only zero memory in place when 
needed. Overall the current approach is better everywhere else in the 
kernel, but it adds a little extra code to kasan initialization.


Pasha

On 08/08/2017 05:07 AM, Will Deacon wrote:

> On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
>> To optimize the performance of struct page initialization,
>> vmemmap_populate() will no longer zero memory.
>>
>> We must explicitly zero the memory that is allocated by vmemmap_populate()
>> for kasan, as this memory does not go through struct page initialization
>> path.
>>
>> Signed-off-by: Pavel Tatashin 
>> Reviewed-by: Steven Sistare 
>> Reviewed-by: Daniel Jordan 
>> Reviewed-by: Bob Picco 
>> ---
>>  arch/arm64/mm/kasan_init.c | 42 ++
>>  1 file changed, 42 insertions(+)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index 81f03959a4ab..e78a9ecbb687 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>>          set_pgd(pgd_offset_k(start), __pgd(0));
>>  }
>>
>> +/*
>> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
>> + * zero it here explicitly.
>> + */
>> +static void
>> +zero_vmemmap_populated_memory(void)
>> +{
>> +        struct memblock_region *reg;
>> +        u64 start, end;
>> +
>> +        for_each_memblock(memory, reg) {
>> +                start = __phys_to_virt(reg->base);
>> +                end = __phys_to_virt(reg->base + reg->size);
>> +
>> +                if (start >= end)
>> +                        break;
>> +
>> +                start = (u64)kasan_mem_to_shadow((void *)start);
>> +                end = (u64)kasan_mem_to_shadow((void *)end);
>> +
>> +                /* Round to the start and end of the mapped pages */
>> +                start = round_down(start, SWAPPER_BLOCK_SIZE);
>> +                end = round_up(end, SWAPPER_BLOCK_SIZE);
>> +                memset((void *)start, 0, end - start);
>> +        }
>> +
>> +        start = (u64)kasan_mem_to_shadow(_text);
>> +        end = (u64)kasan_mem_to_shadow(_end);
>> +
>> +        /* Round to the start and end of the mapped pages */
>> +        start = round_down(start, SWAPPER_BLOCK_SIZE);
>> +        end = round_up(end, SWAPPER_BLOCK_SIZE);
>> +        memset((void *)start, 0, end - start);
>> +}
>
> I can't help but think this would be an awful lot nicer if you made
> vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
> implement a version of vmemmap_populate that does the zeroing when we need
> it, without having to duplicate a bunch of the code like this. I think it
> would also be less error-prone, because you wouldn't have to do the
> allocation and the zeroing in two separate steps.
>
> Will



Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

2017-08-08 Thread Will Deacon
On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
> To optimize the performance of struct page initialization,
> vmemmap_populate() will no longer zero memory.
> 
> We must explicitly zero the memory that is allocated by vmemmap_populate()
> for kasan, as this memory does not go through struct page initialization
> path.
> 
> Signed-off-by: Pavel Tatashin 
> Reviewed-by: Steven Sistare 
> Reviewed-by: Daniel Jordan 
> Reviewed-by: Bob Picco 
> ---
>  arch/arm64/mm/kasan_init.c | 42 ++
>  1 file changed, 42 insertions(+)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..e78a9ecbb687 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>   set_pgd(pgd_offset_k(start), __pgd(0));
>  }
>  
> +/*
> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
> + * zero it here explicitly.
> + */
> +static void
> +zero_vmemmap_populated_memory(void)
> +{
> +        struct memblock_region *reg;
> +        u64 start, end;
> +
> +        for_each_memblock(memory, reg) {
> +                start = __phys_to_virt(reg->base);
> +                end = __phys_to_virt(reg->base + reg->size);
> +
> +                if (start >= end)
> +                        break;
> +
> +                start = (u64)kasan_mem_to_shadow((void *)start);
> +                end = (u64)kasan_mem_to_shadow((void *)end);
> +
> +                /* Round to the start and end of the mapped pages */
> +                start = round_down(start, SWAPPER_BLOCK_SIZE);
> +                end = round_up(end, SWAPPER_BLOCK_SIZE);
> +                memset((void *)start, 0, end - start);
> +        }
> +
> +        start = (u64)kasan_mem_to_shadow(_text);
> +        end = (u64)kasan_mem_to_shadow(_end);
> +
> +        /* Round to the start and end of the mapped pages */
> +        start = round_down(start, SWAPPER_BLOCK_SIZE);
> +        end = round_up(end, SWAPPER_BLOCK_SIZE);
> +        memset((void *)start, 0, end - start);
> +}

I can't help but think this would be an awful lot nicer if you made
vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
implement a version of vmemmap_populate that does the zeroing when we need
it, without having to duplicate a bunch of the code like this. I think it
would also be less error-prone, because you wouldn't have to do the
allocation and the zeroing in two separate steps.
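
A sketch of that interface, for illustration (hypothetical signature; the
in-tree vmemmap_alloc_block() takes no gfp argument, and the early-boot
memblock path is omitted here):

/* Hypothetical sketch: let the caller decide whether the block is zeroed. */
void * __meminit vmemmap_alloc_block_flags(unsigned long size, int node,
                                           gfp_t gfp_mask)
{
        struct page *page = alloc_pages_node(node, gfp_mask, get_order(size));

        return page ? page_address(page) : NULL;
}

KASAN callers would pass __GFP_ZERO; the struct page initialization path
would leave it out and skip the redundant zeroing.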

Will

