Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section

2019-04-26 Thread Baoquan He
On 04/26/19 at 12:04pm, Borislav Petkov wrote:
> On Fri, Apr 26, 2019 at 05:23:48PM +0800, Baoquan He wrote:
> > I sent private mail to Kirill and Kees. Kirill hasn't replied yet; he
> > may be busy with something else, as he hasn't shown up on lkml
> > recently.
> 
> I don't understand what the hurry is.
> 
> The merge window is imminent and we only pick obvious fixes. That
> doesn't qualify as such, AFAICT.

OK.

> 
> > Kees kindly replied and said he couldn't find this mail thread. He
> > told me I can add his Reviewed-by, as he acked this patchset in the v2
> > thread; I only updated it later to tune the log and correct typos.
> > http://lkml.kernel.org/r/cagxu5j+o4asx9mmdjqtmop-vrvwes-2yewr1f29z8dm0ruf...@mail.gmail.com
> 
> Yes, when you get Reviewed-by:'s or other tags from reviewers, you
> *add* them to your next submission, provided the patch hasn't changed
> in a non-trivial fashion. You should know that...

OK, will keep that in mind. Thanks.


Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section

2019-04-26 Thread Borislav Petkov
On Fri, Apr 26, 2019 at 05:23:48PM +0800, Baoquan He wrote:
> I sent private mail to Kirill and Kees. Kirill hasn't replied yet; he
> may be busy with something else, as he hasn't shown up on lkml
> recently.

I don't understand what the hurry is.

The merge window is imminent and we only pick obvious fixes. That
doesn't qualify as such, AFAICT.

> Kees kindly replied and said he couldn't find this mail thread. He told
> me I can add his Reviewed-by, as he acked this patchset in the v2
> thread; I only updated it later to tune the log and correct typos.
> http://lkml.kernel.org/r/cagxu5j+o4asx9mmdjqtmop-vrvwes-2yewr1f29z8dm0ruf...@mail.gmail.com

Yes, when you get Reviewed-by:'s or other tags from reviewers, you
*add* them to your next submission, provided the patch hasn't changed
in a non-trivial fashion. You should know that...

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section

2019-04-26 Thread Baoquan He
Hi Boris,

On 04/15/19 at 09:47pm, Borislav Petkov wrote:
> On Sun, Apr 14, 2019 at 03:28:04PM +0800, Baoquan He wrote:
> > kernel_randomize_memory() hardcodes the size of the vmemmap section as
> > 1 TB, enough to cover the maximum 64 TB of system RAM supported in
> > 4-level paging mode.
> > 
> > However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> > the size of struct page is 64 bytes, supporting 4 PB of system RAM in
> > 5-level mode requires 64 TB of vmemmap area. The wrong hardcoding may
> > cause vmemmap to stomp on the following cpu_entry_area section if KASLR
> > puts vmemmap very close to cpu_entry_area and the actual vmemmap area is
> > much bigger than 1 TB.
> 
> Kirill, ack?

I sent private mail to Kirill and Kees. Kirill hasn't replied yet; he
may be busy with something else, as he hasn't shown up on lkml
recently.

Kees kindly replied and said he couldn't find this mail thread. He told
me I can add his Reviewed-by, as he acked this patchset in the v2
thread; I only updated it later to tune the log and correct typos.
http://lkml.kernel.org/r/cagxu5j+o4asx9mmdjqtmop-vrvwes-2yewr1f29z8dm0ruf...@mail.gmail.com

Can this be picked up into tip with Kees' ack?

Thanks
Baoquan
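
For reference, the 64 TB figure in the quoted commit message follows
directly from its stated assumptions (4 KB pages, 64-byte struct page);
a quick worked check:

    4 PB of RAM = 2^52 bytes; 2^52 / 2^12 bytes per page = 2^40 page frames
    2^40 struct pages * 64 bytes each = 2^46 bytes = 64 TB of vmemmap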


Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section

2019-04-17 Thread Baoquan He
On 04/15/19 at 09:47pm, Borislav Petkov wrote:
> On Sun, Apr 14, 2019 at 03:28:04PM +0800, Baoquan He wrote:
> > kernel_randomize_memory() hardcodes the size of the vmemmap section as
> > 1 TB, enough to cover the maximum 64 TB of system RAM supported in
> > 4-level paging mode.
> > 
> > However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> > the size of struct page is 64 bytes, supporting 4 PB of system RAM in
> > 5-level mode requires 64 TB of vmemmap area. The wrong hardcoding may
> > cause vmemmap to stomp on the following cpu_entry_area section if KASLR
> > puts vmemmap very close to cpu_entry_area and the actual vmemmap area is
> > much bigger than 1 TB.
> > 
> > So calculate the actual size of the vmemmap region, then align it up
> > to a 1 TB boundary. In 4-level it's always 1 TB; in 5-level it's adjusted on demand.
> > The current code reserves 0.5 PB for vmemmap in 5-level. In this new methor,
>  ^^^
> 
> Please introduce a spellchecker into your patch creation workflow.

Sorry, I forgot to run checkpatch this time. Will update.

> 
> > the leftover space can be returned to the randomization pool to increase the entropy.
> > 
> > Signed-off-by: Baoquan He 
> > ---
> >  arch/x86/mm/kaslr.c | 11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> > index 387d4ed25d7c..4679a0075048 100644
> > --- a/arch/x86/mm/kaslr.c
> > +++ b/arch/x86/mm/kaslr.c
> > @@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
> >  } kaslr_regions[] = {
> > 	{ &page_offset_base, 0 },
> > 	{ &vmalloc_base, 0 },
> > -	{ &vmemmap_base, 1 },
> > +	{ &vmemmap_base, 0 },
> >  };
> >  
> >  /* Get size in bytes used by the memory region */
> > @@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
> > unsigned long rand, memory_tb;
> > struct rnd_state rand_state;
> > unsigned long remain_entropy;
> > +   unsigned long vmemmap_size;
> >  
> > 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
> > vaddr = vaddr_start;
> > @@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
> > if (memory_tb < kaslr_regions[0].size_tb)
> > kaslr_regions[0].size_tb = memory_tb;
> >  
> > +	/*
> > +	 * Calculate how many TB the vmemmap region needs, and align
> > +	 * it up to a 1 TB boundary.
> > +	 */
> > +   vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> > +   sizeof(struct page);
> > +   kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
> > +
> > /* Calculate entropy available between regions */
> > remain_entropy = vaddr_end - vaddr_start;
> > for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
> > -- 
> 
> Kirill, ack?
> 
> -- 
> Regards/Gruss,
> Boris.
> 
> Good mailing practices for 400: avoid top-posting and trim the reply.


Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section

2019-04-15 Thread Borislav Petkov
On Sun, Apr 14, 2019 at 03:28:04PM +0800, Baoquan He wrote:
> kernel_randomize_memory() hardcodes the size of the vmemmap section as
> 1 TB, enough to cover the maximum 64 TB of system RAM supported in
> 4-level paging mode.
> 
> However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> the size of struct page is 64 bytes, supporting 4 PB of system RAM in
> 5-level mode requires 64 TB of vmemmap area. The wrong hardcoding may
> cause vmemmap to stomp on the following cpu_entry_area section if KASLR
> puts vmemmap very close to cpu_entry_area and the actual vmemmap area is
> much bigger than 1 TB.
> 
> So calculate the actual size of the vmemmap region, then align it up to
> a 1 TB boundary. In 4-level it's always 1 TB; in 5-level it's adjusted on demand.
> The current code reserves 0.5 PB for vmemmap in 5-level. In this new methor,
   ^^^

Please introduce a spellchecker into your patch creation workflow.

> the leftover space can be returned to the randomization pool to increase the entropy.
> 
> Signed-off-by: Baoquan He 
> ---
>  arch/x86/mm/kaslr.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> index 387d4ed25d7c..4679a0075048 100644
> --- a/arch/x86/mm/kaslr.c
> +++ b/arch/x86/mm/kaslr.c
> @@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
>  } kaslr_regions[] = {
> 	{ &page_offset_base, 0 },
> 	{ &vmalloc_base, 0 },
> -	{ &vmemmap_base, 1 },
> +	{ &vmemmap_base, 0 },
>  };
>  
>  /* Get size in bytes used by the memory region */
> @@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
>   unsigned long rand, memory_tb;
>   struct rnd_state rand_state;
>   unsigned long remain_entropy;
> + unsigned long vmemmap_size;
>  
> 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
>   vaddr = vaddr_start;
> @@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
>   if (memory_tb < kaslr_regions[0].size_tb)
>   kaslr_regions[0].size_tb = memory_tb;
>  
> +	/*
> +	 * Calculate how many TB the vmemmap region needs, and align
> +	 * it up to a 1 TB boundary.
> +	 */
> + vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> + sizeof(struct page);
> + kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
> +
>   /* Calculate entropy available between regions */
>   remain_entropy = vaddr_end - vaddr_start;
>   for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
> -- 

Kirill, ack?

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


[PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section

2019-04-14 Thread Baoquan He
kernel_randomize_memory() hardcodes the size of the vmemmap section as
1 TB, enough to cover the maximum 64 TB of system RAM supported in
4-level paging mode.

However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
the size of struct page is 64 bytes, supporting 4 PB of system RAM in
5-level mode requires 64 TB of vmemmap area. The wrong hardcoding may
cause vmemmap to stomp on the following cpu_entry_area section if KASLR
puts vmemmap very close to cpu_entry_area and the actual vmemmap area is
much bigger than 1 TB.

So calculate the actual size of the vmemmap region, then align it up to
a 1 TB boundary. In 4-level it's always 1 TB; in 5-level it's adjusted on demand.
The current code reserves 0.5 PB for vmemmap in 5-level. In this new methor,
the leftover space can be returned to the randomization pool to increase the entropy.

Signed-off-by: Baoquan He 
---
 arch/x86/mm/kaslr.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 387d4ed25d7c..4679a0075048 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
 } kaslr_regions[] = {
	{ &page_offset_base, 0 },
	{ &vmalloc_base, 0 },
-	{ &vmemmap_base, 1 },
+	{ &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
@@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
unsigned long rand, memory_tb;
struct rnd_state rand_state;
unsigned long remain_entropy;
+   unsigned long vmemmap_size;
 
	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
vaddr = vaddr_start;
@@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
if (memory_tb < kaslr_regions[0].size_tb)
kaslr_regions[0].size_tb = memory_tb;
 
+	/*
+	 * Calculate how many TB the vmemmap region needs, and align
+	 * it up to a 1 TB boundary.
+	 */
+   vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+   sizeof(struct page);
+   kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
+
/* Calculate entropy available between regions */
remain_entropy = vaddr_end - vaddr_start;
for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-- 
2.17.2
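
The sizing arithmetic in the patch can be checked stand-alone. Below is a
minimal user-space sketch that mirrors the calculation; TB_SHIFT, PAGE_SHIFT
and DIV_ROUND_UP are redefined locally, and the 64-byte struct page size and
the 4 PB memory figure are the commit message's assumptions rather than
values taken from a live kernel:

	#include <stdio.h>

	#define TB_SHIFT	40UL
	#define PAGE_SHIFT	12UL
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	int main(void)
	{
		/* Assumption from the commit message: sizeof(struct page) == 64. */
		unsigned long struct_page_size = 64;

		/* Direct-mapping size in TB: 4 PB = 4096 TB (5-level maximum). */
		unsigned long memory_tb = 4096;

		/*
		 * One struct page per 4 KB page frame:
		 * frames = memory_tb << (TB_SHIFT - PAGE_SHIFT),
		 * vmemmap bytes = frames * sizeof(struct page).
		 */
		unsigned long vmemmap_size =
			(memory_tb << (TB_SHIFT - PAGE_SHIFT)) * struct_page_size;

		/* Round up to whole 1 TB units, as the patch does. */
		unsigned long vmemmap_tb =
			DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);

		printf("vmemmap needs %lu TB\n", vmemmap_tb);	/* prints 64 */
		return 0;
	}

With memory_tb = 64 (the 4-level maximum) the same arithmetic yields 1 TB,
matching the value the old code hardcoded.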