Re: [PATCH v5 13/32] x86/boot/e820: Add support to determine the E820 type of an address

2017-05-05 Thread Borislav Petkov
On Tue, Apr 18, 2017 at 04:18:31PM -0500, Tom Lendacky wrote:
> Add a function that will return the E820 type associated with an address
> range.

...

> @@ -110,9 +111,28 @@ bool __init e820__mapped_all(u64 start, u64 end, enum e820_type type)
>  		 * coverage of the desired range exists:
>  		 */
>  		if (start >= end)
> -			return 1;
> +			return entry;
>  	}
> -	return 0;
> +
> +	return NULL;
> +}
> +
> +/*
> + * This function checks if the entire range <start,end> is mapped with type.
> + */
> +bool __init e820__mapped_all(u64 start, u64 end, enum e820_type type)
> +{
> +	return __e820__mapped_all(start, end, type) ? 1 : 0;

return !!__e820__mapped_all(start, end, type);
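
For reference, a minimal sketch of how the refactored pair could read with
that change applied - a reconstruction from the quoted hunks and the
pre-patch e820__mapped_all() loop, not the actual patch:

static struct e820_entry * __init __e820__mapped_all(u64 start, u64 end,
						     enum e820_type type)
{
	int i;

	for (i = 0; i < e820_table->nr_entries; i++) {
		struct e820_entry *entry = &e820_table->entries[i];

		if (type && entry->type != type)
			continue;

		/* Is the region (partially) covered by the entry? */
		if (entry->addr >= end || entry->addr + entry->size <= start)
			continue;

		/* If the entry covers the start, move the start past it: */
		if (entry->addr <= start)
			start = entry->addr + entry->size;

		/*
		 * If 'start' is now at or beyond 'end', full coverage of
		 * the desired range exists, so return the covering entry:
		 */
		if (start >= end)
			return entry;
	}

	return NULL;
}

/*
 * This function checks if the entire range <start,end> is mapped with type:
 */
bool __init e820__mapped_all(u64 start, u64 end, enum e820_type type)
{
	return !!__e820__mapped_all(start, end, type);
}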

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.



Re: [PATCH v3 2/2] x86_64/kexec: Use PUD level 1GB page for identity mapping if available

2017-05-05 Thread Xunlei Pang
On 05/05/2017 at 05:20 PM, Ingo Molnar wrote:
> * Xunlei Pang  wrote:
>
>> On 05/05/2017 at 02:52 PM, Ingo Molnar wrote:
>>> * Xunlei Pang  wrote:
>>>
>>>> @@ -122,6 +122,10 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
>>>>  
>>>>  	level4p = (pgd_t *)__va(start_pgtable);
>>>>  	clear_page(level4p);
>>>> +
>>>> +	if (direct_gbpages)
>>>> +		info.direct_gbpages = true;
>>> No, this should be keyed off the CPU feature (X86_FEATURE_GBPAGES)
>>> automatically, not set blindly! AFAICS this patch will crash kexec on
>>> any CPU that does not support gbpages.
>> It should be fine, probe_page_size_mask() already takes care of this:
>>
>> 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
>> 		printk(KERN_INFO "Using GB pages for direct mapping\n");
>> 		page_size_mask |= 1 << PG_LEVEL_1G;
>> 	} else {
>> 		direct_gbpages = 0;
>> 	}
>>
>> So if X86_FEATURE_GBPAGES is not supported, direct_gbpages will be set to 0.
> So why is the introduction of the info.direct_gbpages flag necessary?
> AFAICS it just duplicates the kernel's direct_gbpages flag. One outcome
> is that hibernation won't use gbpages, which is silly.

boot/compressed/pagetable.c also uses kernel_ident_mapping_init() for KASLR;
at the moment we don't have a "direct_gbpages" definition or X86_FEATURE_GBPAGES
feature detection there.

I thought we could change the other call sites later, once that is really needed.

Regards,
Xunlei




Re: [PATCH v3 2/2] x86_64/kexec: Use PUD level 1GB page for identity mapping if available

2017-05-05 Thread Ingo Molnar

* Xunlei Pang  wrote:

> On 05/05/2017 at 02:52 PM, Ingo Molnar wrote:
> > * Xunlei Pang  wrote:
> >
> >> @@ -122,6 +122,10 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
> >>  
> >>level4p = (pgd_t *)__va(start_pgtable);
> >>clear_page(level4p);
> >> +
> >> +  if (direct_gbpages)
> >> +  info.direct_gbpages = true;
> > No, this should be keyed off the CPU feature (X86_FEATURE_GBPAGES)
> > automatically, not set blindly! AFAICS this patch will crash kexec on
> > any CPU that does not support gbpages.
> 
> It should be fine, probe_page_size_mask() already takes care of this:
>
> 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
> 		printk(KERN_INFO "Using GB pages for direct mapping\n");
> 		page_size_mask |= 1 << PG_LEVEL_1G;
> 	} else {
> 		direct_gbpages = 0;
> 	}
> 
> So if X86_FEATURE_GBPAGES is not supported, direct_gbpages will be set to 0.

So why is the introduction of the info.direct_gbpages flag necessary? AFAICS it
just duplicates the kernel's direct_gbpages flag. One outcome is that hibernation
won't use gbpages, which is silly.

Thanks,

Ingo



Re: [PATCH v3 2/2] x86_64/kexec: Use PUD level 1GB page for identity mapping if available

2017-05-05 Thread Xunlei Pang
On 05/05/2017 at 02:52 PM, Ingo Molnar wrote:
> * Xunlei Pang  wrote:
>
>> @@ -122,6 +122,10 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
>>  
>>  level4p = (pgd_t *)__va(start_pgtable);
>>  clear_page(level4p);
>> +
>> +if (direct_gbpages)
>> +info.direct_gbpages = true;
> No, this should be keyed off the CPU feature (X86_FEATURE_GBPAGES)
> automatically, not set blindly! AFAICS this patch will crash kexec on any
> CPU that does not support gbpages.

It should be fine, probe_page_size_mask() already takes care of this:

	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
		printk(KERN_INFO "Using GB pages for direct mapping\n");
		page_size_mask |= 1 << PG_LEVEL_1G;
	} else {
		direct_gbpages = 0;
	}

So if X86_FEATURE_GBPAGES is not supported, direct_gbpages will be set to 0.

>
> I only noticed this problem after having fixed/enhanced all the changelogs -
> so please pick up the new changelog from the log below.

Thanks for the rewrite; it looks better.

Regards,
Xunlei

>
> Thanks,
>
>   Ingo
>
>
> >
>
> Author: Xunlei Pang 
>
> x86/mm: Add support for gbpages to kernel_ident_mapping_init()
>
> Kernel identity mappings on x86-64 kernels are created in two
> ways: by the early x86 boot code, or by kernel_ident_mapping_init().
>
> Native kernels (which is the dominant use case) use the former,
> but the kexec and the hibernation code uses kernel_ident_mapping_init().
>
> There's a subtle difference between these two ways of how identity
> mappings are created: the current kernel_ident_mapping_init() code
> always creates identity mappings using 2MB pages (PMD level), while
> the native kernel boot path also utilizes gbpages where available.
>
> This difference is suboptimal both for performance and for memory
> usage: kernel_ident_mapping_init() needs to allocate pages for the
> page tables when creating the new identity mappings.
>
> This patch adds 1GB page (PUD level) support to kernel_ident_mapping_init()
> to address these concerns.
>
> The primary advantage would be better TLB coverage/performance,
> because we'd utilize 1GB TLBs instead of 2MB ones.
>
> It is also useful for machines with a large amount of memory, to
> save paging-structure allocations (around 4MB/TB using 2MB pages)
> when setting up identity mappings for all of the memory; with
> 1GB pages this drops to only 8KB/TB.
>
> ( Note that this change alone does not activate gbpages in kexec,
>   we are doing that in a separate patch. )
>
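
To put the paging-structure numbers in the changelog above into concrete
terms (back-of-the-envelope arithmetic, not from the patch itself),
identity-mapping 1TB needs:

	with 2MB pages:  1TB / 2MB = 524,288 PMD entries * 8 bytes = 4MB of PMD tables
	with 1GB pages:  1TB / 1GB =   1,024 PUD entries * 8 bytes = 8KB of PUD tables

which matches the ~4MB/TB vs ~8KB/TB figures quoted in the changelog.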




Re: [PATCH v3 2/2] x86_64/kexec: Use PUD level 1GB page for identity mapping if available

2017-05-05 Thread Ingo Molnar

* Xunlei Pang  wrote:

> @@ -122,6 +122,10 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
>  
>   level4p = (pgd_t *)__va(start_pgtable);
>   clear_page(level4p);
> +
> + if (direct_gbpages)
> + info.direct_gbpages = true;

No, this should be keyed off the CPU feature (X86_FEATURE_GBPAGES)
automatically, not set blindly! AFAICS this patch will crash kexec on any
CPU that does not support gbpages.
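
For illustration, keying it off the CPU feature inside init_pgtable() could
look like the sketch below (a sketch of what "keyed off the CPU feature"
could mean in code here, not the posted patch):

	/*
	 * Sketch: only request 1GB identity mappings from
	 * kernel_ident_mapping_init() when the CPU supports them.
	 */
	if (boot_cpu_has(X86_FEATURE_GBPAGES))
		info.direct_gbpages = true;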

I only noticed this problem after having fixed/enhanced all the changelogs - so
please pick up the new changelog from the log below.

Thanks,

Ingo


>

Author: Xunlei Pang 

x86/mm: Add support for gbpages to kernel_ident_mapping_init()

Kernel identity mappings on x86-64 kernels are created in two
ways: by the early x86 boot code, or by kernel_ident_mapping_init().

Native kernels (which is the dominant use case) use the former,
but the kexec and the hibernation code uses kernel_ident_mapping_init().

There's a subtle difference between these two ways of how identity
mappings are created: the current kernel_ident_mapping_init() code
always creates identity mappings using 2MB pages (PMD level), while
the native kernel boot path also utilizes gbpages where available.

This difference is suboptimal both for performance and for memory
usage: kernel_ident_mapping_init() needs to allocate pages for the
page tables when creating the new identity mappings.

This patch adds 1GB page (PUD level) support to kernel_ident_mapping_init()
to address these concerns.

The primary advantage would be better TLB coverage/performance,
because we'd utilize 1GB TLBs instead of 2MB ones.

It is also useful for machines with a large amount of memory, to
save paging-structure allocations (around 4MB/TB using 2MB pages)
when setting up identity mappings for all of the memory; with
1GB pages this drops to only 8KB/TB.

( Note that this change alone does not activate gbpages in kexec,
  we are doing that in a separate patch. )
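
A rough sketch of what the PUD-level path inside the identity-mapping walker
could look like when info->direct_gbpages is set - a hypothetical illustration
based only on the changelog's description; the helper name ident_pud_init()
and the exact details are assumptions here, not the patch itself:

static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
			  unsigned long addr, unsigned long end)
{
	unsigned long next;

	for (; addr < end; addr = next) {
		pud_t *pud = pud_page + pud_index(addr);

		next = (addr & PUD_MASK) + PUD_SIZE;
		if (next > end)
			next = end;

		/*
		 * With gbpages, cover the whole 1GB region with a single
		 * PUD entry (ignoring info->offset for a pure identity map):
		 */
		if (info->direct_gbpages) {
			if (!pud_present(*pud))
				set_pud(pud, __pud((addr & PUD_MASK) | info->page_flag));
			continue;
		}

		/* ... otherwise fall back to the existing 2MB (PMD level) path ... */
	}

	return 0;
}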

