There's one PTI-related layout asymmetry I noticed between 4-level and 5-level
kernels:

47-bit:
> +                                                              |
> +                                                              | Kernel-space virtual memory, shared between all processes:
> +____________________________________________________________|___________________________________________________________
> +                  |            |                  |         |
> + ffff800000000000 | -128    TB | ffff87ffffffffff |    8 TB | ... guard hole, also reserved for hypervisor
> + ffff880000000000 | -120    TB | ffffc7ffffffffff |   64 TB | direct mapping of all physical memory (page_offset_base)
> + ffffc80000000000 |  -56    TB | ffffc8ffffffffff |    1 TB | ... unused hole
> + ffffc90000000000 |  -55    TB | ffffe8ffffffffff |   32 TB | vmalloc/ioremap space (vmalloc_base)
> + ffffe90000000000 |  -23    TB | ffffe9ffffffffff |    1 TB | ... unused hole
> + ffffea0000000000 |  -22    TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
> + ffffeb0000000000 |  -21    TB | ffffebffffffffff |    1 TB | ... unused hole
> + ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory
> + fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
> +                  |            |                  |         | vaddr_end for KASLR
> + fffffe0000000000 |   -2    TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
> + fffffe8000000000 |   -1.5  TB | fffffeffffffffff |  0.5 TB | LDT remap for PTI
> + ffffff0000000000 |   -1    TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
> +__________________|____________|__________________|_________|____________________________________________________________
> +                                                              |

56-bit:
> +                                                              |
> +                                                              | Kernel-space virtual memory, shared between all processes:
> +____________________________________________________________|___________________________________________________________
> +                  |            |                  |         |
> + ff00000000000000 |  -64    PB | ff0fffffffffffff |    4 PB | ... guard hole, also reserved for hypervisor
> + ff10000000000000 |  -60    PB | ff8fffffffffffff |   32 PB | direct mapping of all physical memory (page_offset_base)
> + ff90000000000000 |  -28    PB | ff9fffffffffffff |    4 PB | LDT remap for PTI
> + ffa0000000000000 |  -24    PB | ffd1ffffffffffff | 12.5 PB | vmalloc/ioremap space (vmalloc_base)
> + ffd2000000000000 |  -11.5  PB | ffd3ffffffffffff |  0.5 PB | ... unused hole
> + ffd4000000000000 |  -11    PB | ffd5ffffffffffff |  0.5 PB | virtual memory map (vmemmap_base)
> + ffd6000000000000 |  -10.5  PB | ffdeffffffffffff | 2.25 PB | ... unused hole
> + ffdf000000000000 |   -8.25 PB | fffffdffffffffff |   ~8 PB | KASAN shadow memory
> + fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
> +                  |            |                  |         | vaddr_end for KASLR
> + fffffe0000000000 |   -2    TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
> + fffffe8000000000 |   -1.5  TB | fffffeffffffffff |  0.5 TB | ... unused hole
> + ffffff0000000000 |   -1    TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks

The two layouts are very similar beyond the shift in the offsets and the
region sizes, except for one big asymmetry: the placement of the LDT remap
for PTI.

Is there any fundamental reason why the LDT area is mapped into a 4 petabyte
(!) area on 56-bit kernels, instead of being at the -1.5 TB offset like on
47-bit kernels?

The only reason I can see is that it's currently coded at the PGD level only:

static void map_ldt_struct_to_user(struct mm_struct *mm)
{
	pgd_t *pgd = pgd_offset(mm, LDT_BASE_ADDR);

	if (static_cpu_has(X86_FEATURE_PTI) && !mm->context.ldt)
		set_pgd(kernel_to_user_pgdp(pgd), *pgd);
}

( BTW., the 4 petabyte size of the area is misleading: a 5-level PGD entry
  covers 256 TB of virtual memory, i.e. 0.25 PB, not 4 PB. So in reality we
  have a 0.25 PB area there, used up by the LDT mapping in a single PGD
  entry, plus a 3.75 PB hole after that. )

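For reference, the arithmetic behind those two numbers:

	5-level PGD entry:  1 << (12 + 4*9) = 1 << 48 = 256 TB = 0.25 PB
	reserved slot:      ffa0000000000000 - ff90000000000000 = 1 << 52 = 4 PB
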
... but unless I'm missing something it's not really fundamental for it to
be at the PGD level - it could be two levels lower as well, and it could
move back to the same place where it is on the 47-bit kernel.

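For example (completely untested sketch, not an actual patch): if the LDT
went back to the -1.5 TB slot, the user-side copy could be done one level
down, at the P4D level, which with 5-level paging covers exactly that
512 GB slot - assuming the user-side PGD entry for that range is already
populated, which I believe it is, since the neighboring cpu_entry_area is
already mapped into the user page-tables:

	/*
	 * Untested sketch: mirror the kernel-side P4D entry covering
	 * LDT_BASE_ADDR into the user half of the page-tables, instead of
	 * mirroring a whole PGD entry.  With 4-level paging p4d_offset()
	 * folds into the PGD, so there this degenerates into the current
	 * PGD-entry copy.
	 */
	static void map_ldt_struct_to_user(struct mm_struct *mm)
	{
		pgd_t *k_pgd = pgd_offset(mm, LDT_BASE_ADDR);
		pgd_t *u_pgd = kernel_to_user_pgdp(k_pgd);
		p4d_t *k_p4d = p4d_offset(k_pgd, LDT_BASE_ADDR);
		p4d_t *u_p4d = p4d_offset(u_pgd, LDT_BASE_ADDR);

		if (static_cpu_has(X86_FEATURE_PTI) && !mm->context.ldt)
			set_p4d(u_p4d, *k_p4d);
	}
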
The LDT mapping operation is pretty heavy already, and the actual use of
the LDT is not impacted by where it's mapped: the LDT is per-mm, so no
remapping is required on context switch.

I.e. could we move the LDT over to the same place? This would make an even
larger area of the address space identical between 47-bit and 56-bit kernels:

                                                              |
                                                              | Identical layout to the 47-bit one from here on:
____________________________________________________________|____________________________________________________________
                  |            |                  |         |
 fffffc0000000000 |   -4    TB | fffffdffffffffff |    2 TB | ... unused hole
                  |            |                  |         | vaddr_end for KASLR
 fffffe0000000000 |   -2    TB | fffffe7fffffffff |  0.5 TB | cpu_entry_area mapping
 fffffe8000000000 |   -1.5  TB | fffffeffffffffff |  0.5 TB | LDT remap for PTI
 ffffff0000000000 |   -1    TB | ffffff7fffffffff |  0.5 TB | %esp fixup stacks
 ffffff8000000000 | -512    GB | ffffffeeffffffff |  444 GB | ... unused hole
 ffffffef00000000 |  -68    GB | fffffffeffffffff |   64 GB | EFI region mapping space
 ffffffff00000000 |   -4    GB | ffffffff7fffffff |    2 GB | ... unused hole
 ffffffff80000000 |   -2    GB | ffffffff9fffffff |  512 MB | kernel text mapping, mapped to physical address 0
 ffffffff80000000 |-2048    MB |                  |         |
 ffffffffa0000000 |-1536    MB | fffffffffeffffff | 1520 MB | module mapping space
 ffffffffff000000 |  -16    MB |                  |         |
    FIXADDR_START | ~-11    MB | ffffffffff5fffff | ~0.5 MB | kernel-internal fixmap range, variable size and offset
 ffffffffff600000 |  -10    MB | ffffffffff600fff |    4 kB | legacy vsyscall ABI
 ffffffffffe00000 |   -2    MB | ffffffffffffffff |    2 MB | ... unused hole
__________________|____________|__________________|_________|___________________________________________________________

And the rest would basically just be 4 areas: the direct-mapping, vmalloc,
vmemmap and KASAN areas - which are scaled according to whether it's a
47-bit or 56-bit kernel.

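Purely to illustrate that scaling (this is not how the kernel actually
defines these, just the base values from the tables above, keyed off the
existing pgtable_l5_enabled() check):

	/* Illustration only - base values taken from the tables above: */
	unsigned long page_offset_base = pgtable_l5_enabled() ?
			0xff10000000000000UL :	/*  -60 PB, 56-bit */
			0xffff880000000000UL;	/* -120 TB, 47-bit */

	unsigned long vmalloc_base = pgtable_l5_enabled() ?
			0xffa0000000000000UL :	/*  -24 PB, 56-bit */
			0xffffc90000000000UL;	/*  -55 TB, 47-bit */

	unsigned long vmemmap_base = pgtable_l5_enabled() ?
			0xffd4000000000000UL :	/*  -11 PB, 56-bit */
			0xffffea0000000000UL;	/*  -22 TB, 47-bit */
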
Thoughts?
Thanks,
Ingo