On Thu, Jan 5, 2017 at 10:01 AM, Andy Lutomirski <l...@amacapital.net> wrote:
> On Thu, Jan 5, 2017 at 9:54 AM, Thomas Garnier <thgar...@google.com> wrote:
>> On Thu, Jan 5, 2017 at 9:51 AM, Andy Lutomirski <l...@amacapital.net> wrote:
>>> On Wed, Jan 4, 2017 at 2:16 PM, Thomas Garnier <thgar...@google.com> wrote:
>>>> Each processor holds a GDT in its per-cpu structure. The sgdt
>>>> instruction gives the base address of the current GDT. This address can
>>>> be used to bypass KASLR memory randomization. With another bug, an
>>>> attacker could target other per-cpu structures or deduce the base of the
>>>> main memory section (PAGE_OFFSET).
>>>>
>>>> In this change, a space is reserved at the end of the memory range
>>>> available for KASLR memory randomization. The space is big enough to hold
>>>> the maximum number of CPUs (as defined by setup_max_cpus). Each GDT is
>>>> mapped at a specific offset based on the target CPU. Note that if there is
>>>> not enough space available, the GDTs are not remapped.
>>>
>>> Can we remap it read-only? I.e. use PAGE_KERNEL_RO instead of
>>> PAGE_KERNEL. After all, the ability to modify the GDT is instant
>>> root.
>>
>> That's my goal too. I started by doing an RO remap and hit a couple of
>> problems with hibernation. I can try again for the next iteration or
>> delay it for another patch. I also need to look at KVM GDT usage; I am
>> not familiar with it yet.
>
> If you want a small adventure, I think a significant KVM-related
> performance improvement is available. Specifically, on VMX exits, the
> GDT limit is hardwired to 0xffff (IIRC -- I could be remembering the
> actual value wrong). KVM does LGDT to fix it.
>
> If we actually made the GDT have limit 0xffff (presumably by mapping
> the zero page a few times to pad it out without wasting memory), then
> we would avoid the LGDT. LGDT is incredibly slow, so this would be a
> big win. Want to see if you can make this work with your patch set?
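[Editor's note: a minimal, untested sketch of the read-only remap idea
discussed above, not the posted patch. FIX_GDT_REMAP_BEGIN is a
hypothetical fixmap index reserving one slot per possible CPU, and the
__pa() use assumes the per-cpu GDT page is backed by the direct mapping.]

#include <asm/desc.h>
#include <asm/fixmap.h>

/*
 * Sketch: alias a CPU's GDT read-only at a fixed per-cpu virtual
 * address and point GDTR at the alias, so sgdt no longer leaks the
 * randomized per-cpu base and the live descriptor table is not
 * writable through the address the CPU uses.
 */
static void map_gdt_readonly(int cpu)
{
	/* Hypothetical fixmap range: one slot per possible CPU. */
	unsigned int idx = FIX_GDT_REMAP_BEGIN + cpu;
	struct desc_ptr gdt_descr;

	/* Assumes the per-cpu GDT page lives in the direct mapping. */
	__set_fixmap(idx, __pa(get_cpu_gdt_table(cpu)), PAGE_KERNEL_RO);

	gdt_descr.address = (unsigned long)fix_to_virt(idx);
	gdt_descr.size    = GDT_SIZE - 1;
	load_gdt(&gdt_descr);
}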
I can always take a look. If you have any prototype or more details, feel
free to send it to me on a separate thread.

--
Thomas
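[Editor's note: a rough, untested sketch of the GDT-limit idea Andy
describes above: alias the real GDT page followed by copies of the shared
zero page so the descriptor limit can already be 0xffff, making the LGDT in
the VMX exit path unnecessary. The vmap()-based aliasing, the read-only
protection, and the helper name are illustrative assumptions, not a
prototype from the thread.]

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/desc.h>

/* 16 pages -> a 64 KiB view, i.e. the 0xffff limit seen after a VMX exit. */
#define GDT_ALIAS_PAGES 16

static int map_gdt_with_full_limit(int cpu)
{
	struct page *pages[GDT_ALIAS_PAGES];
	struct desc_ptr gdt_descr;
	void *alias;
	int i;

	/*
	 * Real GDT page first, then pad with the shared zero page; the
	 * padding costs no extra memory and reads back as not-present
	 * (zero) descriptors.  Assumes the per-cpu GDT page is backed
	 * by the direct mapping.
	 */
	pages[0] = virt_to_page(get_cpu_gdt_table(cpu));
	for (i = 1; i < GDT_ALIAS_PAGES; i++)
		pages[i] = ZERO_PAGE(0);

	/* Read-only alias, following the earlier PAGE_KERNEL_RO discussion. */
	alias = vmap(pages, GDT_ALIAS_PAGES, VM_MAP, PAGE_KERNEL_RO);
	if (!alias)
		return -ENOMEM;

	gdt_descr.address = (unsigned long)alias;
	gdt_descr.size    = 0xffff;	/* already matches the post-exit limit */
	load_gdt(&gdt_descr);
	return 0;
}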