Having read several papers on the exploitation of cache latency to defeat
ASLR (kernel or not), it appears that disabling the rdtsc instruction is a
good mitigation on x86. However, some applications can legitimately use it,
so I would rather suggest restricting it to root instead.

The idea is simple: we set CR4_TSD in %cr4. The first time an application
uses rdtsc it faults, and we look at the creds of the lwp: if it is root, we
remove CR4_TSD from %cr4 and return to userland, where the instruction is
re-executed (and won't fault this time); otherwise we send a segfault.
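The fault-handler logic above could be sketched as follows. This is only an illustration: the function and field names (`rdtsc_fault`, `euid`) are hypothetical, and the %cr4 accessors are stubbed with a plain variable so the decision logic can be exercised outside a kernel.

```c
#include <stdint.h>

#define CR4_TSD 0x00000004  /* bit 2: Time Stamp Disable */

/* Stubbed CPU state for illustration; a real kernel would read and
 * write the live %cr4 register instead of this variable. */
static uint32_t fake_cr4 = CR4_TSD;
static uint32_t rcr4(void) { return fake_cr4; }
static void lcr4(uint32_t v) { fake_cr4 = v; }

/* Hypothetical lwp with just the credential we need; a real kernel
 * would go through its credential API. */
struct lwp {
	int euid;
};

/* Called on the #GP fault raised by rdtsc when CR4_TSD is set.
 * Returns 1 to re-execute the faulting instruction, 0 to deliver
 * a segfault. */
static int
rdtsc_fault(struct lwp *l)
{
	if (l->euid != 0)
		return 0;                /* not root: send SIGSEGV */
	lcr4(rcr4() & ~CR4_TSD);         /* allow rdtsc from now on */
	return 1;                        /* iret re-executes rdtsc */
}
```

A non-root caller keeps CR4_TSD set and gets the segfault; a root caller clears the bit once and every later rdtsc executes without faulting.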

Obviously we need to take care of context switches, but that's not a big
deal. The result is that a process must have super-user rights the first time
it uses rdtsc. That seems sufficient to mitigate these side channels - if an
attacker can already execute super-user code we're fucked anyway - and it
still leaves userland a way to use rdtsc, as root.
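The context-switch side could look like the sketch below, where `p_tsc_allowed` is a hypothetical per-process flag recording that the root check has already succeeded, and %cr4 is again stubbed as a plain variable.

```c
#include <stdint.h>

#define CR4_TSD 0x00000004  /* bit 2: Time Stamp Disable */

/* Stubbed %cr4; a real kernel would touch the actual register. */
static uint32_t cr4 = CR4_TSD;

/* Hypothetical per-process state; the name is illustrative. */
struct proc {
	int p_tsc_allowed;  /* set once the root check has passed */
};

/* On context switch, load the incoming process's TSD state so that
 * clearing CR4_TSD for one root process does not leak rdtsc access
 * to every other process scheduled on the same CPU. */
static void
cpu_switch_tsd(const struct proc *newp)
{
	if (newp->p_tsc_allowed)
		cr4 &= ~CR4_TSD;
	else
		cr4 |= CR4_TSD;
}
```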

What about this?

Note: rdtsc is not a serializing instruction, but that does not matter here,
since iret is; so returning to userland with this method does not reduce the
accuracy of the counter.
