> > Therefore, W^X has always been a policy for software to follow. Meaning,
> > the libraries won't ask for WX, ld.so won't ask for WX, nothing will.
> > If something wants to shoot itself in the foot, we could not stop it,
> > because well.. firefox was asking for it until a few months ago...
>
> Yes, we actually have fairly strict W^X enforcement as an option (which
> can still be tricked by aliasing), and there's an exception for Firefox
> in it.
In OpenBSD, there is nowhere to "mark" a binary with a "knob" to say
whether it may do that, or not. We don't have an outside subsystem
keeping track of knobs, nor does our filesystem have markers (because
NFS). The only comparable mechanism is that pledge(2)'d software
cannot set X (PROT_EXEC) unless it has requested the "prot_exec" promise.
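For illustration, a minimal sketch of what that pledge(2) restriction
looks like in practice (the promise strings are the real ones; the
program itself is hypothetical):

    #include <sys/mman.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Pledge "stdio" only; we never ask for "prot_exec". */
        if (pledge("stdio", NULL) == -1)
            err(1, "pledge");

        /*
         * Under the pledge above the kernel refuses this request and
         * kills the process for the violation; it would be permitted
         * only if we had pledged "stdio prot_exec".
         */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_EXEC,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        return 0;
    }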
We don't have such a marking mechanism, and never used one for
previous security advances. We did not find it necessary -- we simply
jumped forward and mandated the newer strict behaviour, or acceptance
of greater object randomization. The rules changed: that thing you do
is no longer allowed, go fix your code... Yes, it is a luxury that we
can do this.
Mandatory W^X could be handled the same way, but it requires heavy
lifting in the final pieces of (monster) software which request W|X.
Generally these are JIT engines; I believe that is due to a meme which
developed back in ~2000 that mprotect X/W flips are expensive (they
were on some systems; that was a bug).
There are not many pieces of software left, but fixing them will
require investment.
> So that the process cannot make memory W|X even if some code
> is injected into it, and use that to inject parasitic code?
If code has been injected, and it then does a W|X allocation, what's
the point? Code has already been injected; the attacker does not need
to do this. There are other avenues for such an attacker: he does not
need to create a W|X memory segment to gain further benefit, since he
is already running his own code -- mmap PROT_WRITE, place data,
mprotect PROT_EXEC. In general, once an attacker is in control, we
don't need to investigate complex avenues.
The prot parameter in the code flow reaching mmap/mprotect is
invariably a static constant, and not easily influenced.
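As a sketch of that sequence -- and of why the prot arguments are
static constants rather than something an attacker can steer -- a
W^X-respecting code emitter looks roughly like this (names are
illustrative, not taken from any particular project):

    #include <sys/mman.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Map writable, place the data, then flip to executable.  The
     * prot values are compile-time constants; nothing here lets
     * outside input ask for PROT_WRITE|PROT_EXEC.
     */
    void *
    emit_code(const void *code, size_t len)
    {
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);

        if (buf == MAP_FAILED)
            return NULL;
        memcpy(buf, code, len);
        if (mprotect(buf, len, PROT_READ | PROT_EXEC) == -1) {
            munmap(buf, len);
            return NULL;
        }
        return buf;
    }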
> My expectation is that once an attacker can force a process to do that,
> they can also perform the mprotect after the copy of the injected code,
> or use some other mechanism to install the parasite (dlopen, for
> example). Lack of a W|X mapping would not be a substantial hurdle at
> this point.
EXACTLY.
> And parasites probably aren't that relevant as a threat
> until you have an ecosystem of various forms of host-based intrusion
> detection.
Exactly.
And that's why W^X as a programmer policy has been effective.
The programs which still request W|X memory are essentially following
bad practice, and creating a knob which we set for the "good programs"
and leave off for the "bad programs" amounts to no more than a
"quality assessment" marker.
Mandating W^X for chrome will simply break chrome. Then the "knob"
gets turned off. The existence of a knob will not influence the
chrome developers to move towards a W^X policy; it is like waggling a
stick in front of them, with them laughing that all the users flip the
knob the other way.
We need to socialize mandatory W^X in such communities. We've been
doing this for quite a while.
> Thanks for the explanation. It would still be useful for testing
> purposes, I think, to find any transient W|X mappings which don't show
> up in /proc.
In portable software, a grep for PROT_EXEC finds almost all the work
which still needs to be done...
Fixing them, that's another matter. W|X-using software tends to be on
the large side (like chrome), and the communities around them have to
start believing in this policy and applying it to their software --
hopefully realizing that W|X mappings are not really on the hot-path
for most JIT engines. Basically those projects have to invest time
making such changes.
But I am repeating myself..
> > Well, alias mappings are generally an unsafe practice; in a ROP attack
> > environment it is likely that variables -- pointing towards the
> > aliased space -- will be found in registers... or at least registers
> > pointing at some object ... which points at some object ... which
> > knows where the alias space is..
>
> Oh. But once one uses PC-relative addressing to reach data (both
> read-only and read-write), then data pointers leak code address
> information, too. And if you don't use PC-relative addressing, the
> address has to come from somewhere else.
Imagine a pointer to a structure like { void *x_mem; void *w_mem; }
being valid at the point an attacker finds a bug; then all bets are
off. From a high-level language, it is not possible to control nor
measure whether there is dangerous leakage. Similar situations can
occur even if a high-level programmer tries to be cautious and avoid
such a structure, because CPUs with a lot of registers carry
substantial register damage far through the call chain. This
component of modern attack methodology is not well known by the
programming community... the best practice for JIT is to do mprotect
flips. It will cost a bit of cpu, but in real life there are
different kinds of costs, aren't there, and users who upgraded from a
celeron deserve a bit better.
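To make that concrete: rather than keeping a long-lived
{ x_mem, w_mem } alias pair around for the lifetime of the process
(where any leaked pointer to it hands an attacker both views), a JIT
can briefly flip the single mapping whenever it needs to patch.  A
sketch, with illustrative names:

    #include <sys/mman.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Patch bytes inside an already-executable region by making it
     * writable only for the duration of the write.  The region is
     * never W|X, and no second writable alias ever exists.
     */
    int
    patch_code(void *region, size_t region_len, size_t off,
        const void *bytes, size_t len)
    {
        if (mprotect(region, region_len, PROT_READ | PROT_WRITE) == -1)
            return -1;
        memcpy((char *)region + off, bytes, len);
        return mprotect(region, region_len, PROT_READ | PROT_EXEC);
    }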