Martin wrote:
On 4/4/21 11:38 PM, Jacob Bachmeyer wrote:
Martin wrote:
In a perfect world, if everything is reproducible, then all
compilations are deterministic. It means that, for a given
environment, your source code will always produce the same binaries.
Briefly, the DDC method uses a mix of different environments in order
to analyze the binary patterns of the same source code.
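(For anyone unfamiliar with DDC, the two-stage shape of the check can
be sketched in a few lines of Python. The compiler names and file
paths below are placeholders, not a real invocation; this only shows
the structure, not the full flag handling a real build would need.)

    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def compile_with(compiler, source, output):
        # Placeholder invocation; a real compiler build needs full flags.
        subprocess.run([compiler, source, "-o", output], check=True)

    # Stage 1: build the compiler's own source with an independent,
    # trusted compiler from a different environment.
    compile_with("trusted-cc", "compiler.c", "stage1")
    # Stage 2: use the stage-1 result to rebuild the same source.
    compile_with("./stage1", "compiler.c", "stage2")
    # With deterministic builds, stage 2 should be bit-identical to the
    # compiler binary under test; a mismatch suggests tampering.
    print("match" if sha256("stage2") == sha256("cc-under-test") else "MISMATCH")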
The downside of this is that we are right back to a binary
monoculture, and an exploit that works on one machine would be
trivially guaranteed to work everywhere. We really need some kind of
controlled randomization that allows provably equivalent executables
to be produced, but such that exploits relying on hardcoded offsets
will only work on a limited subset.
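As a toy illustration of what I mean (the object file names are
invented, and a real implementation would randomize at the linker or
compiler level rather than in a script like this):

    import random

    # Shuffle object-file link order under a recorded per-build seed:
    # each seed still gives a fully reproducible binary, but different
    # seeds give different internal code layouts.
    objects = ["main.o", "parser.o", "net.o", "crypto.o", "util.o"]

    def randomized_link_order(seed):
        rng = random.Random(seed)   # deterministic for a given seed, so
        order = list(objects)       # a specific binary is reproducible
        rng.shuffle(order)          # from (source, seed)
        return order

    # Different seeds yield different layouts, so a hardcoded-offset
    # exploit tuned to one layout fails on the others.
    print(randomized_link_order(seed=1))
    print(randomized_link_order(seed=2))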
I don't understand what you mean by "binary monoculture" in this
context; can you elaborate on it, please?
Exploits are easier to develop when hardcoded offsets, virtual
addresses, etc. can be used. In a "binary monoculture" environment,
that is possible. This contributes to and worsens security problems in
proprietary software, which is almost always distributed as a single
identical set of binaries.
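A toy example of the attacker's side (both the padding length and the
address below are invented, and would only ever be valid for one exact
binary layout):

    import struct

    # Classic overflow payload embedding an absolute address.
    TARGET_ADDR = 0x080491e2                      # valid for one layout only
    payload = b"A" * 76 + struct.pack("<I", TARGET_ADDR)

    # On a binary monoculture, TARGET_ADDR is correct on every machine;
    # if layouts differ per build, this payload only works where the
    # victim's binary matches the attacker's copy.
    print(payload.hex())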
Reproducible builds are useful for validating the compiler, but there is
a potential downside in that they make any exploit that can be found in
the reproducibly built program much more reliable, since everyone will
have exactly identical binaries. Note that this is an identical risk
with binary distributions: if you simply install the binaries from
Debian, an exploit can be tuned to Debian's version of that binary and
it will work on your machine.
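A minimal sketch of that double edge (the path and reference hash are
placeholders): the same hash check that lets a user verify integrity
also tells an attacker exactly which binary is running:

    import hashlib

    # With bit-identical distribution binaries, one published hash
    # identifies the exact binary on every machine.
    KNOWN_DISTRO_HASH = "0000...placeholder"

    def sha256_of(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    local = sha256_of("/usr/bin/someprogram")
    # A match is good for integrity checking, but it also means an
    # exploit tuned to this exact binary will work here.
    print("identical to distro binary" if local == KNOWN_DISTRO_HASH else "differs")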
-- Jacob