On 09/07/2015 21:52, Karl Vogel wrote:
On 08/07/2015 20:23, Salz, Rich wrote:
   > 1. Is there any good reason to remove this code?

R> Yes.  If it's not tested, reviewed, or in general use, then it's
R> more likely to be harmful (source of bugs) than useful.

On Wed, 08 Jul 2015 20:47:43 +0200, Jakob Bohm replied:
J> That's an overly general criterion...

    Nope, Rich is right on the money.
You are obviously quoting others without deep understanding.

J> To objectively consider the potential harm of rarely used code,
J> one must clearly determine if there is any way this code could be
J> invoked inadvertently or remotely.

    How do stack-smashers work?  Don't they trick a system into running
    part of a program inadvertently, often with elevated privileges?
Actually, they mostly work by tricking a system into
running code that was *part of* the stack smasher itself.
The second most popular option is to reuse parts of the
general system code (libc etc.) loaded in every process,
as in return-to-libc attacks, because attackers like
their attack code to be reusable across different
victims.  Reusing part of whichever program or library
had the remote code execution flaw is typically last on
their list, because it is so much more work.
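
To make the first option concrete, here is a minimal
sketch (a hypothetical function, not taken from any real
codebase) of the classic vulnerable pattern a stack
smasher targets:

    #include <string.h>

    /* Hypothetical request handler: the fixed-size stack
     * buffer plus the unbounded copy is the classic
     * target.  A request longer than 64 bytes overwrites
     * the saved return address, and traditionally the
     * attacker's own payload, carried in the same
     * request, is what ends up being executed. */
    void handle_request(const char *req)
    {
        char buf[64];

        strcpy(buf, req);   /* no length check */
        /* ... process buf ... */
    }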

    How many of us build and run OpenSSL using compiler optimization?
    Have a look at http://pdos.csail.mit.edu/~xi/papers/stack-sosp13.pdf
    From the blurb:

      What if you put security into your code but your compiler took it
      out without you realizing it?  That's exactly what's happening when
      you use most compilers on the market, according to researchers at
      MIT as disclosed in a 2013 paper.

    The authors describe security operations (null pointer checks,
    buffer overflow safeguards, etc.) that the compiler sees as
    unnecessary and hence removes.  I don't know that this is
    actually happening anywhere in the codebase, but it's a *big*
    codebase, and that's the problem.
That paper was hopefully a major wake-up call for
compiler writers; there is little anyone else can do
about it (short of writing pure assembler or turning off
optimizations, both very ugly "solutions").
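
For illustration, one class of check the paper discusses
(paraphrased here, not actual OpenSSL code): a bounds
check that relies on pointer overflow, which is undefined
behaviour in C, so an optimizing compiler may assume the
overflow cannot happen and delete the check entirely.

    #include <stddef.h>

    int in_bounds(const char *buf, size_t len,
                  const char *buf_end)
    {
        /* Intended as an overflow guard, but pointer
         * overflow is undefined behaviour, so the
         * compiler may silently remove this test. */
        if (buf + len < buf)
            return 0;
        return buf + len <= buf_end;
    }
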
    How about the NTP reflection attacks we saw recently?  From
    http://www.mail-archive.com/tech@openbsd.org/msg21729.html

      [...] openntpd is a modern piece of code <5000 lines long written
      using best known practices of the time, whereas ntp.org's codebase
      is reportedly 100,000 lines of unknown or *largely unused code*,
      poorly smithed in the past when these kinds of programming mistakes
      were not a significant consideration.
That is a generalization beyond relevance.  Yes, ntpd
contains lots of hard-to-fathom code, and yes, some of
that may have been involved in attacks.  But most of the
recent ntpd-related attacks didn't actually involve bugs
in the code *at all*.

Those were attacks on the protocol and on incompetent
ISPs not implementing standard anti-spoofing filters
(BCP 38).  By sending a *valid* but obscure query with a
falsified return address, attackers got the NTP servers
to send *valid* but much larger replies to the victims
whose addresses had been forged.  The primary changes
added to ntpd were to actively detect and block overly
frequent info queries claiming to come from the same
address.
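
A minimal sketch of that kind of mitigation (hypothetical
code, not the actual ntpd implementation, which is more
elaborate and also uses rate-limiting KoD responses):

    #include <stdint.h>
    #include <time.h>

    #define SLOTS        1024
    #define MIN_INTERVAL 2      /* seconds per source */

    struct slot { uint32_t addr; time_t last; };
    static struct slot table[SLOTS];

    /* Drop an info query if the same claimed source
     * address was seen too recently; better to drop it
     * than to act as an amplifier aimed at a spoofed
     * victim. */
    int allow_query(uint32_t src_addr)
    {
        struct slot *s = &table[src_addr % SLOTS];
        time_t now = time(NULL);

        if (s->addr == src_addr &&
            now - s->last < MIN_INTERVAL)
            return 0;
        s->addr = src_addr;
        s->last = now;
        return 1;
    }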

Openntpd just happened not to support the diagnostic
protocol commands used in the attacks; it was too simple
to fall victim.  The code in question was probably some
of the most heavily tested in ntpd, since its heaviest
users are the NTP expert teams diagnosing and fine-tuning
production servers.

J> For example the heartbeat code was obviously callable from network
J> packets (even if it had no bugs), so needed this kind of cleanup,

    Was this only obvious after the fact?
By definition, this code was intended to handle specific
network packets and generate responses.  The bug was a
massive input validation failure.  The code could *only*
be invoked from the network.
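
For reference, a sketch of the validation that was
missing (field layout per RFC 6520; the names are
illustrative, not the actual OpenSSL code):

    #include <stddef.h>

    /* A heartbeat record is 1 type byte, 2 length bytes,
     * the payload, and at least 16 bytes of padding.  The
     * bug was echoing back 'claimed' bytes of payload
     * without checking the claim against what actually
     * arrived on the wire. */
    int heartbeat_length_ok(size_t rec_len, size_t claimed)
    {
        return 1 + 2 + claimed + 16 <= rec_len;
    }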

J> while the original eay DES API is only invokable from code that
J> knows about it, and would thus not need to be removed for lack of
J> use/testing.

    What about Apple's SSL/TLS bug (AKA CVE-2014-1266, or the "goto fail
    bug") in February 2014?  That was caused by unreachable code that
    needed to be reached in order to work properly.  The point is, more
    code == more eyes and mind-share that have to be devoted to finding
    unintended consequences.
I have not reviewed that in detail, but it sounds like
there was a bug in a primary code path, not in a rarely
invoked separate function.
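
For reference, a self-contained analogue of that bug
(with hypothetical step functions; the real code was in
Apple's sslKeyExchange.c): the indentation suggests the
second "goto fail" is conditional, but it is not, so the
check below it becomes unreachable and err still holds 0
when the function jumps to fail.

    static int step_one(void) { return 0; }
    static int step_two(void) { return 1; } /* would fail */

    static int verify(void)
    {
        int err;

        if ((err = step_one()) != 0)
            goto fail;
            goto fail;          /* always taken */
        if ((err = step_two()) != 0)   /* unreachable */
            goto fail;

    fail:
        return err;     /* 0: reports success anyway */
    }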

    Have a look here for more reasons to trim out old code:
    
http://www.techrepublic.com/blog/software-engineer/why-you-need-to-clean-out-dead-code-paths/
Just some guy's opinion on a site that carries all sorts
of opinion pieces.  Not even worth reading.
    CliffsNotes version:

   * Code changes get ugly because you are trying to keep orphaned code in
     line with the rest of the system, but there is often no real regression
     testing or anything else.
Applies in general, but may or may not apply to any
specific case, and therefore must be evaluated on a
case-by-case basis.
   * Maintaining code after a long period away from it (or by someone else)
     is very difficult, because no one really knows why a piece of code
     is there, they just know that it is there.
This is equally an argument not to remove unknown code:
if you don't understand it, you don't know what you are
breaking.
   * The code is no longer a faithful representation of the business logic,
     because it contains logic that the specifications and business logic
     are not aware of.
This assumes that there is a specification, *and* that
this specification does not cover the code in question.
It also assumes a completely different world (enterprise
development as opposed to general-purpose development,
where the term "business logic" is nonsense).

In contrast, the code in question implements an actual
specification, and is there (among other things) to
exchange data with anyone else using that specification.
The discussion arose because someone wants to stop
supporting that specification because *he* doesn't know
its purpose.
   * It presents potential security risks, as unmaintained code can be
     reached (especially in Web applications, where tweaking parameters
     may trigger something you never intended).
This is not a web application.  This code is not
reachable except by explicit reference.  It may or may
not be reachable via a format-detecting data import
function or a format-selecting output function, in which
case it may be debatable whether it should be demoted to
explicit invocation only (as in data conversion programs
and programs that specifically need that format).  A
hypothetical sketch of that distinction follows below.
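
To illustrate (the names and magic bytes here are made
up): a format-*detecting* importer lets crafted input
steer parsing into rarely used legacy code, whereas a
demoted format is only reachable through an explicitly
named entry point.

    #include <stddef.h>
    #include <string.h>

    typedef enum { FMT_UNKNOWN, FMT_PEM, FMT_LEGACY } fmt_t;

    fmt_t detect_format(const unsigned char *buf, size_t len)
    {
        if (len >= 10 && memcmp(buf, "-----BEGIN", 10) == 0)
            return FMT_PEM;
        /* attacker-supplied input selects the code path: */
        if (len >= 4 && memcmp(buf, "\1LGC", 4) == 0)
            return FMT_LEGACY;
        return FMT_UNKNOWN;
    }

    /* Demoting the legacy format means deleting its arm
     * above while keeping a separately named entry point
     * for the conversion tools that genuinely need it. */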

   OpenSSL is a critical part of security in too many places for us to
   take on any unnecessary technical debt.
This is a somewhat empty argument as long as no one
bothers to properly determine whether a piece of code is
a debt or an asset.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  http://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

_______________________________________________
openssl-users mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-users
