On Mon, Feb 03, 2014 at 03:09:24PM -0800, John Adams wrote:
> Reality: You don't understand business nor threat modeling.

Reality: I understand both *painfully* well, having worked for, and
consulted for, a number of Fortune 100 companies, several major
universities, a few ISPs, and assorted government agencies over a very
long period of time.

This is not my first day on the job.

> Microsoft is, unfortunately, the backbone of most world-wide business.

That is one of the (many) painful things that I understand.  I also
recognize that it's a serious strategic error -- albeit a very
common one.  Anyone who does not immediately recognize it as a massive
blunder clearly requires a great deal of remedial education.

> Additionally, your statement of: "Closed-Source software cannot be secured"
> -- I prefer open source software but I disagree that it cannot completely
> be secured. It depends only on the motivation, financial resources, and
> merit of the company attempting to secure said software. 

I suggest you read the link I provided.  It is quite impossible to
secure closed-source software, because by its very nature it cannot be
subjected to independent public review -- so any such claims may be
instantly dismissed, with prejudice.  And not only "may be", but "must
be", because they are clearly fraudulent on their face.

The motivation doesn't matter.  The financial resources don't matter.
And the merits (or lack thereof) of the company (or university,
or government, or nonprofit, or whatever) don't matter.  These are
all totally irrelevant.  If they set for themselves the problem of
"securing closed-source software" then they have chosen a problem which
not only doesn't have a known solution, but *cannot* have a solution.

Let me quote Cory Doctorow from something just published this week,
as he expresses this in a pointed and entirely apropos manner:

        "Designing a security system without public review is a fool's
        errand, ensuring that you've designed a system that is secure
        against people stupider than you, and no one else."

        Excerpted from:

        What happens with digital rights management in the real world?
        http://www.theguardian.com/technology/blog/2014/feb/05/digital-rights-management

        (which, by the way, is worth reading in its entirety)

This isn't a Microsoft-specific problem, although, thanks to the
widespread infestation of their products, they illustrate it on a
grand scale.  The same is true of Apple and Adobe and myriad others.
That is why discerning operations that actually care about security
don't permit these inferior products to contaminate their environments,
and why incompetent operations that want to make vague noises about
security while doing nothing truly meaningful use them in profusion.

Let me give you one example -- out of thousands of possible ones --
that illustrates why "it depends only on the motivation, financial resources,
and merit of the company attempting to secure said software" is dead wrong.

Let's talk about Adobe Acrobat.  It's probably Adobe's most widely-used
product.  It's so ubiquitous that a kazillion web sites with PDFs say
"you'll need Acrobat" instead of the more accurate "you'll need a
PDF reader".  Adobe has piles of money.  Adobe has piles of employees.
They have every resource that they could possibly need to produce a simple
piece of software that reads (formatted) bytes and draws on a screen.
And they certainly have the motivation, since their name is on it
and it's installed all over the planet.

And yet it's absolute crap.  It's one of the worst pieces of commonly-used
software out there.  It has a security track record that is nothing short
of appalling.  Doubly so when you consider how long Adobe has taken to
fix some of the glaring holes in it.  You can almost set your watch by
the routine occurrence of zero-days in it.

(If you doubt this, go search for terms like "Adobe Acrobat security
hole zero-day".  Make coffee first, because you're going to be reading
for a while if you actually perform a reasonable amount of due
diligence and work your way through its sordid history.  After five
or six hours, I think the picture will be quite clear.)
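
(Or, if you'd rather quantify it than wade through search results: the
National Vulnerability Database offers a public keyword-search API.
Below is a minimal sketch in Python -- assuming NVD's CVE API 2.0,
which postdates this thread, so treat the endpoint and field names as
illustrative rather than gospel -- that simply counts the CVE entries
matching "Adobe Acrobat".

    # Count CVE entries mentioning a keyword via the NVD CVE API 2.0.
    # (Endpoint and JSON field names are assumptions based on NVD's
    # current public API; no API key is needed for low-volume queries.)
    import json
    import urllib.parse
    import urllib.request

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def count_cves(keyword: str) -> int:
        """Return the total number of CVEs matching the given keyword."""
        query = urllib.parse.urlencode({
            "keywordSearch": keyword,
            "resultsPerPage": 1,  # we only need the total, not the records
        })
        with urllib.request.urlopen(f"{NVD_URL}?{query}") as resp:
            return json.load(resp)["totalResults"]

    if __name__ == "__main__":
        print(count_cves("Adobe Acrobat"))

Even a crude count like this makes the point without the five or six
hours of reading.)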

Adobe could pour all of its enormous financial and personnel
resources into fixing it and they would *still* fail.  This is
obvious on inspection, as math texts like to say, because it
has nothing to do with their motivation, resources or merits,
and everything to do with the fact that it's closed-source.

So, to bring this back to the discussion thread: Stanford clearly
doesn't understand the first principles of security, because *if they
did*, they wouldn't be fiddling around with the worthless junk they're
deploying now -- they would be attacking the problem at its root, by
excising all closed-source software from the campus.

Instead they're doubling down on failure.  They're going to spend a lot
of money and a lot of time, they're going to invade the privacy of their
staff and students, and in the end, they're going to fail anyway because
what they're doing is making their environment LESS secure.

But I'm sure when that inevitable day comes, they'll (a) blame "hackers"
and (b) go right back to what they're doing.

As Marcus Ranum so beautifully put it:

        "Information security's response to bitter failure,
        in any area of endeavour, is to try the same thing
        that didn't work -- only harder."

---rsk