On 10/13/06, David A. Wheeler <[EMAIL PROTECTED]> wrote:
> mikeiscool claimed:
> > Secure programming is good programming.
> > Most books teach good programming.
>
> I strongly disagree with you, on both counts.

As is your right :)


> At the least, those who say they practice good programming
> practices, and books that say they teach good programming
> practices, are GROSSLY INADEQUATE for developing secure programs.

Sure.


> First, "Secure programming is good programming":
> Most people and organizations who claim that they perform
> good programming practices do NOT perform practices
> necessary for security.  You could argue that therefore they
> aren't really doing "good programming", but that doesn't
> help fix anything.

That's not an argument against it. I believe it will help anyway if
you can say to the public: "this program is written badly" instead of
"this program is written insecurely". Almost no one wants to hear the
second version. If they ask why, you can explain. But keep it simple.


> And organizations STILL generally
> place features over security, and when there's a perceived
> conflict, security almost always loses... and they'll STILL
> say that they practice "good programming" practices (they just
> happen to correctly implement insecure programs).

You can design an insecure program that is coded well, and hence
insecure only in the ways you intend. You are conflating secure/good
programming with secure/good design.


> Here are some practices you should typically be doing
> if you're worried about security, and note that many are
> typically NOT considered "good programming"
> by the general community of software developers:
> * You need to identify your threats that you'll counter (as requirements)
> * Design so that the threats are countered, e.g., mutually
>    suspicious processes, small trusted computer base (TCB), etc.
> * Choose programming languages where you're less likely to
>    have security flaws, and where you can't (e.g., must use C/C++), use extra
>    security scanning tools and warning flags to reduce the risk.
> * Train on the specific common SECURITY failures of that
>    language, so you can avoid them. (e.g., gets() is verboten.)
> * Have peer reviews of the code, so that others can help find
>    problems/vulnerabilities.
> * Test specifically for security properties, and use fuzz testing
>    rigged to test for them.
> Few of these are done, particularly the first two. I'll concede
> that many open source software projects do peer reviews, but you
> really want ALL of these practices.

Yep.


> Next, "Most books teach good programming." Pooey, though I wish they did.

Okay, that was wishful thinking.


> I still find buffer overflows in examples inside books on C/C++.
> I know the first version of K&R used scanf("...%s...", ...) without noting
> that you could NEVER use this on untrusted input; I think the
> second edition used gets() without commenting on its security problems.
> A typical PHP book is littered with examples that are XSS disasters.

Fair enough.


> The "Software Engineering Body of Knowledge" is supposed to
> summarize all you need to know to develop big projects.. yet
> it says essentially NOTHING about secure programming
> (it presumes that all programs are stand-alone, and never connect
> to an Internet or use data from an Internet - a ludicrous assumption).
> (P.S. I understand that it's being updated, hopefully it'll correct this.)
>
> I'd agree that "check your inputs" is a good programming
> practice, and is also critically important to secure programming.
> But it's not enough, and what people think of when you say
> "check your inputs" is VERY different when you talk to security-minded
> people vs. the general public.

Yep.


> One well-known book (I think it was "Joel on Software") has some
> nice suggestions, but strongly argues that you should accept
> data from anywhere and just run it (i.e., that you shouldn't
> treat data and code as something separate). It claimed that sandboxing
> is a waste of time, and not worthwhile, even when running code from
> arbitrary locations... just ask the user if it's okay or not
> (we know that users always say "yes"). When that author thinks
> "check your inputs", he's thinking "check the syntax" -
> not "prevent damage".  This is NOT a matter of "didn't implement it
> right" - the program is working AS DESIGNED.  These programs
> are SPECIALLY DESIGNED to be insecure.  And this was strongly
> argued as a GOOD programming practice.

Which is clearly wrong.


> > People just don't care.
>
> There, unfortunately, we agree.  Though there's hope for the future.
>
> --- David A. Wheeler

-- mic
_______________________________________________
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
