Tim Hollebeek writes...

> Really, the root of the problem is the fact that the simple version
> is short and easy to understand, and the secure version is five
> times longer and completely unreadable. While there always is some
> additional complexity inherent in a secure version, it is nowhere
> near as bad as current toolkits make it seem.
I would say that secure versions that are *not* well thought out (particularly where security wasn't part of the original design) may tend to be FIVE times longer, but I don't think that's the typical case with code that is well designed. These security checks can be modularized and reused like most other code. However, it may very well be that it is five times more difficult to develop the examples in the first place, and THAT'S probably a major reason that we don't see it more often in example code.

> Demo code generally demonstrates some fairly powerful capability;
> the reason it is often short and sweet is because lots of effort
> has gone into making it possible to do useful things with minimal
> effort. Unfortunately, it is often the case that much less effort
> has gone into making it possible to do the same thing securely, so
> that code is quite a bit longer. You're right, if there was more
> of a pushback against broken demo code, maybe more effort would go
> into making it easy to do things securely, instead of insecurely.

Well, I'm going to start pushing back when I can. Tomorrow I get the chance to bend the ear of some security folks from the Live.com site. I'm definitely going to be letting them know of my dissatisfaction wrt recent MS Atlas training and asking what I can do about it (other than completing the training evaluation). Every little bit helps.

As Ed Reed pointed out, as an industry we did manage to get rid of computed gotos, spaghetti code, etc., so maybe there's hope. (But the pessimist in me says that it's probably easier to get people to _stop_ doing some poor practice than to start doing some good practice. I hope I'm wrong there.)-:

For the moment, perhaps all we can do is try to publicly shame them and bring about peer pressure that way. I dunno. I think it's primarily a people problem rather than simply a technological one, which is why it's so hard to solve.
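To illustrate the point about modularizing and reusing security checks, here's a minimal sketch (my own, not from the original post; the helper names are hypothetical): once the validation logic lives in one reusable function, the "secure version" of a call site grows by a single line rather than by a factor of five.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical reusable check: reject untrusted filenames that contain
 * directory components or "..", so every caller gets a one-line guard
 * instead of repeating the validation logic at each fopen() site. */
static bool safe_basename(const char *name)
{
    if (name == NULL || *name == '\0')
        return false;
    if (strchr(name, '/') || strchr(name, '\\'))
        return false;          /* no directory separators allowed */
    if (strstr(name, ".."))
        return false;          /* no parent-directory traversal */
    return true;
}

/* Call site: the secure version is barely longer than the naive one. */
static FILE *open_upload(const char *untrusted_name)
{
    if (!safe_basename(untrusted_name))
        return NULL;           /* reject, don't guess */
    return fopen(untrusted_name, "rb");
}
```

The point isn't this particular check, it's the structure: the bloat lives in one audited helper, and the demo-sized call site stays demo-sized.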
So doing things like showing people secure coding idioms and secure design patterns (a la Marcus Schumacher) will only have minor impact until there is some major attitude change with both developers and upper management.

> I think part of the problem is that people have fallen into the
> trap of thinking that security is supposed to be hard and that
> checking all your errors is supposed to bloat your code by a factor
> of five, instead of wondering why library functions are designed
> in such a way that omitting complex logic around them fails in an
> insecure way. Secure code can be short and sweet, too, just not
> with most of the languages and tools that are currently popular.

That's definitely a large part of it. Historically, most libraries haven't taken much of a security slant unless they've been crypto related. Most often, they first become well entrenched, and *then* there's an outpouring of security vulnerabilities discovered as the library's usage builds up a critical mass. E.g., libc was this way. It wasn't until the Morris Internet worm in 1988 that people really started paying much attention to libc security issues. By that time libc was pretty much everywhere and, what's worse, there wasn't really any viable alternative unless you wanted to roll your own. (That was long before GNU was prevalent.) And yes, buffer overflows were known very early, before Unix/C were widespread. But it was a different world then.

> This is an old, old problem. strcpy is insecure, and any code
> involving strncpy or a length check will be longer and/or more
> complex. But this is really just an artifact of the fact that
> buffers don't know their own length, making an additional check
> necessary. There is no reason why the secure version couldn't
> have been just as short and sweet, it just wasn't done.

Or when strcpy() and its ilk were originally written, no one was concerned about buffer overflows...they were more concerned with program speed and size.
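To make the strcpy() point concrete, here's a small sketch (my illustration, not from Tim's post) of the kind of bounded copy the secure idiom requires. Because C buffers don't carry their own length, the caller must pass it in; the helper below is similar in spirit to BSD's strlcpy(), which was never part of ISO C:

```c
#include <string.h>

/* Copy src into dst, writing at most dstsize bytes and always
 * NUL-terminating. Returns 0 on success, -1 if src had to be
 * truncated (callers can treat truncation as an error).
 * Contrast with strcpy(dst, src), which blindly trusts the caller
 * to have made dst big enough and overflows it otherwise. */
static int bounded_copy(char *dst, size_t dstsize, const char *src)
{
    if (dstsize == 0)
        return -1;                 /* nowhere even for the NUL */
    size_t n = strlen(src);
    if (n >= dstsize) {
        memcpy(dst, src, dstsize - 1);
        dst[dstsize - 1] = '\0';   /* truncated but still terminated */
        return -1;
    }
    memcpy(dst, src, n + 1);       /* n bytes plus the NUL */
    return 0;
}
```

A call site then looks like `if (bounded_copy(buf, sizeof buf, input) != 0) { /* reject */ }`: one line longer than the strcpy() version, which is exactly the "short and sweet" secure version the quoted text says could have existed from the start.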
The world changes. IMO, if you are still writing in an unsafe language like C or C++ when you don't really have to and are only using it because that's ALL YOU KNOW, then someone should take away your keyboard. Obviously there are legitimate reasons for using C/C++ and other "pointy" languages, but those reasons are holding less and less water every day. In the security class I've taught for the past 4.5 yrs or so, one of the things I tell my students is, "if you have a choice, select a 'safe' language like Java or C# where you don't need to worry about buffer overflows or heap corruption. Not only is it safer, but it is also likely to improve your productivity."

But at this point, I'd be (somewhat) okay with those instructing with example code showing it WITHOUT the proper security checks AS LONG AS:

1) They mention that these checks have been omitted to make the example code simpler to follow in the allotted time, and emphasize that if you even think about using any of these examples in production code without those security checks then not only are you a fool, but you should have your fingers amputated so you can no longer code again.

2) The distributed code (whether from a web/ftp site or on a CD/DVD) has ALL the proper security checks in place and also a comment warning not to remove them. (Gee, could we come up with a version of something like the GPL that would prohibit removing certain code fragments? Any lawyers out there?) For code that is likely to be used within IDEs, perhaps "hints" could be given in the code to the IDE that allow the code (but not the warning comment) to be hidden via collapsing, etc.

For live instruction, I think that this is a reasonable compromise. For the most part, that leaves the issue of printed matter. Arguably there's merit to both sides' arguments.
There, I would propose that the security checks be left in UNLESS the example is long and there is demo code WITH ALL the security checks on an accompanying CD/DVD at the back of the book...the assumption being that most will copy the code from the CD rather than typing it in or scanning it and then using OCR to convert it to text. There no doubt would be exceptions to that, but I think if everyone practiced this, then it would at least get us a lot farther along than we are now.

Also, perhaps we, as a group, could start posting what we consider to be GOOD (security-worthy) examples and contrasting them with POOR (insecure) examples. I'm not sure where we'd put them. We'd need a repository and some way to reasonably search through them.

-kevin
---
Kevin W. Wall
Qwest Information Technology, Inc.
[EMAIL PROTECTED]
Phone: 614.215.4788

"The reason you have people breaking into your software all over the place
is because your software sucks..."
 -- Former whitehouse cybersecurity advisor, Richard Clarke,
    at eWeek Security Summit

_______________________________________________
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php