On Thu, Nov 11, 2004 at 04:56:20PM -0500, ljknews wrote:

At 2:48 PM -0500 11/11/04, Paco Hope wrote:

On 11/11/04 11:46 AM, ljknews [EMAIL PROTECTED] wrote: As a software developer, I care about such issues, but the compilations you list are largely not applicable to the operating system and programming languages with which I work. I am still looking for a forum that omits those problems that stem from the choice of C and related programming languages that use null-terminated strings. I know that is a bad idea, and I don't do it. I am still looking for a forum that omits problems propagated over IP and related protocols. I don't do that either. I have yet to see a standard tool (as distinguished from a social-engineering technique) from elsewhere that fits VMS.

The RISKS Digest, http://www.risk.org/ (comp.risks), is about the closest. Although it is not security-focused, it does discuss system failures beyond buffer overflows and the TCP/IP protocol suite. It does not exclude the familiar risks (and documented failures) of buffer overflows, but it extends into numerous design-related failures that can have security implications transcending any given platform or language. Of course VMS is not immune to security risks. I know; I created more than one insecure piece of software for VMS (in-house stuff that is now retired).
I concur that security is more colorless than most of the other ilities. My point is that the other domains that serve up the non-functional requirements are colorless to some degree as well. So in terms of how the other ility domains approach the quantification and elaboration of the goals that emerge from their domains, and how they get those goals into the hands of architects and developers, there may be activities and artifacts there that we can learn from.

-gp

Quoting Jeff Williams [EMAIL PROTECTED]:

We certainly have a lot to learn from the other communities, but security is worse off than the other *-ilities, because it is more difficult to see. Consumers can tell which operating system is easier to use, and which one is faster, but there is no way to know which is more secure today. Until consumers can tell the difference between a secure program and one that is not, they will not pay more for the secure one. That means security is not going to appear on many managers' radar screens, and therefore developer awareness will never happen on a broad scale. In my opinion, the way out of this trap is to get more information to consumers about the security in software. Information such as how many lines of code, which languages, which libraries, the process used, the security testing done, and the mechanisms included can and should be disclosed.

--Jeff

----- Original Message -----
From: Gunnar Peterson [EMAIL PROTECTED]
To: Yousef Syed [EMAIL PROTECTED]
Cc: Secure Coding Mailing List [EMAIL PROTECTED]
Sent: Friday, November 12, 2004 6:58 AM
Subject: Re: [SC-L] How do we improve s/w developer awareness?

Making software secure should be a requirement of the development process. I've had the privilege of working on some very good projects where the managers emphasised security at the beginning of the project's life cycle because it was a requirement of the client. Making software secure absolutely should be part of the development lifecycle, and as early as possible, too.
My overall point was that if you talk to the people who really care about usability (as distinguished from just features), you will hear very similar frustrations about their ability to get what they consider true usability requirements into the end product. So in terms of learning from other communities, rather than beating our heads against the same wall, it can be helpful to learn from another *-ility community and see what approaches they have tried, successfully and unsuccessfully, to increase the quality of software from their viewpoint. My suggestion is that the problem is not just software security but runs a little deeper, to the main problem of software quality, of which security is one factor (albeit an important one). So what are the common threads between usability and security? For example, it is interesting to note that both communities seem to value early involvement in the development lifecycle and striving for simplicity in design. Software security does not need more barriers; to the extent that we can find allies with similar goals and issues in other communities (*-ility, business, compliance, or legal, by the way) and collaborate with them to communicate the value of quality, our chances of shipping better software increase.

-gp

Societies have invested more than a trillion dollars in software and have grotesquely enriched minimally competent software producers whose marketing skills far exceed their programming skills. Despite this enormous long-run investment in software, economists were unable to detect overall gains in economic productivity from information technology until perhaps the mid-1990s or later; the economist Robert Solow once remarked that computers showed up everywhere except in the productivity statistics. Quality may sometimes be the happy by-product of competition. The lack of competition for the PC operating system and key applications has reduced the quality and the possibilities of the user interface.
There is no need on our interface for a visible OS, visible applications, or for turning the OS, browsers, and e-mail programs into marketing experiences. None of this stuff appeared on the original graphical user interface designed at Xerox PARC. That interface consisted almost entirely of documents--which are, after all, what users care about. Vigorous competition might well have led to distinctly better PC interfaces--without computer administrative debris, without operating-system imperialism, without unwanted marketing experiences--compared to what we have now on Windows and Mac. Today nearly all PC software competition is merely between the old release and the new release of the same damn product. It is hard to imagine a more perversely sluggish incentive system for quality. Indeed, under such a system, the optimal economic strategy for market
I think we have to go one step further. It's nice to know what the attack patterns are. A better thing to do is to know how to identify them during threat modeling, and then apply safeguards to mitigate the risk. In other words, we need a merger of the thinking in Exploiting Software and Building Secure Software into a single source, where attack and defense can be discussed together. We all like to say that until you know the threats to which you are susceptible, you cannot build secure systems. The reality is that unless you know how to MITIGATE those threats, simply knowing they exist doesn't do much to protect the customer.

Gary McGraw wrote:

One of the reasons that Greg Hoglund and I wrote Exploiting Software was to gain a basic understanding of what we call attack patterns. The idea is to abstract away from platform and language considerations (at least somewhat), and thus elevate the level of attack discussion. We identify and discuss 48 attack patterns in Exploiting Software. Each of them has a handful of associated examples from real exploits. I will paste in the complete list below. As you will see, we provided a start, but there is plenty of work remaining to be done. Perhaps by talking about patterns of attack we can improve the signal-to-noise ratio in the exploit discussion department.

gem

Gary McGraw, Ph.D.
CTO, Cigital
http://www.cigital.com
WE NEED PEOPLE!
Make the Client Invisible
Target Programs That Write to Privileged OS Resources
Use a User-Supplied Configuration File to Run Commands That Elevate Privilege
Make Use of Configuration File Search Paths
Direct Access to Executable Files
Embedding Scripts within Scripts
Leverage Executable Code in Nonexecutable Files
Argument Injection
Command Delimiters
Multiple Parsers and Double Escapes
User-Supplied Variable Passed to File System Calls
Postfix NULL Terminator
Postfix, Null Terminate, and Backslash
Relative Path Traversal
Client-Controlled Environment Variables
User-Supplied Global Variables (DEBUG=1, PHP Globals, and So Forth)
Session ID, Resource ID, and Blind Trust
Analog In-Band Switching Signals (aka Blue Boxing)
Attack Pattern Fragment: Manipulating Terminal Devices
Simple Script Injection
Embedding Script in Nonscript Elements
XSS in HTTP Headers
HTTP Query Strings
User-Controlled Filename
Passing Local Filenames to Functions That Expect a URL
Meta-characters in E-mail Header
File System Function Injection, Content Based
Client-side Injection, Buffer Overflow
Cause Web Server Misclassification
Alternate Encoding the Leading Ghost Characters
Using Slashes in Alternate Encoding
Using Escaped Slashes in Alternate Encoding
Unicode Encoding
UTF-8 Encoding
URL Encoding
Alternative IP Addresses
Slashes and URL Encoding Combined
Web Logs
Overflow Binary Resource File
Overflow Variables and Tags
Overflow Symbolic Links
MIME Conversion
HTTP Cookies
Filter Failure through Buffer Overflow
Buffer Overflow with Environment Variables
Buffer Overflow in an API Call
Buffer Overflow in Local Command-Line Utilities
Parameter Expansion
String Format Overflow in syslog()
--
Regards,
Dana Epp [Blog: http://silverstr.ufies.org/blog/]