Re: [SC-L] State Department break-in last summer
Ed Reed wrote:

> http://news.yahoo.com/s/ap/20070419/ap_on_hi_te/hackers_state_department
>
> This article describes a Trojan horse attack introduced via MS Office (Word) documents that provided remote access by adversaries to compromised systems. It doesn't say if the exploit - design flaw - was intentionally introduced (a product of deliberate subversion) or not. ...

Well, odds are not, given the source of the software in question (and, no, I don't mean that I think MS has much better security screening of its employees... 8-) ).

> While the article may provide comfort to the defense in depth crowd (the State Department THINKS the issue was discovered immediately - but then again, after they were made aware of it - so they knew what to watch for - they found numerous other compromised systems, so I wonder how many haven't (yet) been caught). ...

Indeed...

> This isn't terribly surprising, but it brings to mind a new insight (for me, anyway) into the issue that government and commercial customers are facing. We've (Aesec) been saying that subversion (deliberately introducing design and implementation defects into a customer's IT supply chain) is the preferred avenue of attack of professional adversaries,

and I agree that it is.

> We've (Aesec) also noted that the commercial security industry is largely focused, instead, on discovering and patching software defects that can be easily discovered (via fuzzing and testing) and exploited to gain access to systems. Both avenues can lead to serious security breaches. But it's not necessary to plant an operative in a vendor's shop, in a position to introduce flaws into software, to gain advantage. Simply knowing enough about the internal design and implementation of the system is likely to provide the adversary with the knowledge and opportunity to discover paths of attack that can be researched at their leisure and held until needed as what would be considered a private zero-day exploit.
> So at one end of the spectrum of malicious attacks are pure opportunists (including amateurs and script kiddies) using defects discovered through fuzzing interfaces and related black-box testing techniques. At the other end of the scale are paid professional operatives infiltrating vendor development and delivery supply chains to introduce defects intentionally. And in the middle are those with gray-box knowledge of the products involved, who are in a better position than the public to identify attack vectors worth investigating. This middle ground would seem to significantly increase the threat - there are many more jobs in vendor organizations (and their supply and support chains) that provide privileged insight into product design, development, implementation and delivery than there are roles with direct code-modification access to the product. So I think you'd have to assume that the pool of unreported zero-day exploits may be much larger than generally expected.

I agree with all this, but... You -- and all journalistic and other commentaries I've seen/heard on the increasingly common use of these targeted Office exploits -- miss one very important option, I think: the attacker has access to (partial) source of the closed, supposedly closely-held, proprietary software in question.

Recall the rumours and stories from a few years back of the MS source-code thefts? From memory, reputedly (most of) Win2K, some of WinXP (?) and (parts of) Office were stolen. Parts of these thefts were clearly confirmed when (parts of) Windows OS source became downloadable from various underground sources some time later. Further, and more speculative, was the suggestion that the reputed (earlier) MS break-in (as opposed to the third-party licensee from which the OS source code was reputedly obtained) was the work of a Russian or Chinese hacker/hacking group. Some say that there were multiple break-ins at MS around that time and that both Russian and Chinese groups were involved.
Nowadays most of the publicly discussed/disclosed targeted Office exploits have been attributed to Chinese-based attackers. Also of some interest might be the fact that it seems (at least to me) that where there are version specificities in the exploits used in these targeted attacks, they more commonly restrict the applicability of the exploit to older Office product versions. Now, this may be indicative of overall improvements in MS code standards due to SDLC (are newer versions of Office distilled through SDLC?) and compiler security improvements, but it might also be indicative of the attackers (or, at least, those they buy their exploits from) having access to the reputed/rumoured stolen Office source which, if it ever was stolen, would be code of older versions of Office and would thus be more likely to have changed, and thus not exhibit the same vulnerabilities, in newer versions. Just a thought.

Ditto...

Regards,

Nick FitzGerald
RE: [SC-L] Bugs and flaws
Gary McGraw [EMAIL PROTECTED] wrote:

> To cycle this all back around to the original posting, let's talk about the WMF flaw in particular. Do we believe that the best way for Microsoft to find similar design problems is to do code review? Or should they use a higher-level approach?

I'll leave that to those with relevant specification/design/implementation/review experience...

> Were they correct in saying (officially) that flaws such as WMF are hard to anticipate?

No. That claim is totally bogus on its face. It is a very well-established rule that you commingle code and data _at extreme risk_. We have also known for a very long time that our historically preferred use of (simple) von Neumann architectures makes maintaining that distinction rather tricky. However, neither absolves us of the duty of care to be aware of these issues and to take suitable measures to ensure we don't design systems apparently intent on shooting themselves in the feet.

I'd wager that even way back then, some designer and/or developer at MS, when doing the WMF design/implementation in the Win16 days (Win 3.0, no?), experienced one of those "We really shouldn't be doing that like this..." moments, but then dismissed it as an unnecessary concern because "it's only for a personal computer" (or some other cosmically shallow reason -- "if I get this done by Friday I'll have a weekend for a change", "if I get this done by Friday I'll make a nice bonus", "usability is more important than anything else", "performance is more important than anything else", etc, etc, etc). Given the intended user base and extant computing environment at that time, the design probably was quite acceptable. The real fault is that it was then, repeatedly and apparently largely unquestioningly, ported into new implementations (Win 3.1, NT3x, Win9x, NT4, ME, Win2K, XP, XPSP2, W2K3) _including_ the ones done after Billy Boy's "security is now more important than anything" memo.
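The commingling hazard is easy to sketch. The following toy interpreter is purely illustrative -- the record names and format are invented, not the actual WMF format -- but it shows the shape of the flaw: a file format that includes a record type whose payload is treated as code rather than as inert data.

```python
# Toy "metafile" interpreter illustrating the risk of commingling code
# and data. Record names and format are invented for illustration only;
# this is NOT the real WMF record format.

def render_unsafe(records):
    """Dispatches records, including one whose payload is executed.

    The 'escape' record turns attacker-controlled file data into running
    code -- the class of flaw under discussion."""
    canvas = []
    for kind, payload in records:
        if kind == "draw":
            canvas.append(payload)  # inert drawing data
        elif kind == "escape":
            exec(payload)           # data becomes code: the design flaw
    return canvas

def render_safe(records):
    """Honours only an allowlist of inert drawing operations."""
    canvas = []
    for kind, payload in records:
        if kind == "draw":          # unknown or "active" records are ignored
            canvas.append(payload)
    return canvas

doc = [("draw", "line 0,0 10,10"),
       ("escape", "print('arbitrary code ran')")]  # hostile record

print(render_safe(doc))  # the hostile record is simply skipped
```

The safe variant makes the design decision explicit: the renderer's job is to consume data, so no record type is ever allowed to supply code.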
At some point in that evolution, several someones should have been raising their hands and saying, "You know, now is the time we should fix this." Someone on one of the IE teams obviously noticed and flagged the issue, but why didn't that flag get raised bigger, higher, brighter? ...

It is bogus for another reason too -- some of the people at MS making that official claim also said this was the first such flaw of this kind, and that's just BS. Long before WM/Concept.A (or its forerunner, the oft-forgotten WM/DMV.A) many security and antivirus folk were warning that embedding powerful, complex macro programming languages (such as WordBasic, VBA and AccessBasic) into their associated document files was an inherently flawed design that would only lead to trouble. So, not only have we long understood the theoretical reasons why the underlying causes of WMF are inherently bad design and best avoided if at all possible, but MS has had its own, self-inflicted stupidities of exactly the same kind. If MS truly could not anticipate, at some point along the Win3x-to-W2K3 development timeline earlier than 28 Dec 2005, that this WMF design feature would cause trouble, one has to ask whether MS should be allowed to make software for general consumption...

Regards,

Nick FitzGerald

___ Secure Coding mailing list (SC-L) SC-L@securecoding.org List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l List charter available at - http://www.securecoding.org/list/charter.php
Re: [SC-L] Bugs and flaws
Al Eridani [EMAIL PROTECTED] wrote:

> If the design says "For each fund that the user owns, do X" and my code does X for all the funds but it skips the most recently acquired fund, I see it as a manufacturing error. On the other hand, if a user sells all of her funds and the design does not properly contemplate the situation where no funds are owned and therefore the software misbehaves, I see it as a design error.

Maybe I'm confused, but... If the design in your second case is still the same one -- "For each fund that the user owns, do X" -- then this second example, like your first, is NOT a design error but an implementation (or "manufacturing", if you prefer) error. (Both are (probably) due to some or other form of improper bounds checking, and probably due to naïve use of zero-based counters controlling a loop... 8-) ) The design "For each fund that the user owns, do X" clearly (well, to me -- am I odd in this?) says that NOTHING be done if the number of funds is zero, hence the second result is an implementation error.

Regards,

Nick FitzGerald
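The two error cases can be sketched as a toy loop (function and variable names are hypothetical, chosen only for this illustration). The design is "for each fund that the user owns, do X":

```python
# Illustrative sketch of the two error classes discussed above.
# Design: "for each fund that the user owns, do X" -- here X is just
# collecting the fund for processing.

def process_funds_buggy(funds):
    """Implementation (manufacturing) error: an off-by-one in the loop
    bound silently skips the most recently acquired fund."""
    processed = []
    for i in range(len(funds) - 1):  # bug: should be range(len(funds))
        processed.append(funds[i])
    return processed

def process_funds(funds):
    """Faithful implementation of the design: 'for each' over zero funds
    naturally does nothing -- no special case is required."""
    return [f for f in funds]

assert process_funds_buggy(["A", "B", "C"]) == ["A", "B"]  # newest skipped
assert process_funds([]) == []  # zero funds: a correct loop is a no-op
```

The point of the second function is Nick's: the design statement already covers the zero-fund case, because a correctly written "for each" loop over an empty collection does nothing. Only a botched implementation (e.g. an off-by-one or a loop that assumes at least one element) misbehaves there.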