It’s been a little under a month since I last posted, for various reasons - both work (some interesting engagements have taken my time) and personal (family/friends visiting - it’s a bit rude to be surfing and posting while you have guests ;)) - and although there’s been a lot of software and web security news, honestly I’ve become a little "jaded" with it all. That’s not really me, because I’m (still) really into security and software engineering in general; however, I just can’t shake the feeling of "déjà vu" - all the talk is about things we already know! So please bear with me while I revisit some old news and regurgitate my thoughts.
XSS silliness

Obama’s website was found to have lots of XSS issues <http://xssed.com/news/65/Barack_Obamas_official_site_hacked/>, and, just to be non-partisan, Clinton’s <http://xssed.com/search?key=clinton> seems to have had some as well. Is anyone really surprised here? XSS is the most widespread vulnerability on the web, despite being one of the simplest to mitigate. As the guys at Veracode <http://www.veracode.com/blog/?p=89> point out, and as the guidance <http://www.owasp.org/index.php/XSS#How_to_Protect_Yourself> from OWASP makes clear, it’s not all about input validation (although that’s not a bad thing) - in this case it's more specifically about output encoding. It would seem that programmers, site developers, etc, still just aren’t bothering with a very simple API call (such as htmlentities <http://us2.php.net/manual/en/function.htmlentities.php> or Server.HTMLEncode <http://msdn.microsoft.com/en-us/library/ms525347%28VS.85%29.aspx>), or, perhaps even better, with using one of the AntiXSS libraries <http://www.owasp.org/index.php/Category:OWASP_PHP_AntiXSS_Library_Project> <http://www.microsoft.com/downloads/details.aspx?familyid=EFB9C819-53FF-4F82-BFAF-E11625130C25&displaylang=en>, before writing to the page - which IMO would mitigate probably 90%+ of all attacks. I’m not sure if this is simply people being uninformed, laziness, or simple risk acceptance, but I would have expected better of them given the exposure the site(s) would obviously get.

Mass SQL injection

If it’s not XSS, it seems that the vulnerability du jour is SQL injection. Recent news <http://www.darkreading.com/document.asp?doc_id=153770> has been circulating about a mass SQLi attack <http://blog.washingtonpost.com/securityfix/2008/04/hundreds_of_thousands_of_micro_1.html> on Windows-based webservers.
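To underline how small the XSS fix mentioned above really is, here’s a minimal sketch in Python - the standard library’s html.escape standing in for PHP’s htmlentities or ASP’s Server.HTMLEncode; the principle is the same in any language. Encode user-supplied data at the point it is written into the page:

```python
import html

def render_comment(user_input: str) -> str:
    # Encode HTML metacharacters at output time; quote=True also
    # encodes " and ' so the value is safe in attribute contexts too.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

# A classic probe payload is rendered inert instead of executing:
print(render_comment('<script>alert("xss")</script>'))
# -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

One API call per output point - which is exactly why the continued prevalence of XSS is so hard to excuse.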
Looking at the attack myself (there are good examples here <http://ddanchev.blogspot.com/2008/04/united-nations-serving-malware.html> and here <http://forums.iis.net/t/1148917.aspx?PageIndex=1>), it seems that it’s not MSFT specific, but from reports the scripts/bots were specifically targeting ASP pages. Targeting ASP seems reasonable: it’s an old(er) platform that doesn’t have the same level of in-built protection as ASP.NET, and it’s probably "legacy" code that hasn’t been reviewed/fixed for known security issues as much as "new" code would have been. Michael Howard wrote about the issue on the SDL blog <http://blogs.msdn.com/sdl/archive/2008/05/15/giving-sql-injection-the-respect-it-deserves.aspx>, and once again, this is an issue that is well known and easily protected against - even more so when you look at the exploit (which used SQLi to write script into any text in the DB that would in all likelihood get output at some point) and consider some output encoding as discussed above - just because the data is coming from the database doesn’t mean it’s "trusted". So, defense in depth, people! Stored procedures, read-only access, output encoding. This is why I’m a bit jaded - there’s a lot of talk about vulnerabilities we already know lots about and have simple ways of mitigating. What’s at issue, as I’ve already said above, is ignorance, inactivity, or arrogance, and I’m not sure which is worse. I’ll get onto each of these in the final section of this post.

PCI-DSS 6.6

I’ve said for some time that only two things really force companies to do things they don’t want to, security included, and both begin with the letter ‘L’ - legislation <http://en.wikipedia.org/wiki/Legislation> or litigation <http://en.wikipedia.org/wiki/Litigation>. Legislation is usually associated with a government passing laws down (e.g.
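As a hedged sketch of the defense-in-depth list above - parameterized data access plus output encoding - here’s how it might look in Python with the standard-library sqlite3 module (the table and column names are made up for illustration; the same pattern applies to any driver that supports bound parameters):

```python
import html
import sqlite3

# Throwaway in-memory database standing in for a real CMS backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO news (body) VALUES (?)", ("Hello, world",))

def get_story(story_id):
    # Parameterized query: the driver passes story_id as data, so input
    # like "0; UPDATE news SET body = ..." can never execute as SQL --
    # unlike string concatenation ("... WHERE id = " + story_id), which
    # is exactly the pattern the mass-SQLi bots were exploiting.
    row = conn.execute(
        "SELECT body FROM news WHERE id = ?", (story_id,)
    ).fetchone()
    return row[0] if row else ""

def render_story(story_id):
    # Encode on output even though the value came from the database:
    # if an injection ever did plant <script> in a row, it renders inert.
    return "<p>" + html.escape(get_story(story_id)) + "</p>"

print(render_story(1))  # -> <p>Hello, world</p>
```

Neither layer is expensive, and each catches what the other misses - which is the whole point of defense in depth.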
HIPAA, SOX, etc) so companies have to be in compliance, whereas companies may undertake certain activities to defend themselves and try to avoid costly litigation from people suing them if things go wrong. The PCI-DSS <http://en.wikipedia.org/wiki/PCI_DSS>, to my mind, falls somewhere between these two: on the "legislation" side, the major credit card companies hand down "guidance" on how to protect cardholder data (with a vendor risking the loss of its ability to process credit cards if it falls outside this "guidance", or is not in "compliance"), and on the litigation side, a company that is in compliance can at least claim after a breach that it was not negligent in its security. The criticism is often that PCI doesn’t have any real "teeth" and is mostly a CYA activity. I disagree to some level, because at least it’s doing something (and although I’m not a network guy, from that aspect it seems to be doing a reasonably good job); however, I do agree that from a software perspective it doesn’t go far enough. That’s why the next version (specifically section PCI-DSS 6.6) was so eagerly anticipated, not to mention that it becomes a requirement (as opposed to optional) starting in June. There have been lots <http://jeremiahgrossman.blogspot.com/2008/04/finally-finally-pci-66-clarification.html> of comments <http://www.veracode.com/blog/?p=85> about this (too numerous to link to all of them, but there are refs from these main two), including clarification <https://www.pcisecuritystandards.org/pdfs/infosupp_6_6_applicationfirewalls_codereviews.pdf> from PCI themselves. My take? Nothing has really changed, and as ever things are (still) as clear as mud. It still uses the OWASP Top 10 as its "secure coding guidelines" (although it’s dropped the "top 10" part), and there’s still general "fuzziness" over how the testing should be conducted.
I’ve banged my head against this for some time and am not going to waste effort on it any more, but Mark Curphey wrote up what he (and I) feel are the main <http://securitybuddha.com/2007/03/23/the-problems-with-the-pci-data-security-standard-part-1/> problems <http://securitybuddha.com/2007/01/29/why-the-pci-standard-needs-as-serious-re-think/> some time ago, along with a possible <http://securitybuddha.com/2007/06/25/principles-of-a-good-security-evaluation-criteria/> solution, if the effort were ever put in to really design/write it properly. Even if such a "risk based" evaluation criteria ever does get completed, I very much doubt it will ever replace something like PCI - an approach like that costs money, and no one really wants to spend much of it (especially now) on anything that doesn’t provide a clear "return".

Automated scanning

That last sentence is really why so many people are signing up for "automated scanning" solutions - they’re generally cheaper, repeatable, and easier to "off-load" to some 3rd party. I’ll have to be really careful here (as people that know me, follow the PCI + scanning news, and know who I ultimately work for can appreciate), but I have huge issues with any fully automated system claiming that it comprehensively tests security and gives you any form of "you are secure" result. Gary McGraw calls them "Badness-ometers <http://www.cigital.com/justiceleague/2007/03/19/badness-ometers-are-good-do-you-own-one/>", which is a great term. The danger of automated scanning today is that some sites are basing their entire security on these tools and believing the results the tools/services give them. When the tool/service fails (a vulnerability is discovered that it hadn’t found), a big circle of blame starts - the vuln isn’t all that serious / the tool didn’t find it because it was badly configured / it’s not "real world" exploitable / we don’t test for that / out of scope / etc, etc.
What is even more worrying is that many companies just don’t know that many (most, in fact) of the real security issues a site might be concerned about can’t be tested for in any (current) meaningful way automatically. I say "current" because who knows what advances we might have in the next 5-10 years. Automated network scanning works to a useful level because what is being scanned is so homogeneous - there are only so many operating systems and only so many network devices - so it’s a lot easier to write tests for them. In my experience, though, most websites are bespoke, and there are few similarities from one site to the next - writing generic/reusable tests for these is difficult at best. I’ve a few ideas that would require more research and investigation, but I’ll leave those for another time. Software security takes effort, and those putting that effort in are few and far between. Some simple activity to give "feel good" security may work to a small degree, but it shouldn’t <http://www.schneier.com/essay-165.html> - the buyer and the vendor don’t have equal information on what is being provided. Any attempt at pointing out these inconsistencies, or at providing a standardized way of evaluation, results in finger-pointing at biases, or just plain obstruction. Sometimes I wish I were back in academia so I could do some real analysis on this issue without worrying about upsetting anyone, or about job security if I say something that isn’t taken favorably. One shouldn’t be worried about saying that the Emperor has no clothes, and about looking to solve the issue (if it is an issue, and it’s necessary), but many of us are.

Does security really matter?
Watching all these items arrive in my RSS reader has led me to question whether security really does matter outside of those of us who do it for a living - the first two sections of this post show that extremely common and well-known vulnerabilities are still not being addressed, whereas the last two (if I take them at face value from a very high level) suggest that web security is heading toward becoming a "checklist" process with a "do the easiest" bent, and few people are questioning this move. The "ignorance, inactivity, or arrogance" question that I mentioned above can be "all of the above", but at different times, like the 5 stages of grief <http://en.wikipedia.org/wiki/K%C3%BCbler-Ross_model>. Please correct me if I’m wrong on any of the above. The only answer I can possibly come up with is that security doesn’t matter until it matters (no surprise there), and it appears that everyone is just waiting for that "silver bullet" to solve all their problems, most of which they don’t know about, care about, or have the time to deal with. Jeremiah Grossman writes a similar <http://jeremiahgrossman.blogspot.com/2008/05/does-secure-software-really-matter.html> story, and his conclusion seems to be "virtual patching", which stands to reason as his company is pushing scanning + WAF integration. There’s a good comment thread that goes along with his post that’s worth following. This "point and shoot" method concerns me, even if I benefit from it somewhat in my current employment and industry. However, with the amount of legacy code out there, and the apparent lack of concern (as identified above) about "doing the right thing", it really does seem that a paralysis is setting in while we wait for that "silver bullet" instead of incrementally doing better things (I’ve written about silver bullets and magic beans before) - what we appear to be doing is simply putting a band-aid on the problem.

