Ah yes, the Good Times virus. What a silly idea that a virus can execute simply by reading an email message. Everyone knows that's impossible...
As a reader and modest poster, I have only seen links to articles and opinions from you lately; maybe you should rename yourself from 'computerbytesman' to the more appropriate 'security-needs-pr-2-man'.
As for the article, it tries to explore all the angles, but leaves open the most important one:
Full disclosure, with the smallest amount of time between discovery and publication, decreases the chance that one or more people with malicious intent among those who already know about the vulnerability (and are generally considered to be 'in good faith') have the knowledge to exploit the bug.
Two things apply here:
-- the most dangerous threat comes from within
-- it only takes one person to create a virus that "causes millions of damage"
Disclosing all details and exploit code allows systems outfitted with heavily modified or lesser-known operating systems and/or software installations to test their systems before somebody else does it for them. It also allows signatures to be created for IDS and/or firewall systems, if the provided fixes cannot be applied to the software in question.
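As a minimal sketch of what such a signature might look like, here is an illustrative rule in Snort syntax that flags requests for cmd.exe against a web server (the sort of traffic the IIS exploits behind Nimda generated). The msg, sid, and port are my own placeholder choices, not an official rule:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 \
    (msg:"WEB-IIS cmd.exe access attempt"; \
    flow:to_server,established; \
    content:"cmd.exe"; nocase; \
    sid:1000001; rev:1;)
```

A rule like this can be deployed as a stopgap on systems where the vendor patch cannot (yet) be applied, which is exactly the point: without the disclosed details, there is nothing to write the signature against.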
There's another factor to consider. The information can also be misused to launch a targeted attack against only a few systems, gaining access to information no one wants unauthorized people to have. This is much less noticeable than a virus: the system administrators affected don't know what hit them, and may still be searching for answers months down the road.
The article also mentioned the 'Nimda' virus, which exploited bugs, some of which were over two years old. Judging from my mail logs, I still receive Klez.H more than once a month, and that is a 2001 virus. One thing full disclosure has helped create is awareness, but clearly it is not enough. Why should we now start to trust companies that have repeatedly shown themselves to be:
1) incompetent in basic security principles
2) hardly responsive to security alerts
3) capable of fixing bugs while introducing bigger ones
What I also miss in the article is any mention of the drawbacks associated with initiatives like the "Trusted Computing Group". It's in the name, for god's sake: a group that decides who can be trusted, consisting of members whose goal it is to make a profit. The argument "bad security is bad for profit" obviously doesn't apply.
Lastly, the article does not include any direct quote from Guninski (or explain why there
wasn't one). It seems a very basic journalism
principle is being ignored here.
@editors, seattletimes: In response to: Hackers, software companies feud over disclosure of weaknesses By Doug Bedell <http://seattletimes.nwsource.com/cgi-bin/PrintStory.pl?document_id=135262788&zsection_id=268448455&slug=softwarebugs14&date=20030714>
Met vriendelijke groeten / With kind regards,
Webmaster IDG.nl
Melvyn Sopacua -- "who appreciates the timely release of information, even though
it sometimes causes some very long nights".
"Freedom includes the freedom to disagree with me and still use my software." - Arnt Gulbrandsen, Author of Leafnode.
_______________________________________________ Full-Disclosure - We believe in it. Charter: http://lists.netsys.com/full-disclosure-charter.html
