[This is a debate from the Bugtraq list. Richard also forwarded me his 
comments separately. My comments are near the end. --Declan]

**********

From: [EMAIL PROTECTED] (Richard M. Smith)
Subject: Can we afford full disclosure of security holes?
Date: Fri, 10 Aug 2001 14:39:06 -0400

Hello,

The research company Computer Economics is calling Code Red
the most expensive computer virus in the history of the Internet.
They put the estimated clean-up bill so far at $2 billion.
I happen to think the $2 billion figure is total hype,
but clearly a lot of time and money has been spent cleaning up after
Code Red.

For the sake of argument, let's say that Computer Economics
is off by a factor of one hundred.  That still puts the
clean-up costs at $20 million.

This $20 million figure raises the question: was it really
necessary for eEye Digital Security to release full details
of the IIS buffer overflow that made the Code Red I and II worms
possible?  I think the answer is clearly no.

Wouldn't it have been much better for eEye to give the details
of the buffer overflow only to Microsoft?  They could have still
issued a security advisory saying that they found a problem in IIS
and where to get the Microsoft patch.  I realize that a partial
disclosure policy isn't as sexy as a full disclosure policy, but
I believe that a less revealing eEye advisory would have saved a lot
of companies a lot of money and grief.

Unlike the eEye advisory, the Microsoft advisory on the IIS
security hole strikes the right balance.  It gives IIS customers
enough information about the buffer overflow without giving virus
writers a recipe for exploiting it.

Thanks,
Richard M. Smith
CTO, Privacy Foundation
http://www.privacyfoundation.org

Links

Code Red Virus 'Most Expensive in History of Internet'
http://www.newsfactor.com/perl/story/12668.html

eEye security advisory -- All versions of Microsoft
IIS Remote buffer overflow (SYSTEM Level Access)
http://www.eeye.com/html/Research/Advisories/AD20010618.html

eEye security advisory -- .ida "Code Red" Worm
http://www.eeye.com/html/Research/Advisories/AL20010717.html

Unchecked Buffer in Index Server ISAPI Extension Could Enable Web Server
Compromise
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bulletin/MS01-033.asp

**********

From: "Marc Maiffret" <[EMAIL PROTECTED]>
Subject: RE: Can we afford full disclosure of security holes?
Date: Fri, 10 Aug 2001 13:10:51 -0700

After about 3 weeks of little to no sleep, and after spending lots of my (and
Ryan Permeh's) personal time researching CodeRed and its many variants, I have
grown tired of the small number of people who so ignorantly have pointed a
finger at eEye and are trying to somehow get people to think that we are
responsible. As an employee of a company I must hold back some of my
feelings; however, as an individual I can tell you that this is all
complete and utter crap.

|Hello,
|
<snip>
|This $20 million figure raises the question: was it really
|necessary for eEye Digital Security to release full details
|of the IIS buffer overflow that made the Code Red I and II worms
|possible?  I think the answer is clearly no.

Where the hell do you or anyone else get off saying that eEye's advisory made
CodeRed possible? This sort of ignorance being spread in a public forum is
just one of the many things wrong with the security industry. You're making
claims that you have no data to back up other than "well, I think so."

|Wouldn't it have been much better for eEye to give the details
|of the buffer overflow only to Microsoft?  They could have still
|issued a security advisory saying that they found a problem in IIS
|and where to get the Microsoft patch.  I realize that a partial
|disclosure policy isn't as sexy as a full disclosure policy, but
|I believe that a less revealing eEye advisory would have saved a lot
|of companies a lot of money and grief.

Let's get the facts straight. CodeRed is based on another worm that was
written for a .htr ISAPI buffer overflow. CodeRed is an almost identical
copy of the .htr worm, a worm which was released back in April and which
exploited an UNPUBLISHED vulnerability within IIS that was silently patched
by Microsoft without notification to anyone. Therefore IDS vendors never had
a signature, and the .htr worm went unnoticed. Too bad a security company had
not found the flaw; then there would have been details, signatures would have
been made, and IDS systems would have detected the first instance of CodeRed
back in April.

So the facts are:
Someone found an unknown buffer overflow vulnerability within the IIS .htr
ISAPI filter, without any data from eEye.
Someone exploited that unknown buffer overflow vulnerability in order to
execute code on remote systems, without any data from eEye.
Someone took that exploit even further and turned it into a worm (which is
what CodeRed is explicitly based on) and launched it at the Internet,
without any data from eEye.

Now a few months later someone took that .htr worm and modified it to attack
the .ida vulnerability. They already had ALL of the knowledge they needed in
order to modify the .htr worm to be the .ida worm. There was nothing that
eEye gave them that made it any easier.

In fact, when it comes down to the technical details, eEye's exploit
information within the .ida ISAPI overflow advisory was actually put to
shame by a skilled programmer by the name of hsj. hsj published a working
.ida ISAPI overflow exploit which used a wide-character overflow technique
that was far beyond (and nothing like) anything we talked about in our
advisory. So both the CodeRed worm and the hsj .ida exploit were
technically superior to anything that we (eEye) discussed in our .ida
advisory. They did not use ANY technique that had anything to do with our
advisory. If you, or any of the other small percentage of people pointing
fingers at eEye, actually had any technical understanding of buffer overflow
exploits, then you might have understood that and not sent an email to a
public mailing list making harsh accusations which are totally inaccurate
and untrue.

|Unlike the eEye advisory, the Microsoft advisory on the IIS
|security hole strikes the right balance.  It gives IIS customers
|enough information about the buffer overflow without giving virus
|writers a recipe for exploiting it.

This isn't the '70s. People are easily able to write exploits simply from
the data that Microsoft gives within their advisories. To say that hackers
are not able to write exploits based solely on a Microsoft advisory is
to underestimate the underground, which is a _bad_ thing to do. Most of the
hackers we know have automated tools that compare the files held within a
Microsoft security patch against the system files being replaced; after
running them through custom modules for IDA and the like, they have
pinpointed overflows using ONLY the information held within a Microsoft
security bulletin and its patch.
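[The patch-diffing workflow Marc describes starts with one simple step:
locating where a patch changed the binary, before the surrounding code is
examined in a disassembler such as IDA. A minimal byte-level sketch of that
first step is below; the function name and the region-merging heuristic are
illustrative assumptions, not the behavior of any specific tool. --Declan]

```python
def diff_regions(old: bytes, new: bytes, gap: int = 16):
    """Locate (offset, length) spans where two binaries differ.

    Changed bytes closer together than `gap` are merged into one
    region, roughly how patch-diffing workflows group changes before
    examining the surrounding code in a disassembler.
    """
    limit = min(len(old), len(new))
    changed = [i for i in range(limit) if old[i] != new[i]]
    # Any bytes past the shorter file's end count as changed too.
    changed.extend(range(limit, max(len(old), len(new))))

    regions = []
    for off in changed:
        if regions and off - (regions[-1][0] + regions[-1][1]) <= gap:
            start, _ = regions[-1]
            regions[-1] = (start, off - start + 1)
        else:
            regions.append((off, 1))
    return regions
```

In a real workflow the interesting part comes next: mapping those offsets
back to the functions they live in. This sketch deliberately stops at the
byte level.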

|Thanks,
|Richard M. Smith
|CTO, Privacy Foundation
|http://www.privacyfoundation.org

There is a big bad world out there far beyond the technical information seen
on mailing lists like Bugtraq.

Signed,
Marc Maiffret
Chief Hacking Officer
eEye Digital Security
T.949.349.9062
F.949.349.9538
http://eEye.com/Retina - Network Security Scanner
http://eEye.com/Iris - Network Traffic Analyzer
http://eEye.com/SecureIIS - Stop known and unknown IIS vulnerabilities

**********

Date: Fri, 10 Aug 2001 13:15:38 -0600
From: [EMAIL PROTECTED]
To: "Richard M . Smith" <[EMAIL PROTECTED]>
Subject: Re: Can we afford full disclosure of security holes?

   Without detailed information:

   How should third-parties develop countermeasures? In essence you are
arguing that only the vendor should be capable of fixing the vulnerable
software.

   How should authors of vulnerability scanners and intrusion detection
systems obtain information to produce new signatures? You may answer that
only qualified security vendors should have access to the information.
Who qualifies them? Who enforces these rules? What about non-commercial
or open source tools?

   How should academics obtain information for research purposes? You may
answer that only qualified security vendors should have access to the
information. Who qualifies them? Who enforces these rules?

   How should users verify the vendor fix works as described? Some vendors
have a history of requiring a few revisions of a patch to get it right.

   What do you do if the vendor is not cooperating, does not maintain
the product any longer, or no longer exists?

   Unless you can answer all of these questions successfully, you will
continue to see detailed disclosure of vulnerabilities.

   What it boils down to is this: disclosure of detailed vulnerability
information benefits security-conscious people while, in the short term,
hurting people who do not keep up with security, with the hope that it
also helps them in the longer term.

   Will security-conscious people give up the benefits of detailed
disclosure of vulnerability information to help mitigate the short-term
risk to people who are not keeping up with security? Doubtful.

-- 
Elias Levy
SecurityFocus.com
http://www.securityfocus.com/
Si vis pacem, para bellum

**********

From: [EMAIL PROTECTED] (Richard M. Smith)
To: <[EMAIL PROTECTED]>
Subject: RE: Can we afford full disclosure of security holes?
Date: Fri, 10 Aug 2001 15:32:53 -0400

I've probably found a dozen or so security holes in Microsoft products.
Many of these problems were reported on the BugTraq list without full
disclosure.  How come so few people have ever approached me for the full
details?  I guess I don't see the same level of demand for full
disclosure as you do.

However, one thing is now crystal clear with Code Red: full disclosure
comes with one hell of a price tag.  There has to be a better way.

Richard

**********

Date: Fri, 10 Aug 2001 16:06:43 -0400
From: Randy Taylor <[EMAIL PROTECTED]>
Subject: Re: Can we afford full disclosure of security holes?

Replies inline below...

At 02:39 PM 8/10/2001 -0400, Richard M. Smith wrote:
 >Hello,
 >
 >The research company Computer Economics is calling Code Red
 >the most expensive computer virus in the history of the Internet.
 >They put the estimated clean-up bill so far at $2 billion.
 >I happen to think the $2 billion figure is total hype,
 >but clearly a lot of time and money has been spent cleaning up after
 >Code Red.
 >
 >For the sake of argument, let's say that Computer Economics
 >is off by a factor of one hundred.  That still puts the
 >clean-up costs at $20 million.
 >
 >This $20 million figure raises the question: was it really
 >necessary for eEye Digital Security to release full details
 >of the IIS buffer overflow that made the Code Red I and II worms
 >possible?  I think the answer is clearly no.

eEye disclosed the details, but the exploit was already known
by a "close circle": the person or persons who originally
discovered the vulnerability and authored the exploit code. While
the vulnerability remained in that pre-disclosure state, it presented
more of a danger to the community than it does now.

As for eEye's method of disclosure, I don't think it differs too much
from the current standard - and that has at least some of its origins
in the methods used to dissect and discuss the Morris Worm of 1988.
The point being that full disclosure has been around for a long
time, and it has been invaluable to those of us in the security community.

Having said all that, I do feel your pain. Back in the early-to-mid '90s I
questioned full disclosure, too. I often felt I did not have adequate time
to patch the systems I was responsible for fast enough to escape
the onslaught of the "newest exploit" of the day. It was frustrating, to say
the least. But when all was said and done, I came to the conclusion
that knowing is much better than not knowing: full disclosure is better
than no disclosure or limited disclosure.



 >Wouldn't it have been much better for eEye to give the details
 >of the buffer overflow only to Microsoft?  They could have still
 >issued a security advisory saying that they found a problem in IIS
 >and where to get the Microsoft patch.  I realize that a partial
 >disclosure policy isn't as sexy as a full disclosure policy, but
 >I believe that a less revealing eEye advisory would have saved a lot
 >of companies a lot of money and grief.

In a word, no. Dan Farmer often argued (and I am liberally paraphrasing
posts I read _years_ ago) that full disclosure "forced the hand" of software
vendors to fix immediately what they would otherwise have waited until the
next release to patch. Although back then I disagreed with that view, I've
long since changed my mind and support full disclosure. It's not pretty, but
it is very necessary. Imagine the fray that would have been caused by Code Red
if only Microsoft and eEye knew about it (not to mention the original
developers; _that_ circle would have expanded quickly). *shiver*

Further, I'd suggest that a "limited disclosure" policy would become
full disclosure by brute force of public opinion, or at least by brute force
reverse engineering. In other words, if any part of the cat is out of the bag,
it won't be long before the entire beast becomes visible, claws and all. That
much is attributable to human nature.


 >Unlike the eEye advisory, the Microsoft advisory on the IIS
 >security hole strikes the right balance.  It gives IIS customers
 >enough information about the buffer overflow without giving virus
 >writers a recipe for exploiting it.

One thing I think you might not be taking into account is the wealth
of knowledge that already exists about discovering vulnerabilities. Code
Red didn't fall that far from the tree, capabilities-wise, and is a logical
extension of current trends. Finally, factor in an old military maxim,
"Defense always lags offense" - I'll always support shortening that
lag to the minimum amount of time possible. Full disclosure does that.
If I'm going to be hit by something, I'd like to know as much as I can
about it rather than get blindsided by it. It's been my experience
that people usually want to know _why_ they are getting stomped on
as well as how they can make it stop.



 >Thanks,
 >Richard M. Smith
 >CTO, Privacy Foundation
 >http://www.privacyfoundation.org

That's just my opinion, Richard. I could be wrong. ;)

Best regards,

Randy

**********

From: declan
Sent: Sunday, August 12, 2001 5:52 PM
To: Richard M. Smith
Subject: Re: Can we afford full disclosure of security holes?

On Fri, Aug 10, 2001 at 03:32:53PM -0400, Richard M. Smith wrote:
 > However, one thing is now crystal clear with Code Red: full disclosure
 > comes with one hell of a price tag.  There has to be a better way.

But even partial disclosure points a malicious type in the right
direction. Maybe they don't get the street address, but they'll know the
neighborhood. Code Red could have happened anyway. Also, I'm not sure I'd
try to establish general rules, or even an informal consensus, based on
anecdotes -- even notable ones like Code Red. You want to balance costs and
benefits, and an anecdote-driven approach may overlook a lot of benefits
that don't get saturation coverage in the press.

It strikes me that you have a better case for non-disclosure when you're
talking about proprietary vendor software. The same assumptions don't
hold true when you're talking about open-source software that may have a
loose network of developers who may even do a lot of their discussion on
open mailing lists.

-Declan

**********

From: [EMAIL PROTECTED] (Richard M. Smith)
Subject: RE: Can we afford full disclosure of security holes?
Date: Sun, 12 Aug 2001 18:04:17 -0400

Hi Declan,

You're the only person who pointed out that disclosure rules for open
source vs. closed source software should perhaps be different.  This is
actually a bit of where I am coming from.  Having never used any flavor
of Unix, I see the world with Windows glasses on.  Fixing buffer
overflows in IIS requires only Microsoft.  Fixing a similar bug in
Apache would require many more folks to have the details of the buffer
overflow.  I suspect for something like Apache, full disclosure is
probably the only reasonable alternative.

Of course, some kind of disclosure is still needed for IIS bugs in
order to get sysadmins to download and install patches.

Richard

**********




-------------------------------------------------------------------------
POLITECH -- Declan McCullagh's politics and technology mailing list
You may redistribute this message freely if you include this notice.
To subscribe, visit http://www.politechbot.com/info/subscribe.html
This message is archived at http://www.politechbot.com/
-------------------------------------------------------------------------
