The value of vulnerabilities
Jason Miller
http://www.securityfocus.com/print/columnists/391

There is value in finding vulnerabilities. Yet many people believe that a
vulnerability doesn't exist until it is disclosed to the public. We know
that vulnerabilities need to be disclosed, but what role should vendors play
in making these issues public?

One of the things that I really love about information security is the large
number of different technologies involved. With personal computers alone,
there are all sorts of architectures, operating systems, devices, and
protocols to learn about. There's never a shortage of information to digest.
It's hard to maintain a balance between knowing a little bit about
everything, and understanding some specific things at a very deep level.

When it comes to vulnerabilities, there is already a large spectrum of
understanding associated with them. From a high level, that understanding
may simply be about what technologies are affected and what the exploitation
results are. In contrast, if you get down to the lowest level, as a
researcher you can explore the vulnerability in gruesome detail - how
exactly the vulnerable code was found, and how the issue can be exploited.
At this level, there's even a big difference between knowing how to exploit
a vulnerability and actually exploiting it. And of course there is some
middle ground between the high level view and the view a researcher might
have. This middle ground might enable people to implement technical
mitigations for the issue, and otherwise understand the vulnerability at a
level deep enough to pinpoint and protect against the attack vector
associated with the vulnerability - even if these people might not
understand the intricate technical details themselves.

Some vulnerabilities have a very small gap between these levels, such as the
case of a simple SQL injection issue. Here, someone with only a high-level
understanding of the issue would probably not have much trouble figuring
out or learning how to exploit it. On the other hand, there are
vulnerabilities where the gap between these two levels is immense. The
Symantec Firewall DNS parsing kernel stack overflow of 2004 is a great
example of that. Exploiting this vulnerability was something that only a
select group of people would have been able to accomplish in a reasonable
amount of time.
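To make the SQL injection case concrete, here is a minimal sketch (the table, data, and function names are invented for illustration) of why the gap between understanding and exploiting such an issue is so small: once you know that user input is spliced directly into a query string, the exploit almost writes itself.

```python
import sqlite3

# Toy database for illustration (assumed schema, not from any real product).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_vulnerable(name):
    # Vulnerable pattern: input is concatenated into the SQL string,
    # so crafted input can change the query's logic.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: input is treated strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic attacker-supplied value that subverts the vulnerable query:
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # matches every row
print(find_user_safe(payload))        # matches nothing
```

With the payload, the vulnerable function executes `SELECT name FROM users WHERE name = '' OR '1'='1'`, which is true for every row. Contrast that with something like the Symantec kernel stack overflow, where knowing the bug exists gets you nowhere near a working exploit.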

Before I digress too much further, let me just say that I find
vulnerabilities to be fascinating little things. Each one is unique, and
each one has its own subset of knowledge requirements to fully understand
it.

Where do vulnerabilities come from?

Although this might sound like a simple question, the answer isn't always
simple. There are two schools of thought about where vulnerabilities come
from, so I'll discuss each as we explore further.

Most public vulnerabilities are disclosed by a security researcher, and more
often than not these are on a major security-related mailing list such as
Bugtraq. A security researcher can be an employee of a corporation, a
full-time independent researcher, or even an audit-by-night researcher who
simply glances through code in his spare time. In some cases, the person who
discovers a vulnerability may have done so simply by accident. In most
cases, discovering and researching a vulnerability in its entirety is an
intensive process that may involve many skilled man-hours.

Now, for whatever reason, the public disclosure of a vulnerability is often
considered to coincide with its very existence. Even the often-used term
"zero-day" seems to imply that an undisclosed vulnerability doesn't really
exist yet. This belief is a mistake that too many people make. It's as if
people are under the impression that these vulnerabilities don't actually
pose any sort of threat until they're publicly disclosed. If a vulnerability
is discovered in the proverbial forest, and no one hears of it, then people
think it isn't really a vulnerability, so to speak.

The process of "responsible disclosure" requires security researchers to sit
on information until vendors have released patches for it. In the past,
we've even seen hostility between vendors and security researchers, who have
two very different opinions on disclosing this information. Vendors want the
time to fix the problem, which can be a pretty involved process. Security
researchers disclosing this information to vendors are obviously looking to
have the issues addressed, regardless of whether or not it's their primary
reason for disclosure. And although these seem like similar goals, conflict
often arises.

Currently, it would seem that the security industry as a whole acknowledges
vulnerabilities on their disclosure date. In some cases, these issues are
reported to vendors weeks, months, or even years before disclosure happens.
There are no guarantees, and therefore I think it would be pretty naive to
believe that the person reporting the issue is the only one aware of its
existence. That in itself is pretty frightening if you think about it.

The value in vulnerabilities

Vulnerabilities are becoming a valuable commodity. Rumors circulated that
the recent WMF vulnerability, which was exploited before the public had any
knowledge of the issue, was sold for $4000 - though this remains
unconfirmed. Suppose it were true.
were true. I doubt it would be the first case of this happening, and I'm
certain that it won't be the last. There are even companies, like iDefense
and 3Com, that are willing to pay security researchers for unpublished
vulnerabilities and then take over the process of vendor disclosure.
How do you think this affects vulnerability research? No matter which way
you look at it, vulnerabilities today are worth cold hard cash.

While the programs available from iDefense and 3Com might not be a big draw
for corporations with security research teams, they do make a huge
difference for independent researchers. Because of these programs, it's now
possible for an independent researcher to devote all of his time to this
sort of research. Provided he is good at finding vulnerabilities, he can
now make a living at it, and this sort of work can become a full time
endeavor. I think that's pretty awesome.

The ethics of vulnerabilities

Some people might find the ethics behind selling vulnerabilities to be
somewhat questionable. Shouldn't security researchers disclose this
information out of the kindness of their heart? Perhaps. But should we then
expect security researchers to audit commercial software, which is sold for
profit, and to do so for free? If there are ethical issues in the sale of
vulnerabilities, what's ethical about selling very insecure software in the
first place? While it's impossible to write software without vulnerabilities,
it's pretty obvious that some companies don't even try to create secure
products - and thus, ethics don't seem to come into play for a company
that's focused only on the bottom line. Making secure software costs time
and money, and corporations are unlikely to devote time and money to this
problem if it doesn't have a significant impact on the bottom line.

Which side of the fence do you sit on? Do you believe that vulnerabilities
only become a real threat after they've been publicized? Or are these issues
a threat regardless of their public disclosure?

Why we need responsible, public disclosure

Personally, I believe that vulnerabilities pose a real threat long before
they are publicly disclosed. The notion that publicly disclosing these
issues puts people at risk - long after the vendor has been notified, and
months or even years have passed without any sort of public notification -
seems ignorant and self-serving. This view also seems like nothing more than
an attempt to divert the blame for insecure software and a poor remediation
process away from where it belongs: those who created the software. The
bottom line is that after a vulnerability is discovered and reported to a
vendor, the systems are still vulnerable to the issue, regardless of whether
or not someone decides to make the information public.

And, to clarify, I'm not saying that posting an exploit to Bugtraq before
even contacting a vendor (or perhaps, just a few hours after contacting
them) is responsible. It's not. I'm also not saying that it doesn't put
people at risk. It takes time to fix these issues, and I'm not trying to
downplay the difficulty in making patches for widely deployed commercial
software. However, there should be some sort of limit here - a vendor
that keeps its head in the sand and refuses to acknowledge a vulnerability
publicly isn't helping anyone.

I think it's time that vendors start to acknowledge issues publicly before
they release the patches, particularly in the event that the patching is
going to take them several months - or in some cases, more than a year. At
the very least, vendors should acknowledge the issue and provide people with
some sort of mitigation for it.

Ultimately, I believe that security researchers are doing us all a favor.
That's something that they deserve to be rewarded for. While responsible
disclosure is important, there are also limitations to reasonable vendor
response times - because people are at risk long before the public
disclosure. In the end, security researchers aren't the ones creating the
vulnerabilities; they're just the ones finding them.


_______________________________________________
Infowarrior mailing list
[email protected]
https://attrition.org/mailman/listinfo/infowarrior
