There are some interesting issues being raised:

<snip>

1)  Researcher R finds a security hole in vendor V's product.
2)  R attempts to contact V to reveal the bug.
3)  V does not respond.
4)  R attempts communication several times over the next 90 days, but
never receives a response.
5)  R releases an advisory.
6)  Attacker A writes an exploit for the hole, and uses it to hack
into company C.
7)  C successfully sues V for several million dollars compensation.

Does V still have the right to sue R?  If vendors are made liable for
security holes, and those vendors have the right to sue the people who
release advisories and/or exploits, then we'll be seeing security
researchers on the wrong end of multi-million-dollar lawsuits.  I'm
sure I'm not the only person who feels uncomfortable about this.
Buffer overflow exploits are not difficult to write; it doesn't come
down to whether there's exploit code or just an advisory.

</snip>
[RS] Let's assume that contracts and licensing do not exclude liability.
Provided that the security vulnerability is reported to the vendor, the
vendor should immediately verify the claims and inform all of its
licensed clients.  In many cases a vulnerability can be mitigated by
other measures which, while less efficient or reducing business
functionality, may lower the risk until a patch is available.  The
business would then decide whether the risk is acceptable and it can
continue operating, or whether to defer the risk by reducing
functionality (stopping services, etc.) or halting completely until a
patch arrives (for instance, if the IDS picked something up).  Just
because a vulnerability is detected in a service one is using does not
necessarily mean the server has to be taken offline.  However, I would
expect a patch if I intend to use that feature in the future.

In such cases, businesses are fully aware of the risks of doing
business, can apply some rough quantitative measure of risk, and
understand the risk model.  If clients were not notified before the
vulnerability (not the exploit) was published, businesses affected by
the security hole could sue the vendor.  The vendor may have chosen not
to inform its clients of the potential security problem, and thus
failed to exercise due diligence.

I believe this would be a better model for controlling and enabling
full disclosure.  The vulnerability finder would notify the vendor and,
following the guidelines, allow 30 days for client notification (assume
30; it could be any agreed period).  The vendor must notify clients so
they can take precautionary action.
If the vendor refuses to notify clients, and clients discover
additional risk and/or suffer damage, litigation can be a consequence.
[Seems very similar to other product warranties, et al.?]

<snip>
IMHO, vendors SHOULD be responsible for security holes.  However,
before that can be done there needs to be some kind of law put in
place to protect the researchers who find the holes.  Doesn't need to
be much, just a blanket law that if the researcher has taken
reasonable steps to alert the vendor, they cannot be held liable for
the consequences of releasing the advisory. If that doesn't happen,
things are going to get messy.
</snip>

[RS] I must admit that the legal system in this country is not
proactive; it is very reactive and heavily fraught with strange laws.
The introduction of laws and regulations to prevent reverse engineering
is just a step toward removing full disclosure.  The onus should be
placed back onto liability and insurance; preventing discovery is not
the answer.  If full disclosure were covered by some government
classification requiring adequate and official steps, liability would
rest on both sides of the vulnerability.  The author would be required
to follow the steps: informing the vendor, then releasing an advisory,
and then potentially the exploit.  Meanwhile, the vendor would be
required to notify licensees/clients prior to the advisory and then
follow up with a patch.

Secondly, just because one person has discovered a flaw doesn't mean
others do not know about it.  Hence, it is vital that vendors treat
advisories as high-priority issues and assume that potential criminals
could exploit those vulnerabilities.

It doesn't seem much of a stretch for the Homeland Security office to
regard commerce systems as "infrastructure" and hence bind researchers
and vendors to an agreement.  The only sticky part is if a vendor fails
to take note and the advisory and exploits are released.  In such a
case the Department of Homeland Security could be involved in
high-level cases, i.e. those with large-scale potential.

This is just a sketch and there are numerous possible obstacles, but it
certainly beats the current rogue view of many members who regard FD as
a terrible thing.

Cheerio
r.

Richard Scott
INFORMATION SECURITY
Best Buy World Headquarters
7075 Flying Cloud Drive
Eden Prairie, MN 55344 USA

The views expressed in this email do not represent Best Buy
or any of its subsidiaries

