I'm new to this group, but wanted to offer some observations as the
author of the GeoTrust White Paper in the link above (and repeated
here):

http://www.geotrust.com/resources/white_papers/pdfs/SSLVulnerabilityWPcds.pdf

I'm a recovering lawyer (haven't practiced for 20 years), so when I
first started working with GeoTrust in 2000 on standards and policy
issues, as well as CPS drafting, I became very curious about the
authentication standards for manually vetted certificates as practiced
by all the CAs.  To be blunt, I wasn't impressed by what I found at
any of them - it's basically a light-touch process that weeds out
only the most obvious false entities.  Fraudsters can hijack the
business identities of other companies with a little effort.  And even
for entities owned by a fraudster (registered corporations, etc.)
there's no guarantee that it's a "real" business or that
there's anything there but a shell.

These deficiencies arose, in my opinion, because the people setting
digital certificate standards in the early days were not familiar with
business and legal issues and assumed the process of authenticating a
business, etc. was somehow easy - but it isn't easy at all.  They
were also developing these standards in the early 1990s, when Internet
use was confined pretty much to the Defense Department, universities,
and a few savvy individuals.  Phishing and other online fraud attacks
were not contemplated.

None of this has mattered to date because no one has ever looked at the
identity data contained deep in a manually vetted certificate (O, OU,
L, ST, and C fields) - so no one cared whether it was valid or
useful.  A big part of the cost of these manually vetted certificates,
however, is the process of collecting the documents, etc. that
populate these fields.  The more we thought about it, the more we
realized that the most important field - and the only one that's
unique and can't be faked - is the CN field (domain name).  That
certificate field must also match the Internet address the viewer
visits, or else warnings will pop up for the consumer - and it's the only
certificate data field that is handled in exactly the same way by all
CAs and all browser makers.  So we created a new type of automated
certificate that focuses on proof of domain control in real time
combined with real time email and telephone validation and
sophisticated fraud-detection algorithms.  We still do manually vetted
certificates as well, but are trying to move away from this older model
because automated certificates are better (and faster too - minutes
instead of days).  Because of the inherent vulnerabilities of the old
"O field certs", there's just no added value in the paper
process.  Other CAs have started offering the automated certs as well,
so it's becoming a standard.
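To make the CN-versus-address check concrete, here is a minimal sketch of
the matching logic browsers apply - a hypothetical illustration only, not
any browser's or CA's actual code (real implementations follow the formal
rules for wildcards and subjectAltName entries):

```python
# Hypothetical helper illustrating the CN/hostname match described above:
# the certificate's CN must match the address the viewer visits, or the
# browser warns.  Handles only the common single-label wildcard form.

def cn_matches(cn: str, hostname: str) -> bool:
    """Return True if a certificate CN matches the visited hostname.

    A wildcard CN like *.example.com covers exactly one extra label:
    it matches www.example.com, but not example.com or a.b.example.com.
    Comparison is case-insensitive, as DNS names are.
    """
    cn, hostname = cn.lower(), hostname.lower()
    if cn.startswith("*."):
        suffix = cn[1:]  # ".example.com"
        same_depth = len(hostname.split(".")) == len(cn.split("."))
        return same_depth and hostname.endswith(suffix)
    return cn == hostname

print(cn_matches("www.example.com", "www.example.com"))  # True
print(cn_matches("*.example.com", "shop.example.com"))   # True
print(cn_matches("*.example.com", "example.com"))        # False
```

Because every CA and every browser applies this same check, the CN field
is the one identity datum a fraudster cannot simply assert - he must
actually control the domain.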

I'm concerned about any prominent display of O field data in next
generation browsers because I think it will cause many viewers to
believe the identity data is reliable.  In many cases it's relatively
reliable (not sure it's very useful information much of the time),
but there are enough holes and vulnerabilities in the manually vetted O
field certs to attract phishers and other crooks.  I believe next
generation automated certs combined with all the third party data tied
to the site or domain itself (again, this is a unique identifier
worldwide) as outlined in my White Paper will be a lot more reliable
and understandable for consumers.

I look forward to a discussion of these issues.

Kirk Hall, Standards and Policies
GeoTrust

_______________________________________________
Mozilla-security mailing list
Mozilla-security@mozilla.org
http://mail.mozilla.org/listinfo/mozilla-security
