Apple OCSP Responder Issues Yesterday (2020-11-12)

2020-11-13 Thread Matthew Hardeman via dev-security-policy
Insofar as part of Apple's CA hierarchy is publicly trusted and 
participates in the Mozilla Root CA program, and given the apparent 
performance issues with ocsp.apple.com yesterday, I'm writing to suggest 
there may be cause to expect some transparency regarding those issues: 
whether they impacted responses for covered certificates, what failures 
led to them, and what remediations have been taken.

I haven't seen any other mention of this, or of whether it rises to the 
level of an incident as yet.

To clarify: I do not allege that I personally experienced a timeout or 
long delay querying an in-scope certificate, but rather that 
infrastructure which appears to be shared with publicly trusted signers 
had externally detectable OCSP performance issues.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: TLS certificates for ECIES keys

2020-10-30 Thread Matthew Hardeman via dev-security-policy
On Fri, Oct 30, 2020 at 10:49 AM Bailey Basile via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> We specifically chose not to issue Apple certificates for these keys
> because we did not want users to have to trust only Apple's assertion that
> this key is for a third party.
>
>
I understand the goal of having an external CA certify the domain name of
the data processing participants' certificate (and associated key), but...
What UI experience makes any of this relevant to the user?  Is there going
to be a UI screen in the platform in which the user can view and/or choose
what parties (presumably by domain name) they will be submitting data
shares to?  Will that UI be displaying any of the certificates, key hashes,
or public keys involved?

I think domain validation for this kind of thing is pretty weak
regardless.  If Apple wanted to, they could just register
super-trusted-data-process-namealike.com, get ISRG to issue a WebPKI cert
for that and then incorporate that certificate in this scheme.  DNS based
validations don't demonstrate that the target is truly independent of Apple.


Re: TLS certificates for ECIES keys

2020-10-29 Thread Matthew Hardeman via dev-security-policy
On Thu, Oct 29, 2020 at 6:30 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The way I read Jacob's description of the process, the subscriber is
> "misusing" the certificate because they're not going to present it to TLS
> clients to validate the identity of a TLS server, but instead they (the
> subscriber) presents the certificate to Apple (and other OS vendors?) when
> they know (or should reasonably be expected to know) that the certificate
> is
> not going to be used for TLS server identity verification -- specifically,
> it's instead going to be presented to Prio clients for use in some sort of
> odd processor identity parallel-verification dance.
>

To my knowledge, caching/storing a leaf certificate isn't misuse.  While
they appear to be presenting it in some manner other than via a TLS
session, I don't believe there's any prohibition against such a thing.
Would it cure the concern if they also actually ran a TLS server that does
effectively nothing at the host name presented in the SAN dnsName?


>
> Certainly, whatever's going on with the certificate, it most definitely
> *isn't* TLS, and so absent an EKU that accurately describes that other
> behaviour,
> I can't see how it doesn't count as "misuse", and since the subscriber has
> presented the certificate for that purpose, it seems reasonable to describe
> it as "misuse by the subscriber".
>

Not all distribution of a leaf certificate is "use", let alone "misuse".
There are applications that pin certificates rather than public keys.  Is
that misuse?


>
> Although misuse is problematic, the concerns around agility are probably
> more concerning, IMO.  There's already more than enough examples where
> someone has done something "clever" with the WebPKI, only to have it come
> back and bite everyone *else* in the arse down the track -- we don't need
> to
> add another candidate at this stage of the game.  On that basis alone, I
> think it's worthwhile to try and squash this thing before it gets any more
> traction.
>

My question is: by what rule do you squash this thing that wouldn't also
cover a future similar use by a third-party relying party making
"additional" use of some subscriber's certificate?


>
> Given that Apple is issuing another certificate for each processor anyway,
> I
> don't understand why they don't just embed the processor's SPKI directly in
> that certificate, rather than a hash of the SPKI.  P-256 public keys (in
> compressed form) are only one octet longer than a SHA-256 hash.  But
> presumably there's a good reason for not doing that, and this isn't the
> relevant forum for discussing such things anyway.
>

Presumably this is so that the data processors can choose a key for the
encryption of their data shards and bind it to a DNS name demonstrated to
be under the data processor's control via a standard CA issuance process,
without abstracting the whole thing away to certificates controlled by
Apple and/or Google; that is, to demonstrate that the fractional data
shard holder's domain was externally validated by a party that is neither
Apple nor Google.

People scrape and analyze other parties' leaf certificates all the time.
What those third parties do with those certificates (if anything) is up to
those third parties.

If a third party can do things which cause a subscriber's certificate to
be revocable for misuse without having derived or acquired the private
key, I hesitate to call that ridiculous, but it is probably unsustainable.
Extending upon that, if the mere fact that the subscriber and the author
of the relying-party validation agent are part of the same corporate
hierarchy changes the decision for the same set of circumstances, that's
suspect.


Re: TLS certificates for ECIES keys

2020-10-29 Thread Matthew Hardeman via dev-security-policy
IFF the publicly trusted certificate for the special domain name is
acquired in the normal fashion and is issued from the normal leaf
certificate profile at LE, I don't see how the certificate could be claimed
to be "misused" _by the subscriber_.

To the extent that there is misuse in the described use-case, it would be
"misuse" on the part of the relying party software agent (which would be
trusting the certificate with a purpose and role in conflict with the
EKUs), but not misuse on the part of the certificate subscriber.
Publication of a leaf certificate (via various mechanisms) is neither
unusual nor cause for alarm or revocation.  People pin public keys, and,
unwisely, some people pin certificates instead.  A key compromise would be
different, but that's not what is described here.
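Since certificate pinning versus key pinning recurs in this thread, here is a toy sketch of why the two behave differently across a renewal. The byte strings are synthetic stand-ins for DER structures, not real parsing:

```python
# Toy illustration with synthetic byte strings (not real DER parsing):
# a renewal keeps the SubjectPublicKeyInfo but changes serial/validity,
# so a key pin survives while a whole-certificate pin breaks.
import hashlib

def pin(data: bytes) -> str:
    """SHA-256 pin over whatever bytes the application chose to pin."""
    return hashlib.sha256(data).hexdigest()

spki = b"<toy-der-spki>"                        # stand-in for a DER SPKI
old_cert = b"serial=1|2020-01/2020-03|" + spki  # stand-in for a full cert
new_cert = b"serial=2|2020-04/2020-06|" + spki  # renewal over the same key

def spki_of(cert: bytes) -> bytes:
    # Toy "parser": in these stand-ins the SPKI is simply the tail bytes.
    return cert[-len(spki):]

assert pin(spki_of(old_cert)) == pin(spki_of(new_cert))  # key pin holds
assert pin(old_cert) != pin(new_cert)                    # cert pin breaks
```

The same asymmetry is why applications that pin whole certificates break on routine reissuance even when the key never changes.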

Would you revoke a properly issued certificate upon proof that some
new-fangled scheme employed by a third-party application acquires a copy
of a TLS server leaf certificate, correctly verifies its validity (save
for the EKU impropriety), and then utilizes the certificate's subject
public key in an unanticipated way?  Any competent software developer with
basic PKI understanding and some rules lawyering could get any certificate
revoked at any time in that picture.  I submit that this cannot be the
correct analysis, not pragmatically.

The mere fact that a single entity is both the Subscriber and the author of
the Relying Party agent is inconsequential, is it not?

Quite separately, I'm most confused by what kind of aggregate statistics
Prio is purported to be sending and by its urgency as a public health
matter.  I think I am likely to be unpleased by whatever this thing is,
but I suppose that's not at all germane to the WebPKI question.
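For readers similarly unsure what Prio-style aggregate statistics look like, a toy additive-sharing sketch may help. This is an illustration only, not the actual Prio protocol (which also adds validity proofs over the shares); the field size and values are invented:

```python
# Toy additive secret sharing: each client splits its 0/1 value into two
# shares that are individually uniform noise. Each non-colluding processor
# sums only its own shares; only the combined sums reveal the aggregate.
import random

M = 2**31 - 1  # modulus for the toy share arithmetic (an assumption)

def split(value):
    """Split a value into two additive shares modulo M."""
    a = random.randrange(M)
    return a, (value - a) % M

clicks = [1, 0, 1, 1, 0]               # per-user data, never sent in clear
shares = [split(v) for v in clicks]
sum_a = sum(a for a, _ in shares) % M  # processor A's view: noise only
sum_b = sum(b for _, b in shares) % M  # processor B's view: noise only

# Combining the two aggregates recovers the count without either
# processor ever seeing an individual user's value.
assert (sum_a + sum_b) % M == sum(clicks)
```

Each share on its own is uniformly random, which is why neither processor learns anything about any individual client.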

On Thu, Oct 29, 2020 at 1:07 PM Jacob Hoffman-Andrews via
dev-security-policy  wrote:

> Hi all,
>
> ISRG is working with Apple and Google to deploy Prio, a "privacy-preserving
> system for the collection of aggregate statistics:"
> https://crypto.stanford.edu/prio/. Mozilla has previously demonstrated
> Prio
> for use with telemetry data:
>
> https://hacks.mozilla.org/2018/10/testing-privacy-preserving-telemetry-with-prio/
> and
>
> https://blog.mozilla.org/security/2019/06/06/next-steps-in-privacy-preserving-telemetry-with-prio
> .
> Part of the plan involves using Web PKI certificates in an unusual way, so
> I wanted to get feedback from the community and root programs.
>
> In Prio, clients (mobile devices in this case) generate "shares" of data to
> be sent to non-colluding processors. Those processors calculate aggregate
> statistics without access to the underlying data, and their output is
> combined to determine the overall statistic - for instance, the number of
> users who clicked a particular button. The goal is that no party learns the
> information for any individual user.
>
> As part of this particular deployment, clients encrypt their shares to each
> processor (offline), and then send the resulting encrypted "packets" of
> share data via Apple and Google servers to the processors (of which ISRG
> would be one). The encryption scheme here is ECIES (
> https://en.wikipedia.org/wiki/Integrated_Encryption_Scheme).
>
> The processors need some way to communicate their public keys to clients.
> The current plan is this: A processor chooses a unique, public domain name
> to identify its public key, and proves control of that name to a Web PKI
> CA. The processor requests issuance of a TLS certificate with
> SubjectPublicKeyInfo set to the P-256 public key clients will use to
> encrypt data share packets to that processor. Note that this certificate
> will never actually be used for TLS.
>
> The processor sends the resulting TLS certificate to Apple. Apple signs a
> second, non-TLS certificate from a semi-private Apple root. This root is
> trusted by all Apple devices but is not in other root programs.
> Certificates chaining to this root are accepted for submission by most CT
> logs. This non-TLS certificate has a CN field containing text that is not a
> domain name (i.e. it has spaces). It has no EKUs, and has a special-purpose
> extension with an Apple OID, whose value is the hash of the public key from
> the TLS certificate (i.e. the public key that will be used by clients to
> encrypt data share packets). This certificate is submitted to CT and uses
> the precertificate flow to embed SCTs.
>
> The Prio client software on the devices receives both the TLS and non-TLS
> certificate from their OS vendor, and validates both, checking OCSP and CT
> requirements, and checking that the public key hash in the non-TLS
> certificate's special purpose extension matches the SubjectPublicKeyInfo in
> the TLS certificate. If validation passes, the client software will use
> that public key to encrypt data share packets.
>
> The main issue I see is that the processor (a Subscriber) is using the TLS
> certificate for a purpose not indicated by that certificate's EKUs. RFC
> 5280 says 
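The key-binding check described in the quoted design can be sketched as follows. The SHA-256 choice and all byte strings here are assumptions for illustration, since the post says only "hash of the public key":

```python
# Hedged sketch of the quoted cross-check: the client compares the hash
# carried in the Apple-signed certificate's special-purpose extension
# against the SubjectPublicKeyInfo of the (never-used-for-TLS) TLS cert.
import hashlib

def bound_key(tls_spki_der, extension_hash):
    """Return the SPKI only if the non-TLS certificate's hash binds it."""
    if hashlib.sha256(tls_spki_der).digest() != extension_hash:
        raise ValueError("extension hash does not match the TLS cert's SPKI")
    return tls_spki_der  # the client would parse the P-256 key from this

spki = b"<der-encoded P-256 SubjectPublicKeyInfo>"  # placeholder bytes
ext = hashlib.sha256(spki).digest()                 # value Apple would sign

assert bound_key(spki, ext) == spki  # binding holds, key may be used
```

Only if this check (plus the OCSP and CT checks) passes would the client encrypt data share packets to that key.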

Re: PEM of root certs in Mozilla's root store

2020-10-07 Thread Matthew Hardeman via dev-security-policy
Would it be unreasonable to also consider publishing, as an "easy to use"
list, the set of only those anchors which are currently trusted in the
program and for which no exceptional in-product policy enforcement (TLD
constraints, provisional distrusts, etc.) is imposed?

The lazier implementers are going to take the raw set of anchors and none
of the associated policy, so the default assumption should be that none of
the enhanced policy enforcement from NSS or Firefox gets copied along.
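As a concrete illustration of that "raw set of anchors" path, here is a minimal sketch of consuming the published list. The CSV layout and single-quote wrapping are assumptions about the CCADB export, and, as above, this copies the anchors only, none of the NSS/Firefox policy:

```python
# Hedged sketch: extract PEM blocks from a CCADB "IncludedRootsPEM" CSV
# export and use them as the sole trust anchors for a client context.
# The column layout and quoting of the export are assumptions.
import csv
import ssl

def anchors_from_csv(path):
    """Collect every PEM certificate block found in the CSV's cells."""
    pems = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            for cell in row:
                if "BEGIN CERTIFICATE" in cell:
                    pems.append(cell.strip().strip("'"))
    return "\n".join(pems)

# Usage against the real download (untested sketch; filename invented):
#   ctx = ssl.create_default_context()
#   ctx.load_verify_locations(cadata=anchors_from_csv("IncludedRootsPEM.csv"))
```

A vendor baking this into a device that never updates is exactly the failure mode discussed below.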

On Tue, Oct 6, 2020 at 9:09 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> It seems like there should be a link to
>
> https://wiki.mozilla.org/CA/FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F
> there
>
> I realize there’s a tension between making this easily consumable, and the
> fact that “easily consumed” doesn’t and can’t relieve an organization of
> having to be responsible and being aware of the issues and discussions here
> about protecting their users.
>
> I do worry this is going to encourage one of the things that can make it
> more difficult for Mozilla to protect Mozilla users, which is when vendors
> blindly use or build a PEM file and bake it into a device they never update.
> We know from countless CA incidents that when vendors do that, and aren’t
> using these for “the web”, that it makes it more difficult for site
> operators to replace these certificates. It also makes it harder for
> Mozilla to fix bugs in implementations or policies and otherwise take
> actions that minimize any disruption for users. At the same time, Mozilla
> running a public and transparent root program does indeed mean it’s better
> for users than these vendors doing nothing at all, which is what would
> likely happen if there were too many roadblocks.
>
> While personally, I want to believe it’s “not ideal” to make it so easy, I
> realize the reality is plenty of folks already repackage the Mozilla store
> for just this reason, totally ignoring the above link, and make it easy for
> others to pull in. At least this way, you could reiterate that this list
> doesn’t really absolve these vendors of having to keep users up to date and
> protected and be able to update their root stores for their products, by
> linking to
>
> https://wiki.mozilla.org/CA/FAQ#Can_I_use_Mozilla.27s_set_of_CA_certificates.3F
>
> On Tue, Oct 6, 2020 at 5:47 PM Kathleen Wilson via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > All,
> >
> > I've been asked to publish Mozilla's root store in a way that is easy to
> > consume by downstreams, so I have added the following to
> > https://wiki.mozilla.org/CA/Included_Certificates
> >
> > CCADB Data Usage Terms
> > 
> >
> > PEM of Root Certificates in Mozilla's Root Store with the Websites
> > (TLS/SSL) Trust Bit Enabled (CSV)
> > <
> >
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEM?TrustBitsInclude=Websites
> > >
> >
> > PEM of Root Certificates in Mozilla's Root Store with the Email (S/MIME)
> > Trust Bit Enabled (CSV)
> > <
> >
> https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEM?TrustBitsInclude=Email
> > >
> >
> >
> > Please let me know if you have feedback or recommendations about this.
> >
> > Thanks,
> > Kathleen


Re: Concerns with Let's Encrypt repeatedly issuing for known fraudulent sites

2020-08-13 Thread Matthew Hardeman via dev-security-policy
It’s actually really simple.

You end up in a position of editorializing.  If you will not provide
service for abuse, everyone with a gripe constantly tries to redefine abuse.


Additionally, this is why positive security indicators are clearly on the
way out.  In the not-too-distant future all sites will be HTTPS, so all
will require certs.

CAs are not meant to certify that the party you're communicating with
isn't a monster, only that if you are visiting siterunbymonster.com you
really are speaking with siterunbymonster.com.

On Wednesday, August 12, 2020, Paul Walsh via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> [snip]
>
> >> So the question now is what the community intends to do to retain trust
> >> in a certificate issuer with such an obvious malpractise enabling
> >> phishing sites?
> >
> > TLS is the wrong layer to address phishing at, and this issue has
> already been discussed extensively on this list. This domain is already
> blocked by Google Safe Browsing, which is the correct layer (the User
> Agent) to deal with phishing at. I'd suggest reading through these posts
> before continuing so that we don't waste our time rehashing old arguments:
> https://groups.google.com/g/mozilla.dev.security.policy/search?q=phishing
>
>
> [PW]  I’m going to ignore technology and phishing here, it’s irrelevant.
> What we’re talking about is a company’s anti-abuse policies and how they’re
> implemented and enforced. It doesn’t matter if they’re selling certificates
> or apples.
>
> Companies have a moral obligation (often legal) to **try** to reduce the
> risk of their technology/service being abused by people with ill intent. If
> they try and fail, that’s ok. I don’t think a reasonable person can
> disagree with that.
>
> If Let’s Encrypt, Entrust Datacard, GoDaddy, or whoever, has been informed
> that bad people are abusing their service, why wouldn’t they want to stop
> that from happening? And why would anyone say that it’s ok for any service
> to be abused? I don’t understand.
>
> - Paul
>
>
>
> >
> > Jonathan


Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert

2020-07-04 Thread Matthew Hardeman via dev-security-policy
Just chiming in as another subscriber and relying party, with a view to
speaking to the other subscribers on this topic.

To the extent that your use case is not specifically the WebPKI as it
pertains to modern browsers, it was clear to me several years ago and gets
clearer every day: the WebPKI is not for you, us, or anyone outside that
very particular scope.

Want to pin server cert public keys in an app?  Have a separate TLS
endpoint for that with an industry or org specific private PKI behind it.

Make website endpoints that need to face broad swathes of public users’ web
browsers participate in the WebPKI.  Get client certs and API endpoints out
of it.

That was the takeaway I had quite some years ago and I’ve been saved much
grief for having moved that way.

On Saturday, July 4, 2020, Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Sat, Jul 4, 2020 at 5:32 PM Mark Arnott via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Why aren't we hearing more from the 14 CAs that this affects.  Correct me
> if I am wrong, but the CA/B Forum has something like 23 members??  An
> issue
> > that affects 14 CAs indicates a problem with the way the forum
> collaborates
> > (or should I say 'fails to work together')  Maybe this incident should
> have
> > followed a responsible disclosure process and not been fully disclosed
> > right before holidays in several nations.
>
>
> This was something disclosed 6 months ago and 6 years ago. This is not
> something “new”. The disclosure here is precisely because CAs failed, when
> engaged privately, to understand both the compliance failure and the
> security risk.
>
> Unfortunately, debates about “responsible” disclosure have existed for as
> long as computer security has been an area of focus, and itself was a term
> that was introduced as way of having the language favor the vendor, not the
> reporter. We have a security risk introduced by a compliance failure, which
> has been publicly known for some time, and which some CAs have dismissed as
> not an issue. Transparency is an essential part of bringing attention and
> understanding. This is, in effect, a “20-year day”. It’s not some new
> surprise.
>
> Even if disclosed privately, the CAs would still be under the same 7 day
> timeline. The mere act of disclosure triggers this obligation, whether
> private or public. That’s what the BRs obligate CAs to do.
>
>
> > Thank you for explaining that.  We need to hear the official position
> from
> > Google.  Ryan Hurst are you out there?
>
>
> Just to be clear: Ryan Hurst does not represent Google/Chrome’s decisions
> on certificates. He represents the CA, which is accountable to
> Google/Chrome just as it is to Mozilla/Firefox or Apple/Safari.
>
> In the past, and when speaking on behalf of Google/Chrome, it’s been
> repeatedly emphasized: Google/Chrome does not grant exceptions to the
> Baseline Requirements. In no uncertain terms, Google/Chrome does not give
> CAs blank checks to ignore violations of the Baseline Requirements.
>
> Ben’s message, while seeming somewhat self-contradictory in messaging,
> similarly reflects Mozilla’s long-standing position that it does not grant
> exceptions to the BRs. They treat violations as incidents, as Ben’s message
> emphasized, including the failure to revoke, and as Peter highlighted, both
> Google and Mozilla work through a public post-mortem process that seeks to
> understand the facts and nature of the CA’s violations and how the
> underlying systemic issues are being addressed. If a CA demonstrates poor
> judgement in handling these incidents, they may be distrusted, as some have
> in the past. However, CAs that demonstrate good judgement and demonstrate
> clear plans for improvement are given the opportunity to do so.
> Unfortunately, because some CAs think that the exact same plan should work
> for everyone, it’s necessary to repeatedly make it clear that there are no
> exceptions, and that each situation is case-by-case.
>
> This is not a Google/Mozilla issue either: as Mozilla reminds CAs at
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , delayed
> revocation issues affect everyone, and CAs need to directly communicate
> with all of the root programs that they have made representations to.
> WISeKey knows how to do this, but they also know what the expectation and
> response will be, which is aligned with the above.
>
> Some CAs have had a string of failures, and around matters like this, and
> thus know that they’re at risk of being seen as CAs that don’t take
> security seriously, which may lead to distrust. Other CAs recognize that
> security, while painful, is also a competitive advantage, and so look to be
> leaders in an industry of followers and do the right thing, especially when
> this leadership can help ensure greater flexibility if/when they do have an
> incident. Other CAs may be in uniquely difficult positions where 

Re: DigiCert issued certificate with Let's Encrypt's public key

2020-05-19 Thread Matthew Hardeman via dev-security-policy
On Mon, May 18, 2020 at 6:55 PM Kyle Hamilton  wrote:

> So, I request and encourage that CABForum members consider populating
> clause 3.2.1 of the Baseline Requirements, so that Proof-of-Possession be
> mandated.
>

I don't mean to beat a dead horse, and without addressing the merits of
treating a leaf certificate issued over a particular public key as proof
of possession/control of the corresponding private key, I'll add one
further practical problem:

The CSR, the most common way of communicating the public key and the
purported proof-of-possession of the private key to the CA, provides no
replay protection, and yet is frequently NOT treated as a
security-impacting element should it be disclosed post-issuance.  As such,
one must question whether an arbitrary CSR containing a valid signature,
produced with the private key corresponding to the subject public key in
that same CSR, really qualifies as proof of possession (or proof of
control) of said private key.  I submit that it does not.
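A short sketch of the replay point, assuming the third-party `cryptography` package; the subject name is invented for illustration:

```python
# Sketch (assumes the `cryptography` package): a CSR's proof-of-possession
# signature is static, so the very same bytes still verify when replayed
# later by someone who merely holds a copy of the CSR.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(
        x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
    )
    .sign(key, hashes.SHA256())
)
pem = csr.public_bytes(serialization.Encoding.PEM)  # later disclosed publicly

# Anyone holding the disclosed bytes can replay them to a CA at any time:
replayed = x509.load_pem_x509_csr(pem)
assert replayed.is_signature_valid
# The signature verifies, yet nothing in the CSR binds it to a submitter,
# a CA, or a point in time, so it is weak evidence of *current* possession.
```

Nonce- or challenge-based proof (as in ACME's account-bound orders) avoids this, because the signed material is fresh per request.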


Re: DigiCert issued certificate with Let's Encrypt's public key

2020-05-19 Thread Matthew Hardeman via dev-security-policy
On Mon, May 18, 2020 at 6:55 PM Kyle Hamilton  wrote:

> With proof of possession, these situations can be detected and raised as
> being not-just-theoretical, and the CAs (or whoever wants to search the CT
> logs) can notify the entities involved that they probably want to change
> their keys. In the case of CA keys potentially being duplicated, this is an
> incredibly important capacity.  In the case of EV certificate keys being
> duplicated, it can be a reportable event for the certified entities (such
> as banks) if copies of their private key are found to be in the possession
> of anyone else.
>

How, precisely?  Today, the vast majority of certificates lack any
end-entity identifying factors other than some number of SAN dnsName
entries.  In modern CDNs (for example, Cloudflare), a single certificate
might represent a plurality of entirely unrelated websites run by entirely
unrelated entities who happen to share a service provider.  Insofar as
these sites already share a TLS concentrator, such a service provider
could (while it is not the practice of anyone I know of) have quite a
number of TLS concentrator elements sharing access to a public/private key
pair, and might choose to provision quite a number of unrelated
certificates with the same key.

Two certificates issued at two different times containing the same public
key are not proof one way or the other.  They do not prove two entities
are definitely related; they do not prove that they are not.  But the fact
that proof of possession isn't required increases the plausibility that no
relationship is proven.  I wonder if the attendant ambiguity has saved
anyone's head from rolling.

On Mon, May 18, 2020 at 6:55 PM Kyle Hamilton  wrote:

> CABForum's current Baseline Requirements, section 3.2.1, is titled "Method to
> prove possession of private key".
>
> It is currently blank.
>
> A potential attack without Proof of Possession which PKIX glosses over
> could involve someone believing that a signature on a document combined
> with the non-possession-proved certificate constitutes proof of possession,
> and combined with external action which corroborates the contents of the
> document could heuristically evidence the authority to issue the document.
> (Yes, this would be a con job. But it would be prevented if CAs actually
> had the applicant prove possession of the private key.)
>
> Regardless of that potential con, though, there is one very important
> thing which Proof of Possession is good for, regardless of whether any
> credible attacks are "enabled" by its lack: it enables identification of a
> situation where multiple people independently generate and possess the same
> keypair (such as what happened in the Debian weak-key fiasco). Regardless
> of how often it might be seen in the wild, the fact is that on every key
> generation there is a chance (small, but non-zero) that the same key will
> be generated again, probably by someone different than the person who
> originally generated it. (With bad implementations the chance gets much
> larger.)
>
> With proof of possession, these situations can be detected and raised as
> being not-just-theoretical, and the CAs (or whoever wants to search the CT
> logs) can notify the entities involved that they probably want to change
> their keys. In the case of CA keys potentially being duplicated, this is an
> incredibly important capacity.  In the case of EV certificate keys being
> duplicated, it can be a reportable event for the certified entities (such
> as banks) if copies of their private key are found to be in the possession
> of anyone else.
>
> Non-zero probability of duplication is not zero probability of
> duplication, and relying on it being "close enough to zero" is eventually
> going to bite us all.  It's up to those who work for CAs to put in
> mitigations for when that day ultimately arrives, or else risk the
> viability of not only their businesses but every other CA business they
> compete with.
>
> So, I request and encourage that CABForum members consider populating
> clause 3.2.1 of the Baseline Requirements, so that Proof-of-Possession be
> mandated.
>
> -Kyle H
>
> On Sun, May 17, 2020, 22:23 Matthew Hardeman via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> > In particular, there must have been some authorisation carried out at
>> some
>> > point, or perhaps that wasn't carried out, that indicates who requested
>> the
>> > cert.  What I'm trying to discover is where the gap was, and what's
>> > required
>> > to fix it in the future.
>> >
>>
>> What gap, exactly?  There’s not a risk here.
>>
>> I don’t think it’s been codifie

Re: DigiCert issued certificate with Let's Encrypt's public key

2020-05-19 Thread Matthew Hardeman via dev-security-policy
On Mon, May 18, 2020 at 6:55 PM Kyle Hamilton  wrote:

> A potential attack without Proof of Possession which PKIX glosses over
> could involve someone believing that a signature on a document combined
> with the non-possession-proved certificate constitutes proof of possession,
> and combined with external action which corroborates the contents of the
> document could heuristically evidence the authority to issue the document.
> (Yes, this would be a con job. But it would be prevented if CAs actually
> had the applicant prove possession of the private key.)
>

Could you explain how this differs from other stretches of fitness for
purpose?  For example, I can use Lipton's tea leaves as guidance dictates:
for making a beverage purportedly fit for human consumption.  I can also
try to divine events of the future by finding meaning in the patterns of
the tiny dregs of tea leaf left in the bottom of my mug.  But I should
expect to get laughed at by Lipton customer service if I ask for
assistance with this, or if I appear to be taking those predictions
seriously.

On Mon, May 18, 2020 at 6:55 PM Kyle Hamilton  wrote:

> CABForum's current Baseline Requirements, section 3.2.1, is titled "Method to
> prove possession of private key".
>
> It is currently blank.
>
> A potential attack without Proof of Possession which PKIX glosses over
> could involve someone believing that a signature on a document combined
> with the non-possession-proved certificate constitutes proof of possession,
> and combined with external action which corroborates the contents of the
> document could heuristically evidence the authority to issue the document.
> (Yes, this would be a con job. But it would be prevented if CAs actually
> had the applicant prove possession of the private key.)
>
> Regardless of that potential con, though, there is one very important
> thing which Proof of Possession is good for, regardless of whether any
> credible attacks are "enabled" by its lack: it enables identification of a
> situation where multiple people independently generate and possess the same
> keypair (such as what happened in the Debian weak-key fiasco). Regardless
> of how often it might be seen in the wild, the fact is that on every key
> generation there is a chance (small, but non-zero) that the same key will
> be generated again, probably by someone different than the person who
> originally generated it. (With bad implementations the chance gets much
> larger.)
>
> With proof of possession, these situations can be detected and raised as
> being not-just-theoretical, and the CAs (or whoever wants to search the CT
> logs) can notify the entities involved that they probably want to change
> their keys. In the case of CA keys potentially being duplicated, this is an
> incredibly important capacity.  In the case of EV certificate keys being
> duplicated, it can be a reportable event for the certified entities (such
> as banks) if copies of their private key are found to be in the possession
> of anyone else.
>
> Non-zero probability of duplication is not zero probability of
> duplication, and relying on it being "close enough to zero" is eventually
> going to bite us all.  It's up to those who work for CAs to put in
> mitigations for when that day ultimately arrives, or else risk the
> viability of not only their businesses but every other CA business they
> compete with.
>
> So, I request and encourage that CABForum members consider populating
> clause 3.2.1 of the Baseline Requirements, so that Proof-of-Possession be
> mandated.
>
> -Kyle H
>
> On Sun, May 17, 2020, 22:23 Matthew Hardeman via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> > In particular, there must have been some authorisation carried out at
>> some
>> > point, or perhaps that wasn't carried out, that indicates who requested
>> the
>> > cert.  What I'm trying to discover is where the gap was, and what's
>> > required
>> > to fix it in the future.
>> >
>>
>> What gap, exactly?  There’s not a risk here.
>>
>> I don’t think it’s been codified that private key possession or control
>> has
>> to be demonstrated.
>>
>> I think it would be plausible for a CA to allow submission of a public key
>> in lieu of a CSR and that nothing would be wrong about it.
>> ___
>> dev-security-policy mailing list
>> dev-security-policy@lists.mozilla.org
>> https://lists.mozilla.org/listinfo/dev-security-policy
>>
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Digicert issued certificate with let's encrypts public key

2020-05-18 Thread Matthew Hardeman via dev-security-policy
On Mon, May 18, 2020 at 12:44 PM Ryan Sleevi  wrote:

> The scenario you ascribe to
> StartCom is exactly what is recommended, of CAs, in numerous CA
> incident bugs where the failure to apply that restrictive model has
> led to misissuance.
>

Separate to the matter in discussion in this thread, my understanding of
CSR processing best practice mirrored what you say here -- take the minimum
that you require from the structure and discard the rest.  I was surprised
in reading the ACME specs that various factors for issuance rely upon data
in the rather flexible but (relatively) complex data structure that is the
CSR, like requested DNS names, whether or not OCSP must-staple is desired,
etc.

I am curious what the authors' intent was there.  Was it possibly a desire
to adhere to the original functional intent of the CSR as elsewhere
specified, irrespective of the known risks which had been previously
demonstrated in bad CA implementations?
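The restrictive model mentioned above — take only what you need from the CSR and discard the rest — amounts to an allow-list over the parsed request. A schematic sketch; the field names are illustrative and this is not a real CSR parser:

```python
# Fields the CA has actually validated and is willing to honor.
ALLOWED_FIELDS = {"public_key", "dns_names", "must_staple"}

def extract_issuance_inputs(parsed_csr: dict) -> dict:
    # Anything not on the allow-list (extra extensions, subject attributes,
    # etc.) is silently dropped rather than copied into the certificate.
    return {k: v for k, v in parsed_csr.items() if k in ALLOWED_FIELDS}
```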


Re: Digicert issued certificate with let's encrypts public key

2020-05-18 Thread Matthew Hardeman via dev-security-policy
I did not state the point well.  "Scary example" as I used it above was
merely because it was a reference to StartCom at all (given the history,
etc.) -- not particularly in the context of this practice.

I concur that I see no risk in leaf certificates issued with signatures
over public keys for which neither ownership nor control of the
corresponding private key has been established.

I merely wished to add an example case to the discussion in which it was
presumably possible to have leaf certificates issued over a public key for
which the control of private key had not been proven.

On Mon, May 18, 2020 at 12:44 PM Ryan Sleevi  wrote:

> On Mon, May 18, 2020 at 11:40 AM Matthew Hardeman via
> dev-security-policy  wrote:
> > A scary example, I know, but StartCom's original system was once
> described
> > as taking the public key data (and they emphasized _only_ the public key
> > data) from the CSR.  Everything else was populated out-of-band of any PKI
> > protocols via the website.
> >
> > Frankly, I don't see how anyone permitting signature over a third party
> > public key without proof of control of the matching private key creates a
> > risk.  I think if there are relying-party systems where this creates a
> > problem, the error is in those relying-party systems and their respective
> > validation logic.
>
> Why would StartCom's system be "A scary example" when you acknowledge
> that you don't see how it creates a risk? The scenario you ascribe to
> StartCom is exactly what is recommended, of CAs, in numerous CA
> incident bugs where the failure to apply that restrictive model has
> led to misissuance.
>


Re: Digicert issued certificate with let's encrypts public key

2020-05-18 Thread Matthew Hardeman via dev-security-policy
I certainly recall descriptions of other issuing systems in history in
which it was (at least based on the description) possible to get a
certificate issued without proof of control of the private key.

A scary example, I know, but StartCom's original system was once described
as taking the public key data (and they emphasized _only_ the public key
data) from the CSR.  Everything else was populated out-of-band of any PKI
protocols via the website.

Frankly, I don't see how anyone permitting signature over a third party
public key without proof of control of the matching private key creates a
risk.  I think if there are relying-party systems where this creates a
problem, the error is in those relying-party systems and their respective
validation logic.
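The failure mode described in the quoted thread below — accepting a submitted key instead of the one actually served over TLS — comes down to a missing comparison. A simplified sketch: the fetch helper retrieves the served leaf certificate, and the comparison here hashes whole DER structures for brevity; a real check must extract and compare the SPKI from each side:

```python
import hashlib
import socket
import ssl

def fetch_served_cert_der(host: str, port: int = 443) -> bytes:
    # Retrieve the DER-encoded leaf certificate the server presents.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def fingerprints_match(submitted_der: bytes, served_der: bytes) -> bool:
    # NOTE: comparing whole-structure hashes only works if both sides hash
    # the same structure; the real comparison is between the two public keys.
    digest = lambda d: hashlib.sha256(d).digest()
    return digest(submitted_der) == digest(served_der)
```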

On Mon, May 18, 2020 at 10:05 AM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> It was just the one system and situation-specific.
>
> -----Original Message-----
> From: dev-security-policy 
> On Behalf Of Peter Gutmann via dev-security-policy
> Sent: Monday, May 18, 2020 6:31 AM
> To: Matt Palmer ; Mozilla <
> mozilla-dev-security-pol...@lists.mozilla.org>; Jeremy Rowley <
> jeremy.row...@digicert.com>
> Subject: Re: Digicert issued certificate with let's encrypts public key
>
> Jeremy Rowley via dev-security-policy <
> dev-security-policy@lists.mozilla.org> writes:
>
> >For those interested, the short of what happened is that we had an old
> >service where you could replace existing certificates by having
> >DigiCert connect to a site and replace the certificate with a key taken
> >from the site after a TLS connection. No requirement for a CSR since we
> >obtained proof of key control through a TLS connection with the
> >website. Turned out the handshake didn't actually take the key, but
> >allowed the customer to submit a different public key without a CSR. We
> >took down the service a while ago - back in November I think. I plan to
> >put it back up when we work out the kink with it not forcing the key to
> match the key used in the handshake.
>
> Thanks, that was the info I was after: was this a general problem that we
> need to check other systems for as well, or a situation-specific issue that
> affected just one site/system but no others.  Looks like other systems are
> unaffected.
>
> Peter.
>


Re: Digicert issued certificate with let's encrypts public key

2020-05-17 Thread Matthew Hardeman via dev-security-policy
> In particular, there must have been some authorisation carried out at some
> point, or perhaps that wasn't carried out, that indicates who requested the
> cert.  What I'm trying to discover is where the gap was, and what's
> required
> to fix it in the future.
>

What gap, exactly?  There’s not a risk here.

I don’t think it’s been codified that private key possession or control has
to be demonstrated.

I think it would be plausible for a CA to allow submission of a public key
in lieu of a CSR and that nothing would be wrong about it.


Re: GoDaddy: Failure to revoke key-compromised certificate within 24 hours

2020-03-10 Thread Matthew Hardeman via dev-security-policy
Isn't the evident answer, if reasonable compromise is not forthcoming, just
to publish the compromised private key?  There's no proof of a compromised
private key quite as good as providing a copy of it.

I understand the downsides, but I think that capricious burdens encourage
stripping the issue bare.  You can't dodge a copy of a key.
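Once a compromised key is published, anyone can maintain a pwnedkeys-style blocklist keyed by SPKI hash and check certificates against it. A minimal sketch, assuming the SubjectPublicKeyInfo bytes are already in hand:

```python
import hashlib

# Hashes of SubjectPublicKeyInfo structures known to be compromised.
COMPROMISED_SPKI = set()

def register_compromised_key(spki_der: bytes) -> str:
    # Record a published/compromised key and return its fingerprint.
    fp = hashlib.sha256(spki_der).hexdigest()
    COMPROMISED_SPKI.add(fp)
    return fp

def certificate_uses_compromised_key(cert_spki_der: bytes) -> bool:
    # Any certificate certifying a blocklisted key must be revoked.
    return hashlib.sha256(cert_spki_der).hexdigest() in COMPROMISED_SPKI
```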

On Tue, Mar 10, 2020 at 5:31 PM Piotr Kucharski via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> For 0% of impact the FPs do not matter that much, so agreed!
>
> Of course for now reality is not that... yet!
> https://github.com/certbot/certbot/issues/1028 seems so appropriate :)
>
> PS  I was definitely not advocating for 5% false negative, no; we must
> strive for 0% false negatives as well; all I was saying was exercising
> caution for the perhaps-not-so-clear-cut 5% cases. (Probably closer to 1%)
>
> On Tue, 10 Mar 2020 at 23:08, Ryan Sleevi  wrote:
>
> >
> >
> > On Tue, Mar 10, 2020 at 5:56 PM Piotr Kucharski  wrote:
> >
> >> I'm sympathetic to CAs wanting to filter out the noise of shoddy reports
> >>> and shenanigans, but I'm also highly suspicious of CAs that put too
> >>> unreasonable an onus on reporters. It seems, in the key compromise case,
> >>> the benefit of the doubt should largely rest with the reporter. If we saw
> >>> some quantifiable increase in hijinks and misrevocations, there are a
> >>> myriad of ways to deal with that. The most effective of these reasons seems
> >>> to be facilitating rapid replacement of certificates, rather than
> >>> preferring ossification.
> >>>
> >>
> >> I am totally against putting unreasonable onus on reporters! But
> >> hopefully you agree that CAs should strive for zero false positives in
> >> revocations.
> >>
> >
> > I'd happily take a 95% false positive of revocations if there were 0%
> > impact in the revocation (e.g. due to easy replacement). I'm mainly
> > hesitant to setting up a system of 0% false positives but which has a 5%
> > false negative.
> >
> > That's why I'm less excited for standard systems of signaling revocation
> > (not that there isn't some value!), and more keen on systems that make
> > revocation easier, quicker, and less-impactful. That's obviously Hard
> Work,
> > but that's the exciting part of working in PKI. Everything is Hard Work
> > these days :D
> >
>


Re: How Certificates are Verified by Firefox

2019-12-04 Thread Matthew Hardeman via dev-security-policy
I had thought that the OCSP privacy concerns were among the reasons for the
general decline in OCSP queries issued by browsers, and also part of the
rationale for developing and encouraging the deployment of OCSP stapling.

On Wed, Dec 4, 2019 at 6:12 PM Peter Bowen  wrote:

> Why not use OCSP?
>
> On Wed, Dec 4, 2019 at 3:52 PM Matthew Hardeman via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> Not that anyone is presently doing or would do such a thing, but...
>>
>> Imagine a CA that wanted to offer up a user/browser tracking service to
>> their subscriber customer.
>>
>> Is there any rule that prevents an issuing CA from having a "custom"
>> (hiding an identifier for the end-entity certificate) AIA URL?  Such that
>> when the browser AIA chases, it's disclosing the fact of the AIA chase as
>> well as a user's IP address (and possibly some browser details) to the CA?
>> One could easily do it with wildcard DNS and a per-end-entity cert host
>> label for the AIA distribution point.
>>
>>
>> On Wed, Dec 4, 2019 at 4:13 PM Ryan Sleevi via dev-security-policy <
>> dev-security-policy@lists.mozilla.org> wrote:
>>
>> > Yes, I am one of the ones who actively disputes the notion that AIA
>> > considered harmful.
>> >
>> > I'm (pleasantly) surprised that any CA would be opposed to AIA (i.e.
>> > supportive of "considered harmful"), since it's inherently what gives
>> them
>> > the flexibility to make their many design mistakes in their PKI and
>> still
>> > have certificates work. The only way "considered harmful" would work is
>> if
>> > we actively remove the flexibility afforded CAs in this realm, which I'm
>> > highly supportive of, but which definitely encourages more distinctive
>> PKIs
>> > (i.e. more explicitly reducing the use of Web PKI in non-Web cases)
>> >
>> > Of course, AIA is also valuable in helping browsers push the web
>> forward,
>> > so I can see why "considered harmful" is useful, especially in that it
>> > helps further the notion that root certificates are a thing of value
>> (and
>> > whose value should increase with age). AIA is one of the key tools to
>> > helping prevent that, which we know is key to ensuring a more flexible,
>> and
>> > agile, ecosystem.
>> >
>> > The flaw, of course, in a "considered harmful", is the notion that
>> there's
>> > One Chain or One Right Chain. That's not the world we have, nor have we
>> > ever. The notion that there's One Right Chain for a TLS server to send
>> > presumes there's One Right Set of CA Trust Anchors. And while that's
>> > definitely a world we could pursue, I think we know from the past
>> history
>> > of CA incidents, there's incredible benefit to users to being able to
>> > respond to CA security incidents differently, to remove trust in
>> > deprecated/insecure things differently, and to set policies differently.
>> > And so we can't expect servers to know the Right Chain because there
>> isn't
>> > One Right Chain, and AIA (or intermediate preloading with rapid updates)
>> > can help address that.
>> >
>> > On Wed, Dec 4, 2019 at 5:02 PM Tim Hollebeek via dev-security-policy <
>> > dev-security-policy@lists.mozilla.org> wrote:
>> >
>> > > Someone really should write up "AIA chasing considered harmful".  It
>> was
>> > > disputed at the TLS session at IETF 105, which shows that the
>> reasoning
>> > > behind it is not as widely understood as it needs to be, even among
>> TLS
>> > > experts.
>> > >
>> > > I'm very appreciative of Firefox's efforts in this area.  Leveraging
>> the
>> > > knowledge of all the publicly disclosed ICAs to improve
>> chain-building is
>> > > an
>> > > idea whose time has come.
>> > >
>> > > -Tim
>> > >
>> > > > -----Original Message-----
>> > > > From: dev-security-policy <
>> > dev-security-policy-boun...@lists.mozilla.org
>> > > >
>> > > On
>> > > > Behalf Of Wayne Thayer via dev-security-policy
>> > > > Sent: Monday, December 2, 2019 3:29 PM
>> > > > To: Ben Laurie 
>> > > > Cc: mozilla-dev-security-policy
>> > > ;
>> > > > Peter Gutmann 
>> > > > Subject: Re: [FORGED] Re: How Certificates are 

Re: [FORGED] Re: How Certificates are Verified by Firefox

2019-12-04 Thread Matthew Hardeman via dev-security-policy
Not that anyone is presently doing or would do such a thing, but...

Imagine a CA that wanted to offer up a user/browser tracking service to
their subscriber customer.

Is there any rule that prevents an issuing CA from having a "custom" AIA URL
(one embedding a hidden identifier for the end-entity certificate)?  Such that
when the browser AIA chases, it's disclosing the fact of the AIA chase as
well as a user's IP address (and possibly some browser details) to the CA?
One could easily do it with wildcard DNS and a per-end-entity cert host
label for the AIA distribution point.
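For illustration of how little machinery such tracking would need — all names here are hypothetical — a per-certificate AIA host could be minted like this:

```python
import hashlib

def tracking_aia_url(serial: int, base_domain: str = "aia.example-ca.test") -> str:
    # Encode a per-certificate identifier into the hostname; with wildcard
    # DNS, every AIA fetch then reveals which certificate was being chased,
    # plus the fetching client's IP, to the CA's servers.
    label = hashlib.sha256(serial.to_bytes(20, "big")).hexdigest()[:32]
    return f"http://{label}.{base_domain}/issuer.crt"
```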


On Wed, Dec 4, 2019 at 4:13 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Yes, I am one of the ones who actively disputes the notion that AIA
> considered harmful.
>
> I'm (pleasantly) surprised that any CA would be opposed to AIA (i.e.
> supportive of "considered harmful"), since it's inherently what gives them
> the flexibility to make their many design mistakes in their PKI and still
> have certificates work. The only way "considered harmful" would work is if
> we actively remove the flexibility afforded CAs in this realm, which I'm
> highly supportive of, but which definitely encourages more distinctive PKIs
> (i.e. more explicitly reducing the use of Web PKI in non-Web cases)
>
> Of course, AIA is also valuable in helping browsers push the web forward,
> so I can see why "considered harmful" is useful, especially in that it
> helps further the notion that root certificates are a thing of value (and
> whose value should increase with age). AIA is one of the key tools to
> helping prevent that, which we know is key to ensuring a more flexible, and
> agile, ecosystem.
>
> The flaw, of course, in a "considered harmful", is the notion that there's
> One Chain or One Right Chain. That's not the world we have, nor have we
> ever. The notion that there's One Right Chain for a TLS server to send
> presumes there's One Right Set of CA Trust Anchors. And while that's
> definitely a world we could pursue, I think we know from the past history
> of CA incidents, there's incredible benefit to users to being able to
> respond to CA security incidents differently, to remove trust in
> deprecated/insecure things differently, and to set policies differently.
> And so we can't expect servers to know the Right Chain because there isn't
> One Right Chain, and AIA (or intermediate preloading with rapid updates)
> can help address that.
>
> On Wed, Dec 4, 2019 at 5:02 PM Tim Hollebeek via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > Someone really should write up "AIA chasing considered harmful".  It was
> > disputed at the TLS session at IETF 105, which shows that the reasoning
> > behind it is not as widely understood as it needs to be, even among TLS
> > experts.
> >
> > I'm very appreciative of Firefox's efforts in this area.  Leveraging the
> > knowledge of all the publicly disclosed ICAs to improve chain-building is
> > an
> > idea whose time has come.
> >
> > -Tim
> >
> > > -----Original Message-----
> > > From: dev-security-policy <
> dev-security-policy-boun...@lists.mozilla.org
> > >
> > On
> > > Behalf Of Wayne Thayer via dev-security-policy
> > > Sent: Monday, December 2, 2019 3:29 PM
> > > To: Ben Laurie 
> > > Cc: mozilla-dev-security-policy
> > ;
> > > Peter Gutmann 
> > > Subject: Re: [FORGED] Re: How Certificates are Verified by Firefox
> > >
> > > Why not "AIA chasing considered harmful"? The current state of affairs
> is
> > that
> > > most browsers [other than Firefox] will go and fetch the intermediate
> if
> > it's not
> > > cached. This manifests itself as sites not working in Firefox, and
> users
> > switching
> > > to other browsers.
> > >
> > > You may be further dismayed to learn that Firefox will soon implement
> > > intermediate preloading [1] as a privacy-preserving alternative to AIA
> > chasing.
> > >
> > > - Wayne
> > >
> > > [1]
> > >
> >
> https://wiki.mozilla.org/Security/CryptoEngineering/Intermediate_Preloading
> > > #Intermediate_CA_Preloading
> > >
> > > On Thu, Nov 28, 2019 at 1:39 PM Ben Laurie  wrote:
> > >
> > > >
> > > >
> > > > On Thu, 28 Nov 2019 at 20:22, Peter Gutmann
> > > > 
> > > > wrote:
> > > >
> > > >> Ben Laurie via dev-security-policy
> > > >> 
> > > >> writes:
> > > >>
> > > >> >In short: caching considered harmful.
> > > >>
> > > >> Or "cacheing considered necessary to make things work"?
> > > >
> > > >
> > > > If you happen to visit a bazillion sites a day.
> > > >
> > > >
> > > >> In particular:
> > > >>
> > > >> >caching them and filling in missing ones means that failure to
> > > >> >present correct cert chains is common behaviour.
> > > >>
> > > >> Which came first?  Was cacheing a response to broken chains or
> broken
> > > >> chains a response to cacheing?
> > > >>
> > > >> Just trying to sort out cause and effect.
> > > >>
> > > >
> > > > Pretty sure if broken chains caused browsers to not show pages, then
> > > > there 

Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Matthew Hardeman via dev-security-policy
My apologies.  I messed up when trimming that down.  I was quoting Ryan
Sleevi there.

On Tue, Oct 8, 2019 at 2:55 PM Paul Walsh  wrote:

>
> On Oct 8, 2019, at 12:51 PM, Matthew Hardeman  wrote:
>
>
> On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  wrote:
>>
>> so we need better solutions. It's also being willing to acknowledge that
>> if
>> we can't find systemic fixes, it may be that we have a broken system, and
>> we should not be afraid of looking to improve or replace the system.
>>
>
> Communication styles aside, I believe there's merit to far more serious
> community consideration of the notion that either the system overall or the
> standard for expectations of the system's performance is literally
> broken.  There's probably a better forum for that discussion than this
> thread, but I echo that I believe the notion has serious merit.
>
>
> [PW] It looks like I said those words above, but I didn’t :)
>


Re: Mozilla Policy Requirements CA Incidents

2019-10-08 Thread Matthew Hardeman via dev-security-policy
On Tue, Oct 8, 2019 at 2:10 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Tue, Oct 8, 2019 at 2:44 PM Paul Walsh  wrote:
>
> so we need better solutions. It's also being willing to acknowledge that if
> we can't find systemic fixes, it may be that we have a broken system, and
> we should not be afraid of looking to improve or replace the system.
>

Communication styles aside, I believe there's merit to far more serious
community consideration of the notion that either the system overall or the
standard for expectations of the system's performance is literally
broken.  There's probably a better forum for that discussion than this
thread, but I echo that I believe the notion has serious merit.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-30 Thread Matthew Hardeman via dev-security-policy
On Fri, Aug 30, 2019 at 11:56 AM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> For readers unfamiliar, let me briefly explain what Safe Browsing gives
> browsers:
>
> For every URL you're considering displaying you calculate a whole bunch
> of cryptographic hashes, of the whole URL, just the FQDN and certain
> other combinations. Then you truncate the hashes and you see if the
> truncated hashes are in a small list Google gave you (a browser will
> update this list periodically using a synchronisation API Google
> designed for the purpose).
>
> If one of your truncated hashes /is/ in the list, maybe this is
> Phishing! You call Google, telling them the truncated hash you are
> worried about, and Google gives you a complete list of full (not
> truncated) hashes you should worry about with this prefix. It might be
> empty (the phishing attack is gone) or have multiple entries.
>
> Only if the full hash you were worried about is in that fresh list from
> Google do you tell the user "Ohoh. Phishing, probably go somewhere
> else" in all other cases everything is fine.
>

What's described here is how the browser, in concert with the service,
determines whether the page you visit is on the list of what Google
considers to be a likely unsafe page.

What's not discussed in that mechanism is how Google decides which pages are
unsafe, and when.

Say, for example, you're actively monitoring a property that historically
has had EV presentation.  For high value sites, especially in finance,
perhaps the database notes that EV is "normal" for the site.  If subsequent
checks against that site lack EV, perhaps it flags a human review to
determine if the site has been hijacked.  Perhaps it combines the change of
EV status with a change in other underlying elements (new / different a-DNS
set, A records resolving to suspicious IP space, etc.)  But I'm not sure we
can really know, unless they're willing to say.
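The client-side lookup flow quoted above can be sketched compactly. This is a simplified illustration, not the full Safe Browsing spec: real clients canonicalize URLs and cap the number of host-suffix/path-prefix expressions, and the example domain and prefix list are hypothetical:

```python
import hashlib

def url_expressions(host: str, path: str = "/") -> list[str]:
    # Simplified "host suffix / path prefix" expressions to hash.
    parts = host.split(".")
    suffixes = [".".join(parts[i:]) for i in range(max(len(parts) - 1, 1))]
    paths = ["/"] if path == "/" else ["/", path]
    return [s + p for s in suffixes for p in paths]

def hash_prefix(expr: str, n: int = 4) -> bytes:
    # Truncated SHA-256, as stored in the local prefix list.
    return hashlib.sha256(expr.encode("utf-8")).digest()[:n]

# Local truncated-hash list, normally synced periodically from the vendor.
LOCAL_PREFIXES = {hash_prefix("evil.example.com/")}

def needs_full_hash_fetch(host: str, path: str = "/") -> bool:
    # A prefix hit only means "ask the server for full hashes";
    # it is not yet a phishing verdict.
    return any(hash_prefix(e) in LOCAL_PREFIXES for e in url_expressions(host, path))
```

Only on a prefix hit does the client contact the service, which is what keeps the full browsing history off the wire.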


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-30 Thread Matthew Hardeman via dev-security-policy
>
> I’m not saying that this is the case, but merely to say that the
> Yes/No/IDK does not represent the full set of feasible responses.
>

So let's add "I decline to make inquiries, official or otherwise" and
"Policy prevents me from discussing that" to the list.  It would be
interesting to get one of any of the mentioned responses back.


Re: GlobalSign: SSL Certificates with US country code and invalid State/Prov

2019-08-28 Thread Matthew Hardeman via dev-security-policy
I'd particularly like to see the memes directly within the certificate,
maybe an extension to RFC 6170.

On Wed, Aug 28, 2019 at 6:13 AM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thursday, August 22, 2019 at 11:08:03 PM UTC-4, Jeremy Rowley wrote:
> > It's a trap. I do wish memes showed up here
> >
> > Censys shows something like 130 globalsign certs with abbreviated joi
> info. I think we show 16?
> > 
> > From: dev-security-policy 
> on behalf of Corey Bonnell via dev-security-policy <
> dev-security-policy@lists.mozilla.org>
> > Sent: Thursday, August 22, 2019 8:57:42 PM
> > To: Doug Beattie ;
> mozilla-dev-security-pol...@lists.mozilla.org <
> mozilla-dev-security-pol...@lists.mozilla.org>
> > Subject: Re: GlobalSign: SSL Certificates with US country code and
> invalid State/Prov
> >
> > Hi Doug,
> > Thank you for posting this incident report to the list. I have one
> clarifying question in regard to the correctness criteria for the jurisST
> field when performing the scanning for additional problematic certificates.
> Is GlobalSign allowing state abbreviations in the jurisST field, or only
> full state names?
> > Thanks,
> > Corey
> >
> > 
> > From: dev-security-policy 
> on behalf of Doug Beattie via dev-security-policy <
> dev-security-policy@lists.mozilla.org>
> > Sent: Thursday, August 22, 2019 11:35
> > To: mozilla-dev-security-pol...@lists.mozilla.org
> > Subject: GlobalSign: SSL Certificates with US country code and invalid
> State/Prov
> >
> > Today we opened a bug disclosing misissuance of some certificates that
> have
> > invalid State/Prov values:
> >
> >
> https://bugzilla.mozilla.org/show_bug.cgi?id=1575880
> >
> >
> >
> > On Tuesday August 20th 2019, GlobalSign was notified by a third party
> > through the report abuse email address that two certificates were
> discovered
> > which contained wrong State information, either in the
> stateOrProvinceName
> > field or in the jurisdictionStateOrProvinceName field.
> >
> >
> >
> > The two certificates in question were:
> >
> >
> https://crt.sh/?id=1285639832
> >
> >
> https://crt.sh/?id=413247173
> >
> >
> >
> > GlobalSign started and concluded the investigation within 24 hours.
> Within
> > this timeframe GlobalSign reached out to the Certificate owners that
> these
> > certificates needed to be replaced because revocation would need to
> happen
> > within 5 days, following the Baseline Requirements. As of the moment of
> > reporting, these certificates have not yet been replaced, and the
> offending
> > certificates have not been revoked. The revocation will happen at the
> latest
> > on the 25th of August.
> >
> >
> >
> > Following this report, GlobalSign initiated an additional internal review
> > for this problem specifically (unexpected values for US states in
> > the stateOrProvinceName or jurisdictionStateOrProvinceName fields).
> Expected
> > values included the full name of the States, or their official
> abbreviation.
> > We reviewed all certificates, valid on or after the 21st of August, that
> > weren't revoked for other unrelated reasons.
> >
> >
> >
> > To accommodate our customers globally, the stateOrProvinceName and
> > jurisdictionStateOrProvinceName fields are text fields during our ordering
> > process. The unexpected values were not spotted or not properly
> corrected.
> > We have put additional flagging in place to highlight unexpected values
> in
> > both of these fields, and are looking at other remedial actions. None of
> > these certificates were previously flagged for internal audit, which is
> > completely randomized.
> >
> >
> >
> > We will update with a full incident report for this and also disclose all
> > other certificates found based on our research.
> >
>
> At the risk of turning this place into Reddit, I agree that a meme feature
> is needed.
>
> Anyhow, judging from censys.io, it looks like there are far bigger
> offenders of this particular quirky rule than 

Re: CA handling of contact information when reporting problems

2019-08-22 Thread Matthew Hardeman via dev-security-policy
I'm merely a relying party and subscriber, but it seems quite unreasonable
to believe that there is or should be any restriction upon a party to a
business communication (which is what a report / complaint from a third
party regarding key compromise, etc, is) from further dissemination of said
communications.

It seems to me quite a stretch to suggest that even the GDPR restrains
such behavior.  Are people seriously suggesting that a third party, with
whom you have no NDA or agreement in place, may as much as email you and
expect you to take action based upon said email AND expect that you be
enjoined from as little as forwarding a copy of that email?  That seems
absurd.


Re: Intent to Ship: Move Extended Validation Information out of the URL bar

2019-08-16 Thread Matthew Hardeman via dev-security-policy
Honestly the issues, as I see them, are twofold:

1.  When I visit a site for the first time, how do I know I should expect
an EV certificate?  I am conscientious about subsequent visits, especially
financial industry sites.

2.  The browsers seem to have a bias toward the average user, that user
literally being less ...smart/aware... than half of all users.  EV is a
feature that can only benefit people who are vigilant and know what to look
for.  It seems dismissive of the more capable users, but I suppose that's
their call.

On Fri, Aug 16, 2019 at 5:15 PM Daniel Marschall via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I have a few more comments/annotations:
>
> (1) Pro EV persons argue "Criminals have problems getting an EV
> certificate, so most of them are using only DV certificates".
>
> Anti EV persons argue "Criminals just don't use EV certificates, because
> they know that end users don't look at the EV indicator anyway".
>
> I assume, we do not know which of these two assumptions fits to the
> majority of criminals. So why should we make a decision (change of UI)
> based on such assumptions?
>
> (2) I am a pro EV person, and I do not have any financial benefit from EV
> certificates. I do not own EV certificates, instead my own websites use
> Let's Encrypt DV certificates. But when I visit important pages like Google
> or PayPal, I do look at the EV indicator bar, because I know that these
> pages always have an EV certificate. If I would visit PayPal and only see a
> normal pad lock (DV), then I would instantly leave the page because I know
> that PayPal always has an EV certificate. So, at least for me, the UI
> change is very negative (except if you color the pad lock in a different
> color, that would be OK for me). We cannot say that all users don't care
> about the EV indicator. For some users like me, it is important.
>
> (3) Also, I wanted to ask, if you want to remove the UI indicator, because
> you think that EV certificates give the feeling of false security, then
> please tell me: What is the alternative? Removing the UI bling without
> giving any alternative solution is just wrong in my opinion. Yes, there
> might be a tiny amount of phishing sites that use EV certificates, but the
> EV indicator bar is still better than just nothing. AntiPhishing filters
> are not a good alternative because they only protect when the harm is
> already done to some users.
>


Re: Use of Certificate/Public Key Pinning

2019-08-13 Thread Matthew Hardeman via dev-security-policy
I feel that there's a great deal of consultancy and assistance that CAs and
PKI professionals could bring to their more sophisticated customers with
scenarios such as these where public key pinning in a field-deployed
application may present problems for certificates being revoked.

A best practices document explaining to the application developers and
server-side teams that:

1.  An app which calls a server-side API under your control should _always_
do so on a TLS endpoint at a different hostname & SNI label than any
browser-facing websites.
2.  Following step 1's guidance means that you can control the lifecycle of
the certificate for the services accessed by your own application(s)
separate from WebPKI facing certificates meant to facilitate a TLS
authenticated session to a modern browser.
3.  It also means that the endpoints serving the application CAN, but don't
have to, be from a publicly trusted PKI.  For compatibility reasons, they
generally should be if there are any external consumers of the API.
Ultimately, if their own application wishes to pin, they should pre-create
several certificates with distinct keys and write their app to override the
platform trust decisioning: pin on the set of keys that their API
endpoint certificates will carry, ignore revocation, and require that the
presented leaf certificate contain one of the set of pinned public keys.

This is essentially free in virtually all deployment models today.
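Concretely, the override in step 3 is little more than comparing the leaf
certificate's SubjectPublicKeyInfo hash against a baked-in set. A rough
Python sketch, using HPKP-style base64 SHA-256 pins; the pin values below
are placeholders, not real keys:

```python
import base64
import hashlib

# Hypothetical pin set shipped inside the app: base64(SHA-256(SPKI)) of the
# current key plus pre-generated offline backup keys (placeholder values).
PINNED_SPKI_HASHES = {
    "r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E=",  # current key
    "WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18=",  # offline backup key
}

def spki_pin(spki_der: bytes) -> str:
    """Compute the HPKP-style pin for a DER-encoded SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def leaf_key_is_pinned(leaf_spki_der: bytes) -> bool:
    """Accept the connection only if the leaf's public key matches a pinned
    hash; revocation is deliberately not consulted, per the strategy above."""
    return spki_pin(leaf_spki_der) in PINNED_SPKI_HASHES
```

Rotating to a backup key then requires no app update at all, since its pin
is already in the set.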

Oversubscribing TLS endpoints (for our purposes let's say a DNS based
hostname and TLS SNI label define a TLS endpoint) for different target
audiences, especially when those audiences are modern browsers in
combination with anything else, is one of the most significant causes of
compatibility issues and legacy cruft which have historically hindered the
agility of the WebPKI.

On Tue, Aug 13, 2019 at 10:12 AM Nuno Ponte via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Dear m.d.s.p.,
>
> I would like to bring into discussion the use of certificate/public key
> pinning and the impacts on the 5-days period for certificate revocation
> according to BR §4.9.1.1.
>
> Recently, we (Multicert) had to rollout a general certificate replacement
> due to the serial number entropy issue. Some of the most troubled cases to
> replace the certificates were customers doing certificate pinning on mobile
> apps. Changing the certificate in these cases required configuration
> changes in the code base, rebuild app, QA testing, submission to App
> stores, call for expedited review of each App store, wait for review to be
> completed and only then the new app version is made available for
> installation by end users (who in turn are required to update the app as
> soon as possible).
>
> Meeting the 5-days deadline with this sort of process is “challenging”, at
> best.
>
> A first approach is to move from certificate pinning to public key pinning
> (PKP). This prevents the need to update the app in many of the certificate
> replacement operations, where the public key is reused and the certificate
> can be replaced transparently to the app (generically, an “User Agent”
> doing PKP).
>
> However, in the event of a serious security incident that requires re-key
> (such as key compromise), the certificate must be revoked in less than 24
> hours (for the benefit of everyone – subscriber, relying parties, issuing
> CA, etc). It’s virtually impossible to release a new app version within
> this timeframe. And this, I think, makes a very strong point against the
> use of PKP.
>
> On the other side, PKP is a simple yet powerful and effective technique to
> protect against MITM and other attacks. It seems to be widely used in apps
> with advanced threat models (mobile banking, sensitive personal
> information, etc) and there are many frameworks available (including native
> support in Android via Network Security Configuration [1]).
>
> There are several possible mitigation actions, such as pinning more than
> one public key to have more than one certificate to quickly rollover in
> case of a revocation. Even then, it is very likely that all the redundant
> key pairs were generated and maintained by the same systems and procedures,
> and thus all of them will become effectively compromised.
>
> Ultimately, it may become common practice that 1) PKP frameworks are set
> to bypass revocation checks or 2) PKP is done with private certificates
> (homemade, self-signed, managed ad-hoc with no CRL/OCSP services). Does any
> of this leads to a safer Internet?
>
> I don’t expect this thread to end up into an absolute conclusion
> advocating for or against, but opening it to discussion and contributions
> may help to document possible strategies, mitigations, alternatives, pros &
> cons, and hopefully provide guidance for an educated decision.
>
> Best regards,
>
> Nuno Ponte
> Multicert SA
>
> [1] https://developer.android.com/training/articles/security-config
>
>
>
>
>
> 

Re: DarkMatter CAs in Google Chrome and Android

2019-07-25 Thread Matthew Hardeman via dev-security-policy
On Thu, Jul 25, 2019 at 4:33 AM Nick Lamb via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Surely the answer is "Yes" ? I mean, it makes strategic sense to react
> to a CA which tries to appeal a trust store decision over the heads of
> the people making it in exactly this way - by distrusting it.
>
> I think it's what I would advise an independent trust store to do in
> this situation.
>

 Perhaps I misunderstand, but this would seem to suggest that there be
direct penalties for mere pursuit of due process.


Re: Nation State MITM CA's ?

2019-07-24 Thread Matthew Hardeman via dev-security-policy
This is not at all a safe assumption.  If they care to know and have active
MITM infrastructure in place, the last time I looked at the issue,
identifying which browser was in use (and generally speaking which
operating system or set of operating systems) was fairly trivial by
fingerprinting the characteristics of the TLS negotiation.
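That fingerprinting technique was later popularized by tools such as JA3:
hash the ordered fields a client advertises in its ClientHello, which differ
between browser stacks. A rough sketch of the idea; the field values in the
test are illustrative, not real browser captures:

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """JA3-style TLS client fingerprint: MD5 over the comma-joined decimal
    fields of the ClientHello, with list items joined by dashes. Because
    browsers advertise distinct cipher/extension orderings, the resulting
    hash identifies the client stack regardless of destination."""
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode("ascii")).hexdigest()
```

An interceptor seeing the ClientHello can compute this before deciding how
(or whether) to MITM a given client.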

On Wed, Jul 24, 2019 at 11:43 AM jfb1776--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> The government sending out SMSes to tell users to install the certificate
> don't (until the certificate is installed) know what browser the user is
> using.
>
> 
>
> If only 10% of the populace hears what's going on directly, that gets the
> word out a whole lot better than 0%. People talk. It might be enough to get
> them to stop. *Because* they don't *yet* know which browser. Nobody wants
> to be sending out "Hey, install this so you can immediately be told about
> my corruption!" to their entire populace.
>


Re: Nation State MITM CA's ?

2019-07-22 Thread Matthew Hardeman via dev-security-policy
On Mon, Jul 22, 2019 at 9:20 PM Corey Bonnell via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> I think the optimal solution in terms of user security is to create a
> blacklist of known MITM CA public keys and simply prevent the installation
> of certificates containing these public keys in the trust store. If several
> browsers could coordinate on such an effort, then perhaps that would
> pressure the government to back down on their demand to intercept TLS
> communications because their root would be incompatible with major
> browsers.
>

It is an interesting question.  It essentially becomes a gamble on whether
they'll back down or just fork their own KazakhFox.  But if they do push
this all the way with a national browser, then their people are even
worse off.


Re: Nation State MITM CA's ?

2019-07-19 Thread Matthew Hardeman via dev-security-policy
While possible, that seems unlikely.  Corporates are, in general, not
trying to hide when this is being done.

In fact, there are lots of good legal liability reasons why they should
want their users to be constantly reminded.

On Fri, Jul 19, 2019 at 10:27 AM Troy Cauble via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I like the consistency of a reminder in all cases, but this
> might lead to corporate policies to use other browsers.
>


Re: Nation State MITM CA's ?

2019-07-18 Thread Matthew Hardeman via dev-security-policy
Regarding indicators, I agree that it should be more apparent.  Perhaps a
dedicated bar that occupies an entire edge-to-edge horizontal area.

I would propose that it might have two distinct messages, as well:

1.  A message that an explicitly known MiTM certificate exists in the
certificate chain being relied upon.  This would allow for explicit warning
about known MiTM infrastructures and would allow tailoring any "more info"
resource to explicitly call out that it is known that interception is being
performed.

2.  A message that indicates that a non-standard certificate chain is being
presented, which might mean corporate interception, private websites within
an organization, etc, etc.

On Thu, Jul 18, 2019 at 2:11 PM Andrew via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I agree a persistent indicator is a good idea. From what I understand
> Firefox does already have an indicator hidden in the site information box
> that appears when you click the lock icon in the address bar (
> https://bugzilla.mozilla.org/show_bug.cgi?id=1549605 ). This should be
> more visible in my opinion. Maybe add an asterisk next to the lock icon or
> something.
>
>


Re: Nation State MITM CA's ?

2019-07-18 Thread Matthew Hardeman via dev-security-policy
If the government of Kazakhstan requires interception of TLS as a condition
of access, the real question being asked is whether or not Mozilla products
will tolerate being used in these circumstances.

Your options are to block the certificate, in which case Mozilla products
simply become unusable to those subject to this interception, or not block
the certificate.

I certainly think that Mozilla should not distribute the MiTM root or do
anything special to aid in its installation.  I believe policy already
makes clear that NO included root (commercial or government) is allowed for
use in MiTM scenarios and I believe that policy should be held firm.

I do believe that as it is manually installed rather than distributed as a
default that it should continue to override pinning policy.

This is an accepted corporate use case today in certain managed
environments.  The dynamic is quite different for an entire people under an
entire government, but the result is the same:

One has to choose whether to continue serving the user despite the adverse
and anti-privacy scenario, or if one simply won't have their products be
any part of that.

Much has been said about the TLS 1.3 design hoping to discourage use cases
like this, but the reality is what I always suspected: some enterprises or
governments actually will spend the money to do full active MiTM
interception.

Let's posit what might happen if Mozilla made their products intentionally
break for this use case.

Further, let's stipulate that every other major browser follows course and
they all blacklist this or any other nation-state interception certificate,
even if manually installed.

Isn't the logical outcome that the nation-state forks one of the
open-source browser projects, patches in their MiTM certificate, and
un-does the blacklisting?  I think that's exactly what would happen.  The
trouble is, there's no reason to expect that the fork will be maintained or
updated as security issues are discovered and upstream patches are issued.
We wind up with an infrequently updated browser being used by all these
users, who in turn get no privacy AND get their machines rooted
disproportionate to the global population.

I do definitely support a persistent UI indicator for MiTM scenarios that
emphasizes on-screen at all times that the session is being protected by a
non-standard certificate and some sort of link to explain MiTM and the
risks.

On Thu, Jul 18, 2019 at 12:04 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Jul 18, 2019 at 10:00 AM Ryan Sleevi 
> wrote:
>
> >
> > On Thu, Jul 18, 2019 at 12:50 PM Wayne Thayer via dev-security-policy <
> > dev-security-policy@lists.mozilla.org> wrote:
> >
> >> Finally, I'll point out that Firefox implements public key pinning via a
> >> preloaded list of sites, so the reported MITM will fail for those:
> >>
> >>
> https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinning#Implementation_status
> >
> >
> > Wayne,
> >
> > I don't believe this is correct. Locally-installed trust anchors bypass
> > pinning, as they're indicators of explicit user action (or coercion) to
> > configure. As a consequence, unless the pinning mode is set to 2. Strict
> > (which will typically preclude the use of a number of anti-virus
> products,
> > for better or worse), which it is not by default, the MITM will not fail.
> > From the Firefox point-of-view, it's completely transparent whether the
> > MITM is being done by local security software or a nation-state
> >
>
> Yes, I had just realized that - in the default state, pinning in Firefox
> will not block this type of MITM.
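A hypothetical simplification of the bypass described here (not Firefox's
actual code, and the mode numbering is an assumption based on the
discussion above):

```python
def pins_enforced(root_is_builtin: bool, pinning_mode: int) -> bool:
    """Whether key pins apply to a chain.  In strict mode (2) pins are
    enforced for every chain; otherwise a chain terminating in a
    user-installed (non-builtin) root bypasses pinning entirely, so a
    locally added MITM root is transparent to the pin check."""
    STRICT = 2
    if pinning_mode == STRICT:
        return True
    return root_is_builtin
```

This is why the locally-installed Kazakh root defeats the preloaded pin
list in the default configuration.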
>


Re: DarkMatter Concerns

2019-07-16 Thread Matthew Hardeman via dev-security-policy
In fairness, I think Mozilla essentially stipulated that this reason was
given little or no weight in the decision.

Specifically Wayne Thayer noted at [1]:

Some of this discussion has revolved around compliance issues, the most
prominent one being the serial number entropy violations discovered by
Corey Bonnell. While these issues would certainly be a consideration when
evaluating a root inclusion request, they are not sufficient to have
triggered an investigation aimed at revoking trust in the DarkMatter
intermediates or QuoVadis roots. Therefore, they are not relevant to the
question at hand.


I certainly am not trying to divine something that's not there, but "not
relevant to the question at hand" fairly strongly suggests "was not a
factor in the decision".

[1]:
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/TseYqDzaDAAJ


On Tue, Jul 16, 2019 at 4:12 PM Nadim Kobeissi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think it's interesting how one of the main technical arguments for
> denying DarkMatter's root inclusion request -- the misissuance of
> certificates with 63-bit identifiers instead of 64-bit identifiers, also
> affected Google, Apple and Godaddy, and to a much greater extent:
>
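The arithmetic behind that 63-bit issue is simple enough to sketch
(illustrative Python, not any CA's actual code). The Baseline Requirements
demand at least 64 bits of CSPRNG output in the serial; the common bug was
forcing a positive DER INTEGER by masking off the top bit instead of just
encoding the unsigned value:

```python
import secrets

def br_compliant_serial() -> int:
    """At least 64 bits of CSPRNG output.  A set top bit is fine: a DER
    encoder simply prepends a 0x00 byte to keep the INTEGER positive."""
    return int.from_bytes(secrets.token_bytes(8), "big")

def buggy_63_bit_serial() -> int:
    """The problematic pattern: clear the top bit so the value encodes as
    positive without a leading zero byte, silently leaving only 63 bits
    of entropy in what was meant to be a 64-bit serial."""
    return int.from_bytes(secrets.token_bytes(8), "big") & ((1 << 63) - 1)
```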


Re: DarkMatter Concerns

2019-07-16 Thread Matthew Hardeman via dev-security-policy
Hi Kathleen and community,

I understand that you've made a decision w/r/t the DarkMatter CA matters
and am not writing to challenge or attempt influence on those.

I'm responding here only in so far as that you were "intrigued" by my
comments analogizing Mozilla Root Trust store decisioning to the kinds of
risk management exercised in assumption of financial risks such as in
consumer lending.  I'm writing to further expound on my positions in that
regard.

I submit that I disagree somewhat with Gijs' suggestion that Mozilla acts
in the nature of a third-party guarantor here.  I further submit that the
more direct analogue is that the community of Mozilla users present
and future is the set of depositing members of the Mozilla Trust Credit
Union, a bank of trust/credit which is lended out to CAs from the pool of
trust + good will of those users -- that pool being under the direction and
management of the Mozilla organization, who, I believe, are literally
acting in the nature of a lender, loaning out the pooled assets (in this
case the sum of the trust extended to Mozilla) to qualified trust-borrowers
(CAs).  Mozilla is explicitly in the position of making decisions regarding
where to invest that pooled trust.

Indeed, if Mozilla is a mere guarantor in this process, who precisely is
the lender?

I also disagree with the contention that Mozilla has "effectively no
recourse" should a trust "debtor" (CA) "default" (fail to make "payments"
on the borrowed trust through providing services to certificate subscribers
only in compliance with program and industry guidelines and with proper
validations.)  Mozilla's recourse is essentially absolute: you can revoke
the trust you've extended, preventing further damage.  Just as a lender in
consumer finance has the ability to service and manage borrowers in their
current portfolio (for example, via periodic credit monitoring of current
borrowers), tools for the management and monitoring of program participants
exist: Certificate Transparency log monitoring as well as a fairly active
community of users who are actively digging for problems.

I agree that it's quite possible that the Mozilla Root program should be
far more selective.  Perhaps DarkMatter does not meet the bar.  If so,
though, I think there are a whole lot of other participants (including many
current participants) who also do not meet the bar, if one is to be
objective in these decisions.

In support of objectivity in these matters, I again raise the scenario:
Imagine you personally have to appear before a judge in some court for some
reason regarding rationale in program membership.  Do we want a future in
which this might be the testimony: "Your honor, CAs A, B, and C despite
having minor compliance issues directly aligned to program guidelines and
which were quickly remediated, those CAs met the bar for inclusion.
However, CAs D, E, and F despite meeting compliance burdens aligned to
program guidelines without exception, failed to meet the bar for inclusion
because of real or perceived shortcomings not directly aligned to program
guidelines."?

Whether by Mozilla's doing or not, inclusion in the Mozilla Root trust
store is essentially a prerequisite to access to other trust stores, access
to stores in various other OS distributions, access to default stores in
IoT devices being manufactured, etc.

These past couple of years have shown a very particular direction and focus
for the WebPKI.  (Broadly, from what I've seen, toward a domain-validated
only future with what is likely to evolve as a single static leaf
certificate profile.  Perhaps with caveats for signed exchange certs, etc.)

If that's truly where it's headed and if that future has a charitable CA
supported by the community, there does not really seem to be a place for
commercial CAs moving forward, with respect to the WebPKI.  Perhaps it
makes sense for the program to begin aligning policy to a "hard divorce" of
the public WebPKI from all other use cases?  This would likely dramatically
reduce incentives for commercial participants with intentions good or bad
from joining and maintaining membership in the program.  If that's where
it's headed anyway, it may be that a great deal of work [on the part of all
involved parties] can be avoided by being explicit on that intent sooner
rather than later.  Were you to do that -- and combine that with taking
steps that technologically make it infeasible to have a single TLS endpoint
usefully act as part of the WebPKI and a private hierarchy simultaneously,
I believe you could essentially eliminate much of the commercial and
government interest in program membership.  We would hopefully end up with
no more than a handful of equivalent but fully independent (managerially
and technologically) CAs in the image of Let's Encrypt and no reason for
any other CAs to be in the program.

On Tue, Jul 16, 2019 at 11:19 AM Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> I 

Re: DarkMatter Concerns

2019-07-10 Thread Matthew Hardeman via dev-security-policy
On Wed, Jul 10, 2019 at 11:43 AM Scott Rea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Mozilla’s new process, based on its own admission, is to ignore technical
> compliance and instead base its decisions on some yet to be disclosed
> subjective criterion which is applied selectively.  We think everybody in
> the Trust community should be alarmed by the fact that the new criterion
> for inclusion of a commercial CA now ignores any qualification of the CA or
> its ability to demonstrate compliant operations. We fear that in doing so
> Mozilla is abandoning its foundational principles of supporting safe and
> secure digital interactions for everyone on the internet.  This new process
> change seems conveniently timed to derail DigitalTrust’s application.
>
> By Mozilla’s own admission, DigitalTrust is being held to a new standard
> which seems to be associated with circular logic – a media bias based on a
> single claimed event that aims to falsely implicate DarkMatter is then used
> to inform Mozilla’s opinion, and the media seizes on this outcome to
> substantiate the very same bias it aimed to introduce in the first place.
> Additionally, in targeting DigitalTrust and in particularly DarkMatter’s
> founder Faisal Al Bannai, on the pretense that two companies can’t operate
> independently if they have the same owner, we fear another dangerous
> precedent has been set.
>

I broadly concur with these points.

In other significant risk management disciplines and domains in which a
plurality of diverse applicants seek trust, objectivity and strong
data-backed alignment of specific risk factors associated to specific bad
outcomes are prized above practically all else.  An obvious example is
consumer credit lending and particularly large loans like mortgages.

As an analogy, consider that at least in a broad directional sense, the
change in Mozilla's decisioning and underlying reasoning is akin to moving
from a mechanism where one particular FICO score means one particular
outcome regardless of the color of your skin or sexuality and toward a
mechanism in which, despite having matching FICO scores, two applicants and
their applications meet dissimilar fates: one of them is declined not for
falling outside of objective risk management criteria but because they
"seem shady" or "fit the description of someone who did something bad" or
"just aren't a good match for our offering".  In finance, such decisioning
wouldn't survive the most cursory and forgiving review.  That "fact"
pattern wouldn't overcome a claim of racism even if the lender and the
applicant whose loan was declined were of the same race.

Please let me be quite specific in that I am not suggesting that there is
racial or national animus expressed in this decision by Mozilla.  I used
the parallel to racism in finance because it's exceedingly well documented
that strong objective systems of risk management and decisioning led to
better overall financial outcomes AND significantly opened the door to
credit (aka trust) to otherwise improperly maligned and underserved
communities.

To my mind, this decision is regression from a more formal standard and
better compliance monitoring than has ever been available (CT, etc.) to a
subjective morass with handwringing and feelings and bias.

I can not see how one reconciles taking pride in their risk management and
compliance acumen while making such a regression.  That kind of dissonance
would eat at my soul.


Re: DarkMatter Concerns

2019-07-10 Thread Matthew Hardeman via dev-security-policy
Even if we stipulated that all those accounts were fully accurate, all
those reports are about a separate business that happens to be owned by the
same owner.

Furthermore, in as far as none of those directly speak to their ability to
own or manage a publicly trusted CA, I would regard those issues as
immaterial.  Perhaps they also indiscriminately kill puppies?  That would
be awful.  Still, I do not see how that would be disqualifying.

On Wed, Jul 10, 2019 at 2:45 AM Nex via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think that dismissing as baseless investigations from 9 different
> reporters, on 3 different newspapers (add one more, FP, if consider
> this[1]) is misleading. Additionally, it is just false to say all the
> articles only relied on anonymous sources (of which they have many, by
> the way), but there are clearly sources on record as well, such as
> Simone Margaritelli and Jonathan Cole for The Intercept, and Lori Stroud
> for Reuters.
>
> While obviously there is no scientific metric for this, I do think the
> number of sources (anonymous and not) and the variety of reporters and
> of newspapers (with their respective editors and verification processes)
> do qualify the reporting as "credible" and "extensively sourced".
>
> Additionally, details provided by sources on record directly matched
> attacks documented by technical researchers. For example, Lori Stroud
> talking details over the targeting of Donaghy, which was also proven in
> Citizen Lab's "Stealth Falcon" report. Lastly, Reuters reporters make
> repeated mentions of documents they had been able to review supporting
> the claims of their sources. Unless you have good reasons to believe
> reporters are just lying out of their teeth, I don't see how all of this
> can't be considered credible.
>
>


Re: DarkMatter Concerns

2019-07-09 Thread Matthew Hardeman via dev-security-policy
On Sun, Jun 23, 2019 at 11:52 AM Cynthia Revström via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> My view is a bit different, we have lots of CAs already, I think it is more
> important to be extra secure rather than to take unnecessary risks.
>

A position like this is not unreasonable, but it would open up other
questions.  Taken to its logical conclusion, if we have lots of CAs and
believe security risks arise from new additions, why would we ever add a
new one again?  Or at least, what becomes the criteria for getting past
that risk and to adding a new one?


Re: DarkMatter Concerns

2019-07-09 Thread Matthew Hardeman via dev-security-policy
On Tue, Jul 9, 2019 at 4:34 PM mono.riot--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I think it's less about a single person than about an alleged firewalling
> of entities that end up being not firewalled at all, but all owned by the
> same person in the end.
>

That kind of corporate hierarchy exists for numerous legitimate reasons --
mostly tax & liability segregation.  Nothing about that, in itself, is
illegitimate.  And the separations offered, properly implemented, do mean
that the teams at the other divisions have no sway over the CA, save by
convincing the ownership to dictate policy.

There is even precedent for a major CA (what was Comodo CA) and a TLS
interception device manufacturer (BlueCoat) to share significant beneficial
ownership: Francisco Partners.  It is difficult for me to see the
difference, objectively speaking.


Re: DarkMatter Concerns

2019-07-09 Thread Matthew Hardeman via dev-security-policy
On Tuesday, July 9, 2019 at 10:31:27 AM UTC-5, Wayne Thayer wrote:

> DarkMatter has argued [3] that their CA business has always been operated
> independently and as a separate legal entity from their security business.
> Furthermore, DarkMatter states that once a rebranding effort is completed,
> “the DarkMatter CA subsidiary will be completely and wholly separate from
> the DarkMatter Group of companies in their entirety.” However, in the same
> message, DarkMatter states that “Al Bannai is the sole beneficial
> shareholder of the DarkMatter Group.” and leaves us to assume that Mr. Al
> Bannai would remain the sole owner of the CA business. More recently,
> DarkMatter announced that they are transitioning all aspects of the
> business to DigitalTrust and confirmed that Al Bannai controls this entity.
> This ownership structure does not assure me that these companies have the
> ability to operate independently, regardless of their names and legal
> structure.

I seek to better understand this aspect.  Are we to infer that "Mr. Al Bannai" 
is banned from the program?  Is there a list of named individuals who are 
banned?  Will there be?

Truly horrid organizations and/or individuals passively own all kinds of 
assets.  A strong management team that can be trusted to keep commitments to 
sound the alarm if the organization goes off track is one way to address that.


Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-25 Thread Matthew Hardeman via dev-security-policy
On Mon, Mar 25, 2019 at 3:03 PM Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> While it may be true that the certificates in question do not contain
> SANs, unfortunately, the certificates may still be trusted for SSL since
> they do not have EKUs.
>
> For an example see "The most dangerous code in the world: validating SSL
> certificates in non-browser software" which is available at
> https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html
>
> What you will see that hostname verification is one of the most common
> areas applications have a problem getting right. Often times they silently
> skip hostname verification, use libraries provide options to disable host
> name verifications that are either off by default, or turned off for
> testing and never enabled in production.
>
> One of the few checks you can count on being right with any level of
> predictability in my experience is the server EKU check where absence is
> interpreted as an entitlement.
>

My ultimate intent was to try to formulate a way in which GRCA could
provide certificates for the applications that they're having to support
for their clients today without having to essentially plan to be
non-compliant for a multi-year period.

It sounds like there are one or more relying-party applications that perform
strict validation of the EKU if provided, but which do not appear to have a
single standard EKU that they want to see (except perhaps AnyPurpose).

I'm confused as to why these certificates, which seem to be utilized in
applications outside the usual WebPKI scope, need to be issued in a trust
hierarchy that chains up to a root in the Mozilla store.  It would seem
like the easiest path forward would be to have the necessary applications
include a new trust anchor and issue these certificates outside the context
of the broader WebPKI.

In essence, if there are applications in which these GRCA end-entity
certificates are being utilized where the Mozilla trust store is utilized
as a trust anchor set and for which the validation logic is clearly quite
different from the modern WebPKI validation logic and where that validation
logic effectively requires non-compliance with Mozilla root store policy,
is this even a use case that the program and/or community want to support?


Re: DarkMatter Concerns

2019-03-22 Thread Matthew Hardeman via dev-security-policy
I'm not sure on the weighting of the two sides that you point out, but I do
broadly agree that it is about striking some balance between those two ends.

That said, if all outcomes are equally bad, I think I favor the bad outcome
that doesn't open the door to accusations of a discriminatory approach/bias.

On Fri, Mar 22, 2019 at 11:49 AM Nadim Kobeissi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> What a strange situation.
>
> On the one hand, denying DarkMatter's CA bid because of these press
> articles would set the precedent of refusing to accept the engagement and
> apparent good faith of a member of the industry, based only on hearsay and
> with no evidence.
>
> On the other hand, deciding to move forward with a good-faith, transparent
> and evidence-based approach actually risks creating a long-term undermining
> of public confidence in the CA inclusion process.
>
> It really seems to me that both decisions would cause damage to the CA
> inclusion process. The former would make it seem discriminatory (and to
> some even somewhat xenophobic, although I don't necessarily agree with
> that) while the latter would cast a serious cloud of uncertainty above the
> safety of the CA root store in general that I have no idea how anyone could
> or will eventually dispel.
>
> As a third party observer I genuinely don't know what could be considered a
> good move by Mozilla at this point. I want Mozilla to both offer good faith
> and a transparent process to anyone who promises to respect its mission,
> but I also want it to maintain the credibility and trust that it has built
> for its CA store. For it to seem impossible for Mozilla to do both at the
> same time seems deeply unfortunate and a seriously problematic setting for
> the future of this process overall.
>
> I really wish that solid evidence of the claims being made against
> DarkMatter is published (if it exists). That would be a great way for
> Mozilla to make a unilaterally defensible position.
>
> Nadim Kobeissi
> Symbolic Software • https://symbolic.software
> Sent from Galaxy
>
> On Fri, Mar 22, 2019, 4:19 PM Benjamin Gabriel <
> benjamin.gabr...@darkmatter.ae> wrote:
>
> >
> >
> > Benjamin Gabriel | General Counsel & SVP Legal
> > Tel: +971 2 417 1417 | Mob: +971 55 260 7410
> > benjamin.gabr...@darkmatter.ae
> >
> > The information transmitted, including attachments, is intended only for
> > the person(s) or entity to which it is addressed and may contain
> > confidential and/or privileged material. Any review, retransmission,
> > dissemination or other use of, or taking of any action in reliance upon
> > this information by persons or entities other than the intended recipient
> > is prohibited. If you received this in error, please contact the sender
> and
> > destroy any copies of this information.
> >
> > On 2/24/19 11:08 AM, Nex wrote:
> >
> > > The New York Times just published another investigative report that
> > mentions
> > > DarkMatter at length, with additional testimonies going on the
> > > record:
> >
> > Dear Nex,
> >
> > The New York Times article that you reference does not add anything new
> to
> > the misleading allegations previously published in the Reuters article.
> It
> > simply repeats ad-nauseum a false, and categorically denied, narrative
> > about DarkMatter, under the guise of an investigative reporting on the
> > alleged surveillance practices of governmental authorities of foreign
> > countries.
> >
> > DarkMatter is strictly a commercial company which exists to provide
> > cyber-security and digital transformation services to our customers in
> the
> > United Arab Emirates, and the larger GCC and MENA regions.
> >
> > We have already noted that these misleading allegations about DarkMatter
> > were originally planted by defamatory and false sources - in two (2)
> > articles published on the internet - and are now repeatedly recycled by
> > irresponsible journalists looking for a sensationalist angle on
> > socio-political regional issues.  And we have consistently, and
> > categorically, denied and refuted all of the allegations about
> DarkMatter,
> > including on this forum. [1][2]
> >
> > The fact that New York Times has chosen to recycle these refuted false
> > narratives about DarkMatter, without reaching out to inquire on the real
> > DarkMatter story, is unfortunate.  At times like this - it is important
> to
> > note that not all news reporting is based on factual or true events, and
> is
> > sometimes based on undisclosed bias or in some instances on outright
> > fraudulent reporting.[3][4][5][6][7][8]
> >
> > We continue to push for responsible journalism that is based on truth and
> > verifiable facts.
> >
> > Regards,
> > Benjamin Gabriel
> > General Counsel, DarkMatter Group
> >
> > [1]
> >
> https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/QAj8vTobCAAJ
> > [2]
> >
> 

Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-16 Thread Matthew Hardeman via dev-security-policy
While sending a message that non-compliance could result in policy change
is generally a bad idea, I did notice something about the profile of the
non-compliant certificate which gave me pause:

None of the example certificates which were provided include a SAN
extension at all.

Today, no valid certificate for the WebPKI lacks a SAN extension.  There
should always be at least one SAN dnsName, SAN ipAddress, or, in the case of
S/MIME certificates, a SAN rfc822Name.

I know that Chrome has already fully deprecated non-SAN bearing certs.
Have the other browsers?

I'm wondering whether it's possible or reasonable for the policy to be
updated such that certificates lacking any SAN at all would be out of scope.
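As an aside, the sort of scope test contemplated here is mechanically trivial. The sketch below is purely illustrative (the function name and the input representation are my own, not anything from the BRs or Mozilla policy): given the list of extension OIDs parsed out of a certificate, it checks for the subjectAltName OID defined in RFC 5280.

```python
# Hypothetical sketch: given the extension OIDs parsed from a certificate,
# decide whether it lacks a SAN entirely (and so might fall out of scope
# under the rule contemplated above).
SAN_OID = "2.5.29.17"  # id-ce-subjectAltName (RFC 5280)

def lacks_san(extension_oids):
    """True if the certificate carries no SAN extension at all."""
    return SAN_OID not in extension_oids

# e.g. a document-signing cert with only keyUsage and basicConstraints:
print(lacks_san(["2.5.29.15", "2.5.29.19"]))  # True
print(lacks_san(["2.5.29.15", "2.5.29.17"]))  # False
```

The hard part, of course, is not the check but deciding whether such a carve-out is desirable policy at all.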

On Sat, Mar 16, 2019 at 6:42 PM Matthew Hardeman 
wrote:

> I think answers to the following questions might be helpful:
>
> 1.  What software / types of software are being utilized which would give
> compatibility issues?  What is the validation logic of those applications /
> systems?
>
> 2.  If these certificates don't have a purpose known to or respected by
> the WebPKI, why must they be issued from a trust hierarchy which delegates
> trust within the WebPKI?
>
> 3.  If there are outside systems that want to see these certificates chain
> to existing roots, perhaps a new SubCA could be spun up with an intention,
> from the very start, of being OneCRL listed?  (In other words, special
> agreement from the root programs that this particular subCA is invalid in
> the browsers, but remains unrevoked in the Root CA's CRL?) Obviously this
> would require buy-in from the root programs as well as the CA, but maybe
> it's a compromise that could be worked out?
>
> 4.  What if they got a little innovative?  Is there any chance they could
> require that each of these certificates be issued with a subject including
> an email address which has been validated to the standards the programs
> require?  Then set the client auth and email protection EKUs?  They could
> even provide the email addresses in question temporarily on a domain they
> own, if needed.  Would that still result in compatibility issues?
>
> 5.  Other document signing programs exist and have existed for a long
> time.  It's a bigger thing in Europe, right?  What's unique about this
> situation that causes this non-compliance but that hasn't resulted in
> non-compliance there?
>
> On Sat, Mar 16, 2019 at 6:02 PM Wayne Thayer via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
>> In bug 1523221 [1], GRCA (Government of Taiwan) has responded to a
>> misissuance report by stating that the certificates in question are not
>> intended for serverAuth or emailProtection. However, our policy applies to
>> certificates **capable** of being used for serverAuth or emailProtection,
>> including those that omit an EKU extension. GRCA acknowledges this fact,
>> but has stated that these are document signing certificates, and there are
>> no standardized EKUs for document signing that they could use to constrain
>> these certificates [2] without creating interoperability problems [3].
>>
>> GRCA has now filed an incident report [4 and below] in which they have
>> proposed a timeline for moving away from this practice that has them
>> issuing unconstrained certificates that do not comply with the BRs until
>> the end of 2020. Presumably it would be years longer before these
>> certificates have all expired.
>>
>> I would appreciate everyone's input on this issue and the proposed
>> solution.
>>
>> - Wayne
>>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221
>> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221#c4
>> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221#c7
>> [4] https://bugzilla.mozilla.org/attachment.cgi?id=9051175
>>
>> == Incident Report ==
>> 1. How your CA first became aware of the problem (e.g. via a problem
>> report
>> submitted to your Problem Reporting Mechanism, a discussion in
>> mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
>> the time and date.
>> Taiwan Government PKI has been developed for more than 10 years.
>> Government
>> PKI and various e-Government applications strictly follow RFC 5280 and the
>> original ITU-T X.509 standard for certificate format and Validation
>> method.
>> However, the current use of the extended field of EKU by Browsers on Web
>> PKI is inconsistent with the original definitions in RFC 5280 and ITU-T
>> X.509 (please refer to
>> https://mailarchive.ietf.org/arch/msg/pkix/aYpt23Ea4Ey5nB4kR6QfyND4SPk),
>> and our Government PKI existed before the definition of the so-called Web
>> PKI and the new usage of EKU invented by various Browsers.
>> Taiwan Government joined the restriction of EKU in SSL certificates in
>> 2018, but unfortunately each of the Browsers still forces our Government
>> PKI to add EKUs to Citizen Certificates other than SSL certificates (the
>> number of these certificates exceeds 4 million), even if these Citizens
>> 

Re: GRCA Incident: BR Compliance and Document Signing Certificates

2019-03-16 Thread Matthew Hardeman via dev-security-policy
I think answers to the following questions might be helpful:

1.  What software / types of software are being utilized which would give
compatibility issues?  What is the validation logic of those applications /
systems?

2.  If these certificates don't have a purpose known to or respected by the
WebPKI, why must they be issued from a trust hierarchy which delegates
trust within the WebPKI?

3.  If there are outside systems that want to see these certificates chain
to existing roots, perhaps a new SubCA could be spun up with an intention,
from the very start, of being OneCRL listed?  (In other words, special
agreement from the root programs that this particular subCA is invalid in
the browsers, but remains unrevoked in the Root CA's CRL?) Obviously this
would require buy-in from the root programs as well as the CA, but maybe
it's a compromise that could be worked out?

4.  What if they got a little innovative?  Is there any chance they could
require that each of these certificates be issued with a subject including
an email address which has been validated to the standards the programs
require?  Then set the client auth and email protection EKUs?  They could
even provide the email addresses in question temporarily on a domain they
own, if needed.  Would that still result in compatibility issues?

5.  Other document signing programs exist and have existed for a long
time.  It's a bigger thing in Europe, right?  What's unique about this
situation that causes this non-compliance but that hasn't resulted in
non-compliance there?

On Sat, Mar 16, 2019 at 6:02 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> In bug 1523221 [1], GRCA (Government of Taiwan) has responded to a
> misissuance report by stating that the certificates in question are not
> intended for serverAuth or emailProtection. However, our policy applies to
> certificates **capable** of being used for serverAuth or emailProtection,
> including those that omit an EKU extension. GRCA acknowledges this fact,
> but has stated that these are document signing certificates, and there are
> no standardized EKUs for document signing that they could use to constrain
> these certificates [2] without creating interoperability problems [3].
>
> GRCA has now filed an incident report [4 and below] in which they have
> proposed a timeline for moving away from this practice that has them
> issuing unconstrained certificates that do not comply with the BRs until
> the end of 2020. Presumably it would be years longer before these
> certificates have all expired.
>
> I would appreciate everyone's input on this issue and the proposed
> solution.
>
> - Wayne
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221#c4
> [3] https://bugzilla.mozilla.org/show_bug.cgi?id=1523221#c7
> [4] https://bugzilla.mozilla.org/attachment.cgi?id=9051175
>
> == Incident Report ==
> 1. How your CA first became aware of the problem (e.g. via a problem report
> submitted to your Problem Reporting Mechanism, a discussion in
> mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
> the time and date.
> Taiwan Government PKI has been developed for more than 10 years. Government
> PKI and various e-Government applications strictly follow RFC 5280 and the
> original ITU-T X.509 standard for certificate format and Validation method.
> However, the current use of the extended field of EKU by Browsers on Web
> PKI is inconsistent with the original definitions in RFC 5280 and ITU-T
> X.509 (please refer to
> https://mailarchive.ietf.org/arch/msg/pkix/aYpt23Ea4Ey5nB4kR6QfyND4SPk),
> and our Government PKI existed before the definition of the so-called Web
> PKI and the new usage of EKU invented by various Browsers.
> Taiwan Government joined the restriction of EKU in SSL certificates in
> 2018, but unfortunately each of the Browsers still forces our Government
> PKI to add EKUs to Citizen Certificates other than SSL certificates (the
> number of these certificates exceeds 4 million), even if these Citizens
> Certificates are not considered by Browsers to be SSL/TLS Certificates in
> terms of format and technique.
> And there is currently no international standard defined General-Purposed
> Signing/Encryption EKU OID can be used, and this topic was also discussed
> in the 42nd CABForum at F2F meeting on 2017/10/3 - 2017/10/5 (please refer
> to
>
> https://cabforum.org/2017/10/04/2017-10-04-minutes-face-face-meeting-42-taipei/#Determine-Applicability-of-Certificates-by-using-standard-CABF-CP-OIDs
> ),
> but so far there is no conclusion.
> This issue was again raised in Bugzilla on January 27, 2019 (Bug 1523221),
> and we responded immediately, but Mozilla still required that all other
> non-SSL certificates must add the EKU.
>
> 2. A timeline of the actions your CA took in response. A timeline is a
> date-and-time-stamped sequence of all relevant events. This may include
> 

Re: Open Source CA Software

2019-03-15 Thread Matthew Hardeman via dev-security-policy
I think open source is great, but it's not a panacea.

While there are many CAs and several root programs, this community is a
relatively small one in the grand scheme.

Prior events suggest that there are not enough people with the necessary
skill overlap to parse both the rules and the code to make useful analysis
while also having an interest in doing so.

On Fri, Mar 15, 2019 at 9:59 AM Mike Kushner via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thursday, March 14, 2019 at 11:54:52 PM UTC+1, James Burton wrote:
> > Let's Encrypt CA software 'Boulder' is open source for everyone to browse
> > and check for issues. All other CAs should follow the Let's Encrypt lead
> > and open source their own CA software for everyone to browse and check
> for
> > issues. We might have found the serial number issue sooner.
> >
> > Thank you,
> >
> > Burton
>
> Dude, EJBCA has been open source long enough to be able to legally vote
> and have a driver's license. Literally. But I agree, and we are open source
> for exactly that reason.
>
> I will add though, and stress, that this was not an issue with how EJBCA
> generates serial numbers. EJBCA still produces serial numbers with the max
> entropy for a given serial number length, as configured in number of
> octets. If you set EJBCA to use 20 octets you'll get 159 bits of entropy,
> the max available without breaking the RFC, and it's been that way since
> 2014.
>
> To save people time, by the way, here you go:
>
> https://svn.cesecore.eu/svn/ejbca/trunk/ejbca/modules/cesecore-common/src/org/cesecore/certificates/ca/internal/SernoGeneratorRandom.java
>
> This was not an issue with the source, it was an issue with the end user's
> understanding of what it means to define an SN length as given number of of
> octets, how integer octets are defined in x690, and what entropy that can
> be derived. That is all a documentation failure on our end - we could have
> been more explicit, and we could have reached out more.
>
> There's also the faulty assumption that SN length = entropy. As we've
> seen, many other CA implementations produce SNs with far less entropy than
> their length would allow. I'm not saying that there's anything inherently
> wrong with that, but it illustrates the danger of making assumptions.
>
> As we didn't follow cabf at the time we weren't fully aware of the
> severity of the problem, and assumed that affected parties understood their
> configurations and raised the SN length.


Re: Serial Number Origin Transparency proposal (was Re: A modest proposal for a better BR 7.1)

2019-03-12 Thread Matthew Hardeman via dev-security-policy
Overall I think it's a neat scheme.

It does impose some trade-offs beyond the mechanism that I proposed:

1.  It leaves the implementing CA with no space within the serial number
field to include a CA significant sequence number, timestamp, or other
value.  That may not be a bad thing, but it's other than capabilities that
they have today.

2.  It necessarily requires that the TBS certificate data be available to
the serial number generation routine.  This would seem to lock in some
architectural changes (as the system element producing the serial number
necessarily has to have all the TBSCertificate data), which may not
necessarily be the case today.
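For concreteness, the generation steps of the quoted SNOT proposal below can be sketched roughly as follows. This is a non-authoritative sketch: the `build_tbs` callback, which embeds a given serial into the DER-encoded TBSCertificate, is a hypothetical stand-in for whatever certificate-assembly machinery a real CA has.

```python
import hashlib
import secrets

HEADER = b"\x00\xde\x7e"   # H: sign byte + 0xDE7E ("DE7Erministic")
ALG_SHA256 = b"\x04"       # A: TLS HashAlgorithm registry id for sha256
MAGIC = bytes.fromhex("0102030405060708")  # M: magic placeholder constant

def snot_serial(build_tbs):
    """build_tbs: hypothetical callback taking a 20-byte serial and
    returning the DER-encoded TBSCertificate with that serial embedded."""
    r = secrets.token_bytes(8)                     # R: fresh CSPRNG output
    placeholder = HEADER + ALG_SHA256 + r + MAGIC  # steps 4-5: H||A||R||M
    tbs = build_tbs(placeholder)
    d = hashlib.sha256(tbs).digest()[:8]           # steps 6-7: truncated digest
    return HEADER + ALG_SHA256 + r + d             # final serial: H||A||R||D'
```

Verification reverses the process: restore the magic constant in the last 8 bytes of the serial, re-encode the TBSCertificate, hash it, and compare the truncated digest against those 8 bytes.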

On Tue, Mar 12, 2019 at 12:10 PM Rob Stradling via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi all.  I've been working on an alternative proposal for a serial
> number generation scheme, for which I intend to write an I-D and propose
> to the LAMPS WG.  However, since other folks' proposals are already
> flowing, I will share the gist of mine here.  Comments welcome!
>
> - Serial Number Origin Transparency (SNOT ;-) ): Generation -
> 1. Let H (meaning "Header"; uint24) be: 0x00DE7E.  The 0x00 is the byte
> that makes the ASN.1 INTEGER a positive value.  0xDE7E signifies
> "DE7Erministic".
>
> 2. Let A (meaning "Algorithm"; uint8) be a hash algorithm ID from the
> TLS HashAlgorithm registry
> (
> https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-18
> ).
>
> 3. Let R (meaning "Random"; uint64) be 64-bits of (fresh and
> unfiltered!) output from a CSPRNG.
>
> 4. Let M (meaning "Magic"; uint64) be the magic constant:
>0x0102030405060708
>
> 5. Generate the TBSCertificate template with the serial number value set
> to:
>H || A || R || M
>
> 6. Let D (meaning "Digest") be the thumbprint of the DER-encoded
> TBSCertificate, calculated using the hash algorithm denoted by A.
>e.g., D = SHA-256(DER(TBSCertificate))
>
> 7. Change the serial number value in the TBSCertificate template to:
>H || A || R || TRUNCATE_TO_64BITS(D).
>
> 8. Calculate DER(TBSCertificate), then sign it.
> --
>
> Since this mechanism includes 64-bits of (fresh and unfiltered!) output
> from a CSPRNG, it is compatible with today's BRs.  The randomness also
> ensures that this mechanism doesn't yield multiple certs with the same
> serial number (contrary to RFC5280 §4.1.2.2) if the CA signs the exact
> same TBSCertificate multiple times using a nondeterministic signature
> algorithm.
>
> In terms of preventing certificate forgery (see [1]), which is the thing
> that unpredictable serial numbers are designed to prevent, this
> mechanism gives CAs two chances to not screw up:
>1) if the CA implements this mechanism wrongly but nonetheless does
> successfully include 64-bits of (fresh and unfiltered!) output from a
> CSPRNG, then the desired level of security is still achieved.
>2) or, if the CA correctly implements the deterministic parts of this
> mechanism but mishandles the output from their CSPRNG, then the desired
> level of security is still achieved (although let me stress that this
> would of course not be compliant with today's BRs).
>
> Whilst this mechanism does add complexity for the CA (compared to only
> using a CSPRNG to generate serial numbers), I think that the additional
> operations on the TBSCertificate are less complicated than most CAs have
> already had to deal with to issue CT precertificates and embed SCTs in
> certificates.
>
> *** ADVANTAGES OF THIS MECHANISM ***
> When implemented correctly by the CA, this mechanism enables the
> community to programmatically verify(*) that a certificate is not a
> forgery, without having to...
> 1. trust the CA (to have handled their CSPRNG correctly), or
> 2. trust the CA's WebTrust/ETSI auditors (to have correctly verified
> that the CA has handled their CSPRNG correctly), or
> 3. trust the CSPRNG algorithm to actually be unpredictable
> (Dual_EC_DRBG, anyone?)
>
> This mechanism builds on PHB's earlier proposal [2].
>
>
> [1] https://www.win.tue.nl/hashclash/rogue-ca/
>
> [2] https://cabforum.org/pipermail/public/2016-July/008053.html
>
> (*)
> - Serial Number Origin Transparency (SNOT): Verification -
> 1. Check that the first 3 bytes of the certificate's serial number value
> (including the leading sign byte) are 0x00DE7E.
>
> 2. Check that the certificate's serial number value is exactly 20 bytes
> long (including the leading 0x00 sign byte).
>
> 3. Let A be the 4th byte of the serial number value (considering the
> leading 0x00 sign byte to be the 1st byte).  Check that A denotes a
> supported hash algorithm.
>
> 4. Let D1 be a copy of the last 8 bytes of the certificate's serial number.
>
> 5. Let T be a copy of the DER-encoded TBSCertificate component of the
> certificate.
>
> 6. Change the last 8 bytes of T's serial number to the magic constant:
>

Re: What's the meaning of "non-sequential"? (AW: EJBCA defaulting to 63 bit serial numbers)

2019-03-11 Thread Matthew Hardeman via dev-security-policy
On Mon, Mar 11, 2019 at 12:18 PM Buschart, Rufus via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> I really like reading this discussion about 64 vs. 63 bits and how to read
> the BRGs as it shows a lot of passion by all of us in the PKI community.
> Never the less, in the discussion, I miss one interesting aspect. The BRGs
> not only speak about 64 bits as output from a CSPRNG but also about serial
> numbers being "non-sequential". But nowhere the BRGs define the exact
> meaning of "non-sequential". I always read this as serial numbers being
> totally random, but I know there is at least one CA out there that
> constructs its serial numbers like this
>

I'm glad someone else asked, as no one has enjoyed the question in the form
that I presented it.

But I suggest that if "non-sequential" is taken to mean a guarantee that no
two serial numbers shall be numerically adjacent integer values, then any
serial number containing only what was previously considered to be 64 bits
of entropy (and no other data, save perhaps a leading 0x00 byte to keep the
high-order bit from being 1) must be regarded as carrying less effective
entropy, because the two values adjacent to any previously chosen value are
effectively blocked.

But maybe "non-sequential" doesn't mean that.  It's a pity a concept like
that isn't defined objectively.
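To put a rough number on that concern (my own back-of-the-envelope calculation, not anything from the BRs): the reduction is real in principle, but excluding the two integer neighbours of one prior serial removes only 2 values from a space of 2^64, so the per-certificate entropy loss is astronomically small.

```python
import math

TOTAL = 2**64    # serial space with 64 bits of entropy
BLOCKED = 2      # the two integer neighbours of a single prior serial

# Effective entropy loss in bits from excluding those two neighbours.
# log1p is used so the tiny ratio doesn't vanish in floating point.
loss_bits = -math.log1p(-BLOCKED / TOTAL) / math.log(2)
print(loss_bits)  # on the order of 1e-19 bits per prior certificate
```

Whether such a vanishingly small reduction means the serials no longer contain "64 bits of entropy" in the strict reading is exactly the kind of ambiguity the text above is complaining about.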


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Matthew Hardeman via dev-security-policy
On Fri, Mar 8, 2019 at 9:49 PM Ryan Sleevi  wrote:

> I consider that only a single CA has represented any ambiguity as being
> their explanation as to why the non-compliance existed, and even then,
> clarifications to resolve that ambiguity already existed, had they simply
> been sought.
>

Please contemplate this question, which is intended as rhetorical, in the
most generous and non-judgmental light possible.  Have you contemplated the
possibility that only one CA attempted to do so because you've stated your
interpretation and because they're subject to your judgement and mercy,
rather than because the text as written reflects a single objective
mechanism which matches your own position?


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Matthew Hardeman via dev-security-policy
On Fri, Mar 8, 2019 at 8:52 PM Ryan Sleevi  wrote:

I appreciate the attention to detail, but I find it difficult to feel that
> it is a good faith effort that is designed to produce results consistent
> with the goals that many of this community have and share, and thus don't
> think it would be a particularly valuable thing to continue discussing.
> While there is certainly novelty in the approach, which is not at all
> unfamiliar in the legal profession [3], care should be taken to make sure
> that we are making forward progress, rather than beating dead horses.
>

In the spirit of demonstrating good faith, looking forward, and perhaps
even making a useful contribution), I have started a new thread [1] in
which I propose alternative language which might replace the specification
presently in BR 7.1.  I would appreciate your thoughts on it.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/PDzNNsxhzLU/F0uxY6qmCAAJ



Re: A modest proposal for a better BR 7.1

2019-03-08 Thread Matthew Hardeman via dev-security-policy
On Fri, Mar 8, 2019 at 8:57 PM Peter Gutmann 
wrote:

> Matthew Hardeman via dev-security-policy <
> dev-security-policy@lists.mozilla.org> writes:
>
> >shall be 0x75
>
> Not 0x71?
>

:-)  In truth, I think any particular chosen single value for the first
byte which has the high-order bit set to 0 and is not 0x00, 0x01, or 0x7F
is probably fine.  0x00 is avoided for obvious encoding reasons.  0x01 and
0x7F should be avoided as they seem likely to be the most common values
people would utilize in that position when the goal is avoiding variable
length.  One benefit of choosing a particular fixed value for the entire
first byte is that a CA which hasn't updated its behavior to conform (and
so emits a random first byte, save for the high-order bit fixed to 0) will
be rapidly and obviously revealed, with probability 127/128 per
certificate.


> Sounds good, and saves me having to come up with something (the
> bitsort(CSPRNG64()) nonsense took enough time to type up).  The only thing
> I
> somewhat disagree with is #3, since this is now very concise and requires
> "the
> first 64 bits of output" you can just make it a CSPRNG, which is well-
> understood and presumably available to any CA, since it's a standard
> feature
> of all HSMs.


I don't necessarily have strong opinions about it, but I did consider it
and still came to the conclusion that it should be specified as a symmetric
key generation operation.  My reason for this change arises from my own
experiences in a variety of languages and platforms on various hardware
over the years.  CSPRNG ought to be enough, but sometimes some environments
will spoil a developer with choice.  And if the developer isn't necessarily
a cryptographer, they could easily choose the wrong type or initialize it
incorrectly.  Conversely, through the years various programming languages
and runtime environments have gotten better and better about the default or
most prevalent routines for key generation on those platforms.  It is
therefore my belief that specifying the entropy source as a standardized
symmetric key generation operation improves the odds that a less than
expert developer will accidentally get it right.  I kind of cringe at that
idea, but I still think it deserves a look.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


A modest proposal for a better BR 7.1

2019-03-08 Thread Matthew Hardeman via dev-security-policy
I know this isn't the place to bring a BR ballot, but I'm not presently a
participant there.

I present alternative language along with notes and rationale which, I put
forth, would have resulted in a far better outcome for the ecosystem than
the issues which have arisen from the present BR 7.1 subsequent to ballot
164.

I humbly propose that this would have been a far better starting point, for
reasons I discuss in notes below.

Effective as of Month Day, Year, CAs shall generate certificate serial
numbers as herein specified:


   1. The ASN.1 signed integer encoded form of the certificate serial
   number value must be represented as not less than 9 bytes and not more than
   20 bytes.  [Note 1]
   2. The hexadecimal value of the first byte of the certificate serial
   number shall be 0x75.  [Note 2]
   3. The consecutive 64 bits immediately following the first byte of the
   encoded serial number shall be the first 64 bits of output of an AES-128
   random session key generation operation, said operation having been seeded
   with random data to within its design requirements. [Note 3]
   4. The remaining bytes of the encoded serial number (the 10th through
   20th bytes of the encoded serial number), to the extent any are desired,
   may be populated with any values. [Note 4]
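
The four steps above can be sketched briefly. This is a minimal
illustration, assuming Python's `secrets` module as a stand-in for the
AES-128 session key generation an HSM would actually perform;
`proposed_serial` and `ca_suffix` are my own names, not drawn from any CA
software.

```python
import secrets

def proposed_serial(ca_suffix: bytes = b"") -> bytes:
    """Hypothetical serial per the proposal: fixed 0x75 lead byte,
    then 64 bits of CSPRNG output (standing in for the first 64 bits
    of an AES-128 session key), then up to 11 bytes of CA-chosen data."""
    entropy = secrets.token_bytes(8)             # 64 unpredictable bits
    serial = b"\x75" + entropy + ca_suffix[:11]  # 9 to 20 bytes total
    assert 9 <= len(serial) <= 20
    return serial

serial = proposed_serial()
assert serial[0] == 0x75 and len(serial) == 9
```

Because the fixed lead byte has its high-order bit clear, the value always
encodes as a positive ASN.1 INTEGER with no sign-padding byte.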

Notes / Rationale:

Note 1.  The first bullet point sets out a structure which necessarily
requires that the encoded form of the serial number for all cases be at
least 9 bytes in length.  As many CAs would have been able to immediately
see that their values, while random, don't reach 9 bytes, each CA in that
case would have had an easy hint that further work to assess compliance
with this BR change would be necessary and would definitely result in
changes.  I believe that would have triggered the necessary investigations
and remediations.  To the extent that it did not do so, the CAs which
ignored the change would be quickly identifiable as a certificate with an 8
byte serial number encoding would not have been valid after the effective
date.

Note 2.  A fixed value was chosen for the first byte for a couple of
reasons.  First, by virtue of not having a value of 1 in the highest order
bit, it means that ASN.1 integer encoding issues pertaining to sign are
mooted.  Secondarily, with each certificate issuance subsequent to the
effective date of the proposal, a CA which has not updated their systems to
accommodate this ballot but does use random number generation to populate
the certificate serial has a probability of 127/128 of revealing that they
have not implemented the changes specified in this ballot.

Note 3.  CAs and their software vendors are quite familiar with
cryptographic primitives, cryptographic keys, key generation, etc.  Rather
than using ambiguous definitions of randomness or bits of entropy or output
of a CSPRNG, the administration of a CA and their software vendors should
be able to be relied upon to understand the demands of symmetric key
generation in actual practice.  By choosing to specify a symmetric block
cipher type and key size in common use, the odds of an appropriate
algorithm being selected from among the large body of appropriate
implementations of such algorithms greatly reduces odds of low quality
"random" data for the serial number.

Note 4.  Note 4 makes clear that plenty of space remains for the CA to
populate other information, be it further random data or particular data of
interest to the CA, such as sequence numbers, date/time, etc.

Further notes / rationale:

In supporting systems whose databases may support only a 64 bit serial
number in database storage, etc, it is noteworthy that the serial number
rules I specified here only refer to the encoded form which occurs in the
certificate itself, not any internal representation in an issuance
database.  Because the first byte is hard coded to 0x75 in my proposal,
this doesn't need to be reflected in a legacy system database, it can just
be implemented as part of the certificate encoding process.

Strategically, certificates which would conform to the proposal I've laid
out here would obviously and facially be different from any previously
deployed system which routinely utilized 8 byte encodings, meaning that
every CA previously generating 8 byte serials would have an obvious signal
that they needed to dig into their serial number generation methodologies.

By tying the generation of high quality random data to fill the serial
number to algorithms and procedures already well known to CAs and to their
vendors, auditors, etc, my proposal enhances the odds that the required
amount of random unpredictable bits actually be backed by a mechanism
appropriate for the use of cryptography.

If anyone thinks any of this has merit, by all means run with it.  I
disclaim any proprietary interest (copyright, etc) that I might otherwise
have had by statute and declare that I'm releasing this to the public
domain.

Thanks,

Matt

Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Matthew Hardeman via dev-security-policy
On Friday, March 8, 2019 at 6:05:05 PM UTC-6, Ryan Sleevi wrote:

> You're absolutely correct that two certificates, placed next to eachother,
> could appear sequential. Someone might then make a claim that the CA has
> violated the requirements. The CA can then respond by discussing how they
> actually validate serial numbers, and the whole matter can be dismissed as
> compliant.

Let's set aside certificates for a moment and talk about serial numbers, 
which are defined elsewhere simply as positive integers.

Certificate serial number A (represented as plain unencoded integer):  123456
Certificate serial number B (represented as plain unencoded integer): 123457

Can we agree that those two numbers are factually provable as sequential as 
pertains integer mathematics?

If so, then regardless of when (or in what order) two different certificates 
arise in which those serial numbers feature, as long as they arise as 
certificates issued by the same issuing CA, two certificates with 
definitionally sequential numbers have at that point been issued.

Pursuant to the plain language of 7.1 as written, that circumstance -- 
regardless of how it would occur -- would appear to be a misissuance.

I concur with you fully that a CA (and anyone, really) should view the BRs with 
an adversarial approach to review.

The rule as written requires that the output bits have come from a CSPRNG.  But 
it doesn't say that they have to come from a single invocation of a CSPRNG or 
that they have to be collected as a contiguous bit stream from the CSPRNG with 
no bits of output from the CSPRNG discarded and replaced by further invocation 
of the CSPRNG.  Clearly a technicality, but shouldn't the rules be engineered 
with the assumption that implementers (or their software vendors) might take a 
different interpretation?


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-08 Thread Matthew Hardeman via dev-security-policy
On Fri, Mar 8, 2019 at 3:10 AM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

Having sequential serial numbers is not problematic.  Having *predictable*
> serial numbers is problematic.


My problem with this is that, if we parse the English language constructs
of the rule as stated in the BRs, the first requirement of a certificate
serial number is literally "non-sequential Certificate serial numbers", and
furthermore these must consist of at least 64 bits of output from a CSPRNG.

Both your and Ryan Sleevi's comments seem to suggest that the
non-sequential part doesn't really matter when it arises incidentally as
long as they're randomly generated and that two certificates with
certificate serial numbers off-by-one from each other would not be a
problem.

I am well aware of the reason for the entropy in the certificate serial
number.  What I'm having trouble with is that there can be no dispute that
two certificates with serial numbers off by one from each other, no matter
how you wind up getting there, are in fact sequential serial numbers and
that this would appear to be forbidden explicitly.

It seems that in reality that your perspective calls upon the CA to act
according to the underlying risk that the rule attempts to mitigate rather
than abide the literal text.  That seems a really odd way to construe a
rule.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 9:28 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> The "CS" is "CSPRNG" stands for "cryptographically secure", and "CSPRNG" is
> defined in the BRs.
>

Yes.  There are various levels of qualification and quality for algorithms
and entropy sources bearing that designation and they've changed over the
years.


>
> > It
> > does not specify whether the 64-bits must be comprised of sequential bits
> > of data output by the CSPRNG,
>
> Nor does it need to.
>

Really, why not?  The rule says that 64-bits of output from a CSPRNG must
be utilized.  It does not clearly delineate that one can't be choosy about
which 64 to take.


>
> > nor does it specify that one is not permitted
> > to discard inconvenient values (assuming you seek replacement values from
> > the CSPRNG).
>
> If you generate a 64-bit random value, then discard some values based on
> any
> sort of quality test, the end result is a 64-bit value with
> less-than-64-bits of randomness.  The reduction in randomness depends on
> the
> exact quality function employed.
>

I understand well the reasons that entropy is desired and I understand well
exactly the way, mathematically, that this behavior would reduce total
entropy.  My complaint is that nothing in the rule demands an actual set
minimum amount of true entropy even though that result is clearly what was
really desired.
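
The entropy reduction described above can be made concrete: discarding
candidates that fail a test shrinks the accepted set, and the entropy of
the result is the log2 of what survives. A toy illustration (the "quality
test" here is my own example, not anything from the BRs):

```python
import math

# Toy quality test: discard any 64-bit candidate whose top bit is set,
# e.g. to guarantee an 8-byte DER encoding.  Half the space survives,
# so the nominally 64-bit output carries only 63 bits of entropy.
total_values = 2 ** 64
surviving_values = 2 ** 63               # candidates with top bit clear
entropy_bits = math.log2(surviving_values)
assert entropy_bits == 63.0
```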


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 8:54 PM bif via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> But BRs are not to be interpreted, just to be applied to the letter,
> whether it makes sense or not. When it no longer makes sense, the wording
> can be improved for the future.
>

Indeed.  But following BR 7.1 to the letter apparently doesn't get you all
the way to compliance, by some opinions.  After all, nothing in 7.1
requires anything as to the quality of the underlying CSPRNG utilized.  It
does not specify whether the 64-bits must be comprised of sequential bits
of data output by the CSPRNG, nor does it specify that one is not permitted
to discard inconvenient values (assuming you seek replacement values from
the CSPRNG).

It is therefore my belief that either the BR 7.1 guideline is
wrong/inadequate, or the opinions holding that following BR 7.1 to the
written letter is not quite adequate are wrong.


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 8:29 PM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Past analysis and discussion have shown the interpretation is hardly
> specific to a single CA. It was a problem quite literally publicly
> discussed during the drafting and wording of the ballot. References were
> provided to those discussions. Have you gone and reviewed them? It might be
> helpful to do so, before making false statements that mislead.
>

The actual text of the guideline is quite clear -- in much the same manner
that frosted glass is.

"Effective September 30, 2016, CAs SHALL generate non-sequential
Certificate serial numbers greater than zero (0) containing at least 64
bits of output from a CSPRNG. "  [1]

Irrespective of the discussion underlying the modifications of the BRs to
incorporate this rule, there are numerous respondent CAs of varying
operational vintage, varying size, and varying organizational complexity.

The history underlying a rule should not be necessary to implement and
faithfully obey a rule.  And yet...

Rather than have us theorize as to why non-compliance with this rule seems
to be so widespread, even by a number of organizations which have more
typically adhered to industry best practices, would you be willing to posit
a plausible scenario for why all of this non-compliance has gone on for so
long and by so many across so many certificates?

Additionally, assuming a large CA with millions of issued certificates
using an actual 64-bit random serial number...  Should the CA also do an
exhaustive issued-serial-number search to ensure that the to-be-signed
serial number is not off-by-one in either direction from a previously
issued certificate serial number?  However implausible, if it occurred,
this would indeed result in having participated in the issuance of 2
certificates with sequential serial numbers.

I agree with Peter Gutmann's statement.  Whatever the cause for the final
language in BR 7.1, the language as presently presented is awful and needs
to be fixed in such a manner as will eliminate ambiguity within the rules.
I cannot imagine that would hurt compliance, but I rather suspect it may
improve it.

[1] https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.6.3.pdf


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 8:20 PM Peter Gutmann 
wrote:

> I swear I didn't plan that in advance :-).


I believe you.  When the comedy is this good, it's because it wrote itself.
 :-)


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 8:14 PM Peter Gutmann 
wrote:

>
> As I said above, you can get arbitrarily silly with this.  I'm sure if we
> looked at other CA's code at the insane level of nitpickyness that
> DarkMatter's use of EJBCA has been examined, we'd find reasons why their
> implementations are non-compliant as well.


As if on cue, comes now GoDaddy with its confession.


Re: Pre-Incident Report - GoDaddy Serial Number Entropy

2019-03-07 Thread Matthew Hardeman via dev-security-policy
Practical question:

How does the update to CABLint/Zlint work?

If a CA is choosing to issue certs with serial numbers with exactly 64 bits
of entropy, approximately 50% of the time there will be a certificate with
an 8 byte encoding of the serial number, as the high-order bit of the first
byte will be 0.  Approximately the other 50% of the time, the high-order
bit of the 64 bits of data will be a 1 and the value will therefore be
encoded as 9 bytes, the value of the first byte being 0x00.

As linters work on one document [certificate] at a time, how can the linter
identify with certainty that the 8-byte encoded value represents less than
64 bits of entropy?  Approximately half of the time, a strict 64-bit random
value would encode as a mere 8 bytes.
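
The ambiguity the linter faces can be shown directly. DER encodes an
INTEGER in the minimal number of content octets, prepending 0x00 when the
top bit would otherwise mark the value as negative, so a uniform 64-bit
value lands on 8 or 9 bytes with roughly equal probability. A sketch
(`der_int_length` is my own helper, not actual linter code):

```python
import secrets

def der_int_length(value: int) -> int:
    """Bytes in the minimal DER INTEGER content octets for a positive
    value (a 0x00 pad is prepended when the top bit is set)."""
    body = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body  # keep the encoded integer positive
    return len(body)

assert der_int_length(0x7FFFFFFFFFFFFFFF) == 8   # top bit clear
assert der_int_length(0x8000000000000000) == 9   # top bit set -> padded

# Empirically, about half of uniform 64-bit values need the pad byte.
samples = [int.from_bytes(secrets.token_bytes(8), "big")
           for _ in range(10_000)]
nine_byte = sum(der_int_length(v) == 9 for v in samples)
assert 4_000 < nine_byte < 6_000
```

An 8-byte serial in a single certificate is therefore consistent with both
a compliant 64-bit draw and a non-compliant 63-bit one.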

On Thu, Mar 7, 2019 at 8:01 PM Daymion Reynolds via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> As of 9pm AZ on 3/6/2019 GoDaddy started researching the 64bit certificate
> Serial Number issue. We have identified a significant quantity of
> certificates (> 1.8million) not meeting the 64bit serial number
> requirement. We are still performing accounting so certificate quantity is
> expected to change before we finalize the report.
>
> 1.  How your CA first became aware of the problem (e.g. via a problem
> report submitted to your Problem Reporting Mechanism, a discussion in
> mozilla.dev.security.policy, a Bugzilla bug, or internal self-audit), and
> the time and date.
>
> 9pm 3/6/2019 AZ Time, due to reviewing a discussion in
> mozilla.dev.security.policy.
>
> 2.  A timeline of the actions your CA took in response. A timeline is
> a date-and-time-stamped sequence of all relevant events. This may include
> events before the incident was reported, such as when a particular
> requirement became applicable, or a document changed, or a bug was
> introduced, or an audit was done.
>
> 9pm 3/6/2019 AZ Time, identified a hot issue with serial numbers in
> Mozilla group.
> 10am 3/7/2019 AZ Time, identified the issue was pervasive, and identified
> root cause.
> 6:30pm 3/7/2019 AZ Time, fix deployed to production to correct the serial
> number issue.
> We are still quantifying and classifying the certificate scope of impact.
>
> 3.  Whether your CA has stopped, or has not yet stopped, issuing
> certificates with the problem. A statement that you have will be considered
> a pledge to the community; a statement that you have not requires an
> explanation.
>
> We have deployed a fix to the issue, and are no longer issuing
> certificates with the defect.
>
> 4.  A summary of the problematic certificates. For each problem:
> number of certs, and the date the first and last certs with that problem
> were issued.
>
> Issue was introduced with a change in 2016. Impacted certificates still
> being aggregated. Will update with information and timeline on issue
> closure.
>
> 5.  The complete certificate data for the problematic certificates.
> The recommended way to provide this is to ensure each certificate is logged
> to CT and then list the fingerprints or crt.sh IDs, either in the report or
> as an attached spreadsheet, with one list per distinct problem.
>
> Still being aggregated. Will update with certificate information on issue
> closure.
>
> 6.  Explanation about how and why the mistakes were made or bugs
> introduced, and how they avoided detection until now.
>
> Ambiguity in language led to different interpretations of BR 7.1. It was
> believed a unsigned 64bit integer was sufficient to satisfy the new
> requirement. Additionally, industry tools like CABLint/ZLint were not
> catching this issue, which provided a false sense of compliance. We are
> submitting CABLint/Zlint updates as part of the fix.
>
> 7.  List of steps your CA is taking to resolve the situation and
> ensure such issuance will not be repeated in the future, accompanied with a
> timeline of when your CA expects to accomplish these things.
>
> Defect has been resolved, we are also updating linting tools
> (CABLint/Zlint) and upstreaming to patch for other peoples usage.
>


Re: EJBCA defaulting to 63 bit serial numbers

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 7:47 PM Peter Gutmann via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> 0. Given that the value of 64 bits was pulled out of thin air (or possibly
>less well-lit regions), does it really matter whether it's 63 bits, 64
>bits, 65 3/8th bits, or e^i*pi bits?
>

I was actually joking on Twitter...

Let's say there's a CA that specializes in -- among other things -- special
requests...

What if they typically utilize 71-bits of entropy, encoded with a fixed
high-order bit value of 0, to ensure no extra encoding, and the 7/8 of one
byte + the following 8 bytes are fully populated with 71 bits of entropy as
requested from an appropriate entropy source...

What if a special customer (who may be a degenerate gambler, but isn't
necessarily -- it's merely theorized) insists that they're only going to
accept a "lucky" certificate whose overall serial number decimal value is
any one of the set of any and all prime numbers which may be expressed in
the range of 71-bit unsigned integers?

Can the CA's agent just request the cert, review the to-be-signed
certificate data, and reject and retry until they land on a prime?  Then
issue that certificate?

Does current policy address that? Should it?
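
For what it's worth, the reject-and-retry loop is trivial to implement, and
its entropy cost is quantifiable: primes near 2^71 have density roughly
1/(71 ln 2), about 1 in 49, so restricting to primes discards about
log2(49), roughly 5.6 bits of entropy. A sketch, with Miller-Rabin as my
stand-in for whatever primality test such a CA might use:

```python
import random
import secrets

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin primality test; adequate for an illustration."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def lucky_serial() -> int:
    """Reject and retry until the 71-bit serial value is prime."""
    while True:
        candidate = secrets.randbits(71)
        if is_probable_prime(candidate):
            return candidate
```

Nothing in the loop violates the letter of BR 7.1 as written: every bit of
every accepted value came from a CSPRNG.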


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 5:35 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> In the face of exterior political force, the people of the UAE couldn't get
> *globally trusted* certificates full-stop.  Off the top of my head, all of
> the widely-adopted web PKI trust stores are managed by US organisations.
> One directive from the US government, and a trust anchor is *gone*.  Thus,
> having a trust anchor is not even a *sufficient* condition to produce the
> outcome you're advocating for, let alone a necessary one.
>
> if the UAE government, or its people, wishes to ensure their supply of
> "globally trusted" certificates, they need to start running their own PKI
> trust store.
>

This gets fairly far afield, but it is far more likely that successful
defenses for maintaining the entry on the trust list could be made than for
the issuance of new certificates.

One of these is literally a case of mere publishing and only to software
users.  The other is the act of actually performing a signature (doing real
work specifically for the benefit of the subscriber).  That latter case is
far less protected.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 5:14 PM Matt Palmer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Whilst those are all good points, I don't see how any of them require the
> CA
> to control an unconstrained intermediate CA certificate (or a root
> certificate).  All of those things can be done as a reseller or
> third-party-managed CA.


There's a fundamental difference in gaining membership to a root store like
the Mozilla program.

As I recall, the program intentionally doesn't maintain contractual
relationships with the CAs.

It could be argued under US sanctions laws that the act of working with an
entity and adding their root to the store could in that moment be a
regulated transaction.  However, once it's on the trust list, its
continuation there is not a new service or product being provided to a
sanctioned entity.  At that point, it's merely continued publication of a
curated list, which in the US qualifies as protected speech.

On the other hand, if DarkMatter (or any other foreign entity) signed a
managed SubCA deal with a CA such as Digicert (based in the US), at any
time down the road, the foreign entity might be for whatever reason subject
to US sanctions.  If that happened, any active service or product delivery
performance by Digicert would have to stop.

And so, there is a material difference.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 11:55 AM Wayne Thayer  wrote:

This line of thinking seems to conflate a few different issues.
>

That is true.  I apologize for that, but also feel that some of these
different issues and how they'd play out in relation with this current
matter and ultimately with the inclusion request need to be discussed.


> There are roughly 195 nations in existence today. I would guess that less
> than half have a domestic, publicly-trusted CA. I would agree that we have
> a big problem if websites in any jurisdiction can't obtain trusted
> certificates. The Mozilla manifesto [1] states "We are committed to an
> internet that includes all the peoples of the earth" and "The internet is a
> global public resource that must remain open and accessible". However, I
> don't think that minting 100 new CAs is the best, or even a good way to
> solve the problem.
>

Probably not a good way, but it is likely to be an effective one.


> Many CAs offer robust "reseller" programs that would allow a local company
> to provide certificates to a given region in the local language and
> currency. I acknowledge that this does not address the "exterior political
> force" portion of the concern, but it does address the concern of making it
> easy for website operators in any given country to obtain certificates.
>

Some of my concerns relate particularly to this.  As an example, once upon
a time it was forbidden for US citizens in the general case to engage in
transactions with Cuban individuals or entities (whether a part of Cuban
government or not).  That would effectively bar US-based CAs from
issuing end-entity certificates to those parties.  Today, I don't believe
we immediately have that restriction, but it can happen as it has happened
before.

After the example case I've mentioned elsewhere in this thread,
usareally.com, lost its certificate from Let's Encrypt, the CT Logs suggest
that it turned to GlobalSign (which I don't believe is US based), which
issued and then quickly revoked certificates for the site.  At this time,
the site ultimately secured certificates from WoTrust (I believe a managed
subCA effectively operated by Certum).  It's conceivable that geopolitical
concerns could prevent potential subscribers from getting certificates.


> The very next request in the Mozilla inclusion queue is for the UAE
> government. [2] Denying DarkMatter does not mean that there can't or won't
> be a CA in the UAE.
>

Indeed, which further opens up a question of what the outcome of the
initial question of whether to revoke/OneCRL the DarkMatter intermediates
means in terms of a future where the UAE is permitted a national PKI.  What
if you OneCRL Dark Matter, only to have the UAE National CA decide that
commercial and individual interests in the UAE would be served by having at
least one commercial CA operating in-country and so create a fully
delegated SubCA for DarkMatter?  (I have no insider knowledge at all here -
no reason to suspect things would or could go that way.)  But pre-supposing
the possibility that Mozilla would need to respond to that in some way is
intriguing.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 11:33 AM Wayne Thayer  wrote:

> Nadim and Matthew,
>
> Can you explain and provide examples for how this "set of empirical
> requirements" differs from the objective requirements that currently exist?
>

Hi, Wayne,

I think the matter of whether or not I could or should opine on that
question essentially turns on events for which we don't yet have outcomes.

Specifically, if the decision is made that the current DarkMatter
intermediates do not require revocation and listing in OneCRL or if that
action is taken but without further prejudice to their root inclusion
request (say, for example, by mutual consent of the program and DarkMatter
that it would be acceptable to move forward without prejudice on a new root
hierarchy), then I would believe that objective policy as set out had been
followed and so there would be no delta versus current policy for me to
propose.

If, at the other extreme end, the intermediate is revoked and OneCRL'ed
with prejudice to continuing the root inclusion request AND the stated
cause were rooted principally in a subjective perception of the
organization versus objective data points, I would propose significant
changes.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 11:29 AM James Burton  wrote:

> I'm talking about someone from a restricted country using a undocumented
> domain name to obtain a Let's Encrypt certificate and there is nothing that
> can be done about it. We can't predict the future.
>

So your assertion, then, is that when a domain is outed as owned by an SDN
listed party, the SDN listed party should just acquire a new domain
name?  Giving up their one identifier that there's broad consensus on
having continuing relevancy in the WebPKI?


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 11:11 AM James Burton  wrote:

> Let's be realistic, anyone can obtain a domain validated certificate from
> Let's Encrypt and there is nothing really we can do to prevent this from
> happening. Methods exist.
>

I am continuing to engage in this tangent only in as far as it illustrates
the kinds of geopolitical issues that already taint this space and in as
much as that, I believe has some relevance for the larger conversation.
Now that I've said that, please, by all means, if I'm wrong about the
referenced assertion that I've posted, reach out to the usareally.com
people and help them get a Let's Encrypt certificate.  Good luck with that.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 10:54 AM James Burton  wrote:

> Let's Encrypt issues domain validation certificates and anyone with a
> suitable domain name (e.g. .com, .net, .org  ) can get one of these
> certificates just by proving control over the domain by using the DNS or "
> /.well-known/pki-validation" directory as stated in the CAB Forum baseline
> requirements. Country location doesn't matter.
>

I'm sorry, but that is inaccurate.  There are literally banned
subscribers.  Let's Encrypt has publicly and officially acknowledged
this[1].

[1]
https://community.letsencrypt.org/t/according-to-mcclatchydc-com-lets-encrypt-revoqued-and-banned-usareally-com/81517/10?u=mdhardeman


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 10:20 AM Matthew Hardeman 
wrote:

>
> Let's Encrypt does not quite provide certificates to everyone around the
> world.  They do prevent issuance to and revoke prior certificates for those
> on the United States various SDN (specially designated nationals) lists.
> For example, units of the Iraqi government or those acting at their behest
> may not receive Let's Encrypt certificates.
>

Whoops!  I meant to say the Iranian government.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 4:20 AM James Burton via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> There isn't any monopoly that prevents citizens and organizations in the
> United Arab Emirates to get certificates from CAs and they are not
> expensive. Let's Encrypt provides free domain validated certificates to
> everyone around the world. Next.
>

This is not entirely accurate and the manner in which it is inaccurate may
be material to this discussion.

Let's Encrypt does not quite provide certificates to everyone around the
world.  They do prevent issuance to and revoke prior certificates for those
on the United States various SDN (specially designated nationals) lists.
For example, units of the Iraqi government or those acting at their behest
may not receive Let's Encrypt certificates.

Obviously that is not an issue for the UAE or its people.  At least not
today.  But it always could be that it will be an issue someday.

What the people of the UAE don't have today is the ability to acquire
globally trusted certificates from a business in their own legal
jurisdiction who would be able to provide them with certificates even in
the face of exterior political force.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 10:10 AM Ken Myers (personal capacity) via
dev-security-policy  wrote:

> Is the issue that a Dark Matter business unit may influence the Dark
> Matter Trust Services (a separate unit, but part of the same company) to
> issue certificates for malicious purposes?
>
> or is it a holistic corporate ethics issue (in regards to Mozilla
> community safety) of a Mozilla-trusted service operated within a company
> that sells offensive cyber services?
>

This particular question is one that I'd very much like to see the program
address officially.  I personally reject the "corporate ethics issue" as
inappropriate to this domain, but I don't really get a vote.


Re: DarkMatter Concerns

2019-03-07 Thread Matthew Hardeman via dev-security-policy
On Thu, Mar 7, 2019 at 9:18 AM nadim--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> I would like to repeat my call for establishing a set of empirical
> requirements that take into account the context of DarkMatter's current
> position in the industry as well as their specific request for the
> inclusion of a specific root CA.


I also concur in this to the extent possible.


> While I don't necessarily fully support the method with which Benjamin
> chose to address Ryan's contributions to the discussion so far, I think
> we're all choosing to kid ourselves here if we continue to say that the
> underlying impetus for this discussion isn't primarily sociopolitical. The
> sooner an end is put to this, the better.
>

I concur as to the result: I don't necessarily say that it _is_
"primarily sociopolitical", but rather that there is at least the
appearance of it, and criticism to that effect would be nearly
indefensible.


> The right thing to do, right now, is for there to be a documented process
> through which a set of empirical, falsifiable, achievable requirements are
> set by either Mozilla, the CABForum, or both, for DarkMatter to fulfill so
> that they can be considered for inclusion. If these requirements are (1)
> defined fairly and (2) achieved by DarkMatter verifiably, then great.
> Otherwise, too bad.
>

Indeed the ramifications of a discretionary revocation of the intermediates
or block from joining the root program, if not objectively and cleanly
explained, would likely have a chilling effect on ANY newcomer.  When a
reasonable, documentable, objective path to earning and maintaining trust
in the program exists, investment of time and resources can reasonably
flow.  A new counter-case of an organization that has met all the
requirements and still somehow doesn't meet the bar would be most
discouraging.


Re: DarkMatter Concerns

2019-03-05 Thread Matthew Hardeman via dev-security-policy
On Tue, Mar 5, 2019 at 12:18 PM Ryan Sleevi  wrote:

>
> I believe you may have misunderstood the details of these incidents and
> their relationship to what's currently under discussion.
>
> In the Sectigo + NSO Group, these were entities that shared common
> investment ownership, but otherwise operated as distinct business entities.
> In the Symantec + BlueCoat, these were integrated organizations - and the
> concern was raised about ensuring that the entity BlueCoat did not have
> access to the key material operated by the Symantec entity. In this case,
> the Symantec entity asserted that the keys, operations, and audits were
> under its scope - BlueCoat was prevented from having access or control. [1]
>
> Both of those cases acknowledged a potential of conflicting interests, and
> worked to distinguish how those conflicting interests would not conflict
> with the community needs or goals.
>
> By comparison, the discussion around DarkMatter has been more similar to
> the discussion of Symantec rather than Sectigo, except DarkMatter has
> issued carefully worded statements that may, to some, appear to be denials,
> while to others, suggest rather large interpretative loopholes. This,
> combined with the interpretative issues that have been shown throughout the
> inclusion process - for which the serial numbers are merely the most recent
> incident, but by no means the first, raises concerns that there may be
> interpretative differences in the nature of the statements provided or the
> proposed guarantees. This seems like a reasonable basis of concern. Recall
> when TrustWave provided a similar creative interpretation regarding a MITM
> certificate it issued for purposes of "local" traffic inspection [2][3],
> attempting to claim it was not a BR violation. Or recall that Symantec made
> similar claims that the 30,000+ certificates that it could not demonstrate
> adhered to the BRs were somehow, nevertheless, not "misissued" [4] - as if
> the point of concern was the semantic statement of misissuance, rather than
> the systemic failure of the controls and the resulting lack of assurance.
>

I do acknowledge the difference here, and I appreciate your bringing this
particular concern to my attention.  As always, your depth of knowledge and
experience in the evolution of this area is astounding.

I suppose my initial response to the concern as presented is that it would
seem to be a fairly trivial (just paperwork, really) matter for DarkMatter
(or indeed any other applicant) to separate the CA into a fully separate
legal entity with common ownership interest with any other business they
may currently have going on.  I put forth the question as to whether or not
the assurances you reference and the legal structuring you note are an
actual, effective risk mitigation.

I see two elements in this which might be said to be the real underlying
risk mitigation:

1.  The legal structure and common ownership is truly the safety
mechanism.  I find this...tenuous.  I'm not sure any piece of paper ever
really kept a bad actor from acting badly.  This seems very much like
"Meet the new boss [who's wholly owned by the old boss], same as the old
boss."  In essence, if the trust hangs on nothing more than slight
nuances in first-party assertions, the safeguard is so thin, and its
violation so consequence-free, that I regard it as not really material.

2.  Maybe the real risk mitigation is self-interested asset appreciation /
asset protection.  What I mean by this is that quite simply the ownership
of a hypothetical CA and a hypothetical "bad business" -- however we define
it but construed such that the "bad business" has an apparent conflict in
that they'd like to abuse their owner's CA's trust -- will act to defend
their business interest (in this case the value of each of the business
segments) by preventing one of their business segments from destroying the
continued value of the other segment.  (We can agree, I believe, that a CA
that no one trusts has essentially no value.)

It's pretty clear that I put more faith in a business' "greedy"
self-interest than I do in legal entity paperwork games.  Which, I believe,
raises an intriguing concept.  What if the true test of a CA's
trustworthiness is, in fact, a mutually understandable value build and
value preservation, on a standalone basis, of the asset that is the CA?
In other words, maybe we can only trust a CA whose value proposition we
can reasonably understand from the ownership's perspective, limiting that
value to what can be derived from fully legitimate use of the CA: its
going value on a fully standalone basis, plus its value in the overall
scope of the larger business, constrained to only the legitimate
synergies that may arise.

>

Re: DarkMatter Concerns

2019-03-05 Thread Matthew Hardeman via dev-security-policy
On Tue, Mar 5, 2019 at 11:10 AM Matthew Hardeman 
wrote:

>
> This means there are two recent precedents for which this category of
> issues has not resulted in delegation of trust and one proposal that the
> same category of behaviors should.  I am not suggesting that a position
> against DarkMatter on this basis is an indicator of xenophobia or bias
> against a particular national affiliation, but I do wonder how one would
> defend against such an accusation.
>

Whoops.  What I meant to say is "has not resulted in revocation of the
delegation of trust".


Re: DarkMatter Concerns

2019-03-05 Thread Matthew Hardeman via dev-security-policy
On Tue, Mar 5, 2019 at 8:16 AM Alex Gaynor via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> You're right, there is no test. That's why some of us believe we should
> look at proxies: such as honesty, considering root membership is ultimately
> about trust. DM has made claims that I am unable to understand in any way
> besides lying.
>
>
Unless the lies are material and relate to their CA operations, I don't
think it's relevant.  One has to approach these stories with skepticism.
Bloomberg is regarded as reputable, but look at the SuperMicro case.  If
there are provable commissions of dishonest behavior material to the
operations of the CA, I would think these would have been offered up by now.


> As you are well aware, there is a neighboring claim that _is_ accurate.
> Which is that a malicious root CA would be able to issue for any domain,
> and thus issue certificates to enable MITM. While it is misleading to say
> that DM would be able to decrypt all customer data, it's completely true
> that DM would be able to MITM _any_ TLS traffic -- customer or not!
>
>
And yet many tiny CAs exist, and if we look at the economics of CAs today,
some of them must be struggling.  If this were their [DarkMatter's] intent,
rather than establishing a long term service, wouldn't they just buy up one
of those and delay the disclosure?  If we're assuming that their nefarious
presumptive interception demanding client is the national government of the
UAE, it's clear that there's plenty of cash to do just that.  With that
kind of money, you don't really even need to buy up a tiny CA.  You could
likely just purchase the very integrity of the operators of one.


> Do you believe there is _any_ outside activity a CA could engage in, while
> still maintaing clean audits, that should disqualify them for membership in
> the Mozilla Root Program?
>

Personally, I think the value of the audits is rather limited, but it does
catch some things and remains a good safety.  Certificate Transparency has
done a great deal to improve this space and is, going forward, an even more
valuable check on corruption.

Objections to DarkMatter on the sole basis of the actions of a sibling
business with common owners is dangerous turf to get into, if we care about
historic precedent.  Not only for corporate MITM but for straight-up
malware as well.  Until quite recently the operation presently called
Sectigo was called Comodo and for a not brief period was owned by Francisco
Partners, an organization which also owns/owned the NSO Group.
Additionally, and before Symantec would ultimately be untrusted for
entirely unrelated reasons, Symantec owned BlueCoat.

This means there are two recent precedents for which this category of
issues has not resulted in delegation of trust and one proposal that the
same category of behaviors should.  I am not suggesting that a position
against DarkMatter on this basis is an indicator of xenophobia or bias
against a particular national affiliation, but I do wonder how one would
defend against such an accusation.


Re: DarkMatter Concerns

2019-03-04 Thread Matthew Hardeman via dev-security-policy
My perspective is that of an end user and also that of a software developer
involved in a non-web-browser space in which various devices and
manufacturers generally defer to the Mozilla root program's trust store.
As such, I'm quite certain that my opinions don't -- and should not -- have
the weight that yours or Ryan Sleevi's do.

That said, I do think a discretionary basis for reaching a decision
leaves the door open to criticism of favoritism, etc., all things that
transparency helps avoid.  As such, I don't personally
consider use of discretion in these matters to be in keeping with
transparency.

I agree that there is the technical matter of the serial numbers, but it
seems as though DarkMatter has already signaled a willingness to cut
a new hierarchy that wouldn't have that problem.  By historical precedent,
that would be more than enough remediation.  Discussion also suggests that
there may yet be more CAs with these same serial number issues lurking
the weeds owing to default configuration of the predominant CA software.

While the other two examples get to "no", it is apparent that both of those
cases had a more complicated set of extant issues in the hierarchies which
were being put forth for inclusion/upgrade.

I agree that there's inevitably discretion exercised as to the measure of
sanction or remediation required.  But I can't find any recent cases of
discretion effectively barring an entity from inclusion.

On Mon, Mar 4, 2019 at 10:40 AM Wayne Thayer  wrote:

>
>> I was concerned by the idea that discretionary decisions inherently lack
> transparency, but it sounds like we are agreeing that is not the case. In
> my experience, the approval or denial of a root inclusion request often
> comes down to a subjective decision. Some issues exist that could
> technically disqualify the request (e.g. DarkMatter's serial number
> entropy) and we have to weigh the good, 'meh', and bad of the request to
> come to a decision. Sometimes we say 'no' (e.g. [1], [2]).
>
> - Wayne
>
> [1]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/wCZsVq7AtUY/Uj1aMht9BAAJ
> [2]
> https://groups.google.com/d/msg/mozilla.dev.security.policy/fTeHAGGTBqg/l51Nt5ijAgAJ
>


Re: DarkMatter Concerns

2019-03-04 Thread Matthew Hardeman via dev-security-policy
On Sun, Mar 3, 2019 at 6:13 PM Ryan Sleevi  wrote:

>
> It is not clear how this follows. As my previous messages tried to
> capture, the program is, and has always been, inherently subjective and
> precisely designed to support discretionary decisions. These do not seem to
> inherently conflict with or contradict transparency.
>
> Even setting aside the examples of inclusions - ones which were designed
> to be based on a communal evaluation of risks and benefits - one can look
> at the fact that every violation of the program rules and guidelines has
> not resulted in CAs being immediately removed. Every aspect of the program,
> including the audits, is discretionary in nature.
>
> It would be useful to understand where and how you see the conflict,
> though.
>

I think my disconnect arises from the fact that, for the period of time
in which I've tracked the program and this group, I cannot recall use of
subjective discretion to deny admission to the program.  Any use of a
subjective basis as the lead cause for not including DarkMatter would, in
my admittedly limited time-window of observation in this area, be new
territory.


Re: DarkMatter Concerns

2019-03-03 Thread Matthew Hardeman via dev-security-policy
On Sun, Mar 3, 2019 at 2:17 PM bxward85--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> Insane that this is even being debated. If the floodgates are opened here
> you will NOT be able to get things back under control.
>

While I can appreciate the passion of comments such as this, I think we're
still back at a core problem:

How can you reconcile this position with the actual program rules &
guidelines?  If they're declined on some discretionary basis, you lose the
transparency that's made the Mozilla root program so uniquely valuable.

Other than the relatively minor issues which have already been brought to
light (and presently DarkMatter seems to be contemplating the generation of
a whole new root and issuing hierarchy to address those), where are the
rules violations that would keep them out?


Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-02-28 Thread Matthew Hardeman via dev-security-policy
On Wednesday, February 27, 2019 at 8:54:35 AM UTC-6, Jakob Bohm wrote:

> One hypothetical use would be to secure BGP traffic, as certificates
> with IpAddress SANs are less commonly supported.

The networking / interconnection world has already worked out the trust
hierarchy for the RPKI scheme.  Because the global RIRs are the
authoritative source of ASN and IP space information, they've elected to
serve as the root CAs themselves.  It's an interesting infrastructure.
You can learn more about it here:

https://www.arin.net/resources/rpki/index.html

https://www.arin.net/resources/rpki/index.html
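The RPKI model referenced above can be sketched in miniature: a ROA (Route Origin Authorization) binds an IP prefix to an authorized origin ASN, with a maximum prefix length. The prefix, ASN, and ROA values below are hypothetical, and real validation operates on signed objects fetched from RIR repositories rather than an in-memory list — this is only a rough sketch of the comparison logic.

```python
import ipaddress

# Hypothetical ROA set: (prefix, maxLength, authorized origin ASN).
# Real ROAs are cryptographically signed objects published by the RIRs.
ROAS = [
    (ipaddress.ip_network("199.99.88.0/22"), 24, 64500),
]

def validate_origin(prefix: str, origin_asn: int) -> str:
    """Simplified route-origin validation in the style of RFC 6811."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if announced.subnet_of(roa_prefix):
            covered = True  # some ROA covers this announcement
            if announced.prefixlen <= max_len and origin_asn == asn:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate_origin("199.99.88.0/24", 64500))  # valid
print(validate_origin("199.99.88.0/24", 64511))  # invalid: wrong origin AS
print(validate_origin("203.0.113.0/24", 64500))  # not-found: no covering ROA
```

The three-state outcome (valid / invalid / not-found) is what lets routers treat announcements without any covering ROA differently from ones that contradict a ROA.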


Re: DarkMatter Concerns

2019-02-28 Thread Matthew Hardeman via dev-security-policy
I wanted to take a few moments to say that I believe that Ryan Sleevi's
extensive write-up is one of the most meticulously supported and researched
documents that I've seen discuss this particular aspect of trust delegation
decisions as pertains to the various root programs.  It is an incredible
read that will likely serve as a valuable resource for some time to come.

An aspect in which I especially and wholeheartedly agree is the commentary
on the special nature of the Mozilla Root CA program and its commitment to
transparency of decisions.  This is pretty unique among the trust stores
and I believe it's an extreme value to the community and would love to see
it preserved.

Broadly I agree with most of the substance of the positions Ryan Sleevi
laid out, but do have some thoughts that diverge in some aspects.

Regarding program policy as it now stands, it is not unreasonable to arrive
at a position that the root program would be better positioned to supervise
and sanction DarkMatter as a member Root CA than as a trusted SubCA.  For
starters, as a practical matter, membership in the root program does not
offer DarkMatter a privilege or capability that they do not already have
today.  (Save for, presumably, a license fee payable or already paid to
QuoVadis/Digicert.)  By requiring directly interfacing with Mozilla on any
and all disputes or issues, root program membership would mean Mozilla gets
to make final decisions such as revocation of trust directly against the CA
and can further issue a bulletin putting all other CA program members on
notice that DarkMatter (if untrusted) is not, under any circumstances, to be
issued a SubCA chaining to a trusted root.  The obvious recent precedent in
that matter is StartCom/StartSSL/WoSign.

On the topic of beneficial ownership I am less enthusiastic for several
reasons:

1.  It should not matter who holds the equity and board level control if
the trust is vested in the executive team and the executive team held
accountable by the program.  At that level, the CA just becomes an
investment property of the beneficial owners and the executive and
(hopefully) the ownership are aware that their membership in the root CA
program is a sensitive and fragile asset subject to easy total loss of
value should (increasingly easily detectible) malfeasance occur.  I submit
that this may be imperfect, but I believe it's far more achievable than
meaningful understanding and monitoring of beneficial ownership.

2.  Actually getting a full understanding of the down-to-the-person level
of the beneficial ownership is complex and in some cases might range from
infeasible to impossible.  It's possible for even the senior management to
have less than full transparency to this.

3.  Even if you can achieve a correct and full understanding of beneficial
ownership, it is inherently a point-in-time data set.  There are ways to
transfer off the equity and/or control that may happen in multiple steps or
increments so as to avoid triggering change-of-control reporting, etc.
There are attorneys and accountants who specialize in this stuff and plenty
of legal jurisdictions that actively facilitate it.


On Thu, Feb 28, 2019 at 7:55 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> (Writing in a personal capacity)
>
> I want to preemptively apologize for the length of this message. Despite
> multiple rounds of editing, there's still much to be said, and I'd prefer
> to say it in public, in the spirit of those past discussions, so that they
> can be both referred to and (hopefully) critiqued.
>
> These discussions are no easy matter, as shown from the past conversations
> regarding both TeliaSonera [1] and CNNIC [2][3][4][5]. There have been
> related discussions [6][7], some of which even discuss the UAE [8]. If you
> go through and read those messages, you will find many similar messages,
> and from many similar organizations, as this thread has provoked.
>
> In looking at these older discussions, as well as this thread, common
> themes begin to emerge. These themes highlight fundamental questions about
> what the goals of Mozilla are, and how best to achieve those goals. My hope
> is to explore some of these questions, and their implications, so that we
> can ensure we're not overlooking any consequences that may result from
> particular decisions. Whatever the decision is made - to trust or distrust
> - we should at least make sure we're going in eyes wide open as to what may
> happen.
>
> 1) Objectivity vs Subjectivity
>
> Wayne's initial message calls it out rather explicitly, but you can see it
> similarly in positions from past Mozilla representatives - from Gerv
> Markham, Sid Stamm, Jonathan Nightingale - and current, such as Kathleen
> Wilson. The "it" I'm referring to is the tension between Mozilla's Root
> Program, which provides a number of ideally objective criteria for CAs to
> meet for inclusion, and the policy itself providing significant leeway 

Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-02-28 Thread Matthew Hardeman via dev-security-policy
In addition to the GDPR concerns over WHOIS and RDAP data, reliance upon
these data sources differs crucially from the other domain validation
methods.

Specifically, the WHOIS/RDAP data sources are entirely "off-path" with
respect to how a browser will locate and access a given site.  To my way of
thinking, this renders these mechanisms functionally inferior to an
"on-path" mechanism, such as reliance upon demonstrated change control
over an authoritative DNS record or even demonstrated content change
control over a website.

Since domain validation is, in theory, about validating that the party to
whom a certificate is to be issued has demonstrated control over the
subject of the desired name(s) or the name space of the desired name(s), it
seems clear that "off-path" validation is less valuable as a security
measure.

Although I'm aware that the BRs bless a number of methods, it's also clear
that methods have been excluded by the Mozilla root program before.  Is it
time to consider further winnowing down the accepted methods?
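An "on-path" validation exchange can be sketched roughly as follows, in the spirit of the DNS-based methods in the BRs (e.g. ACME's dns-01): the CA picks a random token and the applicant publishes a digest of it in the same DNS tree browsers will later rely on to reach the site. The record name, domain, and mock DNS store are all illustrative, not any CA's actual protocol.

```python
import hashlib
import secrets

def issue_challenge() -> str:
    # CA-chosen random value with ample entropy
    return secrets.token_urlsafe(32)

def expected_record(token: str) -> str:
    # Digest the applicant must publish, proving control of the zone
    return hashlib.sha256(token.encode()).hexdigest()

# Applicant demonstrates control by placing the digest in the zone itself;
# the check travels the same path a relying browser later will.
mock_dns = {}
token = issue_challenge()
mock_dns["_validation.example.test"] = expected_record(token)

# CA side: look up the record and compare
assert mock_dns["_validation.example.test"] == expected_record(token)
print("domain control demonstrated via authoritative DNS")
```

The contrast with WHOIS/RDAP is that a successful check here requires control of the very DNS path that resolution of the name depends on, rather than control of an out-of-band registry data channel.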

On Thu, Feb 28, 2019 at 5:43 AM Ryan Sleevi via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Thu, Feb 28, 2019 at 6:21 AM Nick Lamb via dev-security-policy <
> dev-security-policy@lists.mozilla.org> wrote:
>
> > On Thu, 28 Feb 2019 05:52:14 +
> > Jeremy Rowley via dev-security-policy
> >  wrote:
> >
> > Hi Jeremy,
> >
> > > 4. The validation agent specified the approval scope as id-addr.arpa
> >
> > I assume this is a typo by you not the agent, for in-addr.arpa ?
> >
> > Meanwhile, and without prejudice to the report itself once made:
> >
> > > 2. The system marked the WHOIS as unavailable for automated parsing
> > > (generally, this happens if we are being throttled or the WHOIS info
> > > is behind a CAPTCHA), which allows a validation agent to manually
> > > upload a WHOIS document
> >
> > This is a potentially large hole in issuance checks based on WHOIS.
> >
> > Operationally the approach taken ("We can't get it to work, press on")
> > makes sense, but if we take a step back there's obvious potential for
> > nasty security surprises like this one.
> >
> > There has to be something we can do here, I will spitball something in
> > a next paragraph just to have something to start with, but to me if it
> > turns out we can't improve on basically "sometimes it doesn't work so
> > we just shrug and move on" we need to start thinking about deprecating
> > this approach altogether. Not just for DigiCert, for everybody.
> >
> > - Spitball: What if the CA/B went to the registries, at least the big
> >   ones, and said we need this, strictly for this defined purpose, give
> >   us either reliable WHOIS, or RDAP, or direct database access or
> >   _something_ we can automate to do these checks ? The nature of CA/B
> >   may mean that it's not appropriate to negotiate paying for this
> >   (pressuring suppliers to all agree to offer members the same rates is
> >   just as much a problem as all agreeing what you'll charge customers)
> >   but it should be able to co-ordinate making sure members get access,
> >   and that it isn't opened up to dubious data resellers that the
> >   registries don't want rifling through their database.
> >
>
> Unfortunately, this is not really viable. The CA/Browser Forum maintains
> relationships with ICANN, as do individual members. While this, on its
> face, seems reasonable, there are practical, business, and legal concerns
> that prevent this from being viable. Further, proposals which would require
> membership in the CA/Browser Forum should, on their face, be rejected - a
> CA should not have to join the Forum in order to be a CA.
>
> I do agree, however, that the use of WHOIS data continues to show
> problematic incidents - whether it's with OCR issues or manual entry - and
> suspect a more meaningful solution is to move away from this model
> entirely. The recently approved methods to the BRs for expressing contact
> information via the DNS directly is one such approach. The GDPR issues
> surrounding WHOIS and RDAP have already led it to be compelling in its own
> right.
>
> Most importantly, you are on the right path of questions, though - which is
> we should examine such incidents systemically and look for improvements,
> and not convince ourselves that the status quo is the best possible
> solution :)


Re: DarkMatter Concerns

2019-02-27 Thread Matthew Hardeman via dev-security-policy
While I was going to respond to the below, Nick Lamb has beaten me to it.
I concur in full with the remarks in that reply.

We should not be picking national favorites as a root program.  There's a
whole world out there which must be supported.

What we should be doing is ensuring that we know the parties involved, have
mechanisms for monitoring their compliance, and have mechanisms for
untrusting parties who misissue.

On Wed, Feb 27, 2019 at 8:30 AM Alex Gaynor  wrote:

> (Writing in my personal capacity)
>
> I don't think this is well reasoned. There's several things going on here.
> First, the United States government's sovereign jurisdiction has nothing to
> do with any of these companies' business relationship with it. All would be
> subject to various administrative and judicial procedures in any event.
> Probably most relevantly, the All Writs Act (see; Apple vs FBI) -- although
> it's not at all clear that it would extend to a court being able to compel
> a CA to misissue. (Before someone jumps in to say "National Security
> Letter", you should probably know that an NSL is an administrative subpoena
> for a few specific pieces of a non-content metadata, not a magic catch all.
> https://www.law.cornell.edu/uscode/text/18/2709). Again, none of which is
> impacted by these company's being government contractors.
>
> Finally, I think there's a point that is very much being stepped around
> here. The United States Government, including its intelligence services,
> operate under the rule of law, it is governed by both domestic and
> international law, and various oversight functions. It is ultimately
> accountable to elected political leadership, who are accountable to a
> democracy. The same cannot be said of the UAE, which is an autocratic
> monarchy. Its intelligence services are not constrained by the rule of law,
> and I think you see this reflected in the targetting of surveillance
> described in the Reuters article: journalists, human rights activists,
> political rivals.
>
> While it can be very tempting to lump all governments, and particularly
> all intelligence services, into one bucket, I think it's important we
> consider the variety of different ways such services can function.
>
> Alex
>


Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-02-27 Thread Matthew Hardeman via dev-security-policy
On Wed, Feb 27, 2019 at 9:04 AM Nick Lamb  wrote:

>
> It does feel as though ARPA should consider adding a CAA record to
> in-addr.arpa and similar hierarchies that don't want certificates,
> denying all CAs, as a defence in depth measure.
>

Unless I significantly misunderstand CAA, this mechanism would not
necessarily be effective.

The normal mode of operation is that the in-addr.arpa zone delegates sub
zones, for example 199.in-addr.arpa to the relevant RIR via an NS record.
Further, the relevant RIR would delegate sub zones of that zone via NS
records to an IP space holder, for example 88.99.199.in-addr.arpa would
have NS records configured on the RIR name servers which would refer to the
authoritative DNS servers serving the IP space holder for 199.99.88.0/24.

As such, superseding CAA records which would allow issuance could be added
back into those hierarchies by the DNS admins of those zones.
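To illustrate why, here is a minimal sketch of the RFC 8659 "relevant RRset" climb, with hypothetical zone data standing in for live DNS: the search stops at the first name that has any CAA records at all, so a CAA set published in a delegated child zone shadows a deny-all record at in-addr.arpa itself.

```python
def relevant_caa(fqdn, zones):
    """Climb from the FQDN toward the root; return the first CAA RRset found.

    `zones` is a hypothetical name -> CAA-records mapping standing in for
    live DNS lookups.
    """
    labels = fqdn.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        if name in zones:
            return zones[name]
    return None  # no CAA anywhere in the chain: any CA may issue


dns = {
    "in-addr.arpa": ['0 issue ";"'],                       # parent forbids all issuance
    "99.199.in-addr.arpa": ['0 issue "example-ca.test"'],  # child zone re-permits
}

# The child's own CAA set is found first, so the parent's deny-all never applies.
assert relevant_caa("88.99.199.in-addr.arpa", dns) == ['0 issue "example-ca.test"']
# A name with no intervening CAA set climbs all the way up to the deny-all.
assert relevant_caa("7.199.in-addr.arpa", dns) == ['0 issue ";"']
```

This is why a defensive CAA record at the apex only helps against zones that never publish their own: any delegated zone's admin can supersede it.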
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-02-26 Thread Matthew Hardeman via dev-security-policy
The issue I see with that interpretation is that the very same matter has
previously been discussed on this list and resolved quite vocally in favor
of the other position: that masking out the high-order bit of the CSPRNG
output makes at least that bit not truly the output of the CSPRNG, but
rather the output of the mask.

Pedantically speaking, I actually favor your analysis.  But that probably
will do you no favors as to public perception at a time point when your
request for inclusion is at a crucial phase.
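As a concrete illustration of the distinction being argued (a minimal Python sketch, not any CA's actual code): clearing the high bit of 8 CSPRNG bytes leaves only 63 bits of CSPRNG output in the serial, while prepending a zero byte to the full 8 bytes preserves all 64 bits and still yields a positive value for the DER INTEGER.

```python
import secrets

def serial_masked_63() -> int:
    """Clear the high bit of 8 CSPRNG bytes: positive, but the masked bit
    is no longer CSPRNG output, leaving effectively 63 bits of entropy."""
    raw = bytearray(secrets.token_bytes(8))
    raw[0] &= 0x7F
    return int.from_bytes(raw, "big")

def serial_full_64() -> int:
    """Prepend a zero byte instead: still a positive DER INTEGER, but all
    64 CSPRNG bits survive.  Loop to satisfy the BR's 'greater than zero'."""
    while True:
        s = int.from_bytes(b"\x00" + secrets.token_bytes(8), "big")
        if s > 0:
            return s

assert serial_masked_63() < 2**63   # always fits in 63 bits
assert 0 < serial_full_64() < 2**64
```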

On Wed, Feb 27, 2019 at 12:56 AM Scott Rea via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> G’day Wayne et al,
>
> I am not sure why members of the group keep making the claim that these
> certificates are misused under the BRs.
> Corey pointed to the following paragraph in Section 7.1 of the BRs as the
> source of the control that DM is accused of not complying with:
>
> “Effective September 30, 2016, CAs SHALL generate non-sequential
> Certificate serial numbers greater than zero (0) containing at least 64
> bits of output from a CSPRNG.”
>
> DarkMatter has responded to show that we have actually followed this
> requirement exactly as it is written. Furthermore, since there seems to be
> a number of folks on the Group that believe more stringent controls are
> needed, DM has agreed to move all its public trust certificates to random
> serialNumbers with double the required entropy following our next change
> control in the coming week.
>
> It is not a requirement of Section 7.1 that serialNumber contain random
> numbers with 64-bit entropy – which appears to be the claim you are making.
> If this was the intention of this section in the BRs then perhaps we can
> propose such a change to the BRs. perhaps something like the following
> could be proposed:
>
> “Effective September 30, 2016, CAs SHALL generate non-sequential
> Certificate serial numbers greater than zero (0) and output from a CSPRNG
> such that the resulting serialNumber contains at least 64 bits of entropy.”
>
> However, once again, I want to reiterate the current practice of DM for
> the public trust certificates that we have generated to date:
> 1. all serial numbers are non-sequential;
> 2. all serial numbers are greater than zero;
> 3. all serial numbers contain at least 64 bits of output from a CSPRNG
>
> As such, all DM certificates that Corey specifically highlighted were
> issued in compliance with the BRs and specifically in compliance with
> Section 7.1 that Corey quoted.
>
> If there is another requirement in the BRs in respect to serial numbers
> where it states that they must contain 64 bits of entropy then can you
> please point this out?
>
>
> Regards,
>
> --
>
> Scott Rea
>
> On 2/26/19, 7:41 PM, "dev-security-policy on behalf of Wayne Thayer via
> dev-security-policy"  wrote:
>
> > I assume you are referring to those certificates containing a serial
> > number with effectively 63 bits of entropy? They are misissued. BR
> > section 4.9.1.1 provides guidance.
>
>
>
>
> Scott Rea | Senior Vice President - Trust Services
> Tel: +971 2 417 1417 | Mob: +971 52 847 5093
> scott@darkmatter.ae
>
> The information transmitted, including attachments, is intended only for
> the person(s) or entity to which it is addressed and may contain
> confidential and/or privileged material. Any review, retransmission,
> dissemination or other use of, or taking of any action in reliance upon
> this information by persons or entities other than the intended recipient
> is prohibited. If you received this in error, please contact the sender and
> destroy any copies of this information.
>
>
>
>
>
>
>
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-02-26 Thread Matthew Hardeman via dev-security-policy
I'd like to take a moment to point out that the beneficial ownership of
businesses of various sorts (including CAs) can, in quite a number of
jurisdictions, be difficult to impossible (short of initiating adverse legal
proceedings) to determine.

What does this mean for Mozilla's trusted root program or any other root
program for that matter?  I submit that it means that one rarely knows with
certainty the nature and extent of ownership and control over a given
business.  This is especially true when you
start divorcing equity interest from right of control.  (Famous example,
Zuckerberg's overall ownership of Facebook is noted at less than 30% of the
company, yet he ultimately has personal control of more than 70% of voting
rights over the company, the end result is that he ultimately can control
the company and its operations in virtually any respect.)

A number of jurisdictions allow for the creation of trusts, etc., for which the
ownership and control information is not made public.  Several of those, in
turn, can each be owners of an otherwise normal looking LLC in an innocuous
jurisdiction elsewhere, each holding say, 10% equity and voting rights.
Say there are 6 of those.  Well, all six of them can ultimately be proxies
for the same hidden partner or entity.  And that partner/entity would
secretly be in full control.  Without insider help, it would be very
difficult to determine who that hidden party is.

Having said all of this, I do have a point relevant to the current case.
Any entity already operating a WebPKI trusted root signed SubCA should be
presumed to have all the access to the professionals and capital needed to
create a new CA operation with cleverly obscured ownership and corporate
governance.  You probably can not "fix" this via any mechanism.

In a sense, that DarkMatter isn't trying to create a new CA out of the
blue, operated and controlled by them or their ultimate ownership but
rather is being transparent about who they are is interesting.

One presumes they would expect to get caught at misissuance.  The record of
noncompliance and misissuance bugs created, investigated, and resolved one
way or another demonstrates quite clearly that over the history of the
program a non-compliant CA has never been more likely to get caught and
dealt with than they are today.

I believe the root programs should require a list of human names with
verifiable identities and corresponding signed declarations of all
management and technical staff with privileged access to keys or ability to
process signing transactions outside the normal flow.  Each of those people
should agree to a life-long ban from trusted CAs should they be shown to
take intentional action to produce certificates which would violate the
rules, lead to MITM, etc.  Those people should get a free pass if they
whistle blow immediately upon being forced, or ideally immediately
beforehand as they hand privilege and control to someone else.

While it is unreasonable to expect to be able to track beneficial
ownership, formal commitments from the entity and the individuals involved
in day to day management and operations would lead to a strong assertion of
accountable individuals whose cooperation would be required in order to
create/provide a bad certificate.  And those individuals could have "skin
in the game" -- the threat of never again being able to work for any CA
that wants to remain in the trusted root programs.

All of Google, Amazon, and Microsoft are in the program.  All of these have
or had significant business with at least the US DOD and have a significant
core of managing executives as well as operations staff and assets in the
United States.  As such, it is beyond dispute that each of these is
subordinate to the laws and demands of the US Government.  Still, none of
these stand accused of using their publicly trusted root CAs to issue
certificates to a nefarious end.  It seems that no one can demonstrate that
DarkMatter has or would either.  If so, no one has provided any evidence of
that here.

It's beyond dispute that Mozilla's trusted root program rules allow for
discretionary exclusion of a participant without cause.  As far as I'm
aware, that hasn't been relied upon as yet.

For technologists and logicians, it should rankle that it might be
necessary to make reliance upon such a provision in order to keep
DarkMatter out.  In my mind, it actually calls into question whether they
should be kept out.

As Digicert's representative has already pointed out, the only BR
compliance matter even suggested at this point is the bits-of-entropy in
serial number issue and others have been given a pass on that.  While I
suppose you could call this exclusionary and sufficient to prevent the
addition, it would normally be possible for them to create new key pairs
and issuance hierarchy and start again with an inclusion request for those,
avoiding that concern in round 2.

I think the 

Re: Possible DigiCert in-addr.arpa Mis-issuance

2019-02-26 Thread Matthew Hardeman via dev-security-policy
Is it even proper to have a SAN dnsName in in-addr.arpa ever?

While in-addr.arpa IS a real DNS hierarchy under the .arpa TLD, it rarely
has anything other than PTR and NS records defined.

Here this was clearly achieved by creating a CNAME record for
69.168.110.79.in-addr.arpa pointed to cynthia.re.

I've never seen any software or documentation anywhere attempting to
utilize a reverse-IP formatted in-addr.arpa address as though it were a
normal host name for resolution.  I wonder whether this isn't a case that
should just be treated as an invalid domain for purposes of SAN dnsName
(like .local).
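For reference, the reverse-IP mapping at issue can be derived with Python's standard library; the address below is the one from the mis-issued certificate.

```python
import ipaddress

# The standard library derives the in-addr.arpa name that the certificate
# in question carried as a SAN dnsName.
name = ipaddress.ip_address("79.110.168.69").reverse_pointer
assert name == "69.168.110.79.in-addr.arpa"
```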

On Tue, Feb 26, 2019 at 1:05 PM Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Thanks Cynthia. We are investigating and will report back shortly.
> 
> From: dev-security-policy 
> on behalf of Cynthia Revström via dev-security-policy <
> dev-security-policy@lists.mozilla.org>
> Sent: Tuesday, February 26, 2019 12:02:20 PM
> To: dev-security-policy@lists.mozilla.org
> Cc: b...@benjojo.co.uk
> Subject: Possible DigiCert in-addr.arpa Mis-issuance
>
> Hello dev.security.policy
>
>
> Apologies if I have made any mistakes in how I post, this is my first
> time posting here. Anyway:
>
>
> I have managed to issue a certificate with a FQDN in the SAN that I do
> not have control of via Digicert.
>
>
> The precert is here: https://crt.sh/?id=1231411316
>
> SHA256: 651B68C520492A44A5E99A1D6C99099573E8B53DEDBC69166F60685863B390D1
>
>
> I have notified Digicert who responded back with a generic response
> followed by the certificate being revoked through OCSP. However I
> believe that this should be wider investigated, since this cert was
> issued by me adding 69.168.110.79.in-addr.arpa to my SAN, a DNS area
> that I do control though reverse DNS.
>
>
> When I verified 5.168.110.79.in-addr.arpa (same subdomain), I noticed
> that the whole of in-addr.arpa became validated on my account, instead
> of just my small section of it (168.110.79.in-addr.arpa at best).
>
>
> To test if digicert had just in fact mis-validated a FQDN, I tested with
> the reverse DNS address of 192.168.1.1, and it worked and Digicert
> issued me a certificate with 1.1.168.192.in-addr.arpa on it.
>
>
> Is there anything else dev.security.policy needs to do with this? This
> seems like a clear case of mis issuance. It's also not clear if
> in-addr.arpa should even be issuable.
>
>
> I would like to take a moment to thank Ben Cartwright-Cox and igloo5
> in pointing out this violation.
>
>
> Regards
>
> Cynthia Revström
>
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-02-25 Thread Matthew Hardeman via dev-security-policy
On Mon, Feb 25, 2019 at 12:15 PM Richard Salz  wrote:

> You miss the point of my question.
>
> What types of certs would they issue that would NOT expect to be trusted
> by the public?
>
>>
>>>
I get the question in principle.  If it is a certificate not intended for
public trust, I suppose I wonder whether or not it's truly in scope for
policy / browser inclusion / etc discussions?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: DarkMatter Concerns

2019-02-25 Thread Matthew Hardeman via dev-security-policy
The answer to the question of what certificates they intend to CT log or
not may be interesting as a point of curiosity, but the in-product CT
logging requirements of certain internet browsers (Chrome, Safari) would
seem to ultimately force them to CT log the certificates that are intended
to be trusted by a broad set of internet browsers.

On Mon, Feb 25, 2019 at 12:01 PM rich.salz--- via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Apart from the concerns others have already raised, I am bothered by the
> wording of one of the Dark Matter commitments, which says that "TLS certs
> intended for public trust" will be logged. What does public trust mean?
> Does it include certificates intended only for use within their country?
> Those intended to be used only on a small, privately-specified, set of
> recipients?
>
> Perhaps a better way to phrase my question is: what certs would DM issue
> that would *not* be subject to their CT logging SOP?
>
> Is there any other trusted root that has made a similar exemption?
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: usareally.com and OFAC lists

2019-01-15 Thread Matthew Hardeman via dev-security-policy
On Mon, Jan 14, 2019 at 5:45 PM Wayne Thayer via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> > Am I wrong to expect US CAs to be monitoring OFAC sanctions lists?
> Otherwise they would risk violating the typical "comply with applicable
> law" stipulation in section 9 of their CPS'.
>

I'm concerned that such a policy interpretation would necessarily imply
that the CA, in the routine course, might need to obtain an assertion as to
the individual / organization requesting or benefitting from the
certificate.  While Let's Encrypt allows account registration to be
attached to an email address, I'm pretty sure they don't require it.

I'm working from the assumption that the CAs in the US are following a
"most-conservative approach" legal guidance in determining whether or not a
given certificate that they might issue or may have issued constitutes a
covered dealing as defined in the various sanctions orders and laws (which
are numerous.)  In the strictest literal sense, a CA is performing a
service (as in performing work upon the request of a party) at the behest
of a certificate requestor / subscriber -- who might in fact be unrelated
to the ownership of the DNS name which is incorporated in the certificate
(provided the requestor can succeed at validation).  This would suggest
that providing service for the wrong requestor might be of more
significance to the sanctions rules than whether or not the target website
mentioned in the certificate is owned or operated by a sanctioned entity.

There's also a reasonable case to be made that providing OCSP status
information over an unrelated third party's certificate, to and/or upon the
request of a sanctioned entity might be construed as providing service to a
sanctioned entity.

I suppose my concern is that generally speaking, dns names aren't
sanctioned.  Entities are.  But in a domain validation environment, is it
reasonable to suggest that CAs pull in lists of entities and then monitor
these for domain names, the provenance of said domain names having not even
been required and/or established in the course of issuance?
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Concerns with Dun & Bradstreet as a QIIS

2018-09-27 Thread Matthew Hardeman via dev-security-policy
A whitelist of QGIS sounds fairly difficult.  And how long would it take to
adopt a new one?

In some states you're going to have an authority per county.  It'd be a big
list.

On Thu, Sep 27, 2018 at 5:35 PM, Ian Carroll via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wednesday, September 26, 2018 at 6:12:22 PM UTC-7, Ryan Sleevi wrote:
> > Thanks for raising this, Ian.
> >
> > The question and concern about QIIS is extremely reasonable. As discussed
> > in past CA/Browser Forum activities, some CAs have extended the
> definition
> > to treat Google Maps as a QIIS (it is not), as well as third-party WHOIS
> > services (they’re not; that’s using a DTP).
> >
> > In the discussions, I proposed a comprehensive set of reforms that would
> > wholly remedy this issue. Given that the objective of OV and EV
> > certificates is nominally to establish a legal identity, and the legal
> > identity is derived from State power of recognition, I proposed that only
> > QGIS be recognized for such information. This wholly resolves differences
> > in interpretation on suitable QIIS.
> >
> > However, to ensure there do not also emerge conflicting understandings of
> > appropriate QGIS - and in particular, since the BRs and EVGs recognize a
> > variety of QGIS’s with variable levels of assurance relative to the
> > information included - I further suggested that the determination of a
> QGIS
> > for a jurisdictional boundary should be maintained as a normative
> whitelist
> > that can be interoperably used and assessed against. If a given
> > jurisdiction is not included within that whitelist, or the QGIS is not on
> > it, it cannot be used. Additions to that whitelist can be maintained by
> the
> > Forum, based on an evaluation of the suitability of that QGIS for
> purpose,
> > and a consensus for adoption.
> >
> > This would significantly reduce the risk, while also further reducing
> > ambiguities that have arisen from some CAs attempting to argue that
> > non-employees of the CA or QGIS, but which act as intermediaries on
> behalf
> > of the CA to the QGIS, are not functionally and formally DTPs and this
> > subject to the assessment requirements of DTPs. This ambiguity is being
> > exploited in ways that can allow a CA to nominally say it checked a QGIS,
> > but is relying on the word of a third-party, and with no assurance of the
> > system security of that third party.
> >
> > Do you think such a proposal would wholly address your concern?
>
> I think I'll always agree with removing intermediaries from the validation
> process. Outside of practical concerns, a whitelist of QGIS entities sounds
> like a good idea.
>
> I would wonder what the replacement for D&B is in the United States. You
> can normally get an address for a company from a QGIS but not (from the
> states I've seen) a phone number for callback verification.
> ___
> dev-security-policy mailing list
> dev-security-policy@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Google Trust Services Root Inclusion Request

2018-09-18 Thread Matthew Hardeman via dev-security-policy
A few thoughts, inlined below...

On Monday, September 17, 2018 at 6:42:29 PM UTC-5, Jake Weisz wrote:
> I guess under this logic, I withdraw my protest. As you say, Google
> could simply start using these certificates, and Mozilla executives
> would force you to accept them regardless of any policy violations in
> order to keep people using Firefox. This whole process appears to
> mostly just be a veneer of legitimacy on a process roughly akin to the
> fair and democratic election of Vladimir Putin. :| As long as Google
> remains legally answerable to no authority and an effective monopoly
> in half a dozen markets, there is roughly no point for Mozilla to
> maintain a CA policy: It should simply use Chrome's trusted store.

Your summation here does not logically follow.  Yes, it's true that with a 
giant installed base of Chrome and the ability to auto-update it, Google can 
more or less arbitrarily insert new trust into their browser with impunity.

Having said that, they have historically not done so.  In fact -- and I think 
this may be changing --  for now, Chrome on most platforms delegates initial 
trust decision to the OS's corresponding trust store.  Chrome on MacOS / Chrome 
on IOS use the native APIs and Apple trust store to determine initial trust, 
then Chrome applies further logic to downgrade trust of certain scenarios 
(Symantec descendant certs, etc.)  Chrome on Windows presently uses the Windows 
APIs and Windows trust store.  It has been suggested that Chrome ultimately 
intends to maintain a formal Chrome trust store, but this is not the case today.

Today this means that to be trusted on Windows, even in Chrome, you have to be 
in the Microsoft root program.  To be trusted on Apple platforms, even in 
Chrome, you have to be in the Apple root program.

To date, no one has caught Chrome trusting things it shouldn't by way of an 
automated update.  If they tried to do that without good explanation, it would 
be easily caught at the level of scale that Chrome is used at.

It is undeniable that the various titans of the internet wield enormous power 
over the software and infrastructure of the internet.  Historically, Google is 
a significant enough contributor to Mozilla financially that it's hard to 
imagine that Mozilla would deny them much even if making Firefox trust 
everything that Chrome trusts didn't become competitively necessary.

Nevertheless, even if Google were totally exempt from the standards for 
inclusion and even if Google didn't act honorably in their inclusions (though 
nothing has suggested this), your argument that Mozilla shouldn't bother with a 
trust store / root program is illogical.  Even if Google got a truly free pass, 
someone still has to police the many others who want to be in the trust program.

> Google's explanation in their announcement seems to confirm my
> statement: That buying roots from GlobalSign is effectively
> backdooring the CA process and making their certificates work in
> products which would not otherwise trust them.

Actually, Google took a bit of heat from the community and the Mozilla root 
program regarding the acquisition of those roots and their transfer to 
Google.  While ultimately no action was taken against Google or GlobalSign as 
a direct result of those transfers, the transfers did evidence holes in the 
program's policies, and further revisions were made and guidance given for any 
future transfers.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: A vision of an entirely different WebPKI of the future...

2018-08-17 Thread Matthew Hardeman via dev-security-policy
On Friday, August 17, 2018 at 2:01:55 AM UTC-5, Peter Gutmann wrote:

> That was actually debated by one country, that whenever anyone bought a domain
> they'd automatically get a certificate for it included.  Makes perfect sense,
> owning the domain is a pretty good proof of ownership of the domain for
> certificate purposes.  It eventually sank under the cost and complexity of
> registrars being allowed to operate CAs that were trusted by browsers [0].

That's very interesting.  I would be curious to know the timing of this.  Was 
this before or after massive deployment of DNSSEC by the registries?

Also, I wish to clarify one tiny point again: I submit that only the Registries 
would be operating CAs and performing signature operations.  Registrars would 
merely interface with the registries.  This is an important and noteworthy 
distinction as there are far fewer Registries than Registrars (and additionally 
the burdens and complexities of operating as a Registry are significantly 
greater than the challenges of running a Registrar).

As to the questions of the complexity of gaining trust by the browsers, I 
assume this question arose because the discussion centered around trying to fit 
such a scheme to the current WebPKI and its assumptions.  I'm inclined to 
believe that if the browsers and the Registries and/or ICANN on their behalf 
wanted to create a secure and trustable mechanism that it could happen.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: A vision of an entirely different WebPKI of the future...

2018-08-17 Thread Matthew Hardeman via dev-security-policy
On Thursday, August 16, 2018 at 6:18:47 PM UTC-5, Jakob Bohm wrote:

> The main cause of this seems to be that CT has allowed much more
> vigorous prosecution of even the smallest mistake.  Your argument
> is a sensationalist attack on an thoroughly honest industry.

I certainly didn't mean it as an attack.  I do agree that CT has allowed for 
greater scrutiny and in turn we find more issues.  Some of those issues are 
insignificant, some are of concern.  I did not mean to in any way imply that 
there is currently a controversy involving malfeasance at a CA.

In fact, my proposal stemmed in equal part from the concern that today's domain 
validation methods are susceptible to problems in network service layers which 
are known to be insecure and where vulnerabilities have been demonstrated.

> That is a viewpoint promoted almost exclusively by a company that has
> way too much power and is the subject of some serious public
> prosecution.  Cow-towing to that mastodont is not buy-in or agreement,
> merely fear.

In this particular aspect, I suspect you and I substantially agree.  I don't 
hold any strong opinion against that particular company, but they certainly can 
bring much weight to any argument they make.  I do see both sides of the 
argument.  I'm on the record in several other threads in this group advocating 
for the value of strong identity in WebPKI certificates and advocating for 
continued inclusion of this information.  My overall position on that has not 
changed and to reiterate clearly, one again, I'm definitely on the other side 
of that argument versus that certain large company.

Having said that, IF and only IF consensus arrived in the other direction -- 
that the only meaningful subject identifiers in WebPKI certificates are the 
covered domain labels -- then I would assert that it makes sense to pursue a 
WebPKI in which the existing authority hierarchy is directly responsible for 
certificates within that hierarchy, in a manner constrained directly to the 
limits of each Registry's authority over the DNS.  This, I believe, would be 
preferable to a multi-party system in which best practice, at best, determines 
issuing authority on the basis of insecure proxies for, and consequences of, 
the authoritative data.
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Matthew Hardeman via dev-security-policy
On Thursday, August 16, 2018 at 3:34:01 PM UTC-5, Paul Wouters wrote:
> Why would people not in the business of being a CA do a better job than
> those currently in the CA business?

I certainly do not assert that there would be no learning curve.  However, 
because these same registries for the generic TLDs are already implementing 
cryptographic signatures and delegations at scale for DNSSEC (including 
signatures over the delegation records to authoritative DNS, as well as 
cryptographic assurances that DNSSEC is not enabled for a given domain), many 
of the operational concerns of a CA are already being undertaken today as part 
of the job routinely performed by the registries.

> If you want a radical change that makes it simpler, start doing TLSA in
> DNSSEC and skip the middle man that issues certs based on DNS records.

The trouble that I see with that scheme is the typical location of DNSSEC 
validation.  DNSSEC, combined with a third-party CA witnessing the 
point-in-time correctness of a DNSSEC-signed validation challenge, allows 
DNSSEC to provide some improvement to issuance-time DNS validation.  However, 
as soon as you take the third-party CA out of the picture, you no longer have 
a "witness" operating in a controlled environment (independent network vantage 
point, proper verification of DNSSEC signature validity, etc.).  Desktop 
clients today don't generally perform DNSSEC validation themselves, relying 
instead upon the resolver that they reference to perform that task.  This 
opens a door for a man in the middle between the desktop and the recursive 
resolver.
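For context, the TLSA association that RFC 6698 describes binds a name to key material roughly as follows. This is a hedged sketch of the common "3 1 1" form (usage DANE-EE, selector SPKI, matching type SHA-256); the SPKI bytes below are a placeholder, not a real key.

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """RFC 6698 TLSA rdata for usage 3 (DANE-EE), selector 1 (SPKI),
    matching type 1 (SHA-256): hex digest of the public key's DER SPKI."""
    return "3 1 1 " + hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes stand in for a certificate's real SubjectPublicKeyInfo.
record = tlsa_3_1_1(b"example-spki-der-bytes")
assert record.startswith("3 1 1 ") and len(record.split()[3]) == 64
```

Publishing such a record (at e.g. `_443._tcp.example.com`) is exactly the point where the client-side DNSSEC validation gap above matters: a stub resolver that merely trusts its upstream resolver's AD bit gains nothing against an on-path attacker.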

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Matthew Hardeman via dev-security-policy
On Thursday, August 16, 2018 at 3:18:38 PM UTC-5, Wayne Thayer wrote:
> What problem(s) are you trying to solve with this concept? If it's
> misissuance as broadly defined, then I'm highly skeptical that Registry
> Operators - the number of which is on the same order of magnitude as CAs
> [1] - would perform better than existing CAs in this regard. You also need
> to consider the fact that ICANN has little authority over ccTLDs.

One issue that would be solved in such a scheme as I've proposed is that only a 
single administrative hierarchy may issue certificates for a given TLD and 
further that that hierarchy is the same as that which has TLD level 
responsibility over domains within that TLD.

Pedantic as it may be, there's virtually no such thing as a misissuance by a 
registry, if only because literally whatever they say about a domain at any 
given moment is "correct" and is the authoritative answer.

A scheme such as I've proposed also eliminates the other layers of failure 
which can yield undesirable issuances today: concerns over BGP hijacks of 
authoritative DNS server IP space are eliminated, other forms of 
authoritative DNS server compromise are eliminated, and concern over 
compromise of a target web server is eliminated.

In the scheme I propose, the registry is signing only upon orders from the 
registrar responsible for the given domain within the TLD and the registrar 
gives such orders only upon authenticated requests that are authenticated at 
least to the same level of assurance as would be required to alter the 
authoritative DNS delegations for the domain.  (Consequently, that level of 
access today is certainly sufficient to achieve issuance from any CA that 
issues automatically upon validation against DNS records.)

I concede that ICANN would have no means to impose this upon the CC TLDs, 
leaving a gap to be figured out.

I recognize that this is a maverick idea, nearly completely divorced from the 
current WebPKI's structure.  Having said that, I do think it aligns the 
capability to issue a certificate to the administrative structures which 
already determine the very definition of what is meant by a given dnsName.  In 
addition, it reduces many diverse attack surface areas down to a single one 
(account takeover / infrastructure takeover of registrar/registry) that is 
already in the overall threat model.



A vision of an entirely different WebPKI of the future...

2018-08-16 Thread Matthew Hardeman via dev-security-policy
Of late, there seems to be an ever increasing number of misissuances of various 
forms arising.

Despite certificate transparency, increased use of linters, etc, it's virtually 
impossible to find any CA issuing in volume that hasn't committed some issuance 
sin.

Simultaneously, there seems to be an increasing level of buy-in that the only 
useful identifying element(s) in a WebPKI certificate today are the domain 
labels covered by the certificate and that these certificates should be issued 
only upon demonstrated control of the included domain labels.

DNS is already authoritatively hierarchical.

ICANN has pretty broad authority to impose requirements upon the gTLDs...

What if the various user agents' root programs all lobbied ICANN to impose a 
new technical requirement upon TLD REGISTRY operators?

Specifically, that all registry operators would:

1.  Run one or more root CAs (presumably a Root CA per TLD under their 
management), to be included in the user agents' root program trust stores, such 
that each of these certificates is technically constrained to the included 
gTLD(s) contractually managed by that registry operator.

- and further -

2.  That all such registries be required to make available to the registrars an 
automated interface for request of certificates signed by said CA (or a 
registry held and controlled issuing CA descendent of said root) over domain 
labels within that TLD on behalf of the customer holding a domain.  (For 
example, Google Domains has an interface by which it can request a certificate 
to be created and signed over a customer provided public key for requested 
labels within that registrar's customer's account.)

- and further -

3.  The registrars be required to provide appropriate interfaces to their 
customers (just as they do for DNSSEC DS records today) to have the registry 
issue certificates over those domains they hold.

If you wanted to spice it up, you could even require that the domain holder be 
able to request a signature over a technically constrained SubCA.  Then the 
domain holders can do whatever they like with their domains' certificates.

Build validation of technical requirements (like no 5-year EE certs) into the 
product and enforce them at the product level.
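For illustration, the product-level check on a registry root technically 
constrained to one TLD could look like this: a simplified sketch of RFC 5280 
dNSName constraint matching (real implementations also handle wildcards, IP 
names, and excluded subtrees; the domain names below are examples only):

```python
def satisfies_dns_constraint(dns_name: str, permitted_suffix: str) -> bool:
    """Simplified RFC 5280 dNSName constraint check: a name satisfies the
    constraint if adding zero or more labels to its left yields the name."""
    name = dns_name.lower().rstrip(".")
    suffix = permitted_suffix.lower().strip(".")
    return name == suffix or name.endswith("." + suffix)

# A root constrained to the .pl registry could vouch for any .pl domain...
assert satisfies_dns_constraint("lebihan.pl", "pl")
assert satisfies_dns_constraint("www.nazwa.pl", "pl")

# ...while a leaf outside that TLD would be rejected by the client,
# including lookalike names that merely contain the TLD string.
assert not satisfies_dns_constraint("example.com", "pl")
assert not satisfies_dns_constraint("evil-pl.com", "pl")
```

The point of enforcing this in the product is that a registry root simply 
cannot mint a trusted certificate outside its own TLD, no matter what it signs.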

If the WebPKI is truly to be reduced to identifying specific domain labels in 
certificates issued only for those demonstrating control over those labels, why 
do we really need a marketplace where multiple entities can provide those 
certificates?

The combination of registrar and registry already have complete trust in these 
matters because those actors can hijack control of their domains in an instant 
and properly ask any CA to issue.  That can happen today.

What this would improve, however, is that there's one and only one place to get 
a certificate for your example.com.  From the registrar for that domain, with 
the signing request authenticated by the registrar as being for the customer of 
the registrar who holds that domain and then further delegated for signature by 
the registry itself.

Such a mechanism could even be incrementally rolled out in parallel to the 
current scheme.  Over time, modern TLS endpoints meant to be accessed by 
browsers would migrate to certificates descending from these registry-held and 
registry-managed roots.

From a practicality perspective, I don't see why this couldn't happen, should 
enough lobbying of ICANN be provided.  Today, ICANN already imposes certain 
technical requirements upon both Registries and Registrars as well as 
constraints upon their interactions with each other.  As a not entirely 
unrelated example -- this one involving cryptography and key management -- 
today registries of generic TLDs are required to implement DNSSEC.

I recognize it's a radical departure from what is.  I'm interested in 
understanding if anything proposed here is impossible.  If what's proposed here 
CAN happen, AND IF we are confident that valid certificates for a domain label 
should unambiguously align to domain control, isn't this the ultimate solution?

Thanks,

Matt Hardeman



Further BGP hijacks of high value authoritative DNS servers' IP space.

2018-08-03 Thread Matthew Hardeman via dev-security-policy
Noted by the Oracle/Dyn team at: 
https://blogs.oracle.com/internetintelligence/bgp-dns-hijacks-target-payment-systems

July 2018 saw multiple attacks on authoritative DNS infrastructure of both 
dedicated DNS service providers and of certain high value internally 
administered DNS services which answer authoritatively for multiple of the 
major (primarily US based) credit card processing networks.

While the scope of the advertisements was somewhat contained, the attackers 
still managed to get 30% of peers at some of the BGP listening points where Dyn 
has visibility to accept these more-specific routes.
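The mechanics here are ordinary longest-prefix-match routing: a hijacker's 
more-specific announcement wins at any router that accepts it, regardless of 
who legitimately originates the covering prefix.  A minimal sketch using 
Python's stdlib `ipaddress` (the prefixes are documentation ranges, not the 
real First Data address space):

```python
import ipaddress

# A routing table: the victim announces a covering /16, the hijacker
# injects a more-specific /24 inside it.
routes = {
    ipaddress.ip_network("198.51.0.0/16"): "legitimate origin AS",
    ipaddress.ip_network("198.51.100.0/24"): "hijacker's more-specific",
}

def best_route(addr_text: str) -> str:
    """Longest-prefix match: the most specific covering route wins."""
    addr = ipaddress.ip_address(addr_text)
    covering = [net for net in routes if addr in net]
    return routes[max(covering, key=lambda net: net.prefixlen)]

# Traffic for anything inside the hijacked /24 follows the attacker...
assert best_route("198.51.100.7") == "hijacker's more-specific"
# ...while the rest of the /16 still reaches the legitimate origin.
assert best_route("198.51.7.7") == "legitimate origin AS"
```

This is why even a partially propagated more-specific is dangerous: every 
router that accepts it, however few, deterministically prefers it.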

In the case of First Data, the specific networks which answer authoritatively 
for First Data's Datawire network were among the particular (and obviously 
intentionally) selected targets.

While the Dyn article does not mention this, the casual outsider might 
recognize First Data as a major player in the credit card payments space, but 
Datawire and the datawire.net domain (First Data services for transmission of 
payment batch settlement data and secure file exchange for things like the BIN 
Master File, etc.) are not well known.

This suggests that one or more parties quite familiar with the payment networks 
and the crucial infrastructure of the payment networks (and so, in turn, would 
be well familiar with the fact that these mostly rely upon TLS encryption) is 
attempting to subvert the authoritative DNS for some cause.

I believe it's not a great leap to suggest that they may seek certificate 
issuance.

Just thought I'd ping the list for thoughts...

Matt Hardeman


Re: Possible violation of CAA by nazwa.pl

2018-07-26 Thread Matthew Hardeman via dev-security-policy
On Thu, Jul 26, 2018 at 2:23 PM, Tom Delmas via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> > The party actually running the authoritative DNS servers is in control
> of the domain.
>
> I'm not sure I agree. They can control the domain, but they are supposed
> to be subordinate of the domain owner. If they did something without the
> owner consent/approval, it really looks like a domain hijacking.


But the agreement under which they're supposed to be subordinate to the
domain owner is a private matter between the domain owner and the party
managing the authoritative DNS.  Even if this were domain hijacking, a
certificate issued that relied upon a proper domain validation method is
still proper issuance, technically.  Once this comes to light, there may be
grounds for the proper owner to get the certificate revoked, but the
initial issuance was proper as long as the validation was properly
performed.


>
>
> > I'm not suggesting that the CA did anything untoward in issuing this
> > certificate.  I am not suggesting that at all.
>
> My opinion is that if the CA was aware that the owner didn't ask/consent
> to that issuance, If it's not a misissuance according to the BRs, it should
> be.


Others can weigh in, but I'm fairly certain that it is not misissuance
according to the BRs.  Furthermore, with respect to issuance via domain
validation, there's an intentional focus on demonstrated control rather
than ownership, as ownership is a concept which can't really be securely
validated in an automated fashion.  As such, I suspect it's unlikely that
the industry or browsers would accept such a change.


>


Re: Possible violation of CAA by nazwa.pl

2018-07-26 Thread Matthew Hardeman via dev-security-policy
I think the whole point of domain validation certificates is taking the
human part out of it and verifying technical control of the domain as the
standard upon which to base issuance.

Since the CA is also the DNS server, it's more or less a given that they
certainly can or would successfully validate.  It's noteworthy that domain
validation is about demonstrating control rather than ownership.  The party
actually running the authoritative DNS servers is in control of the domain.

I'm not suggesting that the CA did anything untoward in issuing this
certificate.  I am not suggesting that at all.

I am, however, suggesting that even if they admitted to just creating a new
certificate for the domain without contacting the owner, I think that
wouldn't technically be a misissuance, right?


On Thu, Jul 26, 2018 at 10:40 AM, Tom via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Wednesday, 25 July 2018 21:08:59 UTC, michel.le...@gmail.com  wrote:
> > Hello,
> >
> > My domain registrar who is also a certificate authority just issued a
> > precertificate (visible in CT logs) and a valid
> > certificate for my domain. This is part of their new offer to
> automatically offer free certificates for all of their domains:
> > https://www.nazwa.pl/certyfikaty-ssl/
> >
> > I had a CAA record that only allowed letsencrypt.org to issue
> > certificates for my domain:
> > `lebihan.pl. 3600 IN CAA 0 issue "letsencrypt.org"`
> >
> >
> > I think my domain registrar just violated my CAA by issuing that
> > certificate. Were they allowed to issue this certificate?
>
>
> Can you clarify if _you_ initiated the certificate request; or if the
> certificate was created and signed without any action from you?
>
> I think those are two very different cases. If you initiated it, they
> didn't check CAA (because they weren't required to). If you didn't... isn't
> that a rogue issuance?
>
> -tom


Re: Possible violation of CAA by nazwa.pl

2018-07-25 Thread Matthew Hardeman via dev-security-policy
Yes, I thought there was an exemption for that also.

The A-DNS operator could always just momentarily change the records to
authorize anyway, so why bother with the check?
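For what it's worth, the check being waived is cheap.  A simplified sketch of 
the CAA "issue" tag logic (this deliberately ignores tree-climbing to parent 
domains, the critical flag, and issuewild, so it is an illustration rather 
than the full RFC algorithm):

```python
def caa_permits_issuance(issue_values, ca_identifier):
    """Return True if the CA named by ca_identifier may issue, given the
    values of the domain's CAA "issue" records (empty list = no records)."""
    if not issue_values:
        return True  # no CAA records published: any CA may issue
    # The issuer domain is everything before any ";parameter" suffix.
    allowed = {v.split(";", 1)[0].strip().lower() for v in issue_values}
    allowed.discard("")  # a bare ";" value authorizes no CA at all
    return ca_identifier.lower() in allowed

# The record from this thread permits only Let's Encrypt...
assert caa_permits_issuance(['letsencrypt.org'], 'letsencrypt.org')
assert not caa_permits_issuance(['letsencrypt.org'], 'certum.pl')
# ...while publishing no CAA records at all permits any CA.
assert caa_permits_issuance([], 'certum.pl')
```

Of course, the exemption argument stands: the operator of the authoritative 
zone can rewrite the inputs to this check at will.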

On Wed, Jul 25, 2018 at 4:21 PM, Quirin Scheitle via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi Michel,
>
> > On 23. Jul 2018, at 22:36, michel.lebihan2000--- via dev-security-policy
>  wrote:
> >
> > I think my domain registrar just violated my CAA by issuing that
> > certificate. Where they allowed to issue this certificate?
>
> the name servers for lebihan.pl are ns[1-3].nazwa.pl. , which indicates
> that your hoster (nazwa.pl) also operates your name servers.
>
> The certificate is issued by nazwaSSL, which links to Certum’s roots.
>
> Checking against current version 1.6.0 of BRs, Sec 3.2.2.8 reads:
>
> "CAA checking is optional if the CA or an Affiliate of the CA is the DNS
> Operator (as defined in RFC 7719) of the domain's DNS.”
>
> So, if am not mistaken at some step, this is probably OK per current CAB
> BRs.
>
> Kind regards
> Quirin


Re: Namecheap refused to revoke certificate despite domain owner changed

2018-06-01 Thread Matthew Hardeman via dev-security-policy
On Fri, Jun 1, 2018 at 2:38 PM, Jeremy Rowley via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> This is one of the reasons I think we should require an OID specifying the
> validation method be included in the cert. Then you can require the CA
> support revocation using the same validation process as was used to confirm
> certificate authorization. With each cert logged in CT, everyone in the
> world will know exactly how to revoke an unauthorized or no-longer-wanted
> cert.
>
>
I agree that it would be forensically interesting to have that data
available in the certificate.  I question whether a policy of using only
the same method of demonstrating control anew is appropriate as a policy
for granting revocation.

There is a hierarchy of supremacy in domain validation.  The party
controlling the NS delegations from the registry has absolute precedence
over the present effective DNS server administrator, should they choose to
flex it.  The party immediately in effective control of the authoritative
DNS takes precedence over a website admin within the domain.

Consider that now current CAA records and policy (for good cause, even)
might presently prohibit successful validation via the method previously
utilized to acquire the certificate that the current domain holder wishes
to have revoked.  (Even if only by specifying a new CA, rather than the CA
that previously issued the certificate for which revocation is being
sought.)  Would you then advocate that, if the validation can succeed save 
for the CAA mismatch, this be regarded as sufficient evidence to revoke?  That 
probably deserves some careful thought.

In any event, proof of ability to modify the authoritative DNS over each
label in the certificate should almost certainly suffice to revoke a
previously issued certificate that relied exclusively upon just about any
other sort of domain validation.
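As a concrete example of that level of proof, the ACME dns-01 challenge 
(RFC 8555 §8.4) already demonstrates control by having the requester publish a 
derived TXT record under each label; a revocation flow could plausibly reuse 
the same mechanism.  A minimal sketch of the TXT value computation (the token 
and thumbprint below are illustrative placeholders, not values from any real 
account):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """TXT value for _acme-challenge.<domain>: the unpadded base64url
    SHA-256 digest of the key authorization (RFC 8555 section 8.4)."""
    key_authorization = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

value = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                        "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")

# A 32-byte digest base64url-encodes to 43 characters once padding is stripped.
assert len(value) == 43
assert "=" not in value
```

A CA could require one such record per label in the certificate before 
honoring a revocation request, which matches the supremacy argument above.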


Re: Disallowed company name

2018-06-01 Thread Matthew Hardeman via dev-security-policy
On Thu, May 31, 2018 at 8:38 PM, Peter Gutmann 
wrote:

>
> >Banks, trade vendors, etc, tend to reject accounts with names like this.
>
> Do they?
>
> https://www.flickr.com/photos/nzphoto/6038112443/


I would hope that we could agree that there is generally a different risk
management burden in getting a store loyalty tracking card versus getting a
loan or even opening a business demand deposit account.


Re: Disallowed company name

2018-06-01 Thread Matthew Hardeman via dev-security-policy
On Fri, Jun 1, 2018 at 10:28 AM, Ryan Hurst via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

>
> re: Most of the government offices responsible for approving entity
> creation are concerned first and foremost with ensuring that a unique name
> within their jurisdiction is chosen
>
> What makes you say that, most jurisdictions have no such requirement.
>
>
This was anecdotal, based on my own experience with formation of various
limited liability entities in several US states.

Even my own state of Alabama, for example, (typically regarded as pretty
backwards) has strong policies and procedures in place for this.

In Alabama, formation of a limited liability entity, whether a Corporation or 
LLC, etc., begins with a filing in the relevant county probate court of 
Articles of Incorporation, Articles of Organization, trust formation 
documents, or similar.
document types, a name reservation certificate (which will be validated by
the probate court) from the Alabama Secretary of State will be required.
The filer must obtain those directly from the appropriate office of the
Alabama Secretary of State.  (It can be done online, with a credit card.
The system enforces entity name uniqueness.)

