Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-20 Thread jacob.hoffmanandrews--- via dev-security-policy
> It's been an on-going question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application then you need this (or, you
> need to manage your own CA, and ask every person to install a
> certificate on all their devices)

I think it's both safe and reasonable to talk to localhost over HTTP rather 
than HTTPS, because any party that can intercept communications to localhost 
presumably has nearly full control of your machine anyhow.

There's the question of mixed content blocking: If you have an HTTPS host URL, 
can you embed or otherwise communicate with a local HTTP URL? AFAICT both 
Chrome and Firefox will allow that: 
https://chromium.googlesource.com/chromium/src.git/+/130ee686fa00b617bfc001ceb3bb49782da2cb4e
 and https://bugzilla.mozilla.org/show_bug.cgi?id=903966. I haven't checked 
other browsers. Note that you have to use "127.0.0.1" rather than "localhost." 
See https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-03 for 
why.
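To make the loopback point concrete, here is a minimal sketch (not any vendor's actual implementation) of a local helper binding its plain-HTTP listener to 127.0.0.1 only, so the socket is unreachable from other hosts:

```python
import socket

# Sketch of a loopback-only listener: binding to 127.0.0.1 (not 0.0.0.0)
# means no off-machine peer can connect, so plain HTTP adds little risk.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick any free port
sock.listen()
host, port = sock.getsockname()
# An HTTPS page would then fetch e.g. http://127.0.0.1:<port>/status
```

The explicit "127.0.0.1" string matters on both ends: the server must not bind the wildcard address, and (per the draft above) the page should address the service by IP literal rather than by the name "localhost".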

So I think the answer to your underlying question is: Use HTTP on localhost 
instead of a certificate with a publicly resolvable name and a compromised 
private key. The latter is actually very risky because a MitM attacker can 
change the resolution of the public name to something other than 127.0.0.1, and 
because the private key is compromised, the attacker can also successfully 
complete a TLS handshake with a valid certificate. So the technique under 
discussion here actually makes web<->local communications less secure, not more.

Also, as a reminder: make sure that the code operating on localhost carefully 
restricts which web origins are allowed to talk to it, for instance by using 
CORS with the Access-Control-Allow-Origin header: 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS.
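A rough sketch of such origin restriction (the allow-listed origin is a hypothetical example, not any real product's configuration):

```python
import http.server

# Hypothetical allow-list (assumption for illustration): only this web
# origin may drive the local service.
ALLOWED_ORIGINS = {"https://open.example.com"}

class LocalServiceHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        origin = self.headers.get("Origin", "")
        if origin not in ALLOWED_ORIGINS:
            # Refuse requests initiated by unknown web origins.
            self.send_response(403)
            self.end_headers()
            return
        body = b'{"status": "ok"}'
        self.send_response(200)
        # Echo back only the vetted origin; never use "*" here.
        self.send_header("Access-Control-Allow-Origin", origin)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass
```

Note that the Origin header only restrains browsers: a local process can forge it, so this limits which web pages can drive the service, not which local programs can.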
___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-20 Thread Matthew Hardeman via dev-security-policy
On Tuesday, June 20, 2017 at 2:15:57 PM UTC-5, annie nguyen wrote:

> Dropbox, GitHub, Spotify and Discord (among others) have done the same
> thing for years: they embed SSL certificates and private keys into their
> applications so that, for example, open.spotify.com can talk to a local
> instance of Spotify (which must be served over https because
> open.spotify.com is also delivered over https).
> 
> This has happened for years, and these applications have certificates
> issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
> keys are trivially retrievable, since they're embedded in publicly
> distributed binaries.
> 

Really?!?  This is ridiculous.


> What I want to know is: how does this differ to Cisco's situation? Why
> was Cisco's key revoked and considered compromised, but these have been
> known about and deemed acceptable for years - what makes the situation
> different?

That situation is not different from the Cisco situation and should yield the 
same result.

> It's been an on-going question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application then you need this (or, you
> need to manage your own CA, and ask every person to install a
> certificate on all their devices)

There are numerous security reasons for this, which several other people here 
are better placed than I am to illuminate.  I'm a software developer myself 
(particularly, I am in the real-time communications space).  I am not naive to 
the many great uses of WebSockets and similar.  I have to admit, however, that 
never once have I ever considered having a piece of software that I have 
written running in the background with an open server socket for purposes of 
waiting on any old caller from localhost to interact with it.

The major browsers already consider localhost to be a secure context 
automatically, even without https.  In this case, however, they don't seem to 
follow that.  I have a theory as to why:

Maybe they think that it is ridiculous that an arbitrary website "need" to 
interact with locally installed software via WebSocket (or any other manner, 
short of those which require a great deal of explicit end-user interaction).  
It is not beyond imagination that they may even regard the mere fact that 
people believe that they "need" such interaction to be ridiculous.

Perhaps they've stopped to think, "Well, that would only work if our software 
or some part of our software is running on the visitor's system all the time."  
That kind of thing, in turn, encourages developers to write auto-start software 
that runs in the background from system startup and just sits there waiting for 
the user to load your website.  That wastes system resources (and probably 
an unconscionable amount of energy worldwide).

Perhaps they are concerned that if the local software "needs" interaction from 
a browser UI being served up from an actual web server elsewhere on the 
internet, then the software may well be written by people who are not well 
versed in the various mechanisms of security exploit in networked environments. 
 Those are just the kinds of developers you do not want to be writing code that 
opens and listens for connections on server sockets.  As a minor example, I do 
not believe that cisco.com is on the PSL.  This means that if other Cisco.com 
web sites use domain-wide cookies, those cookies are available to that software 
running on the computer.  Conversely, having that key and an ability to 
manipulate a computer's DNS queries might allow a third party to perpetrate a 
targeted attack and capture any cisco.com site wide cookies.
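The cookie-scoping concern can be made concrete with RFC 6265 domain matching: a cookie set with Domain=cisco.com is sent to every subdomain, including one whose name resolves to 127.0.0.1. A simplified sketch (illustration only; whether any cisco.com site actually sets such cookies is not established here):

```python
def domain_match(host: str, cookie_domain: str) -> bool:
    """Simplified RFC 6265 section 5.1.3 domain-match (sketch only)."""
    host = host.lower()
    cookie_domain = cookie_domain.lower().lstrip(".")
    return host == cookie_domain or host.endswith("." + cookie_domain)

# A domain-wide cookie reaches the subdomain that points at localhost,
# so software listening there can receive it:
domain_match("drmlocal.cisco.com", ".cisco.com")  # True
```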

I personally am of the position that visiting a website in a browser should 
never privilege that website with direct interaction to other software on my 
computer without some sort of explicit extension / plugin / bridge technology 
that I had numerous obnoxious warnings to overcome to get installed.

Why on earth would a visit to spotify.com ever need to interact with the 
Spotify application on my computer?  Can you explain what they do with that?

More broadly, the Google people provide Chrome Native Messaging for scenarios 
where trusted sources (like a Chrome extension) can communicate with a local 
application which has opted into this arrangement in a secure way.  Limiting it 
to access via Chrome Extensions means that your Chrome browser user needs to 
install the Chrome Extension before you will be able to engage via that conduit.
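Native Messaging also avoids open server sockets entirely: the browser launches the registered host process itself and exchanges length-prefixed JSON over stdin/stdout. A minimal sketch of that framing (4-byte native-byte-order length, then UTF-8 JSON), demonstrated over an in-memory pipe rather than real stdin/stdout:

```python
import io
import json
import struct

def write_message(obj, stream) -> None:
    # Frame: 4-byte native-byte-order length, then the UTF-8 JSON payload.
    data = json.dumps(obj).encode("utf-8")
    stream.write(struct.pack("=I", len(data)))
    stream.write(data)
    stream.flush()

def read_message(stream):
    raw_len = stream.read(4)
    if len(raw_len) < 4:
        return None  # the browser closed the pipe
    (length,) = struct.unpack("=I", raw_len)
    return json.loads(stream.read(length).decode("utf-8"))

# Round-trip demo:
buf = io.BytesIO()
write_message({"cmd": "ping"}, buf)
buf.seek(0)
reply = read_message(buf)  # {'cmd': 'ping'}
```

Because the host is only started by the browser, and only for extensions listed in the host's manifest, there is no always-running listener for arbitrary pages to probe.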

A brief glance at the various chromium bugs involved in locking down access to 
WebSocket when referenced from a secure origin shows that the Chrome people 
definitely understood the use case and did not care that it would break things 
like this.

I have no affiliation with any browser team, but I speculate based on their 
actions and their commentary that they _meant_ to break this use case.  Rather, 
they seem to

Re: Symantec response to Google proposal

2017-06-20 Thread Jakob Bohm via dev-security-policy

On 20/06/2017 08:08, Gervase Markham wrote:

On 20/06/17 01:21, Jakob Bohm wrote:

2. For any certificate bundle that needs to be incorporated into the
   Mozilla root stores, a significant period (3 to 6 months at least)
   will be needed between acceptance by Mozilla and actual trust by
   Mozilla users.


Not if the roots were cross-signed by the old PKI.



Then they don't "need to be incorporated into the Mozilla root stores",
although incorporating them may be useful as part of removing the old
roots.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: [EXT] Mozilla requirements of Symantec

2017-06-20 Thread Jakob Bohm via dev-security-policy

On 20/06/2017 09:05, Ryan Sleevi wrote:

On Mon, Jun 19, 2017 at 7:01 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:


NSS until fairly recently was in fact used for code signing of Firefox
extensions using the public PKI (this is why there is a defunct code
signing trust bit in the NSS root store).



This is not an accurate representation on why there is a code signing trust
bit in the NSS root store.



Then what is an accurate representation?

I am of course aware that before the xpi format was even invented,
Netscape, Mozilla and Sun/Oracle used the NSS key store and possibly the
NSS code to validate signatures on Java applets.  But I am unsure if and
when they stopped doing that.




I also believe that the current checking of "AMO" signatures is still
done by NSS, but not using the public PKI.



If you mean with respect to code, sure, but that is a generic signature
checking.



Really?  I would have thought it was the same validation code previously
used for public PKI signatures on the same file types.

Anyway, it is most certainly checking signatures on code in a way
consistent with the general concept of "code signing" (the exact
placement and formatting of "code signing" signatures is extremely
vendor and file format dependent).




This makes it completely reasonable for other users of the NSS libraries
to still use it for code signing, provided that the "code signing" trust
bits in the NSS root store are replaced with an independent list,
possibly based on the CCADB.



This is not correct. The NSS team has made it clear the future of this code
with respect to its suitability as a generic "code signing" functionality -
that is, that it is not.



Pointer?

Was this communicated in a way visible to all NSS using software?




It also makes it likely that systems with long development / update
cycles have not yet deployed their own replacement for the code signing
trust bits in the NSS root store, even if they have a semi-automated
system importing changes to the NSS root store.  That would of course be
a mistake on their part, but a very likely mistake.



This was always a mistake, not a recent one. But a misuse of the API does
not make a valid use case.



How was it a mistake back when Mozilla was using NSS for "code signing"?
(Whenever that was).

P.S.

I am following the newsgroup, no need to CC me on replies.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-20 Thread Koen Rouwhorst via dev-security-policy
For your information: I have reported this issue to Spotify on Monday
(yesterday) through their official vulnerability disclosure channel
(HackerOne).  The (not-yet-public) issue was assigned ID 241222.

In the report I have included all the necessary (technical) details,
including citations of the relevant sections from the Baseline
Requirements, and DigiCert policies. The report was acknowledged, and
has been escalated to Spotify's security team for review.

On Tue, Jun 20, 2017, at 22:23, Rob Stradling via dev-security-policy
wrote:
> [CC'ing rev...@digicert.com, as per 
> https://ccadb-public.secure.force.com/mozillacommunications/CACommResponsesOnlyReport?CommunicationId=a05o03WrzBC&QuestionId=Q00028]
> 
> Annie,
> 
> "but these have been known about and deemed acceptable for years"
> 
> Known about by whom?  Deemed acceptable by whom?  Until the CA becomes 
> aware of a key compromise, the CA will not know that the corresponding 
> certificate(s) needs to be revoked.
> 
> Thanks for providing the Spotify example.  I've just found the 
> corresponding certificate (issued by DigiCert) and submitted it to some 
> CT logs.  It's not yet revoked:
> https://crt.sh/?id=158082729
> 
> https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0 does 
> appear to be the corresponding private key.
> 
> On 20/06/17 15:57, annie nguyen via dev-security-policy wrote:
> > Hi!
> > 
> > I'm not sure if this is the correct place to ask (I'm not sure where
> > else I would ask). I'm so sorry if this message is unwanted.
> > 
> > Earlier this week, a certificate for a domain resolving to 127.0.0.1 in
> > a Cisco application was revoked, because it was deemed to have been
> > compromised.
> > 
> > Dropbox, GitHub, Spotify and Discord (among others) have done the same
> > thing for years: they embed SSL certificates and private keys into their
> > applications so that, for example, open.spotify.com can talk to a local
> > instance of Spotify (which must be served over https because
> > open.spotify.com is also delivered over https).
> > 
> > This has happened for years, and these applications have certificates
> > issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
> > keys are trivially retrievable, since they're embedded in publicly
> > distributed binaries.
> > 
> > - GitHub: ghconduit.com
> > - Discord: discordapp.io
> > - Dropbox: www.dropboxlocalhost.com
> > - Spotify: *.spotilocal.com
> > 
> > Here is Spotify's, for example:
> > https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0
> > 
> > 
> > 
> > What I want to know is: how does this differ to Cisco's situation? Why
> > was Cisco's key revoked and considered compromised, but these have been
> > known about and deemed acceptable for years - what makes the situation
> > different?
> > 
> > It's been an on-going question for me, since the use case (as a software
> > developer) is quite real: if you serve a site over HTTPS and it needs to
> > communicate with a local client application then you need this (or, you
> > need to manage your own CA, and ask every person to install a
> > certificate on all their devices)
> > 
> > Thank you so much,
> > Annie
> > 
> 
> -- 
> Rob Stradling
> Senior Research & Development Scientist
> COMODO - Creating Trust Online
> Office Tel: +44.(0)1274.730505
> Office Fax: +44.(0)1274.730909
> www.comodo.com
> 
> COMODO CA Limited, Registered in England No. 04058690
> Registered Office:
>3rd Floor, 26 Office Village, Exchange Quay,
>Trafford Road, Salford, Manchester M5 3EQ
> 
> This e-mail and any files transmitted with it are confidential and 
> intended solely for the use of the individual or entity to whom they are 
> addressed.  If you have received this email in error please notify the 
> sender by replying to the e-mail containing this attachment. Replies to 
> this email may be monitored by COMODO for operational or business 
> reasons. Whilst every endeavour is taken to ensure that e-mails are free 
> from viruses, no liability can be accepted and the recipient is 
> requested to use their own virus checking software.


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread randomsyseng--- via dev-security-policy
> Moral of the story: if you have to ask whether it's a disclosure, you are better
> safe than sorry; keep the info under close wraps until you confirm it.

I think it's better it was disclosed than had it not been disclosed at all. 

While I agree to an extent that there could have been more optimal ways to handle 
the disclosure in this particular case, I think we should not try to discourage 
disclosure. If someone spots something and asks a very generic question, I'm 
almost certain folks will tell him to be more detailed, else they can't have an 
opinion. (Which is basically what he did: he asked one person a generic question, 
and that person told him to ask it here, so I guess it's reasonable that he was 
more detailed when doing so.) Now, if we tell people who spotted something AND 
did some additional research on it, which is IMHO commendable, that they 
shouldn't have disclosed anything before checking with so-and-so and waiting 
such-and-such an amount of time (which they might or might not be aware of), the 
next such person will likely just think: oh, sod it, maybe it's not so important 
anyway. (It's not like this was a complicated 0-day where the itsec engineer who 
found it would already have known exactly who to contact in advance.) Blaming 
the person who disclosed it is not helpful, I think.


Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-20 Thread Rob Stradling via dev-security-policy
[CC'ing rev...@digicert.com, as per 
https://ccadb-public.secure.force.com/mozillacommunications/CACommResponsesOnlyReport?CommunicationId=a05o03WrzBC&QuestionId=Q00028]


Annie,

"but these have been known about and deemed acceptable for years"

Known about by whom?  Deemed acceptable by whom?  Until the CA becomes 
aware of a key compromise, the CA will not know that the corresponding 
certificate(s) needs to be revoked.


Thanks for providing the Spotify example.  I've just found the 
corresponding certificate (issued by DigiCert) and submitted it to some 
CT logs.  It's not yet revoked:

https://crt.sh/?id=158082729

https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0 does 
appear to be the corresponding private key.


On 20/06/17 15:57, annie nguyen via dev-security-policy wrote:

Hi!

I'm not sure if this is the correct place to ask (I'm not sure where
else I would ask). I'm so sorry if this message is unwanted.

Earlier this week, a certificate for a domain resolving to 127.0.0.1 in
a Cisco application was revoked, because it was deemed to have been
compromised.

Dropbox, GitHub, Spotify and Discord (among others) have done the same
thing for years: they embed SSL certificates and private keys into their
applications so that, for example, open.spotify.com can talk to a local
instance of Spotify (which must be served over https because
open.spotify.com is also delivered over https).

This has happened for years, and these applications have certificates
issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
keys are trivially retrievable, since they're embedded in publicly
distributed binaries.

- GitHub: ghconduit.com
- Discord: discordapp.io
- Dropbox: www.dropboxlocalhost.com
- Spotify: *.spotilocal.com

Here is Spotify's, for example:
https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0



What I want to know is: how does this differ to Cisco's situation? Why
was Cisco's key revoked and considered compromised, but these have been
known about and deemed acceptable for years - what makes the situation
different?

It's been an on-going question for me, since the use case (as a software
developer) is quite real: if you serve a site over HTTPS and it needs to
communicate with a local client application then you need this (or, you
need to manage your own CA, and ask every person to install a
certificate on all their devices)

Thank you so much,
Annie



--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Office Tel: +44.(0)1274.730505
Office Fax: +44.(0)1274.730909
www.comodo.com

COMODO CA Limited, Registered in England No. 04058690
Registered Office:
  3rd Floor, 26 Office Village, Exchange Quay,
  Trafford Road, Salford, Manchester M5 3EQ



Re: When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-20 Thread Ryan Sleevi via dev-security-policy
Previous certificates for GitHub and Dropbox have been revoked for this
reason.

If this problem has been reintroduced, they similarly need to be revoked.

On Tue, Jun 20, 2017 at 4:57 PM annie nguyen via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> Hi!
>
> I'm not sure if this is the correct place to ask (I'm not sure where
> else I would ask). I'm so sorry if this message is unwanted.
>
> Earlier this week, a certificate for a domain resolving to 127.0.0.1 in
> a Cisco application was revoked, because it was deemed to have been
> compromised.
>
> Dropbox, GitHub, Spotify and Discord (among others) have done the same
> thing for years: they embed SSL certificates and private keys into their
> applications so that, for example, open.spotify.com can talk to a local
> instance of Spotify (which must be served over https because
> open.spotify.com is also delivered over https).
>
> This has happened for years, and these applications have certificates
> issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
> keys are trivially retrievable, since they're embedded in publicly
> distributed binaries.
>
> - GitHub: ghconduit.com
> - Discord: discordapp.io
> - Dropbox: www.dropboxlocalhost.com
> - Spotify: *.spotilocal.com
>
> Here is Spotify's, for example:
> https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0
>
> 
>
> What I want to know is: how does this differ to Cisco's situation? Why
> was Cisco's key revoked and considered compromised, but these have been
> known about and deemed acceptable for years - what makes the situation
> different?
>
> It's been an on-going question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application then you need this (or, you
> need to manage your own CA, and ask every person to install a
> certificate on all their devices)
>
> Thank you so much,
> Annie


Re: Audit Reminder Email Summary

2017-06-20 Thread Kathleen Wilson via dev-security-policy
 Forwarded Message 
Subject: Summary of June 2017 Audit Reminder Emails
Date: Tue, 20 Jun 2017 19:00:06 + (GMT)

Mozilla: Audit Reminder
Root Certificates:
   Atos TrustedRoot 2011
Standard Audit: 
https://www.mydqs.com/kunden/kundendatenbank.html?aoemydqs%5BrequestId%5D=europev2-DQS-27B09FD368FD11E6BE26005056BA67F7-_v2&aoemydqs%5BdownloadKey%5D=615a9f0920a27ede37762410735c2deadd67547d&aoemydqs%5Baction%5D=downloadDocument&cHash=9653a97e7fec91c2194d
Audit Statement Date: 2016-07-11
BR Audit: 
https://www.mydqs.com/kunden/kundendatenbank.html?aoemydqs%5BrequestId%5D=europev2-DQS-27B09FD368FD11E6BE26005056BA67F7-_v2&aoemydqs%5BdownloadKey%5D=615a9f0920a27ede37762410735c2deadd67547d&aoemydqs%5Baction%5D=downloadDocument&cHash=9653a97e7fec91c2194d
BR Audit Statement Date: 2016-07-11
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   Autoridad de Certificacion Firmaprofesional CIF A62634068
Standard Audit: https://cert.webtrust.org/SealFile?seal=2032&file=pdf
Audit Statement Date: 2016-04-11
BR Audit: https://bug521439.bmoattachments.org/attachment.cgi?id=8809981
BR Audit Statement Date: 2016-08-05
EV Audit: https://bug521439.bmoattachments.org/attachment.cgi?id=8809982
EV Audit Statement Date: 2016-08-05
CA Comments: BR and EV audits have happened, but there are action plans being 
presented to the auditors. Primary issues are use of UTF8 instead of 
PrintableString in jurisdictionOfIncorporation, and a recently repealed Spanish 
law that required privat



Mozilla: Audit Reminder
Root Certificates:
   Chambers of Commerce Root
   Chambers of Commerce Root - 2008
   Global Chambersign Root
   Global Chambersign Root - 2008
Standard Audit: https://bug986854.bmoattachments.org/attachment.cgi?id=8775118
Audit Statement Date: 2016-06-17
BR Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8800807
BR Audit Statement Date: 2016-08-05
EV Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8800811
EV Audit Statement Date: 2016-08-05
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   COMODO RSA Certification Authority
   USERTrust ECC Certification Authority
   AAA Certificate Services
   AddTrust Class 1 CA Root
   AddTrust External CA Root
   AddTrust Public CA Root
   AddTrust Qualified CA Root
   COMODO Certification Authority
   COMODO ECC Certification Authority
   Secure Certificate Services
   Trusted Certificate Services
   UTN-USERFirst-Client Authentication and Email
   UTN-USERFirst-Hardware
   UTN-USERFirst-Object
   USERTrust RSA Certification Authority
Standard Audit: https://cert.webtrust.org/SealFile?seal=2058&file=pdf
Audit Statement Date: 2016-06-03
BR Audit: https://cert.webtrust.org/SealFile?seal=2060&file=pdf
BR Audit Statement Date: 2016-06-03
BR Audit:
BR Audit Statement Date:
EV Audit: https://cert.webtrust.org/SealFile?seal=2059&file=pdf
EV Audit Statement Date: 2016-06-03
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   EC-ACC
Standard Audit: https://cert.webtrust.org/SealFile?seal=2043&file=pdf
Audit Statement Date: 2016-05-30
BR Audit: https://bugzilla.mozilla.org/attachment.cgi?id=8815404
BR Audit Statement Date: 2016-05-30
CA Comments: Per CA: ETSI-EIDAS audits to be released by the 1st of June 2017.



Mozilla: Audit Reminder
Root Certificates:
   AffirmTrust Commercial
   AffirmTrust Networking
   AffirmTrust Premium
   AffirmTrust Premium ECC
Standard Audit: https://cert.webtrust.org/SealFile?seal=2115&file=pdf
Audit Statement Date: 2016-06-30
BR Audit: https://cert.webtrust.org/SealFile?seal=2116&file=pdf
BR Audit Statement Date: 2016-06-30
EV Audit: https://cert.webtrust.org/SealFile?seal=2093&file=pdf
EV Audit Statement Date: 2016-06-30
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   GlobalSign ECC Root CA - R5
   GlobalSign Root CA - R3
   GlobalSign Root CA
   GlobalSign Extended Validation CA - SHA256 - G2 - intermediate cert being 
treated as root during transition
Standard Audit: https://cert.webtrust.org/SealFile?seal=2056&file=pdf
Audit Statement Date: 2016-06-10
BR Audit: https://cert.webtrust.org/SealFile?seal=2054&file=pdf
BR Audit Statement Date: 2016-06-10
EV Audit: https://cert.webtrust.org/SealFile?seal=2055&file=pdf
EV Audit Statement Date: 2016-06-10
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   Government Root Certification Authority - Taiwan
Standard Audit: https://cert.webtrust.org/SealFile?seal=2050&file=pdf
Audit Statement Date: 2016-06-29
BR Audit: https://cert.webtrust.org/SealFile?seal=2051&file=pdf
BR Audit Statement Date: 2016-06-29
CA Comments: null



Mozilla: Audit Reminder
Root Certificates:
   Hellenic Academic and Research Institutions RootCA 2011
   Hellenic Academic and Research Institutions ECC RootCA 2015
   Hellenic Academic and Research Institutions RootCA 2015
Standard Audit: 
http://www.harica.gr/documents/HARICA-ETSI_CERTIFICATE_AUTH_W_ANNEX_REV1.pdf
Audit Statement Date: 2016-07-08
BR Audit: 
htt

Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread reisinger.nate--- via dev-security-policy
On Tuesday, June 20, 2017 at 12:52:02 PM UTC-4, Lee wrote:
> On 6/20/17, mfisch--- via dev-security-policy
>  wrote:
> > On Monday, June 19, 2017 at 7:37:23 PM UTC-4, Matt Palmer wrote:
> >> On Sun, Jun 18, 2017 at 08:17:07AM -0700, troy.fridley--- via
> >> dev-security-policy wrote:
> >> > If you should find such an issue again in a Cisco owned domain, please
> >> > report it to ps...@cisco.com and we will ensure that prompt and proper
> >> > actions are taken.
> >>
> >> I don't know, this way seems to have worked Just Fine.
> >>
> >> - Matt
> >
> > Does no-one else see the lack of responsible disclosure in this thread
> > distressing?
> 
> Nope.  The first requirement for "responsible disclosure" is knowing
> you're disclosing something.  Take a look at the original msg:
> -- I wasn't entirely sure whether this is considered a key compromise. I asked
> -- Hanno Böck on Twitter (https://twitter.com/koenrh/status/873869275529957376
> -- ), and he advised me 
> to
> -- post the matter to this mailing list.
>  <.. snip ..>
> -- If this is indeed considered a key compromise, where do I go from
> here, and what
> -- are the recommended steps to take?
> 
> If you want to argue that I should have replied with something about
> sending the info to ps...@cisco.com instead of just forwarding the 1st
> two messages in the thread to them.. yeah, maybe I should have done it
> that way.
> 
> > Instead -- this was posted to a public forum giving many thousands of people
> > the opportunity to chain a vector attack against Cisco CCO IDs (which in
> > some cases might lead to customer network compromise).
> 
> I'm curious - how does one use a cert for drmlocal.cisco.com to chain
> a vector attack against Cisco CCO IDs?
> 
> Regards,
> Lee

I think his complaint was that you laid out every single detail while 
simultaneously asking what you should do.  You could have done without the vast 
detail and kept it very generic until you figured out whom to contact, then let 
the vendors fix it. 

Moral of the story: if you have to ask whether it's a disclosure, you are better 
safe than sorry; keep the info under close wraps until you confirm it.


When are public applications embedding certificates pointing to 127.0.0.1 OK?

2017-06-20 Thread annie nguyen via dev-security-policy
Hi!

I'm not sure if this is the correct place to ask (I'm not sure where
else I would ask). I'm so sorry if this message is unwanted.

Earlier this week, a certificate for a domain resolving to 127.0.0.1 in
a Cisco application was revoked, because it was deemed to have been
compromised.

Dropbox, GitHub, Spotify and Discord (among others) have done the same
thing for years: they embed SSL certificates and private keys into their
applications so that, for example, open.spotify.com can talk to a local
instance of Spotify (which must be served over https because
open.spotify.com is also delivered over https).

This has happened for years, and these applications have certificates
issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
keys are trivially retrievable, since they're embedded in publicly
distributed binaries.

- GitHub: ghconduit.com
- Discord: discordapp.io
- Dropbox: www.dropboxlocalhost.com
- Spotify: *.spotilocal.com

Here is Spotify's, for example:
https://gist.github.com/venoms/d2d558b1da2794b9be6f57c5e81334f0
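For reference, a small check (an editorial sketch, not from the original post) that a hostname like the ones above resolves to loopback. The injectable `resolver` argument exists only so the logic can be exercised without live DNS; the records for these hosts may have changed since this 2017 post.

```python
import socket

def points_to_loopback(hostname, resolver=socket.gethostbyname):
    """Return True if `hostname` resolves to the IPv4 loopback address."""
    try:
        return resolver(hostname) == "127.0.0.1"
    except OSError:
        # Name did not resolve at all.
        return False
```

For example, `points_to_loopback("www.dropboxlocalhost.com")` performs a live lookup, so the result depends on current DNS.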



What I want to know is: how does this differ from Cisco's situation? Why
was Cisco's key revoked and considered compromised, while these have been
known about and deemed acceptable for years - what makes the situation
different?

It's been an ongoing question for me, since the use case (as a software
developer) is quite real: if you serve a site over HTTPS and it needs to
communicate with a local client application then you need this (or you
need to manage your own CA and ask every person to install a
certificate on all their devices).

Thank you so much,
Annie
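A minimal sketch (an editorial addition, not from the original post) of the alternative suggested later in this thread: serve plain HTTP on 127.0.0.1 and have the local service allow-list, via CORS, the web origins permitted to talk to it. The origin "https://open.example.com" and all handler names here are hypothetical.

```python
# Sketch: a local HTTP service bound to 127.0.0.1 that only answers
# requests carrying an allow-listed web Origin header.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGINS = {"https://open.example.com"}  # hypothetical web origin

class LocalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        origin = self.headers.get("Origin", "")
        if origin in ALLOWED_ORIGINS:
            self.send_response(200)
            # Echo back only the vetted origin; never use "*" here.
            self.send_header("Access-Control-Allow-Origin", origin)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            # Refuse requests from unknown web origins.
            self.send_response(403)
            self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

def serve_one_request():
    # Bind to 127.0.0.1 explicitly rather than "localhost"; see the
    # let-localhost-be-localhost draft cited in this thread.
    httpd = HTTPServer(("127.0.0.1", 0), LocalHandler)
    threading.Thread(target=httpd.handle_request, daemon=True).start()
    return httpd
```

A page on the allow-listed origin could then fetch `http://127.0.0.1:<port>/` and read the response; any other origin gets a 403 and no CORS header, so the browser blocks the read.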


Re: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-20 Thread Gervase Markham via dev-security-policy
Hi Doug,

On 20/06/17 16:31, Doug Beattie wrote:
> I'd like to recommend a phase in of the requirement for technically 
> constrained CAs that issue Secure email certificates.

For those following along at home, that is this change:
https://github.com/mozilla/pkipolicy/issues/69
https://github.com/mozilla/pkipolicy/commit/f96076a01ef10e5d6a84fa4b042227512925cb7c

> We have 2 customers that can issue Secure Email certificates that are
> not technically constrained with name Constraints (the EKU is
> constrained to Secure Email and ClientAuth).>
> One customer operates the CA within their environment and has been
> doing so for several years. Even though we've been encouraging them to
> move back to a Name Constrained CA or a hosted service, 

To be clear: this customer has the ability to issue email certificates
for any email address on the planet, and they control their own
intermediate in their own infrastructure?

Do they have audits of any sort?

What are their objections to moving to a hosted service?

> The other customer complies with the prior wording in the Mozilla policy 
> regarding "Business Controls".  We have an agreement with them where we 
> issue them Secure Email certificates from our Infrastructure for domains 
> they host and they are contractually bound to using those certificates only 
> for the matching mail account.  Due to the number of different domains 
> managed and the fact that they obtain certificates on behalf of the users, 
> it's difficult to enforce validation of the email address.  We have plans 
> to add features to this issuance platform that will resolve this, but not 
> in the near term.

So even though this issuance is from your infrastructure, there are no
restrictions on the domains they can request issuance from?

Gerv


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread mfisch--- via dev-security-policy
On Tuesday, June 20, 2017 at 2:27:10 PM UTC-4, mfi...@fortmesa.com wrote:
> On Tuesday, June 20, 2017 at 2:06:00 PM UTC-4, Jonathan Rudenberg wrote:
> > > On Jun 20, 2017, at 10:36, mfisch--- via dev-security-policy 
> > >  wrote:
> > > 
> > > On Monday, June 19, 2017 at 7:37:23 PM UTC-4, Matt Palmer wrote:
> > >> On Sun, Jun 18, 2017 at 08:17:07AM -0700, troy.fridley--- via 
> > >> dev-security-policy wrote:
> > >>> If you should find such an issue again in a Cisco owned domain, please
> > >>> report it to ps...@cisco.com and we will ensure that prompt and proper
> > >>> actions are taken.
> > >> 
> > >> I don't know, this way seems to have worked Just Fine.
> > >> 
> > >> - Matt
> > > 
> > > Does no-one else see the lack of responsible disclosure in this thread 
> > > distressing?
> > > 
> > > Yes, the cert was revoked, and for all you know during the race that was 
> > > this public disclosure there could have been compromise. There are APT 
> > > actors watching this thread right now looking for opportunities.
> > 
> > The disclosure looks responsible to me.
> > 
> > The domain resolves to 127.0.0.1, which means that the private key can only 
> > be effectively leveraged by a privileged attacker that can forge DNS 
> > responses. An attacker that can do this can almost certainly also block 
> > online OCSP/CRL checks, preventing the revocation from being seen by 
> > clients. In general, revocation will not have any meaningful impact against 
> > misuse unless the certificate is included in OneCRL/CRLSets (for 
> > Firefox/Chrome).
> > 
> > Jonathan
> 
> Koen,
>   Cheers on the find and thanks for your contribution.
> 
> Jonathan,
> 
>   Is your argument that TLS checking on session key disclosure is not 
> necessary because man in the middle is game over?
> 
>   You're right there are only very limited ways this sort of thing can be 
> used (and maybe it can't be used at all). But it would be difficult to argue 
> effectively that:
> 
> a) It can't be used under specific circumstances to weaken security and 
> possibly combine with other attacks.
> 
> b) That someone who recognized this for what it was did not reasonably 
> believe it _shouldn't_ be public knowledge.
> 
> c) I acknowledge your point about the effectiveness of revocation.
> 
> My issue is in fact not with the disclosure at all, but with the fact that 
> no one else on this thread pointed out that it was not the ideal disclosure 
> methodology (at least in order of events). It's now been said.
> 
> Both Cisco and the CA maintain infosec incident handling teams that are paid 
> specifically to handle these situations. Yes, it's true corporate infosec 
> fails a lot too, but a head start is ethical.
> 
> The culture of our industry needs to think hard about the power it wields so 
> it is not taken from us and wrapped up in ineffective and burdensome state 
> oversight.

PS: it was noted previously that if other Cisco properties are a bit 
freewheeling with their cookie security, it would be possible to leak.


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread mfisch--- via dev-security-policy
On Tuesday, June 20, 2017 at 2:06:00 PM UTC-4, Jonathan Rudenberg wrote:
> > On Jun 20, 2017, at 10:36, mfisch--- via dev-security-policy 
> >  wrote:
> > 
> > On Monday, June 19, 2017 at 7:37:23 PM UTC-4, Matt Palmer wrote:
> >> On Sun, Jun 18, 2017 at 08:17:07AM -0700, troy.fridley--- via 
> >> dev-security-policy wrote:
> >>> If you should find such an issue again in a Cisco owned domain, please
> >>> report it to ps...@cisco.com and we will ensure that prompt and proper
> >>> actions are taken.
> >> 
> >> I don't know, this way seems to have worked Just Fine.
> >> 
> >> - Matt
> > 
> > Does no-one else see the lack of responsible disclosure in this thread 
> > distressing?
> > 
> > Yes, the cert was revoked, and for all you know during the race that was 
> > this public disclosure there could have been compromise. There are APT 
> > actors watching this thread right now looking for opportunities.
> 
> The disclosure looks responsible to me.
> 
> The domain resolves to 127.0.0.1, which means that the private key can only 
> be effectively leveraged by a privileged attacker that can forge DNS 
> responses. An attacker that can do this can almost certainly also block 
> online OCSP/CRL checks, preventing the revocation from being seen by clients. 
> In general, revocation will not have any meaningful impact against misuse 
> unless the certificate is included in OneCRL/CRLSets (for Firefox/Chrome).
> 
> Jonathan

Koen,
  Cheers on the find and thanks for your contribution.

Jonathan,

  Is your argument that TLS checking on session key disclosure is not necessary 
because man in the middle is game over?

  You're right there are only very limited ways this sort of thing can be used 
(and maybe it can't be used at all). But it would be difficult to argue 
effectively that:

a) It can't be used under specific circumstances to weaken security and 
possibly combine with other attacks.

b) That someone who recognized this for what it was did not reasonably believe 
it _shouldn't_ be public knowledge.

c) I acknowledge your point about the effectiveness of revocation.

My issue is in fact not with the disclosure at all, but with the fact that no 
one else on this thread pointed out that it was not the ideal disclosure 
methodology (at least in order of events). It's now been said.

Both Cisco and the CA maintain infosec incident handling teams that are paid 
specifically to handle these situations. Yes, it's true corporate infosec 
fails a lot too, but a head start is ethical.

The culture of our industry needs to think hard about the power it wields so it 
is not taken from us and wrapped up in ineffective and burdensome state 
oversight.


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread Jonathan Rudenberg via dev-security-policy

> On Jun 20, 2017, at 10:36, mfisch--- via dev-security-policy 
>  wrote:
> 
> On Monday, June 19, 2017 at 7:37:23 PM UTC-4, Matt Palmer wrote:
>> On Sun, Jun 18, 2017 at 08:17:07AM -0700, troy.fridley--- via 
>> dev-security-policy wrote:
>>> If you should find such an issue again in a Cisco owned domain, please
>>> report it to ps...@cisco.com and we will ensure that prompt and proper
>>> actions are taken.
>> 
>> I don't know, this way seems to have worked Just Fine.
>> 
>> - Matt
> 
> Does no-one else see the lack of responsible disclosure in this thread 
> distressing?
> 
> Yes, the cert was revoked, and for all you know during the race that was this 
> public disclosure there could have been compromise. There are APT actors 
> watching this thread right now looking for opportunities.

The disclosure looks responsible to me.

The domain resolves to 127.0.0.1, which means that the private key can only be 
effectively leveraged by a privileged attacker that can forge DNS responses. An 
attacker that can do this can almost certainly also block online OCSP/CRL 
checks, preventing the revocation from being seen by clients. In general, 
revocation will not have any meaningful impact against misuse unless the 
certificate is included in OneCRL/CRLSets (for Firefox/Chrome).

Jonathan


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread troy.fridley--- via dev-security-policy

Nick,

  I misspoke in my reply.  The certificate has been revoked and it has not 
been re-issued.  We have filed a stop-ship defect (Cisco Bug ID CSCve90409) 
against the product to ensure that the issue is not re-introduced.

The certificate in question was never used to transfer customer or service 
provider information over a public network.  The engineering team utilized the 
cert to protect an IPC channel between a user's browser and a background 
process running on the host.

Rest assured that if the Cisco PSIRT or Cisco PKI teams had known that the 
certificate would be exposed in this manner, we would have prevented it.  We 
have folks working with the responsible engineers to ensure they understand 
the implications of their previous design.

Regards,
- -Troy


On Monday, June 19, 2017 at 8:50:24 AM UTC-4, Nick Lamb wrote:
> On Monday, 19 June 2017 09:32:20 UTC+1, troy.f...@cisco.com  wrote:
> >The compromised certificate for drmlocal.cisco.com serial number 
> > 6170CE2EC8B7D88B4E2EB732E738FE3A67CF672 has been revoked.  A new 
> > certificate is being reissued to drmlocal.cisco.com and we will work with 
> > the developers of the YES video player to ensure that the issue does not 
> > happen again.  
> 
> Troy, the name makes me suspicious, what - other than this trick which isn't 
> a permissible use - was the drmlocal.cisco.com name ever for in the first 
> place? If it doesn't have any legitimate use, there was no purpose in seeking 
> a re-issue of the certificate.
> 
> The right way to approach this problem will be to issue unique keys and 
> certificates to individual instances of the system, this both satisfies the 
> BRs and (which is why) achieves the actual security goal of partitioning each 
> customer so that they can't MitM each other.
> 
> It is a little surprising to me that (at least so far as I know) no 
> manufacturer has an arrangement with a CA to issue them large volumes of 
> per-device certificates in this way. I expect that Comodo (to name one which 
> definitely has business issuing very large volumes) would be able to 
> accommodate a deal to issue say, a million certificates per year with an 
> agreed suffix (say, .nowtv.cisco.com) for a fixed fee. The first time it's 
> attempted there would be some engineering work to be done in production and 
> software for the system, but nothing truly novel and that work is re-usable 
> for future products.
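A rough sketch of the per-instance approach described above, under the assumption (hypothetical here) of a device name beneath the suggested .nowtv.cisco.com suffix: each installation generates its own key and CSR locally, so no private key ever ships inside a binary, and only the CSR goes to the CA.

```shell
# Hypothetical per-device name under the suffix suggested above.
DEVICE_NAME="device-12345.nowtv.cisco.com"

# Each installed instance generates its own key pair locally; the
# private key never leaves the device, unlike the embedded-key scheme.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout device.key -out device.csr \
  -subj "/CN=${DEVICE_NAME}"

# The CSR (not the key) would then be submitted to the CA's bulk
# issuance service for signing.
openssl req -in device.csr -noout -subject
```

The engineering work Nick mentions would sit around the last step: wiring the CSR submission into whatever high-volume issuance API the CA provides.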



Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread Lee via dev-security-policy
On 6/20/17, mfisch--- via dev-security-policy
 wrote:
> On Monday, June 19, 2017 at 7:37:23 PM UTC-4, Matt Palmer wrote:
>> On Sun, Jun 18, 2017 at 08:17:07AM -0700, troy.fridley--- via
>> dev-security-policy wrote:
>> > If you should find such an issue again in a Cisco owned domain, please
>> > report it to ps...@cisco.com and we will ensure that prompt and proper
>> > actions are taken.
>>
>> I don't know, this way seems to have worked Just Fine.
>>
>> - Matt
>
> Does no-one else see the lack of responsible disclosure in this thread
> distressing?

Nope.  The first requirement for "responsible disclosure" is knowing
you're disclosing something.  Take a look at the original msg:
-- I wasn't entirely sure whether this is considered a key compromise. I asked
-- Hanno Böck on Twitter (https://twitter.com/koenrh/status/873869275529957376
-- ), and he advised me to
-- post the matter to this mailing list.
 <.. snip ..>
-- If this is indeed considered a key compromise, where do I go from here,
-- and what are the recommended steps to take?

If you want to argue that I should have replied with something about
sending the info to ps...@cisco.com instead of just forwarding the 1st
two messages in the thread to them.. yeah, maybe I should have done it
that way.

> Instead -- this was posted to a public forum giving many thousands of people
> the opportunity to chain a vector attack against Cisco CCO IDs (which in
> some cases might lead to customer network compromise).

I'm curious - how does one use a cert for drmlocal.cisco.com to chain
a vector attack against Cisco CCO IDs?

Regards,
Lee


Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread mfisch--- via dev-security-policy
On Monday, June 19, 2017 at 7:37:23 PM UTC-4, Matt Palmer wrote:
> On Sun, Jun 18, 2017 at 08:17:07AM -0700, troy.fridley--- via 
> dev-security-policy wrote:
> > If you should find such an issue again in a Cisco owned domain, please
> > report it to ps...@cisco.com and we will ensure that prompt and proper
> > actions are taken.
> 
> I don't know, this way seems to have worked Just Fine.
> 
> - Matt

Does no-one else see the lack of responsible disclosure in this thread 
distressing?

Yes, the cert was revoked, and for all you know during the race that was this 
public disclosure there could have been compromise. There are APT actors 
watching this thread right now looking for opportunities.

This could have been reported to the vendor, or if you are not happy with 
Cisco's security response, to the CA first. 24 hours is not too long to keep 
this under hat.

Instead -- this was posted to a public forum giving many thousands of people 
the opportunity to chain a vector attack against Cisco CCO IDs (which in some 
cases might lead to customer network compromise).

If our community does not work to encourage more responsible disclosures the 
governments will do it for us, and it won't be nice.

"Remember the Wassenaar"

Matt

Matthew Fisch, CISSP
mfi...@fortmesa.com


RE: Root Store Policy 2.5: Call For Review and Phase-In Periods

2017-06-20 Thread Doug Beattie via dev-security-policy
Hi Gerv,

I'd like to recommend a phase in of the requirement for technically constrained 
CAs that issue Secure email certificates.

We have 2 customers that can issue Secure Email certificates that are not 
technically constrained with name Constraints (the EKU is constrained to Secure 
Email and ClientAuth).

We'd like to propose:
- All new CAs shall comply with Policy 2.5 on its effective date
- All existing CAs can continue to operate in issuance mode for one year
- All existing CAs may continue to operate in maintenance mode to support 
revocation services for up to 1 additional year (allow all 1-year certificates 
to expire), then the CA must be revoked.

One customer operates the CA within their environment and has been doing so for 
several years.  Even though we've been encouraging them to move back to a Name 
Constrained CA or a hosted service, we've been unable to set firm plans in 
place without a Root program deadline we can reference.  Due to the nature of 
the company and their acquisitions, they need to keep supporting new domains, 
so name constraints are difficult to keep up with.

The other customer complies with the prior wording in the Mozilla policy 
regarding "Business Controls".  We have an agreement with them where we issue 
them Secure Email certificates from our Infrastructure for domains they host 
and they are contractually bound to using those certificates only for the 
matching mail account.  Due to the number of different domains managed and the 
fact that they obtain certificates on behalf of the users, it's difficult to 
enforce validation of the email address.  We have plans to add features to 
this issuance platform that will resolve this, but not in the near term.

Doug


> -Original Message-
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+doug.beattie=globalsign@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Thursday, June 8, 2017 11:43 AM
> To: mozilla-dev-security-pol...@lists.mozilla.org
> Subject: Root Store Policy 2.5: Call For Review and Phase-In Periods
> 
> Hi everyone,
> 
> I've made the last change I currently intend to make for version 2.5 of
> Mozilla's Root Store Policy. The last task before shipping it is to assess
> whether any of the changes require a phase-in period, i.e. for some reason,
> they can't be applicable immediately.
> 
> CAs and others are requested to comment, with rationale, as to why
> particular changes will need a phase-in period and what period they are
> proposing as appropriate. This is also an opportunity for interested parties 
> to
> do a general final review.
> 
> I hope to ship the policy immediately after the CAB Forum meeting in Berlin,
> which is happening from the 20th to the 22nd of June.
> 
> You can see the differences between version 2.4.1 and version 2.5 here in
> diff format (click "Files Changed" and then "Load Diff"):
> https://github.com/mozilla/pkipolicy/compare/2.4.1...master
> 
> or here in a more rich format:
> https://github.com/mozilla/pkipolicy/compare/2.4.1...master?short_path=b
> 7447c8
> (click "Files Changed" and scroll down).
> 
> The CCADB Policy changes are trivial and can be ignored.
> 
> Here is my summary of what's changed that's significant (with section
> numbers in brackets), although you should not rely on it to be complete:
> 
> 
> 1) Certificates with anyEKU have been added to the scope. (1.1)
> 
> 2) CAs are required to "follow industry best practice for securing their
> networks, for example by conforming to the CAB Forum Network Security
> Guidelines or a successor document". (2.1)
> 
> 3) Accounts which perform "Registration Authority or Delegated Third Party
> functions" are now also required to have multi-factor auth. (2.1)
> 
> 4) CAs are required to follow, but not required to contribute to,
> mozilla.dev.security.policy. (2.1)
> 
> 5) CAs are required to use only the 10 Blessed Methods for domain
> validation. (2.2) This requirement has already had a deadline set for it in 
> the
> most recent CA Communication; that deadline is 21st July 2017.
> 
> 6) WebTrust BR audits must now use version 2.2 or later of the audit criteria.
> (3.1.1)
> 
> 7) The ETSI audit criteria requirements have been updated to be accurate.
> (3.1.2.2). ETSI TS 102 042 and TS 101 456 audits will only be accepted for
> audit periods ending in July 2017 or earlier.
> 
> 8) There are further requirements on the information that needs to be
> contained in publicly-available audit information. (3.1.3)
> 
> 9) Mozilla now requires that auditors be qualified in the scheme they are
> using, unless agreed in writing beforehand. (3.2)
> 
> 10) When CAs do their BR-required yearly update of their CPs and CPSes,
> they MUST indicate this by incrementing the version number and adding a
> dated changelog entry. (3.3)
> 
> 11) The Mozilla CCADB Policy has been merged into the Root Store Policy,
> but the requirements have not changed. (4.1/4.2)
> 
> 12) CA are required at all times t

Re: ETSI auditors still not performing full annual audits?

2017-06-20 Thread Ryan Sleevi via dev-security-policy
Thanks, Kathleen, for raising these issues.

At a high level, this highlights an interesting concern. If we, as the
broader community, lack the expertise to appropriately review and consume the
audit reports as intended, it may signal a question about whether or not we
should consider consuming ETSI reports. Thus, to ensure ETSI reports
continue to be viable for CAs to provide, it would behove those supporters
and professionals to ensure there is a robust understanding about how to
consume such reports, much as there is similar ongoing discussion (and much
better expertise) towards the consumption of WebTrust reports.

The text you've described seems to align with the process outlined in
https://assets.kpmg.com/content/dam/kpmg/ch/pdf/kpmg-certification-compliance-and-methodology-en.pdf

However, that process notably differs from the expectations of the Mozilla
Root Certificate Policy.

As you've noted, the letter calls out that this is a main certification
audit (good!), but then notes surveillance audits are planned. While
permissible under the appropriate ETSI guidelines, for purposes of
inclusion in the Mozilla store, the expectation is that a full assessment
audit is performed each year. That is,
http://www.etsi.org/deliver/etsi_en/319400_319499/319403/02.02.02_60/en_319403v020202p.pdf
Section 7.9 permits (limited scope) assessments, but Mozilla does not.

Thus, despite being a main certification audit (which is expected), the
fact that it's called out that other items are planned is concerning.

I think this also aligns with why "agreed upon procedures" is concerning.
An AUP could indicate that the CA has gone above and beyond the set of
controls, or it may indicate that the CA has self-limited the scope of the
engagement to a subset of activities. It may be that the activities are not
performed, as we've seen similarly in WebTrust audits (for example, with
respect to certificate suspension). Unfortunately, this is not clear - and
there's not enough information in the report, as best I can tell, to
distinguish this case.

I am very concerned with respect to the "point in time" audit. There should
not be any ambiguity here - both with respect to Mozilla expectations and
to the general understanding, as previously discussed in the CA/Browser
Forum with the respective auditor communities. To an extent, this aligns
with my understanding of a "main certification audit" - that is, that they
may be inclined to be "point in time" audit, with the surveillance audit
being a review of the CA's adherence to that.

I agree there's some ambiguity with the statement of "Note that the
corresponding certification report was written in German and is only
intended for the client". On one sense, we want to ensure that the report
provided is consistent with and 'binding' of the auditor (conformance
assessment body) and a statement they'll stand by, hence the concern. That
said, the report also asserts KPMG's SCESm registration, so presumably,
this is an official statement. The scope is noted as discussed further at
https://www.seco.admin.ch/sas/PKI , except that URL isn't valid.

Similarly, I agree, it's concerning that it's noted as conforming to the
EVCP profile, but similarly notes "We were not engaged to and did not
conduct an examination, the objective of which would be the expression of
an opinion on the Application for Extended Validation (EV) Certificate."
It's not clear how to reconcile this difference.

Another element of uncertainty is that the policies are listed as
compliance to "DVCP and PTC-BR", except that with respect to EN 319 411-1,
it states that "NOTE: Within the context of the present document PTC is
used synonymously with EVC, DVC and OVC as per CAB Forum documents." So is
this indicative that they evaluated DVCP,
OVCP, and EVCP? Or something else? As best I can tell, "PTC-BR" is an
artifact from the predecessor document, TS 102 042, and not relevant in the
context of 411-1.

One other thing that I'm unclear with - it's notable that in audits
performed by other CABs provide certificates of compliance. For example,
TUVIT provides
https://www.tuvit.de/en/services/certification/certification-authorities-according-to-etsi/
. My understanding from the ETSI representatives to the CA/B Forum is that
such certifications are standard - but it's unclear that this represents
such a certification. Perhaps this is only unique to TUVIT?


On Mon, Jun 19, 2017 at 4:57 PM, Kathleen Wilson via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> On Monday, June 19, 2017 at 12:21:46 PM UTC-7, Peter Bowen wrote:
> > It seems there is some confusion. The document presented would appear
> > to be a Verified Accountant Letter (as defined in the EV Guidelines)
> > and can used as part of the process to validate a request for an EV
> > certificate.  It is not an audit report and is not something normally
> > submitted to browsers.
>
> Yet, it is the document that was provided to root sto

Re: Private key corresponding to public key in trusted Cisco certificate embedded in executable

2017-06-20 Thread Nick Lamb via dev-security-policy
On Tuesday, 20 June 2017 05:50:06 UTC+1, Matthew Hardeman  wrote:
> The right balance is probably revoking when misuse is shown.

Plus education. Robin has stated that there _are_ suitable CA products for this 
use case in existence today, but if I didn't know it stands to reason that at 
least some of the engineers at Cisco didn't know either.

Knowing what the Right Thing is makes it easier to push back when somebody 
proposes (as they clearly did here) the Wrong Thing. If, at the end of the day, 
Cisco management signs off on the additional risk from doing the Wrong Thing 
because it's cheaper, or faster, or whatever, that's on them. But if nobody in 
their engineering teams is even aware of the alternative it becomes a certainty.


Re: [EXT] Mozilla requirements of Symantec

2017-06-20 Thread Ryan Sleevi via dev-security-policy
On Mon, Jun 19, 2017 at 7:01 PM, Jakob Bohm via dev-security-policy <
dev-security-policy@lists.mozilla.org> wrote:

> NSS until fairly recently was in fact used for code signing of Firefox
> extensions using the public PKI (this is why there is a defunct code
> signing trust bit in the NSS root store).
>

This is not an accurate representation of why there is a code signing trust
bit in the NSS root store.


> I also believe that the current checking of "AMO" signatures is still
> done by NSS, but not using the public PKI.
>

If you mean with respect to code, sure, but that is a generic signature
checking.


> This makes it completely reasonable for other users of the NSS libraries
> to still use it for code signing, provided that the "code signing" trust
> bits in the NSS root store are replaced with an independent list,
> possibly based on the CCADB.
>

This is not correct. The NSS team has made it clear the future of this code
with respect to its suitability as a generic "code signing" functionality -
that is, that it is not.


> It also makes it likely that systems with long development / update
> cycles have not yet deployed their own replacement for the code signing
> trust bits in the NSS root store, even if they have a semi-automated
> system importing changes to the NSS root store.  That would of course be
> a mistake on their part, but a very likely one.


This was always a mistake, not a recent one. But a misuse of the API does
not make a valid use case.