Re: Signature Algorithm: sha1WithRSAEncryption in /etc/pki/tls/cert.pem

2020-04-13 Thread Ryan Sleevi
There’s a lot going on here.

1) The discussion about /etc/pki/tls/cert.pem and ca-certificates belongs
with your distro
2) Assuming your distro ships the Mozilla Root Store, which few do
correctly and successfully, the discussion about root certificates belongs
with mozilla.dev.security.policy instead
3) However, the signature algorithm on a root certificate does not matter,
because the signature on the root isn’t used. Root certificates are just
used as RFC5280 trust anchors, which means only the encoded Subject and
subjectPublicKeyInfo matter (see the sketch below).
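
For the curious, a minimal sketch against the public NSS API (not
mozilla::pkix's actual path-building code) of why that's so:

    #include "cert.h"
    #include "keyhi.h"

    /* Sketch: verifying that `child` was signed by `anchor` reads only
     * the anchor's subjectPublicKeyInfo.  anchor->signatureWrap - the
     * SHA-1 signature *on* the root - is never touched. */
    SECStatus issued_by_anchor(CERTCertificate *child, CERTCertificate *anchor)
    {
        SECKEYPublicKey *key = CERT_ExtractPublicKey(anchor);
        SECStatus rv;

        if (!key)
            return SECFailure;
        rv = CERT_VerifySignedDataWithPublicKey(&child->signatureWrap, key,
                                                NULL /* pin-prompt arg */);
        SECKEY_DestroyPublicKey(key);
        return rv;
    }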

Hopefully that addresses your concerns!

On Mon, Apr 13, 2020 at 10:33 PM zhujianwei (C) 
wrote:

> Hi, dev-tech-crypto
>
> I found /etc/pki/tls/cert.pem using 'Signature Algorithm:
> sha1WithRSAEncryption' from the ca-certificates package. This is an
> unsafe algorithm. Are there plans to update to a more secure algorithm?
> --
> dev-tech-crypto mailing list
> dev-tech-crypto@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Accessing Firefox key store for signing

2019-05-24 Thread Ryan Sleevi
On Sat, May 25, 2019 at 2:03 AM Nisar Hassan  wrote:

> Dear Team,
>
>
>
> Is there any way we can digitally sign a transaction in a web app using
> a certificate stored in Firefox's key store?
>
> Best Regards,
>
> Nisar Hassan
>
> Professional Service Engineer (PKI Department)
>
> National Institutional Facilitation Technologies (Pvt.) Ltd.
>
> 5th Floor, AWT Plaza, I.I. Chundrigar Road, Karachi-74200.
>
> ' UAN: (92-21) 111-112-222 Ext 243.
>
> * nisar.has...@nift.pk


The Web Platform provides no APIs for interacting with users’ smart cards
or certificates. You can use TLS mutual authentication to identify the
user, if the user chooses to present such a certificate, but potentially
hostile web pages are given no access to users’ certificates or private
keys.

You can, however, use extensions that the user explicitly installs.

> 
>
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: deactivate Web Cryptography API

2019-05-24 Thread Ryan Sleevi
On Sat, May 25, 2019 at 2:03 AM  wrote:

> Hello,
>
> We are using Firefox, and we want to deactivate the Web Cryptography
> API. We currently do this with an add-on ("Web API Manager"), but our
> goal is to deactivate the API without using an add-on.
>
> Is there any way to do this for our use case?
>
>
> The add-on blocks the following APIs:
>
> "Crypto.prototype.getRandomValues",
> "SubtleCrypto.prototype.decrypt",
> "SubtleCrypto.prototype.deriveBits",
> "SubtleCrypto.prototype.deriveKey",
> "SubtleCrypto.prototype.digest",
> "SubtleCrypto.prototype.encrypt",
> "SubtleCrypto.prototype.exportKey",
> "SubtleCrypto.prototype.generateKey",
> "SubtleCrypto.prototype.importKey",
> "SubtleCrypto.prototype.sign",
> "SubtleCrypto.prototype.unwrapKey",
> "SubtleCrypto.prototype.verify",
> "SubtleCrypto.prototype.wrapKey",
> "window.crypto"
> --


Why do you want to remove Web Platform APIs? Especially those that have no
storage or hardware interaction?

>
>
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Specifying allowed parameter encodings in Mozilla policy

2017-05-23 Thread ryan . sleevi
On Monday, May 22, 2017 at 3:58:21 AM UTC-4, Gervase Markham wrote:
> On 19/05/17 17:02, Ryan Sleevi wrote:
> > I support both of those requirements, so that we can avoid it on a
> > 'problematic practices' side :)
> 
> But you think this should be a policy requirement, not a Problematic
> Practice?
> https://wiki.mozilla.org/CA/Forbidden_or_Problematic_Practices

I think it should be a policy requirement, but there's enough of a legacy
of existing certificates that it also constitutes a 'problematic practice'

> 
> > There's a webcompat aspect for deprecation - but requiring RFC-compliant
> > encoding (PKCS#1 v1.5) or 'not stupid' encoding (PSS) is a good thing for
> > the Web :)
> 
> Sure. I guess I'm hoping an NSS engineer or someone else can tell us how
> we can measure the webcompat impact. Does it require scanning a big
> corpus of certs?

I think you misunderstood. If you were to remove support within NSS, there 
would be a webcompat issue. That impact can be observed via CT.

However, you can conditionally 'gate' support on other factors (e.g. a 
policy control within mozilla::pkix, much like SHA-1): attempt to verify 
the 'right' way, fall back to the 'accept stupidity' way, and fail if 
stupidity is encountered in a cert with a notBefore > some deprecation date. 
That would avoid the immediate webcompat issue and, in O(# of years until 
the last stupid cert expires), remove the fallback path entirely.
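
A rough sketch of that gate (hypothetical types and helper names; the
'strict' check shown is the PKCS#1 v1.5 rule that AlgorithmIdentifier
parameters be an explicit NULL):

    #include <stdbool.h>
    #include <stddef.h>
    #include <time.h>

    /* Hypothetical, simplified certificate view - illustration only. */
    typedef struct {
        time_t notBefore;
        const unsigned char *sigAlgParams;  /* AlgorithmIdentifier params */
        size_t sigAlgParamsLen;
    } CertView;

    /* Assumed deprecation cutoff (2017-01-01T00:00:00Z as a Unix time). */
    static const time_t kStrictCutoff = 1483228800;

    /* Strict: PKCS#1 v1.5 requires parameters to be an explicit NULL. */
    static bool strictly_encoded(const CertView *c)
    {
        return c->sigAlgParamsLen == 2 &&
               c->sigAlgParams[0] == 0x05 && c->sigAlgParams[1] == 0x00;
    }

    /* Lenient: historically, absent parameters were tolerated too. */
    static bool leniently_encoded(const CertView *c)
    {
        return c->sigAlgParamsLen == 0;
    }

    bool accept_parameter_encoding(const CertView *c)
    {
        if (strictly_encoded(c))
            return true;                  /* the 'right' way */
        if (c->notBefore >= kStrictCutoff)
            return false;                 /* stupidity in a new cert: fail */
        return leniently_encoded(c);      /* fallback for legacy certs only */
    }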
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Specifying allowed parameter encodings in Mozilla policy

2017-05-19 Thread Ryan Sleevi
I support both of those requirements, so that we can avoid it on a
'problematic practices' side :)

There's a webcompat aspect for deprecation - but requiring RFC-compliant
encoding (PKCS#1 v1.5) or 'not stupid' encoding (PSS) is a good thing for
the Web :)

On Fri, May 19, 2017 at 9:57 AM, Gervase Markham  wrote:

> Brian Smith filed two issues on our Root Store Policy relating to making
> specific requirements of the technical content of certificates:
>
> "Specify allowed PSS parameters"
> https://github.com/mozilla/pkipolicy/issues/37
>
> "Specify allowed encoding of RSA PKCS#1 1.5 parameters"
> https://github.com/mozilla/pkipolicy/issues/38
>
> I am not competent to assess these suggestions and the wisdom or
> otherwise of putting them into the policy. I also am not able to draft
> text for them. Can the Mozilla crypto community opine on these
> suggestions, and what the web compat impact might be of enforcing them?
>
> Gerv
>
>
> --
> dev-tech-crypto mailing list
> dev-tech-crypto@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Google's past discussions with Symantec

2017-04-27 Thread Ryan Sleevi
(Wearing a Google Hat, if only to share what has transpired)

Symantec has recently shared in
https://www.symantec.com/connect/blogs/symantec-ca-proposal , as well as
https://groups.google.com/d/msg/mozilla.dev.security.policy/LRvzF2ZPyeM/OpvBXviOAQAJ
, a plan for what they believe is an appropriate resolution to the many
grave and serious security issues they've introduced into the Web
Ecosystem. While the community should certainly judge Symantec's proposal
on its merits, as to whether or not it demonstrates a basic understanding
of the issues and whether or not it provides any meaningful steps that were
not already expected of them, as a trusted CA, it is useful to understand
what has transpired over the past two weeks.

As noted in
https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUAKwjihhBs/IIvNKEdHDQAJ
, at the beginning of this month, the Chrome team met with Symantec's
leadership to personally discuss and explain the issues and concerns
raised, despite having been in communication with Symantec over these
issues for months. As the number of issues that Symantec has had was so
great, we were unable to cover our perspective of the many failures and
the concerns they signaled in a single meeting, and thus a second meeting
was scheduled,
as mentioned in
https://groups.google.com/a/chromium.org/d/msg/blink-dev/eUAKwjihhBs/PodHs8n5BAAJ

In both of these meetings, and in the following e-mail exchanges, we
stressed to Symantec the nature of the issues: issues with the
infrastructure, issues with oversight, and issues with audits. These issues
involved not just RAs, but Symantec's employees, whether those responsible
for issuing certificates or those who are tasked with overseeing the
security and compliance of the process itself.

In particular, during these discussions, we explained our perspective of
audits to Symantec. For example, we discussed the ways in which audits were
insufficient to demonstrate security - that is, a clean audit does not
demonstrate that an organization takes security seriously or that they have
meaningfully addressed concerns. We discussed the ways in which audits were
able to be 'gamed' by an organization, such as through limiting the scope
of the audit to only a subset of the activities, such as validation
activities, as a way of avoiding disclosure of the more fundamental
security failures. We stressed that, more than clean audits, we value the
transparency and timeliness of an organization in responding to issues in a
way that fully resolves the issue.

We shared with Symantec how their competitors - organizations such as
GoDaddy and DigiCert - have provided excellent examples for fully
responding to issues in a responsible, timely, and complete manner, even
when it may be disruptive to their customers. While an incident happening
is not ideal, by responding in a way that is beyond reproach, these CAs
have demonstrated an awareness of the security and ecosystem implications.
More importantly, it demonstrates how their competitors have respected the
Baseline Requirements, particularly around the requirement to revoke
certificates that the CA is made aware of not having been issued in
accordance with their CP/CPS, even if no misleading information is present.
We shared this with the hope of encouraging Symantec to take a thoughtful
approach in their proposal and to understand what is expected of them.

We shared how Google has responded to past CA failures - organizations like
DigiNotar, which downplayed the security implications to their customers
and found themselves summarily and permanently revoked, organizations like
WoSign, which misled the web community while actively and knowingly
engaging in prohibited behaviours, and the seriousness of issuing or
failing to supervise subordinate CAs, which places all users - not just
their customers - at risk.

We highlighted the clear and undisputed evidence that Symantec's issues -
https://wiki.mozilla.org/CA:Symantec_Issues - extend well beyond the RA
certificates, and raise concerns about the conduct of their employees, the
security of their infrastructure, their awareness of what they have issued,
and their ability to effectively supervise it.

Following these meetings, we offered Symantec advice on how to make an
effective proposal that could objectively and meaningfully address the
concerns raised, while minimizing the impact to their customers. This was
done with the hope that they would use the opportunity to step up and rise
to the level expected of them, and as clearly demonstrated through the
incident responses of DigiCert, GoDaddy, and GlobalSign in the past month.
By providing a meaningful proposal, particularly one that complied with the
Baseline Requirements and the obligations upon all CAs, they would be able
to demonstrate their understanding and awareness of these issues.

Below is an excerpt of that message, shared with Symantec's CEO and CTO,
sent on April 18. I've removed a few pieces from this, as it appears 

Re: SSL_BYPASS_PKCS11

2017-03-07 Thread Ryan Sleevi
On Tue, Mar 7, 2017 at 3:28 PM, Rob Crittenden  wrote:

> SSL_BYPASS_PKCS11 is marked as deprecated in ssl.h. What are the plans
> for removing it?
>
>
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.28_release_notes
https://bugzilla.mozilla.org/show_bug.cgi?id=1303224
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-05 Thread Ryan Sleevi
On Tuesday, April 5, 2016, Hubert Kario <hka...@redhat.com> wrote:

> On Monday 04 April 2016 12:17:08 Ryan Sleevi wrote:
> > On Mon, Apr 4, 2016 at 11:32 AM, David Woodhouse <dw...@infradead.org
> <javascript:;>>
> wrote:
> > > Do you even have a way for a nickname to be entered in text form,
> > > such that you could "maliciously" be given a PKCS#11 URI instead of
> > > the normal "token:nickname" form? Perhaps a user could edit a
> > > config file? Or is it *all* selected via a GUI, as far as the user
> > > is concerned?
> > David,
> >
> > Let's work back from first principles, because you're being
> > self-contradictory in replies.
> >
> > As you yourself note, this change would mean that any "nickname" an
> > application receives from some source (whether an API, a configuration
> > file, or a command-line flag) suddenly has a new semantic structure
> > over the present NSS.
> >
> > You present this as a positive change - all NSS using applications
> > don't need to change. I've tried, repeatedly, to explain to you how
> > that is an observable API difference. It means that nicknames supplied
> > (again, whatever the API) now have additional structure and form.
> >
> > Your justification seems to be that because you can't imagine my
> > application doing it, I shouldn't be concerned. But just re-read the
> > above and you can see how it affects every application - there's now a
> > new structure and form, and that changes how applications deal with
> > the API (*especially* if they did anything with that configuration
> > flag, which, under present NSS, is perfectly legal)
> >
> > Your change has observable differences. It breaks the API. Thus, it
> > makes sense to introduce a new API. An application cannot safely
> > assume it will 'just work' if your change was integrated - instead,
> > authors would need to audit every API call they interact with
> > nicknames from any source *other* than direct NSS calls, and see if
> > they're affected.
> >
> > That's simply not an appropriate way to handle API changes.
>
> I'm sorry Ryan, but I also don't see how this would break the API.


Does that mean you don't understand the use cases and situations
outlined? Or that you don't believe they are valid use cases? Those
are very different statements.


> Stuff that didn't work previously and now will work is not something I
> would consider API or ABI break.


We have considered such changes multiple times in the past as breaks - most
typically at Red Hat's request! This is why multiple extensions and
behaviours are disabled by default in the TLS stack right now, even though
they are strictly additive. How would you see this as any different?


> I see David argumentation as completely valid and correct - this is
> acceptable change.


I still do not believe this is the case. We can certainly bring it up on
the next NSS call.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Mon, Apr 4, 2016 at 4:09 PM, David Woodhouse  wrote:
> I'm perfectly happy to entertain the notion of adding new functions for
> PK11_FindCertsFromURI() (et al.), but I was looking for *real*
> information about whether it was actually necessary. Which you don't
> seem to be able to provide without disappearing into handwaving and
> hyperbole. So I'll take that as a 'no'. Thanks anyway.

David, are such remarks necessary? Do you believe they help create a
positive community around NSS development? If you disagree, you can
disagree and say so. There are plenty of positive ways to do so, and
such remarks are entirely unnecessary and contribute to a toxic
environment. There are plenty of perfectly professional ways to
disagree, but you don't seem interested in them.

I understand and appreciate that you want the standard to be "Show me
the code." But that's not the standard we set. If it was, there are
plenty of changes that NSS could have done that would have broken Red
Hat's customers - but we intentionally chose to take the path of least
breakage and risk, because that's what having a public, stable API is
about.

It's one thing to disagree, it's another to be unnecessarily rude and
dismissive. I do hope you can see the difference, and might think more
carefully about the type of behaviour you're encouraging by
practicing.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Mon, Apr 4, 2016 at 3:53 PM, David Woodhouse  wrote:
> Of course it's an API change. But as noted, it's an API *addition*, in
> that it makes something work that didn't before.
>
> The criterion for such additions should be "if it isn't a *bad* thing
> for that to start working".
>
> What's missing from your argument is the bit where you explain why it's
> *bad* for an explicitly user-entered PKCS#11 URI to suddenly start
> working. That was, after all, the *point* of suggesting that the
> existing functions should be changed to accept such.

I've already tried to explain this several times to you. I don't feel
there's anything more useful to contribute. I'm not sure if you don't
understand, or you're not convinced, but in either event, I have
neither the time nor energy to continue to try to convince you,
especially when you seem fundamentally opposed to exploring or
articulating the reasons why not to pursue alternatives.

There are plenty of useful features we've introduced that are opt-in
by default, especially when their enablement leads to observable
changes. This is even true of security-relevant changes, due to the
potential to cause unexpected harm. You clearly don't seem interested
in taking a conservative or responsible choice - such as allowing
applications that wish to support PKCS#11 URIs to opt-in.

In any event, there's nothing more to say here, I remain opposed to
this change, and I feel I've shown enough good faith in this, in spite of
the original belligerence, by attempting to explain those reasons and
concerns to you.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Mon, Apr 4, 2016 at 3:45 PM, David Woodhouse  wrote:
> That won't change. Unless you explicitly use a new function that
> provides a URI instead of a nickname, of course.
>
> You will *only* get a URI from direct user input, in a situation where
> a user could already feed you any kind of nonsense.

And if you try to filter out that nonsense - for example, relying on
that format - you end up with bad results. If you try to act upon that
user data, on the assumption it follows the format - you end up with
bad results.

I appreciate your argument "but user provided!", but you seem to be
missing the core point - you're changing the syntax of an API's
arguments, in a way that breaks the previously-held pre and post
conditions. That's an API change.

If we can't agree on that, then it's no surprise that you don't agree
with or appreciate the objections, but I'm not sure what more can be
said, or needs to be said. It seems we've reached an ideological
impasse.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Mon, Apr 4, 2016 at 12:39 PM, David Woodhouse  wrote:
>
> We usually reserve the term "breaks the API" for when something *used*
> to work, and now doesn't. Not when a previously-failing call now
> actually does something useful.

No, sorry David, that's not how we've done stuff in NSS.

When it has an observable difference, when it breaks the contract
previously provided, we prefer not to do so.

The contract that NSS has lived by is that applications have had to
know about the token:nickname vs nickname rule in the APIs. When you
get a nickname for a cert, depending on the function used, you may or
may not have to add the token. If you don't, you get bad results.

So we've had an API contract that the nicknames input to this function
have a syntax and format, because applications have to be aware of how
to construct it. Because applications have to be aware of how to
construct it, applications also have to be aware of how to deconstruct
it.

For example, it's required when importing certs that you don't
duplicate a nickname. So code that imports certs (such as from PFX
files) needs to be able to handle nickname collisions, so it needs to
be able to construct them. Introducing an overloading syntax of
PKCS#11 URIs could also lead to issues there (there's the
CERT_MakeCANickname function, but that doesn't handle the
token:nickname construct). A public API example is
SEC_PKCS12DecoderValidateBags.

This is why I'm arguing that "nickname" is not 'opaque' - you have to
know about the construction to get proper API behaviour - especially
because two public APIs, CERT_FindCertByNickname and
PK11_FindCertFromNickname, will return different results depending on
the construction. PK11_FindCertFromNickname will *only* search that
token, and only return *that* cert, while CERT_FindCertByNickname will
potentially return temporary/in-memory certs if you don't include the
token qualifier.
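
A minimal sketch of those two lookups (the nickname strings are
hypothetical examples, not anything from the thread):

    #include "cert.h"
    #include "pk11pub.h"

    /* Sketch: the same logical lookup via two public APIs, with
     * different scope depending on the "token:" qualifier. */
    void lookup_examples(void)
    {
        /* Searches only the named token; the "token:" prefix is parsed. */
        CERTCertificate *a =
            PK11_FindCertFromNickname("My Token:server-cert", NULL);

        /* Without a token qualifier, this may also return temporary or
         * in-memory certs. */
        CERTCertificate *b =
            CERT_FindCertByNickname(CERT_GetDefaultCertDB(), "server-cert");

        if (a)
            CERT_DestroyCertificate(a);
        if (b)
            CERT_DestroyCertificate(b);
    }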

>
> And the latter is what we're talking about here. If you provide a
> PKCS#11 URI as the 'nickname', that will *fail* right now. And the
> proposal is that such a call will no longer fail.

I'm aware. And in doing so, you're breaking the API contract as to
what the format is to this function. If this was meant to be an opaque
format, and applications were "doing the wrong thing" by reaching into
it, you could make an argument that they were doing it wrong, it was
always opaque, and that it's fine to break them.

However, you can't just go change the format. API compatibility is
more than just ABI compatibility - it's about whether the
preconditions and postconditions hold. And allowing the input to be
different breaks the API preconditions as to what format the nickname
will be in, and those preconditions have indeed propagated into
NSS-using applications.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Mon, Apr 4, 2016 at 11:32 AM, David Woodhouse  wrote:
> I don't see it. I still don't see *any* way for you to get a PKCS#11
> URI anywhere in the memory space of your application, unless you
> specifically ask for one with a new API — or unless you take untrusted
> input from the user or an editable config file, of course.

Exactly so. You fail to recognize this as a change, but it's exactly that.

> I certainly don't see how it could cause you to display such URIs
> directly to the user, as you suggest.

So tell me, what do you expect an application to do that displays the
results of parsing such a configuration (for example, it reads
"token:nickname" from a config file, parses that value, and displays GUI
for "token" and "nickname")? Is your belief that this is not an
acceptable usage of the NSS API, and not a guarantee of the NSS API
contract? Then just say that, and we can have a productive
conversation. But it should be terribly simple, and not require any
creativity or imagination, to see how such a scheme would be broken by
the introduction of "pkcs11:token=Foo;object=bar" - the UI would end up
with a token of "pkcs11" and a 'nickname' of 'token=Foo;object=bar'.
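
To make that concrete, a toy model of the split (plain C; the helper is
hypothetical, not Chrome's actual code):

    #include <stdio.h>
    #include <string.h>

    /* Toy model of the "token:nickname" split the current API contract
     * permits applications to perform. */
    static void split_nickname(const char *input,
                               char *token, size_t tlen,
                               char *nick, size_t nlen)
    {
        const char *colon = strchr(input, ':');
        if (colon) {
            snprintf(token, tlen, "%.*s", (int)(colon - input), input);
            snprintf(nick, nlen, "%s", colon + 1);
        } else {
            token[0] = '\0';  /* no qualifier: internal softoken */
            snprintf(nick, nlen, "%s", input);
        }
    }

    int main(void)
    {
        char token[64], nick[128];

        split_nickname("NSS Certificate DB:my-cert",
                       token, sizeof token, nick, sizeof nick);
        printf("token='%s' nickname='%s'\n", token, nick);

        /* Feed a PKCS#11 URI through the same code: */
        split_nickname("pkcs11:token=Foo;object=bar",
                       token, sizeof token, nick, sizeof nick);
        printf("token='%s' nickname='%s'\n", token, nick);
        /* -> token='pkcs11' nickname='token=Foo;object=bar' */
        return 0;
    }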
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Mon, Apr 4, 2016 at 11:32 AM, David Woodhouse  wrote:
> Do you even have a way for a nickname to be entered in text form, such
> that you could "maliciously" be given a PKCS#11 URI instead of the
> normal "token:nickname" form? Perhaps a user could edit a config file?
> Or is it *all* selected via a GUI, as far as the user is concerned?

David,

Let's work back from first principles, because you're being
self-contradictory in replies.

As you yourself note, this change would mean that any "nickname" an
application receives from some source (whether an API, a configuration
file, or a command-line flag) suddenly has a new semantic structure
over the present NSS.

You present this as a positive change - all NSS using applications
don't need to change. I've tried, repeatedly, to explain to you how
that is an observable API difference. It means that nicknames supplied
(again, whatever the API) now have additional structure and form.

Your justification seems to be that because you can't imagine my
application doing it, I shouldn't be concerned. But just re-read the
above and you can see how it affects every application - there's now a
new structure and form, and that changes how applications deal with
the API (*especially* if they did anything with that configuration
flag, which, under present NSS, is perfectly legal)

Your change has observable differences. It breaks the API. Thus, it
makes sense to introduce a new API. An application cannot safely
assume it will 'just work' if your change was integrated - instead,
authors would need to audit every API call they interact with
nicknames from any source *other* than direct NSS calls, and see if
they're affected.

That's simply not an appropriate way to handle API changes.

If we can't agree on these first principles, there's no point in
discussing Chrome's behaviour, because it simply builds upon them.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Monday, April 4, 2016, David Woodhouse  wrote:

>
> I didn't call you a liar. I simply said that I can't see how the
> statement you made could be anything but false. There are plenty of
> reasons that could be the case — including my own ignorance — which
> don't involve you telling a deliberate untruth.


Then surely you can see there are dozens of ways to state that in a more
productive, less combative tone - although you seem to believe the ends
justify the means.


> > Chrome has logic to preparse the nickname, which NSS does have an
> > internal format for, because it can affect how we present and how we
> > search. In some APIs, you need to prefix the nickname with the token
> > name, while  simultaneously, for our UI needs, we need to filter out
> > the token name before showing it to the user.
>
> None of which is proposed to change.


This is, of course, demonstrably false. One can no longer filter the inputs
to this API if your change is accepted, because the format will have
changed. For example, the colon is no longer the separator between the
token and the nickname.

This is basic, so I'm not sure why you're suggesting it doesn't change.

>
> No. As repeatedly stated, we were *only* talking about allowing
> functions like PK11_FindCertsFromNickname() to accept a RFC7512
> identifier (PKCS#11 URI) in *addition* to the existing NSS nickname
> form. There is *no* way that such a change could possibly have the
> effect you describe.


See above. I just described to you how it can and would happen.

>
> Separately, we would also want to add new functions to obtain the
> RFC7512 identifier for a given object. But obviously those *couldn't*
> overload the existing functions to get nicknames; that would be silly.


And yet it would do exactly that, when indirected through a persistence
layer, which is part of the justification for this change. If the "get the
nickname from this config" step returns a URI - which was explicitly part
of the justification for this change - then it very much is a change to a
"get the nickname" function.


> > 


Seriously? You're being extremely unhelpful David. Do you really think this
positively contributes? You're actively trying to needle, which is entirely
unhelpful to your goal.

>
> Regardless of tone, please try to pay attention to the actual issues,
> and make sure that you're not arguing against a straw man.


Please take your own advice into consideration before replying.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: RFC7512 PKCS#11 URI support

2016-04-04 Thread Ryan Sleevi
On Apr 4, 2016 7:15 AM, "David Woodhouse"  wrote:
>
> Ryan?
>
> Unless you are able to provide an explanation of how this would "break
> Chrome's use of the API", I shall continue to assume that your
> statement was false, and design accordingly.
>
> I certainly can't see how it could have any basis in reality.

Those sorts of statements don't encourage productive discussion - "You're a
liar and I will ignore you and break you if you don't do what I want,
because I don't understand". This will be my last reply, given that. I
would hope in the future you might be more helpful in tone and tenor.

Chrome has logic to preparse the nickname, which NSS does have an internal
format for, because it can affect how we present and how we search. In some
APIs, you need to prefix the nickname with the token name, while
simultaneously, for our UI needs, we need to filter out the token name
before showing it to the user.

Introducing the URI code in the existing functions would be breaking NSS's
implicit and explicit API contracts, and would cause us to display such
URIs directly to the user. This is, in my view, unacceptable as a
contribution.

I tried to productively encourage you to a path that won't break, and that
adheres to NSS's guarantees of stability, but it seems you actively desire
to change the behavior of existing APIs for NSS users.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-03-19 Thread Ryan Sleevi
On Thursday, March 17, 2016, John Dennis <jden...@redhat.com> wrote:

> On 03/17/2016 10:52 AM, Ryan Sleevi wrote:
>
>> On a technical front, Chrome and Firefox, as browsers, have been
>> removing support for the notion of generic URIs, and investing in
>> aligning on the URL spec - that is, making a conscious decision NOT
>> to use URIs as URIs.
>>
>
> Could you clarify this statement?
>
> > NOT to use URIs as URIs
>
> Is this a typo?
>
> --
> John
>

No, it is not a typo.

Firefox, Chrome, and other browsers have been focused on
https://url.spec.whatwg.org as the IETF spec is inadequate and overbroad
and does not reflect the real world.

There's more to that story, but that's the simple answer.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: RFC7512 PKCS#11 URI support

2016-03-19 Thread Ryan Sleevi
On Thursday, March 17, 2016, David Woodhouse  wrote:

>
> It is a fundamental part of all the major Linux desktop distributions,
> and thus fairly much ubiquitous there.


This is a loaded statement, but I still believe this is overstated. But I
don't want to get into a "whose distro is better" pissing match.


> For fun I tried removing it
> recently on OpenSuSE, Fedora and Ubuntu — in all cases it basically
> wanted to remove most of the distro. Running 'dnf remove p11-kit' won't
> even play any more on Fedora. It just tells me that would require
> removing systemd and dnf itself, and tells me 'no'.
>
> So my proposal is that on platforms where p11-kit exists, NSS should
> just link to it. But also, to avoid having to build and ship a separate
> library on platforms which didn't already have it, I think we should
> *import* the URI handling code from libp11-kit. That is mostly isolated
> to one file, of 1305 lines which compiles to roughly 10KiB of code
> under Linux/x86_64.
>
> Does that seem like the correct approach?


I disagree that forking this file is a wise or balanced tradeoff. The
lessons of SQLite show that this just increases maintenance costs and
leads to divergence. I respond to this more below.


> The other open question, although it doesn't block the work at the
> start of the project, is whether we should be extending
> PK11_FindCertFromNickname() to accept RFC7512 URIs or whether we should
> *only* accept URIs in a new function.


I am still strongly opposed to introducing this behaviour to the existing
functions. The nickname functions already have significant magic attached
to them, both in parsing from NSS APIs and in providing to NSS APIs
(filtering or setting the token via parsing or adding to the token name,
respectively). This would definitely break Chrome's use of the API, and
for that reason, I think it is an unacceptable change, as it is not
backwards compatible.

On the topic itself, of supporting PKCS#11 URIs, I remain opposed, and I would
appreciate Richard's take on it. For Chrome, such a feature would have been
useless for our Windows and Mac ports, and *is* useless for our iOS and
ChromeOS ports. Further, we would not expose this functionality for our
Linux port even if it existed, due to cross-platform considerations. On a
technical front, Chrome and Firefox, as browsers, have been removing
support for the notion of generic URIs, and investing in aligning on the
URL spec - that is, making a conscious decision NOT to use URIs as URIs.
Anne on the Mozilla side has been working on that effort, and can probably
speak more to it.

I would much rather that, if this is introduced, it is done behind a
compile-time flag, with its interactions with NSS as a whole kept to a
minimum. I understand and appreciate why Fedora/RHEL distros are interested
in this, but I don't believe it's something that Chrome would want, and I
don't believe it's likely something Firefox would want to ship when it
packages NSS, especially on non-Linux platforms.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: SHA-1 with 'notAfter >= 2017-1-1'

2016-01-21 Thread Ryan Sleevi
On Tue, January 19, 2016 2:56 pm, s...@gmx.ch wrote:
>  Hi
>
>  We're already having some discussions about SHA-1, but I'll split this
>  up into a new thread.
>
>  The initial goal of bug 942515 was to mark certs as insecure that are
>  valid 'notBefore >= 2016-01-01' (meaning issued for use in 2016+) AND
>  also certs that are valid 'notAfter >= 2017-1-1' (meaning still valid
>  in 2017+).
>
>  The first condition has been implemented, but there are some
>  'compatibility' issues with MITM software. [1]
>  The second condition has not been implemented, but it was already
>  announced [2], and it was also considered to set the cut-off half a
>  year earlier, to July 1, 2016. If this should really happen, we need
>  to hurry up on this discussion. Of course the problem mentioned in [1]
>  should be solved first.
>
>  Regards,
>  Jonas

Moving dev-tech-crypto to BCC

You've misread [2]. It is *not* about the notAfter but the notBefore. I
can assure you, based on our telemetry, there will still be some nasty
breakages from gating on the notAfter. The goal of the announcement
(and as agreed by Mozilla, Microsoft, Google, and, of course, the
CA/Browser Forum) is that effective 2017-1-1, it's reasonable to turn off
support for SHA-1.

The only use of the notAfter, in the context of [2], was using that as a
signal to show some form of prominent warning in the developer console.
And that's been implemented for some time, AFAIK.

So the remaining implementation of [2], based on Firefox's release
calendar, lands around Firefox 52 [3], and thus needs to be implemented
sometime around late October / early November 2016.


[2]
https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/
[3] https://wiki.mozilla.org/RapidRelease/Calendar


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


[ANNOUNCE] NSS 3.19.2 Release

2015-06-19 Thread Ryan Sleevi
The NSS Development Team announces the release of NSS 3.19.2

Network Security Services (NSS) 3.19.2 is a patch release for NSS 3.19.

No new functionality is introduced in this release. This release addresses
a backwards compatibility issue with the NSS 3.19.1 release.

Notable Changes:
* In NSS 3.19.1, the minimum key sizes that the freebl cryptographic
implementation (part of the softoken cryptographic module used by default
by NSS) was willing to generate or use were increased - for RSA keys, to
512 bits, and for DH keys, to 1023 bits. This was done as part of a
security fix for Bug 1138554 / CVE-2015-4000. Applications that requested
or attempted to use keys smaller than the minimum size would fail.
However, this change in behaviour unintentionally broke existing NSS
applications that need to generate or use such keys, via APIs such as
SECKEY_CreateRSAPrivateKey or SECKEY_CreateDHPrivateKey.

In NSS 3.19.2, this change in freebl behaviour has been reverted. The fix
for Bug 1138554 has been moved to libssl, and will now only affect the
minimum key strengths used in SSL/TLS.


The full release notes are available at
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.19.2_release_notes

In addition to this release, the release notes for NSS 3.19 and NSS 3.19.1
have been updated to highlight that both releases contain important
security fixes - CVE-2015-2721 in NSS 3.19, CVE-2015-4000 in NSS 3.19.1.

The updated release notes for NSS 3.19 are available at
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.19_release_notes

The updated release notes for NSS 3.19.1 are available at
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.19.1_release_notes


The HG tag for NSS 3.19.2 is NSS_3_19_2_RTM. NSS 3.19.2 requires NSPR
4.10.8 or newer.

NSS 3.19.2 source distributions are available on ftp.mozilla.org for
secure HTTPS download:
https://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_19_2_RTM/src/

A complete list of all bugs resolved in this release can be obtained at
https://bugzilla.mozilla.org/buglist.cgi?resolution=FIXED&classification=Components&query_format=advanced&product=NSS&target_milestone=3.19.2

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-12 Thread Ryan Sleevi
On Tue, May 12, 2015 9:44 am, Peter Bowen wrote:
  How about an even simpler solution?   Don't have p11-kit load the
  PKCS#11 modules, just provide a list of paths and let the application
  pass those to NSS.  That way the application can choose to
  transparently load modules without user interaction, offer a UI option
  for load system modules, or provide a pick list of module to load.

Right, that's known as an NSS Module DB (and is in fact what the
pkcs11.txt parser is)

The shared library reports back the supported modules and configuration
flags, and then NSS loads and initializes them as if they were first-class
modules.

http://mxr.mozilla.org/nss/source/lib/sysinit/nsssysinit.c is an example
of this.

[Yes, it relies on a non-standard extension to PKCS#11's C_Initialize;
caveat emptor]

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-11 Thread Ryan Sleevi
On Mon, May 11, 2015 4:09 am, David Woodhouse wrote:
  I completely agree that Chrome should only ever load the modules which
  are configured to be loaded into Chrome. I'm surprised you feel the
  need to mention that.

Because you still don't understand, despite how many ways I'm trying to
say it.

It's not simply sufficient to load module X into Chrome or not. p11-kit's
security model is *broken* for applications like Chrome, at least with
respect to how you propose to implement it.

Let's say you've got Module X.

Today, Chrome controls loading of modules. It can load module X into the
browser process (and trusted process) and *NOT* load Module X into a
sandboxed zygote process that it then uses to start renderers and such.

Because Chrome fully controls module loading, and uses the NSS documented
APIs, it can ensure that things are appropriately controlled. It can
guarantee exactly which modules can be loaded into the untrusted process -
such as the read-only, non-modifiable root trust module.

You still don't seem to understand that distinction, because you keep
calling it broken. No, it's only broken when something like p11-kit
comes along and violates the API guarantees.

That's why I keep reiterating that the reasons for NSS's per-application
config extend beyond just "no one's gotten around to it" and are deeply
intertwined with legitimate applications' needs that p11-kit so far fails
to respect.

I don't know how many ways I can say it, but I'm trying to provide a
simple example that can be empirically validated about how your proposal
*fails* and causes security issues.

I think you keep leaping ahead of yourself in proposing, so I would again,
as I have privately, encourage you to go back, start from first
principles, and make sure you *understand the requirements* before jumping
too far into proposing solutions. I think a solution can be found, but I
think we're going to continue to waste time if every other email jumps
three steps ahead.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-10 Thread Ryan Sleevi
On Sun, May 10, 2015 12:57 pm, David Woodhouse wrote:
  On Sun, 2015-05-10 at 12:47 -0700, Ryan Sleevi wrote:
  If the user requests NSS to load a module, it should load that module.
  And that module only. Period.

  The canonical per-user way to request an application to load a module is

NSS_Initialize and SECMOD_LoadModule.

Respect the API. Don't violate the API.
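
For reference, a minimal sketch of that documented path (the library path
and module name are hypothetical examples):

    #include "nss.h"
    #include "secmod.h"

    /* Sketch: explicit, per-application module loading via the public
     * API.  The module is loaded only because this application asked. */
    void load_one_module(void)
    {
        SECMODModule *mod = SECMOD_LoadUserModule(
            "library=/usr/lib64/opensc-pkcs11.so name=\"OpenSC\"",
            NULL /* parent */, PR_FALSE /* recurse */);

        if (mod && mod->loaded) {
            /* The module's slots are now visible to this process. */
        }
        if (mod) {
            SECMOD_UnloadUserModule(mod);
            SECMOD_DestroyModule(mod);
        }
    }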

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-10 Thread Ryan Sleevi
On Sat, May 9, 2015 3:30 pm, David Woodhouse wrote:
  On Fri, 2015-05-08 at 15:07 -0700, Ryan Sleevi wrote:
  Yes, it should. You'll introduce your users to a host of security issues
  if you ignore them (especially for situations like Chrome). For example,
  if you did what you propose to do, you'd be exposing people's smart card
  modules to arbitrary sandboxed Chrome processes

  So arbitrary sandboxed Chrome processes already have free rein to use
  certificates in my NSS database? Isn't that a problem *already*? And if
  people ever want to use the PKCS#11 token in their web browser, they're
  going to need to expose it anyway. And if they don't, the p11-kit
  configuration does permit a module to be visible in some applications
  and not others.

No David, that's quite the opposite of what I was saying. If you did what
you propose - patching to ignore the noModDB & friends flags - you'd be
introducing security issues. The security issues do not exist now. Your
patch would introduce them.

You don't need to expose it to the sandbox to use PKCS#11 in the web
browser. That's not how modern sandboxed browsers work.

And yes, your conclusion further emphasizes my original point - some
applications explicitly do not wish to have p11-kit introduced, and by
just blithely introducing it, you're introducing security vulnerabilities.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-10 Thread Ryan Sleevi
On Sat, May 9, 2015 3:30 pm, David Woodhouse wrote:
  No, you should be able to do it w/o patching NSS.

  OK... how?

  If the Shared System Database wasn't such an utter failure, not even
  being used by Firefox itself, then just installing it there would have
  been a nice idea. But *nothing* used the Shared System Database, and
  there isn't even a coherent documented way for NSS users to discover
  whether they should use it or not. If calling NSS_Initialize() with a
  NULL configdir worked and did the right thing (sql:/etc/pki/nssdb where
  it's set up, or sql:$HOME/.pki/nssdb otherwise), that would be nice...
  but it doesn't.

This is demonstrably not true - Chrome being one such case.

Or did you mean Fedora's particular interpretation of how things should look?

Just use the canonical way to configure NSS to look for tokens - in which
it also finds your meta-configuration token - namely sql:$HOME/.pki/nssdb

And lean on the applications that don't respect NSS's configuration
semantics rather than trying to redefine NSS's configuration semantics.
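
In code, that canonical path is roughly (a sketch; the directory shown is
the conventional location mentioned above):

    #include "nss.h"

    /* Sketch: initialize NSS against the conventional per-user shared
     * database; NSS then discovers configured tokens via pkcs11.txt in
     * that directory.  The path is illustrative. */
    SECStatus open_user_db(void)
    {
        return NSS_Initialize("sql:/home/me/.pki/nssdb",
                              "" /* certPrefix */, "" /* keyPrefix */,
                              "" /* secmodName */, 0 /* flags */);
    }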

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-10 Thread Ryan Sleevi
On Sun, May 10, 2015 12:31 pm, David Woodhouse wrote:
  You don't need to expose it to the sandbox to use PKCS#11 in the web
  browser. That's not how modern sandboxed browsers work.

  That sounds like a bit of a failure of the sandboxing to me. Just so I
  understand what you're saying... regardless of whether the browser
  complies with the system policy for PKCS#11 modules, it's considered
  acceptable that a sandbox can happily authenticate using any of the
  certificates in my NSS database and any of the PKCS#11 tokens that I
  have manually enabled?

No, you don't understand what I'm saying, and have reached a conclusion
that again is the opposite.

I will try to break it down to its core parts:

- Don't load a module unless the user has explicitly asked or configured
that module to be loaded.
- Do not patch NSS to load modules outside of the explicitly requested
modules.

Your patch fails on both of those.

It's really that simple. If you don't try to patch NSS to do something
crazy, it will, surprisingly enough, not do something crazy.

And to be as abundantly explicit as I can be: No, your assumptions about
how sandboxing works are quite flawed. The fact that the module is *not*
loaded in the sandbox is the thing to preserve, and your patch destroys
that.

If the user requests NSS to load a module, it should load that module. And
that module only. Period.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: PKCS#11 platform integration

2015-05-08 Thread Ryan Sleevi
On Fri, May 8, 2015 5:38 am, David Woodhouse wrote:
  These days it does. Modern systems ship with p11-kit², which exists
  precisely to fill that gap and provide a standard discoverable
  configuration for installed PKCS#11 modules.

Your citation ( http://p11-glue.freedesktop.org/p11-kit.html ) fails to
support your claim that modern systems ship it, as I've noted elsewhere.

  Although it happens to be Fedora which is first, we obviously expect
  other distributions and operating systems to follow suit — in
  practice, even if not with official packaging policy mandates.

And of course, this note - that it's Fedora only - directly counters the
claim above that "modern systems ship" it (the implied subject being that
_all_ modern systems do so, which is incorrect). It's not even fair to say
_some_ modern systems support it, since it seems, from your evidence, that
_one_ modern system requires it.

  Does this seem like the right approach?

No, you should be able to do it w/o patching NSS.

  Under precisely what
  circumstances should we be doing it — should it be affected by the
  noModDB and noCertDB flags?

Yes, it should. You'll introduce your users to a host of security issues
if you ignore them (especially for situations like Chrome). For example,
if you did what you propose to do, you'd be exposing people's smart card
modules to arbitrary sandboxed Chrome processes - a step BACK for security
that would introduce huge attack surface (via transitive loading of all
those modules' dependencies, including p11-kit's)

  We may wish to give some consideration to how that would work when it
  is being loaded into an NSS application which might have its own
  database in another directory (some broken applications like Firefox
  still don't use ~/.pki/nssdb ☹) or indeed in the *same* directory
  (like Chrome does).

And consideration to some applications (like Chrome) that would not want
to load it.

As I've said elsewhere, I'm not fundamentally opposed to p11-kit, but I do
hope you can take these considerations about approach and claims into
account before advocating support. I appreciate that you're enthusiastic,
and I'm not trying to tell you no, but I am trying to help you understand
that you're not exactly going to win advocates with the current approach.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: [bulk] PKCS#11 platform integration

2015-05-08 Thread Ryan Sleevi
On Fri, May 8, 2015 6:09 am, David Woodhouse wrote:
  On Linux distributions it *is* the platform's
  mechanism of choice for configuring PKCS#11 tokens. NSS needs to
  support it if it wants to integrate with the platform properly.

I'm sorry to continually push back on this, but you continue to make this
claim. This is a heady claim that lacks any evidence (so far) to support
it, beyond a particular distro.

1) You can't really talk about "the platform's mechanism" for Linux
unless/until it's part of the LSB. Beyond that, you're just waving your
hands in the air saying "for some distros". Linux is a world where a
thousand flowers bloom and a distro exists for every particular person's
needs, so you can't just make broad sweeping statements like this.

2) It is _an_ option for the platform. Indeed, I'd suggest you've got the
cart before the horse. AFAICT, NSS *is* part of the LSB (
http://www.linuxbase.org/betaspecs/lsb/LSB-Common/LSB-Common/requirements.html
) but p11-kit is not. So you can equally argue (and more accurately argue)
that p11-kit is failing to integrate with the platform properly by failing
to register itself with NSS.


I have no fundamental objections to p11-kit - indeed, I think it's quite
handy. But I do take issue with such broad sweeping claims being used to
argue for supporting it. It's an option; I get that some distros really
like it, and I *personally* like it for some cases, but that does *not*,
by itself, argue that it's a good thing.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS support for RFC7512 PKCS#11 URIs

2015-05-05 Thread Ryan Sleevi
On Tue, May 5, 2015 8:55 am, David Woodhouse wrote:
  I'm talking about the serial numbers of the certs issued *by* the two
  My CAs.

Good to have that clarification :)

Different CAs (in as much as different public keys), but with the same
DER-encoded subject name (not necessarily the same DER-encoded issuer
name, but that's irrelevant), and the same starting serial number (1).

The question is how to distinguish certificate (or public key) objects
between the two, so that we could construct an unambiguous identifier.

I'm ignoring AKI/SKI - they're not unique disambiguators, even though some
people use them as such. Treat them as optional, untrusted. That it
works for you is great, but same problem here - not guaranteed.

We'll call them CA 1 and CA 2, even though they share the same subject,
because they share different public keys.

Let's look at the PKCS#11 attributes for the CKO_CERTIFICATE object type
with CKC_X_509

CKA_SUBJECT - required. Will be identical for the two objects
CKA_VALUE - required if CKA_URL is absent; unique
CKA_URL - required if CKA_VALUE is absent; unique if the certs are different
CKA_SERIAL_NUMBER - optional

(CKA_ID is optional, as are CKA_HASH_OF_[SUBJECT/ISSUER]_PUBLIC_KEY, the
latter two wouldn't be sufficient under same-CA rekey though)

So to uniquely identify a certificate, you look up for *all* CKA_SUBJECT
matches, then get the CKA_VALUE/CKA_URL to do the comparisons.
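
Sketched directly against the Cryptoki API (error handling pared down;
the CKA_URL case is omitted; this is not NSS source):

    #include <stdlib.h>
    #include <string.h>
    #include "pkcs11.h"   /* OASIS Cryptoki header */

    /* Sketch: find the CKC_X_509 object whose CKA_SUBJECT matches and
     * whose CKA_VALUE equals the already-known DER cert. */
    CK_OBJECT_HANDLE find_cert(CK_FUNCTION_LIST_PTR p11, CK_SESSION_HANDLE s,
                               CK_BYTE_PTR subj, CK_ULONG subjLen,
                               CK_BYTE_PTR der, CK_ULONG derLen)
    {
        CK_OBJECT_CLASS cls = CKO_CERTIFICATE;
        CK_CERTIFICATE_TYPE ctype = CKC_X_509;
        CK_ATTRIBUTE tmpl[] = {
            { CKA_CLASS, &cls, sizeof cls },
            { CKA_CERTIFICATE_TYPE, &ctype, sizeof ctype },
            { CKA_SUBJECT, subj, subjLen },   /* may match several objects */
        };
        CK_OBJECT_HANDLE h, found = CK_INVALID_HANDLE;
        CK_ULONG n;

        if (p11->C_FindObjectsInit(s, tmpl, 3) != CKR_OK)
            return CK_INVALID_HANDLE;
        while (found == CK_INVALID_HANDLE &&
               p11->C_FindObjects(s, &h, 1, &n) == CKR_OK && n == 1) {
            CK_ATTRIBUTE val = { CKA_VALUE, NULL_PTR, 0 };
            if (p11->C_GetAttributeValue(s, h, &val, 1) != CKR_OK)
                continue;                     /* no CKA_VALUE: skip */
            val.pValue = malloc(val.ulValueLen);
            if (val.pValue &&
                p11->C_GetAttributeValue(s, h, &val, 1) == CKR_OK &&
                val.ulValueLen == derLen &&
                memcmp(val.pValue, der, derLen) == 0)
                found = h;                    /* disambiguated by value */
            free(val.pValue);
        }
        p11->C_FindObjectsFinal(s);
        return found;
    }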

Does that work for PKCS#11 URIs? Absolutely not. That's because there IS
NOT a unique disambiguator that can be provided a priori if you don't know
the certificate. As Bob notes, it's entirely valid for two objects to have
the same CKA_ID and distinct CKA_SUBJECTs. In fact, that's *explicitly*
called out in the description of CKC_X_509

Since the keys are distinguished by subject name as well as identifier,
it is possible that keys for different subjects may have the same CKA_ID
value without introducing any ambiguity.

Further, I'm having a hard time finding a normative reference in any of
the PKCS#11 specifications that requires CKA_ID values to be unique for a
given CKA_SUBJECT (only non-normative descriptions that they should be, or
are intended to be).

Is this a problem in practice? Unknown. But it does indicate that the
PKCS#11 URIs are not in and of themselves sufficient to uniquely and
unambiguously identify an object, per spec.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS support for RFC7512 PKCS#11 URIs

2015-05-04 Thread Ryan Sleevi
On Mon, May 4, 2015 1:25 pm, David Woodhouse wrote:
  Surely that's not unique? Using the above example, surely the first
  certificate issued by the 2010 instance of 'My CA', and the first
  certificate issued by the 2015 instance, are both going to have
  identical CKA_ISSUER and CKA_SERIAL_NUMBER, aren't they?

No, every issuer-and-serial pair must be unique. If the 2010 and 2015
instances issue under the same encoded issuer name, the certificates they
issue need distinct serial numbers.
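
(NSS's own public lookup is keyed on exactly that pair - a minimal
sketch:)

    #include "cert.h"

    /* Sketch: NSS keys certificate lookups on (issuer, serial), which is
     * why the pair must be unambiguous within a database or token. */
    CERTCertificate *lookup_by_issuer_and_serial(CERTIssuerAndSN *isn)
    {
        return CERT_FindCertByIssuerAndSN(CERT_GetDefaultCertDB(), isn);
    }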


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Remove Legacy TLS Ciphersuites from Initial Handshake by Default

2015-03-16 Thread Ryan Sleevi
On Mon, March 16, 2015 1:06 pm, Erwann Abalea wrote:

  Phase RSA1024 out? I vote for it. Where's the ballot? :)

This is a browser-side change. No ballot required (the only issue *should*
be non-BR compliant certificates issued before the BR effective date)

https://code.google.com/p/chromium/issues/detail?id=467663 for Chrome, but
unfortunately, can't share the user data as widely. Perhaps Mozilla will
consider collecting this as part of their telemetry (if they aren't
already)

This still leaves 'internal CAs' as an open issue. However, we can limit
the enforcement to signatures that chain to a trusted CA, significantly
reducing the risk to end users of state-sponsored key factoring of
1024-bit keys. Which is certainly a reasonable concern, even for the most
paranoid.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Remove Legacy TLS Ciphersuites from Initial Handshake by Default

2015-03-16 Thread Ryan Sleevi
On Mon, March 16, 2015 10:24 am, Erwann Abalea wrote:
  On Monday, March 16, 2015 at 10:29:08 UTC+1, Kurt Roeckx wrote:
  On 2015-03-14 01:23, kim@safe-mail.net wrote:
   Is there an agreed timeline for deprecation of the technologies listed
  in the initial posting? We should be proactive in this field.
  
    For example, last month a plan to deploy 12000 devices to medical
   professionals was finalised, despite the devices using 1024-bit
   RSA keys - on the grounds that it works in current browsers and will
  likely keep working for the next 10 years. I am not happy about such
  outcomes.
 
  Whoever thinks that this will keep working for the next 10 years is
  clearly misinformed.  CAs should not be issuing such certificates.  If
  they do, please let us know which CA does that so we can talk to them
  about revoking them.

  There's nothing in the OP post saying those certificates would be issued
  under a public CA.

My goal is to phase these out in Chrome by the end of the year. We have
ample evidence that suggests this is reasonable.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Separating Firefox crypto code into removable parts

2015-03-08 Thread Ryan Sleevi
On Sat, March 7, 2015 12:20 pm, kim.da...@safe-mail.net wrote:
  Looking for comments about feasibility of breaking-up Firefox
  TLS/SSL-handling code into easily-removable sections.

  I want to fully separate NSS code from code that handles:

  1) MD5 signature handling

  2) SHA1 signature handling

  3) RSA key exchange

  4) CBC mode

  5) RC4 ciphers

  6) SSLv3

  7) TLSv1.0, TLSv1.1

  8) SEED, IDEA, 3DES, Camellia cyphers

  9) Secondary/Fallback handshake

  10) Insecure TLS version feedback

  and likely others.

  The intention is to phase out and eventually remove support for all of the
  above.

  Disabling those technologies in browser options is insufficient.
  FREAK-like attacks will exploit holes in the disabling mechanism to
  reenable them. Alternatively, malware, misguided forks, or clueless users
  will change those settings for the worse.

  Removing code from the source code is the only secure way. This also helps
  code maintainability, review, and certifiability.

  To facilitate easier code removal, the code needs to be properly separated
  first - and that is the goal of this project.

Hi Kim,

Unfortunately, I believe you start from a flawed premise, and thus the
rest sort of falls apart.

That is, while removing the code may help long term, separating the code
out does not help maintainability or readability - and in fact, can
greatly hinder them.

Plenty of implementations that have striven to fully separate out their
TLS 1.0, TLS 1.1, and TLS 1.2 stacks have thus fallen victim to bugs. For
example, I can think of one prominent implementor that failed to fix BEAST
because they only fixed one of their implementations, and left the others
vulnerable.

TLS version fallback, for example, is not implemented in NSS.

MD5/SHA1 signature handling is already separated, but you still need it in
some contexts, but not others. Even more so CBC - whose use in NSS extends
far beyond SSL and has many independent applications that would need
independent timelines for deprecation.

In short, I think the plan of separating out that code to be
well-intentioned, but misguided. It's also something that, as a mature
software project with many consumers, will require care and caution rather
than wholesale deleting. And since I've been accused in the past
(amusingly so) of trying to keep backdoors in, no, this is about the
tension between a responsible and open software development practice and
balancing the needs of security.

I also don't think there's the same urgency to delete the code. The
argument you give - that malware will enable it - is unfortunately an
unsolvable problem. I refer you to
https://technet.microsoft.com/en-us/library/hh278941.aspx to understand
why this is inherently so.

The other arguments are equally problematic. Misguided forks (there have
been none yet) and clueless users (who, if we accept them as a threat model,
can do anything and everything hostile to security, including running
malware) are similarly specious.

Rather than focus on removing the code, perhaps it's better to think about
how best to phase things out - with attention being paid to the attendant
issues.

For example, NSS is enabling TLS 1.2 by default (
https://bugzilla.mozilla.org/show_bug.cgi?id=1083900 ) and disabling SSL 3
by default ( https://bugzilla.mozilla.org/show_bug.cgi?id=1140029 ), but
both of these are huge steps with *real* interop issues for applications
using NSS. We can't just ignore those and say "Too bad, so sad" -
including for the most obvious reason, in that applications would just
switch to TLS libraries that understood and appreciated interop concerns.
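
(For reference, a minimal sketch of how an NSS application can already pick
its own version floor and ceiling, without any code deletion - the API is
from ssl.h, the constants from sslproto.h:)

    #include "ssl.h"       /* SSL_VersionRangeSet */
    #include "sslproto.h"  /* SSL_LIBRARY_VERSION_* constants */

    /* Restrict one socket to TLS 1.0 through TLS 1.2, dropping SSL 3.0
     * entirely, regardless of the library defaults. */
    static SECStatus restrict_versions(PRFileDesc *fd)
    {
        SSLVersionRange range = {
            SSL_LIBRARY_VERSION_TLS_1_0,  /* minimum version */
            SSL_LIBRARY_VERSION_TLS_1_2   /* maximum version */
        };
        return SSL_VersionRangeSet(fd, &range);
    }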

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Interested in reviving PSS support in NSS

2015-02-15 Thread Ryan Sleevi
On Sun, February 15, 2015 3:07 pm, Hanno Böck wrote:
  Unfortunately the code never got fully merged. Right now the state is
  that code for the basic functions exists in freebl, but all upper layer
  code is not merged. I think if I remember correctly the code currently
  in freebl will also not work in some corner cases (keys mod 8 != 0).

This is true for all of the RSA code in NSS (not handling non-mod 8 keys).
This is also fairly consistent across several cryptographic libraries.

I refactored this code to move it from softoken (the PKCS#11 shim) into
freebl (the crypto primitives), where unit tests were also added (to
bltest). This was tracked in
https://bugzilla.mozilla.org/show_bug.cgi?id=836019

There were also issues where it wasn't exposed (at the PKCS#11 layer) /
wasn't validating parameters correctly, but those should be fixed.

  The bugtracker entry is here:
  https://bugzilla.mozilla.org/show_bug.cgi?id=158750

  I would be motivated to take up that work again if someone from the
  NSS team would be willing to work on merging the code. I'd be interested
  in this because I want to make a proposal to get PSS support into TLS
  1.3 and it would certainly help if I could say that all major TLS
  libraries support it already.

I'm quite mixed on this as the motivation (PSS in TLS), but I think the
state of implementing and exposing PSS through the appropriate layers
(except for TLS) is worth doing.

Here's the current status of PSS support:
What works:
- It's implemented (and tested) in freebl/ with bltests
- It's implemented (but not tested) in softoken/ as a PKCS#11 mechanism
for C_Sign/C_Verify

What doesn't:
- There's no way to reach it from PK11_Sign, due to the fact that
PK11_Sign doesn't take a CK_MECHANISM_TYPE/SECITEM* param pair (like
PK11_Encrypt), and thus you can't supply the CK_RSA_PKCS_PSS_PARAMS that
you might have
- PK11_MapSignKeyType is used rather extensively, and it maps the
SECKEYPrivateKey KeyType incorrectly (rsaPssKey should map to PSS, but
doesn't)
- The more general issue of SECKEYPublicKey/SECKEYPrivateKey not handling
the PSS appropriately (nearly every switch on KeyType will handle this
wrong, for example; further, it's become enshrined in plenty of downstream
code as to how to handle PSS keys here)
- The SGNContext*/VFYContext* interface is entirely borked for PSS.
  - It assumes all the parameters can be expressed via a SECOidTag. That
is, it's missing hash alg, mgf alg, salt length (e.g. the
RSASSA-PSS-params construction)
  - SEC_GetSignatureAlgorithmOidTag is borked for PSS
  - SECOID_GetAlgorithmTag is borked for PSS
- CERT_VerifySignedData doesn't handle PSS (e.g. policy checking flags)


Of course, I'm ignoring all the issues with the nightmare that is the SPKI
for PSS/OAEP and those behaviours. If you're expecting CAs to issue such
certificates, I would call it extremely unlikely.
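
(For the curious, a rough sketch of the PKCS#11-level parameters that
PK11_Sign currently has no way to carry - the structure is standard
PKCS#11; the surrounding calls are illustrative only:)

    /* The per-signature parameters a fixed PK11_Sign-style API would need
     * to accept alongside the mechanism type. */
    CK_RSA_PKCS_PSS_PARAMS pssParams = {
        CKM_SHA256,      /* hashAlg: digest applied to the message */
        CKG_MGF1_SHA256, /* mgf: mask generation function          */
        32               /* sLen: salt length, in bytes            */
    };
    CK_MECHANISM mech = {
        CKM_RSA_PKCS_PSS, &pssParams, sizeof(pssParams)
    };
    /* ... C_SignInit(session, &mech, privKey); C_Sign(...); ... */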

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Reducing NSS's allocation rate

2014-11-11 Thread Ryan Sleevi
On Tue, November 11, 2014 10:26 am, Nicholas Nethercote wrote:
  On Mon, Nov 10, 2014 at 7:06 PM, Ryan Sleevi
  ryan-mozdevtechcry...@sleevi.com wrote:
 
  Not to be a pain and discourage someone from hacking on NSS

  My patches are in the following bugs:

  https://bugzilla.mozilla.org/show_bug.cgi?id=1094650
  https://bugzilla.mozilla.org/show_bug.cgi?id=1095307
  https://bugzilla.mozilla.org/show_bug.cgi?id=1096741

  I'm happy to hear specific criticisms.

  Nick
  --

Not trying to be a pain, but I don't think it's fair to position it like
that. I'd rather the first question be answered - each one of your patches
adds more branches and more complexity. That part is indisputable, even if
it may be seen as small. However, what measures do we have to ensure that
this is meaningfully improving any objective measure of performance (other
than allocation churn, which allocators are exceptionally capable of
handling)? How do we ensure this doesn't regress? Otherwise, we're adding
complexity without any benefits. And I don't think there are any - or at
least, I haven't seen any - other than reduced allocations.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Reducing NSS's allocation rate

2014-11-10 Thread Ryan Sleevi
On Mon, November 10, 2014 6:51 pm, Nicholas Nethercote wrote:
  Hi,

  I've been doing some heap allocation profiling and found that during
  basic usage NSS accounts for 1/3 of all of Firefox's cumulative (*not*
  live) heap allocations. We're talking gigabytes of allocations in
  short browsing sessions. That is *insane*.

Could you explain why it's insane? I guess that's sort of why I poked you
on the bug. Plenty of allocators are rather smart under such churn, and
NSS itself uses an Arena allocator designed to re-use some (but
understandably not all) allocations.

Is there a set of performance criteria you're measuring? For example,
we're spending X% of CPU in the allocator, and we believe we can reduce it
to Y%, which will improve test Z.

Not to be a pain and discourage someone from hacking on NSS (we always
need more NSS hackers), but I guess I'm just trying to understand the
complexity (and locally caching / lazy instantiating always adds some
degree of complexity, though hopefully minor) vs performance (which is
hopefully being measured) tradeoffs. Further, if we aren't continuously
monitoring this, meaning it's not a metric integrated as something we
watch, then it seems very easy that any improvements you make can quickly
regress, which would be unfortunate for your work.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SSLKEYLOGFILE always enabled

2014-07-17 Thread Ryan Sleevi
On Wed, July 16, 2014 11:42 pm, Falcon Darkstar Momot wrote:
  When it comes to key material, it's an outstanding idea to err on the
  side of caution.

  Does anyone actually require this feature in a non-debug build?  If not,
  then it's completely unreasonable to leave it in such builds, even if
  it's not the weakest link and even if it doesn't break compliance.

  --Falcon Darkstar Momot
  --Security Consultant, Leviathan Security Group

Quite a few people, especially users of Chrome and Firefox, particularly
those working to implement or deploy SPDY or HTTP/2.0 (which are over TLS,
ergo Wireshark/pcap can be a pain).

Given that the threat model requires a local attacker with the same
privileges as either of these applications (or influence over the NSS
environment), can you describe a threat that could not be equally
accomplished through other, similarly trivial means (e.g. binary compromise)?
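
(For anyone unfamiliar with the feature, the workflow is roughly this - the
path is just an example, and the log line shown is the NSS key log format:)

    $ SSLKEYLOGFILE=/tmp/tls-keys.log ./firefox
    # each appended line looks like:
    #   CLIENT_RANDOM <client_random, hex> <master_secret, hex>
    # Wireshark can then decrypt a packet capture when pointed at the file.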

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SSLKEYLOGFILE always enabled

2014-07-15 Thread Ryan Sleevi
On Tue, July 15, 2014 1:11 pm, Tom Ritter wrote:
  Is having it in by default useful enough to outweigh the risk?

  When the Dual_EC_DRBG news stories were blowing up, it was revealed
  that you could switch to it by just changing the Windows Registry.
  It's a Windows-supported backdoor - no malicious code needs to stay
  running on your system - just flip that bit, and delete yourself.
  After that, you're all set.

  Similarly, having this feature provided by default seems like it
  provides a very easy, supported way to extract sensitive key data to
  the filesystem or some other covert channel - without invalidating
  package signatures, hashes of libraries or binaries, etc.

  Don't get me wrong, it's invaluable to be able to use it for
  debugging, but I question to need to have it enabled by default...

  -tom

Either you control your machine, or you do not. Either the OS provides
robust controls, or it does not.

If an attacker has physical access to your machine and can set this, or if
an attacker can control your operating environment such that the
environment variable is set, it's all over. This is no different than
malware hijacking your browser of choice and hooking the API calls - which
we do see for both Firefox and Chrome.

Now, we can talk about grades of attacks, and finer nuances, but for a
debug bit that has to be set client side, it really seems a no-op, and one
which common sense suggests is not a reasonable threat model.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Chrome: From NSS to OpenSSL

2014-02-03 Thread Ryan Sleevi
On Mon, February 3, 2014 4:30 am, David Woodhouse wrote:
  On Mon, 2014-02-03 at 12:13 +, Alan Braggins wrote:
 
  Having support for PKCS#11 tokens at all is a pro, even if one
  irrelevant to the vast majority of users.

  That gets less true as we start to use PKCS#11 a little more. It isn't
  *just* about hardware tokens — things like gnome-keyring offer
  themselves as a PKCS#11 token, giving you a consistent certificate store
  which is automatically unlocked when you log in, etc. Integration with
  key stores on other operating systems is also fairly easy if you have a
  PKCS#11 interface to them.

At the risk of diverging even further from a suitable NSS discussion, I
will just say that it is categorically *not* the case that it's fairly
easy, as the concepts used in PKCS#11 do not reliably map well to
other cryptographic APIs (CDSA/CSSM, Security.framework, CryptoAPI, CNG).
This is already true in Chromium's integration, where we use private (in
that they're not upstream, but are open sourced) patches to NSS to avoid
having to emulate PKCS#11 tokens. This is especially true when considering
the different UI interaction requirements.


  And it isn't just about keys either — we are starting to use it to get a
  coherent set of trusted certificates system-wide. Which has historically
  been a real PITA on Linux with a different set of trusted certs per
  crypto library, or even per application. Hell, even *Firefox* isn't
  properly using the NSS Shared System Database setup, and stupidly
  persists in having its *own* separate store. Fedora 19 starts to make
  sense of this and exports the trust database to a flat file for
  compatibility with OpenSSL, but it's not ideal and still only addresses
  *certs*, not keys.

See p11-kit, which was also referenced in the design doc, which Red Hat
actively contributes to, and which I think is an increasingly viable
solution both for NSS and other applications. This is something we're
looking at supporting, even with our migration to OpenSSL, so that whether
you're using Firefox or Chromium, things should just work if you're
using p11-kit.


  There is a middle ground between having decent PKCS#11 support where you
  can identify a key/cert by a PKCS#11 URL and everything Just Works™
  without jumping through hoops, and having PKCS#11 insinuate itself into
  *everything* as NSS has done. I think GnuTLS has the balance about
  right, and probably would have been a good choice — if it wasn't for the
  GPL allergy, although I note GnuTLS has at least switched back to
  LGPL2.1 in the latest releases.

  For OpenSSL to become viable, it's going to need a whole lot more than
  ENGINE_PKCS11.

Our experience has been that when integrated into the library itself (eg:
as NSS has done), it leads to a much poorer experience than when provided
by the application (eg: Chromium), even if it does require more work to
get there.

There's a balance to be struck, for sure, and NSS arguably errs too much
on one particular pattern, which is, again, part of our reasoning for
believing that the transition to OpenSSL will be more to the benefit of
our users and our growing list of needs from an SSL/cryptographic library.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Chrome: From NSS to OpenSSL

2014-01-31 Thread Ryan Sleevi
On Fri, January 31, 2014 9:18 am, Alan Braggins wrote:
  On 31/01/14 10:24, Julien Pierre wrote:
 
  On 1/27/2014 10:28, Kathleen Wilson wrote:
  Draft Design Doc posted by Ryan Sleevi regarding Chrome migrating from
  NSS to OpenSSL:
 
  https://docs.google.com/document/d/1ML11ZyyMpnAr6clIAwWrXD53pQgNR-DppMYwt9XvE6s/edit?pli=1

  Strange that PKCS#11 support is listed as a con for NSS.

  It is at least listed under pro as well (Having ENGINE_pkcs11
  listed under both for OpenSSL might make sense too.)


It was not accidental that it was listed under Con, nor do I see
ENGINE_pkcs11 as a Pro.

As part of its fundamental design, NSS performs all operations using
PKCS#11 tokens. Even the internal cryptographic implementation is exposed
as a PKCS#11 token - softoken.

I realize that from a design and layering perspective, there are those
(especially within NSS developers) that feel this is a Pro. However, in
practical terms, this works out as a significant negative for performance.
It's not merely the function call overhead, but the abstraction prevents
significant compiler optimizations for speed and size.

Further, the PKCS#11 API itself - while suitable for interacting with
IO-constrained hardware tokens - is absolutely horrible for maximizing CPU
throughput.

Consider AES-GCM or AES-CBC-PAD. The former requires establishing a new
PKCS#11 context every time, which forces repeated key expansion. I
realize some NSS developers have suggested possible ways to improve this -
but the engineering effort is recognized to be complex. We've also
committed to the newly-formed OASIS TC to try to improve things, but the
reality is that the current spec is broken for purpose.

Or consider AES-CBC-PAD, which is essentially *the same issue*. Because of
this, NSS's SSL layer only uses AES-CBC, and handles padding manually, so
that it can keep a single context around for the duration of the SSL
session.
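
(A rough sketch of the per-record shape through PKCS#11 - exact
CK_GCM_PARAMS fields vary between spec revisions, so treat this as
illustrative:)

    #include <string.h>
    #include "pkcs11.h"  /* Cryptoki declarations */

    /* Rough sketch of the per-record cost: every AES-GCM operation through
     * PKCS#11 needs a fresh context, and each C_EncryptInit forces the
     * token to re-run AES key expansion. */
    static CK_RV gcm_encrypt_record(CK_SESSION_HANDLE session,
                                    CK_OBJECT_HANDLE key,
                                    CK_BYTE *iv, CK_ULONG ivLen,
                                    CK_BYTE *aad, CK_ULONG aadLen,
                                    CK_BYTE *pt, CK_ULONG ptLen,
                                    CK_BYTE *ct, CK_ULONG *ctLen)
    {
        CK_GCM_PARAMS gcm;
        CK_MECHANISM mech;
        CK_RV rv;

        memset(&gcm, 0, sizeof(gcm));
        gcm.pIv = iv;       gcm.ulIvLen = ivLen;
        gcm.pAAD = aad;     gcm.ulAADLen = aadLen;
        gcm.ulTagBits = 128;

        mech.mechanism = CKM_AES_GCM;
        mech.pParameter = &gcm;
        mech.ulParameterLen = sizeof(gcm);

        rv = C_EncryptInit(session, &mech, key); /* key re-expanded here */
        if (rv != CKR_OK)
            return rv;
        return C_Encrypt(session, pt, ptLen, ct, ctLen); /* context gone */
    }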

On the server side, NSS is virtually non-existent on any platform that
deeply cares about performance because of this. Heck, there is a whole
PKCS#11 bypass mode precisely for this.

This reliance on PKCS#11 means that there are non-trivial overheads when
doing something as simple as hashing with SHA-1. For something that is
such a simple transformation, multiple locks must be acquired and the
entire NSS internals may *block* if using NSS on multiple threads, in
order to prevent any issues with PKCS#11's threading design.
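
(Concretely, a minimal sketch using NSS's public one-shot wrapper - even
this single call is routed through the token layer:)

    #include "pk11pub.h"   /* PK11_HashBuf */
    #include "secoidt.h"   /* SEC_OID_SHA1 */

    /* One-shot SHA-1: softoken is located, locks are taken, and a PKCS#11
     * session is consumed, all for a simple transformation. */
    unsigned char digest[20];
    SECStatus rv = PK11_HashBuf(SEC_OID_SHA1, digest,
                                (unsigned char *)"hello world", 11);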

I tried not to write too much on the negatives of NSS or OpenSSL, because
both are worthy of long rants, but I'm surprised that anyone who has
worked at length with PKCS#11 - like Oracle has (and Sun before) - would
particularly praise it.

It's good for interop with smart cards. That's about it.

Cheers,
Ryan

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: [Ach] Proposal to Remove legacy TLS Ciphersuits Offered by Firefox

2014-01-02 Thread Ryan Sleevi
On Thu, January 2, 2014 1:25 pm, Julien Vehent wrote:
  Hi Aaron,

  On 2014-01-02 16:10, Aaron Zauner wrote:
  Hi Kurt,
 
  On 02 Jan 2014, at 21:51, Kurt Roeckx k...@roeckx.be wrote:
 
  On Thu, Jan 02, 2014 at 09:33:24PM +0100, Aaron Zauner wrote:
  I *think* they want to prefer CAMELLIA to AES, judging by the
  published ciphersuite.
  But the construction must be wrong because it returns AES first.
  If the intent is to
  prefer Camellia, then I am most interested in the rationale.
  Thanks for reporting this!
 
  Yes. The intent was to prefer Camellia where possible. First off, we
  wanted to have more diversity. Second, not everybody is running a
  Sandy Bridge (or newer) processor. Camellia has better performance on
  non-Intel processors with about the same security.

  I would argue that our documents target server configurations, where
  AES-NI is now a standard.

I would have to disagree here. The two largest users of NSS, by *any*
measure, are Firefox and Chromium, both of which use it as a client.
Further, the notion of server configurations is further muddied by
efforts such as WebRTC, which sees a traditional client (eg:
heterogeneous configurations and architectures) acting as both a DTLS
client and a DTLS server.

NSS in server mode is so full of sharp edges, and has so few code examples,
that I'd strongly discourage it being used as the reference for what NSS
SHOULD do.


 
  What’s the take on the ChaCha20/Poly1305 proposal by the Mozilla Sec.
  Team by the way?

  There are 5 security teams at Mozilla, so Mozilla Sec Team is a very
  large group.
  I think we all want a new stream cipher in TLS to replace RC4. But
  that's going
  to take years, and won't help the millions of people who don't replace
  their software
  that often.

Really? If anything, Firefox and Chromium have shown that new changes can
be deployed on the order of weeks-to-months, and with server opt-in (such
as NPN/ALPN), the majority of *users'* traffic can be protected or enhanced
within a few weeks-to-months after.

Google already has deployed experimental support, for example. Likewise,
the adoption of SPDY - within Firefox and within a number of significant
web properties - show that it's significantly quicker than it used to be
to protect users.

You're correct that there's going to be a long-tail of sites that don't
update, sure, but rapid deployment is certainly an increasing possibility
for the majority of users.


   From an Operations Security point of view, the question is: how do we
  provide the
  best security possible, with the cards we currently have in our hands,
  and without
  discarding anybody. ChaCha20/Poly1305 isn't gonna help with that in the
  short term.

  - Julien

I strongly, vehemently disagree with that conclusion. Solutions like
ChaCha20/Poly1305 are able to reach a broad spectrum of users near
immediately and ubiquitously, providing meaningful security and speed
improvements to users. If the idea is that no solution is a good solution
until it's a ubiquitous solution, well, that's just silly and not
reflective of the world we live in at all.

  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Firefox's confusing saga in HTTPS warnings

2013-11-25 Thread Ryan Sleevi
On Mon, November 25, 2013 12:06 am, ianG wrote:
  For some reason, news.google.com has been captured by *.opendns.com and
  turned into a site warning.  OK, so I imagine there is a story about
  this somewhere, maybe it's just my ISP...



  But, imagine my surprise when I tried it on chrome and I got an
  informative message in ONE GO:

  You attempted to reach news.google.com, but instead you actually reached
  a server identifying itself as *.opendns.com. This may be caused by a
  misconfiguration on the server or by something more serious. An attacker
  on your network could be trying to get you to visit a fake (and
  potentially harmful) version of news.google.com.
  You cannot proceed because the website operator has requested heightened
  security for this domain.

  AMAZING!  Informative.  Clear.  Precise.  Immediate.  Fulsome.  Red!

Yellow. No longer Red.

And not always informative or clear - see
https://code.google.com/p/chromium/issues/detail?id=320649




  Meanwhile Firefox still wallows along with its epic saga that takes 4
  clicks and a master's degree in PKI so you can figure out the precise
  path to follow, to finally read that the thing returned wasn't what you
  expected, but if you still remember your PKI training you can try some
  certificate anthropology and dig out the bones of the domain to which
  you've been re-directed.



  Chrome's handling of this is so superior, I didn't even notice there is
  no option to continue to the site ...

  iang
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto



-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: oddball, old cipher suite in firefox client hello

2013-11-01 Thread Ryan Sleevi
On Fri, November 1, 2013 5:30 pm, Wan-Teh Chang wrote:
  On Fri, Nov 1, 2013 at 1:28 AM, Jeff Hodges j...@somethingsimilar.com
  wrote:
 
  I dug through the NSS codebase and found where it was defined in
  lib/ssl/sslproto.h as:
 
/* New non-experimental openly spec'ed versions of those cipher
  suites. */
#define SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA 0xfeff
#define SSL_RSA_FIPS_WITH_DES_CBC_SHA   0xfefe

  We should remove these two nonstandard cipher suites from NSS.

  Wan-Teh
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto


+1

Filed https://bugzilla.mozilla.org/show_bug.cgi?id=934033 for interested
parties to track.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of generateCRMFRequest

2013-09-27 Thread Ryan Sleevi
On Fri, September 27, 2013 10:29 am, Eddy Nigg wrote:
  On 09/27/2013 08:12 PM, From Brian Smith:
  My question is not so much "Is anybody using this functionality?" but
  rather "What really terrible things, if any, would happen if we
  removed them?"

  We might have to look for alternatives, because when the card is removed
  or inserted we can trigger session termination and/or authentication.
  Whereas without it the card could be removed and the session would stay
  valid like any other session.


How do you deal with this in other browsers?

What are the specific features that you need?

Can you think of other ways that might be able to accomplish your goals
without introducing browser-specific APIs to the open web?

If so, what prevents (or prevented) you from using them?

Have you considered approaching the WHATWG or W3C to actually see this
adopted as part of the Web Platform, rather than relying on legacy,
browser-specific solutions that are not standardized, interoperable
behaviour?

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of generateCRMFRequest

2013-09-27 Thread Ryan Sleevi
On Fri, September 27, 2013 1:35 pm, Eddy Nigg wrote:
  On 09/27/2013 08:52 PM, From Ryan Sleevi:
 
  How do you deal with this in other browsers?

  Well, I don't...so far :-)

  However I'm aware of similar capabilities with IE.

  What are the specific features that you need?

  Detection of smart card removal or insertion.

Let me try it differently: What actions do you take on this information?

As far as I know, IE doesn't provide the smart card insertion/removal
events, except perhaps through ActiveX.

Why should a web page care about a user's hardware state, given that there
exist no Web APIs to actually leverage this hardware state?

This would be akin to wanting to know about USB events, for which there is
no USB API for in the Web [putting extensions aside for a moment]. Or
wanting to know when the user plugs in a new keyboard or mouse; why should
it matter?

I enthusiastically welcome Brian's proposal to remove these non-standard
features from the Web Platform, and am trying to get a better
understanding about what the actual use case is for them (as I believe
Brian is as well), in order to understand what, if anything, should
replace them.

Note that I do not (at this point) believe a replacement needs to be
implemented before removing them, but I suppose that's contingent on
finding what the actual use case is.


  Can you think of other ways that might be able to accomplish your goals
  without introducing browser-specific APIs to the open web?

  Maybe an extension, not sure.

  Have you considered approaching the WHATWG or W3C to actually see this
  adopted as part of the Web Platform, rather than relying on legacy,
  browser-specific solutions that are not standardized, interoperable
  behaviour?

  No, since it works for us there was never a desire for such a
  punishment. Besides that I'm not a browser vendor really.

  --
  Regards

  Signer:  Eddy Nigg, StartCom Ltd.
  XMPP:start...@startcom.org
  Blog: http://blog.startcom.org/
  Twitter: http://twitter.com/eddy_nigg

  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto



-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of generateCRMFRequest

2013-09-27 Thread Ryan Sleevi
On Fri, September 27, 2013 2:22 pm, Eddy Nigg wrote:
  On 09/27/2013 11:52 PM, From Ryan Sleevi:
  Let me try it differently: What actions do you take on this information?

  Terminating a current session or triggering authentication to a new
  session.

When you define "session", what do you mean here?

NSS already checks that the given smart card used to authenticate is
present whenever encrypting or decrypting data. This
includes cached session resumption as well.

This does not seem like it's a capability that needs to be or should be
exposed at the platform layer. At best, it seems like a proposal to change
how Firefox handles SSL in the browser, which may either be a feature
request or bug of PSM or NSS - but not a Web API.

If you're not relying on that client-authenticated SSL session, then it
sounds like an application design issue on your web apps side, rather than
something missing from the Web Platform.


  As far as I know, IE doesn't provide the smart card insertion/removal
  events, except perhaps through ActiveX.

  Yes exactly.

  Why should a web page care about a user's hardware state, given that
  there
  exist no Web APIs to actually leverage this hardware state?

  Consider a banking site or others like administrative sites that use
  client certificates (provided on a smart card) .

  This would be akin to wanting to know about USB events, for which there
  is
  no USB API for in the Web [putting extensions aside for a moment]. Or
  wanting to know when the user plugs in a new keyboard or mouse; why
  should
  it matter?

  Probably because we like to use a browser for such tasks instead of
  implementing a dedicated UI. And client certificates (which may be used
  on smart cards) are part of the browser capabilities.

Yes, but a website has no knowledge about whether or not the given client
certificate is on a smart card (nor can it, at least without out of band
knowledge).

This certainly doesn't seem like a use case that fits the web security
model, so I'm still trying to refine and understand what you're discussing
here.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of generateCRMFRequest

2013-09-27 Thread Ryan Sleevi
On Fri, September 27, 2013 3:46 pm, Eddy Nigg wrote:
  On 09/28/2013 12:45 AM, From Ryan Sleevi:
  NSS already performs checking that the given smart card used to
  authenticate is present whenever encrypting or decrypting data. This
  includes cached session resumption as well.

  Not SSL session of course, but on the web application layer.

  If you're not relying on that client-authenticated SSL session, then it
  sounds like an application design issue on your web apps side, rather
  than
  something missing from the Web Platform.

  Of  course, how can the web application know if a smart card is removed
  otherwise? It must get that input from somewhere, doesn't it?

  Yes, but a website has no knowledge about whether or not the given
  client certificate is on a smart card.

  The web site probably not, but the web site operator - there are banks,
  health services and others (like us) that use smart cards knowing that
  the client certificate exists only in a smart card.

  This certainly doesn't seem like a use case that fits the web security
  model, so I'm still trying to refine and understand what you're
  discussing here.

  As explained - if a client certificate exists only on a smart card (by
  design enforced) and that cert is used for authentication, if the card
  is removed I want to trigger termination of the current session (call it
  log out) and if the card is inserted again authentication is performed
  again.

  That's the functionality which window.crypto.enableSmartCardEvents
  provides that is discussed here for removal. I assume it was put into
  the capabilities of FF exactly for this purpose in first place.


I'm sorry, but what you just described seems entirely unnecessary.

If your site requires a client certificate, and you know that a client
certificate is stored in a smart card, then you also know that when using
Firefox, and the smart card is removed, Firefox will invalidate that
SSL/TLS session.

Because your site requires a client certificate, their current session is
thus terminated. When they attempt to establish a new SSL/TLS session,
your site will again require a client certificate, and they either insert
a smart card or they don't.

Such an API seems entirely unnecessary - 'it just works' like above.

It sounds like you're using a weaker security guarantee though - namely,
that you're not requiring SSL/TLS client certificate authentication, and
thus want some other out of band way to know when the user removed their
smart card. The interoperable solution is simple - don't do that. When the
user removes their smart card, the SSL/TLS session is invalidated, and the
user is 'logged out'.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of generateCRMFRequest

2013-09-27 Thread Ryan Sleevi
On Fri, September 27, 2013 4:09 pm, Eddy Nigg wrote:
  On 09/28/2013 01:59 AM, From Ryan Sleevi:
  If your site requires a client certificate, and you know that a client
  certificate is stored in a smart card, then you also know that when
  using
  Firefox, and the smart card is removed, Firefox will invalidate that
  SSL/TLS session.

  Not really - except in case you require the cert authentication on every
  exchange between the client and server. I don't believe that many do
  this, as it makes it incredibly slow and some browsers will prompt for the
  certificate over and over again.

But Firefox (and Chrome, IE, Safari, and Opera) won't.

I'm not sure Firefox should support some non-Web Platform feature on the
basis that some other browser makes it hard, especially when the number of
browsers that support the feature beyond Firefox is 0.


  When the user removes their smart card, the SSL/TLS session is
  invalidated, and the
  user is 'logged out'.

Kind of - he'll get the infamous ssl_error_handshake_failure_alert error
that nobody understands, but that's not how such web apps are
usually implemented. They do the client authentication dance once and
continue with an application-controlled session.

And such webapps could presumably use iframes or XHRs with a background
refresh to a login domain, and when such a fetch fails, know the smart card
was removed, and thus treat it the same. All without non-standard
features being exposed.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Removal of generateCRMFRequest

2013-09-27 Thread Ryan Sleevi
On Fri, September 27, 2013 5:51 pm, Robert Relyea wrote:


  On 09/27/2013 05:01 PM, Ryan Sleevi wrote:
  On Fri, September 27, 2013 4:09 pm, Eddy Nigg wrote:
   On 09/28/2013 01:59 AM, From Ryan Sleevi:
  If your site requires a client certificate, and you know that a client
  certificate is stored in a smart card, then you also know that when
  using
  Firefox, and the smart card is removed, Firefox will invalidate that
  SSL/TLS session.
   Not really - except in case you require the cert authentication on
   every exchange between the client and server. I don't believe that
   many do this, as it makes it incredibly slow and some browsers will
   prompt for the certificate over and over again.
  But Firefox (and Chrome, IE, Safari, and Opera) won't.
 
  I'm not sure Firefox should support some non-Web Platform feature on the
  basis that some other browser makes it hard, especially when the number
  of browsers that support the feature beyond Firefox is 0.
  Ryan is correct. What FF does not do is reload the page when the smart
  card is removed. The most common use of smart card events is forcing the
  reloading of the page.

  NOTE: there is still an issue that Firefox doesn't provide a way for the
  web page to flush its own cache. If you've made a connection without a
  cert, there's no way to say "try again with the cert". This doesn't affect
  removal, but it does affect insertion.

FWIW, Chrome does. We're working on refining this. But that's not a
standard behaviour, and if sites want to rely on that, they should rely on
standards-defined behaviours.

 
  When the user removes their smart card, the SSL/TLS session is
  invalidated, and the
  user is 'logged out'.
   Kind of - he'll get the infamous ssl_error_handshake_failure_alert
   error that nobody understands, but that's not how such web apps are
   usually implemented. They do the client authentication dance once and
   continue with an application-controlled session.

  Actually FF does a full handshake; what kind of error you get depends on
  what bits the server said. If you pass "request", not "require", then the
  handshake completes with the server getting no cert for the connection.
  And such webapps could presumably use iframes or XHRs with a background
  refresh to a login domain, and when such a fetch fails, know the smart
  card was removed, and thus treat it the same. All without non-standard
  features being exposed.
  You still don't get the page refresh when the smart card is removed.

But you don't need that in the iframe/XHR situation. You can implement a
variety of techniques (hanging GETs, COMET, meta refresh, etc.) to
accomplish this.


  I don't have a problem with going for an industry standard way of doing
  all of these things, but it's certainly pretty presumptuous to remove
  these features without supplying the industry standard replacements and
  time for them to filter through the internet.

  bob

Bob,

Given that these are problems that sites MUST solve for other browsers,
I don't see why it's necessary to find a replacement first.

I realize that some might find this functionality to make Firefox more
compelling. People found ActiveX compelling. That does not mean it's good
for the Internet at large, or the Web Platform.

I certainly am not one to make decisions about Firefox's goals for the Web
Platform, given what I work on, but I applaud efforts to remove
non-standard features and to standardize features. But I don't think one
must be held hostage to the other - the fact that these problems exist for
other UAs means that the only sites that will be affected are those coded
*specifically* to be Firefox only - and that does not a good Internet
make.

Cheers,
Ryan


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


[ANNOUNCE] NSS 3.15.2 Release

2013-09-26 Thread Ryan Sleevi
The NSS team has released Network Security Services (NSS) 3.15.2, which is
a minor release.

The HG tag is NSS_3_15_2_RTM. NSS 3.15.2 requires NSPR 4.10 or newer.

Detailed release notes are available at
https://developer.mozilla.org/en-US/docs/NSS/NSS_3.15.2_release_notes and
reproduced below.

This release includes security-relevant fixes (CVE-2013-1739).


Introduction

Network Security Services (NSS) 3.15.2 is a patch release for NSS 3.15.
The bug fixes in NSS 3.15.2 are described in the Bugs Fixed section
below.

Distribution Information

NSS 3.15.2 source distributions are also available on ftp.mozilla.org for
secure HTTPS download:
* Source tarballs:
https://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_15_2_RTM/src/


Security Advisories

The following security-relevant bugs have been resolved in NSS 3.15.2.
Users are encouraged to upgrade immediately.
* Bug 894370 - (CVE-2013-1739) Avoid uninitialized data read in the event
of a decryption failure.

New in NSS 3.15.2

New Functionality
* AES-GCM Ciphersuites: AES-GCM cipher suite (RFC 5288 and RFC 5289)
support has been added when TLS 1.2 is negotiated. Specifically, the
following cipher suites are now supported:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_RSA_WITH_AES_128_GCM_SHA256
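
For applications that negotiate TLS 1.2, a minimal usage sketch (the
per-socket preference API and the suite constant are from ssl.h and
sslproto.h; error handling elided):

    #include "ssl.h"
    #include "sslproto.h"

    /* Sketch: opt a single socket into one of the new AES-GCM suites. */
    static SECStatus enable_gcm_suite(PRFileDesc *fd)
    {
        return SSL_CipherPrefSet(fd, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                                 PR_TRUE);
    }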

New Functions
* PK11_CipherFinal has been introduced, which is a simple alias for
PK11_DigestFinal.

New Types
* No new types have been introduced.

New PKCS #11 Mechanisms
* No new PKCS#11 mechanisms have been introduced.

Notable Changes in NSS 3.15.2
* Bug 880543 - Support for AES-GCM ciphersuites that use the SHA-256 PRF
* Bug 663313 - MD2, MD4, and MD5 signatures are no longer accepted for
OCSP or CRLs, consistent with their handling for general certificate
signatures.
* Bug 884178 - Add PK11_CipherFinal macro

Bugs fixed in NSS 3.15.2
* Bug 734007 - sizeof() used incorrectly
* Bug 900971 - nssutil_ReadSecmodDB() leaks memory
* Bug 681839 - Allow SSL_HandshakeNegotiatedExtension to be called before
the handshake is finished.
* Bug 848384 - Deprecate the SSL cipher policy code, as it's no longer
relevant. It is no longer necessary to call NSS_SetDomesticPolicy because
all cipher suites are now allowed by default.

A complete list of all bugs resolved in this release can be obtained at
https://bugzilla.mozilla.org/buglist.cgi?resolution=FIXED&classification=Components&query_format=advanced&target_milestone=3.15.2&product=NSS&list_id=7982238

Compatibility

NSS 3.15.2 shared libraries are backward compatible with all older NSS 3.x
shared libraries. A program linked with older NSS 3.x shared libraries
will work with NSS 3.15.2 shared libraries without recompiling or
relinking. Furthermore, applications that restrict their use of NSS APIs
to the functions listed in NSS Public Functions will remain compatible
with future versions of the NSS shared libraries.

Feedback

Bugs discovered should be reported by filing a bug report with
bugzilla.mozilla.org (product NSS).

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-16 Thread Ryan Sleevi
On Fri, August 16, 2013 6:36 am, Rob Stradling wrote:
  On 15/08/13 18:15, Chris Richardson wrote:
  I believe this plan would have poor side effects.  For example, if Apple
  ships clients with a broken ECDSA implementation [0], a server cannot
  detect if a connecting client is an Apple product and avoid the
  use
  of ECDSA in that subset of connections.  Instead, ECDSA suddenly becomes
  unsafe for anyone to use anywhere.

  Chris,

  Firefox already offers ECDHE-ECDSA ciphersuites, so I don't think
  Brian's plan would introduce any _new_ side effects relating to that OSX
  (10.8..10.8.3) bug.

I think the point was that fingerprinting the TLS handshake has some
positive value, and is not inherently negative - as demonstrated by that
OpenSSL patch.

That's not to suggest that every UA should report the UA string in the TLS
handshake, but just pointing out that when mistakes (in implementations)
happen, it's nice to be able to identify them and work around.

Cheers,
Ryan


  Are you suggesting that Firefox should drop support for all ECDHE-ECDSA
  ciphersuites?
  Or are you suggesting that NSS should implement the equivalent of that
  proposed OpenSSL patch, so that NSS-based TLS servers can avoid
  attempting to negotiate ECDHE-ECDSA with broken OSX clients?
  Or what?


  Should browsers drop support now for all TLS features that might
  possibly suffer broken implementations in the future?
  (For example, AGL would like to get rid of AES-GCM because it's hard to
  implement securely.  See
  https://www.imperialviolet.org/2013/01/13/rwc03.html)


  [0]:
  https://github.com/agl/openssl/commit/0d26cc5b32c23682244685975c1e9392244c0a4d
 
 
  On Thu, Aug 8, 2013 at 10:30 PM, Brian Smith br...@briansmith.org
  wrote:
 
  Please see https://briansmith.org/browser-ciphersuites-01.html
 
  First, this is a proposal to change the set of sequence of ciphersuites
  that Firefox offers. Secondly, this is an invitation for other browser
  makers to adopt the same sequence of ciphersuites to maximize
  interoperability, to minimize fingerprinting, and ultimately to make
  server-side software developers and system administrators' jobs easier.
 
  Suggestions for improvements are encouraged.
 
  Cheers,
  Brian
  --
  Mozilla Networking/Crypto/Security (Necko/NSS/PSM)
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto
 

  --
  Rob Stradling
  Senior Research  Development Scientist
  COMODO - Creating Trust Online
  Office Tel: +44.(0)1274.730505
  Office Fax: +44.(0)1274.730909
  www.comodo.com

  COMODO CA Limited, Registered in England No. 04058690
  Registered Office:
 3rd Floor, 26 Office Village, Exchange Quay,
 Trafford Road, Salford, Manchester M5 3EQ

  This e-mail and any files transmitted with it are confidential and
  intended solely for the use of the individual or entity to whom they are
  addressed.  If you have received this email in error please notify the
  sender by replying to the e-mail containing this attachment. Replies to
  this email may be monitored by COMODO for operational or business
  reasons. Whilst every endeavour is taken to ensure that e-mails are free
  from viruses, no liability can be accepted and the recipient is
  requested to use their own virus checking software.
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto



-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Where is NSS used?

2013-07-10 Thread Ryan Sleevi
On Mon, July 8, 2013 12:00 pm, Rick Andrews wrote:
  I need to remove some 1024-bit roots from Firefox’s trust store, but I
  realize that these trusted roots are part of the NSS library, and that the
  NSS library is used by lots of other software, not just Firefox. Removing
  these roots may have far-reaching consequences. I understand that there
  isn't a list of all the different places where NSS is used, but can anyone
  provide some guidance? Even a broad incomplete list of NSS users is better
  than nothing. Thanks!
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto


Rick,

I think you may find it better to consider moz.dev.sec.policy, in the hope
of reaching the people watching for additions. The issue is that there are
a vast, vast number of applications that use the Mozilla Root Certificate
Program data, but without using NSS. The removal of these roots would
equally affect them.

This includes, for example, nearly every major Linux distribution
(typically as part of their ca-certificates package), which are further
consumed by a variety of applications and libraries (including OpenSSL,
GnuTLS, and plenty of 'home-grown' solutions, unfortunately).

That said, the operation of Mozilla's Root Program is done according to
the needs and abilities of NSS, and these secondary consumers are not
'officially' supported.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


[ANNOUNCE] NSS 3.14.3 Release

2013-02-20 Thread Ryan Sleevi
The NSS Development Team is pleased to announce the release of NSS 3.14.3.

The official release notes are available at
https://developer.mozilla.org/en-US/docs/NSS/NSS_3.14.3_release_notes ,
and are reproduced at the end of this message.

This release includes mitigations for recently discussed Lucky Thirteen
attack (CVE-2013-1620). However, please note the limitations of the
mitigations discussed in the release notes below.



Introduction:

Network Security Services (NSS) 3.14.3 is a patch release for NSS 3.14.
The bug fixes in NSS 3.14.3 are described in the Bugs Fixed section
below.

Distribution Information

* The CVS tag is NSS_3_14_3_RTM. NSS 3.14.3 requires NSPR 4.9.5 or newer.
* NSS 3.14.3 source distributions are also available on ftp.mozilla.org
for secure HTTPS download:
  - Source tarballs:
https://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_14_3_RTM/src/

New in NSS 3.14.3

* No new major functionality is introduced in this release. This release
is a patch release to address CVE-2013-1620.

New Functions

* in pk11pub.h
 - PK11_SignWithSymKey - Similar to PK11_Sign, performs a signing
operation in a single call. However, unlike PK11_Sign, which uses a
SECKEYPrivateKey, PK11_SignWithSymKey performs the signature using a
symmetric key, such as commonly used for generating MACs.

New Types

* CK_NSS_MAC_CONSTANT_TIME_PARAMS - Parameters for use with
CKM_NSS_HMAC_CONSTANT_TIME and CKM_NSS_SSL3_MAC_CONSTANT_TIME.

New PKCS #11 Mechanisms

* CKM_NSS_HMAC_CONSTANT_TIME - Constant-time HMAC operation for use when
verifying a padded, MAC-then-encrypted block of data.
* CKM_NSS_SSL3_MAC_CONSTANT_TIME - Constant-time MAC operation for use when
verifying a padded, MAC-then-encrypted block of data using the SSLv3 MAC.
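
A rough usage sketch (illustrative only - macKey, mac, data, and the
filled-in ctParams structure are placeholders, and the parameter order
follows this release's pk11pub.h):

    /* Sketch: drive the constant-time MAC check through the new
     * symmetric-key signing entry point; ctParams is a populated
     * CK_NSS_MAC_CONSTANT_TIME_PARAMS describing the record under test. */
    SECItem param = { siBuffer, (unsigned char *)&ctParams,
                      sizeof(ctParams) };
    SECStatus rv = PK11_SignWithSymKey(macKey, CKM_NSS_HMAC_CONSTANT_TIME,
                                       &param, &mac, &data);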

Notable Changes in NSS 3.14.3

* CVE-2013-1620
Recent research by Nadhem AlFardan and Kenny Paterson has highlighted a
weakness in the handling of CBC padding as used in SSL, TLS, and DTLS that
allows an attacker to exploit timing differences in MAC processing. The
details of their research and the attack can be found at
http://www.isg.rhul.ac.uk/tls/, and has been referred to as Lucky
Thirteen.

NSS 3.14.3 includes changes to the softoken and ssl libraries to address
and mitigate these attacks, contributed by Adam Langley of Google. This
attack is mitigated when using NSS 3.14.3 with an NSS Cryptographic Module
(softoken) version 3.14.3 or later. However, this attack is only
partially mitigated if NSS 3.14.3 is used with the current FIPS validated
NSS Cryptographic Module, version 3.12.9.1.

* Bug 840714 - certutil -a was not correctly producing ASCII output as
requested.
* Bug 837799 - NSS 3.14.2 broke compilation with older versions of sqlite
that lacked the SQLITE_FCNTL_TEMPFILENAME file control. NSS 3.14.3 now
properly compiles when used with older versions of sqlite.

Acknowledgements

* The NSS development team would like to thank Nadhem AlFardan and Kenny
Paterson (Royal Holloway, University of London) for responsibly
disclosing the issue by providing advance copies of their research. In
addition, thanks to Adam Langley (Google) for the development of a
mitigation for the issues raised in the paper, along with Emilia Kasper
and Bodo Möller (Google) for assisting in the review and improvements to
the initial patches.

Bugs fixed in NSS 3.14.3

*
https://bugzilla.mozilla.org/buglist.cgi?list_id=5689256;resolution=FIXED;classification=Components;query_format=advanced;target_milestone=3.14.3;product=NSS

Compatibility

* NSS 3.14.3 shared libraries are backward compatible with all older NSS
3.x shared libraries. A program linked with older NSS 3.x shared libraries
will work with NSS 3.14.3 shared libraries without recompiling or
relinking. Furthermore, applications that restrict their use of NSS APIs
to the functions listed in NSS Public Functions will remain compatible
with future versions of the NSS shared libraries.

Feedback

* Bugs discovered should be reported by filing a bug report with
bugzilla.mozilla.org (product NSS).

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Web Crypto API(s) and what Mozilla wants / needs

2013-02-14 Thread Ryan Sleevi
On Thu, February 14, 2013 10:43 am, Robert Relyea wrote:
  On 02/14/2013 07:54 AM, David Dahl wrote:
  - Original Message -
  From: Gervase Markham g...@mozilla.org
  To: mozilla-dev-tech-cry...@lists.mozilla.org
  Cc: Eric Rescorla e...@mozilla.com, Brian Smith bsm...@mozilla.com,
  Brendan Eich bren...@mozilla.com, Ben Adida benad...@mozilla.com,
  Brian Warner war...@mozilla.com
  Sent: Thursday, February 14, 2013 5:22:41 AM
  Subject: Re: Web Crypto API(s) and what Mozilla wants / needs
 
  On 13/02/13 20:55, David Dahl wrote:
  The main issue is: What does Mozilla actually need here? What is
  Mozilla's official policy or thinking on a crypto API for the DOM?
  As you are the Mozillian with most experience in this area, I'd say
  that
  insofar as we will ever have an official policy, it's likely to be
  what
  you think (after taking the input of others, as you are doing).
  Please
  feel empowered :-)
  Ah, thanks! I am however, not a 'crypto expert' and would like the
  actual experts to weigh in and set the 'policy' (for lack of a better
  word.) At this point in the game, it would seem that FirefoxOS, with
its enhanced security model, would benefit greatly from APIs like this.
  I am hoping that will help in garnering the resources to implement
  and/or develop an engineering schedule for this.
 
  -david
  Well, I am quite pleased with the approach of providing a limited
  controllable set of primitives that are easy to use. The encrypt/sign -
  decrypt/verify using PKI completely sounds like the right first
  primitive to supply, along with seal/unseal. Key management/key exchange
  is the hardest part to get right in crypto. Both of these provide the
  simplest model for managing these things.

Agreed on key management/key exchange. Note that the current proposal
intentionally largely tries to avoid these matters, for that reason.
Instead, it operates on the presumption that the user has a Key object,
and the question is what operations can be performed with it.


  I'm sure there are lots of applications where these primitives are
  insufficient, but enabling a stable set that is easy for the non-crypto
  person to get right definately sounds like the right way to move
  forward. (Both of these also have the advantage of allowing you to define
  APIs where algorithm selection can be automatic, meaning the users
  automatically get new algorithm support without having to change the
  javascript application.)

Bob,

As you mentioned, there are lots of applications where these primitives
are insufficient. Certainly, NSS would not be usable today for Firefox
or Chromium if it adopted only the high-level approach being proposed (and
as reflected in APIs like KeyCzar and NaCL). Likewise, NSS's highest-level
APIs (like S/MIME) go largely unmaintained/unused, while the low-level
crypto is used in a variety of projects (as shown by the sheer number of
packages converted at
http://fedoraproject.org/wiki/FedoraCryptoConsolidation ).

Do you know of any applications where they *would* be sufficient? Do you
anticipate non-crypto people to be able to use 'crypto', even high-level,
for the development of an overall secure system? I'm aware of the
arguments made in http://cr.yp.to/highspeed/coolnacl-20120725.pdf , and I
certainly support a high-level API, but I don't think you avoid any of the
thorny issues (algorithm negotiation, wire format, etc), and I'm not sure
that the high-level API makes the overall *application* any more or less
secure than a low-level API using recognized primitives.

I guess it's my way of suggesting I'm more concerned about the places
where these primitives are insufficient, and I'm less convinced of the
idea that it is any easier for the non-crypto person to get right.
Given your long-standing role in NSS, I'm curious your thoughts on the
types of applications that would be able to actually (and successfully,
and securely) use such an API.

Cheers,
Ryan

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Web Crypto API(s) and what Mozilla wants / needs

2013-02-14 Thread Ryan Sleevi
On Thu, February 14, 2013 11:55 am, John Dennis wrote:
  On 02/14/2013 02:34 PM, Ryan Sleevi wrote:
  On Thu, February 14, 2013 10:43 am, Robert Relyea wrote:
On 02/14/2013 07:54 AM, David Dahl wrote:
  - Original Message -
  From: Gervase Markhamg...@mozilla.org
  To: mozilla-dev-tech-cry...@lists.mozilla.org
  Cc: Eric Rescorlae...@mozilla.com, Brian
  Smithbsm...@mozilla.com, Brendan Eichbren...@mozilla.com, Ben
  Adidabenad...@mozilla.com, Brian Warnerwar...@mozilla.com
  Sent: Thursday, February 14, 2013 5:22:41 AM
  Subject: Re: Web Crypto API(s) and what Mozilla wants / needs
 
  On 13/02/13 20:55, David Dahl wrote:
  The main issue is: What does Mozilla actually need here? What is
  Mozilla's official policy or thinking on a crypto API for the DOM?
  As you are the Mozillian with most experience in this area, I'd say
  that
  insofar as we will ever have an official policy, it's likely to be
  what
  you think (after taking the input of others, as you are doing).
  Please
  feel empowered :-)
  Ah, thanks! I am however, not a 'crypto expert' and would like the
  actual experts to weigh in and set the 'policy' (for lack of a better
  word.) At this point in the game, it would seem that FirefoxOS, with
  its enhanced security model, would benefit greatly from APIs like
  this.
  I am hoping that will help in garnering the resources to implement
  and/or develop an engineering schedule for this.
 
  -david
Well, I am quite pleased with the approach of providing a limited
controllable set of primitives that are easy to use. The encrypt/sign
  -
decrypt/verify using PKI completely sounds like the right first
primitive to supply, along with seal/unseal. Key management/key
  exchange
is the hardest part to get right in crypto. Both of these provide the
simplest model for managing these things.
 
  Agreed on key management/key exchange. Note that the current proposal
  intentionally largely tries to avoid these matters, for that reason.
  Instead, it operates on the presumption that the user has a Key object,
  and the question is what operations can be performed with it.
 
 
I'm sure there are lots of applications where these primitives are
insufficient, but enabling a stable set that is easy for the
  non-crypto
person to get right definitely sounds like the right way to move
forward. (Both of these also have the advantage of allowing you to
  define
APIs where algorithm selection can be automatic, meaning the users
automatically get new algorithm support without having to change the
javascript application.)
 
  Bob,
 
  As you mentioned, there are lots of applications where these primitives
  are insufficient. Certainly, NSS would not be usable today for
  Firefox
  or Chromium if it adopted only the high-level approach being proposed
  (and
  as reflected in APIs like KeyCzar and NaCL). Likewise, NSS's
  highest-level
  APIs (like S/MIME) go largely unmaintained/unused, while the low-level
  crypto is used in a variety of projects (as shown by the sheer number of
  packages converted at
  http://fedoraproject.org/wiki/FedoraCryptoConsolidation ).
 
  Do you know of any applications where they *would* be sufficient? Do you
  anticipate non-crypto people to be able to use 'crypto', even
  high-level,
  for the development of an overall secure system? I'm aware of the
  arguments made in http://cr.yp.to/highspeed/coolnacl-20120725.pdf , and
  I
  certainly support a high-level API, but I don't think you avoid any of
  the
  thorny issues (algorithm negotiation, wire format, etc), and I'm not
  sure
  that the high-level API makes the overall *application* any more or less
  secure than a low-level API using recognized primitives.
 
  I guess it's my way of suggesting I'm more concerned about the places
  where these primitives are insufficient, and I'm less convinced of the
  idea that it is any easier for the non-crypto person to get right.
  Given your long-standing role in NSS, I'm curious your thoughts on the
  types of applications that would be able to actually (and successfully,
  and securely) use such an API.

  Sorry to butt in on a question directed to Bob, but ...

  Here's one data point. I constantly hear the complaint from developers
  that NSS is too low level and using it is too hard. They wonder why
  there can't be a higher level API that insulates them from many of the
  quirky details they find somewhat incomprehensible leaving them with
  doubts about the correctness of what they've done and dismayed at the
  time it took to accomplish it.

  So yes, I think higher level API's would be welcome. I also think it
  would be welcome if the high level API interfaces permitted swapping out
  the low level crypto library on which they are based. Why? It's not
  unusual for someone with a problem to be asked, can you use X, Y, or Z
  instead and tell me if you still have the issue. That's a non-starter
  for many

Re: Proposing: Interactive Domain Verification Approval

2012-12-31 Thread Ryan Sleevi
On Mon, December 31, 2012 10:23 am, Kai Engert wrote:
  On Mon, 2012-12-31 at 16:26 +0100, Kai Engert wrote:
  I propose to more actively involve users into the process of accepting
  certificates for domains.

  I propose the following in addition:

  Each CA certificate shall have a single country where the CA
  organization is physically located (they already contain that).

  If the CA's country matches the country of the domain being visited,
  then we proceed automatically.

  Example: A US based CA and a user visiting a .us site.

  If the domain being visited is a non-country specific domain, like .com
  or .org, then we could securely query the domain registry. We could
  pin in the software, which set of CAs are allowed for talking SSL/TLS
  to a domain registry. The CA should be located in the same country where
  the root zone of that domain is being operated.

  We learn that a specific domain is registered in the US. If the CA used by
  the site is based in the US, too, everything is fine. Or if the domain
  is registered in Germany, and the site presents a certificate from a
  German CA, we could accept it too.

  For all scenarios where we see a mismatch, it could be argued that
  something is wrong, and we could use the UI that I have described. Maybe
  that can be further enhanced.

  This system would encourage local authorities, and reduce the power of
  the currently big players in the CA world.

  (I'd prefer such a system over DNSSEC, where everything chains up to a
  single key in just one country.)

  Kai

  (Credits: Thanks to @_not_you who suggested to find some heuristic to
  improve this proposal.)



So far, the two proposals are:
1) Nag the user whenever they want to make a new secure connection. This
nag screen is not shown over HTTP, so clearly, HTTP is preferable here.
2) Respect national borders on the Internet.

If anything, user interaction of a technically complex nature, even once,
is enough to disincentivize any site operator from using
SSL. Oh, my Firefox users are going to see a prompt? I don't want to send
them to SSL then, because they'll complain / it will be a lost sale.

Even once is enough. Otherwise, why would sites even bother getting a CA
certificate, since they can already condition users to 'pin' to their
self-signed cert by virtue of clicking through?

I don't see national borders on the Internet as an even remotely plausible
idea. Why should Americans trust their governments (who have the legal
force to compel CAs operating in the US) more than those in Iran trust
theirs (who also have the legal force to compel CAs operating in Iran)?

I cannot take seriously any proposal to more actively involve users in the
process of accepting certificates for domains, because for the millions of
users, that very statement is too much. We don't actively involve users
in handling of Dun & Bradstreet numbers, we don't actively involve users
in handling corporate tax returns, and we certainly don't involve users in
supply chain provenance, so why is the certificate somehow 'more'
accessible?

If the goal is to reduce power or risk, then something like Certificate
Transparency should be the game. Having transparent reports of issuance
and the ability to monitor for misissuance should be the end goal -
whether you're operating a CA that serves 50 domains or 500 million
domains. It's highly elitist to suggest those 500 million are worth more
than those 50 - especially if real users' lives are at risk for those 50
domains.

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


NSS 3.14.1 release notes

2012-12-18 Thread Ryan Sleevi
The NSS Team is pleased to announce the NSS 3.14.1 release.

Please read the NSS 3.14.1 release notes at:
https://developer.mozilla.org/en-US/docs/NSS/NSS_3.14.1_release_notes

Cheers,
Ryan

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: duplicate SSL record in different TCP packets from a Google Drive client

2012-11-06 Thread Ryan Sleevi
On Mon, November 5, 2012 10:12 am, Peter Djalaliev wrote:
  Hello,

  There seems to be a possible problem with the SSL implementation used in
  Google Drive on MacOS 10.8.2.  It seems that this SSL implementation is NSS
  - please let me know if you know that Google Drive uses a different SSL
  implementation and I should direct this question elsewhere.

  Packet captures of SSL flows between the Google Drive client application
  and the Google servers it talks to show the following possible problem.
  During the application data phase of the TLS connection, the Google Drive
  client sends two consecutive TCP packets with different TCP sequence
  numbers, both containing the same encrypted SSL record.  The cipher suite
  used is TLS_RSA_WITH_AES_128_CBC_SHA.

  A normal SSL server talking to Google drive will likely fail to decrypt
  the duplicated SSL record and verify its MAC, because AES decryption is
  used in CBC mode, and the duplicated SSL record should have a different
  SSL sequence number.  However, it looks like the flow proceeds just fine.

  Can anybody here comment on this behavior?  Is there a better place to ask
  this question?

  Best Regards,
  Peter Djalaliev
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto


Hi Peter,

To the best of my knowledge, Google Drive for Mac (the desktop sync
client) does not use NSS.

A better forum for continuing the discussion and reporting the issue will
likely be at http://productforums.google.com/forum/#!forum/drive

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.13.x ... releasenotes?

2012-10-29 Thread Ryan Sleevi
On Mon, October 29, 2012 8:32 am, Bernhard Thalmayr wrote:
  Hi all,

  sorry for this post, but I was not able to find the release notes for NSS
  version 3.13.x either using Google or querying the archive


  http://www.mozilla.org/projects/security/pki/nss/release_notes.html

  does not show anything for 3.13.x ... by intention?

No, not by intention, just poor management of our release process that
we're working to improve.

The following query may help  -
https://bugzilla.mozilla.org/buglist.cgi?version=3.13.1;version=3.13.2;version=3.13.3;version=3.13.4;version=3.13.5;resolution=FIXED;classification=Components;query_format=advanced;component=Libraries;component=Libraries;product=NSS
- which shows all the bugs fixed in 3.13.1, 3.13.2, 3.13.3, 3.13.4, and
3.13.5.

Regrettably, however, it only shows bugs that were flagged as fixed in
those revisions - which wasn't always consistently done. For 3.14.0, we've
tried to do this much better, hence the better release notes.


  Thanks and regards,
  Bernhard

  --
  Painstaking Minds
  IT-Consulting Bernhard Thalmayr
  Herxheimer Str. 5, 83620 Vagen (Munich area), Germany
  Tel: +49 (0)8062 7769174
  Mobile: +49 (0)176 55060699

  bernhard.thalm...@painstakingminds.com - Solution Architect

  This e-mail may contain confidential and/or privileged information.If
  you are not the intended recipient (or have received this email in
  error) please notify the sender immediately and delete this e-mail. Any
  unauthorized copying, disclosure or distribution of the material in this
  e-mail is strictly forbidden.
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.13.x ... releasenotes?

2012-10-29 Thread Ryan Sleevi
On Mon, October 29, 2012 9:04 am, Bernhard Thalmayr wrote:
  Thanks for the details Ryan.

  With NSS 3.12.X there seemed to be ''

  but with NSS 3.13.x the follwing error is raised ...

  /usr/lib64/libnssutil3.so: undefined symbol: PL_ClearArenaPool

  NSS deliverd with RHEL 6.3...

  I need to look at the API though ...

  Regards,
  Bernhard

PL_ClearArenaPool is an NSPR API call.

This suggests that the version of NSS you're using is not compatible with
the version of NSPR being used.

Based on searching this newsgroup for the release notifications, 3.13.2
was paired with an NSPR 4.9 release.

Which version of NSPR are you using?
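
For anyone else debugging this class of mismatch, a quick sanity check is
to compare the versions you compiled against with what the shared
libraries report at runtime. A minimal sketch using the stock NSPR/NSS
version APIs (PR_VersionCheck / NSS_VersionCheck):

    #include <stdio.h>
    #include "nspr.h"  /* PR_VERSION, PR_VersionCheck */
    #include "nss.h"   /* NSS_VERSION, NSS_VersionCheck */

    int main(void)
    {
        /* Versions the program was compiled against. */
        printf("Built against NSPR %s, NSS %s\n", PR_VERSION, NSS_VERSION);

        /* Runtime check: are the loaded shared libraries at least as new
         * as the headers? A mismatch here is exactly the kind of thing
         * that produces undefined symbol errors at load time. */
        if (!PR_VersionCheck(PR_VERSION))
            fprintf(stderr, "Loaded NSPR is older than the headers!\n");
        if (!NSS_VersionCheck(NSS_VERSION))
            fprintf(stderr, "Loaded NSS is older than the headers!\n");
        return 0;
    }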



  On 10/29/12 4:45 PM, Ryan Sleevi wrote:
  On Mon, October 29, 2012 8:32 am, Bernhard Thalmayr wrote:
Hi all,
 
sorry for this post, but I was not able to find the release notes for
  NSS
version 3.13.x either using Google or querying the archive
 
 
http://www.mozilla.org/projects/security/pki/nss/release_notes.html
 
does not show anything for 3.13.x ... by intention?
 
  No, not by intention, just poor management of our release process that
  we're working to improve.
 
  The following query may help  -
  https://bugzilla.mozilla.org/buglist.cgi?version=3.13.1;version=3.13.2;version=3.13.3;version=3.13.4;version=3.13.5;resolution=FIXED;classification=Components;query_format=advanced;component=Libraries;component=Libraries;product=NSS
  - which shows all the bugs fixed in 3.13.1, 3.13.2, 3.13.3, 3.13.4, and
  3.13.5.
 
  Regrettably, however, it only shows bugs that were flagged as fixed in
  those revisions - which wasn't always consistently done. For 3.14.0,
  we've
  tried to do this much better, hence the better release notes.
 
 
Thanks and regards,
Bernhard
 
--
Painstaking Minds
IT-Consulting Bernhard Thalmayr
Herxheimer Str. 5, 83620 Vagen (Munich area), Germany
Tel: +49 (0)8062 7769174
Mobile: +49 (0)176 55060699
 
bernhard.thalm...@painstakingminds.com - Solution Architect
 
This e-mail may contain confidential and/or privileged information.If
you are not the intended recipient (or have received this email in
error) please notify the sender immediately and delete this e-mail.
  Any
unauthorized copying, disclosure or distribution of the material in
  this
e-mail is strictly forbidden.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
 
 


  --
  Painstaking Minds
  IT-Consulting Bernhard Thalmayr
  Herxheimer Str. 5, 83620 Vagen (Munich area), Germany
  Tel: +49 (0)8062 7769174
  Mobile: +49 (0)176 55060699

  bernhard.thalm...@painstakingminds.com - Solution Architect

  This e-mail may contain confidential and/or privileged information.If
  you are not the intended recipient (or have received this email in
  error) please notify the sender immediately and delete this e-mail. Any
  unauthorized copying, disclosure or distribution of the material in this
  e-mail is strictly forbidden.
  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto



-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: libnss x86 DRNG

2012-10-01 Thread Ryan Sleevi
On Mon, October 1, 2012 3:08 pm, Michael Demeter wrote:
  Hello,

  I work in the Open Source Technology group at Intel in the security group.

  I have been tasked with contacting the maintainer of libnss to start
  discussions about the possibility of Intel submitting patches to enable
  the new HW based digital random number generator.

  What I would like to do is to have a short (or long) discussion over how
  you would like to see this done… In the current implementation it will do
  the right thing if drngd (Fedora 18) is running since libnss still pulls
  from /dev/random. But it does a lot of unnecessary work afterwards since
  the HW based DRNG for /dev/random can be used directly.

  What I would like to do is to implement native DRNG functions to replace
  the current functions if the HW is available. So I would like some input
  as to how you would like to see this implemented or if there is any
  interest at all.

  Thanks


  Michael Demeter
  Staff Software Engineer
  Open Source Technology Center - SSG
  Intel Corporation

Hi Michael,

There is definite interest in being able to take advantage of hardware
intrinsics - whether they be the DRNG or the AESNI instructions. For
example, NSS just recently added support for AES-GCM, and taking better
advantage of PCLMULQDQ is one of the items on the roadmap, since currently
no support exists (there is support for AESENC/AESDEC and it will be used
as the bulk AES function when support is detected at runtime)

There are a couple of places the DRNG can go in. One would be to
utilize it within NSPR (Netscape Portable Runtime), which NSS makes use of
for a number of primitive types and cross-platform abstraction. This would
make it available to any applications that depend on NSPR (of which there
are many). However, they may not need as strong a source of hardware
entropy, but it's worth considering.

Within NSPR, the core entry point is
http://mxr.mozilla.org/security/source/nsprpub/pr/src/misc/prrng.c , which
shuffles you off to the platform-specific RNG (eg:
nsprpub/pr/src/md/windows/w32rng.c  or nsprpub/pr/src/md/unix/uxrng.c )

In the Unix implementation, you can see some inline intrinsics are already
being used, albeit non-portably.

Within NSS proper, the RNG is handled by freebl (the core primitives),
which are then exposed as a software PKCS#11 token in softoken (aptly
named, right?)

The FreeBL implementation is declared at
http://mxr.mozilla.org/security/source/security/nss/lib/freebl/secrng.h ,
and then implemented accordingly through sysrand.c, unix_rand.c, and
win_rand.c in the same directory.

Now, as for actual exposing/implementation, presumably an approach similar
to the use of AES-NI would be appropriate. For that, look at freebl's
intel-aes.h, as well as its related use in rijndael.c

http://mxr.mozilla.org/security/source/security/nss/lib/freebl/rijndael.c#969
 - compile time check to disable
 - run time env check for a flag to disable
 - run-time input check to make sure it can be used
 - Function signature typedef that can match either the 'native' (Intel)
implementation or the 'builtin' (NSS) implementation, and the function
pointer is updated accordingly.
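
To make that shape concrete, here is a minimal sketch of the
detect-and-dispatch pattern for the DRNG. The CPUID check itself is real
(CPUID.01H:ECX bit 30 indicates RDRAND, and __get_cpuid is the GCC/Clang
intrinsic), but the function names and the environment variable are
hypothetical placeholders, not existing freebl symbols:

    #include <stdlib.h>
    #include <cpuid.h>  /* GCC/Clang only; MSVC would use __cpuid() */

    typedef int (*GenerateBytesFn)(unsigned char *buf, size_t len);

    /* Hypothetical implementations: the existing software RNG and a new
     * RDRAND-backed one. */
    extern int builtin_generate_bytes(unsigned char *buf, size_t len);
    extern int rdrand_generate_bytes(unsigned char *buf, size_t len);

    static GenerateBytesFn generate_bytes = builtin_generate_bytes;

    static int has_rdrand(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 0;
        return (ecx & (1u << 30)) != 0;  /* CPUID.01H:ECX.RDRAND[bit 30] */
    }

    void rng_select_impl(void)
    {
        /* Run-time detection, plus an env-var escape hatch to disable
         * (the variable name here is made up for illustration). */
        if (has_rdrand() && getenv("NSS_DISABLE_HW_RNG") == NULL)
            generate_bytes = rdrand_generate_bytes;
    }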

Right now, I'm not aware of a good cross-platform assembler solution in
use for NSS - eg: yasm to abstract for cross-platform object file
generation. So for usage on both Windows and Posix/BSD, it may be
necessary to write two implementations.

Feel free to ask more questions if the above is vague. I'm sure Wan-Teh,
Bob, or Brian will chime in if I misdirected, but I'm fairly confident
that the above is the right approach for integrating hardware specific
features (for now at least).

If you or any of your coworkers feel especially motivated, support for
PCLMULQDQ would be hip too ;)

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Building and running NSS for Android.

2012-07-10 Thread Ryan Sleevi
On Tue, July 10, 2012 12:32 pm, Robert Relyea wrote:
  On 07/09/2012 02:03 PM, Anders Rundgren wrote:
  Ian,
  Pardon me if I was a bit terse in my response.
 
  What I meant was simply that Operating Systems manage
  critical resources but only occasionally keys.  That is,
  access to persistent keys should only be done through
  OS calls like it has been the case for files since at
  least 40 years back.  However, keys have other properties
  than files but that still don't make the concept bad; just
  different.
 
  Example: A key may be owned by a user but it might still not
  be granted access by all the user's applications because the
  key is (in most cases) provided by another party.  NSS and JDK
  seems to be severely lagging in this respect.
 
  I don't think porting NSS to Android necessarily is a prerequisite
  for porting Firefox to Android.  IMO, it is rather a disadvantage
  with multiple keystores and systems.
 
  Anders

  I think you have misunderstood what I was doing.

  To date both android and chrome already use NSS ports in android, it's
  just built in their environment. What I've done is set up NSS so we can
  build it stand alone (in the NSS environment) and also to build the NSS
  tools so we can run the NSS tests. This is for 2 reasons 1) to have a
  big endian platform in our regular tinderbox, and 2) have a tinderbox
  test for one of the major platforms FF is already supporting.

  bob

Small clarification: Chrome does not use NSS on Android.

The discussion so far of Chrome+Android has been in the context of how
Chrome runs its unit tests on Android devices, since there are quite a few
similarities between Chrome's and Firefox's/NSS's testing needs. Both use
buildbot, both have to cross-compile with the NDK, both require tests that
involve network services that cannot be hosted on the device, etc. As NSS
is integrated into the Mozilla tinderbox, I've just been trying to provide
guidance and experience for how we've solved similar problems.

Cheers,
Ryan

 
  On 2012-07-06 12:54, Anders Rundgren wrote:
  On 2012-07-06 10:29, ianG wrote:
  On 6/07/12 16:14 PM, Anders Rundgren wrote:
  On 2012-07-06 01:51, Robert Relyea wrote:
  I've gotten NSS to build and mostly run the tests for Android.
  Cool!
 
 
  There are
  still a number of tests failing, so the work isn't all done, but it
  was
  a good point to snapshot what I had.
  How does this compare/interact with Android's built-in key-store?
 
  I'm personally unconvinced that security subsystems running in the
  application's/user's own security context represent the future since
  they don't facilitate application-based access control unless each
  application does its own enrollment.
 
  The way I see this is that security subsystems running in the
  app/user's
  own security context is sub optimal for development cost purposes.
  And,
  ???
 
  running in the platform's security context is sub optimal for security
  motives.
  I'm not sure I understand the rationale here.
 
  Where the sweet spot is tends to vary and isn't really a universally
  answerable question.
  Anders
 
  iang
 
 
 


  --
  dev-tech-crypto mailing list
  dev-tech-crypto@lists.mozilla.org
  https://lists.mozilla.org/listinfo/dev-tech-crypto


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-25 Thread Ryan Sleevi
Sean,

The path building logic/requirements/concerns you described are best
described within RFC 4158, which has been mentioned previously.

As Brian mentioned in the past, this was 'lumped in' with the description
of RFC 5280, but it's really its own thing.

libpkix reflects the union of RFC 4158's practices and RFC 5280's
requirements. As you note in your spreadsheet, libpkix already implements
the majority of 5280 (at least, the parts important to browsers / commonly
used in PKIs, including Internet PKIs). While libpkix tries for some of
4158, it isn't exactly the most robust, nor is 4158 the end-all and be-all
of path building strategies.

I believe that over time, it would be useful (ergo likely) to implement
some of the scoring logic described in 4158 and hand-waved at by
Microsoft's CryptoAPI documentation, rather than its current logic of just
applying its checkers to see if the path MIGHT be valid in a DFS search,
so that libpkix returns not just a good path, but a close-to-optimal path,
and can also provide diagnostics for the paths not taken.
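
For readers unfamiliar with the shape of that search, here is a minimal
depth-first sketch. Cert, find_issuers(), is_trust_anchor(), and
passes_checkers() are hypothetical stand-ins for the cert database / AIA
lookups, trust store, and per-cert checkers - not actual libpkix symbols:

    typedef struct Cert {
        const char *subjectDN;
        const char *issuerDN;
        struct Cert *next;  /* links candidates from find_issuers() */
    } Cert;

    extern Cert *find_issuers(const char *issuerDN); /* DB and/or AIA */
    extern int   is_trust_anchor(const Cert *c);
    extern int   passes_checkers(const Cert *c);     /* name/policy/etc. */

    #define MAX_CHAIN 20  /* loop guard for cross-certified meshes */

    /* Fills chain[] leaf-first; returns the chain length on success, 0
     * on failure. Backtracking happens as the recursion unwinds. */
    int build_path(Cert *current, Cert *chain[], int depth)
    {
        if (depth >= MAX_CHAIN || !passes_checkers(current))
            return 0;
        chain[depth] = current;
        if (is_trust_anchor(current))
            return depth + 1;
        for (Cert *c = find_issuers(current->issuerDN); c; c = c->next) {
            int n = build_path(c, chain, depth + 1);
            if (n)
                return n;  /* first viable path wins - no 4158 scoring */
        }
        return 0;
    }

The scoring logic described in RFC 4158 would essentially replace that
"first viable path wins" step with an ordering/weighting of the candidate
issuers before recursing.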

Ryan

  I ended up writing a lot of text in response to this post, so, I am
  breaking up the response into three mini-responses.

  Part I

  On 1/18/2012 4:23 PM, Brian Smith wrote:
Sean Leonard wrote:
The most glaring problem however is that when validation fails, such
as in the case of a revoked certificate, the API returns no
certificate chains
   
My understanding is that when you are doing certificate path
  building, and you have to account for multiple possibilities at any
  point in the path, there is no partial chain that is better to return
  than any other one, so libpkix is better off not even trying to return a
  partial chain. The old code could return a partial chain somewhat
  sensibly because it only ever considered one possible cert (the best
  one, ha ha) at each point in the chain.
   

  For our application--and I would venture to generalize that for all
  sophisticated certificate-using applications (i.e., applications that
  can act upon more than just valid/not valid)--more information is a
  lot better than less.

  I have been writing notes on Sean's Comprehensive Guide to Certification
  Path Validation. Here's a few paragraphs of Draft 0:

  Say you have a cert. You want to know if it's valid. How do you
  determine if it's valid?

  A certificate is valid if it satisfies the RFC 5280 Certification Path
  Validation Algorithm. Given:
  * a certification path of length n (the leaf cert and all certs up to
  the trust anchor--in RFC 5280, it is said that cert #1 is the one
  closest to the trust anchor, and cert n is the leaf cert you're
  validating),
  * the time,
  * policy-stuff, -- hand-wavy because few people in the SSL/TLS world
  worry about this but it's actually given a lot of space in the RFC
  * permitted name subtrees,
  * excluded name subtrees,
  * trust anchor information (issuer name, public key info)

  you run the algorithm, and out pops:
  * success/failure,
  * the working public key (of the cert you're validating),
  * policy-stuff, -- again, hand-wavy
  and anything else that you could have gleaned on the way.


  But, this doesn't answer the obvious initial question: how do you
  construct a certification path of length n if you only have the
  initial cert? RFC 5280 doesn't prescribe any particular algorithm, but
  it does have some requirements (i.e., if you say you support X, you MUST
  support it by doing it Y way).

  Certification Path Construction is where we get into a little bit more
  black art and try to make some tradeoffs based on speed, privacy,
  comprehensiveness, and so forth.

  Imagine that you know all the certificates ever issued in the known
  universe. Given a set of trust anchors (ca name + public key), you
  should be able to draw lines from your cert through some subset of
  certificates to your trust anchors. What you'll find is that you've got
  a big tree (visually, but not necessarily in the computer science sense;
  it's actually a directed acyclic graph), where your cert is at the root
  and the TAs are at the leaves. The nodes are linked by virtue of the
  fact that the issuer DN in the prior cert is equal to the subject DN in
  the next cert, or to the ca name in the trust anchor.

  Practically, you search the local database(s) for all certificates whose
  subject matches the issuer DN in question. If no certificates (or in your
  opinion, an insufficient number of certificates) are returned, then you
  will want to resort to other methods, such as using the caIssuers AIA
  extension (HTTP or LDAP), looking in other remote stores, or otherwise.

  The ideal way (Way #1) to represent the output is by a tree, where each
  node has zero or more children, and the root node is your target cert.
  In lieu of a tree, you can represent it as an array of cert paths
  (chains) (way #2). Way #2 is the way that Microsoft
  CertGetCertificateChain validation function returns 

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Ryan Sleevi
(resending from the correct address)

  On 01/04/2012 03:51 PM, Brian Smith wrote:
  Ryan Sleevi wrote:
  IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation,
  while non-libpkix is not. That isn't to say the primitives don't exist
  -
  they do, and libpkix uses them - but that the non-libpkix path doesn't
  use
  them presently, and some may be non-trivial work to implement.
  It would be helpful to get some links to some real-world servers that
  would require Firefox to do complex path building.
  Mostly in the government. They hire 3rd parties to replace our current
  path processing because it is non-conformant. In the real world, FF is
  basically holding the web back because we are the only major browser
  that is not RFC compliant! We should have had full pkix processing 5
  years ago!


To echo what Bob is saying here, in past work I saw problems on a weekly
basis with non-3280 validating libraries within the areas of government,
military, and education - and these are not just US-only problems. The
'big ideas' of PKI tended not to take off commercially, especially in the
realm of ecommerce, but huge amounts of infrastructure and energy have been
dedicated to the dream of PKI elsewhere.

While you talk about the needs of Firefox with regards to NSS' future, I
think it is important to realize that libpkix is the only /open/
implementation (at least, as far as I know) that even comes close to
3280/5280, at least as is available to C/C++ applications. The next
closest is probably Peter Gutmann's cryptlib, which unfortunately is not
widely used in open-source projects. Note, for other languages, you have
Sun/Oracle's Java implementation (which libpkix mirrors a very early
version of, as discussed in the libpkix history) and the Legion of the
Bouncy Castle's C# implementation.

These are the same customers who are often beholden to keep IE 6/ActiveX
around for legacy applications. So while much energy is being put forth
(including from Microsoft) to move these organizations to 'modern' systems
that can support a richer web, if their security needs can't be met by
Firefox, then there will be a problem (or, like Bob said, they'll make
their own - and weigh that as a cost against switching from MSFT).

A couple examples would include the GRID project (which uses a
cross-certified mesh - http://www.igtf.net/), the US government's Federal
PKI Bridge CA (
https://turnlevel.com/wp-content/uploads/2010/05/FederalBridge.gif ), and
the DOD/DISA's PKI setup. The layout of the DOD PKI is fairly similar to
those among various European identity card PKIs, with added
cross-certification for test roots so that third-parties can develop
interoperable software.

However, even outside the spectrum of government/enterprise, you still see
issues that 3280/5280 address better than the current non-libpkix
implementation. EV certificates (and soon, the CA/B Forum Baseline
Specifications) rely on proper policy OID validation - but the failure to
match the OID is not a validation failure, it's just a sign of a 'lesser'
level of identity assurance. CA key rollover is incredibly common.
Likewise, as CAs buy each other out, you end up with effectively bridge or
mesh topologies where they cross-certify each other for legacy systems.

As far as non-TLS-compliant servers, I think that's an
oversimplification. It relies on the assumption that 1) There is one and
only one root certificate 2) the server knows all the trust anchors of the
client. Both statements can be shown to be demonstrably false (just look
at how many cross-certified verisign or entrust roots there are, due to CA
key rollovers). So there is no reasonable way for a server to send a
client a 'complete' chain, nor to send them a chain that they can know
will validate to the clients trust anchors. At best, only the EE cert
matters.

For all of these reasons, I really do think libpkix is a huge step forward
- and its many nuances and bugs can be things we should work on solving,
rather than trying to determine some minimal set of functionality and
graft that onto the existing pre-libpkix implementation.

Speaking with an individual hat on, there are only a few reasons I can
think of why Chromium /wouldn't/ want to use libpkix universally on all
supported platforms:
1) On Windows, CryptoAPI simply is a more robust (5280 compliant) and
extendable implementation - and many of these government/enterprise
sectors have extended it, in my experience, so having Chromium ignore
those could be problematic.
2) On Mac, I haven't had any time to explore developing a PKCS#11 module
that can read Keychain/CDSA-based trust anchors and trust settings.


I would be absolutely thrilled to be able to use libpkix for the Mac
implementation - Apple's path building/chain validation logic is horrid
(barely targets RFC 2459), and they're on their way to deprecating every
useful API that returns meaningful information, over-simplifying it to
target the iOS market. This has been a sore point for many Apple users

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-03 Thread Ryan Sleevi
Snip
  Are there any other benefits?

IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation, while
non-libpkix is not. That isn't to say the primitives don't exist - they
do, and libpkix uses them - but that the non-libpkix path doesn't use them
presently, and some may be non-trivial work to implement.

One benefit of libpkix is that it reflects much of the real world
experience and practical concerns re: PKI that were distilled in RFC 4158.
I also understand that it passes all the PKITS tests (
http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html ),
while non-libpkix does not (is this correct?)

Don't get me wrong, I'm not trying to be a libpkix apologist - I've had
more than my share of annoyances (latest is http://crbug.com/108514#c3 ),
but I find it much more predictable and reasonable than some of the
non-3280 implementations - both non-libpkix and entirely non-NSS
implementations (eg: OS X's Security.framework)

The problem that I fear is that once you start trying to go down the route
of replacing libpkix, while still maintaining 3280 (or even better, 5280)
compliance, in addition to some of the path building (/not/ verification)
strategies of RFC 4158, you end up with a lot of 'big' and 'complex' code
that can be a chore to maintain because PKI/PKIX is an inherently hairy
and complicated beast.

So what new value is being sought? As best I can tell, the concern seems
to be that libpkix is big, scary (macro-based error handling galore), and
has bugs, with only a few people having the expert/domain knowledge of the
code to fix them. Does a new implementation solve that by
much?

From your list of pros/cons, it sounds like you're primarily focused on
the path verification aspects (policies, revocation), but a very important
part of what libpkix does is the path building/locating aspects (depth
first search, policy/constraint based edge filtering, etc). While it's not
perfect ( https://bugzilla.mozilla.org/show_bug.cgi?id=640892 ), as an
algorithm it's more robust than the non-libpkix implementation in my
experience.

  As for #5, I don't think Firefox is going to be able to use libpkix's
  current OCSP/CRL fetching anyway, because libpkix's fetching is serialized
  and we will need to be able to fetch revocation for every cert in the
  chain in parallel in order to avoid regressing performance (too much) when
  we start fetching intermediate certificates' revocation information. I
  have an idea for how to do this without changing anything in NSS, doing
  all the OCSP/CRL fetching in Gecko instead.

A word of caution - this is a very contentious area in the PKIX WG. The
argument is that a correct implementation should only trust data as far
as it can throw it (or as far as it can be chained to a trusted root).
Serializing revocation checking by beginning at the root and then working
down /is/ the algorithm described in RFC 3280 Section 6.3. In short, the
argument goes that you shouldn't be trusting/operating on ANY information
from the intermediate until you've processed the root - since it may be a
hostile intermediate.

libpkix, like CryptoAPI and other implementations, defers revocation
checking until all trust paths are validated, but even then checks
revocation serially/carefully.
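
In code terms, that 6.3-style ordering looks roughly like the following
minimal sketch; the types and the fetch_and_check_status() helper are
hypothetical placeholders, not NSS APIs:

    typedef struct Cert Cert;  /* opaque certificate handle */
    typedef enum { REV_GOOD, REV_REVOKED, REV_UNKNOWN } RevStatus;

    /* Hypothetical: fetch OCSP/CRL data for `cert`, validate it against
     * the already-verified `issuer`, and return the status. */
    extern RevStatus fetch_and_check_status(const Cert *cert,
                                            const Cert *issuer);

    /* chain[0] is the cert directly under the trust anchor; chain[n-1]
     * is the leaf. Checking proceeds root-to-leaf, so no OCSP/CRL data
     * signed by a CA is consulted before that CA itself has been found
     * unrevoked. */
    RevStatus check_chain_revocation(const Cert *anchor,
                                     const Cert *chain[], int n)
    {
        const Cert *issuer = anchor;
        for (int i = 0; i < n; i++) {
            RevStatus s = fetch_and_check_status(chain[i], issuer);
            if (s != REV_GOOD)
                return s;  /* fail serially, as described above */
            issuer = chain[i];
        }
        return REV_GOOD;
    }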

Now, I recognize that such an approach/interpretation is not universally
agreed upon, but I just want to make sure you realize there is a reasoning
for the approach it currently uses. For some people, even AIA chasing is
seen as a 'bad' idea - even if, in practice, every sane user agent does it
because of so many broken TLS implementations/webservers out there.

While not opposed to exploring, I am trying to play the proverbial devil's
advocate for security-sensitive code used by millions of users, especially
for what sounds at first blush like a 'cut our losses' proposal.

Ryan




-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


RE: Question about pathlen extension checked

2011-09-20 Thread Ryan Sleevi
My reading of RFC 3280/5280 and from implementation experience with NSS,
CryptoAPI, OpenSSL, and other implementations is that no, that is not
correct.

CA:TRUE with a pathlen:0 is conformant to RFCs 3280/5280. The most common
cause for this would be for a CA certifying an intermediate, but that
intermediate should not be allowed to further mint new intermediates. The
intermediate will be flagged with CA:TRUE, pathlen:0.

CA:FALSE with a pathlen:anything is non-conformant.

CA:TRUE with pathlen omitted indicates there are no constraints on the
length of the path (pathlen: -1).

basicConstraints MUST NOT be omitted for CA certificates (or more aptly,
certificates which sign other certificates). It MUST be present and MUST be
critical.

basicConstraints MAY be omitted for end-entity certificates, and if
present, it MAY be critical. The absence of basicConstraints is the same as
CA:FALSE with no pathlen.

Again, CA:TRUE, pathlen:0 = A CA certificate that can mint any number of
certificates, but none of those certificates may be used to sign other
certificates (aka: no certificate issued may be used as an intermediate)
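
In validation code, the check boils down to something like this minimal
sketch. It assumes isCA/pathLen were already decoded from
basicConstraints, uses -1 for an absent pathLenConstraint, and ignores
the self-issued special case for simplicity:

    typedef struct {
        int isCA;     /* basicConstraints cA boolean */
        int pathLen;  /* pathLenConstraint; -1 when the field is absent */
    } BasicConstraints;

    /* chain[0] is at the trust anchor end, chain[n-1] is the end-entity
     * cert. Enforces that every issuing cert is a CA and that no CA is
     * followed by more intermediates than its pathLenConstraint allows. */
    int check_path_len(const BasicConstraints chain[], int n)
    {
        for (int i = 0; i < n - 1; i++) {
            if (!chain[i].isCA)
                return 0;  /* a non-CA cert issued another cert */
            int intermediates_below = (n - 2) - i; /* between i and leaf */
            if (chain[i].pathLen >= 0 &&
                intermediates_below > chain[i].pathLen)
                return 0;  /* pathLenConstraint exceeded */
        }
        return 1;
    }

So CA:TRUE, pathlen:0 directly above the end-entity cert passes
(intermediates_below is 0), while any intermediate inserted beneath it
fails the check.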

 -Original Message-
 From: dev-tech-crypto-bounces+ryan-
 mozdevtechcrypto=sleevi@lists.mozilla.org [mailto:dev-tech-crypto-
 bounces+ryan-mozdevtechcrypto=sleevi@lists.mozilla.org] On Behalf
 Of Ralph Holz (TUM)
 Sent: Tuesday, September 20, 2011 1:51 PM
 To: mozilla-dev-tech-cry...@lists.mozilla.org
 Cc: mozilla's crypto code discussion list
 Subject: Re: Question about pathlen extension checked
 
 Hi,
 
 Thanks for the replies, it's very much appreciated. It takes careful
 reading of RFC 3280 if you don't want to miss the crucial distinction
 between intermediate certificate on the path and certificate on the
 path - thanks for the highlighting.
 
 My conclusion from all this is that the many certs with CA:TRUE and
 pathlen:0 are not conformant, but not able to operate as CAs, either.
 Right?
 
 Interesting that there are so many, tho.
 
 Thanks,
 Ralph
 --
 dev-tech-crypto mailing list
 dev-tech-crypto@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-tech-crypto

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


RE: Question about pathlen extension checked

2011-09-19 Thread Ryan Sleevi
 
 On 09/18/2011 03:15 AM, Ralph Holz (TUM) wrote:
  Hi,
 
  does NSS check the pathlength extension in an issuing certificate?
 yes.
I am particularly wondering if pathlen:0 is honoured.
 According to the spec, which means no limit. NSS limits the size of the
 total chain to prevent loop attacks, so in practice you can't have an
 'infinite' pathlen, but our chain limit is quite large, and you are
 likely to run into protocol issues using chains of that size.
 
 If you really want pathlen of '0', then just set the isCA bit to
 FALSE ;).
 

Bob, is that a correct reading of pathlen:0?

RFC 3280 4.2.1.10 reads:

   The pathLenConstraint field is meaningful only if the cA boolean is
   asserted and the key usage extension asserts the keyCertSign bit
   (section 4.2.1.3).  In this case, it gives the maximum number of non-
   self-issued intermediate certificates that may follow this
   certificate in a valid certification path.  A certificate is self-
   issued if the DNs that appear in the subject and issuer fields are
   identical and are not empty.  (Note: The last certificate in the
   certification path is not an intermediate certificate, and is not
   included in this limit.  Usually, the last certificate is an end
   entity certificate, but it can be a CA certificate.)  ***A
   pathLenConstraint of zero indicates that only one more certificate
   may follow in a valid certification path.***  Where it appears, the
   pathLenConstraint field MUST be greater than or equal to zero.  Where
   pathLenConstraint does not appear, no limit is imposed.

An absent pathLenConstraint means unlimited, but a pathLenConstraint of 0
is 0 or 1 more certs. (Emphasis in ***)

RFC 5280 4.2.1.9 tweaks the text, with the following change to the starred
text:

   A pathLenConstraint of zero indicates that no non-
   self-issued intermediate CA certificates may follow in a valid
   certification path.

This just restates the same requirement - the chain from a CA with a pathLen
of 0 must terminate in the issued cert - it may not contain any
intermediates.

Further, in context of Ralph's conversations on the Cryptography mailing
list, the pathLenConstraint only matters if cA is TRUE. cA being FALSE and
asserting pathLenConstraint makes no sense/is not conformant. For that, see
the remainder of 4.2.1.10/4.2.1.9:

   CAs MUST NOT include the pathLenConstraint field unless the cA
   boolean is asserted and the key usage extension asserts the
   keyCertSign bit.

So if you're looking at certs, they assert cA is TRUE, and have a pathLen of
0, they MAY be used to issue certificates with cA FALSE (end-entity
certificates), but MAY NOT be used to issue certificates with cA TRUE
(intermediates).

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


RE: Certificate login in Firefox - how does it work?

2010-11-28 Thread Ryan Sleevi
 -Original Message-
 From: dev-tech-crypto-bounces+ryan-
 mozdevtechcrypto=sleevi@lists.mozilla.org [mailto:dev-tech-crypto-
 bounces+ryan-mozdevtechcrypto=sleevi@lists.mozilla.org] On Behalf
 Of Matej Kurpel
 Sent: Sunday, November 28, 2010 11:24 AM
 To: mozilla's crypto code discussion list
 Subject: Re: Certificate login in Firefox - how does it work?
 

[snip]
 On 26. 11. 2010 22:20, ryan-mozdevtechcry...@sleevi.com wrote:
  While I've not spent time hacking with PKCS#11, my understanding is
  that the C_Sign function should be treating the input as raw/opaque,
  dictated by the mechanism that was used to initialize. If you're
  relying on the input being in a particular format, you need to ensure
  that format is specified in the underlying PKCS#11 specification for
  that mechanism, otherwise it sounds like you're making assumptions that
  shouldn't be made.
 This assumption is made by NSS and not by me. When signing e-mail in
 Thunderbird, it sends DigestInfo (with DER-encoded OID and Hash value),
 and when performing a SSL login, it sends raw data. The mechanism used
 is always CKM_RSA_PKCS. I don't have a bulletproof way to determine
 which of these two cases it is.

What I meant to say is that your attempt to interpret the data being sent in
is an assumption: that the data is meaningful. According to the PKCS#11
specification, you are just supposed to sign the data as provided, provided
it meets the constraints imposed (l <= k - 11). You're correct that, in the
case of Thunderbird, it will send a full DigestInfo structure to be signed
[1], while in the case of TLS client auth, it only sends the hashes [2]. In
an ideal world, your PKCS#11 module should not ascribe any meaning to that
data.

Please consider reviewing the PKCS#11 specification again [3]. For
CKM_RSA_PKCS, it reads "This mechanism corresponds only to the part of PKCS
#1 v1.5 that involves RSA; it does not compute a message digest or a
DigestInfo encoding as specified for the md2withRSAEncryption and
md5withRSAEncryption algorithms in PKCS #1 v1.5". The actual specification
for PKCS #1 v1.5 is at [4]. Section 10.1 details how signatures are
computed, and includes this: "The signature process consists of four steps:
message digesting, data encoding, RSA encryption, and
octet-string-to-bit-string conversion." When combined with what the PKCS#11
specification says for how the CKM_RSA_PKCS method behaves, it becomes clear
that the mechanism should only apply the last two steps: RSA encryption,
and octet-string-to-bit-string conversion.

The PKCS #1 v1.5 specification details the RSA encryption portion of
signatures in section 10.1.3, which states that it should be computed using
the behaviour described in Section 8.1 with a block type of 01 and a private
key as described in Section 7. Ignoring the private key format as unrelated
to this discussion, Section 8.1 describes the Encryption behaviour. While
not intending to quote the whole thing, the format is described as:

A block type BT, a padding string PS, and the data D shall be formatted into
an octet string EB, the encryption block.

  EB = 00 || BT || PS || 00 || D .   (1)

Putting it all together, when PK11_Sign/C_Sign is called with CKM_RSA_PKCS,
what you are provided as input is D. It may be a DigestInfo, where the
caller has computed the hash of the original message M, and then encoded
both it and the hash mechanism OID into the structure, as Thunderbird does
and as specified by PKCS #7. But it may also be a bare hash, as described
in TLS v1.0, where the DigestInfo is omitted and D is the concatenation of
both the MD5 and SHA-1 hashes. Or it may be neither - simply raw data that
should be encrypted, perhaps by using some new method that accommodates some
weakness in the PKCS #1 v1.5 encryption/signature method. [5]
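
For concreteness, a minimal sketch of building EB for the signature case
(BT = 01). This is only the "data encoding" step; the RSA private-key
operation on EB happens afterwards:

    #include <string.h>

    /* Builds EB = 00 || 01 || PS || 00 || D into eb[0..k-1], where k is
     * the RSA modulus length in bytes and PS is at least 8 bytes of
     * 0xFF. D is opaque: a DigestInfo (PKCS #7), the 36-byte MD5||SHA-1
     * concatenation (TLS 1.0), or anything else the caller chooses. */
    int pkcs1_type1_pad(unsigned char *eb, size_t k,
                        const unsigned char *d, size_t dlen)
    {
        if (k < 11 || dlen > k - 11)  /* the length constraint above */
            return 0;
        size_t pslen = k - 3 - dlen;
        eb[0] = 0x00;
        eb[1] = 0x01;                 /* BT = 01: private-key operations */
        memset(eb + 2, 0xFF, pslen);  /* PS */
        eb[2 + pslen] = 0x00;
        memcpy(eb + 3 + pslen, d, dlen);
        return 1;
    }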

In terms of conceptualizing the relationship between PKCS#11's CKM_RSA_PKCS
and the PKCS #1 v1.5 specification, PK11_Sign is better thought of as
PK11_Encrypt - the behaviour of the mechanism is specified by Section 8.1,
Encryption, rather than Section 10.1, Signatures.

As I mentioned, CryptoAPI does not expose this raw functionality - it only
allows encryption of previously computed hashes, and it will compute the
DigestInfo for you before encrypting. However, as I mentioned, you can
suppress the computing the DigestInfo by passing CRYPT_NOHASHOID. Further,
you can import the hash, generated via NSS, into CryptoAPI, by using
CALG_SSL3_SHAMD5 and using CryptSetHashParam(HP_HASHVAL). This works because
your input data, D, is 36 bytes, the same length as CALG_SSL3_SHAMD5, and
presumably the CSP allows you to set HP_HASHVAL. The net result of this is
the behaviour you desire.

The reason why this is important is that lets say another application wishes
to use your PKCS #11 module. Rather than implementing PKCS #7 or TLS 1.0, it
implements the vendor-specific Sleevinet protocol (The future of the
Intertubes). The Frooble message of the 

RE: Certificate login in Firefox - how does it work?

2010-11-27 Thread Ryan Sleevi
 On 2010-11-26 13:20 PDT, ryan-mozdevtechcry...@sleevi.com wrote:
 [snip]
  And to save you a bit of trouble/pain: for CryptoAPI, you cannot
  simply sign raw data - you can only sign previously hashed data. I
  understand this to mean that you cannot write a pure PKCS#11 -
  CryptoAPI mapper, whether .NET or at the raw Win32 level, because the
  CryptoAPI specifically forbids signing raw data of arbitrary length,
  while PKCS#11 permits it [7]. Your best bet, and a common approach
 for
  the specific case of TLS client authentication, is to combine
  CryptCreateHash/CryptSetHashParam(HP_HASHVAL)/CryptSignHash.
 [snip]
  [7] http://msdn.microsoft.com/en-us/library/aa380280(VS.85).aspx
 
 Ryan, Thanks for your comprehensive answer to Matej's question.
 I suspect that not many readers of this list are very familiar with the
 crypto capabilities of .NET.  Speaking of
 CryptSetHashParam(HP_HASHVAL),
 http://msdn.microsoft.com/en-us/library/aa380270(VS.85).aspx says:
 
  HP_HASHVAL.
 
  A byte array that contains a hash value to place directly into the
 hash
  object. [snip]
 
  Some cryptographic service providers (CSPs) do not support this
  capability.
 
 Do you know which, if any, of Microsoft's CSPs do not support it?
 
 --
 /Nelson Bolyard
 --

No, unfortunately I don't know - although I have not had any particular
problems with this approach using the stock RSA CSP, PROV_RSA_SCHANNEL
[1]. Since the concept of a CSP provider type may not be clear, a single DLL
may implement multiple CSP types - as does the standard Microsoft DLL - each
type representing a different set of required algorithms, key lengths,
cipher modes, etc that MUST be implemented [2]. My understanding is that any
provider implementing the PROV_RSA_SCHANNEL CSP type MUST support/implement
this sequence [3], but CSPs / implementations of other provider types are
neither required to nor particularly encouraged to.
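
For reference, acquiring that provider type looks roughly like this (my
sketch; flags such as CRYPT_MACHINE_KEYSET depend on where the key lives):

    #include <windows.h>
    #include <wincrypt.h>

    HCRYPTPROV hProv = 0;
    /* Open the stock Microsoft SChannel RSA CSP; a container name can be
     * passed instead of NULL to select a specific key container. */
    if (!CryptAcquireContext(&hProv, NULL, MS_DEF_RSA_SCHANNEL_PROV,
                             PROV_RSA_SCHANNEL, 0)) {
        /* handle GetLastError() */
    }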

The Microsoft Smart Card Minidriver Certification requirements detail the
API sequence for both CNG (Cryptography Next Generation, the Vista+ API)
and CryptoAPI for SSL client authentication (RSA) [4]. The
sequence I described above is the sequence that Microsoft has documented as
part of their Sequence Tests for conformant smart card minidrivers, so it
can be presumed to be widely supported. A minidriver isn't required for a
smart card to work with Windows, so it's not a MUST support, but the
sequence is fairly well documented by Microsoft as being how they do it, so
any vendor wanting to interoperate with Internet Explorer and the like will
almost certainly implement it. It's also worth noting that, in the CNG world,
this sequence is completely different, due to a whole set of new APIs meant
to abstract SSL/TLS [5], and the concept of mapping to PKCS#11 really goes
out the window.

I should again note that my response was really in the context of SSL client
authentication, which was the specific case of Matej's original question,
and does not readily generalize to a pure PKCS#11 mapping. This is because
the HP_HASHVAL must be the same length as the output of the hash algorithm,
whereas the PKCS#11 mapping allows any size, up to k - 11, where k is the
length in bytes of the RSA modulus (for a 2048-bit modulus, k = 256, so up
to 245 bytes). In addition to the documentation you
referenced, there is also the note in CPSetHashParam [6], which is the
low-level function that a CSP exports for CryptoAPI to call when the
high-level function is called. The note for HP_HASHVAL there includes the
following: "This parameter gives applications the ability to sign hash
values, without having access to the base data. Because the application and
the user have no idea what is being signed, this operation is intrinsically
risky. It is envisioned that most CSPs will not support this parameter."

And, to bring it back to NSS, this is the approach that exists in NSS's
CryptoAPI wrapper/PKCS#11 provider [7], which also briefly comments on some
of the above and previously discussed behaviours. The wrapper wraps the
CKM_RSA_PKCS mechanism, as well as basic object certificate retrieval and
storage, if I recall correctly, so it's a useful reference of sorts for this
conversation. I'm not aware whether this was actually deployed or shipped -
the code exists almost verbatim as the day it was checked in - but it still
exists in NSS CVS, so it is worth mentioning. It also works around a couple
of other CryptoAPI gotchas (e.g. CryptSignHash outputs the signature
little-endian, rather than big-endian), so I'd encourage anyone interested
in finding out more to check it out.
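
For illustration, the fix for that particular gotcha is just a byte
reversal - a hypothetical helper of mine, not lifted from the wrapper:

    /* CryptSignHash emits the RSA signature little-endian; PKCS #11 (and
     * most everything downstream) expects big-endian, so reverse it. */
    static void
    ReverseInPlace(unsigned char *buf, unsigned int len)
    {
        unsigned int i;
        for (i = 0; i < len / 2; i++) {
            unsigned char tmp = buf[i];
            buf[i] = buf[len - 1 - i];
            buf[len - 1 - i] = tmp;
        }
    }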

Hope that helps,
Ryan

[1] http://msdn.microsoft.com/en-us/library/aa388168(VS.85).aspx
[2] http://msdn.microsoft.com/en-us/library/aa380244(VS.85).aspx 
[3] http://msdn.microsoft.com/en-us/library/aa387710(VS.85).aspx 
[4] http://www.microsoft.com/whdc/device/input/smartcard/sc-minidriver_certreqs.mspx , Section 2.5.8.1
[5] http://msdn.microsoft.com/en-us/library/ff468652(VS.85).aspx 
[6] http://msdn.microsoft.com/en-us/library/aa379855(VS.85).aspx 

RE: Fwd: Hi, I have three questions about embed bank CA cert in Firefox

2010-07-21 Thread Ryan Sleevi
 -Original Message-
 From: dev-tech-crypto-bounces+ryan-
 mozdevtechcrypto=sleevi@lists.mozilla.org [mailto:dev-tech-crypto-
 bounces+ryan-mozdevtechcrypto=sleevi@lists.mozilla.org] On Behalf
 Of Gervase Markham
 Sent: Wednesday, July 21, 2010 1:22 PM
 To: dev-tech-crypto@lists.mozilla.org
 Cc: Amax Guan
 Subject: Re: Fwd: Hi, I have three questions about embed bank CA cert
 in Firefox
 
 On 21/07/10 07:26, Amax Guan wrote:
  But if you generate a user Certificate that's issued by a untrusted
 CA,
  there will be an alert popup.
 
 Can some NSS or PSM hacker explain why this is?
 
 Gerv

While I'm neither an NSS nor a PSM hacker, the implementation details (in
mozilla-central) are at [1].

If any certs beyond the user cert are supplied, then ImportValidCACerts() is
called. The certificates are all imported as temporary certificates, then
each certificate is tested to see if a chain can be built [2].

The simple way to work around this would be to only supply the user's
certificate in the application/x-x509-user-cert (since the user's cert is
not placed through this verification logic) OR, first supply (and have the
user install) the CA certificate of the issuing authority using the
previously recommended application/x-x509-ca-cert.

As for a good answer why, I can only speculate, but I suspect some code
paths would be affected by blindly importing certificates without first
vetting their chain. For example, a malicious party could supply a certificate that
appeared to be the same as a valid intermediate CA certificate (except that
the signature was wrong, naturally). If that certificate ended up being
selected during chain building/locating by subject/etc (pre-libpkix/STAN),
then it would cause connections using that intermediate to fail (a DoS).

Actually, a little CVS blame digging, and it turns out this is the case. See
[3]

[1] http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/src/nsNSSCertificateDB.cpp#885
[2] http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/src/nsNSSCertificateDB.cpp#772
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=249004



-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


RE: Restricting SSL cert issuance within specified domain

2010-06-02 Thread Ryan Sleevi
 That's great news! Is there a corresponding bug number or other way I
 can track the progress to see which version of NSS it gets into?

https://bugzilla.mozilla.org/show_bug.cgi?id=394919

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto