Re: Cipher suites, signature algorithms, curves in Firefox

2016-05-05 Thread Brian Smith
Zoogtfyz  wrote:

> This is my recommendation for changes to the supported cipher suites in
> Mozilla Firefox. I performed rigorous compatibility testing and everything
> works as advertised. I used Firefox telemetry data, SSL Pulse data, and my
> own tests to verify that *not a single* publicly accessible website would
> get handshake errors compared to today.
>

Awesome.

>
> Reasoning:
> 1) Too many people put 256-bit CBC cipher suites at higher priority than
> 128-bit AEAD cipher suites because they don't know what they are doing.
>

Agreed.


> 2) 256-bit AES cipher suites have known issues compared to 128-bit AES
> cipher suites. It is not well studied yet how much those issues apply to
> the cipher suite implementation in TLS. Given that 256-bit GCM cipher
> suites will not be added to Firefox, it is better to disable 256-bit AES
> cipher suites completely.
>

When I wrote the part of https://briansmith.org/browser-ciphersuites-01 regarding
AES-256, I didn't do a good job. A lot of people have interpreted what I
wrote as saying AES-256 is bad or worse cryptographically than AES-128.
That isn't what I meant to write. AES-256 still has some significant
advantages over AES-128. In particular, the larger key size helps w.r.t.
quantum computers. Further, the larger key size helps in preventing some
multi-user attacks. Even if we think that these merits are small, others do
not think they are small, and so there will always be websites that prefer
AES-256. Also, with AES-NI and similar optimizations on CPUs, AES-256 is
not too much slower than AES-128.

So, I don't think that dropping AES-256 is the right thing to do. Instead,
the ECDHE-AES-256-GCM cipher suites should be added to Firefox. Note that
they were just recently added to Google Chrome.


> 3) DHE (not ECDHE) cipher suites are far too often implemented incorrectly,
> most often with default common DH primes, DH parameter reuse, or generally
> weak bit strength (equivalent to 1024-bit RSA, which is already considered
> insecure in Firefox). Hence it's better to remove support for DHE (not
> ECDHE) cipher suites rather than give a false sense of security.
>

I agree. I think if people want non-ECC DHE cipher suites, then at a
minimum we need to define new cipher suite IDs for them that imply keys of
at least 2048 bits. Unless/until that happens, they are more trouble than
they are worth.
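As a rough illustration of the policy described above (hypothetical helper
names, not actual Firefox or NSS code), a client could classify and reject
weak DHE groups like this:

```python
# Hypothetical sketch: classify a server's DHE group by prime size.
# Thresholds mirror the discussion above: a DH prime under 2048 bits
# is roughly equivalent to sub-2048-bit RSA and should be rejected.

def classify_dhe_prime(bits: int) -> str:
    """Return a coarse security rating for a DH prime of `bits` bits."""
    if bits < 1024:
        return "broken"       # export-grade or worse
    if bits < 2048:
        return "weak"         # ~1024-bit-RSA equivalent; reject
    return "acceptable"       # 2048 bits or larger

def accept_dhe(bits: int) -> bool:
    """Policy sketch: only negotiate DHE with >= 2048-bit primes."""
    return classify_dhe_prime(bits) == "acceptable"
```

New cipher suite IDs that *imply* a minimum key size, as suggested above, would
make this check unnecessary because the guarantee would be part of the handshake.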

Note that Chrome recently reached the same conclusion.


> Additionally, Firefox 45esr currently supports these signature algorithms
> in this ordering:
> SHA256/RSA, SHA384/RSA, SHA512/RSA, SHA1/RSA, SHA256/ECDSA, SHA384/ECDSA,
> SHA512/ECDSA, SHA1/ECDSA, SHA256/DSA, SHA1/DSA
>
> I recommend changing it to these in this ordering:
> SHA512/ECDSA, SHA512/RSA, SHA384/ECDSA, SHA384/RSA, SHA256/ECDSA,
> SHA256/RSA, SHA1/ECDSA, SHA1/RSA
>

I suggest you read the text that Google's David Benjamin added to the TLS
1.3 draft regarding this.

Also, see
https://groups.google.com/d/msg/mozilla.dev.security.policy/smAUN2Rtc78/EuQoSyvmAwAJ
where I argued something similar.

> Reasoning:
> 1) *not a single* publicly accessible website uses DSA (not ECDSA)
> signatures anymore.


I agree that DSA should be dropped.


> 3) Ordering from strongest to weakest, as opposed to what it is today.
>

There are other considerations to take into account other than "strength",
as David Benjamin's proposal and my suggestion linked above show.


> Additionally, Firefox 45esr currently supports these elliptic curves in
> this ordering:
> secp256r1, secp384r1, secp521r1
>
> I recommend removing support for secp521r1 since it is not supported in
> the wild, Chrome does not support it, and we should be moving away from
> secp curves to e.g. x25519. Once again, *not a single* publicly accessible
> website breaks with this change.
>

I agree. See https://bugzilla.mozilla.org/show_bug.cgi?id=1128792.

Is your test data set and code available somewhere? It sounds interesting.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: AES-256 vs. AES-128

2015-11-30 Thread Brian Smith
Julien Vehent  wrote:

> The original thread [1] had a long discussion on this topic. The DJB batch
> attack redefines the landscape, but does not address the original concerns
> around AES-256 resistance. To me, the main question is to verify whether
> AES-256 implementations are at least as resistant as AES-128 ones, in which
> case the doubled key size provides a net benefit, and preferring it is a
> no-brainer.
>
> [1]
> http://www.mail-archive.com/dev-tech-crypto@lists.mozilla.org/msg11247.html


The discussion above was biased in favor of what was best for FirefoxOS and
FxAndroid.

That discussion also didn't account for the emergence of ChaCha20-Poly1305.
I believe it now makes more sense for a server to prefer 256-bit cipher
suites than it did when I wrote the messages above, but ChaCha20-Poly1305
needs to be taken into consideration to account for ARM clients. And
unfortunately most software (OpenSSL in particular) isn't ready for
ChaCha20-Poly1305 yet.

It may be useful to compare the processing cost of AES-128, AES-256, and
gzip/deflate when making your case. In particular, if you are compressing
every response then the difference between AES-128 and AES-256 probably
doesn't matter much to you.

Regarding the batch attack mentioned by DJB, make sure you understand how
it does and does not apply to TLS. See [1] and [2] and note how
client_write_IV/server_write_IV are used.

[1] https://www.ietf.org/mail-archive/web/tls/current/msg15573.html
[2] https://www.ietf.org/mail-archive/web/tls/current/msg16088.html
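To make the client_write_IV/server_write_IV point concrete, here is a sketch
(not NSS code) of how TLS 1.2 partitions the PRF output into per-direction
keys and implicit IVs, per RFC 5246 section 6.3. For AES-128-GCM the MAC keys
are empty, the keys are 16 bytes, and the fixed IVs are 4 bytes; those
per-connection random IVs salt the GCM nonce, which is what limits the
applicability of the batch attack to TLS:

```python
# Sketch of RFC 5246 section 6.3 key_block partitioning for an
# AEAD suite such as AES-128-GCM. The order of the slices is fixed
# by the spec: MAC keys, then write keys, then fixed IVs.

def split_key_block(key_block: bytes, mac_len=0, key_len=16, iv_len=4):
    parts = {}
    offset = 0
    for name, length in [("client_write_MAC", mac_len),
                         ("server_write_MAC", mac_len),
                         ("client_write_key", key_len),
                         ("server_write_key", key_len),
                         ("client_write_IV", iv_len),
                         ("server_write_IV", iv_len)]:
        parts[name] = key_block[offset:offset + length]
        offset += length
    return parts
```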

Cheers,
Brian
-- 
https://briansmith.org/


Re: PKCS#11 platform integration

2015-05-11 Thread Brian Smith
David Woodhouse dw...@infradead.org wrote:

 The sysadmin should be able to configure things for *all* users
 according to the desired policy, rather than forcing each user to set
 things up for themselves.

 And in turn the *developers* of the operating system distribution
 should be able to set a default policy for the sysadmin to build upon.


Actually, this is the opposite of Firefox's policy. Firefox *intentionally*
doesn't do that. It may be possible to hack things to make it work (I
believe RHEL and Fedora do something like that already, for example), but
those hacks violate the spirit, if not the letter, of the Firefox trademark
policy regarding unauthorized modifications of Firefox. And, also, AFAICT,
those kinds of hacks may stop working at any time.

Said differently, there is nothing special about Linux. Just as Firefox
intentionally doesn't use Windows's central certificate trust database on
Windows, and just as it doesn't use Mac OS X's central certificate trust
database on Mac OS X, it shouldn't use a Linux distro's central certificate
trust database.

Put yet another way, it is basically Mozilla's policy to make sysadmins'
and Linux distros' jobs difficult in this area, because doing so is sort of
required for Mozilla to maintain autonomy over its root CA inclusion
policy. Thus, fixing this kind of problem is actually harmful.

That said, of course it would be nice if smart cards and client
certificates worked automatically, but those improvements need to be in
such a way that they wouldn't change the trust and non-trust of server
certificates.

Cheers,
Brian


Re: Problems with FF and internal certificates

2015-05-04 Thread Brian Smith
On Fri, May 1, 2015 at 9:11 AM, Tanvi Vyas tv...@mozilla.com wrote:

  On Apr 27, 2015, at 2:03 PM, Michael Peterson 
 michaelpeterson...@gmail.com wrote:
  Now, in the album I posted above (https://imgur.com/a/dmMdG), the last
 two screenshots show a packet capture from Wireshark. It appears that
 Firefox does not support SHA512, which is kind of supported by this article
 (
 http://blogs.technet.com/b/silvana/archive/2014/03/14/schannel-errors-on-scom-agent.aspx).
 I'm not exactly sure this is true, and it seems like a silly thing for
 Firefox to drop support though (this previously worked), especially if
 every other browser in the world supports this.
 
  So there's everything we've found, and some of my assumptions. Does
 anyone know what is actually going on with Firefox? Is this a bug? Are we
 doing something wrong? How do we fix this?


SHA-384 is essentially SHA-512 truncated to 384 bits (computed with
different initial hash values).

I guess your ECDSA certificate is using the P-384 curve. If so, your
SHA-512 digest is truncated to ~384 bits in order to work with the P-384
curve. (If you are using the P-256 curve, then it is truncated to ~256
bits.)

Consequently, there's no advantage to using SHA-512 instead of SHA-384.
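The truncation step described above can be sketched as follows (illustrative
code, not what NSS does internally): ECDSA uses only the leftmost bits of the
digest, up to the bit length of the curve order, so with P-384 a SHA-512
digest is effectively cut down to 384 bits before signing.

```python
import hashlib

# Sketch of the digest-truncation step in ECDSA (X9.62 / RFC 6979):
# only the leftmost curve_bits bits of the hash feed into the
# signature computation.

def truncated_digest(message: bytes, hash_name: str, curve_bits: int) -> bytes:
    digest = hashlib.new(hash_name, message).digest()
    return digest[:curve_bits // 8]   # keep the leftmost curve_bits bits
```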

Cheers,
Brian


Re: FF 37 - ssl_error_no_cypher_overlap with java SSL and java generated self-signed certificates

2015-04-08 Thread Brian Smith
Gervase Markham g...@mozilla.org wrote:
 On 07/04/15 17:32, Hanno Böck wrote:
 Are you using DSA? Firefox removed DSA recently (which is good - almost
 nobody uses it and it's a quite fragile algorithm when it comes to
 random numbers).

 Hanno's probably hit the nail on the head here.
 https://bugzilla.mozilla.org/show_bug.cgi?id=1073867 was fixed in
 Firefox 37.

The removal of the DSS-based *cipher suites* was
https://bugzilla.mozilla.org/show_bug.cgi?id=1107787, which was also
done in Firefox 37.

Cheers,
Brian

Re: What's My Chain Cert?

2015-03-27 Thread Brian Smith
Rob Stradling rob.stradl...@comodo.com wrote:
 The README [1] says:
 If multiple certificate chains are found, the shortest one is used.

 That's a good strategy for a browser to employ when deciding which chain to
 show in its certificate viewer, but it's unlikely to be the best strategy
 when configuring a server.
 There's often a cross-certificate issued by an older root to a newer root.
 For compatibility with browsers that don't trust the newer root, the server
 should send that cross-certificate too (even though it isn't part of the
 shortest chain).

 Using the longest available chain isn't always the correct strategy either
 though.

See also CloudFlare's cfssl bundle tool, which has an option to
build the most client-compatible cert chain bundle:
https://github.com/cloudflare/cfssl.

Cheers,
Brian


Re: Remove Legacy TLS Ciphersuites from Initial Handshake by Default

2015-03-16 Thread Brian Smith
Ryan Sleevi ryan-mozdevtechcry...@sleevi.com wrote:
 On Mon, March 16, 2015 1:06 pm, Erwann Abalea wrote:

  Phase RSA1024 out? I vote for it. Where's the ballot? :)

 This is a browser-side change. No ballot required (the only issue *should*
 be non-BR compliant certificates issued before the BR effective date)

 https://code.google.com/p/chromium/issues/detail?id=467663 for Chrome, but
 unfortunately, can't share the user data as widely. Perhaps Mozilla will
 consider collecting this as part of their telemetry (if they aren't
 already)

The Fx telemetry bug is
https://bugzilla.mozilla.org/show_bug.cgi?id=1049740 and the Fx bug
for removing support for 1024-bit certificates is
https://bugzilla.mozilla.org/show_bug.cgi?id=1137484.

Cheers,
Brian


Re: Interested in reviving PSS support in NSS

2015-02-16 Thread Brian Smith
Hanno Böck ha...@hboeck.de wrote:
 Brian Smith br...@briansmith.org wrote:
 Having new oids with sane pre-defined parameters would vastly simplify
 things. Back when I wrote that code I thought changing the standard is
 harder than implementing the non-optimal spec, but I might've been
 wrong.

To clarify: I'm suggesting that you parse the raw RSA-PSS parameters
from the signature and from the public key into a tuple
(hashAlgorithm, maskGenAlgorithm, saltLength) like you normally would.
Then, for certain tuples, define OIDs that are only used internally in
NSS to identify (using the SECOidTag representation) that combination.
These OIDs would never be seen on the wire.

This would mean, in addition, that instead of having an rsaPSSKey
type, that we'd have an rsa_PSS_SHA256_MGF1SHA256_32_key type and an
rsa_PSS_SHA384_MGF1SHA384_48_key type.
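A minimal sketch of that internal mapping (the tag names below are
illustrative stand-ins for SECOidTag values, not actual NSS identifiers):

```python
# Hypothetical sketch: the parsed (hashAlgorithm, maskGenAlgorithm,
# saltLength) tuple from the signature/key maps to a fixed internal
# identifier. Only whitelisted combinations are supported; everything
# else is rejected. These OIDs never appear on the wire.

SUPPORTED_PSS_PARAMS = {
    ("sha256", "mgf1-sha256", 32): "rsa_PSS_SHA256_MGF1SHA256_32",
    ("sha384", "mgf1-sha384", 48): "rsa_PSS_SHA384_MGF1SHA384_48",
}

def internal_pss_tag(hash_alg: str, mgf_alg: str, salt_len: int):
    """Return the internal tag for a parsed PSS parameter tuple,
    or None if the combination is unsupported."""
    return SUPPORTED_PSS_PARAMS.get((hash_alg, mgf_alg, salt_len))
```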

 Such an RFC could also just declare that keys not divisible by 8 are
 disallowed and thereby fix that problem as well.

Sure, but in practice it isn't a problem. Everybody's been doing the
same for RSA-PKCS#1.5 forever already.

 I don't really know what channels I'd have to go through to pursue
 such a preset-OID. Can an OID be defined by an RFC? How does the
 interaction between the OID registration and RFCs work? Is this
 something the CFRG would do or some other entity in the IETF?

As I mentioned above, you don't need to define these OIDs in an RFC,
since they would exist only for the purpose of fitting into NSS's API.

The purpose of the RFC would be to nail down which (hashAlgorithm,
maskGenAlgorithm, saltLength) are allowed and mandatory to support for
certificates.

Note that Microsoft's documentation hints that they implemented
RSASSA-PSS-SHA256 using the tuple (SHA256, MGF1-SHA1, 20) instead of
(SHA256, MGF1-SHA256, 32) like I would expect. Perhaps their
RSASSA-PSS-SHA384 is (SHA384, MGF1-SHA1, 20) too? If so, is it more
important to interop with them than to have all the parameters match
in the intuitive way?

Cheers,
Brian

Re: Interested in reviving PSS support in NSS

2015-02-15 Thread Brian Smith
Ryan Sleevi ryan-mozdevtechcry...@sleevi.com wrote:
   - It assumes all the parameters can be expressed via a SECOidTag. That
 is, it's missing hash alg, mgf alg, salt length (e.g. the
 RSASSA-PSS-params construction)

I believe there are only a small number of (hashAlgorithm, mgf alg,
salt length) combinations that need to be supported, namely these two:

(sha256, mgf1-SHA256, 32 bytes)
(sha384, mgf1-SHA384, 48 bytes)

I think that in NSS, these combinations can be identified internally
with some new OID, perhaps in the Netscape OID tree.

Note that the PSS RFC says that SHA-1 is the default for everything.
By not supporting SHA-1 at all, we avoid having to deal with any
implicit encodings of the various parameters. The PSS RFC also says
that SHA-1 is mandatory, but that silliness is just an invitation for
somebody to get their name as an author of a new, reasonable, RFC.

Thoughts?

Cheers,
Brian


Re: Interested in reviving PSS support in NSS

2015-02-15 Thread Brian Smith
[+antoine]

Hanno Böck ha...@hboeck.de wrote:
 Unfortunately the code never got fully merged. Right now the state is
 that code for the basic functions exists in freebl, but all upper layer
 code is not merged.

There are multiple upper layers and, depending on your goals, some
should be prioritized higher than others.

 I think if I remember correctly the code currently
 in freebl will also not work in some corner cases (keys mod 8 != 0).

IIUC, this is not urgent to support and may not be worth supporting at
all. IIRC, there are lots of places in NSS and mozilla::pkix that
explicitly reject keys and signatures that are not multiples of 8
bits.

 The bugtracker entry is here:
 https://bugzilla.mozilla.org/show_bug.cgi?id=158750

That bug is too big and messy to make sense of at this point. Also,
some of the patches that haven't been checked in yet should be split
up. I suggest that you proceed as follows:

1. Split 000a-pss-verification-v15.diff into two patches: One part
that adds the pk11wrap functionality, and a separate part that adds
the cryptohi functionality. Put each new patch in its own new NSS bug.

2. Move 0009-add-pk11-mgfmap-v3.diff, 000b-pss-sign-v15.diff, and
000c-tests-v2.diff to a new bug.

3. Move 0012-fix-pss-verification-for-uncommon-keysizes-v5.diff to a
new bug, which will have low priority.

4. Close the existing bug as RESOLVED FIXED.

Even with all the above patches landed, Firefox and other Gecko-based
applications will not accept PSS signatures for certificates. Of the
above patches, only the patch to add PK11_VerifyWithSigAlg is relevant
to Gecko. New patches for mozilla::pkix and for its test suite, which
basically duplicate all the work in the rest of the patches mentioned
above, would be needed. But...

 I want to make a proposal to get PSS support into TLS
 1.3 and it would certainly help if I could say that all major TLS
 libraries support it already.

First somebody needs to create a reasonable specification detailing
exactly which subset of the PSS specification should be supported for
TLS. The current PSS specification allows *way* too much flexibility
and also has terrible defaults. I believe Antoine and his team have a
good idea of what a reasonable subset of PSS would look like. I
recommend working with him to develop such a spec. Without such a
spec, I wouldn't support adding PSS support to mozilla::pkix.

Cheers,
Brian

Re: Reducing NSS's allocation rate

2014-11-11 Thread Brian Smith
On Mon, Nov 10, 2014 at 9:04 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 In your analysis, it would be better to use a call stack trace depth
 larger than 5 that allows us to see what non-NSS function is calling
 into NSS.

 I've attached to the bug a profile that uses a stack trace depth of 10.

Unfortunately, 10 isn't enough to see the non-NSS entry (one that
doesn't start with security/nss/) for every case. However, it looks
like the data supports the types of changes that you are making and
also my suggestions for coalescing and caching results, as well as my
suggestion to avoid constructing CERTCertificate objects. Depending on
how much effort you're willing to invest in this, many (probably most)
of those allocations can be avoided. David Keeler is very familiar
with the code in security/certverifier and security/manager/ssl/src
that would be changed to implement the additional things I suggested,
so I suggest you talk to him about it.

Cheers,
Brian


Re: Reducing NSS's allocation rate

2014-11-10 Thread Brian Smith
On Mon, Nov 10, 2014 at 6:51 PM, Nicholas Nethercote
n.netherc...@gmail.com wrote:
 I've been doing some heap allocation profiling and found that during
 basic usage NSS accounts for 1/3 of all of Firefox's cumulative (*not*
 live) heap allocations. We're talking gigabytes of allocations in
 short browsing sessions. That is *insane*.

 I filed https://bugzilla.mozilla.org/show_bug.cgi?id=1095272 about
 this. I've written several patches that fix problems, one of which has
 r+ and is awaiting checkin; check the dependent bugs.

In your analysis, it would be better to use a call stack trace depth
larger than 5 that allows us to see what non-NSS function is calling
into NSS.

The checks done in mozilla::pkix's CheckPublicKeySize can easily be
optimized. But, first check how often the call stack contains
CheckPublicKey vs VerifySignedData; CheckPublicKey can be optimized
even more than VerifySignedData.

My original plans for VerifySignedData was for it to have a cache
added to it, if/when performance testing showed that there was a
performance problem. It is likely that such a cache is important, even
without the heap thrashing that you are concerned about.

Also, there is already a bug on file about caching and coalescing SSL
server cert verification results in SSLServerCertVerification. This is
trickier than the type of caching you can do in VerifySignedData but
it is potentially a bigger win. Also, I think recent changes to
Gecko's connection management (the fix that restricts the parallelism of
connections to a new host to 1) made it more important to do at least the
coalescing part.

Note that when bug 1036103 is fixed (which will be basically whenever
I get around to posting one more patch), it will be possible to avoid
any of the NSS CERT_* API during certificate verification, if people
are willing to do a little (probably quite a bit, actually)
refactoring.

Note that, except for the calls to
SECKEY_DecodeDERSubjectPublicKeyInfo and SECKEY_ExtractPublicKey in
CheckPublicKeySize, mozilla::pkix allocates no memory at all, ever
(once CheckNameConstraints is replaced, which is the thing that is one
patch away from happening).

Cheers,
Brian


Re: Road to RC4-free web (the case for YouTube without RC4)

2014-10-22 Thread Brian Smith
On Sun, Jun 29, 2014 at 11:18 AM, Hubert Kario hka...@redhat.com wrote:

 The number of sites that prefer RC4 while still supporting other ciphers
 are
 very high (18.6% in June[1], effectively 21.3% for Firefox[6]) and not
 changing much. The percent of servers that support only RC4 is steadily
 dropping (1.771% in April[3], 1.194% in May[2], 0.985% in June[1]).

 Because of that, disabling RC4 should be possible for many users. The big
 exception for that was YouTube video servers[4] which only recently gained
 support for TLS_RSA_WITH_AES_128_GCM_SHA256.


Sorry that I couldn't say more earlier, but please see this message from
Adam Langley of Google about YouTube working on
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256:

http://www.ietf.org/mail-archive/web/tls/current/msg14112.html

And TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 support is coming -- it's already
enabled in some locations.

Cheers,
Brian


Re: Announcing Mozilla::PKIX, a New Certificate Verification Library

2014-10-05 Thread Brian Smith
On Thu, Oct 2, 2014 at 9:03 AM,  davpj...@ozemail.com.au wrote:
 Maybe there is something that can be done to help this situation? Maybe these 
 old private certificates need to be cleaned out on upgrade? Or maybe 
 something in the code that is going nuts trying to validate these private 
 certificates needs to be fixed?

There is already a bug report about this issue:
https://bugzilla.mozilla.org/show_bug.cgi?id=1056341

Cheers,
Brian


Re: Announcing Mozilla::PKIX, a New Certificate Verification Library

2014-08-05 Thread Brian Smith
On Tue, Aug 5, 2014 at 9:51 AM,  mjle...@gmail.com wrote:
 Since updating to 31, I have not been able to log into a self signed web page:

 Secure Connection Failed

 An error occurred during a connection to taiserver:444. Certificate key usage 
 inadequate for attempted operation. (Error code: 
 sec_error_inadequate_key_usage)

 How do I get this corrected?

Please file a bug at
https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Security:%20PSM
(Product: Core, Component: PSM) and attach the certificate in question
to the bug report.

Cheers,
Brian


David Keeler is now the module owner of PSM

2014-08-01 Thread Brian Smith
Hi,

Amongst other things, PSM is the part of Gecko (Firefox) that connects
Gecko to NSS and other crypto bits.

David Keeler has taken on most of the responsibility for keeping
things in PSM running smoothly and so it makes sense to have him be
the module owner. After asking the other PSM module peers, I went
ahead and made that change:

https://wiki.mozilla.org/Modules/Core#Security_-_Mozilla_PSM_Glue

Congratulations David!

Cheers,
Brian


ChaCha20-Poly1305 in Gecko/Firefox (was Re: Road to RC4-free web (the case for YouTube without RC4))

2014-07-10 Thread Brian Smith
On Thu, Jul 10, 2014 at 4:53 AM, Henri Sivonen hsivo...@hsivonen.fi wrote:

 On Tue, Jul 1, 2014 at 11:58 PM, Brian Smith br...@briansmith.org wrote:
  I am interested in discussing what we can do to help more server side
  products get better cipher suites by default, and on deciding whether we
  add support for ChaCha20-Poly1305.

 Out of curiosity, what's holding back a decision to implement
 ChaCha20-Poly1305?


As you probably know, Google Chrome already ships some ChaCha20-Poly1305
cipher suites. They have a patch that they apply on top of NSS to implement
them. I recently asked a couple of our friends on the Chrome team about
contributing that patch to NSS proper. Apparently, the implementation of
those cipher suites diverges from the current or some expected future draft
of the IETF specification. Consequently, it isn't clear that it is a good
idea to drop that patch into NSS as-is. And, if we modify the patch to
match the current/future IETF documents then Firefox wouldn't be able to
interoperate with *.google.com using ChaCha20-Poly1305.

So, either we'd have to decide on having Firefox implement an
already-obsolete variant of the cipher suites (temporarily, of course) or
we'd have to find some partner sites (perhaps still *.google.com) that are
willing to speak the new variants of the cipher suites, for it to be
useful. This may require updated patches for OpenSSL in order for those
servers to even be able to do that.

Also, Chromium has a patch on top of NSS that allow the browser to
dynamically reorder the cipher suite list presented in the Client Hello
message. Chromium uses this in order to put the ChaCha20-Poly1305 cipher
suites ahead of the AES-GCM cipher suites on platforms that are lacking AES
and/or GCM processor instructions. That is, usually ChaCha20-Poly1305 is
ordered ahead of AES-GCM on ARM but AES-GCM is ahead of ChaCha20-Poly1305
on x86. We'd have to decide whether that would be appropriate for Firefox
and if so we'd need to add that functionality to NSS.
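The reordering logic described above can be sketched like this (illustrative
code, not Chromium's actual patch; the suite names are the standard IANA ones):

```python
# Sketch: dynamically order the Client Hello cipher suite list based
# on whether the CPU has AES/GCM instructions. Without them (typical
# older ARM), ChaCha20-Poly1305 is both faster and safer against
# timing side channels, so it goes first.

AES_GCM = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
CHACHA = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"

def order_suites(has_aes_hw: bool) -> list:
    """Return the AEAD suites in preference order for this CPU."""
    return [AES_GCM, CHACHA] if has_aes_hw else [CHACHA, AES_GCM]
```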

So, what initially looked like a minor amount of effort turned into a more
significant effort. If there is somebody interested in taking this on, I
would be very happy to help them with it.

Cheers,
Brian


Curve25519 and/or Curve41417 and/or Alternatives in Gecko/Firefox (was Re: Road to RC4-free web (the case for YouTube without RC4))

2014-07-10 Thread Brian Smith
On Thu, Jul 10, 2014 at 5:33 AM, Kurt Roeckx k...@roeckx.be wrote:

 [snip]
 An other alternative is using curve25519.  It's also not standardized yet, 
 but at this time it seems more likely to be standardized first.

Thanks for bringing up curve25519. I'd like to share a recent paper
written by Daniel J. Bernstein, Chitchanok Chuengsatiansup, and Tanja
Lange:

  "Curve41417: Karatsuba revisited."
  http://cr.yp.to/papers.html#curve41417

Section 1.5, "Is high security useful?", is particularly interesting.

I think it is likely the case that Curve25519 solves the wrong
problem*: it tries to be faster than NIST P-256 but only the same
strength, but I think a new standard curve should be the same speed as
NIST P-256 but much stronger. My thinking is that now, when Curve25519
isn't an option, everybody is using P-256 without significant
performance complaints. This shows that we don't really need something
faster than P-256. Further, as the paper states in section 1.5, there
are quite a few reasons to want to have a security level higher than
~125 bits, if we can get it with reasonable performance and without
compromising other security goals, which we apparently can, according
to this paper.

By the way, an extra notable merit of this paper is that the authors
focused on ARM performance.

I would like to hear what others think about this, including what
people think Gecko should do.

Cheers,
Brian

* Besides performance, Curve25519 solves other problems, but in
general all of the other new alternatives like curve41417 solve them
too.


Re: Road to RC4-free web (the case for YouTube without RC4)

2014-07-10 Thread Brian Smith
On Thu, Jul 10, 2014 at 5:00 AM, Hubert Kario hka...@redhat.com wrote:
 - Original Message -
 From: Brian Smith br...@briansmith.org

snip

 However, it is likely that crypto libraries that make the two changes above
 will also have support for TLS_ECDHE_*_WITH_AES_*_GCM cipher suites too.
 So, I hope that they also enable TLS_ECDHE_*_WITH_AES_*_GCM at the same
 time they deploy these changes.

snip

 What basis do you have to assume that server administrators will actually
 upgrade their Apache/nginx/lighttpd/OpenSSL/etc. installations?

In this thread you pointed out that a number of websites had updated
their servers to add TLS_RSA_WITH_AES*_GCM* and disable
TLS_RSA_WITH_*_CBC_*, so that Firefox now only negotiates RC4 with
them when it could be negotiating AES-GCM. The fact that they updated
their servers to add non-ECDHE AES-GCM support is good evidence that
these server administrators are paying attention and are likely to
update if/when their server software vendor gives it to them if it
solves a need (like improving what Firefox negotiates), right?

Regarding your request about how to write the addon: I don't have time
to work on that addon, but I know it is possible to write it.

Cheers,
Brian


Re: Road to RC4-free web (the case for YouTube without RC4)

2014-07-09 Thread Brian Smith
On Tue, Jul 1, 2014 at 7:15 PM, Julien Pierre julien.pie...@oracle.com
wrote:

 On 7/1/2014 14:05, Brian Smith wrote:

 I think, in parallel with that, we can figure out why so many sites are
 still using TLS_ECDHE_*_WITH_RC4_* instead of TLS_ECDHE_*_WITH_AES* and
 start the technical evangelism efforts to help them. Cheers, Brian

 The reason for sites choosing RC4 over AES_CBC might be due to the various
 vulnerabilities against CBC mode, at least for sites that support TLS 1.0 .
 I think a more useful form of evangelism would be to get sites to stop
 accepting SSL 3.0 and TLS 1.0 protocols.


Servers that cannot, for whatever reason, support the AES-GCM cipher
suites, should be changed to prefer AES-CBC cipher suites over RC4-based
cipher suites at least for TLS 1.1 and later.

Most sites are not going to stop accepting SSL 3.0 and/or TLS 1.0 any time
soon, because they want to be compatible with Internet Explorer on Windows
XP and other software that doesn't support TLS 1.1+.

However, in the IETF, there is an effort, spearheaded by our friends at
Google, for solving the downgrade problem:
http://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00

This simple feature, if implemented by the browser and by the server,
allows the server to recognize that the browser has tried a non-secure
downgrade to a lower version of TLS. Once the server recognizes that, the
server can reject the downgraded connection. The net effect is that,
assuming modern browsers quickly add support for this mechanism, the server
can be ensure that it only uses CBC cipher suites with modern browsers over
TLS 1.1 or later and that it never uses RC4-based cipher suites with modern
browsers (in conjunction with the prefer AES-CBC cipher suites over RC4
cipher suites change I suggest above).
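The signaling involved can be sketched as a toy model (this is illustrative pseudologic, not NSS code; the SCSV code point and the `inappropriate_fallback` behavior follow the draft, while the handshake machinery itself is elided):

```python
# Toy model of the downgrade-SCSV mechanism: the client appends a special
# cipher suite value to its ClientHello whenever the attempted version is
# lower than the best version it supports, and a server that could have
# negotiated a higher version rejects such a hello.

TLS_FALLBACK_SCSV = 0x5600
TLS_1_0, TLS_1_1, TLS_1_2 = 0x0301, 0x0302, 0x0303

def client_hello(offered_version, best_version, suites):
    """Build the cipher suite list for an attempt at offered_version."""
    suites = list(suites)
    if offered_version < best_version:  # this attempt is a fallback retry
        suites.append(TLS_FALLBACK_SCSV)
    return offered_version, suites

def server_accepts(server_best, offered_version, suites):
    """Reject a fallback hello when a higher version was possible."""
    if TLS_FALLBACK_SCSV in suites and offered_version < server_best:
        return False  # real servers send an inappropriate_fallback alert
    return True

# First attempt at TLS 1.2 succeeds as usual.
print(server_accepts(TLS_1_2, *client_hello(TLS_1_2, TLS_1_2, [0xC02F])))  # True

# A downgraded retry at TLS 1.0 is rejected by a TLS 1.2-capable server...
version, suites = client_hello(TLS_1_0, TLS_1_2, [0xC02F])
print(server_accepts(TLS_1_2, version, suites))  # False

# ...but still works against a genuinely TLS 1.0-only server.
print(server_accepts(TLS_1_0, version, suites))  # True
```

The key property is visible in the last two calls: the SCSV only has an effect when both endpoints could have done better, so it never breaks connections to servers that genuinely support only the lower version.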

However, it is likely that crypto libraries that make the two changes above
will also have support for TLS_ECDHE_*_WITH_AES_*_GCM cipher suites too.
So, I hope that they also enable TLS_ECDHE_*_WITH_AES_*_GCM at the same
time they deploy these changes.

FWIW, I filed bugs [1][2] for adding support for
draft-ietf-tls-downgrade-scsv-00 to NSS, Gecko, and Firefox.

Cheers,
Brian

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1036737
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1036735


Re: Road to RC4-free web (the case for YouTube without RC4)

2014-07-09 Thread Brian Smith
On Wed, Jul 2, 2014 at 5:28 AM, Hubert Kario hka...@redhat.com wrote:

  On 7/1/2014 14:05, Brian Smith wrote:
   I think, in parallel with that, we can figure out why so many sites
   are still using TLS_ECDHE_*_WITH_RC4_* instead of
   TLS_ECDHE_*_WITH_AES* and start the technical evangelism efforts to
   help them. Cheers, Brian
  The reason for sites choosing RC4 over AES_CBC might be due to the
  various vulnerabilities against CBC mode, at least for sites that
  support TLS 1.0 .

 problem is that to support AES-GCM and ECDHE you need the very newest
 versions of both Apache and OpenSSL.


 If you have older Apache, you do get TLS 1.2 and you do get SHA-256
 suites, but you can't use ECDHE.


It depends on what distro you are using and how old an Apache you are
talking about. Debian has shown it is relatively straightforward to
backport ECDHE support to Apache 2.2.x, so I think other distros will also
be able to do so. I'm sure it isn't a trivial effort, but it is definitely
worthwhile.


 You also can't set different cipher order for TLS1.1 and up and TLS1.0
 and lower.


The software can be changed to add this feature, and those changes can be
backported.


 So a server that has order like this:
 DHE-RSA-AES128-GCM-SHA256
 DHE-RSA-AES128-SHA256
 AES128-GCM-SHA256
 AES128-SHA256
 RC4-SHA
 DHE-RSA-AES128-SHA
 AES128-SHA

 will negotiate RC4 with Firefox. About 2% of servers have such a
 configuration.


I understand. But, I think the best way of accommodating those servers is
for the server software vendor to provide a (semi-)automatic update that
enables the TLS_ECDHE_*_WITH_AES*_GCM_* cipher suites.
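As an illustration of what such an update could ship, the server order quoted above could be moved to an ECDHE-and-GCM-first policy with a mod_ssl fragment along these lines (a hypothetical sketch; cipher names are OpenSSL's, and which names are actually available depends on the OpenSSL and Apache versions in use):

```apache
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:AES128-GCM-SHA256:AES128-SHA256:DHE-RSA-AES128-SHA:AES128-SHA:RC4-SHA
```

With `SSLHonorCipherOrder on`, the server's preference wins, so a client that offers both ECDHE-GCM and RC4 will get ECDHE-GCM.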

Cheers,
Brian


Re: Road to RC4-free web (the case for YouTube without RC4)

2014-07-09 Thread Brian Smith
On Wed, Jul 2, 2014 at 5:08 AM, Hubert Kario hka...@redhat.com wrote:

 Also, see Gavin's email here about adding such prefs in general. He
  basically says, Don't do it. Note that Gavin is the Firefox module
 owner:
 
 https://groups.google.com/d/msg/mozilla.dev.platform/PL1tecuO0KA/e9BbmUAcRrwJ

 As Benjamin notes,
 an add-on is a much better way to suggest people customize these
 things, and writing an add-on that sets a pref is trivial.

 So you'd accept code that is able to change this preference but doesn't
 expose it through about:config?

 I'm more than willing to create such a patchset and extension pair.


It is already possible to write such an extension without any new Firefox
APIs, because extensions can call NSS functions.

Cheers,
Brian


Re: Intent to unimplement: proprietary window.crypto functions/properties

2014-06-27 Thread Brian Smith
On Fri, Jun 27, 2014 at 9:19 AM, David Keeler dkee...@mozilla.com wrote:

 On 06/27/2014 07:37 AM, Nathan Kinder wrote:
  On 06/27/2014 12:13 AM, Frederik Braun wrote:
  To be frank, I have only ever seen the non-standard crypto functions
  used in attacks, rather than in purposeful use.
 
  That doesn't mean they aren't being purposefully used.  The current
  crypto functions are used pretty heavily by Dogtag Certificate System
  [1], and this has been the case for many years.
 
  I believe that one of the big things lacking in WebCrypto is a suitable
  replacement for generateCRMFRequest(), which allows for key escrow.  I'm
  not certain if an addon will be able to replace this functionality.

 Looking at the working draft of the spec[0], there are functions to
 generate, export, and wrap keys, so it looks like webcrypto can be used
 to implement key escrow (unless I'm misunderstanding the term).
 Again, though, addons can pretty much do anything, so if webcrypto isn't
 up to the task, an addon should be able to fill the gap.


The issue is that the WebCrypto API uses a totally separate keystore from
the X.509 client certificate keystore (and if it doesn't, it should), and
the stuff that Red Hat does is about client certificates. AFAICT, WebCrypto
doesn't yet have any mechanism for accessing the client certificate store,
and it isn't clear if/when that would be added.

However, an addon would be able to do these things, because the addon could
literally just use the crypto code that you are proposing to remove,
without the DOM parts. Now that Firefox is minimizing the amount of NSS
that is exposed from libnss3 and friends on non-Linux platforms, the addon
will probably need to use platform-specific APIs to access the operating
system keystore on those platforms. I think that is a good idea anyway,
because Firefox (and Thunderbird) should be using the native OS keystore
for client certificates and S/MIME certificates.

As far as whether it is OK to remove functionality that some websites are
depending on: In this case, I think you can remove functionality that
Chrome currently doesn't support (using the same APIs or different APIs)
without hesitation. For example, I don't think Chrome can do the key escrow
thing so I don't see why Firefox needs to support it either. The advantages
of deleting the code outweigh the value that Firefox gains from supporting
those things.

We had a conversation about this a year ago on this mailing list and AFAICT
nobody has made any effort at W3C to standardize anything related to the
functionality for which you are proposing removal. I think that is a good
indication of how unimportant it is. So, +1 from me.

Cheers,
Brian


Re: Announcing Mozilla::PKIX, a New Certificate Verification Library

2014-04-28 Thread Brian Smith
On Mon, Apr 28, 2014 at 4:45 PM, Erwann Abalea eaba...@gmail.com wrote:

 The chain builder can test all possible issuers until it finds a valid one
 (that's what OpenSSL does, for example). The AKI is only here to say
 pssst, this is most probably the certificate you should try first.


Right. We need to measure whether our lack of support for AKI/SKI, which
causes us to do such a brute-force search, actually causes real-world
performance concerns. I am hoping that it doesn't matter so that we can
remove AKI/SKI from the WebPKI X.509 profile.

There's another missing check on the new PKIX lib, PrivateKeyUsagePeriod
 extension. It's been declared as deprecated in RFC2459 and 3280, isn't
 mentioned anymore in RFC5280, but it's still defined in X.509, and used on
 some places, such as CSCA (e-passports, where CA renewal with rekey also
 happen on a regular basis).


CSCA is out of scope for mozilla::pkix, at least at this time. More
generally, PrivateKeyUsagePeriod and other deprecated features, including
*all* of the proprietary Netscape/NSS extensions like Netscape Cert Type,
are also out of scope. Note that new features like the Must-Staple
extension *are* within scope and can/should/will be added.


  I agree, it's not mandatory at all. And even if a bunch of certificates
 is sent along the EE cert, the RP is supposed to take them as potential
 candidates for its chain building algorithm. Potential candidates only.


Exactly. (In the code, the candidate issuer is called potentialIssuer.)

Thanks for looking so closely at the code. Please let me know if you have
any questions that would help with your investigation of it.

Cheers,
Brian


Re: OCSP stapling problems

2014-03-11 Thread Brian Smith
On Tue, Mar 11, 2014 at 3:20 AM, Hanno Böck ha...@hboeck.de wrote:

 I wanted to bring up an issue regarding OCSP stapling.
 I filled this bug shortly after Firefox 27 came out:
 https://bugzilla.mozilla.org/show_bug.cgi?id=972304

 Short conclusion: If you have enabled OCSP stapling on your server this
 will break the possibility to add certificate exceptions with Firefox
 27.

 I find it a bit worrying that this issue hasn't received any attention
 yet. To make this clear: This made me disable OCSP stapling on my
 production machines with customers. And it's a serious regression to
 the previous version 26.


First, it is important to point out to others reading this that this
problem only affects certificates that don't chain to a trusted root CA
and/or which are considered invalid by Firefox for some other reason.
AFAICT, there is no problem with OCSP stapling in Firefox for valid
(according to Firefox) certificates.

In Firefox 30 (or so), we will switch to a different way of verifying
certificates, including a different way of processing OCSP responses. In
the new way, we won't validate the OCSP response at all for a certificate
that we do not trust, whether it is stapled or not. I believe this will
resolve the issue you are experiencing.

Because we're overhauling all of the certificate verification processing,
and because this is an issue that only affects invalid certificates, and
because there is a workaround (disable OCSP stapling until Firefox 30 is
released), this isn't going to be a high priority. I understand that can be
frustrating but we'll never get the new certificate processing turned on if
we keep going back to fix these issues with the old certificate processing.

It would be great if you could test the new way of doing certificate/OCSP
verification. To do so, please download Firefox 30 Nightly from
http://nightly.mozilla.org/. After you install it, go to about:config and
add a new entry:

1. Right click in the list of preferences and choose New → Boolean.
2. Enter the name security.use_insanity_verification
3. Change the value of the new pref to true.

You may have to clear your cache and restart your browser for the change to
fully take effect.

If you try this, let me know if it resolves the issue for you.

Cheers,
Brian

Re: Where are others SHA256 cipher suits in Firefox 27?

2014-02-06 Thread Brian Smith
On Wed, Feb 5, 2014 at 9:54 PM, Rasj rasj...@gmail.com wrote:
 Where are others? For example:
 TLS_RSA_WITH_AES_256_CBC_SHA256 (0x3d)

See https://briansmith.org/browser-ciphersuites-01.html. Also, please
look at the archives of this mailing list. There have been several
dozen emails about the cipher suite selection on this mailing list.

in particular: This proposal does not include the new HMAC-SHA256 or
HMAC-SHA384 ciphersuites that were added in TLS 1.2 since there is not
a clear need for them. Given our current understanding, HMAC-SHA-1,
HMAC-SHA-256, and HMAC-SHA-384 are all more-or-less equal in terms of
security given how they are used in TLS. (This is HMAC-SHA, not plain
SHA. Also, there is a huge difference in the ability to resist an
offline attack vs. the ability to resist an online attack.) Avoiding
these ciphersuites also allows us to sidestep the possibility of
performance regressions from enabling TLS 1.2.

 Many web-sites have only TLS_RSA_WITH_AES_256_CBC_SHA256 as kind of
 strong (even without PFS) and weak RC4 and 3DES.

Please provide some examples of such sites.

Some of the information in my proposal is outdated. For example, it
seems like we don't have constraints on handshake size, since we've
discovered workarounds for those compatibility issues. So, we have
more flexibility to add these cipher suites than we previously did.

My main thought here is that these cipher suites aren't really
increasing security, but they are decreasing performance, when they
are used. We should advocate instead for cipher suites that increase
security and increase performance. That means, for example, cipher
suites that waste fewer bytes per record on nonces and padding. See
https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-04 for an
example of what I mean.
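The per-record cost can be made concrete with a back-of-the-envelope calculation (a sketch: it counts only the explicit IV/nonce, MAC/tag, and CBC padding bytes of a TLS 1.2 record, ignoring record headers):

```python
# Rough per-record byte overhead in TLS 1.2 for a few cipher suite
# families, illustrating why the CBC-HMAC suites (especially the SHA-256
# variants) cost more on the wire than the AEAD suites.

def cbc_hmac_overhead(plaintext_len, mac_len, block=16):
    """Explicit IV + MAC + minimal CBC padding for one record."""
    pad = block - ((plaintext_len + mac_len) % block)  # 1..block bytes
    return block + mac_len + pad

def aes_gcm_overhead():
    return 8 + 16  # explicit nonce + authentication tag

def chacha20_poly1305_overhead():
    return 16      # tag only; the nonce is derived from the sequence number

n = 1000  # a typical-ish record payload
print(cbc_hmac_overhead(n, 20))   # AES-CBC + HMAC-SHA-1: 40
print(cbc_hmac_overhead(n, 32))   # AES-CBC + HMAC-SHA-256: 56
print(aes_gcm_overhead())         # AES-GCM: 24
print(chacha20_poly1305_overhead())  # ChaCha20+Poly1305: 16
```

So for the same payload, the HMAC-SHA-256 CBC suites waste the most bytes per record, while the ChaCha20+Poly1305 design wastes the fewest.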

More generally, the CBC-HMAC mode of cipher suites is outdated and
should just be replaced. We have to support the older variants for
backward compatibility, but I don't see much reason, right now, to add
support for the new, less efficient, no-more-secure, variants.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Sites which fail with tls 1.0

2014-02-05 Thread Brian Smith
On Wed, Feb 5, 2014 at 5:39 PM,  cl...@jhcloos.com wrote:
 Is the retry logic in nss or in mozilla-central?  And if the latter,
 can anyone help narrow the search?  I didn't find anything relevant
 in comm-central.

It is in mozilla-central, in
security/manager/ssl/src/nsNSSIOLayer.cpp. See these bugs:
https://bugzilla.mozilla.org/show_bug.cgi?id=839310
https://bugzilla.mozilla.org/show_bug.cgi?id=945195

Cheers,
Brian


Re: Sites which fail with tls 1.0

2014-01-28 Thread Brian Smith
On Mon, Jan 27, 2014 at 2:22 PM,  cl...@jhcloos.com wrote:
 In case anyone is keeping a list, while helping a relative I determined
 that timewarnercable.com's login server (wayfarer.timewarnercable.com)
  will not work with tls 1.1 or 1.2.  The connection fails right after
  the client hello.

 I had to set security.tls.version.max to 1 to get ff (26) or sm (2.23)
 to get her (relevant) profile to log in to their site.

Hi,

What is the value of security.tls.version.min? It should have the
default value of 0. If not, could you please try again with
security.tls.version.min=0 and security.tls.version.max=3?

Also, could you try with Firefox 27 beta? Firefox 27 is supposed to be
released next week. The link to the beta version is here:
http://www.mozilla.org/en-US/firefox/beta/

When I try with Firefox Nightly, I find that we do fail to negotiate
TLS 1.2 and then we try TLS 1.1 and fail at that. But then we retry
with TLS 1.0 and that succeeds. I am curious why that is not happening
for you with Firefox 26, since Firefox 26 should have the retry logic
in it already.

Thank you very much for your help with this!

Cheers,
Brian


Re: Proposal to Remove legacy TLS Ciphersuits Offered by Firefox

2014-01-27 Thread Brian Smith
On Mon, Jan 27, 2014 at 9:26 AM,  ripber...@aol.com wrote:
 On Monday, January 27, 2014 6:19:42 AM UTC-7, Kurt Roeckx wrote:
   2) NIST is a US government standards board that drives a lot of compliance
  regulation. There are companies that will want to be able to show that they
  are NIST compliant. The standard at this point does NOT allow you to
  use Camellia. So there should be some way to configure the browser so 
 that
  it uses only FIPS approved algorithms (i.e. NOT CAMELLIA). Otherwise 
 you're
  probably going to be getting the same sort of feedback about I can't use
  Firefox because it cannot be made NIST 800-131a compliant that you got
  about I can't use Firefox because it does not support TLS 1.2.

Camellia may get disabled by default soon. But, RC4 won't get disabled
by default soon; we may do what MSIE11 does, but that's not quite the
same thing. You can configure Firefox to disable RC4 and Camellia
cipher suites using about:config. Search for rc4 and camellia to
find those prefs.

5) I'm trying to tell you what the standard says - not whether I agree with
   it. I don't get to pick. The standard does not allow Camellia (because
   it is too new). But the standard does support and justify taking away
   the set of suites that Marlen suggested. I was just giving a more
    explicit rationale for dropping them.

No NIST standard is the standard for Firefox.

I have sent feedback to NIST about its draft recommendations for TLS,
trying to convince them that they should change their guidance to be
in line with what web browsers do in their default configurations.
However, NIST doesn't dictate what we do. In particular, we won't
constrain ourselves to doing only things that NIST 800-131a recommends
in the default configuration. However, assuming it isn't too much
work, we'll support options that allow you to configure Firefox to
conform with NIST guidance.

   The client is not obligated to enforce NIST 800-131a. But I
   would suggest two things:
 1) There should be a visible indication whenever a web site ends up with a
connections that has less than 112 bit security. Perhaps even ask the
user if he really wants to connect to a site with 'weak' security. This
might motivate some of these sites to fix their security.
 2) There should be a configuration control to block connection to a weak
sites period.
 Weak = See description at end of post.

This seems like a reasonable suggestion.

  = 112 bit, but their collision resistance isn't that good.  That means
 in an HMAC they can perfectly be used.

 MD5 is not a FIPS approved algorithm. It has known issues with collision
 resistance. The NIST 800-131a standard says do not use it - not even in an 
 HMAC. Kind of agree that it should be relatively OK in an HMAC, but any known 
 flaw is a potential attack vector.

There are real compatibility issues with turning off the
HMAC-MD5-based cipher suite. However, you can turn it off in
about:config; search for md5

 SHA-1 is 160 bits which as a hash gives it 80 bit security strength. In an
 HMAC, it has 160 bit security which is fine. The item above is about
 digital signatures - not MACs - the point is - all those RSA-/SHA-1
 signatures on Certificates out there are NOT good. Also using SHA-1 in the
 TLS signing protocol is not good - and that's what you get even with TLS 1.2
 if you don't send a Hash and Signature Algorithm extension that prohibits 
 SHA-1.

It is a good point that we need to change what we do in the TLS
handshake when we stop accepting SHA-1 signatures.

It may be reasonable to implement a don't accept SHA-1 signatures
preference similar to the one we just removed for MD5.

 I might be wrong, but I thought as long as the client and the server do not 
  BOTH use a static key, then you still have perfect forward secrecy. And I 
 thought the definition in TLS provided for the certificate owner to have a 
 static DH key and for the authenticator to use an ephemeral key when DH or 
 ECDH was used. If this is correct, I'd guess the static key on the server 
 side might save some time. Anyway I'm on shaky ground here perhaps - in my 
 mind, I like it when there are fewer options and clearer choices about what 
 to use. I do notice that the Suite B cipher suites only use ECDHE so that 
 might be some indication that DH or ECDH are not a strategic path. Again - 
 the only point is the standard allows them if there is any reason to support 
 them.

I do think that ephemeral-static key exchange is something that we
could consider. I even mentioned it in my original proposal:
https://briansmith.org/browser-ciphersuites-01.html. However, there
are basically zero servers that support it.

 And BTW - I haven't heard the answer about the Client hello extension for 
 Hash and Signature Algorithm. Does FF send this? Do we know what % of sites 
 tolerate it?

We include it in TLS 1.2 client hellos and we include 

Re: Proposal to Remove legacy TLS Ciphersuits Offered by Firefox

2014-01-27 Thread Brian Smith
On Mon, Jan 27, 2014 at 10:49 AM,  ripber...@aol.com wrote:
 On Monday, January 27, 2014 10:52:44 AM UTC-7, Brian Smith wrote:
 On Mon, Jan 27, 2014 at 9:26 AM,  ripber...@aol.com wrote:

 I can't speak for FF - and I've certainly read enough standards to say
 that there are too many standards.  I do think that the IETF does listen
 to NIST however. And if you care about security, the security of your
 implementation is a function of the cryptographic algorithms used.
 So I'd suggest that NIST is telling FF and everyone else where they
 should be to be secure. That being said, the reality of rolling forward
 to a new security level in the real world is much more of a random
 walk through the park. I appreciate your consideration of my comments.

NIST is one input into the IETF process, and NIST and IETF are both
inputs into our decision making. We also help IETF and NIST to update
their proposed standards. There is currently work going on at the IETF
to define best practices for TLS, including recommended cipher suites,
recommended TLS versions to support, and recommended features to
support. However, I suspect that those recommendations will be written
such that they define the minimum set of functionality that a good
implementation should support. I suspect they won't fully define what
an application like Firefox has to do to be simultaneously secure and
backward-compatible. The good news, though, is that things are getting
better all around, AFAICT.

 If you understand what it is saying, NIST 800-131a is
 actually pretty clear on what it recommends and why;
 and it points to other standards that do some of the
 explaining. However, it would have been very helpful
 if they had actually bothered in an appendix to
 indicate all the cipher suites that are OK (for TLS
 and IPSEC).

You should look at the SP800-52 draft, which more clearly specifies
how NIST recommends that TLS applications work.
http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-52-Rev.%201.

 I would like to hear more about what you are recommending
 NIST do to align with default configurations (perhaps
 something for a separate email).

I will send my feedback regarding the NIST SP800-52 draft to this
mailing list in a separate thread.

 It would be good if the IETF wrote some standards
 to obsolete stuff that is not used and tell folks to get
 off stuff that is known to be not secure. Perhaps it is
 a work in progress - it's hard to keep up. Even better
 if they had more implementation guidelines for
 implementors.

IETF is currently working on BCP documents to define best practices
for the use of TLS in applications, including web browsers and web
servers. I recommend that you subscribe to the IETF UTA and TLS
working group mailing lists:

https://tools.ietf.org/wg/uta/
https://www.ietf.org/mailman/listinfo/uta
https://tools.ietf.org/wg/tls/
https://www.ietf.org/mailman/listinfo/tls

Also, the HTTP working group at IETF has added some improved minimum
requirements for the use of TLS by HTTP/2 clients and servers, based
on feedback from Mozilla, Google, and others. See
http://http2.github.io/http2-spec/index.html#TLSUsage

 A control to stop accepting SHA-1 signatures would be
 desirable. I would say for the forseeable future, it would
 have to default to off - I'll give you odds that 90% of
 the certificates still have SHA-1.

Agreed. You may be interested in
https://bugzilla.mozilla.org/show_bug.cgi?id=942515 which is about a
way for transitioning away from SHA-1 without breaking backward
compatibility.

 So perhaps I'll try to make an offer here. I really like FF's ideology. I'd 
 like to try to put together a set of 'guidelines' for configuring it to be 
 NIST 800-131a compliant -  e.g. describe what configuration controls are 
 necessary. I think you've told me most of that above - though I think there 
 were a few other things in the list to check out. I'll take that offline from 
 this chat. It's something I could use in my environment. It might be 
 something that would be useful to others as a reference.

It would be great if you could write this up. Please start a new
thread with your initial submission. I think one issue you may run
into is that Firefox's current version of NSS isn't FIPS-140
validated. You may find the following useful:
http://kb.mozillazine.org/Security.tls.version.*
https://developer.mozilla.org/en/docs/NSS/FIPS_Mode_-_an_explanation

Note that the FIPS Mode - an explanation and the support.mozilla.org
article it links to are somewhat outdated.

Also, I recommend that you write your document for Firefox 27 and
later, because Firefox 27 makes substantial changes to the default TLS
configuration of Firefox, including enabling TLS 1.2 and AES-GCM by
default.

Thanks!

Cheers,
Brian


Re: Proposal to Remove legacy TLS Ciphersuits Offered by Firefox

2013-12-15 Thread Brian Smith
On Sun, Dec 15, 2013 at 8:46 AM, Kurt Roeckx k...@roeckx.be wrote:

 But some people are also considering disabling it by default,
 as I think all others in this thread were suggesting, not just
 reducing its preference.

  For the same reason, the server ciphersuite that we recommend at
  https://wiki.mozilla.org/Security/Server_Side_TLS
  does not drop Camellia, but lists it at the bottom of the ciphersuite.
  It's a safe choice, but not one that we recommend.

 As far as I know the reasons for not recommending it are:
 - It's slower
 - It probably doesn't have many constant-time implementations.

 So as I understand it, the reasons for not recommending it don't
 have anything to do with the security of Camellia itself.


Because of unfortunate design choices, Camellia is (along with AES)
difficult to implement in constant time with high performance. That *is* a
serious fault in the algorithm. AES-NI is a workaround for AES, but no such
workaround exists for Camellia. In addition, Firefox supporting Camellia
while other browsers don't is bad for interoperability. Finally, other
browsers have demonstrated that Camellia isn't needed for web
compatibility, so removing support for Camellia means we can avoid
maintaining Camellia.

Like I've said before, for any cipher that we support TLS_RSA_* for, we
should be supporting some TLS_ECDHE_* variants, so that we don't make
servers choose between the cipher they need/want to use and ephemeral key
exchange. So, to keep Camellia support, we'd need to implement and enable
the TLS_ECDHE_* variants. But, it doesn't seem worth the effort when it
doesn't seem to improve interoperability, performance, or security.

I think instead we are better off spending resources on making AES-GCM
constant-time(-ish) and on adding support for ChaCha20+Poly1305. Google
already has constant-time (I think) ChaCha20+Poly1305 patches for NSS and
there's also been progress on constant-time(-ish) GHASH implementations for
NSS. Note that ChaCha20+Poly1305, by design, is straightforward to
implement in a high-speed, constant-time fashion.

Cheers,
Brian


Re: Proposal to Remove legacy TLS Ciphersuits Offered by Firefox

2013-12-13 Thread Brian Smith
On Fri, Dec 13, 2013 at 10:48 PM, marlene.pr...@hushmail.com wrote:

 I present a proposal to remove some vulnerable/deprecated/legacy TLS
 ciphersuits from Firefox. I am not proposing addition of any new
 ciphersuits, changing of priority order, protocol removal, or any other
 changes in functionality.


Hi,

Thank you for suggesting these changes, and thank you for posting your
message on the public mailing list. (I also appreciate the private email
you sent me on the subject.)

I will comment on your proposal again later. However, I want to share with
you some usage data from Firefox 28 Beta, that I think we will find helpful
in understanding what servers do. These numbers represent the cipher suite
chosen by the server for 4,011,451 real-life full handshakes in Firefox 28
beta.

First, here are the figures, sorted according to the order we offer the
cipher suite in the ClientHello:

Cipher Suite                                 Count        %
-----------------------------------------------------------
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256        567,486   14.15%
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256      332,786    8.30%
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA            10,952    0.27%
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA               0    0.00%
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA            19,472    0.49%
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA               0    0.00%
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA                0    0.00%
TLS_ECDHE_RSA_WITH_RC4_128_SHA                19,117    0.48%
TLS_ECDHE_ECDSA_WITH_RC4_128_SHA               4,601    0.11%
TLS_DHE_RSA_WITH_AES_128_CBC_SHA             226,177    5.64%
TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA             44    0.00%
TLS_DHE_RSA_WITH_AES_256_CBC_SHA              23,319    0.58%
TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA          1,088    0.03%
TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA                557    0.01%
TLS_DHE_DSS_WITH_AES_128_CBC_SHA                   9    0.00%
TLS_DHE_DSS_WITH_AES_256_CBC_SHA                   0    0.00%
TLS_RSA_WITH_AES_128_CBC_SHA               1,053,521   26.26%
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA                 18    0.00%
TLS_RSA_WITH_AES_256_CBC_SHA                  36,203    0.90%
TLS_RSA_WITH_CAMELLIA_256_CBC_SHA                  0    0.00%
TLS_RSA_WITH_3DES_EDE_CBC_SHA                  7,065    0.18%
TLS_RSA_WITH_RC4_128_SHA                   1,507,191   37.57%
TLS_RSA_WITH_RC4_128_MD5                     201,845    5.03%

Below are the same figures, sorted by frequency (most popular first). For
the cipher suites you suggest removing, the final column indicates whether
I think this data offers strong evidence for the removal: Remove- means
the data seems to contradict your recommendation, Remove? means more
study is needed, and Remove+ means that the data supports your
conclusion.

Cipher Suite                                 Count        %
-----------------------------------------------------------
TLS_RSA_WITH_RC4_128_SHA                   1,507,191   37.57%  Remove-
TLS_RSA_WITH_AES_128_CBC_SHA               1,053,521   26.26%  Remove-
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256        567,486   14.15%
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256      332,786    8.30%
TLS_DHE_RSA_WITH_AES_128_CBC_SHA             226,177    5.64%
TLS_RSA_WITH_RC4_128_MD5                     201,845    5.03%
TLS_RSA_WITH_AES_256_CBC_SHA                  36,203    0.90%
TLS_DHE_RSA_WITH_AES_256_CBC_SHA              23,319    0.58%
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA            19,472    0.49%
TLS_ECDHE_RSA_WITH_RC4_128_SHA                19,117    0.48%  Remove?
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA            10,952    0.27%
TLS_RSA_WITH_3DES_EDE_CBC_SHA                  7,065    0.18%  Remove-
TLS_ECDHE_ECDSA_WITH_RC4_128_SHA               4,601    0.11%  Remove?
TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA          1,088    0.03%  Remove?
TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA                557    0.01%  Remove?
TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA             44    0.00%  Remove?
TLS_RSA_WITH_CAMELLIA_128_CBC_SHA                 18    0.00%  Remove?
TLS_DHE_DSS_WITH_AES_128_CBC_SHA                   9    0.00%  Remove?
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA               0    0.00%
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA               0    0.00%
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA                0    0.00%  Remove+
TLS_DHE_DSS_WITH_AES_256_CBC_SHA                   0    0.00%  Remove+
TLS_RSA_WITH_CAMELLIA_256_CBC_SHA                  0    0.00%  Remove+
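As a sanity check, the counts above can be re-aggregated by family (zero-count suites omitted; the numbers are copied from the table):

```python
# Re-aggregate the Firefox 28 beta handshake telemetry: the counts should
# sum to the stated 4,011,451 handshakes, and grouping by family shows how
# dominant RC4 and non-forward-secret key exchange still were.

counts = {
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256":   567_486,
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256": 332_786,
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA":       10_952,
    "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA":       19_472,
    "TLS_ECDHE_RSA_WITH_RC4_128_SHA":           19_117,
    "TLS_ECDHE_ECDSA_WITH_RC4_128_SHA":          4_601,
    "TLS_DHE_RSA_WITH_AES_128_CBC_SHA":        226_177,
    "TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA":        44,
    "TLS_DHE_RSA_WITH_AES_256_CBC_SHA":         23_319,
    "TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA":     1_088,
    "TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA":           557,
    "TLS_DHE_DSS_WITH_AES_128_CBC_SHA":              9,
    "TLS_RSA_WITH_AES_128_CBC_SHA":          1_053_521,
    "TLS_RSA_WITH_CAMELLIA_128_CBC_SHA":            18,
    "TLS_RSA_WITH_AES_256_CBC_SHA":             36_203,
    "TLS_RSA_WITH_3DES_EDE_CBC_SHA":             7_065,
    "TLS_RSA_WITH_RC4_128_SHA":              1_507_191,
    "TLS_RSA_WITH_RC4_128_MD5":                201_845,
}

total = sum(counts.values())
rc4 = sum(v for k, v in counts.items() if "RC4" in k)
ecdhe = sum(v for k, v in counts.items() if k.startswith("TLS_ECDHE"))

print(total)                    # 4011451
print(round(rc4 / total, 3))    # 0.432 -- RC4 share
print(round(ecdhe / total, 3))  # 0.238 -- forward-secret ECDHE share
```

So roughly 43% of full handshakes still ended in RC4, and under a quarter used ECDHE, which is the context for the Remove-/Remove? markings above.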

Your idea of offering a subset of cipher suites during the initial
handshake, and then falling back to another handshake later, requires more
discussion and more measurements to be done. I would like to do something
similar to what you suggest.

Note that my Remove+/?/- comments should not be taken as an acceptance or
rejection of your suggestions. I just want you to know my initial
impression, based on a quick look of the data.

Cheers,
Brian


David Keeler is now a PSM peer

2013-11-21 Thread Brian Smith
Hi all,

Please join me in welcoming David Keeler as a PSM peer! Amongst many
other things, David implemented the HSTS preload list, integrated OCSP
stapling into Firefox, and is currently implementing the OCSP
Must-Staple feature, which is a key part of our goal of making
certificate status checking faster and more effective. I've been very
impressed by his work and I know many others have been similarly
impressed.

I also shortened up the list of PSM peers so that it only includes
people who are still actively reviewing patches in PSM. I want to
thank Kai Engert and Bob Relyea for the huge contributions that
they've made in PSM. I still recommend that you ask them, or other NSS
peers, for advice whenever you need help with anything to do with NSS
or PKI. Their knowledge of the how and why in those areas is invaluable.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-11-20 Thread Brian Smith
On Tue, Nov 19, 2013 at 9:14 AM, Kurt Roeckx k...@roeckx.be wrote:
 On Mon, Nov 18, 2013 at 06:47:08PM -0800, Wan-Teh Chang wrote:
 On Mon, Nov 18, 2013 at 4:57 PM, Brian Smith br...@briansmith.org wrote:
 
  Also, AES implementations are highly optimized, well-audited,
  well-tested, and are more likely to be side-channel free. Camellia
  doesn't get used very often. Yet, some websites (most famously,
  Yahoo!), prefer Camellia over AES, even when we offer AES at higher
  priority in the handshake.

 There must be a misunderstanding. NSS offers Camellia at higher
 priority than AES in the ClientHello.

I think you might be right. I remember testing the new cipher suite
order and I was still seeing Camellia being used on
https://login.yahoo.com. But, I tried it again now and it is using AES
with the new cipher suite order. It is very possible that my original
testing of this was off; perhaps due to the HTTP cache or user error.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-11-18 Thread Brian Smith
On Sun, Nov 10, 2013 at 4:39 AM, Kurt Roeckx k...@roeckx.be wrote:
 On Sat, Nov 09, 2013 at 02:57:48PM -0800, Brian Smith wrote:
 Last week, I also learned that ENISA, a European standards group,
 recommends Camellia alongside AES as a future-proof symmetric cipher
 algorithm; see [4].

 They recommend:
 - *_AES_*_GCM_*
 - *_CAMELLIA_*_GCM_*
 - *_AES_*_CCM_*

Thanks. I filed bug 940119 about adding the TLS_ECDHE_*_CAMELLIA_GCM_*
cipher suites.

 As I already mentioned a few time, I'm still missing some of
 the *_AES_*_GCM_* ciphers, specially the DHE ones.

It should be easy to add TLS_DHE_*_GCM_* cipher suites to NSS.

However, I am not sure it is a good idea to add TLS_DHE_*_GCM_* or
TLS_RSA_*_GCM_* cipher suites to Firefox (or other browsers, for that
matter).

Regarding the TLS_DHE_* variants, I think that we should spend some
effort on advocating support for the TLS_ECDHE variants first. In
particular, you mentioned that Apache 2.2 doesn't support ECDHE. Well,
I'd rather work on backporting Apache 2.4's ECDHE support to Apache
2.2 than add the TLS_[DHE_]RSA_*_GCM_* cipher suites to Firefox.
Unfortunately, DHE cipher suites don't work well in current Apache 2.2
either, because of the hard-coded 1024-bit parameters. I don't think
it would be reasonable to backport the better DHE support from Apache
trunk to Apache 2.4 since there are compatibility issues with doing
so. Also, ultimately, we'd like to use TLS_ECDHE_* cipher suites for
performance reasons.

Regarding the TLS_RSA_* variants, like I said before, I think we
should avoid adding new cipher suites for RSA key exchange to Firefox,
to encourage websites to use the ECDHE variants, which help toward
minimizing the fallout of a private key compromise. I am currently
expecting that the One-RTT variant of the TLS 1.3 handshake will
require ECDHE support anyway.

Regardless, I think we can avoid adding those things for now, and
revisit this later when we see what happens with TLS 1.3 and when we
see how successful (or not) our advocacy attempts are.

 I think we probably want to still disable Camellia
 cipher suites by default in the long term anyway, but I did not
 disable them in Firefox Nightly yet. In order for it to make sense to
 continue offering Camellia cipher suites long term, we would need to
 improve NSS's support for Camellia to add the ECDHE variants of the
 Camellia cipher suites. Currently, I think the best course of action
 is to let the current configuration ship, then disable Camellia
 support, and eventually add ECDHE_*_WITH_CAMELLIA_* support to NSS, so
 that it is ready in case some problem with AES is found.

 I don't understand the part where you want to disable it.

Originally I was very concerned about the TLS ClientHello message
size, because we were under the impression that we had to keep it
under 256 bytes. That is the reason I prioritized starting this
discussion so highly, in fact. But, at IETF88, we learned that there
may be another workaround to the interop problems such that we don't
have to keep our ClientHello message size under 256 bytes. Still, we
shouldn't be wasteful with our ClientHello message size, since we'll
always want to keep it under ~1400 bytes for performance and
reliability reasons. 1400 bytes might sound like a lot now, but people
have already been talking about TLS extensions that could easily eat
up the majority of that space.
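To make the byte arithmetic concrete, here is a rough sketch. The per-suite cost is the standard TLS encoding (two bytes per suite plus a two-byte length prefix); the 256- and ~1400-byte figures are the thresholds discussed above:

```python
# Rough ClientHello size accounting for the cipher suite list.
SMALL_LIMIT = 256   # the old "keep the ClientHello tiny" interop threshold
MTU_BUDGET = 1400   # stay within roughly one packet for performance

def cipher_list_bytes(n_suites):
    # 2-byte list length prefix + 2 bytes per offered suite
    return 2 + 2 * n_suites

# Trimming the offered list from 23 suites to 14 saves only 18 bytes...
saved = cipher_list_bytes(23) - cipher_list_bytes(14)
# ...so the real pressure on the ~1400-byte budget comes from large
# extensions, not from the cipher suite list itself.
```

This is why the cipher suite list alone is not the problem: even a generous list costs well under 100 bytes, while a single large extension can consume several hundred.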

Also, AES implementations are highly optimized, well-audited,
well-tested, and are more likely to be side-channel free. Camellia
doesn't get used very often. Yet, some websites (most famously,
Yahoo!), prefer Camellia over AES, even when we offer AES at higher
priority in the handshake. I am not sure how much the performance or
existence or lack of side-channel-free implementations of Camellia
matter yet. In Firefox, we've kept Camellia enabled for now, and added
some telemetry to measure how often each cipher is used, to inform our
future decision making here.

Cheers,
Brian


Re: oddball, old cipher suite in firefox client hello

2013-11-01 Thread Brian Smith
On Fri, Nov 1, 2013 at 1:28 AM, Jeff Hodges j...@somethingsimilar.com wrote:
   /* New non-experimental openly spec'ed versions of those cipher suites. */
   #define SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA 0xfeff
   #define SSL_RSA_FIPS_WITH_DES_CBC_SHA   0xfefe

 Does anyone know what spec this cipher suite came from? And, perhaps, why
 it's still a good idea to be in the client hello? This last question I ask
 very gently and out of curiosity.

See 
http://www-archive.mozilla.org/projects/security/pki/nss/ssl/fips-ssl-ciphersuites.html

Based on reading that, these cipher suites seem to be a way to
backport the TLS 1.0 PRF to SSL 3.0 after NIST decided that the SSL
3.0 PRF was unacceptable, back when TLS 1.0 was still new and shiny. I
agree it makes sense to remove it from Firefox's ClientHello and we
already have plans for that. See
https://briansmith.org/browser-ciphersuites-01.html.

Cheers,
Brian


Removing SSL 2.0 from NSS (was Re: Removing dead code from NSS)

2013-10-07 Thread Brian Smith
On Fri, Oct 4, 2013 at 6:52 PM, Ludovic Hirlimann
ludovic+n...@mozilla.com wrote:
 Hi,

 AFAIK NSS still contains code for SSL2 , but no product uses it. SSL2
 has been turned off at least 2 years ago. By removing SSL2 code we get :

 Smaller libraries
 faster compile time + test time

 What do you guys think ?

Hi Ludovic,

I do think it is time to remove SSL 2.0 support from libssl. The size
of libssl won't be much different and it won't compile much faster.
However, removing SSL 2.0 code from libssl will enable us to make the
code much easier to understand in ways that I am 100% sure will
positively impact the security of our SSL3/TLS code. So, I propose
that libssl remove SSL 2.0 support in NSS 3.16. I will be happy to
write the patch for it; I actually have it partially done already.

I can think of at least one serious bug in libssl that likely would
have been avoided if not for the additional complexity of needing to
deal with SSL 2.0. Plus, not having to deal with the SSL 2.0 code will
definitely enable us to improve the SSL3/TLS code more easily in the
future. I can think of multiple times where the need to deal with the
SSL 2.0 code has slowed down the implementation of improvements to the
newer protocols. This is an unreasonable cost for us to have to incur
for a feature that we know nobody should be using.

When the NSS team discussed this topic previously, we had agreed that
we wouldn't remove the SSL 2.0 code before TLS 1.2 was implemented, so
that Red Hat could have a version of NSS with both SSL 2.0 and TLS 1.2
for their long-term release. Now TLS 1.2 is implemented and we should
move forward with the removal.

I think it is likely that some vendors of NSS-based products with very
conservative backward-compatibility guarantees, like Oracle and maybe
Red Hat, may need to continue supporting SSL 2.0 in their products due
to promises that they've made. If so, either we should create a branch
for these organizations to maintain, or we should create a branch of
libssl without SSL 2.0. I am OK with doing things either way, though I
prefer to have the NSS trunk be the SSL-2.0-less branch that Mozilla
contributes to.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-10-07 Thread Brian Smith
Mountie Lee moun...@paygate.net wrote:
 SEED was adopted to encourage escaping ActiveX dependency in Korea
 e-commerce environment.

Many people at Mozilla, including us platform engineers, want this
too. Our goal is to get rid of plugins on the internet completely.
And, also, personally I think it is a great idea for Mozilla to do
more to get Firefox working in South Korea. So, I think we all agree
on the goals.

 at last year, adding SEED to WebCrypto API adopted as Action Item.
 the editor sent question any user agent plan to implement SEED

 I can not say discussing terminating SEED support in mozilla

Whether SEED gets implemented to the WebCrypto API is a separate issue
from whether we continue to support SEED in TLS. If we want to add
SEED support to WebCrypto then we can do that even if we don't have
SEED in TLS. I am not going to promise that we will implement SEED as
part of the WebCrypto effort, but I do promise to give it serious
consideration.

 minor algorithm itself has the meaning.
 it will be helpful for neutralizing or keeping possibilities.

I agree that this is a concern. This is one of the reasons we are
looking into the Salsa/ChaCha algorithms, as a backup or replacement
for AES.

Finally, software vendors, including Mozilla, need to work with the
Korean government to agree on what to do about the Korean crypto
regulations. Mozilla has been supporting SEED for TLS for a long time
and it seems to have had no positive impact. If in the future the
software industry and the Korean government decide that SEED in TLS is
the way forward, then we can add SEED back if we remove it now.
However, I am skeptical that the software industry is going to agree
that SEED in TLS is the right path forward.

Cheers,
Brian


Re: set default on for SHA2 for TLS1.1+ on firefox

2013-10-07 Thread Brian Smith
On Wed, Oct 2, 2013 at 2:28 AM, Mountie Lee moun...@paygate.net wrote:
 Hi.
 currently SHA2 hash algorithm is used in TLS1.1 and 1.2
 mozilla firefox is supporting it now.

Hi,

Are you referring to the TLS_*_SHA256 cipher suites, or something
else? I believe that we support SHA256-based signatures everywhere
already.

If you are referring to the TLS_*_SHA256 cipher suites, then the
current plan is to never enable them in Firefox. It seems Chrome has
decided on something similar, as they disabled those cipher suites
after they added AES-GCM support.

If you are referring to something other than the TLS_*_SHA256 cipher
suites, please be more specific as to what you are referring to.

Cheers,
Brian


Re: Removing SSL 2.0 from NSS (was Re: Removing dead code from NSS)

2013-10-07 Thread Brian Smith
On Mon, Oct 7, 2013 at 3:20 PM, Robert Relyea rrel...@redhat.com wrote:
 On 10/07/2013 12:44 PM, Wan-Teh Chang wrote:
 On Mon, Oct 7, 2013 at 11:17 AM, Brian Smith br...@briansmith.org wrote:
 I think it is likely that some vendors of NSS-based products with very
 conservative backward-compatibility guarantees, like Oracle and maybe
 Red Hat, may need to continue supporting SSL 2.0 in their products due
 to promises that they've made. If so, either we should create a branch
 for these organizations to maintain, or we should create a branch of
 libssl without SSL 2.0.
 The burden of maintaining the branch should fall on the people who
 still need SSL 2.0, so we should remove SSL 2.0 from the trunk. It is
 not that hard for a competent NSS developer to support an NSS 3.15
 branch for another three years.
 Please don't completely screw us over here. I would prefer to be able to
 track NSS updates, particularly since they are pulled in by mozilla. (we
 completely rebase nss whenever we have to pick up new mozilla releases
 that need it).

I think if some Linux distributors would continue to use the code that
contains SSL 2.0 support, then it would be better for Firefox to link
libssl statically to avoid using that variant of libssl.

 That being said, I think we could split the SSL 2.0 code out to stand
 alone. The only issue is the SSL2-hello-to-SSL3 upgrade path, which would
 probably mean figuring out some way to make that transition that puts the
 burden on the SSL2 code.

 Ideally we could completely fork the SSL2 code to use its own gather
 buffers.

This is much easier said than done, because many bits of data are
shared between the implementation of SSL 2.0 and the later versions.
The point of removing SSL 2.0 would be to make the code simpler so
that we can be confident that it is correct, and to make it easier to
improve. Refactoring the SSL 2.0 code in the manner you suggest is
counterproductive to both of those aims, and recent experience gives
clear evidence of that.

 Right now I'm trying to see if I can get management to let us drop SSL2
 support in some upcoming RHEL 6 release. I've already dropped it in
 RHEL7, and I think we may be at the point in RHEL-5 where we may not be
 updating NSS except for some extreme fixes.

That is a Red Hat problem, not a Mozilla problem. The Mozilla project
is bigger than just Firefox and Gecko-based products, but I don't
think the Mozilla project's interests extend so far as to be concerned
about Red Hat backward compatibility guarantee to its customers. We
are willing to help Red Hat when it is reasonable, but I think this
issue has reached the point where it is now unreasonable to carry on
as before.

 One thing that could help is
 to make sure the next mozilla CSB release supports SSL2 that will give
 RHEL 6 some more runway...

For a long time, Gecko-based products have hard-coded SSL 2.0 to be
disabled, and there is no option for enabling SSL 2.0 support in Gecko
products.
I would not accept the addition of such an option either. If there is
some server that is SSL 2.0 only then I will be glad to have Firefox
stop working with that server, so that the server admin feels pressure
to improve the security of the server.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-10-07 Thread Brian Smith
On Thu, Sep 12, 2013 at 7:06 AM, Julien Vehent jul...@linuxwall.info wrote:
 It seems that AES-256 is always 25% to 30% slower than AES-128, regardless of 
 AES-NI, or the CPU family.

 The slowest implementation of AES-256 has a bandwidth of 21MBytes/s, which is 
 probably fast enough for any browser

 If performance was the only reason to prefer AES-128, I would disagree with 
 the proposal. But your other arguments, regarding AES-256 not
 providing additional security, are convincing.

 This paper: eprint.iacr.org/2007/318.pdf
   On the complexity of side-channel attacks on AES-256
 - methodology and quantitative results on cache attacks -

Perhaps my arguments were a little over-stated though. The attack I
referenced in the proposal is the related-key attack on reduced-round
AES-256. That is something I should have emphasized. Really, I am
speculating that this shows that thinking AES-256 is hugely more
secure than AES-128 is questionable, but it isn't a slam-dunk case.

The side-channel attack paper you cited seems like the more
interesting one. It doesn't seem like an argument against AES-256 on
its own though, since it still says AES-256 is more difficult to
attack through the given side channels than AES-128.

So, the main remaining question with AES-256 vs. AES-128 is not
whether AES-128 is just as secure as AES-256. Instead, we have to
decide whether AES-256 a better security/performance trade-off vs
AES-128. I agree with you that the performance numbers for AES-256 vs.
AES-128 do not make this a slam-dunk. We should do the measurements on
a typical Firefox OS device and see if there is a significant
difference there. Until then, unless others suggest otherwise, I think
I will just keep the relative ordering of analogous AES-256 and
AES-128 cipher suites the same as they are in NSS today.
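For intuition about where the measured gap comes from, the cost difference roughly tracks the round counts. This is a back-of-the-envelope model under the crude assumption that cost is proportional to rounds, ignoring the key schedule and AES-NI effects:

```python
# AES processes 16-byte blocks with a key-size-dependent round count:
# AES-128 uses 10 rounds, AES-192 uses 12, AES-256 uses 14.
ROUNDS = {"AES-128": 10, "AES-192": 12, "AES-256": 14}

# Under the rounds-proportional model, AES-256 does 40% more work per
# block than AES-128, the same ballpark as the 25-30% slowdowns that
# Julien measured across implementations.
extra_work = ROUNDS["AES-256"] / ROUNDS["AES-128"] - 1.0
```

The model over-predicts slightly (40% vs. the measured 25-30%) because fixed per-record costs and the key schedule dilute the per-round difference, but it explains why the gap persists even with AES-NI.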

 However, it refers to software implementations of AES. Do we know if this
 result still applies for AESNI?

One takeaway from your email is that with AES-NI I don't see a strong
case for preferring AES-128 over AES-256. The issue is really what to
do about the non-AES-NI case, assuming we all agree that the presence
of AES-NI shouldn't affect the order that the client suggests cipher
suites in.

Thank you very much for taking the time to do these measurements and
sharing your insight.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-10-07 Thread Brian Smith
On Mon, Oct 7, 2013 at 6:05 PM, Mountie Lee moun...@paygate.net wrote:
 SHA2 hash required in e-commerce transaction by the korean regulation.
 and which is also used in TLSv1.1+.

Hi,

First, we will be enabling TLS 1.2 in Firefox very soon.

But, I think you may be referring to SHA-2-based cipher suites
proposed in this internet draft:
http://tools.ietf.org/html/draft-bjhan-tls-seed-00

Unfortunately, that internet draft expired and also the draft didn't
even specify the cipher suite code points.

Where can I find the current version of the Korean regulations on
encryption? I have read this article:
http://www.koreatimes.co.kr/www/news/biz/2012/04/123_109059.html

That article notes that SEED is actually not mandatory in Korea any
more. If so, it seems like a good idea to help the Korean community
standardize on more common algorithms, right?

That article also notes that implementations other than the ActiveX
control have to be certified by the Korean government in order to be
used. So, it seems like our SEED implementation could not be used
legally anyway, since it hasn't been certified. Is that your
understanding?

My understanding is that the Korean government would also require
websites that fall under these regulations to use certificates issued
by some Korean certificate authorities. But, Mozilla does not include
either of the Korean certificate authorities, and inclusion seems
unlikely to happen soon. See
https://bugzilla.mozilla.org/show_bug.cgi?id=335197

Finally, the SEED cipher suite we do currently support does not
support ephemeral key exchange. I see that the internet draft I linked
to above does attempt to specify SEED cipher suites that support
ephemeral key exchange.

So, it seems pretty clear to me that it is OK to disable the SEED
cipher suite we have currently enabled for now, while we figure out
all the things that are necessary to help our Korean users.

Cheers,
Brian


Re: Removal of generateCRMFRequest

2013-10-03 Thread Brian Smith
sst...@mozilla.com wrote:
 Do we have telemetry on how frequently these APIs are used?

I expect that, of the small percentage of people that are using these APIs, 
they are using them (except signText) very infrequently--like once a year. When 
I talked to Ehsan and Andrew Overholt about this, we agreed that the numbers 
would be pretty meaningless because telemetry is per browser session and we 
can't track users longitudinally. Also note that telemetry may under-count Red 
Hat's customers, as I imagine many of them are running in networks where 
administrators disable telemetry, and also they are all running the ESR 
release, I think. Finally, I suspect that any use of signText() is highly 
localized to specific reasons, which we also cannot capture with Telemetry, 
AFAICT.

Regardless, I am willing to add telemetry to verify that these APIs are not 
being used shockingly often. But first, let's decide on what the threshold for 
making a decision would be. For example, let's say the number comes back as 
less than 1/1000 of sessions are using these APIs. Would that be considered 
evidence in favor of removal?
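The proposed decision rule can be stated mechanically. The 1/1000 figure is the example threshold from the paragraph above, not a settled policy:

```python
# Usage below the agreed threshold counts as evidence for removing the
# legacy window.crypto.* APIs.
THRESHOLD = 1 / 1000  # example: fewer than 1 in 1000 sessions

def evidence_for_removal(sessions_using_api, total_sessions):
    """True when the per-session usage rate falls below the threshold."""
    return sessions_using_api / total_sessions < THRESHOLD
```

Agreeing on the threshold before collecting the telemetry avoids arguing about what the number means after the fact.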

Cheers,
Brian


Re: Removal of generateCRMFRequest

2013-09-28 Thread Brian Smith
On Sat, Sep 28, 2013 at 7:52 AM, Sean Leonard dev+mozi...@seantek.com wrote:
 On 9/27/2013 5:51 PM, Robert Relyea wrote:

 I don't have a problem with going for an industry standard way of doing
 all of these things, but it's certainly pretty presumptuous to remove these
 features without supplying the industry standard replacements and time for
 them to filter through the internet. bob

Why isn't keygen good enough?

AFAICT, based on this discussion and others, smart card events are not
a must-have feature for any website. They are a nice-to-have, at best.
At worst, they are the wrong model altogether. Either way,
clearcutting them to make our sandboxing project easier and faster to
complete still seems to make sense to me. I understand that that
isn't the most friendly strategy towards the websites that are using
that feature, but it is extremely unfriendly (at best) of us toward
hundreds of millions of people to be giving them a browser without a
sandbox. Sandboxing is a must-have-ASAP feature.

 In addition, it would be a great shame to remove this set of APIs from
 Firefox because the Mozilla platform itself uses them for chrome-privileged
 purposes. If you search smartcard-insert, for example:
 http://mxr.mozilla.org/mozilla-central/search?string=smartcard-insert

 Our
 Firefox extension makes use of these events (in addition to the other APIs)
 so that would directly impact us as well.

Good point. We can still keep them around in chrome-privileged
contexts because chrome-context stuff lives in the parent process and
so would not be affected by sandboxing. So, if we preserved the
chrome-context stuff, would your extensions still work?

 It is one thing to remove the blink tag, which most users have found
 annoying or harmful (epilepsy). Removing crypto functionality in contrast
 impacts critical security functionality for many users.

Again, smartcard events don't seem like critical functionality and
keygen exists. signText is the only API I can see where there is no
replacement and that would be difficult to go without. But, it is also
problematic because of its UI; I don't think its UI is sufficiently
clear and bulletproof that it is really effective at conveying exactly
what is being signed--especially when you consider
internationalization and localization issues.

 The Internet is made good when people can use it to do productive work.
 Removing functionality that is used by vendors and users for no reason other
 than purity is unproductive and costly.

My main motivation is to make the sandboxing project easier to
complete ASAP. The WebAPI team has the purity goal and it isn't my
place to judge that, as I don't know as much as they do about the web
API standardization situation.

 By the logic of purity,
 XMLHttpRequest should have been removed a long time ago because it was an
 IE-proprietary feature. The open web is an ecosystem of server-side and
 client-side technologies where everyone can innovate by introducing new
 things. If it's a useful feature, you can copy it.

XHR became standardized. Which of the Mozilla-proprietary APIs in
window.crypto.* do you think has any chance at all of standardization?
Ryan Sleevi is on the Chrome team and works on the W3C Web Crypto API
spec., and based on what he's written, he seems to agree with me that
they don't.

I also agree with Ryan that we don't have to feel committed to these
APIs just because we implemented them back when we were trying to make
money selling the enterprise server software that these APIs were
created to support. So, I'm not going to predicate the removal of
these APIs on the creation of replacements. However, there's no reason
why this community can't help formulate, standardize, and even
implement the replacements before we ship a browser without these
APIs. In fact, everybody at Mozilla would love for that to happen.
But, also, I don't think we're going to be sad if we ship a sandboxed
browser without any such APIs.

Cheers,
Brian


Re: Removal of generateCRMFRequest

2013-09-26 Thread Brian Smith
On Mon, Apr 8, 2013 at 2:52 AM, helpcrypto helpcrypto
helpcry...@gmail.com wrote:

 While awaiting http://www.w3.org/TR/WebCryptoAPI/, Java applets for
 client signing, signText, and keygen are needed.
 Also, things like handling smart card events or loading PKCS #11
 modules are being used by many.
 So, you _CANT_ remove
 https://developer.mozilla.org/en-US/docs/JavaScript_crypto

 If you want/need more detailed discussions, dont hesitate to ask me.

Hi,

Yes, I am interested in hearing why you think we cannot remove these functions.

I have met with several members of our DOM and web API teams and we've
tentatively agreed that we should remove these functions if at all
possible--as soon as 2014Q1. That is, we're hoping to remove all of
window.crypto.* except getRandomValues, and all of window.pkcs11.*
(smart card events). Mozilla's policy about web API removal is to make
an announcement that gives websites at least three releases (18 weeks)
of notice. This is not that announcement, but I hope to make such an
announcement soon.

We don't expect other browsers to ever implement this API, so they are
effectively a Mozilla-proprietary API. We are trying to avoid creating
our own proprietary APIs in the hopes that other browsers will do the
same. You can see some of the guidelines we are working on here:
https://wiki.mozilla.org/User:Overholt/APIExposurePolicy

If we were to try to standardize this functionality, we would almost
definitely have to make substantial changes to the APIs as part of the
standardization process. For example, the current APIs assume some
equivalence relation between RSA key sizes and ECC curve strength.

I think smart card events are especially problematic because they seem
to add additional privacy/fingerprinting exposure.

generateCRMFRequest seems like it can be replaced by some JavaScript +
keygen + server-side processing, and we suspect that sites that are
using generateCRMFRequest in Firefox must already do this for other
browsers. I understand that keygen is not the greatest thing in the
world either, but it has the benefit of at least having a
specification for browsers to follow.

signText seems to be the most difficult thing to remove because there
is no way to emulate its smart card access capabilities with standard
web APIs. At the same time, the way signText works is problematic
enough that I suspect no other browser will ever implement it.

We are working on creating a multi-process sandbox for web content,
similar to the sandboxes used in other web browsers. This is one of
the few remaining APIs that isn't implemented in a
multi-process-friendly manner, and given all the above issues we don't
want to spent time converting it to be multi-process-friendly.

Instead, I would like to figure out what, if any, alternative to
signText we can provide, and also what we need to do to enhance our
keygen implementation to help websites migrate away from
generateCRMFRequest.

I am very curious, in particular, what products that use
generateCRMFRequest and/or signText do for other browsers, especially
Chrome.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-26 Thread Brian Smith
On Thu, Aug 22, 2013 at 11:21 AM, Robert Relyea rrel...@redhat.com wrote:

 So looking at this list, I think we have a major inconsistency.

 We put Ephemeral over non-ephemeral, but we put 128 over 256.

 While I'm OK with Ephemeral (PFS) over non-ephermal (non-pfs), I think
 in doing so we are taking a much more significant performance hit we get
 back by putting 128 over 256.


It is not exactly true that PFS always has more of a negative performance
impact than AES-128 vs AES-256. In Chromium, PFS ciphersuites will be
faster than non-PFS cipher suites because Chromium requires PFS
ciphersuites to be used for False Start. Also, I have an idea for how we
can make PFS cipher suites + False Start work much more commonly on the
web, that won't work for RSA key exchange. So, ultimately, I expect PFS
cipher suites to be a performance win over non-PFS cipher suites.

But, raw performance isn't the only issue. We expect that PFS cipher suites
*can* have significantly better security properties (see below) if the
server puts in the effort to make the encryption keys actually ephemeral,
and we expect that they are generally no worse they are no worse regarding
security (barring disastrous implementation errors).

Conversely, it isn't clear that AES-256 offers any significant security
advantage over AES-128, though it is clearly slower, even on my
AES-NI-equipped Core i7 processor. First, AES-128 has held up pretty well
so that it might just be good enough in general. Secondly, as I already
pointed out in my proposal, some research has shown that AES-256 doesn't
seem to offer much more security than what we understand AES-128 to offer.
See http://www.schneier.com/blog/archives/2009/07/new_attack_on_a.html and
https://cryptolux.org/FAQ_on_the_attacks. Thirdly, when non-constant-time
implementations are used, AES-256 seems to offer more opportunity for
timing attacks than AES-128 does, due to more rounds and larger inputs.


 The attack profile protection of PFS versus non-PFS is basically two
 points:
 1) some government agency could force a server to give up its private
 keys and decrypt all the traffic sent to that server. But we already
 know that government agencies with such power simply ask for the
 data on the server.


This argument seems to assume that all the data that was transferred over
the network is stored on the server. But, this is not necessarily the case.
I don't think that is a reasonable assumption. The site may have already
deleted the data (perhaps at the request of the user) from the server.
Also, there is a lot of transient data that is never stored anywhere.
Finally, sometimes it is more interesting for the attacker to know what
data was transmitted than to know what data is on the server. For example,
if somebody is trying to prosecute me for posting my album collection to
MegaUpload, the existence of the album data on the MegaUpload server may
not be too useful, whereas a record of me doing the upload of that data
with my actual credentials may be.


 I still think PFS algorithms are useful and agree with preferring them,
 but compared to 128 versus 256 it seems like the cost/security tradeoffs
 are actually less for the PFS algorithms.


First, I agree with the overall idea that the performance cost of AES-256
over AES-128 isn't huge. However, I do think that it is significant, at
least for mobile clients where such concerns are most critical--not just
speed, but also battery life. We can gather the numbers (perhaps others
already have them) if that helps.

Something to note is that MSIE has always put AES-128 cipher suites ahead
of AES-128 cipher suites. They also put RSA cipher suites ahead of PFS
cipher suites, though.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-26 Thread Brian Smith
On Mon, Aug 26, 2013 at 2:24 PM, Brian Smith br...@briansmith.org wrote:

 Something to note is that MSIE has always put AES-128 cipher suites ahead
 of AES-128 cipher suites. They also put RSA cipher suites ahead of PFS
 cipher suites, though.



I meant: MSIE has always put AES-128 cipher suites ahead of **AES-256**
cipher suites.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-18 Thread Brian Smith
On Thu, Aug 15, 2013 at 10:15 AM, Chris Richardson ch...@randomnonce.org wrote:

 I believe this plan would have poor side effects.  For example, if Apple
 ships clients with a broken ECDSA implementation [0], a server cannot
 detect detect if a connecting client is an Apple product and avoid the use
 of ECDSA in that subset of connections.  Instead, ECDSA suddenly becomes
 unsafe for anyone to use anywhere.


I think your argument is more about the "Future work: A comprehensive
profile for browsers' use of TLS" part of the document, since the
fingerprinting that OpenSSL is now using to detect Safari on OS X 10.8 uses
the presence and ordering of TLS extensions like SNI that are not in the scope
of the current proposal. Although I think browsers could now realistically
all agree on the sequence of ciphersuites to offer by default in their
client hello, we're far from standardizing on the contents of the entire
client hello. Let's defer debating the pros/cons of completely
eliminating fingerprinting in TLS until it is more realistic to do so (if
ever).

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-18 Thread Brian Smith
On Fri, Aug 16, 2013 at 11:13 AM, Camilo Viecco cvie...@mozilla.com wrote:

 Hello Brian

 I think this proposal has 3 sections.
 1. Unifying SSL behavior on browsers.
 2. Altering the criteria for cipher suite selection in Firefox (actually
 NSS)
 3. Removing certain cipher suites from the default Firefox ciphersuite list.

 On 2:
 The proposal is not clear. I want an algorithmic definition.


snip

 This criteria gets to your ordering proposal. What do you think of
 re-framing your list in criteria like this? (Note: national ciphers could
 go in step 6 instead of step 3.)


That sounds reasonable to me. I did not invest too much effort on making
the results computable from the rationale section because I think it is
likely that a lot (or all) of the rationale section would be reduced or
removed from any IETF internet draft that proposed a web browser profile of
TLS.


 On 3:

 Not adding:
 TLS_(EC?)DHE_RSA_WITH_AES_(**128|256)_CBC_SHA256
 Disagree. I don't think a potential performance issue should prevent us from
 deploying that suite, as there could be SHA-1 attacks that we don't know of.


Now that NSS has AES-GCM, we have an alternative to HMAC-SHA1. Also, if we
are a little presumptuous, we can expect to have a third alternative in
Salsa20/12+(UMAC|VMAC|Poly1305) sometime in the near future. If we find it
is important to offer HMAC-SHA256/384 later, we can do so then. But, if we
add them now, we will have difficulty removing them later.


 If we have enough space in the handshake I see no problem in including
 them.


We will have to determine whether the 256-byte client hello limitation is
really something that we have to deal with in the long term. But, even if
that turns out not to be something we ever need to worry about, I would
still be against adding HMAC-SHA256/384 when there seem to be better
alternatives that do not regress performance from what we're offering now.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-18 Thread Brian Smith
On Fri, Aug 16, 2013 at 5:58 PM, Wan-Teh Chang w...@google.com wrote:

 On Fri, Aug 16, 2013 at 3:36 PM, Rob Stradling rob.stradl...@comodo.com
 wrote:
 
  Wan-Teh, why do you think Firefox should specify a preference for ECDSA
 over
  RSA?

 Because ECDSA is more secure than RSA, and ECC implementations will
 become faster over time.

 The ordering of RSA and ECDSA is really a symbolic gesture right now
 because they each require a certificate, and very few websites have
 both an RSA certificate and an ECDSA certificate.


I agree that the ordering of ECDSA vs. RSA is mostly a symbolic gesture at
this point in time due to the small number of websites that have both types
of certificates, and the motivations for those sites to have both types.
But, I don't think we should base the ordering that browsers choose on
this symbolic meaning; instead, we should base it on what gives the client
the best security/performance/compatibility tradeoff.

I am not sure that we can say that ECDSA is more secure than RSA in a
meaningful way. The old Debian OpenSSL bug and the new Android OpenSSL bug
both offer some empirical evidence that implementation errors in PRNGs are
more likely to reduce security than the theoretical concerns that would
indicate that ECDSA is generally more secure than RSA. Also, the minimum
RSA key size that is acceptable according to the baseline requirements is
2048 bits. For digital signatures, there seems to be quite a significant
margin between what seems to be needed to authenticate a single TLS
handshake and the security level that RSA 2048 offers. If we assume that
websites will generally choose the smallest/fastest key that is considered
acceptable according to the CABForum baseline requirements (RSA 2048 and
ECDSA P-256) then especially on ARM there is quite a performance advantage
for the client to get an RSA signature instead of an ECDSA signature. If
the server is choosing which certificate to use based on the client's
preferences instead of its own performance needs, then the client should be
suggesting what is best for the client, on the assumption that the server
is making a rational decision.

More generally, the ordering I suggested isn't intended to be the ordering
that servers should use if they are configured to disregard the client's
preferences. For example, many servers wouldn't want to choose CBC-based
ciphersuites over RC4 yet if they are concerned about BEAST or Lucky 13.
But, for NSS-based clients, it does make sense to choose the CBC-based
ciphersuites in the proposal over RC4 because the NSS-based clients have
implemented fixes for BEAST and Lucky 13, but not for the RC4 issue.

I will update the proposal to mention these things.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-18 Thread Brian Smith
On Mon, Aug 12, 2013 at 6:52 AM, Gervase Markham g...@mozilla.org wrote:

 On 09/08/13 18:12, Brian Smith wrote:
  No, each combination is hard-coded into its own distinct code point that
 is
  registered with IANA:
 
 https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4
 .
  This is a design choice based on the fact that many crypto modules don't
  let you mix/match algorithms at will, and because you often
 can't/shouldn't
  move keys between crypto modules.

 OK. So you are choosing from a fixed palette, and changing that palette
 is outside the scope of this proposal?


It is possible to add new cipher suites, but new cipher suites should have
substantial value and a realistic shot at becoming widely-deployed.


 I agree this is theoretically possible but, as Tom points out, if we
 posit an attacker who can see your traffic, the chances of you
 concealing the identity of your user agent from him are pretty small.

 When risk is there to a user of having a network eavesdropper able to
 tell that they are using a particular browser? If I had an exploit for a
 particular browser, I'd just try it anyway and see if it worked. That
 seems to be the normal pattern.


One example is Tor: it tries to look like a normal browser so that it is
hard to detect that you are using Tor. And, if Tor is properly configured
then the network attacker will never see any non-TLS traffic.


   * Re: Camellia and SEED: we should talk to the organisations which
  pushed for their addition, and our business development people in the
  region, before eliminating them. (This is not to say that we will
  definitely not remove them if they raise objections.)
 
  Do you have suggestions for who to contact?

 The first person I would talk to would be Gen Kanai g...@mozilla.com,
 although he may put you in touch with others.


Thanks. I will ask him to forward a link to these threads to the
people he thinks may be interested in it.

Cheers,
Brian


Re: Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-09 Thread Brian Smith
On Fri, Aug 9, 2013 at 3:27 AM, Gervase Markham g...@mozilla.org wrote:

 * Can you provide some background or references on exactly how
 ciphersuite construction and choice works? Can I invent e.g.
 TLS_DHE_ECDSA_WITH_AES_128_MD5 or some other random combination of
 elements? Can any combination be encoded in the ClientHello? Does the
 server support specific sets, or will it support my combination if it
 supports all the component pieces?


No, each combination is hard-coded into its own distinct code point that is
registered with IANA:
https://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-4.
This is a design choice based on the fact that many crypto modules don't
let you mix/match algorithms at will, and because you often can't/shouldn't
move keys between crypto modules.
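As an illustration of those fixed code points, here is a hedged sketch using a few values from the IANA registry linked above; the encoding helper mirrors the ClientHello cipher_suites wire format (two-byte length prefix, two bytes per suite) but is not taken from any actual TLS stack:

```javascript
// A few ciphersuite code points from the IANA TLS registry. Each suite
// is a single fixed two-byte value, not a combination assembled by the
// client at will.
const suites = {
  TLS_RSA_WITH_AES_128_CBC_SHA:            0x002f,
  TLS_RSA_WITH_AES_256_CBC_SHA:            0x0035,
  TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256: 0xc02b,
  TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256:   0xc02f,
};

// Encode the cipher_suites list as it appears on the wire in a
// ClientHello: a two-byte length prefix, then two bytes per suite.
function encodeCipherSuites(codePoints) {
  const buf = Buffer.alloc(2 + 2 * codePoints.length);
  buf.writeUInt16BE(2 * codePoints.length, 0);
  codePoints.forEach((cp, i) => buf.writeUInt16BE(cp, 2 + 2 * i));
  return buf;
}

const encoded = encodeCipherSuites(Object.values(suites));
console.log(encoded.toString('hex')); // "0008002f0035c02bc02f"
```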


 * We should avoid leaking the distinction between mobile and desktop
 products in the TLS handshake, which means that the handshake should
 look the same on mobile and desktop.

 Why is this a goal? There are many, many other ways of determining this
 distinction, some supported by Mozilla (e.g. the UserAgent string).


There is a difference between leaking to somebody on the network and
leaking to the server you are connecting to. Remember that TLS hides the
User-Agent string and other HTTP-level information is hidden from others on
the network. So, if Firefox for Android and Firefox for Desktop use the
exact same TLS handshaking logic/parameters, then it should be possible to
make them indistinguishable from each other.


 The same question applies to your point about avoiding TLS
 fingerprinting. I think it should be a goal to make it hard to
 distinguish between specific instances of Firefox, but it seems to be
 not a goal to avoid people distinguishing between Firefox and IE, or
 Firefox for desktop and Firefox for Android.


If every browser's TLS handshake were to look the same, then observers on
the network wouldn't be able to tell browsers apart, though the website you
are connecting to obviously would. I admit that is a state that is likely
to be difficult to obtain.

* this proposal does not recommend supporting the CBC-HMAC-SHA-2-based
 ciphersuites that those browsers recently added

 Can you spell out why? Is it because they don't offer forward secrecy?


It is explained below. Worse performance, no security benefit, and they
take up space in the handshake.


 * In the course of testing TLS 1.2 and the ALPN TLS extension, the
 Chromium team recently found that some servers choke when the
 ClientHello message in the TLS handshake is larger than 256 bytes.

 How many bytes are taken up per ciphersuite? How many can we probably
 fit in, if we say we need to include all the other stuff?


They take two bytes per ciphersuite. If the 256-byte limitation cannot be
worked around, then we basically can't afford to waste *any* bytes in the
TLS handshake. It is already likely going to be very difficult for us to
support the ALPN extension as it is, even after making these reductions.
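The two-bytes-per-suite arithmetic can be sketched as follows; the fixed-overhead figure is an illustrative assumption, not a measured ClientHello size:

```javascript
// Rough ClientHello budget arithmetic: each ciphersuite costs exactly
// two bytes in the cipher_suites list. BUDGET is the size above which
// some servers were observed to choke; FIXED_OVERHEAD (handshake
// headers, client random, session ID, extensions, etc.) is an assumed
// figure for illustration only.
const BUDGET = 256;
const FIXED_OVERHEAD = 150;

function suitesThatFit(budget, overhead) {
  return Math.floor((budget - overhead) / 2);
}

console.log(suitesThatFit(BUDGET, FIXED_OVERHEAD)); // 53
```

Under these assumed numbers, every extra suite offered eats two bytes that extensions like ALPN also need, which is why wasted code points matter.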


 * Re: Camellia and SEED: we should talk to the organisations which
 pushed for their addition, and our business development people in the
 region, before eliminating them. (This is not to say that we will
 definitely not remove them if they raise objections.)


Do you have suggestions for who to contact?


 * Given our current understanding, HMAC-SHA-1, HMAC-SHA-256, and
 HMAC-SHA-384 are all more-or-less equal in terms of security given how
 they are used in TLS.

 However, if we never use them, and then have to switch to them because a
 problem arises with HMAC-SHA-1, how will they have received any testing?




 More generally, is there a place for including ciphersuites 'of the
 future', perhaps at lower priority, to try and make sure there aren't
 problems or interop issues with them?


We will soon have AES-GCM and we'll hopefully soon have
Salsa20/12+(Poly1305|UMAC|VMAC) to mitigate that. Relying on either/both of
those alternatives kills more birds with fewer stones. I think ultimately
we'd rather have all HMAC-based ciphersuites also marked deprecated, for
performance and security reasons.


 * Your final section says what cipersuites should be added and dropped.
 Is this simply a config change with testing, or does it require code to
 be written in one or more of the TLS stacks?


Dropping ciphersuites is a simple configuration change.

In the top table, the notes column lists the ciphersuites that will
require code changes to NSS and to SChannel. Bug 880543
(https://bugzilla.mozilla.org/show_bug.cgi?id=880543) tracks the addition
of AES-GCM ciphersuites to NSS's libssl. OpenSSL already implements them. I
think Google may be working on Salsa20/12+Poly1305 ciphersuites, and there
has been some small progress on adding Salsa20/12 ciphersuites in the IETF
TLS working group.

Reordering ciphersuites in SChannel can be done with a configuration change
to the app. Reordering ciphersuites in NSS either requires us to reorder

Proposal to Change the Default TLS Ciphersuites Offered by Browsers

2013-08-08 Thread Brian Smith
Please see https://briansmith.org/browser-ciphersuites-01.html

First, this is a proposal to change the set of sequence of ciphersuites
that Firefox offers. Secondly, this is an invitation for other browser
makers to adopt the same sequence of ciphersuites to maximize
interoperability, to minimize fingerprinting, and ultimately to make
server-side software developers and system administrators' jobs easier.

Suggestions for improvements are encouraged.

Cheers,
Brian
-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: libnss3.so available on FireFox on Android?

2013-07-30 Thread Brian Smith
See
https://mxr.mozilla.org/mozilla-central/source/services/crypto/modules/WeaveCrypto.js#123
and https://bugzilla.mozilla.org/show_bug.cgi?id=583209
and https://bugzilla.mozilla.org/show_bug.cgi?id=648407




On Tue, Jul 30, 2013 at 11:58 PM, hv hishamkanni...@gmail.com wrote:

 Hi,

 I was not able to open NSS on FF android. Is NSS available on FireFox on
 Android?
 I tried the follwing:

 var ds = Services.dirsvc.get("GreD", Components.interfaces.nsILocalFile);
 var libName = ctypes.libraryName("nss3");
 ds.append(libName);

 var nsslib = ctypes.open(ds.path); // FAILS TO OPEN

 --
 dev-tech-crypto mailing list
 dev-tech-crypto@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-tech-crypto




-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: libnss3.so available on FireFox on Android?

2013-07-30 Thread Brian Smith
On Wed, Jul 31, 2013 at 1:58 AM, Robert Relyea rrel...@redhat.com wrote:

 On 07/30/2013 04:27 PM, Brian Smith wrote:

 See
 https://mxr.mozilla.org/mozilla-central/source/services/crypto/modules/WeaveCrypto.js#123
 and https://bugzilla.mozilla.org/show_bug.cgi?id=583209
 and https://bugzilla.mozilla.org/show_bug.cgi?id=648407


 Oh, I didn't get that it was a call from inside extension services. I did
 read through the bug, and it sounds like there is some work the extension
 needs to do to get access to NSS functions, but it wasn't clear what it
 was. Brian is there a document that deals with what extensions need to do
 to change to this new monolithic linking?


Basically, all of the NSS libraries except softoken and freebl get folded
into libnss3.so. But, not on Linux. So, if you're going to use something
that is not normally in libnss3.so, you need to have two code paths: one
for Linux, and one for all other platforms.

Basically, the WeaveCrypto.js link seems to be doing the things that are
required, such as making sure PSM is initialized so that NSS is initialized.

But, now I remember that WeaveCrypto.js got replaced with a Java
implementation on Android a while back. So, maybe WeaveCrypto.js isn't
handling the Android case correctly.

If you are still having trouble then try adding a lib/ to the path on
Android. And, if that doesn't work, ask on #developers on irc.mozilla.org.
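The Linux-versus-other-platforms split described above could be sketched like this; nssLibraryForComponent is a hypothetical helper (real builds also vary the file extension by OS), not actual WeaveCrypto code:

```javascript
// Hypothetical sketch of the two code paths described above: on most
// platforms the NSS component libraries (except softoken and freebl)
// are folded into a single libnss3, but on desktop Linux they stay
// separate. The ".so" extension is used throughout for illustration;
// real builds use ".dll"/".dylib" on Windows/macOS.
function nssLibraryForComponent(platform, component) {
  const neverFolded = ['softokn3', 'freebl3'];
  if (platform !== 'linux' && !neverFolded.includes(component)) {
    return 'libnss3.so'; // folded, monolithic build
  }
  return 'lib' + component + '.so'; // separate shared library
}

console.log(nssLibraryForComponent('linux', 'smime3'));   // "libsmime3.so"
console.log(nssLibraryForComponent('android', 'smime3')); // "libnss3.so"
```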

Cheers.
Brian




 bob





 On Tue, Jul 30, 2013 at 11:58 PM, hv hishamkanni...@gmail.com wrote:

  Hi,

 I was not able to open NSS on FF android. Is NSS available on FireFox on
 Android?
 I tried the follwing:

 var ds = Services.dirsvc.get("GreD", Components.interfaces.nsILocalFile);
 var libName = ctypes.libraryName("nss3");
 ds.append(libName);

 var nsslib = ctypes.open(ds.path); // FAILS TO OPEN

 --
 dev-tech-crypto mailing list
 dev-tech-crypto@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-tech-crypto






 --
 dev-tech-crypto mailing list
 dev-tech-crypto@lists.mozilla.org
 https://lists.mozilla.org/listinfo/dev-tech-crypto




-- 
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)


Re: Removal of Revocation Lists feature (Options - Advanced - Revocation Lists)

2013-05-09 Thread Brian Smith
Julien Pierre wrote:
 If this is about removing the feature from NSS altogether on the
 other hand, I would like to state that we have several several
 products at Oracle that use NSS and rely on the ability to have
 CRLs stored in the database, and processed during certificate
 validation.

I am not proposing any change to NSS.

Cheers,
Brian


Re: Removal of Revocation Lists feature (Options - Advanced - Revocation Lists)

2013-05-02 Thread Brian Smith
Robert Relyea wrote:
 Oh, in that case I can say we have customers that definitely need to
 use CRLs that have been loaded and stored in the database.

  To be clear, I don't know of any reason to consider the processing
  of already-loaded CRLs as a requirement for Firefox.

 Oh, then I'd say we really can't remove it

Right now, I am thinking that we will remove it in the default configuration of 
Firefox. If the user has switched Firefox to use the system's certificate 
verification logic, and the system's certificate validation logic happens to 
use this information, then it will get used. The good thing (for Firefox) is 
that it will be the operating system's responsibility to provide tools to 
manage this and also to make sure that it works, independently of anything that 
Firefox explicitly does.

I think this will address the desires of your customers, since they are using 
system NSS with libpkix on RHEL, right? This would also meet the goal of 
driving the web towards sane, uniform, and standardized certificate processing, 
since Firefox wouldn't do it by default.

Cheers,
Brian


Re: Removal of Revocation Lists feature (Options - Advanced - Revocation Lists)

2013-05-01 Thread Brian Smith
Sean Leonard wrote:
 Brian Smith wrote:
  The Revocation Lists feature allows a user to configure Firefox
  to poll the CA's server on a regular interval. As far as I know,
  Firefox is the only browser to have such a feature. Other browsers
  either ignore CRLs completely or download CRLs on an as-needed
  basis based on a URL embedded in the certificate.
 
 This is not true.
 
 The Microsoft Windows CryptoAPI stack allows users (and admins) to
 load CRLs manually, not just via an automated network call during
 certificate validation. These CRLs are checked by default (indeed,
 in preference to the network download) if they are present. Admins
 can push updated CRLs to PCs as well.

Thanks for correcting my mistake. I did search for this feature, but I could 
not find it. How does one access this feature?

 All applications on Windows that use CryptoAPI use the CRL store.
 This includes Internet Explorer, Google Chrome (at least certain
 versions and/or derivative Chromium products), and all other
 Windows-based apps that use the operating system-provided SSL or
 other certificate-touching functions (Outlook, the rest of the
 Office Suite, Authenticode, driver/kernel signing, etc.).

Also, like Jan noted in this thread, and as others have noted previously, this 
feature seems to be buggy. So, I hope that nobody has been relying on this 
feature for anything important. And, given that it seems to be 
broken/unreliable for a long time, underused, and difficult to figure out, it 
seems silly to devote any resources to fixing it, when it is much easier to 
just remove it and forget about it.

 Similar statements could be made about libnss on *nix, if and when
 multiple apps use libnss and the same stores.

  For example, in its default configuration, Google Chrome ignores
  CRLs, AFAICT (they use some indirect mechanism for handling
  revocation, which will be discussed in another thread).
 
 Not necessarily true. See above. It may be more accurate to say that
 Chrome does not take any special effort to download CRLs of its own
 accord. 

It seems, insofar as this feature might be useful, that it would be better to 
integrate with the systems' CRL stores, rather than have our own. And, in 
general, for systems that care about this kind of stuff, it seems better in 
general to just have an option to use the system's certificate validation logic 
(e.g. Windows CryptoAPI), as configured by the sysadmin/user, for certificate 
validation, instead of Gecko/NSS's certificate validation. For the type of user 
that requires such special handling and centralized control, that seems like 
the real solution to this problem and related problems.

Still, insofar as revocation checking is important, it is equally important on 
all platforms. But, I seriously doubt that many of these platforms will ever 
have this feature. That is, certificates have to be safe even for platforms 
that don't have this feature, and given that, it seems pointless to have this 
if other mechanisms must already be sufficient.

 Additionally, the UI for Revocation Lists is part of pippki, which
 is a core Mozilla Toolkit component. Removing the UI would be tantamount
 to removing it from all other apps, including Thunderbird. In theory you
 could remove it--or the button--from just Firefox, but what would be
 the point? You would just be removing functionality that has already
 proven its utility.

It would be removing broken functionality that slows down progress fixing much 
more important broken functionality.

Also, before I became module owner of PSM, I had it clarified that decisions in 
mozilla-central are to be focused on the needs of Firefox and FirefoxOS. If 
Thunderbird or another application needs a particular (mis)feature of PSM that 
Firefox/FirefoxOS doesn't need, then they can add that feature back in the 
products that need it (and fix all the bugs in those misfeatures).

 As long as you follow RFC 5280

RFC 5280 allows us to do revocation checking in any way we choose.

 However, improvements to the core certificate validation logic are
 NOT improvements if they ignore valuable revocation information. If a
 user (or admin) intentionally ships Firefox--or any app--with
 **additional revocation information**, that user preference ought to
 be respected.

It's going to be a long week. :)

Cheers,
Brian


Re: Removal of Revocation Lists feature (Options - Advanced - Revocation Lists)

2013-05-01 Thread Brian Smith
Robert Relyea wrote:
 Brian, I was under the impression you wanted to remove the CRL
 autofetching feature (where you enter a URL and a fetching time and
 the CRL will automatically be fetched). When I looked at the UI, it
 looked like it had both the URL fetching feature as well as the
 ability to manage downloaded CRLs. I think you need to be careful
 about removing the management ability with CRLs. The most important
 part of the UI is the ability to delete CRLs which may have gotten
 into the database.

My intent is to remove/disable all aspects of this feature: the UI *and* the 
processing of CRLs stored in the database.

 Any the processing of already loaded CRLs is part of NSS proper. You
 can load them and delete them by hand with crlutil. What you can't do
 is have them automatically refreshed.
 
 Sean, is it the ability to load offline CRLs or the automatically
 fetch/refresh them that you object to. I already know that processing
 offline, already loaded CRLs are a requirement, so it's not going
 away from NSS anytime soon.

To be clear, I don't know of any reason to consider the processing of 
already-loaded CRLs as a requirement for Firefox.

Anyway, I wouldn't get too hung up about what NSS currently does. We can always 
change Firefox and/or NSS to get the behavior we need.

Cheers,
Brian


Removal of Revocation Lists feature (Options - Advanced - Revocation Lists)

2013-04-30 Thread Brian Smith
Hi all,

I propose we remove the Revocation Lists feature (Options > Advanced > 
Revocation Lists). Are there any objections? If so, please explain your 
objection.

A certificate revocation list (CRL) is a list of revoked certificates, 
published by the certificate authority that issued the certificates. These 
lists vary from 1KB to potentially hundreds of megabytes in size.

Very large CRLs are not super common, but they exist: reportedly, GoDaddy (a CA 
in our root CA program) has a 41MB CRL. And Verisign has at least one CRL that 
is close to 1MB on its own, and that's not the only CRL that they have. The US 
Department of Defense is another example of an organization known to have 
extremely large CRLs.

The Revocation Lists feature allows a user to configure Firefox to poll the 
CA's server on a regular interval. As far as I know, Firefox is the only browser 
to have such a feature. Other browsers either ignore CRLs completely or download 
CRLs on an as-needed basis, based on a URL embedded in the certificate. For 
example, in its default configuration, Google Chrome ignores CRLs, AFAICT (they 
use some indirect mechanism for handling revocation, which will be discussed in 
another thread). AFAICT, the Revocation Lists feature was added to Firefox a 
long time ago, when there were IPR concerns about the as-needed behavior. 
However, my understanding is that those concerns are no longer justified. In 
another thread, we will discuss whether or not we should implement the 
as-needed mechanism. However, I think that we can make this decision 
independently of that decision.

Obviously, the vast majority of users have no hope of figuring out what this 
feature is, what it does, or how to use it.

Because of the potential bandwidth usage issues, and UX issues, it doesn't seem 
like a good idea to add this feature to Mobile. But, also, if a certificate 
feature isn't important enough for mobile*, then why is it important for 
desktop? We should be striving for platform parity here.

Finally, this feature complicates significant improvements to the core 
certificate validation logic that we are making.

For all these reasons, I think it is time for this feature to go.

Cheers,
Brian

[*] Note: I make a distinction between things that haven't been done *yet* for 
mobile vs. things that we really have no intention to do.


Re: Root Certificates in Firefox OS (was Re: NSS in Firefox OS)

2013-04-19 Thread Brian Smith
Rob Stradling wrote:
  I presume that Firefox OS trusts NSS's Built-in Root Certificates
  [1], but what (if anything) does Firefox OS do for EV SSL?

As you found, Firefox OS doesn't have an EV UI, and in fact I just disabled the 
EV validation logic in B2G for performance reasons, given that it was wasted 
effort without a UI.

  Does Firefox OS import PSM's list of EV-enabled Root Certificates?
  [2]

It did, but I just disabled that since it wasn't being used for anything.

Note that this wasn't a policy decision. It could be that we will have an EV 
indicator in the future on B2G. I expect we will eventually try to make all our 
products consistent, one way or another.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Firefox behavior for CDP and AIA

2013-04-15 Thread Brian Smith
Rick Andrews wrote:
 I know that FF allows you to choose a CRL and it will check status
 against that CRL when it finds a cert issued by the CRL issuer. Does
 anyone know if FF uses the CDP in the cert or the cert's issuer name
 as a key to find the CRL?

I assume you are talking about the Revocation Lists feature exposed in the 
Options > Advanced > Certificates UI.

It uses the cert's issuer name. In particular, it uses CERT_CheckCRL, which 
calls cert_CheckCertRevocationStatus, which calls AcquireDPCache, which looks 
things up by issuer name. I didn't look to see whether we allow multiple CRLs 
for a given issuer name.

 The reason I ask is in regards to partitioned CRLs, where a CA could,
 for example, have one CRL for odd serial numbers and one for even.
 The CA would put the appropriate CDP in each cert, but would that
 confuse FF?

I'm not sure. The Revocation Lists feature is somewhat unmaintained and may 
be removed.

 Same question about OCSP responses and AIA.

Currently, Firefox uses the first OCSP responder URL listed in the end-entity's 
cert's AIA for doing OCSP fetches.

 Does anyone know the answers for IE?

I am not sure exactly what IE does, but IIRC Microsoft has very good 
documentation on MSDN regarding revocation checking in Windows.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Clarification regarding SEC_PKCS7VerifyDetachedSignatureAtTime

2013-04-08 Thread Brian Smith
 What does this mean for building Firefox?
 
 If you want to build a development snapshot of Firefox against a
 systemwide installed NSS, and you want to build Firefox 22 aurora at
 this time, you have the following choices:
 
 - don't build Firefox 22 aurora until Mozilla cleaned up the
   situation.
   If you are waiting for that to happen, you could remind Mozilla
   to either apply bug 853776 to aurora 22
   or to extend bug 858231 to cover aurora 22, too.

I will apply the patches for bug 853776 to mozilla-aurora. The patch for that 
is going through try now:
https://tbpl.mozilla.org/?tree=Try&rev=5d0543e962b6

 Let's hope this kind of situation will remain an exception and can be
 avoided in the future.

The expectation should be that there will be local patches in mozilla-central 
and mozilla-aurora whenever those patches are the fastest way to get work done 
for Firefox. I will do what I can to get as many patches as possible upstreamed 
first. But, in order for Mozilla to be able to experiment with and test changes 
we want to upstream, and to minimize disruption to the other users of NSS, we 
should make more use of our ability to carry private patches in mozilla-central 
and mozilla-aurora.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Removal of generateCRMFRequest

2013-04-01 Thread Brian Smith
See https://bugzilla.mozilla.org/show_bug.cgi?id=524664 (bug 524664) and 
https://developer.mozilla.org/en-US/docs/JavaScript_crypto/generateCRMFRequest

My understanding is that <keygen> is supposed to replace 
window.crypto.generateCRMFRequest.

I have no idea how common window.crypto.generateCRMFRequest is. Is it obsolete? 
Should it be removed? Does anybody have a link to a site that is using it for 
its intended purpose?

If it is obsolete, I would like to remove it ASAP.

More generally, I would like to remove all the Mozilla-proprietary methods and 
properties from window.crypto; i.e. all the ones at 
https://developer.mozilla.org/en-US/docs/JavaScript_crypto. Some of them are 
actually pretty problematic. Are there any worth keeping?

Thanks,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Ensuring certificate chain when creating certificates in memory without db.

2013-02-08 Thread Brian Smith
passf...@googlemail.com
 I use SSL_ConfigSecureServer with a certificate which was created in
 memory (no db). The certificate was created with the
 CERT_CreateCertificate passing the CA's issuer. The same cert was
 also signed with the CA's key. The CA cert was also created on the
 fly, i.e. without the need to setup a DB. My understandings are that
 SSL_ConfigSecureServer will extract the chain from the certificate
 using CERT_CertChainFromCert but since at no stage I am somehow
 embeding the CA into the resulting cert how is this going to work?
 
 I am not sure if it is possible to embed the CA cert data in the cert
 created by CERT_CreateCertificate. If this is possible, can you
 point me to an example how this is done?

Every time you create a CERTCertificate object, NSS adds the certificate to a 
hidden global hash table in memory, keyed by the subject name. When doing 
certificate path building (CERT_CertChainFromCert, CERT_VerifyCert, et al.) NSS 
looks up the issuer names in that global hash table. Consequently, as long as 
you have a reference to the CERTCertificate for the certs in the cert chain at 
the time libssl calls CERT_CertChainFromCert, libssl will be able to construct 
the cert chain correctly.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


NSS 3.14.2 BETA 3 tagged ; NSS 3.14.2 BETA 3 + one patch now required to build mozilla-central

2013-01-27 Thread Brian Smith
Hi all,

I tagged NSS 3.14.2 BETA 3 and pushed it to mozilla-inbound to fix build 
breakage of ASAN and dxr builds.

Also, now mozilla-central contains a patch for bug 834091. That patch adds a 
new public function to libsmime, SEC_PKCS7VerifyDetachedSignatureAtTime, which 
fulfills a last-minute B2G 1.0 need. I expect that function to get added to NSS 
in NSS 3.14.3 (not NSS 3.14.2), but first I want to make some additional 
changes to it which will change its signature and its semantics.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


NSS 3.14.2 BETA 1 (NSS_3_14_2_BETA1) tagged and is now the minimum version for Gecko 20

2012-12-20 Thread Brian Smith
Today I tagged the CVS HEAD with the NSS_3_14_2_BETA1 tag and imported it into 
Gecko (pushed to inbound).

We expect to finalize the NSS 3.14.2 release in late January, 2013. In the 
interim, you can pull and build NSS betas from CVS:

cvs -d :pserver:anonymous@cvs-mirror.mozilla.org:/cvsroot co -r NSS_3_14_2_BETA1 NSS
cd mozilla/security/nss
make nss_build_all

After mozilla-inbound is merged into mozilla-central, NSS 3.14.2 BETA1 or later 
will be the minimum version of NSS for Gecko. This is relevant primarily for 
people building Gecko-based applications using the --use-system-nss option.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


NSS_3_14_2_BETA2 landed, with some patches backported to mozilla-beta (was CVS HEAD tagged NSS_3_14_1_BETA1 and pushed it to mozilla-inbound)

2012-12-07 Thread Brian Smith
Brian Smith wrote:
 Brian Smith bsm...@mozilla.com
  https://bugzilla.mozilla.org/show_bug.cgi?id=816392
 
 This change was backed-out before it reached mozilla-central from
 mozilla-inbound because NSS crashes during startup on Android (and
 probably B2G). See bug 817233.

After the backout, I tagged NSS_3_14_2_BETA2 on the CVS HEAD and pushed it to 
mozilla-central.

Three patches were backported to mozilla-aurora and mozilla-beta as they are 
required for B2G. See:
https://bugzilla.mozilla.org/show_bug.cgi?id=818717

Firefox 18 will build correctly against NSS_3_14_2_BETA2 but it won't build 
correctly against NSS 3.14. However, if you build Firefox 18 against 
NSS_3_14_2_BETA2, you will be able to run Firefox 18 against NSS 3.14, because 
all of the changes that Gecko 18 depends on are build-time-only and do not 
affect the ABI.

An alternative would be to apply just the patch for bug 808218 to system NSS 
for building Firefox 18. The other two backported fixes are not needed for 
Firefox; they are only needed for B2G.

Firefox 18 will be released on the week of 2013-01-06. This should give us 
plenty of time to finalize the NSS 3.14.1 RTM release for Linux distros to 
package it, if they choose to have their Firefox 18 packages depend on NSS 
3.14.1 instead of NSS 3.14.

I hope this unusual situation does not cause too much inconvenience.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


CVS HEAD tagged NSS_3_14_1_BETA1 and pushed it to mozilla-inbound

2012-11-30 Thread Brian Smith
I tagged the CVS HEAD as NSS_3_14_1_BETA1 and pushed it to mozilla-inbound.

If you are building Firefox or a Gecko-based application using 
--use-system-nss, you need to use the NSS 3.14.1 beta release, as that is now 
the minimum required version to build Gecko.

Thanks for the help from everybody on this.

https://bugzilla.mozilla.org/show_bug.cgi?id=816392

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


How does SMIME work in NSS (was Re: NSS in Firefox OS)

2012-10-26 Thread Brian Smith
Vishal wrote:
 On Saturday, October 20, 2012 10:33:58 PM UTC+5, Brian Smith wrote:
  Anders Rundgren wrote:
   Anyway, I guess that Firefox OS uses NSS? Is it still based on the
   idea that key access is done in the application context rather than
   through a service?
  B2G (Firefox OS) does use NSS. Nothing has changed regarding the
  process separation between Gecko and the private key material.
  However, B2G uses a process separation model where the Gecko parent
  (chrome) process is separated from the web content. Cheers, Brian
 
 Can someone give a detailed view of how smime works in nss?

I don't work on S/MIME stuff. If I had to learn it, I would start by reading 
the source code to cmsutils, and the header files for lib/smime.

http://mxr.mozilla.org/security/source/security/nss/cmd/smimetools/cmsutil.c
http://mxr.mozilla.org/security/source/security/nss/lib/smime/cmst.h
http://mxr.mozilla.org/security/source/security/nss/lib/smime/cms.h
http://mxr.mozilla.org/security/source/security/nss/lib/smime/smime.h

Then, I would search for SMIME in the Thunderbird source code:
https://mxr.mozilla.org/comm-central/search?string=SMIME&case=1&find=&findi=&filter=^[^\0]*%24&hitlimit=&tree=comm-central

Cheers,
Brian

-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.14 release

2012-10-26 Thread Brian Smith
julien.pie...@oracle.com wrote:
 Oracle still ships NSS with many products even though we are no
 longer actively involved with its development.

It is important to have somebody at least monitoring the bugs filed/fixed in 
the NSS component in bugzilla. See 
https://bugzilla.mozilla.org/userprefs.cgi?tab=component_watch for how you can 
subscribe to a feed of all NSS bug discussions.

Chris Newman wrote:
 --On October 24, 2012 22:19:40 -0700 Julien Pierre
 julien.pie...@oracle.com wrote:
  Oracle still ships NSS with many products even though we are no
  longer actively involved with its development. We do pick up new
  releases from time to time. We picked up 3.13.x last year and I'm
  looking into picking up 3.14 .
  2)
  - The NSS license has changed to MPL 2.0. Previous releases were
  released under a MPL 1.1/GPL 2.0/LGPL  2.1 tri-license. For more
  information about MPL 2.0, please see
  http://www.mozilla.org/MPL/2.0/FAQ.html. For an additional
  explanation
  on GPL/LGPL compatibility, see security/nss/COPYING in the source
  code.
 
  This may be a serious problem also, but IANAL, so that is not for
  me to decide.
 
 Will vulnerability fixes can be provided on the NSS 3.13.x patch
 train? And if so, is there a date when vulnerability fixes will no
 longer be provided for that version?

First, I think pretty much everybody agrees that, concerns about backward 
compatibility aside, the changes that were made were all positive. And, so, we 
have to balance backward compatibility with old versions of NSS with 
compatibility with websites on the internet and compatibility with web 
browsers. Now, none of the people actively contributing to NSS are arguing in 
favor of absolute backward compatibility.

AFAICT, there is no plan to work on 3.13.x any more. IMO, it is better to 
continue to focus development on the trunk.

Even if somebody were to backport fixes to 3.13.x, then that work would also be 
under the MPL 2.0, for various reasons that, at this point, I think we cannot 
do anything about. For example, all the fixes in the new version are assumed to 
have been contributed under MPL 2.0. See the MPL 2.0 FAQ that contains the 
email address to send licensing questions to: 
http://www.mozilla.org/MPL/2.0/FAQ.html

Also, I thought the goal was/is to remove the bypass mode code soon. Perhaps 
that decision will partly be based on how much it gets in the way of the TLS 
1.2 implementation? I would be surprised if we required the TLS 1.2 
implementation to support the bypass mode.

By the way, I think it would be very useful to know what causes the difference 
in performance between the bypass mode and the non-bypass mode, to see if we 
can optimize the non-bypass mode so that everybody (including users of NSS 
outside of libssl) can get the performance improvements.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS 3.14 release

2012-10-26 Thread Brian Smith
Julien Pierre wrote:
 I know what code changes are necessary. I'm only a developer on a
 couple of NSS applications at this point, not an NSS maintainer.
 If this was only about those couple of apps, it wouldn't be an issue.
 But there are other apps in Oracle that could be affected.
 I can safely say that tracking and modifying every single app that
 this binary compatibility change may affect is not going to happen at
 Oracle at this point. Many other apps may not have the same kind of
 tests we have for ciphers and won't even catch the issue. As NSS gets
 distributed as patches to many existing application, binary
 compatibility is a requirement.

Generally everybody is trying to maintain binary compatibility by default. But, 
there are other concerns too, such as compatibility with other implementations, 
and/or cost of maintenance issues, that may sometimes outweigh any binary 
compatibility requirement. 

Regarding the cipher suite changes, I think that overall, more applications 
will benefit than will be hurt. In fact, although it is probably the case that 
some Oracle products are affected, it isn't clear from your message whether 
the effect is negative or positive.

 I agree that they should be, but the decision of the defaults was
 always up to the application until now.

When an application does not explicitly set the set of enabled cipher suites, 
and/or it doesn't set a particular SSL option, then IMO it is saying "Let the 
NSS library decide what is best for me." If that isn't a good policy for an 
application, then it should set the options explicitly.

 Unless the DES ciphers were broken, I don't see the rationale
 for this change.

These were not the 3DES ciphers; they were the original, weak, DES ciphers. 
In 2012 it is not worth analyzing whether DES ciphers are strong enough to keep 
enabled, because it is clear that they are obsolete, and everybody recommends 
against their use. But, also, see:
http://en.wikipedia.org/wiki/Data_Encryption_Standard#Security_and_cryptanalysis

"In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than 
one day, using 128 Spartan-3 5000's."

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: NSS in Firefox OS

2012-10-20 Thread Brian Smith
Anders Rundgren wrote:
 Anyway, I guess that Firefox OS uses NSS?
 Is it still based on the idea that key access is done in the
 application context rather than through a service?

B2G (Firefox OS) does use NSS. Nothing has changed regarding the process 
separation between Gecko and the private key material.

However, B2G uses a process separation model where the Gecko parent (chrome) 
process is separated from the web content.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: undefined reference to '_InterlockedIncrement' when compiling nss under MinGW

2012-10-09 Thread Brian Smith
weizhong qiang wrote:
 Thanks for the instruction. I tried the build of nss on with
 mozillabuild tool (with MS VC and MS SDK, using MS compiler for
 compilation) on Win7. And the build did pass.
 But the build with MinGW/MSYS (using gcc for compilation) still
 failed.

 I hope the build (with MS compiler) can be used for my software
 (which uses gcc for compilation).

If at all possible, I recommend using the MozillaBuild toolchain to build NSS, 
with OS_TARGET=WIN95, with Visual Studio 2010. That is the best-supported 
configuration on Windows because that is what we use for official builds of NSS 
for Firefox.

I am sure it must be possible to use DLLs compiled with Visual Studio in a 
program that is built with GCC, because all built-in Windows DLLs are built 
with Visual Studio. But, I am not sure how to do it.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Accessing Mozilla NSS library functions in JavaScript-XPCOM

2012-10-09 Thread Brian Smith
Brian Teh wrote:
 Currently, my extension uses the NSS library which is coded in C++.
 But due to 6-week release cycle for Thunderbird, I am wondering
 whether are there examples on how to use Mozilla NSS for
 JavaScript-XPCOM, to avoid the need for re-compiling the binary
 components?

Currently, you should not use jsctypes or other JS-to-C++ bridges to access NSS 
in Gecko. One reason is that proper use of NSS in Gecko requires you to 
extend the native C++ nsNSSShutDownObject class in many (most? all?) 
circumstances.

The second reason is that we (Mozilla) have found it to be surprisingly 
difficult to correctly use jsctypes with NSS. In particular, you must be very 
careful that the garbage collector will not garbage collect your native code 
objects. This is one of the reasons I always recommend that people avoid 
jsctypes completely.

That said, I heard that Thunderbird is going to be based on the Gecko 17 
Extended Support Release for the next 11 months or so. That may mean you may 
not need to recompile your binary addons for Thunderbird so frequently any 
more, but I am not sure. I have CC'd Mark Banner to get clarification about 
that.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Alternative for SGN_DecodeDigestInfo

2012-04-06 Thread Brian Smith
Robert Relyea wrote:
 Why are they linking with Freebl anyway? It's intended to be a
 private interface for softoken. It's a very good way to find
 yourself backed into a corner.

Right. This was a long time ago. You helped me add the J-PAKE implementation to 
Softoken after we discovered this problem.

- Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Combining OCSP stapling with advance MITM preparation

2012-04-06 Thread Brian Smith
Kai Engert wrote:
 The domain owner
 could configure their server to include this OCSP response in all TLS
 handshakes, even though this OCSP response is unrelated to the server
 certificate actually being used.

For complete protection, the real domain holder would have to staple all the 
OCSP responses for all compromised certificates in every full SSL handshake it 
does, until those certificates expire.

How do you compare this with 
http://tools.ietf.org/html/draft-evans-palmer-key-pinning-00?

In that mechanism, the server staples information that pins the public key of 
the cert such that certs with different public keys will automatically be 
dis-trusted by the browser.
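As I recall from the -00 draft (treat the exact syntax as an assumption, and the pin value below is a made-up example), the pin is delivered as an HTTP response header, roughly:

```
Public-Key-Pins: max-age=2592000; pin-sha1="4n972HfV354KP560yw4uqe/baXc="
```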

The Evans/Palmer pinning mechanism has an advantage in that it protects 
against mis-issued certs before the issuing CA or the domain owner even learns 
about them.

The Mozilla security team is already planning to implement the Evans/Palmer 
mechanism in Firefox; Chrome has already implemented it, AFAICT.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Recent builds of NSS on Windows?

2012-04-04 Thread Brian Smith
 Today, a buggy old/legacy modutil.exe binary we are using, made me
 try building NSS using mingw. Once again.

The only way I recommend building NSS on Windows is with Microsoft Visual C++ 
and the mozilla-build package located at 
https://developer.mozilla.org/en/Windows_Build_Prerequisites#MozillaBuild_.2F_Pymake

Looking at your errors, it seems like the problem is a general problem with the 
MinGW/MSYS configuration itself, and not a problem specific to NSS. In my 
experience (from more than a year ago), getting a correct GCC-based toolchain 
that isn't an ancient version of GCC working in MSYS/MinGW, not just for 
building NSS but in general, was impractically difficult.

See https://bugzilla.mozilla.org/show_bug.cgi?id=570340 where there is a 
MinGW/MSYS/GCC user trying to build NSS. If you are not him, then he might be 
more helpful than I.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: To NSS-Java or not to NSS-Java, thats the question.

2012-04-04 Thread Brian Smith
helpcrypto helpcrypto wrote:
 IMHO, this is some that needs some clarification, as Mozilla *IS*
 supporting it developing JSS but at the same time saying we do not
 support it, 

Some people who are part of the Mozilla project maintain JSS. I will help 
review patches to JSS if/when the members of the NSS team that want to continue 
supporting JSS ask me to. That is as much enthusiasm for JSS as you are likely 
to get from Mozilla employees.

 and other options dont work properly due to some bugs
 that need to be fixed...or not. Google Chrome works well and is
 taking some advantage on this feature (too).

Google Chrome is exposing NSS to Java/JSS on Mac OS X? I did not think that 
Chrome uses the NSS certificate database at all on Mac OS X.

 -Does mozilla *WANT* Java use certificates stored on NSS to do
 document signning?
 -What about Java applets?
 -Is mozilla going to *AVOID* Java use certificates, or consider this
 as an undocumented/undesired behaviour?
 -What about Java applets?

We already expose window.crypto.signText which supposedly will sign documents 
using certificates stored in NSS on all *desktop* Firefox versions. This should 
be accessible from Java via the Java-JS bridge that I know nothing about.

 -Supporting this (or document sign with XAdES or any other advanced
 systems) is one of mozilla's targets?

In Firefox and Thunderbird? No.

 -Will patches which fix this issues merged (if correct) in branch, or
 will they become marked as WONTFIX?

It depends on whether the bug is in JSS, NSS, Gecko, and what exactly the bug 
is, and how complex the patch is. If you provide more details of what doesn't 
work, we can discuss whether it is reasonable to try to fix it.

 We dont want to rely on undocumented/undesired behaviour, and will
 like to discuss whats the official opinion on this.

I cannot tell you an official opinion but I would say that I personally would 
not bet any money on depending on Java + NSS integration to work reliably in 
Firefox, because that would be a very low priority for most Gecko developers.

 Consider the following example:
 Signning a document with XAdES format with a certificate stored
 on NSS.
 Can it be done? 

I am not sure.

 How should it be done?

I am not sure how to solve whatever problems you are having in the short term.

In the long term:

1. Write patches that replace the usage of NSS in Firefox with usage of the 
system certificate store for client certificates.
2. Help specify and develop the DOMCrypt JS API in Firefox, including 
integration of DOMCrypt with the system certificate store.
3. Rewrite the applet in JS. If you can't, then have your Java application 
call the DOMCrypt JS API through the Java-JS bridge to sign your documents.

I noticed that you seem to be considering reading the NSS keyX.db and certX.db 
files from Java directly. Keep in mind that it is not supported to access these 
files directly, that these files may change format at any time (e.g. Red Hat 
would like Firefox and Thunderbird to switch to the SQLite-based format), and 
that hopefully Firefox and Thunderbird will eventually stop using these NSS 
certificate databases completely except on Linux. None of those things will 
happen any time soon, but I expect them all to happen eventually.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Alternative for SGN_DecodeDigestInfo

2012-04-04 Thread Brian Smith
Robert Relyea wrote:
 On 03/24/2012 03:05 PM, VJ wrote:
  I'm trying to use RSA_HashCheckSign() function to verify the
  message.
  How are you even linking with RSA_HashCheckSign()?

I don't know what platform JV is on, but I know on Mac OS X, all the internal 
symbols in FreeBL and maybe other libraries are exported. This is how the 
Firefox Sync developers got so far in developing their JavaScript 
implementation of J-PAKE based on FreeBL's internal math library; they did all 
their development and testing on Mac OS X and when they were done, they were 
surprised to find they were using functions that you can't even reference on 
Windows (and Linux?).

I am not sure if there is something we can do about this problem for Mac OS X.

- Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Mozilla Team-about the upcoming branding changes at Symantec/VeriSign, and working to implement them in Mozilla/Firefox

2012-03-09 Thread Brian Smith
Geoffrey Noakes wrote:
 
 The *only* change we are asking of Mozilla is to change "Verified by:
 VeriSign, Inc." in the hover-over box to "Verified by Norton":

In Firefox, we show the name of the organization that issued the intermediate 
certificate (the subject O= field of the intermediate certificate) in the hover 
box. This information comes directly from the intermediate certificate.

I have been told, but haven't verified, that other browsers show the name of 
the organization that issued the root certificate (the subject O= field of the 
root certificate) in their UI.

The first question is: Should we change our UI to be the same as other 
browsers? My answer is no. It *is* a good idea to show the root certificate's 
organization name in this part of the UI. But, it is also important to show all 
the intermediate organizations' names in this part of the UI too. See the 
recent TrustWave incident for motivation. If others agree, then I will file a 
bug about implementing a change to display the O= field from all CA 
certificates in the chain in this UI.

The second question is: Should we change the string in the display of the 
*root* certificate from "VeriSign, Inc." to "Norton"? My answer is no, because 
AFAICT this field should contain the legal name of the organization that owns 
the root certificate. In this case, it would be "Symantec Corporation" or 
"VeriSign, Inc." depending on the new corporate structure of VeriSign. If 
Symantec changes the legal name of this organization to "Norton" then this 
would be an acceptable and required change. (However, that is impossible, 
because US law requires that businesses include "Inc.", "Corporation", "LLC.", 
etc. in their legal name.)

The third question is: Should the UI replace the display of the O= field of 
*intermediate* certificates that chain to Symantec/VeriSign's roots with 
"Norton" when the value is "VeriSign, Inc."? My answer is no. See the recent 
TrustWave incident for motivation. It is important to display the information 
in the intermediate certificates exactly as we received it in the certificate. 
We have too many more important things to do. And, our users do not benefit 
from such a change. 

I am interested in hearing other peoples' thoughts on the matter.

Cheers,
Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Brian Smith
Eddy Nigg wrote:
 On 02/09/2012 12:18 AM, From Nelson B Bolyard:
 BTW, this proposal wouldn't be a problem if it would cover, lets say
 the top 500 sites and leave the rest to the CAs. There would be
 probably also the highest gains.

Effectively, we would be making the most popular servers on the internet 
faster, and giving them a significant competitive advantage over less popular 
servers. I am not sure this is compatible with Mozilla's positions on net 
neutrality and related issues.

AFAICT, improving the situation for the top 500 sites (only) would be the 
argument for *mandatory* OCSP stapling and against implementing Google's 
mechanism. The 500 biggest sites on the internet all have plenty of resources 
to figure out how to deploy OCSP stapling. The issue with OCSP stapling is the 
long tail of websites, that don't have dedicated teams of sysadmins to very 
carefully change the firewall rules to allow outbound connections from some 
servers (where previously they did not need to) and/or implement deploy DNS 
resolvers on their servers (where, previously, they might not have needed any), 
and/or upgrade and configure their web server to support OCSP stapling (which 
is a bleeding edge feature and/or not available, depending on the server 
product).

A better solution (than favoring the Alexa 500) may be to auto-load CRLs for 
the sub-CA that handles EV roots (assuming that CAs that do EV have or could 
create sub-CAs for EV roots for which there would be very few revocations, 
which may require standardizing some of the business-level decision making 
regarding when/why certificates can be revoked), or similar things. This would 
at least reduce the cost for the long tail of websites to a low* fixed yearly 
fee. I am not sure this would be completely realistic or sufficient though.

I am also concerned about the filtering based on reason codes. Is it realistic 
to expect that every site that has a key compromise to publicly state that 
fact? Isn't it pretty likely that after a server's EE certificate has been 
revoked, that people will tend to be less diligent about protecting the private 
key and/or asking for the cert to be revoked with a new reason code?

However, I don't think we should reject Google's improvement here because it 
isn't perfect. OCSP fetching is frankly a stupid idea, and AFAICT, we're all 
doing it mostly because everybody else is doing it and we don't want to look 
less secure. In the end, for anything serious, we have been relying (and 
continue to rely) on browser updates to *really* protect users from 
certificate-related problems. And, often we're making almost arbitrary 
decisions as to which breaches of which websites are worth issuing a browser 
update for. Google is just improving on that. Props to Adam, Ben, Wan-Teh, 
Ryan, and other people involved.

Cheers,
Brian

* Yes, I consider the price of even EV certificates to be almost 
inconsequential, compared to the overall (opportunity) cost of a person needed 
to securely set up and maintain even the most basic HTTPS server.
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Regarding PSM with external SSL library

2012-01-26 Thread Brian Smith
Ashok Subash wrote:
 Hi Brian,
 
 We have made some progress. We could statically build nss and link on
 our platform.

Do you mean statically link NSS into Firefox? If so, there are several gotchas 
that need to be taken into account. See Wan-Teh's patch at 
https://bugzilla.mozilla.org/show_bug.cgi?id=534471 which addresses some/all of 
them on Windows for *Chrome*. I imagine the issues are similar but not quite 
the same for Firefox and/or for other platforms.

 Is there any other porting points i've missed? Your
 inputs/suggestions will help us to solve this faster.

I wish I could be more helpful but it is really hard to tell the problem from 
the description given. Also, it is hard for me to diagnose problems with 
Firefox 3.6.x because I have *literally* never even checked out the source code 
for Firefox 3.6.x before. (I started at Mozilla during the development of 4.0.)

- Brian


Re: Removal of NSS and/or NSPR from the API exposed to addons

2012-01-19 Thread Brian Smith
Mike Hommey wrote:
 But linux users are not necessarily up-to-date with the latest NSS. I
 seriously doubt the number of users with the very last system nss
 exceeds 10% of the linux user base except in exceptional good
 timing cases (like when ubuntu is released with the latest version),
 but that doesn't last long).

If the system NSS isn't new enough, then Firefox's local version of NSS would 
be used. And, if that is at all complicated to implement, then we can just 
avoid trying to optimize how we load NSS on Linux. To be honest, you would 
likely be the one to implement any of these optimizations on Linux, if they 
are ever to happen at all. 

I am not intending to optimize NSS or rearrange it for code size on Linux *at 
all* because of these issues. For example, the idea of linking NSS into libxul 
*on Linux* was taken off the table a long time ago, because of these issues and 
others. Gecko (or Firefox and Thunderbird individually) would have its own 
special build configuration of NSS on Android, Windows, Mac, and B2G *only*, 
according to the current plan. The same build configuration we have now, which 
is the same build configuration that system NSS builds are done with (more or 
less), would be the build configuration used on Linux for the indefinite future.

AFAICT, any distro that ships its own builds of Firefox seems to configure 
Firefox to use system NSS and system NSPR, and that effectively means that 
those distros have to be on their toes with the latest NSS and NSPR releases 
available as installable packages whenever they release a new version of 
Firefox, since every version of Firefox going forward will require the very 
latest NSS and/or NSPR for the foreseeable future. If this doesn't work for 
them then they will have to stop configuring their Firefox packages to depend 
on system NSS and/or system NSPR packages.

- Brian


Review of changes to the HTTP spec

2012-01-19 Thread Brian Smith
HTTPbis seems to be in its final stages. Although it is supposed to be a 
somewhat minor revision, quite significant changes have been made to the spec. 
We should review the changes and make sure we provide our feedback before it is 
too late. In particular, if there is some change that we think we will not 
implement because we think the change is bad for whatever reason, we should 
push back on the change. That is probably the only useful feedback we could 
have this late in the game. 

Similarly, other HTTP working group work is at or nearing last call status, and 
we should review it.

Examples:
http://greenbytes.de/tech/webdav/#wg-httpbis
http://greenbytes.de/tech/webdav/#draft-reschke-http-status-308
http://greenbytes.de/tech/webdav/#draft-nottingham-http-new-status

This seems like it would be a significant amount of work. And, it probably 
can't be delayed too much. It might not be a good idea to delay the start of 
this review until after the Necko team workweek.

- Brian


Documentation of differences between our networking stack and other browsers' stacks and security bug triaging

2012-01-19 Thread Brian Smith
I think that we should start some documentation (e.g. a wiki) that documents 
the differences between our implementation and other browsers' implementations, 
along with a justification for the difference and/or a link to a bug about 
resolving the difference.

Examples of differences, off the top of my head: 
https://wiki.mozilla.org/Necko/Differences

There are obviously many, many more. I think it is important to document at 
least the most important differences (security-related issues and/or 
compatibility issues and/or significant performance differentiators) ahead of 
our team priorities meeting, because I think this kind of explicit TODO list 
will likely have some impact on allocation of resources for various projects on 
the team and/or will reset our expectations on particular goals. Accordingly, 
please help fill in the list. (This invitation is extended to 
non-Moco-employees too; anybody can edit that wiki page.)

I will probably end up filing a large number of sg:moderate bugs out of this 
list of differences, which will require much more than one quarter of work to 
clear. That is, it is probably the case that goals regarding reducing security 
bug counts of various severities to zero seem realistic now only because we are 
significantly undercounting security bugs. 

Enumerating the differences between implementations regarding network security 
features will help correct the undercounting, but we will also need to go back 
through existing bugs and ensure they have the correct rating--especially bugs 
that currently do not have any rating. Doing this triage will be a lot of work 
and we will have to come up with a plan as to how to do it. Definitely, I do 
not have time to do it all myself. So, it is a good idea for everybody on the 
team to become very familiar with the security bug severity rating system, and 
to rate any existing security-related bugs in Necko they know of ahead of 
the team work week. The rating guide is here:

   https://wiki.mozilla.org/Security_Severity_Ratings

If you have questions regarding the severity ratings, please ask on 
dev-security.

Also, for any security bug that was introduced by a patch you wrote, or that 
you reviewed (if the patch was written by someone who isn't a Necko peer or 
MoCo employee), please assign the bug to yourself. The main guidelines I am 
going to recommend regarding security bugs in our components are:

   * Make sure security bugs are assigned to a Necko peer or MoCo
 employee if they are expected to be fixed in the current quarter.

   * The people who introduced a vulnerability are responsible, by
 default, for fixing that vulnerability.

   * The set of existing security bugs to fix is to be more-or-less
 set and clearly enumerated at the start of the quarter.

Obviously, all of this is open for discussion. But, this is basically the 
foundation of what I will propose during the planned security bug meeting 
during the work week that I will be leading. 

Cheers,
Brian


Re: Removal of NSS and/or NSPR from the API exposed to addons

2012-01-19 Thread Brian Smith
Eitan Adler wrote:
 Brian Smith wrote:
  If the system NSS isn't new enough, then Firefox's local version of
  NSS would be used.
 
 From a packager point of view, please don't automagically detect
 these things. If the system NSS is supported provide an option
 --with-system-nss which if not set will use the bundled NSS.

I suggested this automagic mechanism only for when --use-system-nss is NOT 
used (i.e. basically for Mozilla-distributed Firefox only). --use-system-nss 
would force the use of system NSS. The idea behind it is to get 
Mozilla-distributed Firefox to integrate with other software on the system a 
little more sanely and a little more efficiently than it currently does, *if* 
(big IF) the distro has a new-enough NSS package.

- Brian


Re: Review of changes to the HTTP spec

2012-01-19 Thread Brian Smith
Thanks Wan-Teh.

- Original Message -
 From: Wan-Teh Chang w...@google.com
 To: mozilla's crypto code discussion list 
 dev-tech-crypto@lists.mozilla.org
 Sent: Thursday, January 19, 2012 9:57:29 AM
 Subject: Re: Review of changes to the HTTP spec
 
 On Thu, Jan 19, 2012 at 1:43 AM, Brian Smith bsm...@mozilla.com
 wrote:
  HTTPbis seems to be in its final stages. Although it is supposed to
  be a
  somewhat minor revision, quite significant changes have been made
  to
  the spec. We should review the changes and make sure we provide our
  feedback before it is too late. In particular, if there is some
  change that
  we think we will not implement because we think the change is bad
  for
  whatever reason, we should push back on the change. That is
  probably
  the only useful feedback we could have this late in the game.
 
 Brian,
 
 Did you mean to post this message to the dev-tech-network discussion
 group?
 
 Wan-Teh


Re: Removal of NSS and/or NSPR from the API exposed to addons

2012-01-18 Thread Brian Smith
Mike Hommey wrote:
 Please note that this is going to be a problem on systems that have
 system nspr and nss libraries that other system libraries use.

I am intending to avoid changing how NSS is linked on Linux, at least at the 
beginning. My priorities are Android and Windows first, then Mac.

In the long run, for performance reasons, we should probably prefer the system 
NSS libraries to our own, whenever the system NSS libraries are available and 
are the right version, because at least some of them are likely to already have 
been loaded into RAM by other applications. It seems like this may avoid the 
types of issues you are concerned about too.

- Brian


Re: Removal of NSS and/or NSPR from the API exposed to addons

2012-01-18 Thread Brian Smith
Benjamin Smedberg wrote:
 I have no particular opinion about whether this is a good idea for
 NSS. I do think that we should not do this for NSPR unless we
 decide to remove support for binary XPCOM components in Firefox
 (per the ongoing discussion in dev.planning). Many of our XPCOM
 code patterns assume/require use of NSPR, and I don't think it's
 the right time to try changing that.

NSPR is not nearly as big as NSS and we use a large percentage of NSPR 
functionality. So, this kind of dead code elimination for NSPR would not 
necessarily be a justifiable win considering potential compatibility pain, like 
it would be for NSS. And also, I don't know (yet) of any startup time, 
correctness, or security wins from doing this for NSPR, like I do know for NSS.

Even if we link NSS and/or NSPR into libxul, we can easily create forwarder 
DLLs with the old DLL names that forward calls to the retained functions in 
libxul, so that any binary components that link to NSS/NSPR and only use the 
NSS/NSPR functions we retain will work correctly.
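On Windows, such a forwarder DLL can be expressed declaratively with export forwarding in a module-definition (.def) file; each export names the real implementation in another DLL. A minimal sketch — the library and function names are illustrative, not taken from an actual Firefox build file:

```
; nss3.def — hypothetical shim DLL whose exports forward to libxul.
; The linker emits a tiny DLL containing only forwarder entries, so
; old binary components linking against nss3.dll resolve to xul.dll
; at load time.
LIBRARY nss3
EXPORTS
    NSS_Init            = xul.NSS_Init
    NSS_Shutdown        = xul.NSS_Shutdown
    PK11_GenerateRandom = xul.PK11_GenerateRandom
```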


  3. Help out with the DOMCrypt effort that ddahl is leading, which
  will create a W3C-standardized Javascript API for cryptography for
  *web* applications. I suspect it would be non-trivial, but
  possible, to expose the DOMCrypt API to extensions. I suspect that
  this would replace APIs from #1 and #2 above.
 I tend to think that extensions would get this basically for free
 because they have access to the DOM.

It is likely that DOMCrypt will base part of its key management system on a 
same-origin policy, and that origin-based key management policy is likely to be 
inappropriate for a bunch of applications. Also, I do not know how JetPack 
(Addon SDK) deals with any origin, and I bet there will need to be some 
new/different glue to get DOMCrypt to fit into the JetPack model.

- Brian


Re: libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-18 Thread Brian Smith
Sean Leonard wrote:
 The most glaring problem however is that when validation fails, such
 as in the case of a revoked certificate, the API returns no
 certificate chains 

My understanding is that when you are doing certificate path building, and you 
have to account for multiple possibilities at any point in the path, there is 
no partial chain that is better to return than any other one, so libpkix is 
better off not even trying to return a partial chain. The old code could return 
a partial chain somewhat sensibly because it only ever considered one possible 
cert (the best one, ha ha) at each point in the chain.
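The branching that makes a "best partial chain" ill-defined can be seen in a depth-first path-building sketch. This is a toy model (certs as plain dicts, no signature or validity checks), not NSS code:

```python
# Sketch: depth-first path building when several issuer certificates
# share the same subject name (e.g. cross-signed CAs).
def build_path(leaf, certs_by_subject, trust_anchors):
    """Return one leaf-to-anchor chain, or None if no path exists."""
    def dfs(cert, chain):
        if cert["issuer"] in trust_anchors:
            return chain
        # Several candidate issuers may exist; try each in turn.
        for issuer in certs_by_subject.get(cert["issuer"], []):
            if issuer in chain:        # avoid loops from cross-signing
                continue
            result = dfs(issuer, chain + [issuer])
            if result is not None:
                return result
        return None                    # dead end on this branch
    return dfs(leaf, [leaf])
```

When every branch dead-ends, the failed partial chains are all equally (in)valid, so there is no principled single partial chain to hand back to the caller.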

 and no log information.

Firefox has also been bitten by this and this is one of the things blocking the 
switch to libpkix as the default mechanism in Firefox. However, sometime soon I 
may just propose that we change to handle certificate overrides like Chrome 
does, in which case the log would become much less important for us. See bug 
699874 and the bugs that are referred to by that bug.

 The only output (in the revoked case) is
 SEC_ERROR_REVOKED_CERTIFICATE. This is extremely unhelpful because it
 is a material distinction to know that the EE cert was revoked,
 versus an intermediary or root CA.

Does libpkix return SEC_ERROR_REVOKED_CERTIFICATE in the case where an 
intermediate has been revoked? I would kind of expect that it would return 
whatever error it returns for "could not build a path to a trust anchor" 
instead, for the same reason I think it cannot return a partial chain.

 Such an error also masks other possible problems, such as whether
 a certificate has expired, lacks trust bits, or other information.

Hopefully, libpkix at least returns the most serious problem. Have you found 
this to be the case? I realize that "most serious" is a judgement call that may 
vary by application, but at least Firefox separates cert errors into two 
buckets: overridable (e.g. expiration, untrusted issuer) and 
too-bad-to-allow-user-override (e.g. revocation).
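That two-bucket model could be sketched as below. The error names are illustrative placeholders, not actual NSS error codes:

```python
# Sketch: classify a set of certificate errors. A connection's error
# set is overridable only if every error in it is overridable; any
# fatal error (e.g. revocation) wins.
OVERRIDABLE = {"EXPIRED_CERT", "UNTRUSTED_ISSUER", "DOMAIN_MISMATCH"}
FATAL = {"REVOKED_CERT"}

def classify(errors):
    if not errors:
        return "ok"
    if errors & FATAL:
        return "fatal"           # no user override allowed
    return "overridable"         # user may click through a warning
```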

 Per above, we never used non-blocking I/O from libpkix; we use it in
 blocking mode but call it on a worker thread. Non-blocking I/O never
 seemed to work when we tried it, and in general we felt that doing
 anything more than absolutely necessary on the main thread was a
 recipe for non-deterministic behavior.

This is also what Firefox and Chrome do internally, and this is why the 
non-blocking I/O feature is not seen as being necessary.

 The downside to blocking mode is that the API is one-shot: it is not
 possible to check on the progress of validation until it magically
 completes. When you have CRLs that are larger than 10 MB, this is an issue.
 However, this can be worked around (e.g., calling it twice: once for
 constructing a chain without revocation checking, and another time
 with revocation checking), and one-shot definitely simplifies the
 API for everyone.

As I mentioned in another thread, it may be the case that we have to completely 
change the way CRL, OCSP, and cert fetching is done in libpkix, or in 
libpkix-based applications anyway, for performance reasons. I have definitely 
been thinking about doing things in Gecko in a way that is similar to what you 
suggest above.

 We do not currently use HTTP or LDAP certificate stores with respect
 to libpkix/the functionality that is exposed by CERT_PKIXVerifyCert.
 That being said, it is conceivable that others could use this feature,
 and we could use it in the future. We have definitely seen LDAP URLs in
 certificates that we have to validate (for example), and although
 Firefox does not ship with the Mozilla Directory (LDAP) SDK,
 Thunderbird does. Therefore, we encourage the maintainers to leave it
 in. We can contribute some test LDAP services if that is necessary for
 real-world testing.

Definitely, I am concerned about how to test and maintain the LDAP code. And, I 
am not sure LDAP support is important for a modern web browser at least. Email 
clients may be a different story. One option may be to provide an option to 
CERT_PKIXVerifyCert to disable LDAP fetching but keep HTTP fetching enabled, to 
allow applications to minimize exposure to any possible LDAP-related exploits.

 Congruence or mostly-similar
 behavior with Thunderbird is also important, as it is awkward to
 explain to users why Penango provides materially different
 validation results from Thunderbird.

I expect that Thunderbird to change to use CERT_PKIXVerifyCert exclusively 
around the time that we make that change in Firefox, if not exactly at the same 
time.

 From our testing, libpkix/CERT_PKIXVerifyCert is pretty close to RFC
 5280 as it stands. It would be cheaper and more useful for the
 Internet community if the maintainers put the 5% more effort necessary
 to finish the job, than the 95% to break compliance. If this is
 something that you want to see to believe, I can try to compile some
 kind of a spreadsheet that illustrates how RFC 5280 stacks up with
 the current CERT_PKIXVerifyCert 

Re: Removal of NSS and/or NSPR from the API exposed to addons

2012-01-18 Thread Brian Smith
Mike Hommey wrote:
  In the long run, for performance reasons, we should probably prefer
  the system NSS libraries to our own, whenever the system NSS
  libraries are available and are the right version, because at
  least some of them are likely to already have been loaded into RAM
  by other applications. It seems like this may avoid the types of
  issues you are concerned about too.
 
 Except if we change the current trend, which is to use unreleased
 nspr/nss code in mozilla, there's no way this can be sustainable.

The system NSS libraries will no longer be the right version in that case.

We (NSS team) have agreed to make sure that Firefox *releases* will always be 
compatible with the latest NSPR and NSS release. Almost always, Firefox beta 
releases will have that property too. But, often -nightly and -aurora won't be 
compatible with the latest NSPR or NSS release, though they will usually be 
compatible with the NSPR and NSS CVS trunk. The current situation in 
-nightly and -aurora is exceptional.

- Brian


libpkix maintenance plan (was Re: What exactly are the benefits of libpkix over the old certificate path validation library?)

2012-01-12 Thread Brian Smith
We (me, Kai, Bob, Wan-Teh, Ryan, Elio) had a meeting today to discuss the 
issues raised in this thread. We came to the following conclusions:

Ryan seems to be a great addition to the team. Welcome, Ryan!

Gecko (Firefox and Thunderbird) will make the switch to libpkix. See Ryan's 
comments about his ideas for expanding Chromium's usage of libpkix.

We will reduce the complexity of libpkix in the following ways:

   * We will drop the idea of supporting non-NSS certificate 
 library APIs, and we will remove the abstraction layers
 over NSS's certhigh library. That means dropping the idea
 of using libpkix in OpenSSL or in any OS kernel, for
 example. Basically, much of the pkix_pl_nss layer can be
 removed and/or folded into the core libpkix layer or into
 certhigh, if doing so would be helpful.

   * We will drop support for non-blocking I/O from libpkix.
 It isn't working now, and we will remove the code that
 handles the non-blocking case as we fix bugs, to make 
 the code easier to maintain.

   * More generally, we will simplify the coding style to make
 it easier to read, understand, and maintain. This includes
 splitting large functions into smaller functions, removing
 unnecessary abstractions, removing simple getter/setter
 functions, potentially renaming internal (to libpkix)
 functions to make the code easier to read, removing
 non-PKCS#11 certificate stores (e.g. HTTP, LDAP), etc.
 (I think we agreed to remove LDAP support, but also agreed
 that it wasn't a high priority. This is a little unclear to
 me.)

We are not going to attempt any kind of "spring cleaning" sprint on libpkix. 
Basically, developers working on libpkix should feel free to do any of the 
above when it helps simplify the implementation of an important fix or 
enhancement to libpkix.

We will not consider complete RFC 5280 (et al.) support a priority. We will 
basically implement a subset of RFC 5280 (et al.), with an emphasis on features 
used in the existing PKITS tests, and with the primary emphasis on making 
existing real websites work securely and reliably. We will evaluate new RFC 
5280 features and/or new additions to PKITS critically and make cost/benefit 
and priority decisions on a feature-by-feature basis. Do not expect significant 
new RFC 5280 (et al.) functionality to be added to libpkix any time soon, even 
if that functionality is specified by some (old) RFC already, unless that 
functionality already has significant usage. If there is RFC 5280 (et al.) 
functionality in libpkix that goes beyond what PKITS tests, then we may even 
consider removing that functionality if it causes problems (e.g. security 
vulnerabilities) and a proper fix for that feature is too time consuming. (I 
don't think others are as eager to do this as I am, and it is difficult to 
determine whether a feature is actually being relied upon or not, so I 
consider this last thing to be somewhat unlikely and rare if it ever happens.)

We did not come up with a plan on how to end-of-life the old classic 
certificate path validation/building. It might be the case that certhigh is 
implemented in a way that enables us to easily make enhancements to it to improve 
libpkix-based processing without breaking the old classic API. I am a little 
skeptical that it will be easy to make improvements to certhigh to improve 
libpkix without having to do significant extra work to keep the classic API 
working.

In my opinion, it is a very good idea for applications to move to remove their 
dependencies on the classic API. Once Firefox is using libpkix exclusively, 
there will be little interest from Mozilla in fixing bugs in the classic 
library, and I got the idea that others feel similarly.

Let me know if there is anything I missed or am mistaken about.

Cheers,
Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Brian Smith
Jean-Marc Desperrier wrote:
 Brian Smith wrote:
  3. libpkix can enforce certificate policies (e.g. requiring EV
  policy OIDs). Can the non-libpkix validation?
 
 EV policies have been defined in a way that means they could be
 supported by code that handles an extremely tiny part of all that is
 possible with RFC 5280 certificate policies.

Right. How much of PKIX a client actually needs to implement is still an open 
question in my mind.

 They could even not be supported at all by NSS, and instead handled
 by a short bit of code inside PSM that inspects the certificate chain
 and extract the value of the OIDs. Given that the code above NSS needs
 anyway to have a list of EV OIDs/CA name hard coded (*if* I'm
 correct, I might be wrong on that one), it wouldn't change things that
 much actually.

AFAICT, it is important that you know the EV policy OID you are looking for 
during path building, because otherwise you might build a path that has a cert 
without the EV policy even when there is another possible path that uses certs 
that all have the policy OID.
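The point that the EV OID must constrain path *building*, not just be checked afterwards, can be sketched by filtering candidate issuers by policy. This is a toy model with illustrative names, not NSS code:

```python
# Sketch: EV-aware path building. Branches whose certificate lacks the
# required policy OID are pruned immediately, so a non-EV path is never
# chosen when an all-EV path exists.
def build_ev_path(leaf, certs_by_subject, trust_anchors, ev_oid):
    def dfs(cert, chain):
        if ev_oid not in cert["policies"]:
            return None              # this branch can never yield EV
        if cert["issuer"] in trust_anchors:
            return chain
        for issuer in certs_by_subject.get(cert["issuer"], []):
            if issuer in chain:      # avoid cross-signing loops
                continue
            result = dfs(issuer, chain + [issuer])
            if result is not None:
                return result
        return None
    return dfs(leaf, [leaf])
```

A post-hoc check on whatever chain a policy-blind builder happens to return would reject the non-EV chain even though a valid EV chain exists through a different intermediate.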

On the other hand, do we really need to do path building at all? It seems 
reasonable to me to require that sites that want EV treatment to return (in 
their TLS Certificates message) a pre-constructed path with the correct certs 
(all with the EV policy OID) to verify (sans root), which is what the TLS 
specification requires anyway. So, I would say that, AFAICT, practical EV 
support doesn't really require PKIX processing, though other things might.

- Brian

Re: Regarding PSM with external SSL library

2012-01-05 Thread Brian Smith
Ashok Subash wrote:
 We'll go with your suggestion of using NSS after size reduction for
 this project for our security requirements. But right now we cannot
 upgrade to latest firefox due to the current schedule and resources
 we have for this project. We will follow the guidelines listed in
 bug 611781 as well as your other suggestions in the mail. It will be
 great if you can support us if we hit a roadblock.

The best way to get such support is to ask questions and to post your 
patches in bugs in our Bugzilla database. Try to write patches in a way that is 
beneficial to the overall NSS and Gecko (Firefox) projects, so that we can 
incorporate those patches into the mainline Gecko and/or NSS source code. If 
you identify new ways to shrink NSS besides the ways listed in those bugs, then 
please file new bugs and document your findings in them (And please CC me in 
the bug report). It is likely that any such reductions in the size of NSS that 
you make for Firefox 3.6 will be applicable to Firefox 12+ as our usage of NSS 
hasn't changed much between 3.6 and 12. Whenever I get around to working on bug 
611781, the improvements I make will probably benefit your project as well 
(possibly requiring some small modifications.)

- Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Ryan Sleevi wrote:
 IIRC, libpkix is an RFC 3280 and RFC 4158 conforming implementation,
 while non-libpkix is not. That isn't to say the primitives don't exist -
 they do, and libpkix uses them - but that the non-libpkix path doesn't use
 them presently, and some may be non-trivial work to implement.

It would be helpful to get some links to some real-world servers that would 
require Firefox to do complex path building.

No conformant TLS server can require RFC 4158 path building. I would like to 
understand better how much of RFC 3280, 4158, and 5280 is actually required for 
an HTTPS client. (Non-TLS usage like S/MIME in Thunderbird is a separate 
issue.) After all, the TLS specifications are pretty clear that the server is 
*supposed* to provide the full path to the root in its Certificate message, so 
even the dumbest path building code will work with any TLS-conformant server. 
Then, for Firefox, all of the complexity of the libpkix path building is purely 
there to handle non-conformant servers.

AFAICT, we can split these non-conformant servers into two classes: 
misconfigured servers, and enterprise/government servers. It seems very likely 
to me that simpler-than-RFC4158 processing will work very well for 
misconfigured servers (maybe just "do AIA cert fetching" is enough?). But, how 
much of RFC3280/4158 do real-world TLS-non-conformant government/enterprise 
servers without AIA cert information in the certs use? (Knowing nothing about 
this topic, I wouldn't be surprised if just "do AIA cert fetching" works even 
for these cases.)
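The "just do AIA cert fetching" idea can be sketched as a small loop: when the server's chain doesn't reach a trust anchor, download the missing issuer from the AIA caIssuers URL embedded in the last known cert. Certs are modeled as plain dicts and `fetch(url)` stands in for an HTTP download; all names are illustrative:

```python
# Sketch: complete an incomplete server chain by chasing AIA caIssuers
# pointers. A fetch budget bounds the loop so a malicious or broken
# chain of pointers cannot make us download forever.
def complete_chain(server_chain, trust_anchors, fetch, max_fetches=5):
    chain = list(server_chain)
    for _ in range(max_fetches):
        top = chain[-1]
        if top["issuer"] in trust_anchors:
            return chain              # path reaches a trust anchor
        url = top.get("aia_ca_issuers")
        if url is None:
            return None               # misconfigured and no AIA pointer
        chain.append(fetch(url))      # download the claimed issuer cert
    return None
```

A real implementation would of course also verify each downloaded cert's signature before trusting the link it adds to the chain.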

 I find it much more predictable and reasonable than some of the
 non-3280 implementations - both non-libpkix and entirely non-NSS
 implementations (eg: OS X's Security.framework)

Thanks. This is very helpful to know.

 The problem that I fear is that once you start trying to go down the
 route of replacing libpkix, while still maintaining 3280 (or even
 better, 5280) compliance, in addition to some of the path building
 (/not/ verification) strategies of RFC 4158, you end up with a lot
 of 'big' and 'complex' code that can be a chore to maintain because
 PKI/PKIX is an inherently hairy and complicated beast.

 So what is the new value trying to be accomplished? As best I can
 tell, it seems focused around that libpkix is big, scary (macro-based
 error handling galore), and has bugs but only few people with
 expert/domain knowledge of the code to fix them? Does a new
 implementation solve that by much?

I am not thinking to convert any existing code into another conformant RFC 
3280/4158/5280 implementation. My goal is to make things work in Firefox. It 
seems like "conform to RFC 3280/4158/5280" isn't a sufficient condition, and I 
am curious whether it is even a necessary condition. If RFC 3280/4158/5280 is a 
necessary condition (again, for a *web browser* only, not for S/MIME and 
related things), then fixing existing problems with libpkix seems like the more 
reasonable path. My question is whether those RFCs actually describe what a web 
browser needs to do.

  As for #5, I don't think Firefox is going to be able to use
  libpkix's current OCSP/CRL fetching anyway, because libpkix's
  fetching is serialized and we will need to be able to fetch
  revocation for every cert in the chain in parallel in order
  to avoid regressing performance (too much) when we start
  fetching intermediate certificates' revocation information. I
  have an idea for how to do this without changing anything in NSS,
  doing all the OCSP/CRL fetching in Gecko instead.
 
 A word of caution - this is a very contentious area in the PKIX WG.

I am aware of all of that. But, I know some people don't want to turn on 
intermediate revocation fetching in Firefox at all (by default) because of the 
horrible performance regression it will induce. We can (and should) also 
improve our caching of revocation information to help mitigate that, but the 
fact is that there will be many important cases where fetching intermediate 
certs will cause a serious performance regression. There are other things we 
could do to avoid the performance regression instead of parallelizing the 
revocation status requests but they are also significant departures from the 
standards.

 While not opposed to exploring, I am trying to play the proverbial
 devil's advocate for security-sensitive code used by millions of
 users, especially for what sounds at first blush like a cut our
 losses proposal.

A few months ago, I had a discussion about Kai, where he asked me a question 
that he said Wan-Teh had asked him: are we committed to making libpkix work or 
not? This thread is the start of answering that question.

I am concerned that the libpkix code is hard to maintain and that there are 
very few people available to maintain it. If we have a group of people who are 
committed to making it work, then Mozilla relying on libpkix is probably 
workable. But, it is a little distressing that Google Chrome seems to avoid 
libpkix 

Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Gervase Markham wrote:
 On 04/01/12 00:59, Brian Smith wrote:
  5. libpkix has better AIA/CRL fetching: 5.a. libpkix can fetch
  revocation information for every cert in a chain. The non-libpkix
  validation cannot (right?). 5.b. libpkix can (in theory) fetch
  using
  LDAP in addition to HTTP. non-libpkix validation cannot.
 
 5b) is not a significant advantage; everything CABForum is doing
 requires HTTP access to revocation information, as many SSL clients
 don't have LDAP capabilities.

That is true for Firefox, but the LDAP code might be(come) useful for 
Thunderbird. I don't know how well tested it is or even if it works, though.

- Brian
-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Robert Relyea wrote:
 7. libpkix can actually fetch CRL's on the fly. The old code can only
 use CRL's that have been manually downloaded. We have hacks in PSM to
 periodically load CRL's, which work for certain enterprises, but not
 with the internet.

I am not too concerned with the fetching stuff. Fetching is not a hard problem 
to solve other ways, AFAICT.

 OCSP responses are cached, so OCSP fetching on common intermediates
 should not be a significant performance hit. Chrome is using this
 feature (we know because we've had some intermediates in were
 revoked).
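The caching behavior described above could be modeled, very roughly, like this (a toy Python sketch, not NSS's actual cache; the cert IDs and timestamps are made up):

```python
import time

class OCSPResponseCache:
    """Toy model: cache an OCSP status until its nextUpdate time, so
    repeated validations of a common intermediate need no refetch."""

    def __init__(self):
        self._cache = {}  # cert id -> (status, expiry timestamp)

    def get(self, cert_id, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(cert_id)
        if entry and entry[1] > now:
            return entry[0]  # fresh cached response
        return None          # missing or expired -> caller must refetch

    def put(self, cert_id, status, next_update):
        self._cache[cert_id] = (status, next_update)

cache = OCSPResponseCache()
cache.put("intermediate-CA", "good", next_update=1000.0)
print(cache.get("intermediate-CA", now=500.0))   # fresh
print(cache.get("intermediate-CA", now=2000.0))  # expired
```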

When I browse with libpkix enabled (which also enables the intermediate 
fetching), connecting to HTTPS websites (like mail.mozilla.com).

Also, Chrome only uses libpkix on Linux, right?

Like I said in my other message, my main concern is that libpkix is huge and we 
don't have a lot of people lined up to maintain it or even understand it.

Ryan's comments are encouraging though.

- Brian


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-04 Thread Brian Smith
Brian Smith wrote:
 Robert Relyea wrote:
 When I browse with libpkix enabled (which also enables the
 intermediate fetching), connecting to HTTPS websites (like
 mail.mozilla.com)

... is much slower, at least when the browser starts up. We may be able to fix 
this with persistent caching of intermediates but it is still going to be slow 
the first time you go somewhere that uses a new intermediate--including the 
first time you browse to any HTTPS website after installing Firefox, which is 
critical, because users start judging us at that point, not after we've filled 
and warmed up our various caches.

- Brian


Re: Developing pkcs11 module for Firefox

2012-01-04 Thread Brian Smith
Robert Relyea wrote:
 On 01/04/2012 09:04 AM, Anders Rundgren wrote:
  There is a capi module in the NSS source tree, but it purposefully
  does not surface removable CAPI modules under the assumption that
  such devices already have PKCS #11 modules.

While it may be true that they have PKCS#11 modules, the user probably does not 
have the PKCS#11 module installed, but they probably have the CAPI module 
installed. The idea motivating the consideration of supporting CAPI is to have 
a zero configuration experience for switching from other browsers (especially 
IE) to Firefox. The possibility of plug-and-play smartcards in Windows 7 pushes 
us more towards CAPI support on Windows.

I now have five smartcard tokens (for accessing my new Chinese bank accounts) 
and they all have CAPI modules installed but only one has a PKCS#11 module even 
available for me to install into Firefox.

 I was primarily trying to avoid a loop. The CAPI drivers we use are
 CAPI to PKCS #11. The configurations I was running with had the
 PKCS #11 module installed in NSS and the CAPI to PKCS #11 module
 installed in capi.

Interesting. I did not know that. Unfortunately, I doubt there would be an easy 
way to automatically locate the PKCS#11 module given the CAPI module.

I am curious as to how smartcard management is supposed to work for Linux. It 
seems to me that it would be ideal for Firefox to support the shared DB on 
Linux. Are there OS-level tools for managing the shared DB. For example, is 
there an OS-level UI for adding/removing PKCS#11 modules in Fedora/RHEL that 
would make Firefox's UI for this redundant?
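For what it's worth, NSS's own command-line tool can manage PKCS#11 modules in the sqlite-backed shared DB on Linux; something like the following should work (paths and module names here are illustrative, not a specific recommendation):

```shell
# List the PKCS#11 modules registered in the user's shared NSS DB.
modutil -list -dbdir sql:$HOME/.pki/nssdb

# Register a smartcard driver (example path; varies by distribution).
modutil -add "My Smartcard" -libfile /usr/lib/opensc-pkcs11.so \
        -dbdir sql:$HOME/.pki/nssdb
```

But that is a developer tool, not the kind of OS-level UI asked about above.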

- Brian

