Re: Chrome: From NSS to OpenSSL

2014-04-08 Thread Jean-Marc Desperrier

Ryan Sleevi wrote:

That was an interesting rant, thanks.


reliance on PKCS#11 means that there are non-trivial overheads when
doing something as simple as hashing with SHA-1. For something that is
such a simple transformation, multiple locks must be acquired and the
entire NSS internals may *block* if using NSS on multiple threads, in
order to prevent any issues with PKCS#11's threading design.


I don't believe that PKCS#11's threading design mandates that. 
Implementations can easily have that problem, and NSS sure does, but I 
think it would be possible to design a PKCS#11 implementation that lets 
you do hashing without requiring locks.
Or maybe it's more that PKCS#11's session management rules make the locks 
hard to avoid.
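
To make the point concrete, here is a minimal sketch (mine, not NSS code) 
of hashing through raw PKCS#11 with a per-thread session. It assumes the 
module's CK_FUNCTION_LIST has already been obtained via C_GetFunctionList 
and that C_Initialize was called with CKF_OS_LOCKING_OK; the platform 
CK_* macros normally needed before including pkcs11.h are omitted. 
Nothing in the spec forces two such sessions to contend on one global 
lock; whether they do is up to the implementation.

/* Per-thread SHA-1 digest through raw PKCS#11 (sketch). */
#include <string.h>
#include "pkcs11.h"

CK_RV sha1_digest(CK_FUNCTION_LIST_PTR p11, CK_SLOT_ID slot,
                  const unsigned char *data, CK_ULONG len,
                  unsigned char out[20])
{
    CK_SESSION_HANDLE session;
    CK_MECHANISM mech = { CKM_SHA_1, NULL, 0 };
    CK_ULONG out_len = 20;
    CK_RV rv;

    /* Each thread opens (or caches) its own session. */
    rv = p11->C_OpenSession(slot, CKF_SERIAL_SESSION, NULL, NULL, &session);
    if (rv != CKR_OK)
        return rv;

    rv = p11->C_DigestInit(session, &mech);
    if (rv == CKR_OK)
        rv = p11->C_Digest(session, (CK_BYTE_PTR)data, len, out, &out_len);

    p11->C_CloseSession(session);
    return rv;
}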



[...]  I'm surprised to hear anyone who has
worked at length with PKCS#11 - like Oracle has (and Sun before) - would
be particularly praising it.

It's good for interop with smart cards. That's about it.


More or less. Already with HSMs there are some serious performance 
concerns, and that's probably quite related to what you describe.
And for smart cards, there's the fact that it's a particularly error-prone 
interface. All the PKCS#11 code I know of that has been tested against 
many cards/HSMs carries a very large number of quirks to work around 
various failures or buggy behaviors.



Re: ECDSA support in Thunderbird

2013-03-07 Thread Jean-Marc Desperrier

Robert Relyea wrote:

- Original Message -

On Tue, 2013-02-26 at 17:05 -0500, Robert Relyea wrote:

 http://pki.fedoraproject.org/wiki/ECC_Capable_NSS


Isn't it about time Red Hat started shipping non-crippled versions?

RFC 6090 is two years old now...


It's never been a technical issue, and that's pretty much all I can say 
about the issue :(


Isn't it about time Red Hat read the W3C Security Patent Advisory Group's 
conclusions about Certicom's claims on the Elliptic Curve DSA & DH 
algorithms?

http://www.w3.org/2011/xmlsec-pag/pagreport.html

Certicom is a member of the W3C. Their membership made it mandatory, in 
the context of the PAG, to fully disclose all the IP they owned that was 
relevant to implementing Elliptic Curve DSA in the XML Security 
standard (but not being a member of the XML Security WG made it 
non-mandatory for them to provide a compliant license, see 
http://lists.w3.org/Archives/Public/public-xmlsec-comments/2011Jan/.html 
)


The caveat is however that the conclusions of the PAG (If you base 
yourself on RFC 6090, *the lawyers* say you're safe from Certicom's IP) 
don't necessarily apply to the use of elliptic curves outside of the 
specific algorithms used by XML Security.


Which means, nothing outside of:
- ECDSA as described in 
http://www.w3.org/2008/xmlsec/Drafts/xmldsig-core-20/#sec-ECDSA
- ECDH and ECDH key agreement as described in 
http://www.w3.org/TR/xmlenc-core1/#sec-ECCKeyValue



Re: VISA drops the password and replaces it with - NOTHING

2012-08-02 Thread Jean-Marc Desperrier

Anders Rundgren wrote:

http://www.finextra.com/news/announcement.aspx?pressreleaseid=45624

Current platforms are useless for banking so what else could they do?


What role does the password serve here, except forcing me to create an 
unneeded account with every merchant I decide to use?



Re: Google about to fix the CRL download mechanism in Chrome

2012-02-22 Thread Jean-Marc Desperrier

Erwann Abalea wrote:

if Google could come up with an efficient mechanism so that
revocation is really checked, that's cool. The less than 100k is a
challenge, I'd like to see how it will be solved


All the more since all those random serial numbers can't be compressed.

I wonder if he wasn't misinterpreted, and actually meant below 100k *per 
day*. With differential updates, that's a lot more doable.



Re: Google about to fix the CRL download mechanism in Chrome

2012-02-22 Thread Jean-Marc Desperrier

Erwann Abalea wrote:

Who will come with a 12-dan black bar UI?


That's a joke about the fact that it goes full circle at 12-dan and we're 
back to a white belt, right? But double-width, so you *can* tell the 
difference from the normal white bar ;-)



Google about to fix the CRL download mechanism in Chrome

2012-02-08 Thread Jean-Marc Desperrier

Hi,

Google just published the changes they are about to make to revocation 
checking in Chrome:

http://www.imperialviolet.org/2012/02/05/crlsets.html

In my opinion, maybe somewhat contrary to the way they describe it, they 
are fundamentally not changing the standard PKI revocation-checking 
method *at all*.


They are instead just fixing a number of flaws in the way CRL revocation 
information is fetched by the browser, implementing a new CRL fetching 
method that *works* to replace the current *broken* one.


To work properly, CRL fetching must be done before the site is accessed. 
This never worked properly when the list of CRLs to download had to be 
determined individually and locally.
Therefore centrally establishing the list of public CRLs to download, and 
pushing the result to browsers, *is* the proper solution.


The other trouble with CRLs is that in practice the only available option 
is to download complete CRLs, which include every revocation reason, 
resulting in awful bandwidth requirements.


The optimal solution would instead be to download a delta CRL each day, 
containing only the difference from the previous day and only the 
revocation reasons you *really* care about (key compromise).
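
To illustrate what that daily, reason-filtered delta amounts to, here is 
a small sketch in C. The types and hex-string serials are made up for the 
example, and this is of course not Google's actual CRLSet format; it only 
shows the reason filtering and the day-to-day set difference.

/* Sketch of a "daily delta, key-compromise only" list, with made-up types. */
#include <stdio.h>
#include <string.h>

#define KEY_COMPROMISE 1            /* CRLReason keyCompromise */

typedef struct {
    char serial[41];                /* serial number as a hex string */
    int  reason;                    /* CRLReason code */
} RevokedEntry;

/* Emit entries revoked for key compromise today that were absent from
 * yesterday's list: that is all a client needs to fetch each day. */
static void emit_daily_delta(const RevokedEntry *today, size_t n_today,
                             const RevokedEntry *yesterday, size_t n_yest)
{
    for (size_t i = 0; i < n_today; i++) {
        if (today[i].reason != KEY_COMPROMISE)
            continue;               /* drop reasons nobody acts on */
        int seen = 0;
        for (size_t j = 0; j < n_yest; j++) {
            if (strcmp(today[i].serial, yesterday[j].serial) == 0) {
                seen = 1;
                break;
            }
        }
        if (!seen)
            printf("+%s\n", today[i].serial);  /* new revocation since yesterday */
    }
}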


By centrally converting the CRLs to a proprietary optimized format that 
contains only that, they can do it without implementing in the browser 
the complex delta-CRL and by-reason CRL splitting mechanisms that 
theoretically exist but that nobody ever got right (and nobody will, 
since getting it right also depends on every CA getting it right, and 
their solutions just *don't*).


The cross-signing this solution requires (replacing the original CRL 
signature with a new signature/integrity layer) is certainly not a 
problem; it just has to be done right, which is not difficult when you 
already have a secure software-update distribution channel.


In conclusion, I'm 100% in favor of Mozilla adopting this solution 
instead of trying to invent new schemes, which are very hard to get 
right: most people spend a lot of time on them only to realize at the 
end that doing things differently usually just means re-weighting, very 
slightly, the trade-offs between all the possible parameters of a 
security solution, ending up not much better than the original, even 
though they were initially convinced the original was very broken.


I hope I have convinced you that Google's solution is not new at all, 
which is great. If it's not actually new, it's much easier to be 
convinced that it's a pure *enhancement* of, not a change to, the 
current solution, so there's no significant drawback, and no initially 
non-obvious danger, in adopting it.


PS: I probably won't be online much in the next week and a half; I just 
had to post this before then :-)



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Jean-Marc Desperrier

Robert Relyea wrote:

7. libpkix can actually fetch CRL's on the fly. The old code can only
use CRL's that have been manually downloaded. We have hacks in PSM to
periodically load CRL's, which work for certain enterprises, but not
with the internet.


PSM's periodic CRL download is certainly quite broken, but OTOH on-the-fly 
CRL fetching certainly won't work on the Internet either, given the 
delay it induces.



I'm ok if someone wanted to rework the libpkix code itself, but trying
to shoehorn in the libpkix features into the old cert processing code is
the longer path to getting to something stable. Note that the decision
to move away from the old code was made by those who knew it best.


Probably quite true, but the question of why libpkix is so big remains; 
it's very unlikely that it brings value proportionate to its size.


In the best of worlds, I'd vote for completely reworking it.



Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Jean-Marc Desperrier

Brian Smith wrote:

3. libpkix can enforce certificate policies (e.g. requiring EV policy
OIDs). Can the non-libpkix validation?


EV policies have been defined in a way that means they could be supported 
by code that handles only an extremely tiny part of what's possible 
with RFC 5280 certificate policies.


They could even not be supported by NSS at all, and instead be handled by 
a short bit of code inside PSM that inspects the certificate chain and 
extracts the policy OID values. Given that the code above NSS needs a 
hard-coded list of EV OIDs/CA names anyway (*if* I'm correct; I might be 
wrong on that one), it wouldn't actually change things that much.
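
As a very rough illustration of how little is needed (my sketch, not PSM 
code): scan the DER bytes of the end-entity certificate's 
certificatePolicies extension for the DER encoding of the EV policy OID 
registered for that root. A real implementation would parse the extension 
properly rather than byte-scan it.

/* Naive "is the EV policy OID present" check over a raw extension value. */
#include <stddef.h>
#include <string.h>

static int contains_ev_oid(const unsigned char *ext_der, size_t ext_len,
                           const unsigned char *ev_oid_der, size_t oid_len)
{
    if (oid_len == 0 || ext_len < oid_len)
        return 0;
    for (size_t i = 0; i + oid_len <= ext_len; i++) {
        if (memcmp(ext_der + i, ev_oid_der, oid_len) == 0)
            return 1;               /* the CA's EV policy OID is present */
    }
    return 0;
}

/* Caller: per root, keep the DER encoding of its EV policy OID (the
 * hard-coded EV OID/CA list mentioned above) and run this check on the
 * certificatePolicies extension of the end-entity certificate. */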


Re: What exactly are the benefits of libpkix over the old certificate path validation library?

2012-01-05 Thread Jean-Marc Desperrier

Robert Relyea wrote:

On 01/04/2012 05:56 PM, Brian Smith wrote:

  Robert Relyea wrote:

  On 01/04/2012 04:18 PM, Brian Smith wrote:
  In the cases where you fetch the intermediates, the old code will not
  work!


[...] I'm talking about
fetching intermediates themselves because they weren't included in the
chain. I thought that is what you were talking about. That was certainly
what I was talking about.


Well, as Rob noted, that's *very* surprising because the standard code 
will *not* work in that case, so you're talking about a case that's 
broken in the non-libpkix world and should therefore be rare.

And it is not the case where performance is the main concern.

Re: HTML KEYGEN element not working with ECC keys

2011-11-29 Thread Jean-Marc Desperrier

Scott Thomas wrote:

<keygen name="spkac" keytype="EC" keyparams="secp384r1"/>
but the keys are not generated.

I have checked that ECC support was removed from Mozilla; can anybody 
confirm it or tell me how to enable it?
https://bugzilla.mozilla.org/show_bug.cgi?id=367577
Ideas / thoughts ??


Well, as you've seen in the bug, it's all about legal considerations.

As I reported in it, the Certicom IETF contribution was extended in 2008 
to CMS (and SSH), which means ECC usage for CMS signing in Thunderbird 
is safe. However, key generation might very well be another matter, 
especially since Certicom explicitly excludes CAs from the royalty-free 
usage.


Brian reported there that this should be investigated by the legal 
group, but I'm worried that nothing has happened since then.



Re: DOMCrypt API developments

2011-06-17 Thread Jean-Marc Desperrier

David Dahl wrote:

I find this API effort very interesting, however I'm left with the
  feeling you wish to leave out the use of PKI elements.
  A really neutral API would work both with and without PKI.

Public Key crypto is actually the main use case of this API.


I meant more certificate/X.509-based PKI, and I've reached the conclusion 
that it's very certainly *not* the main use case of this API.
The webcrypto-api proposal is oriented around certificate/X.509/smartcard 
PKI; I end up with the feeling the two proposals live in different realms.


I think the most important part for understanding the context of your 
proposal is the phrase in the notes that says:
"Each origin will only have access to the asymmetric private key 
generated for it."
So this is a low-level API that allows a web application to do anything, 
useful or stupid, with the keys it generates, based on the fact that it 
will only access its own keys; if it does something stupid, it will not 
affect any other application.


You should modify the proposal to say this first, so that people get the 
context easily.


One open question is what the security level of the private keys should 
be. Is it allowed to create them for an HTTP origin?
For an HTTPS origin, what happens if the user clicks yes to connect to an 
invalid server: does it get access to data already created for the 
origin it claims to be but that isn't validated? While we're at it, what 
happens when the server certificate changes?
One solution is to keep it simple and have exactly the same level of 
security as local storage, in effect storing the private keys inside 
local storage (explicitly or implicitly).


This context also means that applications need to implement a lot of 
things above this low-level API to do something significant. For example, 
in the API the keys have no identifier, no context of use (signature, 
encryption), no time limit, etc.


Given all that, I wonder whether an API that gives direct access to a 
low-level, optimised bignum API (plus a random source) wouldn't be a 
simple, bare-metal but effective alternative? (Knowing that this approach 
*can't* be applied to smart cards, where you *need* the people allowed to 
access the smart card not to do something stupid.)


So, about the content of the API itself:
- why does the CryptoKeyPair dictionary associate a salt and an iv with 
the key pair?
- actually, I don't see CryptoKeyPair and CryptoConfiguration used 
anywhere
- why doesn't the getPublicKey function have any argument? Is the 
callback called once for every public key that is available?
If that's how it works, then the caller needs to associate each 
DOMString pubKey with an identity, and store that somewhere separately. 
As each pubKey is unique, it should work, but it sounds a bit painful to 
have nothing in the API to help with this.

- I think it'd be better if createHash had a streamable interface, and 
probably also encrypt and decrypt (see the sketch after this list).
- In effect PKCryptoMessage should be opaque (even if it's useful to see 
that the process works, given what PKCryptoMessage actually contains).
- HMAC uses symmetric keys, in the usual definition, so I'd expect a 
symmetric key argument to createHMAC. Then I'd expect some key argument 
to verifyHMAC, or a return value of createHMAC that's more complex than 
a simple DOMString, in order to store what's needed for verification. 
The interface looks simply broken in its current state.
- How do users of the API know whether an algorithm is accepted, and 
which ones are available?
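
On the streamable-hash point above, here is a minimal sketch of the 
init/update/final shape such a createHash could map to at the native 
level, using NSS's PK11 digest context. This is just my illustration, not 
part of the DOMCrypt proposal, and it assumes NSS has already been 
initialised (e.g. with NSS_NoDB_Init(NULL)).

/* Streamable SHA-256 with NSS's PK11 digest context (sketch). */
#include <nss.h>
#include <pk11pub.h>
#include <secoidt.h>

static SECStatus hash_in_chunks(const unsigned char *chunks[],
                                const unsigned int lens[], int n,
                                unsigned char out[32], unsigned int *out_len)
{
    PK11Context *ctx = PK11_CreateDigestContext(SEC_OID_SHA256);
    if (!ctx)
        return SECFailure;

    SECStatus rv = PK11_DigestBegin(ctx);
    for (int i = 0; rv == SECSuccess && i < n; i++)
        rv = PK11_DigestOp(ctx, chunks[i], lens[i]);   /* update: one chunk at a time */
    if (rv == SECSuccess)
        rv = PK11_DigestFinal(ctx, out, out_len, 32);  /* final: produce the digest */

    PK11_DestroyContext(ctx, PR_TRUE);
    return rv;
}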



Re: DOMCrypt API developments

2011-06-14 Thread Jean-Marc Desperrier

David Dahl wrote:

 From: L. David Barondba...@dbaron.org
 On Monday 2011-06-13 15:31 -0700, David Dahl wrote:

 In trying to get the word out about a browser crypto API I am
 championing (see:
 https://wiki.mozilla.org/Privacy/Features/DOMCryptAPISpec/Latest
 ), I wanted to post here for feedback and criticism.


 Hmm, I may as well respond here, though I'm a little concerned about
 how many places the feedback on this is going.


I am trying to communicate this to as many interested parties as
possible. I also don't want to force anyone to join another list to
offer feedback. I am not planning to post this to any further mailing
lists.


However you did not post this to the obvious choice of 
mozilla.dev.tech.crypto.


I find this API effort very interesting, however I'm left with the 
feeling you wish to leave out the use of PKI elements.

A really neutral API would work both with and without PKI.


Re: Announcing an experimental public S/MIME keyserver

2011-06-10 Thread Jean-Marc Desperrier

Kai Engert wrote:

I'm thinking the following could solve the problem


Please help me: which problem is it, that you want to solve, that isn't
yet solved by the current implementation?


Ease of use, and understandability of the process for the average user.

The average user fills in a form, and that's all (OK, he also has to fill 
in the captcha, and we're at the very limit of what's acceptable for him 
there). He doesn't enter his email, doesn't have to receive an email 
first, doesn't need to go to his inbox and click on it, etc.
The email-redirection thing works, but we just can't sell it. It's not 
a product, it's a lab prototype.



Re: Announcing an experimental public S/MIME keyserver

2011-06-08 Thread Jean-Marc Desperrier

Kai Engert wrote:

  Another short note: The problem with solely distributing the S/MIME
  certs is that a MUA does not have the S/MIME capabilities of the cert
  owner's MUA. So the sender MUA might choose a weak symmetric cipher.
  ...
  So the safest way is still to send a signed e-mail for cert exchange.
:-/

This seems to be solved with my implementation, because my keyserver can
forward the original signed message.


But it's not really a great solution.

I'm thinking the following could solve the problem if done by the 
receiving software (Thunderbird/SeaMonkey):
- allow the mime-type application/x-x509-email-cert to be in PKCS#7/CMS 
format (this is actually already allowed)
- check whether the PKCS#7 received in this way actually contains a 
cryptographically valid signature (without testing the cert chain, just 
testing that the signature value was produced by the signature 
certificate)
- if the signature is cryptographically correct, then, in addition to 
the signer's certificate, import the content of the sMIMECapabilities 
attribute of the PKCS#7 if present
- in the verification of the PKCS#7, do not verify the actual signed 
content (so if it is a detached PKCS#7, don't return an error because 
you don't have access to the actual signed data, and if it's an opaque 
PKCS#7, don't verify it either, which allows the content to be removed 
to make the PKCS#7 smaller)



Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev

2011-05-27 Thread Jean-Marc Desperrier

On 18/05/2011 19:25, Brian Smith wrote:

No, he meant dev.security


I could have been more explicit.


and he cross-posted and set the follow-up
header on his message to point to that newsgroup. I agree that if
there's any discussion, it can/should happen there.


But my message ended up with an incorrect reply-to header; I don't know 
why, as I'm quite sure I didn't set it. This mail-news gateway is broken 
in a number of ways (not least the Message-IDs, which are not guaranteed 
to be the same on the mailing list and in the newsgroups).



Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev

2011-05-18 Thread Jean-Marc Desperrier

Brian Smith wrote:

See https://twitter.com/#!/scarybeasts/status/69138114794360832:
Chrome 13 dev channel now blocks certain types of mixed content by
default (script, CSS, plug-ins). Let me know of any significant
breakages.

See
https://ie.microsoft.com/testdrive/browser/mixedcontent/assets/woodgrove.htm
IE9: http://tinypic.com/view.php?pic=11qlnhy&s=7
Chrome: http://tinypic.com/view.php?pic=oa4v3n&s=7

IE9 blocks all mixed content by default, and allows the user to
reload the page with the mixed content by pushing a button on its
doorhanger (at the bottom of the window in IE).

Notice that Chrome shows the scary crossed-out HTTPS in the address
bar.


This is actually much more a subject for the .security group, Brian.


Re: Policy Update Discussion: Third-Party SubCAs

2011-04-28 Thread Jean-Marc Desperrier

Robert Relyea wrote:

One interesting
historical note is the final solution was based on a suggestion of one
Jean-Marc Desperrier;).


Well, when rereading that bug to check it all, I mistakenly thought that 
NSS 3.9 was the first version with libpkix and that the change only 
applied to libpkix.


BTW, isn't there a page somewhere with the correspondence between NSS and 
Firefox versions? I believe there is one, but I can't find it again.



Re: Certificate Problem in FF 4

2011-04-09 Thread Jean-Marc Desperrier

On 08/04/2011 19:31, Jay Garcia wrote:

Now let's see what turns up.


At this point, I can not reproduce the problem.

https://www.ausnetservers.net.au/webmail (as well as the others) 
forwards to vps-serv-1.ausnetservers.net.au, which times out.


However this happens after I've added the exception to accept the 
crm.ausnetservers.net.au cert as valid for www.ausnetservers.net, and I 
can check in my configuration that the exception is active.


And after that, I still see crm.ausnetservers.net.au as a valid site 
with no verification problem, so can't repro.


As Honza and WTC have noted, crm.ausnetservers.net.au is badly 
configured: it does not send the correct certificate chain. However, 
modern versions of NSS memorize intermediate certificates, so if Firefox 
has already loaded a page with a certificate issued by RapidSSL, this 
will work nonetheless. This means that if crm.ausnetservers.net.au is 
seen correctly the first time, it should also be seen correctly the 
second time, so what is described in the report, if reproducible, would 
be a bug even if the config is wrong.
But I can't repro, though I don't know whether the time-out on 
vps-serv-1.ausnetservers.net.au is a reason for that (it shouldn't be, 
but who knows).



Re: Certificate Problem in FF 4

2011-04-08 Thread Jean-Marc Desperrier
This should be on crypto, not security; transferring. I have a hard 
time testing it fully because of time-outs on vps-serv-1.ausnetservers.net


But the problem seems to be:
- With Firefox 4, adding an exception for a cert on domain X prevents 
Firefox from continuing to accept this cert as valid on domain Y, with an 
error saying that the issuer of the cert is unknown


With the repro procedure of :
- go to https://crm.ausnetservers.net.au
- Should work OK, chains to RapidSSL/GeoTrust
- go to https://www.ausnetservers.net.au/webmail
- add an exception for https://www.ausnetservers.net.au/webmail
(the certificate sent is actually the certificate of 
crm.ausnetservers.net.au )

- go back to https://crm.ausnetservers.net.au
- will now be broken, no issuer found

Jay Garcia wrote:

This from one of the posters to our Mozilla Contribute List/Forum:



Hi,

I was 110% sure that there was an issue with the new Firefox 4, and I was 
right!

The issue only occurs when you add an SSL certificate that is self-signed 
on a website.

First off, if we have never been to the site we DO NOT have an issue, and 
if you have never added an exception for a self-signed certificate while 
using Firefox 4 it WILL NOT have the issue.

I have been testing over the last few days and this is my conclusion; it 
backs up what my clients are saying and also answers why my staff always 
get the same issue.

if you go to:

https://www.ausnetservers.net.au/webmail
https://www.ausnetservers.net.au/cpanel
https://www.ausnetservers.net.au/whm

and you accept the certificate because it's self-signed and you continue. 
Keep in mind that before doing this you have gone to 
https://crm.ausnetservers.net.au and not had an issue.

Anyway, you add the exception and you go on. Some time afterwards, or 
straight away in my case, I go back to https://crm.ausnetservers.net.au 
and I get the same error I got last time.

 crm.ausnetservers.net.au uses an invalid security certificate.

 The certificate is not trusted because no issuer chain was provided.

 (Error code: sec_error_unknown_issuer)

However, I do not get this error if I have never added an exception for a 
self-signed website on my domain, for which I have a valid, paid-for SSL 
certificate.

The reason why it worked for you is that you had never added a self-signed 
certificate that was on the ausnetservers.net.au website.

So yes, there is a flaw in FF 4; why, I don't know. Why it doesn't affect 
the older versions or IE, I have no idea.

Please let me know your findings

==





Re: TLS-SRP (was Re: J-PAKE in NSS)

2011-03-09 Thread Jean-Marc Desperrier

Brian Smith wrote:

An augmented PAKE user authentication protocol might be very useful
for some things, but TLS-SRP seems very troublesome. IIRC, there are at
least four deal-breaking problems with TLS-SRP as a substitute for PKI:


I don't see it as a substitute for PKI, only as a substitute for 
user/password. And my point from the start was not really that NSS must 
implement an EKE protocol, but that if there were one, it should be SRP 
rather than J-PAKE.


BTW, about the patent situation, I did my research, and the conclusion is 
that there are various patents covering everything EKE, but SRP has the 
advantage over most other protocols, including J-PAKE, that Stanford 
owns a patent on it (licensed free of charge for any usage) whose claims 
apparently do not overlap with other existing patents, so anyone who 
wants to claim rights on it would first have to show that Stanford 
shouldn't have been granted this patent. Not that this never happens 
(cf. Microsoft/Lucent/Fraunhofer), but it's still much less likely than 
just hoping nobody will ever claim rights on the format you're using.



1. The user's username is sent in the clear. The user's username should be 
protected.


You mean for privacy reasons, not as a way to limit brute force attacks ?

Although I don't see SRP as a replacement for PKI, I'm tempted to say 
that the equivalent of the username in PKI is the certificate, and that 
the certificate is not protected at all. In an SSL session with client 
authentication, the client certificate is sent in the clear (except in 
the case of IIS, which opens a non-authenticated SSL session and 
renegotiates it at the point where it needs user authentication).



2. The strength of the authentication of the website to the user is
a  function of the strength of that user's password; that is, a user with a
weak password will have a very weak assurance of the server's identity.
(I don't remember if this is exactly correct, but I think so.)


That seems correct to me, and it's really an important point to take 
into account for the security of an SRP-based solution, thanks.


3. The user cannot verify the identity of the server until after the
password has been entered. However, we've trained users to enter their
passwords only after verifying the server's identity.


The rule doesn't change that much: you still need to enter your password 
into a secure element, i.e. if we teach users that it's OK to enter their 
SRP password in a non-secure GUI because it won't be sent to the server, 
we lose.



4. You cannot identify the server until after you've created a
username/password on that server. But, account creation usually requires
giving the server personally identifying information that should be
protected by encryption and only sent after the server has been
authenticated.

Using the TLS_SRP_SHA_RSA_* cipher suites avoids problems #2 and #3
and using a non-SRP ciphersuite for account signup solves #4. But, that
requires using PKI and #1 is still a big problem.



Re: J-PAKE in NSS

2011-03-07 Thread Jean-Marc Desperrier

Brian Smith wrote:

Jean-Marc Desperrier wrote:

[...]  (I'd expect it instead to leave
the AES256 key inside NSS and just get back the handle to it to
encrypt what it needs later. [...]).



 The kind of improvement you described above will be made to resolve
 Bug 443386 and/or Bug 638966.

I think Bug 638966 is slightly different; it's about permanently storing 
the secret keys in the NSS db (I don't know if that's really possible, 
typically the db only stores private keys).


Doing this could solve bug 443386, except that SRP is not a FIPS-approved 
algorithm, so I'm not sure whether the module ought to still be able to 
do SRP when in FIPS mode.



[...] The
resolution of Bug 443386 will change that interface in a
non-backward-compatible way (except for Sync, which will get modified
to use the new interface in lockstep).


You are considering removing PBKDF2? If so, will the encrypted result 
be incompatible before/after the move?



I am very interested in what you want to use SRP for (especially
other than Sync).


If I need SRP, I can have it in an extension; it's just a pity not to 
get it from NSS.


But curl, which supports SRP from version 7.21.4 (with GnuTLS only at 
the moment, though they are pushing hard to get it into OpenSSL as 
well), has apparently simply given up on TLS-SRP support when compiled 
with NSS.


I see in an old doc that Johnathan was considering SRP support in 
Firefox for 3.next ( https://wiki.mozilla.org/Firefox/3.next/hitlist ).



Re: Freezing and making available to js the mp_int bignum package API

2011-03-01 Thread Jean-Marc Desperrier

Robert Relyea wrote:

About the
only use I could reasonable see for it would be to support PKCS #11
modules.


The other use would be as an optimized base for a bignum implementation, 
and that's what the original distribution says: an "ANSI C code library 
that performs arbitrary precision integer arithmetic functions" 
( http://spinning-yarns.org/michael/mpi/#what ).


If you want big nums, it's a bit stupid to separately implement something 
that is in fact functionally equivalent to mpi.h



we really would need to design an ABI
freezable interface with things like opaque equivalents to mp_ints --


It can be just handling mp_int * as an opaque pointer.


which means you couldn't allocate them on the stack like we typically do
in freebl.


You're right, the current ABI can't be used as is; mp_int *mp_new(int n) 
and void mp_free(mp_int *b) would need to be added.
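
As a minimal sketch of those two additions, layered on the existing 
mpi.h (this is only my illustration, and mp_new's size hint is accepted 
but ignored, since mp_init starts small and the value grows on demand):

#include <stdlib.h>
#include "mpi.h"

mp_int *mp_new(int n)
{
    (void)n;                        /* precision hint, unused in this sketch */
    mp_int *m = malloc(sizeof(*m));
    if (m == NULL)
        return NULL;
    if (mp_init(m) != MP_OKAY) {    /* mpi.h's own initializer */
        free(m);
        return NULL;
    }
    return m;
}

void mp_free(mp_int *b)
{
    if (b != NULL) {
        mp_clear(b);                /* releases the digit storage */
        free(b);
    }
}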


In answer to WTC's message :

 First, what should serve as the reference definition of the mp_int API ?
 Would it be just
 http://mxr.mozilla.org/security/source/security/nss/lib/freebl/mpi/mpi.h ?

I guess that's the file.  You may need to download the original MPI
distribution and find out which headers are considered the public
header.


Which is here: http://spinning-yarns.org/michael/mpi/#where. It's almost 
100% identical to the mpi.h inside NSS, so that doesn't make it any 
easier to make it opaque. OTOH the steps are just making mp_int * opaque, 
excluding the mp_digit functions, and adding mp_new/mp_free; it's not a 
*big* job.



Re: J-PAKE in NSS

2011-03-01 Thread Jean-Marc Desperrier

Robert Relyea wrote:

  So the end result : I see that J-PAKE code got included inside NSS
  https://bugzilla.mozilla.org/show_bug.cgi?id=609076  with a layer to
  access it from js (bug 601645). This was not announced here, and even
  if it looked like Sync Would keep J-PAKE, I did not imagine it would
  be included as a new mechanism in NSS, I thought it would stay inside
  an external layer.

It's a crypto authentication mechanism. It involves keys. It needs to be 
in NSS if we are to support it at all (which is why it's there ;)).


It involves no key, in the usual meaning of a secret permanent key, 
which makes it *possible* to implement it externally.
I notice the committed code extracts the generated shared symmetric key 
up to the JavaScript level, so it takes no real advantage of having 
generated it inside NSS (I'd expect it instead to leave the AES-256 key 
inside NSS and just get back a handle to it to encrypt what it needs 
later. It seems they believe they *must* be able to extract the key, but 
I don't really understand why).


Now, having all crypto handled by NSS is certainly the most sensible 
thing to do.


But I thought J-PAKE was intended as an as-quick-as-possible hack, which 
is why the Sync team was so reluctant to switch to using SRP (unless it 
was proven that J-PAKE was cryptographically weak), despite SRP being 
much more widely used (and already having several open bugs, even with 
patches, requesting its inclusion in NSS/Firefox).


Looking at the J-PAKE patch, it would be quite fast to rewrite it using 
SRP instead of J-PAKE with the existing SRP patch 
(CKM_NSS_JPAKE_ROUND1-ROUND2_SHAxxx/CKM_NSS_JPAKE_FINAL_SHAxxx would 
change to CKM_NSS_SRP_SERVER_KEY_PAIR_GEN/CKM_NSS_SRP_DERIVE, etc.); it's 
totally equivalent functionally, we'd just add a small step where the 
server, before deriving the shared key, makes a call to generate the 
password verifier from the password.


BTW, it's a bit disappointing to see the javascript so entangled with 
the specificities of J-PAKE, when the P11 layer below maps it to generic 
PK11_KeyGenWithTemplate / PK11_DeriveWithTemplate / PK11_Derive operations.



J-PAKE in NSS

2011-02-28 Thread Jean-Marc Desperrier

For context, from a message I wrote in last October :

Given the number of protocols that include SRP (SSL/TLS, EAP, SAML),
given that there's already a proposed patch for NSS (bug 405155, bug
356855), a proposed patch for openssl (
http://rt.openssl.org/Ticket/Display.html?id=1794&user=guest&pass=guest
), I still think SRP is the better choice since the effort to implement
it would be much more widely useful than with J-PAKE.

On the long term, I wouldn't be surprised if at some point you'll add 
another scenario where augmented security would be useful, and you will 
in all likelihood stay the only users of J-PAKE. I believe SRP will 
certainly end up being included, and it will be a little stupid to have 
two functionally equivalent algorithms.


So the end result: I see that J-PAKE code got included inside NSS 
(https://bugzilla.mozilla.org/show_bug.cgi?id=609076) with a layer to 
access it from js (bug 601645). This was not announced here, and even if 
it looked like Sync would keep J-PAKE, I did not imagine it would be 
included as a new mechanism in NSS; I thought it would stay inside an 
external layer.


I really, really regret that I gave up on that dead-end discussion about 
J-PAKE and did not follow what happened exactly after that.
It would have been functionally equivalent for Sync to use SRP instead 
of J-PAKE, but so much more useful for everyone else, especially for all 
those who would love to be able to use SRP with TLS.



Freezing and making available to js the mp_int bignum package API

2011-02-28 Thread Jean-Marc Desperrier

Hi,

There was some talk last October about accessing the mp_int API from 
JavaScript, and about freezing it so it could be exposed as a frozen 
API.


Nelson concluded that the one difficult point would be freezing the 
mp_digit type, since it currently has machine/processor-version-dependent 
definitions.


I just got interested in trying to revive this subject.
First, what should serve as the reference definition of the mp_int API ?
Would it be just 
http://mxr.mozilla.org/security/source/security/nss/lib/freebl/mpi/mpi.h ?


Second, at one point I found http://swtch.com/plan9port/man/man3/mp.html, 
which is not in fact exactly the same API, but which gave me an 
interesting idea: in it, the mpint functions are completely separated 
from the mpdigit functions.


Why not use as the public frozen API a version of the API that simply 
removes everything that uses mp_digit?
mp_digit is only an optimization for the case where the manipulated 
number is small, and I believe that in many cases this optimization is 
not very significant. Not using it in the public API would not really 
have a performance impact, and would make things much easier, I believe.



Re: NSS in Summer of Code?

2011-02-25 Thread Jean-Marc Desperrier

Gervase Markham wrote:

Are any of you interested in submitting a proposal for a Summer of Code
project for Bugzilla this year, and mentoring it?
https://wiki.mozilla.org/Community:SummerOfCode11:Brainstorming

NSS has done several projects in the past (recently, RSA-PSS signatures
and some TLS improvements), which as far as I know have been very
successful.


Why not get Firefox to use the new NSS database format, and make it able 
to load it from a global directory instead of the Firefox profile 
directory? (Even if that's not strictly speaking NSS.)


It's not a very big task; the biggest part might end up being checking 
that there's no performance regression, and no unfortunate lock that 
would hinder actual concurrent use of the database.




Re: A dedicated SSL MITM box on the market

2010-11-22 Thread Jean-Marc Desperrier

Jean-Marc Desperrier wrote:

Especially the certlock Firefox extension they propose, which builds
upon Kaie's Conspiracy, but does something more sophisticated.



Unfortunately it seems it has not been made publicly available until now.


Coming back to that old message to say I just saw that it's now public:
http://code.google.com/p/certlock/

It says it's an incomplete version though.


Re: Plan B for J-PAKE in Fennec B3 / Firefox B9 -- exposing MPI to Firefox for one beta cycle

2010-11-19 Thread Jean-Marc Desperrier

Robert Relyea wrote:

We do not support a
binary compatible big num library interface, and that's what adding the
symbols to freebl is saying.


One month ago Nelson said he wasn't in principle against doing that, 
while noting that doing it cleanly definitely requires more work and 
thought than just adding a .def:
http://groups.google.com/group/mozilla.dev.tech.crypto/msg/b99065a9ca70f62d

What is your take on that ?



Re: Moderator note: Happy Day - newsgroup moderation has begun

2010-11-15 Thread Jean-Marc Desperrier

On 11/11/2010 07:24, Nelson B wrote:

Today, there's no doubt.  Moderation is really in effect.


Great to see that as I'm coming back online after a two-week break.


[...]  Finally I can be confident that readers of this list
will not be receiving spam through it ... (I think)


And the people who read the newsgroup will be able to focus only on 
significant messages.



Invalid certificate encoding crashing certutil [Re: Thunderbird: Could not verify this certificate for unknown reasons]

2010-10-26 Thread Jean-Marc Desperrier

Matej Kurpel wrote:

In the Type field for S:, O:, OU: and CN: I always provided 0x0c which
is utf-8 string, but in the certificate there was 0x13 - printable
string. After I changed it - voila, it's working in Thunderbird, and
certutil doesn't crash anymore.


It sounds like a serious bug. Could you file it in Bugzilla, with NSS 
tools as the component?



Re: Usage of FreeBL and FreeBL/mpi through JavaScript in Firefox 4 Sync

2010-10-25 Thread Jean-Marc Desperrier

Brian Smith wrote:

Nelson B Bolyard wrote:


[...]



I'm talking about putting JBAKE (or whatever it is) into the base product.



[...]


Is there something specific about J-PAKE that you think is bad or
worse than some alternative? Are you objecting to J-PAKE because you do
not trust the proofs of security given by the authors? Is there anything
you'd like to have clarified about how the Sync team is proposing to use
J-PAKE and what steps we're planning to take to make the key exchange safe?


Hi, Brian.

I believe the problem is mostly that Nelson's enthusiasm for any 
password-based solution is extremely low.


I think the best way forward for now is to work to make FreeBL/mpi 
available for javascript, use it for your J-PAKE implementation, but 
include a way to select the algorithm in your protocol so that it's not 
hardcoded to be J-PAKE.



Re: J-PAKE (was Re: Usage of FreeBL and FreeBL/mpi through JavaScript in Firefox 4 Sync)

2010-10-25 Thread Jean-Marc Desperrier

Brian Smith wrote:

A balanced scheme is actually better for Sync because we are asking
the user to read a code from the screen of device 1 and type it into
device 2. Both devices need the same psssword/PIN.


The augmented scheme of SRP can be degraded to a balanced scheme if you 
need it to be. It's trivial to regenerate the verifier from the password 
when needed, instead of just getting it out of storage.
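
For concreteness, the verifier in SRP (RFC 5054 style) is just 
v = g^x mod N with x = SHA-1(salt | SHA-1(username ":" password)), so 
either side that knows the password can recompute it. Here is a minimal 
sketch using OpenSSL's BIGNUM and SHA-1 for brevity; the group parameters 
N and g and the inputs are assumed to be supplied by the caller.

#include <string.h>
#include <openssl/bn.h>
#include <openssl/sha.h>

BIGNUM *srp_verifier(const BIGNUM *N, const BIGNUM *g,
                     const unsigned char *salt, size_t salt_len,
                     const char *user, const char *pass)
{
    unsigned char inner[SHA_DIGEST_LENGTH], outer[SHA_DIGEST_LENGTH];
    SHA_CTX sha;

    /* inner = SHA1(username ":" password) */
    SHA1_Init(&sha);
    SHA1_Update(&sha, user, strlen(user));
    SHA1_Update(&sha, ":", 1);
    SHA1_Update(&sha, pass, strlen(pass));
    SHA1_Final(inner, &sha);

    /* x = SHA1(salt | inner) */
    SHA1_Init(&sha);
    SHA1_Update(&sha, salt, salt_len);
    SHA1_Update(&sha, inner, sizeof(inner));
    SHA1_Final(outer, &sha);

    BIGNUM *x = BN_bin2bn(outer, sizeof(outer), NULL);
    BIGNUM *v = BN_new();
    BN_CTX *ctx = BN_CTX_new();
    if (x && v && ctx)
        BN_mod_exp(v, g, x, N, ctx);   /* v = g^x mod N */

    BN_CTX_free(ctx);
    BN_clear_free(x);                  /* x is password-derived: wipe it */
    return v;
}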



I am very interested in hearing what people think about the validity
of the proofs in the J-PAKE paper and whether any security
considerations have been overlooked.


IMO the trouble with J-PAKE is that it's probably used an order of 
magnitude less (or more) than SRP. That means fewer eyes on it to spot 
the flaws, and that in principle is a problem.


I've found the initial proof of security for SRP :
http://srp.stanford.edu/ndss.html#SECTION0004
That's the v3 version of 1997. While the proofs have not been 
repudiated, some minor problems have been found and the v6 version has 
been issued to solve them as described here :

http://srp.stanford.edu/srp6.ps


FWIW, I am pretty sure that we will be having a discussion about SRP
and other solutions to the problems that SRP solves when we do planning
for post-FF4 releases. Implementing J-PAKE now for this one Sync use
case doesn't mean that we will prefer J-PAKE for solving those other
problems, and it doesn't mean that we've decided to avoid implementing SRP.


I think it would be dead code to have an implementation of both. By 
versioning the protocol, you can go with J-PAKE now, and use only SRP 
later.



Re: Usage of FreeBL and FreeBL/mpi through JavaScript in Firefox 4 Sync

2010-10-24 Thread Jean-Marc Desperrier

On 22/10/2010 19:07, Brian Smith wrote:

  Speaking only for myself, I have no objection to offering the mp_int
  bignum API as a public API out of freebl3.

If people are open to having the J-PAKE building blocks in FreeBL,
then we wouldn't need MPI to be part of the public API. The main concern
with putting J-PAKE building blocks in NSS is getting that NSS release
out for FF4.0.


The MPI option is not a bad one; having an efficient bignum 
implementation available would probably be useful to several people.

You want it exported so you can call it from js-ctypes?

In fact, making this library and bignums available to all JavaScript 
would be useful. Brendan happens to be thinking about it at the moment 
(http://brendaneich.com/2010/10/should-js-have-bignums/), but it 
certainly won't happen soon.



Re: J-PAKE (was Re: Usage of FreeBL and FreeBL/mpi through JavaScript in Firefox 4 Sync)

2010-10-23 Thread Jean-Marc Desperrier

Brian Smith wrote:

Jean-Marc Desperrier wrote:

Why are you choosing J-PAKE instead of SRP ?



The J-PAKE authors claim they developed J-PAKE to avoid patents that
cover other algorithms, and they claim they won't patent it. I don't
know if either claim is true or not.


The reference I gave before shows that there is now a widely accepted 
opinion that SRP does not infringe patents any more than J-PAKE does 
(even if there was indeed that doubt a few years ago).


A patent that covers SRP might still be found, but today that does not 
appear to be any more likely than it is for J-PAKE.



[...]


Balanced vs augmented does not matter for Sync's usage because the
user is at both end points. The end-user is establishing a secure
channel from one of his/her devices to another one of his/her devices
that are in the same location. Also, there is a new PIN (password) for
every transaction.

See https://wiki.mozilla.org/Services/Sync/SyncKey/J-PAKE


If you don't need augmented security, J-PAKE makes more sense.

I'm now reading at 
http://www.mail-archive.com/cryptogra...@metzdowd.com/msg09739.html that 
J-PAKE is *proven* to be no weaker than the algorithms it relies on.
I don't have exact references, but I doubt that version 6 of SRP lacks an 
equivalent security proof, given the number of standards that rely on it. 
Wikipedia says "even if one or two of the cryptographic primitives it 
uses are attacked, it is still secure", but doesn't give a direct link 
backing that up (there are references to it resisting collision attacks 
on SHA-1).


Given the number of protocols that include SRP (SSL/TLS, EAP, SAML), 
given that there's already a proposed patch for NSS (bug 405155, bug 
356855), a proposed patch for openssl ( 
http://rt.openssl.org/Ticket/Display.html?id=1794&user=guest&pass=guest 
), I still think SRP is the better choice since the effort to implement 
it would be much more widely useful than with J-PAKE.


On the long term, I wouldn't be surprised if at some point you'll add 
another scenario where augmented security would be useful, and you will 
in all likelihood stay the only users of J-PAKE. I believe SRP will 
certainly end up being included, and it will be a little stupid to have 
two functionally equivalent algorithms.



Re: Usage of FreeBL and FreeBL/mpi through JavaScript in Firefox 4 Sync

2010-10-22 Thread Jean-Marc Desperrier

Philipp von Weitershausen wrote:

Not sure how generic the signature of the zero knowledge proof we use
in J-PAKE is. Compatibility with the implementation found in OpenSSL
is important for us right now


Hi,

Why are you choosing J-PAKE instead of SRP ?

Looking for an assessment of J-PAKE against SRP, I found the following, 
which makes me worried that that choice is a mistake.


http://rdist.root.org/2010/09/08/clench-is-inferior-to-tlssrp/#comment-5990
The JPAKE in OpenSSH is unfinished and I don’t recommend enabling it 
[...] When writing it, I came up with a hacky solution to the cleartext 
password storage problem [...]


http://rdist.root.org/2010/09/08/clench-is-inferior-to-tlssrp/#comment-5993
“Balanced” is symmetric and requires both sides to hold the same 
authenticator (e.g., a plaintext password). “Augmented” has the 
additional property that compromise of the server does not yield the key 
necessary to impersonate the client


JPAKE and SPEKE are balanced schemes and thus have the same problem as 
Clench. However, SRP does not. SRP is an augmented scheme





Re: Signature with a privatekey doesn't works in JSS

2010-10-08 Thread Jean-Marc Desperrier

Felix Alejandro Prieto Carratala wrote:

I also try this:
[...]
//pk is an org.mozilla.jss.crypto.PrivateKey that I get with
//CryptoManager.findPrivKeyByCert(cryptoManager.findCertByNickname(nickName));


Why is that line commented out? Do you check that you get a valid pk 
handle out of findPrivKeyByCert?


It doesn't have a single chance of working if you don't use 
findPrivKeyByCert to get the private key.


Re: Support for SSL False Start in Firefox

2010-10-08 Thread Jean-Marc Desperrier

Stephen Shankland wrote:

I've now located the blacklist file, which at present has 661 sites
blacklisted, so I suspect you guys are right on that basis, too.


The way it was written on Langley's blog, one could easily think they 
had used the method of calculation that gave the better-looking 
percentage.


It was just the opposite: if 0.05% corresponds to an actual number of 661 
broken sites, then the reference used for the total number of sites is 
1.3 million, which means they used the number of distinct valid leaves 
from the EFF SSL Observatory, not even the valid cert chains.



Support for SSL False Start in Firefox

2010-10-05 Thread Jean-Marc Desperrier

Hi,

Google is currently communicating about how they will use SSL False 
Start to accelerate the web, even if it means breaking a small fraction 
of incompatible sites (they will use a blacklist that should mitigate 
most of the problem).

See http://news.cnet.com/8301-30685_3-20018437-264.html

Am I right that there is currently no bug and no plan to make the False 
Start support that has been included in NSS in bug 525092 available in 
Firefox? (As noted in 
https://bugzilla.mozilla.org/show_bug.cgi?id=525092#c24, making it 
minimally available requires one call to set the SSL_ENABLE_FALSE_START 
option, plus a preference to optionally disable it. Handling the 
blacklist is more work; I don't know if Google plans to make their list a 
public resource, maybe Wan-Teh Chang can tell.)
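
For reference, here is a minimal sketch of that one call, on a socket 
already wrapped with SSL_ImportFD; the wrapper function and the 
preference plumbing on the Firefox side are my own illustration, not 
existing code.

#include <prio.h>
#include <ssl.h>

/* Turn on False Start for one SSL socket if the (hypothetical) pref says so. */
SECStatus enable_false_start(PRFileDesc *ssl_fd, PRBool pref_enabled)
{
    if (!pref_enabled)
        return SECSuccess;          /* leave NSS's default behaviour alone */
    return SSL_OptionSet(ssl_fd, SSL_ENABLE_FALSE_START, PR_TRUE);
}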


XP2 mda.firefox and mdt.crypto, fu2 mda.firefox



Re: ReferenceTable overflow (max=512)

2010-08-22 Thread Jean-Marc Desperrier

On 19/08/2010 22:44, Nelson B Bolyard wrote:

Support for NSS on device OSes (such as cell phone OSes) is provided by
various teams that are adapting Firefox to run on those devices.  Mozilla
has a team that does that and I suspect they could help you


Maybe they couldn't. That's a JSS problem, and they don't use JSS.

I think the best place to get a solution would be a more generic Dalvik 
development forum where JNI is a frequent topic. If there are too many 
JNI exports in the .so lib, the problem is not really JSS/NSS-specific.



Re: Odp: Re: JSS in Firefox - loading applets over mutual SSL stopped working since the v. 3.6.x

2010-07-13 Thread Jean-Marc Desperrier

waldemar.ko...@max.com.pl wrote:

Unfortunately i don't :(  and it's out of
http://releases.mozilla.org/pub/mozilla.org/firefox/releases/. Could you
provide me with the link if it exists elsewhere ?


It's here :
ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/

But the fact http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/ 
redirects you to http://releases.mozilla.org where you can get only the 
most recent versions is certainly annoying.



Re: Thunderbird problem with the search for certificates in the S-TRUST trust list service

2010-06-10 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

Fame and Glory await.:-)


Which means a mention in http://www.mozilla.org/credits/ or about:credits :
  We would like to thank our contributors, whose efforts make this 
software what it is. [...]
  Any such contributors who wish to be added to the list should send 
mail to cred...@mozilla.org with your name and a sentence (citation) 
summarizing your contribution



Re: multiple certificate selection dailogs

2010-05-19 Thread Jean-Marc Desperrier

Šandor Feldi wrote:

I do get multiple certificate selection dialogs in sequence at SSL 
session start... so I have to reselect the same cert, say twice...

I enter the https address of the target site, I get asked about the cert, 
I select it, then the site displays my info and offers me an "enter site" 
button, then it asks me again for the cert... this is what confuses me... 
why?


Is the web site also something you develop?

There are a number of pitfalls in apache/mod_ssl configuration that will 
cause it to throw away the existing user identification info and ask 
again. If that's the kind of configuration you are using, it's quite 
likely your problem is rather such an apache/mod_ssl problem.


https://issues.apache.org/bugzilla/show_bug.cgi?id=48215
https://issues.apache.org/bugzilla/show_bug.cgi?id=48228
https://issues.apache.org/bugzilla/show_bug.cgi?id=47055
https://issues.apache.org/bugzilla/show_bug.cgi?id=44961

You might get better behavior by making sure you use the latest Apache 
update and by setting the OptRenegotiate option:

http://httpd.apache.org/docs/2.2/en/mod/mod_ssl.html#ssloptions


Re: The Rational Rejection of Security Advice by Users by Cormac Herley

2010-05-19 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

Isn't this actually a sign that the technology works? I mean, 100% false
positives means literally 100% success.


Shit no!

The higher the false positive rate, the more acute the failure.

People will trust and respect the warning *only* if there's a very low 
rate of false positives, down to the point where it *could make sense* 
to tolerate a few false negatives *if* that's the only way to make sure 
there are sufficiently few false positives to get users to trust the 
warnings when they do appear.


Yes, average Firefox 3+ users don't know how to work around the 
warnings, but they just start IE to get to the site instead. At the end 
of the day, the failure to protect them is exactly the same.


I was running an old Firefox 2 version recently, and it made it obvious 
again that the modified SSL warnings are a failure.


Firefox 2 tried to explain the error to the user, which, even if *really* 
far from perfect, at least often made it possible to understand why 
Firefox was wrongly raising an error, so it was better in terms of the 
user trusting the report and not just starting IE to get to the site 
instead.



Re: The Rational Rejection of Security Advice by Users by Cormac Herley

2010-05-19 Thread Jean-Marc Desperrier

Marsh Ray wrote:

What do you propose other than not letting the user bypass
the cert error page at all?


Investing some serious time enhancing those errors.

Or investing some serious time evangelising the SSL site owners into 
using a real certificate.


But the status quo doesn't work.


Another page that says we really really mean it?


Many times the user is right that the site he wants to access is not 
*actually* an attacker. So Firefox should not "really mean it".


I collected a page of links on my blog. All of them raise SSL warnings. 
Not one is actually an attacker.
It's mostly a list of people using a private CA for a public server.
But users don't care that they're doing something incorrect. They just 
care whether it's actually an evildoer. And it's not.

One of them is https://svn.boost.org; they certainly get a lot of hits.

There's little solution for this at the browser level.

Still, one could for example think about an option to crowdsource the 
answer.
Not automatically, but have a button when you meet the problem that asks 
the network whether "svn.boost.org + this certificate fingerprint" is a 
fake or not.
As soon as you start thinking "what could we do?" instead of just 
saying "this should not happen", some ideas appear.


Then there are also the other errors, like an expired certificate, which 
is often just an operational slip when the cert is only *recently* expired. 
The browser could be smarter about that.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Can I add more than one e-mail address as subjAltName extension in X.509 cert

2010-05-17 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

 - Do other applications (like thunderbird and other mail), would make
sure that they search through all the e-mail addresses to look for a
match?


Yes, this appears to be the case.


IIRC, they do, but there are some places where only one address will be 
printed: the first of the list, not the one that matches the current email.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Alerts on TLS Renegotiation

2010-04-13 Thread Jean-Marc Desperrier

On 12/04/2010 15:29, Eddy Nigg wrote:

updated servers need updated clients and break older ones, whereas old
servers will not allow new clients.


I haven't seen one yet that doesn't have a flag to accept older 
clients. If you set that flag, *and* disable renegotiation at least for 
older clients, you're safe.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: S/MIME interop issue with Outlook 2010 beta

2010-04-10 Thread Jean-Marc Desperrier

On 31/03/2010 17:11, Kaspar Brand wrote:

On 31.03.2010 07:49, Michael Ströder wrote:

It seems it's a CMS structure and recipientInfos contains subject key ids
instead of issuerAndSerialNumber. It seems Seamonkey 2.0.x does not support
that. Is it supported by the underlying libs?


I believe so, see

http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/smime/cmsreclist.cmark=89-91#85

That's the code which is used by nsCMSMessage
(http://mxr.mozilla.org/comm-central/ident?i=nsCMSMessage), and
therefore also by Seamonkey.


Are you certain? Previously we found some really ugly S/MIME code that 
hardcodes the use of SHA-1:

http://groups.google.fr/group/mozilla.dev.tech.crypto/msg/7a15dafef963fe20
and here directly for the code
https://mxr.mozilla.org/comm-central/source/mailnews/extensions/smime/src/nsMsgComposeSecure.cpp#496

When I checked, I concluded that code reimplements everything on top of 
low-level pkcs#7 (nss/lib/pkcs7/) and makes no use of nss/lib/smime.


I need to check the code you dug out here. It seems very confusing.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Domain-validated name-constrained CA certificates?

2010-04-07 Thread Jean-Marc Desperrier

Matt McCutchen wrote:

On Apr 6, 5:54 am, Jean-Marc Desperrierjmd...@gmail.com  wrote:

  Matt McCutchen wrote:

An extended key usage of TLS Web Server Authentication on the
intermediate CA would constrain all sub-certificates, no?


  You are here talking about a proprietary Microsoft extension of the X509
  security model.

No, I'm talking about the Extended Key Usage extension defined in
RFC 5280 section 4.2.1.12.


I repeat, you *are* talking about a proprietary Microsoft extension, 
which is to take the EKU into account during path validation.


The EKU as defined in section 4.2.1.12 of RFC 5280 only applies to the 
certificate that contains it; it has no effect on certification paths 
that include that certificate.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Domain-validated name-constrained CA certificates?

2010-04-06 Thread Jean-Marc Desperrier

Matt McCutchen wrote:

An extended key usage of TLS Web Server Authentication on the
intermediate CA would constrain all sub-certificates, no?


You are here talking about a proprietary Microsoft extension of the X509 
security model.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Domain-validated name-constrained CA certificates?

2010-04-06 Thread Jean-Marc Desperrier

Matt McCutchen wrote:

A name-constrained intermediate certificate could be quite convenient
for the large organizations that are presently demanding their users
to trust private CAs for the whole Web (see bug 501697).


Ah! Presenting this as restricting the people who currently use sub-CAs 
for their own purposes, in order to make that more secure, will certainly 
be much more successful than presenting it as allowing many more people 
to have their own sub-CA.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Domain-validated name-constrained CA certificates?

2010-04-04 Thread Jean-Marc Desperrier

On 04/04/2010 08:32, Matt McCutchen wrote:

[...]
It would be great if a Mozilla-recognized CA would be willing to give
me, as the registrant of mattmccutchen.net, an intermediate CA
certificate with a critical name constraint limiting it to
mattmccutchen.net.


I don't believe this "taking a hammer to crack a nut" approach will have 
much success, especially since there's also the fact that the CA would 
not be able to constrain the *usage* you give to your certs.



#2. The tooltip of the Firefox SSL badge (a.k.a. Larry site identity
button) shows the Organization field of the lowest CA certificate,
[...]  The registrant could
put a misleading value in this field.  [...]  Should Firefox
show the organization name of the root CA instead, since it is
ultimately responsible for all validation paths that end at its trust
bit?


We are onto something much more interesting here. I'm not sure it's really 
a great practice to take only one level into account there. 
But then showing only the root might be going a bit too far in the other 
direction. So maybe something better is needed, but it's not easy to decide what.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Alerts on TLS Renegotiation

2010-04-03 Thread Jean-Marc Desperrier

On 02/04/2010 18:25, johnjbarton wrote:

The appropriate way to address this security problem starts by
contacting the major providers of server software


There's no need to contact them, they are well aware of the problem.
AFAIK they have all already issued the necessary updates.

It's the sites that need to catch up with those updates.
And web developers do have the power to influence those sites to update.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Alerts on TLS Renegotiation

2010-04-02 Thread Jean-Marc Desperrier

johnjbarton wrote:

Closely related to bug 554594 is
https://bugzilla.mozilla.org/show_bug.cgi?id=535649

Web developers using Firefox Error Console or tools like Firebug that
use nsIConsoleService are now bombarded with pointless messages like:

services.addons.mozilla.org : potentially vulnerable to CVE-2009-3555


No, what's closely related to this is
https://bugzilla.mozilla.org/show_bug.cgi?id=555952
Implement RFC 5746 for mozilla.org SSL sites, to avoid Mozilla warning 
about CVE-2009-3555


As soon as the proper version of Zeus is deployed, this should be fixed.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Improper SSL certificate issuing by CAs

2010-04-02 Thread Jean-Marc Desperrier

Kurt Seifried wrote:

Is this another 1st of April joke? At least your timing is a bit
  questionable;-)

No this is not an April fools joke. The PDF at Linux Magazine is what
will be in the print copy (due out in 3 weeks I believe)


Kurt, the best group for sending this and also to continue the 
discussion would be mozilla.dev.security.policy


From a cryptographic point of view, nothing was broken. It's the policy 
that's bad.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Using of HTML keygen element

2010-03-30 Thread Jean-Marc Desperrier

The most adequate group for this discussion would be mozilla.dev.tech.crypto

I agree that enhancing generateCRMFRequest to let it generate a more 
usual format instead of only CRMF would be a big step forward.


And making it more obvious that keygen is not a good long-term solution is 
a very good thing.


Thomas Zangerl wrote:

Arm,

I am not sure whether I would recommend this, but in Firefox and
Safari, <keygen> currently just generates a <select><option>...</option></select>
structure in the DOM. So what we in the Confusa project (http://
www.confusa.org) are currently playing with to increase the user
friendliness, is just assigning the keylength to the option texts and
then setting the right option to selected. In JavaScript that is
something along the lines of

var keysize = /* usually something from PHP */ 2048;
var keygenCell = document.getElementById("keygenCell");
var options = keygenCell.getElementsByTagName("option");

/* Gecko based browsers use some strange "Grade" syntax for
   keylengths - replace */
if (navigator.userAgent.indexOf('Gecko') != -1) {
    var GECKO_STRING_HIGH = "High Grade";
    var GECKO_STRING_MEDIUM = "Medium Grade";

    for (var i = 0; i < options.length; i++) {
        var option = options[i];

        if (option.text == GECKO_STRING_HIGH) {
            option.text = "2048 bits";
            option.value = GECKO_STRING_HIGH;
        } else if (option.text == GECKO_STRING_MEDIUM) {
            option.text = "1024 bits";
            option.value = GECKO_STRING_MEDIUM;
        }
    }
}

/* autoselect the option with the right keysize */
for (var i = 0; i < options.length; i++) {
    var option = options[i];

    if (option.text.indexOf(keysize) != -1) {
        option.selected = true;
    }
}


The above seems to work in Firefox 3.0 and 3.5 and Safari 4
(selection) but not in Opera 10.50.
An alternative you might consider is using Mozilla's Crypto-Interface,
which gives you full control over the keysize etc.:
https://developer.mozilla.org/en/JavaScript_crypto

Regarding, Mozilla's Crypto-interface, we found it pretty inconvenient
to deal with yet another certificate format, though, because
generateCRMFRequest generates the cert-request as a CRMF file and
Firefox expects to receive the response in CMMF. If there is no easy
way to do this with your CA, you might however have to fall back to a
hack just as we do.

/Thomas


On Mar 29, 10:48 am, Arm Abramyanaabra...@gmail.com  wrote:

  Dear firefox support team

  Armenian e-Science Foundation Certification Authority (ArmeSFo 
CA, http://www.escience.am/ca/index.html), which is a member of the European Policy
Management Authority for Grid Authentication 
(EUGridPMA, https://www.eugridpma.org), created a Graphical User Interface for
generating a private key and Certificate Signing Request (CSR). According to
our Certification Policy, the minimum key length for a user or host/service
certificate is 1024 bits.

Firefox gives users a choice of RSA key between high strength (2048
bits) and medium strength (1024 bits). It provides this with the HTML keygen element.

Would you help us to change the text of the HTML form ("High Grade" and "Medium
Grade") and to set the default value of them?

Thank you in advance
Armenuhi Abramyan
ArmeSFo CA operator




--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Using of HTML keygen element

2010-03-30 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

On 03/30/2010 01:23 PM, Jean-Marc Desperrier:

And making more obvious that keygen is not a good long term solution
is a very good thing.


Only in case the alternative will be supported by all or most browsers.


The original message shows that the fact keygen imposes a text of "High 
Grade" and "Medium Grade" is very inconvenient for users, who end up 
using hacky workarounds to change it. Even if the javascript solution 
that permits better control over this is only supported by Firefox, it 
would already be a step forward. I think if it were really good, Chrome 
and Opera would copy it.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: no release tarball for 3.12.6

2010-03-29 Thread Jean-Marc Desperrier

Hanno Böck wrote:

[...]
Firefox release source bundles nss, but it's good linux distribution policy to
avoid bundled libraries, so this shouldn't happen.


Maybe in general, but in this case what you really want is the NSS 
version that's used by Firefox.


I think what the process guarantees is :
- the NSS cvs repository will be tagged,
- that tag will be published on https://wiki.mozilla.org/NSS:Tags
- two TAG-INFO files in the security/nss and nsprpub directories of the 
firefox release will contain the value of the NSS and NSPR tags that have 
been imported. (this is new, see 
https://bugzilla.mozilla.org/show_bug.cgi?id=550223 )


So I think the best option is to get the source directly from the CVS 
repository using that tag name, rather than wait for the release tarball.
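
For example, assuming the usual anonymous CVS mirror and the NSS checkout 
module alias (adjust if your setup differs), something along these lines 
should pull exactly what a given Firefox release was built from:

   # anonymous pserver access, password is "anonymous"
   export CVSROOT=:pserver:anonymous@cvs-mirror.mozilla.org:/cvsroot
   cvs login
   cvs checkout -r NSS_3_12_6_RTM NSS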

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: A dedicated SSL MITM box on the market

2010-03-29 Thread Jean-Marc Desperrier

Jean-Marc Desperrier wrote:

Article on Wired here :
http://www.wired.com/threatlevel/2010/03/packet-forensics/


The original article is well worth reading also :
http://files.cloudprivacy.net/ssl-mitm.pdf

Especially the certlock Firefox extension they propose, which builds 
upon Kaie's Conspiracy, but does something more sophisticated.

Unfortunately it seems it has not been made publicly available so far.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: no release tarball for 3.12.6

2010-03-27 Thread Jean-Marc Desperrier

On 27/03/2010 11:59, Hanno Böck wrote:

I'm not sure if you're aware of that issue, but as firefox 3.6.2 needs nss
3.12.6 and there's no release tarball yet


You are two days late :
https://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_12_6_RTM/src/ 


Dated the 25th of March.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: Cipher not picked/enabled in a TLS session

2010-03-19 Thread Jean-Marc Desperrier

Gregory BELLIER wrote:

Jean-Marc Desperrier a écrit :

Wan-Teh Chang wrote:

You can use the NSS command-line tool 'ssltap' to inspect the SSL
handshake
messages:http://www.mozilla.org/projects/security/pki/nss/tools/ssltap.html


It's significantly easier to do it with Wireshark.


Is it easier than the selfserv and tstclnt which are 2 tools supplied by
NSS ? They print the cipher negociated.
What would be seen in Wireshark ? The cipher's OID ?


If you are well served by the NSS tools, then Wireshark is not 
significantly easier.


But the first Google hit for "wireshark download" is the page that 
provides a one-click Windows installer (there are even portable and U3 
versions for your USB key, and the download comes from a high-bandwidth 
server), and on the right-hand part of the download screen are the 
third-party packages for various Unix flavors; it is a standard package 
for most Linux distributions, so "apt-get wireshark" will get it directly. 
You get a nice GUI with a tree view of all the content of each network 
layer which, in the SSL part, displays the cipher suite name together 
with its hex identifier (cipher suite identifiers are not OIDs, but hex 
numbers), and you get a right-click "Follow TCP/SSL Stream" option that 
will reassemble all the SSL segments to show you the full decrypted 
content when you click the first packet.


The only thing that is not convenient is that the option to decrypt 
the SSL content is hidden in a protocols/SSL preferences submenu, and you 
need to know the ip,port,protocol,path_to_pem_private_key_file syntax it uses.
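
For example, an entry in that list looks like this (the IP, port and key 
path are of course specific to your own server):

   192.0.2.10,443,http,/etc/apache2/ssl/server-key.pem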


And the non-Windows ports might not all have the decryption option.

So most people who have to ask the question of how to get a dump of SSL 
traffic are better served by Wireshark than by the NSS native tools.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: popChallengeResponse unimplemented?

2010-03-17 Thread Jean-Marc Desperrier

Emmanuel Dreyfus wrote:

So as I understand, it is not implemented yet. This is a quite
disapointing, since the documentation does suggests it is fully
supported. This should be updated.


Just get a login on MDC :-)


Now that I wrote the code in C for producing a base64 encoded
popChallengeResponses, I have the feeling implemnting the reply would
not be very complicated. Is anyone working on it? Would a contribution
for this have a chance to be integrated?


I'm not sure. As the page says : The current implementation does not 
conform to that defined in the CMMF draft, and we intend to change this 
implementation to that defined in the CMC RFC.


Implementing it to do something non-standard that nobody else does 
would probably not get in. Now, if you have the time to check the CMC 
RFC and do what's described there, that would stand a better chance.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Idea for SoC-Project implementing PSS in NSS

2010-03-17 Thread Jean-Marc Desperrier

Wan-Teh Chang wrote:

Please use the official page instead:
https://wiki.mozilla.org/Community:SummerOfCode10


But only when a mentor can be immediately identified !

I have another idea, but I don't believe any sponsor/mentor can be found.

The S/MIME code in Thunderbird was written before an S/MIME layer was 
added to NSS. It's quite incomplete and broken in several respects with 
regard to what the NSS S/MIME layer can do. So a SoC project to convert 
Thunderbird so that it calls the correct S/MIME interface to create 
signed/encrypted messages, instead of the low-level pkcs#7 interfaces, 
would be a huge step forward.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Cipher not picked/enabled in a TLS session

2010-03-16 Thread Jean-Marc Desperrier

Gregory BELLIER wrote:

As I said I would do, I looked every where in the code where the word
camellia appears and my code is very much alike. I really don't know.


Did you have a look at a Wireshark capture of it ?
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Idea for SoC-Project implementing PSS in NSS

2010-03-16 Thread Jean-Marc Desperrier

Wan-Teh Chang wrote:

Implementing RSA-PSS should be a good SoC project.  If it turns out
to be too little work, you can always implement the related RSA-OAEP
encryption.


Another good SoC project might be to add support for TLS 1.2 and the 
SHA256-based TLS cipher suites, no?


Updating the PRF to make it cipher-suite-specified when TLS 1.2 is 
negotiated, instead of SHA1+MD5, might already be enough for a SoC; if 
that is not sufficient, adding support for the newer authenticated 
encryption mode (AES Galois Counter Mode) could complement the project.
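
For reference, the TLS 1.2 PRF is defined in RFC 5246 along these lines 
(with SHA-256 as the hash for the cipher suites defined there), which 
gives an idea of the scope of that change:

   PRF(secret, label, seed) = P_<hash>(secret, label + seed)

   P_hash(secret, seed) = HMAC_hash(secret, A(1) + seed) +
                          HMAC_hash(secret, A(2) + seed) + ...
   with A(0) = seed and A(i) = HMAC_hash(secret, A(i-1))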

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: TLS logout in Firefox

2010-03-15 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

When the user says I
want to clear my current session, which of those SSL sessions
does he mean?


The server whose name appear in his URL bar.


 Anyway if PSM does not expose a jave script method for accessing the
 clear cache command, I'm sure kai or myself would be happy to review a
 patch which does.

The crypto object offers a logout method that does it.
http://mxr.mozilla.org/security/source/security/manager/ssl/src/nsCrypto.cpp#2875


This one just makes sure your password/PIN is forgotten for all tokens. 
Not really the same thing.

Still useful though.
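
For a page that just wants the token authentication dropped, that means 
something as simple as:

   window.crypto.logout();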


I see no browser code that calls SSL_InvalidateSession
http://mxr.mozilla.org/security/ident?i=SSL_InvalidateSession


That one seems to be doing the trick.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Problems importing PKCS #12 client certs

2010-03-04 Thread Jean-Marc Desperrier

Chris Hills wrote:

Perhaps there is place for a fork of firefox (perhaps an enterprise
version) that uses the windows certificate store and dispenses with the
local certificate store. I understand that support for MSI installation
is already being worked on.


I think it would make much, much more sense to use the OS store for 
private keys across all Firefox versions !


This is the strategy followed by Chrome.

In fact, there is code to do that in NSS but I'm afraid it's currently 
not really maintained :

Mac OS X version :
http://mxr.mozilla.org/security/source/security/nss/lib/ckfw/nssmkey/
Microsoft CAPI version :
http://mxr.mozilla.org/security/source/security/nss/lib/ckfw/capi/

Until now, I thought Chrome was using that code, but it in fact uses 
three separate implementations of its security and SSL code, for 
Windows, Mac OS, and Linux, based on the CAPI-Schannel / CSSM-Secure 
Transport / NSS stacks, as can be seen here:

http://src.chromium.org/viewvc/chrome/branches/official/build_166.1/src/net/base/ssl_client_socket_win.cc
http://src.chromium.org/viewvc/chrome/branches/official/build_166.1/src/net/base/x509_certificate_win.cc
http://src.chromium.org/viewvc/chrome/branches/official/build_166.1/src/net/base/ssl_client_socket_nss.cc
http://src.chromium.org/viewvc/chrome/branches/official/build_166.1/src/net/base/x509_certificate_nss.cc
http://src.chromium.org/viewvc/chrome/branches/official/build_166.1/src/net/base/ssl_client_socket_mac.cc
http://src.chromium.org/viewvc/chrome/branches/official/build_166.1/src/net/base/x509_certificate_mac.cc
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Another protection layer for the current trust model

2010-03-04 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

it has exposed an unrelenting amount of accusation without
evidence.  Show us a single falsified certificate.  Anything less is
unworthy of this forum.


A large amount of that. But not necessarily exclusively.

There is, in what has been reported, one fact that I think merits 
examination: the report that Google's automated site inspection tools 
demonstrate that CNNIC is involved, as an entity, in the distribution of 
software that deliberately installs itself on users' computers without 
their consent, using a security vulnerability.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: S/MIME in Thunderbird

2010-03-03 Thread Jean-Marc Desperrier

Gregory BELLIER wrote:

Ok, so it's still sha1 by default for S/Mime ?
Is it also sha1 by default for TLS ?


TLS depends on the cipher-suites, and fortunately it's not hard-coded.

Unfortunately, the first cipher suites using SHA256 are the ones defined 
in TLS 1.2 (RFC 5246), and I believe support for this RFC is still not 
included in NSS.


It would not be a lot of work to implement at least 
TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, 
TLS_DH_RSA_WITH_AES_128_CBC_SHA256 and TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, 
as it would just mean replacing SHA1 with SHA256 with respect to the 
equivalent SHA1 suites, but it has not been done yet. I think an external 
contributor could do it.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Does anyone make Mozilla JSS 4.3.1/NSS 3.12.4 work at Android ?

2010-02-22 Thread Jean-Marc Desperrier

Wan-Teh Chang wrote:

But Michael Wu of Mozilla just started porting NSPR to Android.
So I expect NSS will be ported to Android soon.


Sorry if that's slightly off-topic, but what crypto layer does the 
Android browser use then?

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Fix for the TLS renegotiation bug

2010-02-20 Thread Jean-Marc Desperrier

On 20/02/2010 03:25, Eddy Nigg wrote:

Apache performs a renegotiation when none is needed when configuring
client authentication at a particular location, is there a logical
explanation for that? Or even considered correct implementation?


Yes, there's a logical explanation and Apache is doing nothing wrong here.

The parameters of the SSL session, including SSL client authentication, 
are negotiated before the server sees any data from the client, so 
before the SSL server has any idea which location will be accessed.
The best Apache can do at this moment is to use the parameters that are 
set for the root of the virtual server concerned. After negotiation is 
complete, the client sends the GET/POST request, the server sees which 
location is actually accessed, and has to do a full renegotiation if 
there's a difference in the parameters for that location.


Where Apache is failing is that it will quite often do a 
renegotiation when you successively access two locations whose 
parameters are compatible, or even identical. So the best is to set the 
parameters at the root, and not override them anywhere.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Fix for the TLS renegotiation bug

2010-02-19 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

Trying the different sub domain trick doesn't work on the same server
but different host and IP.


Let me phrase this explicitly :
- You use only one Apache instance
- You configured two virtual hosts inside that instance
- Then :
   - either each virtual host listens on a different IP
   - or they listen on two different ports,
 and you use a firewall to redirect the two separate external IPs onto 
those two ports on the same internal IP



I assume that's because the server reuses the
cached SSL session and initiates a renegotiation upon certificate
authentication. Does that make sense so far?


Well, it may be so, but it'd be a little surprising.
It requires two bugs/features, I think:
- a server that allows reusing the same SSL ID on a different virtual 
host. I can see how it could happen that the SSL ID pool is actually 
shared between all virtual servers, but it's still not very clean.
- a client that tries to reuse the SSL ID if the request goes to a 
different host inside the same subdomain. Now that's harder to think of 
as anything other than a quite ugly bug, but we'd have to live with it 
if that's the case.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: List/remove cached S/MIME capabilities

2010-02-19 Thread Jean-Marc Desperrier

Michael Ströder wrote:

This is because some influential people consider:
 * S/MIME caps are just a part of mail security protocol

Which is IMO complete non-sense.


Yes, and I don't believe this is the major reason why it's not possible 
in Seamonkey/Thunderbird.


The main reason is that nobody works on this part of the product, 
leading it to be obsolete (one part of the problem: if the algorithm 
selection was less obsolete, it would make a much more acceptable 
default), never evolving with any enhancement (manual configuration of 
settings like you ask for, or getting S/MIME capabilities from the X509 
extension when no mail has yet been received, to initialize the 
capabilities with better values than the default) or bug fix (using the 
proper NSS S/MIME code instead of hard-coding the use of SHA-1 in its 
own code).

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Fix for the TLS renegotiation bug

2010-02-19 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

Trying the different sub domain trick doesn't work on the same server
but different host and IP. I assume that's because the server reuses the
cached SSL session and initiates a renegotiation upon certificate
authentication. Does that make sense so far?


I just tried setting up a similar configuration, and thought more and 
more whilst doing so that it doesn't make sense, that it can't fail in 
the way you described. And it doesn't (with two ports, but it would 
definitely be the same with two IPs).


But whilst configuring it I met a problem that *could* be the cause of 
your problem.


Did you configure the "SSLVerifyClient require" option of the second 
virtual server on the *root* of the second virtual host?
It must not be inside a sub-directory, or you will get a renegotiation 
error, even if your URL directly points to that directory.
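
In other words, something like this sketch (hostnames and paths are of 
course placeholders), with no SSLVerifyClient override left in any 
<Location> or <Directory> block:

   <VirtualHost 192.0.2.2:443>
       ServerName authent.example.com
       SSLEngine on
       SSLCertificateFile    /etc/apache2/ssl/server.crt
       SSLCertificateKeyFile /etc/apache2/ssl/server.key
       # client authentication required for the whole virtual host
       SSLVerifyClient require
       SSLVerifyDepth  2
   </VirtualHost>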


Another point : We'll need to document that renegotiation is the default 
and systematic behavior of IIS, even when client authentication is 
required everywhere. You must change a flag with a script to correct that.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Fix for the TLS renegotiation bug

2010-02-17 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

On 02/14/2010 07:28 PM, Daniel Veditz:

[...] Firefox settings are currently extremely
permissive,


[...] it's breaking the client certificate authentication of a
couple of ten thousands of active user accounts at StartSSL. I take it
as a reward for being the only CA protecting sensitive information with
something better than username password pairs. :-)


Eddy, maybe you could talk here in mdt.crypto about the kind of 
difficulties you have, and we'd see together how website owners with 
similar problems should solve them.


If you currently have an https site that's partly open and partly 
accessible only with client authentication, I think the only reasonable 
way out is to split it in two:
have secure.startcom.com with no cert and authent.secure.startcom.com 
with client cert.


Technically, we could imagine some smart solution at the server level, 
but it would be complex, fragile and require code changes, so really the 
easiest is to simply separate completely.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


MDC : NSS_cryptographic_module : No doc on NSC_ModuleDBFunc

2010-02-08 Thread Jean-Marc Desperrier

Hi,

On the 
https://developer.mozilla.org/en/NSS_reference/NSS_cryptographic_module 
page, there's a link for NSC_ModuleDBFunc but it points nowhere.


Was the doc never written, or did it get lost in some reorganization of 
the site ?

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: My new role in 2010

2010-01-19 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

For over 13 years now I've been employed to work full time as a developer
of NSS and NSPR, but beginning in January 2010, I shall have a new job
where NSS is not part of my job description.


Good luck in that, Nelson.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: cert extension: authority key identifier (AKI)

2009-11-24 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

Interestingly I /think/ NSS is the only library which really has a
problem with it, to all of my knowledge (and I might be wrong with that)


You might. OpenSSL (therefore mod_ssl, etc.) also has a problem when it 
doesn't match. I think most other libraries also have a problem with it, 
or else don't use the keyid at all.



MS crypto doesn't have a problem with that IIRC.  That might be due that
it only checks the keyid in any case.


Libraries should consider the content of the AKI as a hint, and ignore it 
when it doesn't match. Almost all consider it authoritative instead.
MS goes so far as considering the keyid more authoritative than the DN 
(yes, that's really, really broken).

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: cert extension: authority key identifier (AKI)

2009-11-24 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

CAs that
make this mistake typically have to abandon and completely replace their
entire PKI (entire tree of issued certificates) when a CA cert expires and
its serial number appears in the AKI of other subordinate certs.  More than
once I've seen entire corporate PKIs have to be replaced due to this error.


I feel a bit awkward about that description of the problem.

I'd say it becomes a huge problem only when you want to do a *silent* CA 
upgrade. And there's a list of common problems in PKIs, their deployment, 
and the PKI software stack, that have the consequence that a silent 
upgrade is often the only way forward.


"Silent CA upgrade" here means generating a new CA certificate, with a 
longer timespan, *without* changing the private key, and keeping the 
same DN, so that you can use it instead of the old certificate to verify 
the signature of the old certificates issued below it.


But in an ideal world, you could change the private key, the software 
stack would handle this properly, old certificates would be verified by 
the old CA, new ones by the new CA, and you'd even have a way to change 
your root CA private key and have the new certificate handled as 
equivalent to the old one.


The world is not ideal, but as we hope for a change someday, I prefer to 
state properly that the real source of problems is the software that 
doesn't let you update the private keys of existing CAs in practice, with 
the consequence that you'd better be cautious and not include the 
issuer's issuer name and serial number inside the AKI.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Building NSS for OpenCSW (Solaris)

2009-11-24 Thread Jean-Marc Desperrier

Maciej Bliziński wrote:

I'd like to pass the -L and -R flags via environment
variables


For anyone else, CSW packages use this to tell the builds to use 
/opt/csw/lib to locate their dependencies.



What's the best way to make the NSS build read LDFLAGS and LD_OPTIONS?


That's a very valid question: does the NSS build system read those 
parameters from somewhere else?

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: Do big parts of security in mozilla suck?

2009-07-16 Thread Jean-Marc Desperrier

Udo Puetz wrote:

I think (and from googling also quite a lot of other people too) that
you should use the stores that are available on that platform.


I fully agree with that. And just keep a manual option to do otherwise 
for those who don't want their security component to rely on a Microsoft 
black box.


Including a PKCS11-CAPI conversion layer and setting this to be the 
default module would do the trick.



[...] it seems that the windows cert store is more robust/lax
in dealing with such things.


It does get easier when working at the OS level; also, you have to 
concede that regression/compatibility testing has always been a strong 
point of Microsoft.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Do big parts of security in mozilla suck?

2009-07-15 Thread Jean-Marc Desperrier

Udo Puetz wrote:

I've recently written about a windows firefox hardware token problem
(see list) and didn't get a solution before the discussion drifted off
into universalities. Problem not solved, customer unhappy and us too.


It's easy for discussions in a list such as this one to drift off, but it 
seems you failed to notice that Nelson listed for you, in his message 
from 04/07/2009 at 07:28, the actions that were required in order to 
investigate further what was going wrong.


He also offered you in that message some possibilities for what exactly 
was going wrong, even if he did not have enough info to be sure it 
was really one of those.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: S/MIME in Thunderbird

2009-07-10 Thread Jean-Marc Desperrier

Michael Ströder wrote:

- add a time-stamp and update the S/MIME capabilities
and timestamp whenever a new S/MIME message is received.
- use the cert extension solely when no signed S/MIME message was received
so far or the notBefore date of the e-mail cert is newer than the
timestamp of the last S/MIME caps stored.


I 100% agree with that, use a time-stamp, and when using the cert 
extension, set the time-stamp value to the issuance date.



Still this assumes that the
issuing CA really knows about the correct S/MIME caps which could be
true for corporate CAs issuing e-mail certs for a well-defined set of MUAs.


This would be a defect correction for RFC 4262: only use this extension 
in those conditions, or if you have properly evaluated its consequences. 
But the client can do no better than assume the cert issuer knew what 
it was doing.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

Re: CEN TS 15480 (Re: USB device profile for smart-card readers)

2009-07-06 Thread Jean-Marc Desperrier

Anders Rundgren wrote:

we see the start of going out of that through the European Citizen Card
(ECC) standard CEN TS 15480

This is something I really hate:
http://www.evs.ee/product/tabid/59/p-165216-cents-15480-22007.aspx
Paying for *open* standards!


In fact, I'm not sure I directed you to the most specifically pertinent 
standard. The card interface would be the one that CEN/TC224 is 
currently developing ( 
http://www.etsi.org/WebSite/document/Workshop/Security2006/Security2006S3_2_Helmut_Scherzer.pdf 
) based on CWA 14890, which *is* easily available on-line (officially, I 
believe, since it's not a standard, just an agreement). I in fact mostly 
know the French profile of this spec, based on what is apparently a 
pre-version of the CEN/TC 224 standard; you can get some view of this 
on this page 
http://www.soliatis.com/index.php?page=ias_ecc_test_suitepath=_



this scheme will get hard competition from a lot of places including
the token vendors who certainly do not want to become replaceable like USB
memory sticks.


You're quite right on this point, this is certainly why there has been 
until now so little progress on inter-operable smart cards.


But the same smart card vendors are also able to position themselves on 
a market where they are replaceable, when that market is for tens of 
millions of units like the one for EMV cards.


And this is where the bait is for this standard: the long-term 
perspective is to produce millions of ID/health-care cards for governments.



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: USB device profile for smart-card readers (was: Problem reading certificate from hardware token)

2009-07-03 Thread Jean-Marc Desperrier

Kyle Hamilton wrote:

I'm not aware of any such profile.  There is smart card profile
  but I doubt it has much to do with PKCS #11, it is rather about
  7816.

You're right, PKCS#11.

http://www.usb.org/developers/docs/EH_MR_rev1.pdf

But what is 7861?


He's referring to ISO 7816, the set of smart card standards:
http://www.cardwerk.com/smartcards/smartcard_standard_ISO7816.aspx

But I didn't even see a reference to that in the document you refer to, 
though USB smart card readers seem to be quite properly standardized, so 
it certainly does exist.


The trouble is that each smart card uses specific commands, which makes 
it impossible to go from ISO7816 to a universal pkcs#11 driver.


In Europe, we see the start of a way out of that through the European 
Citizen Card (ECC) standard CEN TS 15480 and the IAS (Identification, 
Authentication, Signature) service based on it, which this time enables 
a universal middleware, up to the pkcs#11 signature service layer. 
Unfortunately, very few cards comply with this standard.


In case you are interested in some details about this IAS ECC thing, 
here's a few pointers :

http://www.oberthurcs.com/press_page.aspx?id=211otherid=112
http://www.gemalto.com/products/multiapp_id_ias_ecc




--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: S/MIME in Thunderbird

2009-07-02 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

If Microsoft has merely taken a DER-encoded object from another standard
and has incorporated it into a cert extension, that seems fine to me.
I hope they did it in such a way that existing BER/DER parsers of the
sMIMECapabilities attribute can just parse the extension body directly.


OpenSSL recognises it as sMIMECapabilities, but apparently does not 
include the layer to really interpret that extension.


Here is the content of the extension as it appears inside a certificate 
I have :

MEQGCSqGSIb3DQEJDwQ3MDUwDgYIKoZIhvcNAwICAgCAMA4GCCqGSIb3DQMEAgIAgDAHBgUr
DgMCBzAKBggqhkiG9w0DBw==

and the result dumpasn1 gives on it :
   0  68: SEQUENCE {
   2   9:   OBJECT IDENTIFIER sMIMECapabilities (1 2 840 113549 1 9 15)
  13  55:   OCTET STRING, encapsulates {
  15  53:     SEQUENCE {
  17  14:       SEQUENCE {
  19   8:         OBJECT IDENTIFIER rc2CBC (1 2 840 113549 3 2)
  29   2:         INTEGER 128
        :         }
  33  14:       SEQUENCE {
  35   8:         OBJECT IDENTIFIER rc4 (1 2 840 113549 3 4)
  45   2:         INTEGER 128
        :         }
  49   7:       SEQUENCE {
  51   5:         OBJECT IDENTIFIER desCBC (1 3 14 3 2 7)
        :         }
  58  10:       SEQUENCE {
  60   8:         OBJECT IDENTIFIER des-EDE3-CBC (1 2 840 113549 3 7)
        :         }
        :       }
        :     }
        :   }

I'll send you the cert in private mail. I've just checked that the 
extension appears only in the mail encryption cert (KU=key exchange), not 
in the mail signature cert (KU=signature).


Also, testing MCS in any server edition of Windows is nothing more than 
going into the Control Panel, selecting "Add/Remove Windows Components", 
clicking "Certificate Services", and finding the install CD/DVDs again 
(which might be more difficult). There are a few questions you need to 
answer, but it's really not difficult; you just need to select a "Stand 
Alone" authority so that you don't need to integrate it with Active 
Directory.



If you could supply a specification for this new extension, I'd file an
RFE for Thunderbird/NSS to handle these certs in the intended manner.


I'm not very well placed to give a specification, but it seems it's 
really nothing more than "take sMIMECapabilities, include it inside X509".


It would be good to include the RFE also in Dogtag then.
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: S/MIME in Thunderbird

2009-06-30 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

Does this assume LDAP for acquiring the certificate without a signed
  S/MIME message?  (So it is only relevant in corporate setting?)



No.  There are many ways to get a cert for an email correspondent.
There is only one way to get that correspondent's email capabilities in a
form that Mozilla mail clients can understand, and that is to receive a
signed email.


For what it's worth, I've seen that the Microsoft Certificate Server 
product includes a sMIMECapabilities attribute directly inside X509 mail 
encryption certificates it issues.


This non-standard usage could be interpreted as "I'll make sure any MUA 
that I use with this certificate will support at least this level of 
security".


Whilst I don't usually support Microsoft in reinterpreting standards, in 
that case I'll make an exception.


After all, even when you respect the standard and put sMIMECapabilities 
only inside an S/MIME message, nothing guarantees that you'll be using 
the same MUA when you read the response.
It's up to you to make sure that none of the MUAs you use makes promises 
another of them can't support, which is only a little less dangerous 
than including them directly in the certificate.


Also, it can be said that an encryption certificate without the info on 
what encryption algorithms the holder is ready to use is much less useful.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: S/MIME in Thunderbird

2009-06-19 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

if you send an encrypted message to
someone from whom you have never received a signed S/MIME message, you will
use weak encryption.


Thank you for this useful description.

I feel it would make sense to open a bug to change this default.

Rationale: if someone went through the hassle of doing everything it 
requires to send an encrypted mail, he needs his message to be encrypted 
more than he needs it to be 100% compatible with everybody.
And today 40-bit security is so easy to break that nobody can seriously 
call that encrypted.


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: How to export private key using pk12util

2009-04-24 Thread Jean-Marc Desperrier

Arshad Noor wrote:

The reason we use the PKCS#8 format is only because, in the multi-step
process of generating a key-pair, creating a CSR and getting a digital
certificate from an internal/external CA, the private-key needs to be
temporarily stored securely until a CA issues the digital certificate.


It's technically feasible (it does not break the format) to create a 
private-key-only pkcs#12, but I don't know if the NSS API around pkcs#12 
supports it.



--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: UTF-8 Hashing

2009-04-23 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

Is that python code?  I thought it was JavaScript.


Yes, you're right, I had a really too quick look at it :-)

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: UTF-8 Hashing

2009-04-22 Thread Jean-Marc Desperrier

starryrendezv...@gmail.com wrote:

hash: function(str,method) {
[...] str.charCodeAt(i)


python quite probably outputs the value of str.charCodeAt(i) as some 
variant of a UTF-16 value. Or UCS-2 with no handling of surrogates.

In which format is the string inside the file that md5sum hashes?

Rather than that, you probably should use the python equivalent of 
java's String.getBytes(charset) 
http://java.sun.com/j2se/1.4.2/docs/api/java/lang/String.html#getBytes(java.lang.String), 
determining what is the proper value of charset for your use.
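
If you want to stay in JavaScript, a classic (if ugly) sketch is to 
re-encode the string so that each char code is one UTF-8 byte before 
feeding it to the MD5 routine:

   // after this, utf8Str.charCodeAt(i) is the i-th UTF-8 byte (0..255)
   var utf8Str = unescape(encodeURIComponent(str));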


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: How to get logs what should we request when a bug involves crypto

2009-04-09 Thread Jean-Marc Desperrier

Ludovic Hirlimann wrote:

Often we get issue that involve certificates, or crypto errors. Are
there any ways to log what PSM or NSS do the way we can log other
protocols - I haven't found anything in the documentation


I'm a bit surprised. If you know how to log other protocols, you know the 
right keyword (NSPR_LOG_MODULES), and you should have found pages like 
this one: "Using the PKCS #11 Module Logger"

http://www.mozilla.org/projects/security/pki/nss/tech-notes/tn2.html

About PSM, there's clearly much less documentation. But the same keyword 
should have led you to nsSecureBrowserUI.


I'd say the best way to get *guaranteed* relevant info is to free-text 
search for PR_NewLogModule from the security starting point of MXR.

http://mxr.mozilla.org/security/search?string=PR_NewLogModule

The following hits are (more or less) relevant :
/security/nss/lib/util/nssilock.c
* line 182 -- nssILog = PR_NewLogModule("nssilock");
/security/nss/lib/libpkix/pkix_pl_nss/system/pkix_pl_lifecycle.c
* line 172 -- pkixLog = PR_NewLogModule("pkix");
/security/nss/lib/pk11wrap/debug_module.c
* line 2568 -- modlog = PR_NewLogModule("nss_mod_log");
/security/nss/lib/pki/tdcache.c
* line 163 -- s_log = PR_NewLogModule("nss_cache");
/security/manager/ssl/src/nsNSSComponent.cpp
* line 284 -- gPIPNSSLog = PR_NewLogModule("pipnss");
/security/manager/ssl/src/nsNTLMAuthModule.cpp
* line 56 -- PRLogModuleInfo *gNTLMLog = PR_NewLogModule("NTLM");
/security/manager/boot/src/nsSecureBrowserUIImpl.cpp
* line 176 -- gSecureDocLog = PR_NewLogModule("nsSecureBrowserUI");
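
Once you know the module names, enabling the logging is just a matter of 
setting the NSPR environment variables before starting the application, 
for example:

   NSPR_LOG_MODULES=pipnss:5,nsSecureBrowserUI:5
   NSPR_LOG_FILE=/tmp/psm.log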
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: The keygen element

2009-04-07 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

Adding parameters which adds additional control such a policies and
forcing of smart cards (storage device) would be extremely helpful, once
you get to add some features.


No, the keygen tag is just too bad to be updated to something really useful.

crypto.generateCRMFRequest is a much better and more flexible solution, 
but I'd like it to be usable to generate pkcs#10 requests, which have 
become the standard.
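
For reference, a rough sketch of a call (the argument values are purely 
illustrative; see the MDN JavaScript crypto page for the exact semantics):

   var crmfObject = crypto.generateCRMFRequest(
       "CN=Example User",          // requested subject DN
       null, null,                 // regToken, authenticator
       null,                       // no escrow authority cert
       "submitRequest();",         // code evaluated once generation is done
       2048, null, "rsa-dual-use"  // key size, params, key gen algorithm
   );
   // in submitRequest(), crmfObject.request holds the base64 CRMF message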


--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Allocator mismatches

2009-03-31 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

The problem is in the way that Mozilla builds JEMalloc for FF on Windows.
They build a replacement for the Microsoft C RunTime Library.  This
replacement is a hybrid, built in part from JEMalloc source code, and in
part from Microsoft's source code for MSVCRT, which source code is ONLY
available with the professional edition of the MSVC compiler.


You know Nelson, I spent a little time investigating this, and whilst it 
would be some work to create a fully open source alternative, it would 
not be *that* much work. A good starting point is the reimplementation 
of MSVCRT in Wine (but there might also be some low-level .obj files to 
rewrite in addition, *and* Wine does not have the latest version of MSVCRT).


So the best solution would be for one of those people that this 
situation really annoys to just do it instead of complaining.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: client certificates unusable?

2009-03-18 Thread Jean-Marc Desperrier

Robert Relyea wrote:

[...] At the
cost of about 20 bytes per client you would rather chew up CPU and
network resources?


It's usually very far from being that small. It can't be that small if 
client authentication is used.


There's an extension to TLS to offload that cost onto the client (the 
server sends him the encrypted content of the session cache entry, and 
the client sends it back when he needs to reopen the session).

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: SV: Questions about Potentially Problematic Practices

2009-03-10 Thread Jean-Marc Desperrier

Peter Lind Damkjær wrote:

Varga Viktor wrote:

snip

 OCSP request with multiple certificate from different CA

--

The RFC has the possibility to send multiple certificate serial numbers in an
OCSP request. It is not defined whether it is allowed or not to put two
certificate serial numbers from different CAs.


Request ::= SEQUENCE {

 reqCert CertID,


Each CertID in the request contains both the serialNumber *and* the 
issuerNameHash. So it's perfectly well defined that you can use it to 
identify two certificates from different CAs.
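
For reference, the CertID structure from RFC 2560:

   CertID ::= SEQUENCE {
       hashAlgorithm   AlgorithmIdentifier,
       issuerNameHash  OCTET STRING,  -- hash of issuer's DN
       issuerKeyHash   OCTET STRING,  -- hash of issuer's public key
       serialNumber    CertificateSerialNumber }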



In the response, there is only one signature field.


So the signature needs to be from an OCSP responder that's valid for 
*both* CAs.


This means, as per 4.2.2.2 "Authorized Responders", that this 
configuration can only work if the responder matches a local 
configuration of an OCSP signing authority, and therefore cannot simply 
be the CA or a certificate that has been delegated the OCSP responder 
role with id-ad-ocspSigning.



Several serial numbers in a request do not comply to OCSP responders
that follow RFC5019 so I'll suggest to use a strategy to split it into
several requests.


Yes, only responders that implement the full RFC 2560 can accept this, 
not those that implement the RFC 5019 high-volume environments profile.


What's more, the requirement for local configuration means it can only be 
used with a local responder that's responsible for responding for *all* 
the CAs.


So it's much better not to try to handle this special case, which can only 
work in a very restricted environment, and to split the request.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Jean-Marc Desperrier

Kyle Hamilton wrote:

[...]  this CA in question is not generating improper
certificates.  It is generating proper CRLs, and it is simply encoding
and transmitting them as PEM-encoded DER-encoded CRL structures when
RFC5280 (which, by the way, I've been repeatedly told that NSS does
*NOT* comply with) states that they must be sent as DER-encoded.


It does not *fully* support RFC 3280, but I think all of what it supports 
is RFC 3280-compatible, and it also seems to me that in Firefox 3 it 
supports quite a bit more of RFC 3280 than OpenSSL does.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Jean-Marc Desperrier

Nelson B Bolyard wrote:

Kathleen Wilson wrote, On 2009-02-24 12:21:


* CRL issue: Current CRLs result in the e009 error code when
downloading into Firefox. ComSign has removed the critical flag from
the CRL, and the new CRLs will be generated in April.


Was that with FF 2?   FF 3 should not be showing hexadecimal error numbers.
  I will be very upset with PSM if it is still showing hex
error numbers in FF 3.x!!


With FF 3.2a1pre latest nightly the result of dropping the URL 
http://fedir.comsign.co.il/crl/ComSignSecuredCA.crl on a browser window is :


The application cannot import the Certificate Revocation List (CRL).
Error Importing CRL to local Database. Error Code:e009
Please ask your system administrator for assistance.

What should it show instead? Where's the bug number?
--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

On 02/25/2009 08:31 PM, Gervase Markham:

On 23/02/09 23:54, Eddy Nigg wrote:
[...]

Only CAs are relevant if at all. You don't expect that 200 domain names
were registered by going through anti-spoofing checking and measures, do
you?!

[...]


Outsh, sorry! That should have been 200 *million* domain names were
registered by going through some anti-spoofing checking and measures...


OTOH domain spoofing is dangerous *even* when there's no certificate 
involved, so it makes sense to require solving it at the registrar/DNS 
level, and not at the CA level.


But you are right to point out that the volume of domain names involved 
makes any procedure that's not fully automated unrealistic.


So I think Mozilla should require that the procedure be fully 
automated, and not accept any solution that requires human 
intervention to approve requests, even if only for a portion of them.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: ComSign Root Inclusion Request

2009-02-26 Thread Jean-Marc Desperrier

Jean-Marc Desperrier wrote:

[...]
With FF 3.2a1pre latest nightly the result of dropping the URL
http://fedir.comsign.co.il/crl/ComSignSecuredCA.crl on a browser window
is :

The application cannot import the Certificate Revocation List (CRL).
Error Importing CRL to local Database. Error Code:e009
Please ask your system administrator for assistance.

What should it show instead, where's the bug number ?


In my opinion, the right bug is bug 379298.

bug 107491 used to be the bug about that, but it has changed into a meta 
bug since "Patch v9 - netError.dtd in dom and browser" and comment 82.


I think it would have been better to create a new meta bug, and to do 
the wording / nssFailure changes after that on separate bugs blocking 
that meta bug instead.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto


Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Jean-Marc Desperrier

Paul Hoffman wrote:

At 7:09 AM +0100 2/24/09, Kaspar Brand wrote:

Kyle Hamilton wrote:

Removal of support for wildcards can't be done without PKIX action, if
one wants to claim conformance to RFC 3280/5280.

Huh? Both these RFCs completely step out of the way when it comes to
wildcard certificates - just read the last paragraph of section
4.2.1.7/4.2.1.6. PKIX never did wildcards in its RFCs.


Which says:
Finally, the semantics of subject alternative names that include
wildcard characters (e.g., as a placeholder for a set of names) are
not addressed by this specification.  Applications with specific
requirements MAY use such names, but they must define the semantics.

At 10:50 PM -0800 2/23/09, Kyle Hamilton wrote:

RFC 2818 (HTTP Over TLS), section 3.1.


RFC 2818 is Informational, not Standards Track. Having said that, it is also 
widely implemented, and is the main reason that the paragraph above is in the 
PKIX spec.


Just one thing: the use of a wildcard certificate was a red herring 
in the implementation of the attack.


What's truly broken is that the current i18n attack protection relies on 
the checking done by the registrar/IDN, and that the registrar/IDN can 
only check the second-level domain name component.


Once they have obtained their domain name, attackers can freely use the 
third-level domain name component to implement any i18n attack they want, 
even if no wildcard certificate is authorized.


This is not to say that wildcard certificates are not bad, evil, or 
anything else, but that nothing truly new has been brought about by this 
attack.


So talk about wildcard certificates all you want, but this is a separate 
discussion from the discussion about the solution for this new i18n attack.
And the solution for it will not be wildcard-certificate related, and will 
not be easy or obvious, so it needs to be discussed as widely as possible.
Also, there will be no crypto involved in the solution, as it's not 
acceptable to just leave ordinary DNS users out in the cold with regard 
to the attack. So it needs to be discussed on the security group, not 
crypto.

--
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto

