Re: Allow Redaction of issues detailed in BR Audit statements?

2014-08-27 Thread Jean-Marc Desperrier

David E. Ross wrote:

With a redacted audit report, the presumption
should be that hidden negative information exists that would disqualify
the certification authority from having its root certificate in the NSS
database if such information were disclosed.

any redaction would imply the existence of hidden negative
information that would necessitate removal of the affected root
certificate from the NSS database if such information were disclosed.


I think there's a misunderstanding here.

I understand that the CAs are OK with people knowing that some unknown 
serial numbers would give status “good”, but not with them knowing the 
exact values of the serial numbers concerned, which could be used to 
attack the system. Likewise with the 1024-bit certs with validity beyond 
2013: it's useful to know they existed, but a different matter to get the 
name of the client (in that case, Mozilla could publish the number of 
certificates concerned).
Or letting people know which accounts exactly didn't have multi-factor 
authentication for certificate issuance.


I understand the redaction not to be about which kind of problem there 
was, but about whether specific, individually identifying information 
gets published about each problem.

___
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy


Re: Microsoft deprecating SHA-1 certs by 2016

2013-11-13 Thread Jean-Marc Desperrier

Phillip Hallam-Baker wrote:

also likely to brick a large
number of cell phones as far as online commerce goes.


Which smartphone OS would you expect not to support SHA-256?

It's likely that any that doesn't, 3 years from now, will have enough 
security holes that it would not be very reasonable to use it for online 
commerce.



Re: Netcraft blog, violations of CABF Baseline Requirements, any consequences?

2013-10-31 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

If Firefox really uses the CRLDP


No, it has never used the CRLDP to download the CRL.
People have to import the CRL manually and then activate the 
auto-update, and nobody does that. What's more, if the CRL becomes 
outdated for some reason, there is no warning.


The effective solution is rather to implement support for Google's 
CRLSets in the Mozilla products:

https://sites.google.com/a/chromium.org/dev/Home/chromium-security/crlsets



Re: Proposed Feature: Application Reputation system

2012-06-09 Thread Jean-Marc Desperrier

On 08/06/2012 18:02, Sid Stamm wrote:

binary-file reputation system based on a whitelist of binaries and
domains, and identifies benign executables as windows users attempt to
download them.  Benign executables can bypass any "are you sure" UI,
making it less annoying to users.


But it also becomes a lot more valuable to evildoers. I'm not at ease at 
all with the idea of considering anything coming from the xxx domain as 
safe. I'd feel OK with whitelisting the hashes of known-safe binaries 
instead (and known-safe app signers).
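A hash-based whitelist of the kind suggested here is easy to sketch. The digest set and function names below are purely illustrative and not part of any actual proposal; a real implementation would stream the binary from disk rather than hold it in memory:

```python
import hashlib

# Hypothetical whitelist of SHA-256 digests of known-safe binaries.
# (The entry below is the well-known digest of empty input, for demo only.)
KNOWN_SAFE_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of a binary blob."""
    return hashlib.sha256(data).hexdigest()

def is_whitelisted(data: bytes) -> bool:
    """True if the binary's digest appears in the whitelist."""
    return sha256_hex(data) in KNOWN_SAFE_HASHES
```

Unlike a domain-based whitelist, this keeps working even if an attacker manages to upload a new file to a trusted server: the unknown binary simply fails the lookup.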


You list "Forcing application download sites to use https" as a 
non-goal. IMO this is required to make domain-name-based whitelisting 
acceptable. But actually I believe domain-name-based whitelisting is 
intrinsically weak, because weaknesses that allow an attacker to upload 
his own file somewhere on the web server appear too frequently. And I 
believe many admins today check the integrity of files they know exist 
on the server, but far less frequently check that no unexpected new one 
has appeared.


As for app signers, we've seen something like four different cases of an 
app-signing certificate being stolen and used for virus propagation in 
the last two years. I think it would be best to require the owner of the 
certificate to store it on a hardware token, so that the private key 
can't be copied, if he wants to get on the whitelist.

___
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security


Re: [b2g] Permissions model thoughts

2012-03-07 Thread Jean-Marc Desperrier

Jim Straus wrote:

   I definitely don't like the Android model.  We'll have to figure out
exactly how to communicate permissions requests to users.  On the other
hand, an appropriately vetted and signed app could be given permissions
implicitly in a permissions manifest, so the user doesn't need to deal
with it.  Also, some kind of heuristics may make it possible for the
permissions manager to deal with things internally, again not bothering
the user.  These are areas that need thought and experimentation.


The Android model is broken, but so was the permission pop-up model.

As ROC said some months ago, the <input type="file"> permission model 
just happens to work properly. It can be the model for a good solution 
if we properly understand why it works.


So what is needed, IMO:
- Find a way to not ask the user at all.
  To get that:
	- don't give applications access to non-GUI API layers that they can 
abuse
	- give them access to an entry point that controls the GUI of the 
sensitive part
	- allow an application to call an interface with a GUI only when it 
has focus
	- opaque elements are a good pivot to make this work; they can render 
safe something that would be very dangerous if not opaque.


- If that fails and you do have to ask, ask to allow a feature; never 
ask to authorize access to some technical element the average user 
understands nothing about.
- In some cases, the GUI could just *inform* the user instead of asking. 
Maybe the user just needs to be informed that an app is currently 
requesting his location.
- Role isolation can help too. Access to contacts is not actually 
dangerous if the app has no network access and cannot transmit them to 
another application.


To implement that you will need:
- A list of preexisting modules that safely handle the most often needed 
sensitive functionality.
- An extensibility mechanism that allows creating new modules to handle 
cases the preexisting ones don't handle properly. Creating one will be 
long, slow, and hard compared to just writing a normal application, and 
the code will have to be fully audited (probably only open-source 
modules will be possible).
- Applications won't need any authorization, because they are not 
allowed to do anything sensitive directly. They may however be linked to 
a list of the sensitive modules they use, so that you can audit that and 
react if one is a bit strange.
Also, we could end up with some modules that will only be safe if the 
application is strictly limited in what else it can do.
In concrete terms: you would be allowed to request full and complete 
access to contacts, but then you become a contact management application 
that has access to *nothing* else. That would be the role-isolation 
protection layer.
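As a rough illustration of the role-isolation idea, a manifest check could refuse conflicting capability requests. The module names and rules here are hypothetical, invented only to show the shape of such a check:

```python
# Illustrative rule: an app requesting full contacts access may not also
# request capabilities that would let it exfiltrate the data.
EXCLUSIVE_ROLES = {
    "contacts-full": {"network", "ipc"},  # denied to contact managers
}

def validate_manifest(requested: set[str]) -> list[str]:
    """Return a list of conflicts found in a hypothetical app manifest."""
    conflicts = []
    for role, forbidden in EXCLUSIVE_ROLES.items():
        if role in requested:
            bad = requested & forbidden
            if bad:
                conflicts.append(f"{role} excludes: {sorted(bad)}")
    return conflicts
```

A manifest requesting both "contacts-full" and "network" would be rejected at install time, with no permission prompt ever shown to the user.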



Re: Man-in-the-browser malware

2012-02-22 Thread Jean-Marc Desperrier

ianG wrote:

That all worked to contain the problem, and now the 2nd gen attacks are
coming through.  E.g., here:

http://financialcryptography.com/mt/archives/001349.html


$45,000 is too small for the police to investigate!? The bad guys really 
can make a lot of money with impunity.



Re: OCSP Tracking

2011-09-06 Thread Jean-Marc Desperrier

On 06/09/2011 11:48, Devdatta Akhawe wrote:

[...]  if I visit
https://www.secure.com in private browsing mode; Firefox makes a OCSP
request. After closing private browsing mode and going back to the
normal mode, if I go to https://www.secure.com then Firefox caches the
OCSP responses and doesn't make a new OCSP request. This seems like a
leak of information that should be disabled. What do others think?
[...]


Yes, it's a bug (and not the only one); you can report it on Bugzilla.

But it might actually be just a specific application of the more generic 
bug that the network cache is not properly separated between private and 
non-private mode.




Re: Mixed HTTPS/non-HTTPS content in IE9 and Chrome 13 dev

2011-05-18 Thread Jean-Marc Desperrier

Brian Smith wrote:

See https://twitter.com/#!/scarybeasts/status/69138114794360832:
Chrome 13 dev channel now blocks certain types of mixed content by
default (script, CSS, plug-ins). Let me know of any significant
breakages.

See
https://ie.microsoft.com/testdrive/browser/mixedcontent/assets/woodgrove.htm
 IE9: http://tinypic.com/view.php?pic=11qlnhys=7
Chrome: http://tinypic.com/view.php?pic=oa4v3ns=7

IE9 blocks all mixed content by default, and allows the user to
reload the page with the mixed content by pushing a button on its
doorhanger (at the bottom of the window in IE).

Notice that Chrome shows the scary crossed-out HTTPS in the address
bar.


This is actually much more a subject for the .security group, Brian.


Re: NSS/PSM improvements - short term action plan

2011-04-14 Thread Jean-Marc Desperrier

Zack Weinberg wrote:

Counterpoint: If the attacker is (or colludes with) a rogue CA, they are
in a position to make the *entire contents* of the certificate be
whatever they want.  They can forge EV status


Not really. EV status depends on the root certificate. If we were to 
lock on something else, we'd make sure it's based on the CA's values 
rather than those of the issued certificates. The key that signs the CA 
certificate ought to be offline and much harder to compromise.



Re: NSS/PSM improvements - short term action plan

2011-04-14 Thread Jean-Marc Desperrier

Zack Weinberg wrote:

a real possibility in the attacker-is-a-nation-state scenario


*Public* PKI as implemented in browsers does *not* protect against 
nation-state attack scenarios. It just can't.
A nation-state attack scenario means, amongst other things, that the 
attacker can get a perfectly valid ID that is in fact false (think of 
the Dubai Hamas assassination and the British passports). No commercial 
CA will be able to do anything against that.


If that's the scenario you want to fight, and I'm not saying there 
aren't valid reasons to aim for it, you need to either not use PKI at 
all, or use your own private one with your own rules.
But it's not a useful purpose of a general-usage browser to try to do 
anything about that.



Re: NSS/PSM improvements - short term action plan

2011-04-09 Thread Jean-Marc Desperrier

On 09/04/2011 00:52, Adam Barth wrote:

- CA locking functionality in HSTS or via CAA

 There's significant interest in this feature from chrome-security
as well.


What about EV locking?

How does a site change CA after it has started enabling CA locking?
Would you allow locking to multiple CAs, so that the site would start by 
adding the new CA for a while, still using the old cert, and then hope 
for the best after making the switch?



Re: Targetting specific vulnerabilities

2010-09-27 Thread Jean-Marc Desperrier

Ben Bucksch wrote:

1. It does indeed give attackers an advantage to know which security
holes I am vulnerable to. [...]
True, a well-written attack could use rendering engine feature changes
to detect the version. But not all security updates are detectable like
that, hopefully very few in fact,


I'm not so sure. Emergency updates correct only one bug, but other 
updates correct a bunch of issues, some of which, while not very 
critical, can be easy to detect; I think mfsa2010-63 and mfsa2010-46 
are probably in that case.

You end up narrowing it down to a small window of versions quite easily.


and that needs client-side code that makes things more detectable again.


I think that's the part of what you say that makes the most sense, but 
wasn't the plan at one time to remove identification from the 
User-Agent, and leave it available only from JavaScript?
If you can run JS in the browser, and are very sophisticated, you have a 
large window of opportunity for detecting the version from its behavior 
in a way that is really hard to detect.



2. Don't conclude from current attacks.  Just because current attacks

 don't do A today doesn't mean it's neglectable.

I don't believe that's really what was done here.
And whilst it's useful to think of possible future attacks in order to 
be a step ahead, it's not very useful to invest a lot of energy 
preventing an attack that might never materialize (the strongest reason 
being that attackers will often find a smart way to completely bypass 
the Maginot line you spent a lot of time building). So it fully makes 
sense to first and foremost protect against the attacks that *do* exist.


A point to take into account also is that the kind of attacker you 
consider here is very dedicated and can spend quite a lot of money if 
needed. So they are quite likely to buy a zero-day vulnerability on the 
black market, and those by nature tend to work on all the minor versions 
(those that don't are mostly those that have been /introduced/ by a 
security fix, and fortunately they are not too frequent).

Case in point : the zero-day IE flaw attack against Google.


Re: IE more secure than Firefox ?

2010-08-31 Thread Jean-Marc Desperrier

Nassim KACHA wrote:

One speaker at Microsoft has dared to justify the slowness of IE compared
to its competitors by its highest level of security. You should know that
Microsoft makes a wide propaganda in Algeria.


You could counter that with the following independent report, which 
aggregates a number of different vulnerability metrics. Basically all of 
them show IE as the weakest browser by far:

http://www.webdevout.net/browser-security

A major indicator of security, more than the absolute number of 
incidents, is how long they go unpatched.
This is a bit dated now, but IE6's score in 2006 in this regard was 
abysmal:

http://blog.washingtonpost.com/securityfix/2007/01/internet_explorer_unsafe_for_2.html

For much more recent data, you can use the Pwn2Own 2010 example.
Two vulnerabilities, against Firefox and IE, were disclosed at that 
time. Mozilla issued a corrected version of Firefox on the 1st of April, 
only 8 days later:

http://www.zdnet.com/blog/security/mozilla-firefox-first-to-patch-pwn2own-vulnerability/6008

But the fix for IE was published only on the 8th of June, a month and a 
half after the vulnerability had been known:

http://www.zdnet.com/blog/security/microsoft-finally-fixes-pwn2own-browser-flaw/6628


Re: Who is using NSS in their projects?

2010-03-03 Thread Jean-Marc Desperrier

davidwboswell wrote:

I maintain a list of applications that use Mozilla technologies in
their projects and wanted to add more examples of projects that use
NSS.


You obviously forgot to include Google Chrome in the list!! ;-)



Re: Firefox Add-ons

2010-02-08 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

no CA was here admitted under these conditions for having the code
signing bit turned on.

I'm not saying that at some point in PKI history this wasn't done. It's
not done today and fee free to publicly name the CA which does that.


Last I checked, there definitely were some code signing certificates 
basically issued under terms of "if the credit card check comes back 
OK, issue it". That was a little while ago, though.


But really, it's *hard* to do better than that, better than "send us 
your doctored ID by fax so that we can check whether you pass the bar 
of having minimal Photoshop skills".


If and when people have a governmentally issued cryptographic ID card, 
it will become a lot easier, but then the code signing CA will have 
little room for added value.



Re: Firefox Add-ons

2010-02-06 Thread Jean-Marc Desperrier

On 06/02/2010 19:47, Eddy Nigg wrote:

But I guess you would think twice to sign (malicious) code with your
name - any code for that matter.


How hard is it to sign it with a cert bought with a stolen credit card 
number, using the name from the card?


A $50 code signing certificate just buys you $50 worth of security...


Re: dns-prefetch

2009-07-27 Thread Jean-Marc Desperrier

Daniel Veditz wrote:

4. Acknowledge privacy is dead and don't worry about it.


I tend to like that solution, but since this weakness would let spam 
senders confirm that email addresses exist, it's not really adequate here.




Re: dns-prefetch

2009-07-24 Thread Jean-Marc Desperrier

Johnathan Nightingale wrote:

But with prefetch enabled, they could potentially harvest a significant
amount of information about the contents of your emails by watching all
the prefetch requests


But that will be disclosed anyway if the user actually follows the link.
And I get a lot of spam from adultfriendfinder.com ;-)

The most serious attack seems to me to be that the attacker can know 
*when* exactly you read any given mail.



Re: Comments on the Content Security Policy specification

2009-07-17 Thread Jean-Marc Desperrier

Daniel Veditz wrote:

CSP is designed so that mistakes of omission tend to break the site.
This won't introduce subtle bugs; rudimentary content testing
will quickly reveal problems.


But won't authors fail to understand how to solve the problem, and open 
everything wide? From experience, that's what happens with technologies 
that are too complex.

A simpler syntax for simple cases really would help; it's just that Ian 
is coming a bit late with this.





Re: Comments on the Content Security Policy specification

2009-07-17 Thread Jean-Marc Desperrier

Bil Corry wrote:

CSP is non-trivial; it takes a bit of work to configure it properly
and requires on-going maintenance as the site evolves.  It's not
targeted to the uninformed author, it simply isn't possible to
achieve that kind of coverage -- I suspect in the pool of all
authors, the majority of them don't even know what XSS is, let alone
ways to code against it and using CSP to augment defense.


But did you try to get feedback, not from the average site author, but 
from those with experience successfully protecting large, frequently 
evolving sites against XSS?


If the syntax has to be ugly, then there should be a tool that takes a 
site and computes the appropriate CSP declarations.


In fact, a solution could be that every time the browser refuses to 
download a resource due to CSP rules, it emits a warning on the 
JavaScript console together with the minimal CSP authorization that 
would be required to obtain that resource.
This could help authors write the right declarations without 
understanding much about CSP.
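Such a console helper could be sketched roughly like this. The directive names follow CSP naming, but the mapping table and the function itself are hypothetical, not any browser's actual API:

```python
from urllib.parse import urlsplit

# Map a blocked resource type to the CSP directive that governs it.
TYPE_TO_DIRECTIVE = {
    "script": "script-src",
    "style": "style-src",
    "image": "img-src",
    "frame": "frame-src",
}

def suggest_csp(resource_type: str, blocked_url: str) -> str:
    """Suggest the minimal CSP source entry that would allow blocked_url."""
    parts = urlsplit(blocked_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    directive = TYPE_TO_DIRECTIVE.get(resource_type, "default-src")
    return f"{directive} {origin}"
```

For a blocked script at https://cdn.example.com/app.js, the console warning would carry the suggestion "script-src https://cdn.example.com", which the author can paste into the policy.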


PS: Sorry for the multi-posting earlier; I was trying to cross-post to 
www-arch...@w3.org but it didn't work, and I did not know it had sent 
the message to the group.



Re: Shared security Db in FF-3.5?

2009-07-16 Thread Jean-Marc Desperrier

Nelson Bolyard wrote:

[...] In NSS 3.12, you must tell NSS every time
it is initialized whether it is using old (Berkeley, default) or new
(Sqlite3) DBs.  This may be done in any of (at least) 3 different ways,
including an environment variable, a directory name prefix, or a
programmatic function call (IIRC).


Oh, too bad. I think it would be better then if Firefox programmatically 
set NSS to use sqlite3 whenever the sqlite3 file exists.



An annoying limitation is that the certificate file*must*  be in the
profile directory, there's no way to set an absolute path, so it's still
hard to use it as a multi-application db.

hmm.  I think that is a Firefox limitation, not an NSS limitation.
But I could be wrong about that.


Yes, it is a Firefox limitation. I think there's already a bug open 
about that.



Re: Shared security Db in FF-3.5?

2009-07-06 Thread Jean-Marc Desperrier

Nelson Bolyard wrote:

By default, it is still the old single-process cert8 and key3 DBs,
as before.

However, FF 3.5 has the code to support shared-access cert9 and key4 DBs,
based on sqlite3.  You can force FF 3.5 to use that by setting an
environment variable.


My understanding is that if you start FF *once* with the setting enabled 
for the new DB format, the database will be converted, and after that 
point it will use the new format every time, without any special setting.

Maybe you could even convert the database externally, and Fx would use 
the new format the next time it starts?


An annoying limitation is that the certificate file *must* be in the 
profile directory, there's no way to set an absolute path, so it's still 
hard to use it as a multi-application db.





Re: Return of i18n attacks with the help of wildcard certificates

2009-03-04 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

[...]  When do we expect SSL? On submit or on
password fields in a form[...]


IF page contains form
AND form contains password field
THEN flash insecure form warning

Could be done. But there had better be cross-browser agreement on this, 
coupled with a way to offer (low/no)-cost SSL to everybody.
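The IF/THEN rule above is straightforward to prototype. This sketch, using Python's html.parser purely as an illustration, is only a rough approximation of what a browser would check:

```python
from html.parser import HTMLParser

class InsecureFormDetector(HTMLParser):
    """Flag pages that contain a password field inside a form."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.insecure = False

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.in_form = True
        elif tag == "input" and self.in_form:
            if dict(attrs).get("type") == "password":
                self.insecure = True

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

def should_warn(html: str, is_https: bool) -> bool:
    """Apply the rule: warn only when a password form is served over plain http."""
    det = InsecureFormDetector()
    det.feed(html)
    return det.insecure and not is_https
```

A real browser would of course hook the DOM rather than reparse the page, and would also have to consider forms whose *submit target* is http even when the page itself is https.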



Re: Return of i18n attacks with the help of wildcard certificates

2009-03-03 Thread Jean-Marc Desperrier

Gervase Markham wrote:

[...]
We just turned hostname display UI for SSL on, according to The Burning
Edge...


This is a nice change; I found out about it on the Burning Edge too :-)

But, as the link Eddy just posted shows, the attack is far from being 
only about SSL.


I think we should reconsider the options available to make the domain 
name more visible for http connections.

What about a white version of the hostname display for http sites?


Re: Return of i18n attacks with the help of wildcard certificates

2009-03-03 Thread Jean-Marc Desperrier

Boris Zbarsky wrote:

Jean-Marc Desperrier wrote:

But, and as the link Eddy just reported shows, the attack is far from
being only for SSL.

I think we should reconsider the options available to make the domain
name more visible for http connexions.
What about a white version of the hostname display for http sites ?


Wait. Why does the domain matter at all for non-SSL connections? It's
not like we have any guarantees against MITM here...


Well, we don't have the option of changing the world, and in practice 
people just *do* send important logins and passwords over http connections.


You do have a point though; maybe it's time to consider whether there's 
a way Mozilla could push toward more use of https to protect sensitive data.



Re: Return of i18n attacks with the help of wildcard certificates

2009-02-27 Thread Jean-Marc Desperrier

Boris Zbarsky wrote:

Jean-Marc Desperrier wrote:

Which blacklist ? There's a blacklist inside the browser ?


Yes. See
http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/modules/libpref/src/init/all.jsrev=3.762mark=704-708#704


I'm left with the feeling this really should have been more widely 
documented.


The existence of that protection was really hard to guess from the 
tld-idn-policy-list.html page:
- it did not stop Moxie Marlinspike from finding that U+2571 was not 
protected and using it in an attack demonstration
- it did stop anyone from reviewing the list and telling you U+2571 
was missing.


Once again, security through obscurity failed. I don't know whether it 
was really intended as security through obscurity (it was public in 
Bugzilla/the source code), but the end result looked very similar.


But this means there's a workaround for this attack that's usable 
right now. I'll publish it separately.



[...]

And then you begin to think that maybe just having . would work very
often, that most user have the most cursory look at the url bar, so
that making security depend on the url bar is just bad.


I happen to think so, yes.


Good. But can a small committee find good solutions, or build consensus 
around them?



Work-around for Moxie Marlinspike's Blackhat attack

2009-02-27 Thread Jean-Marc Desperrier
Until a better solution is deployed, here is the workaround to make 
Moxie Marlinspike's attack ineffective.


- select and copy to your clipboard the character shown on its own line 
below:
╱
  This character looks similar to / but is not the same!
  This message is sent in Unicode to allow for proper transmission of 
that character.


- type about:config in Firefox url bar

- type blacklist_chars in the Filter line

- Click to modify the network.IDN.blacklist_chars preference

- Click inside the preference content and paste the character from your 
clipboard.

  Do not overwrite any of the characters already present !

- validate the change

- try to access this url
 http://www.google.xn--comaccountsservicelogin-5j9pia.f.ijjk.cn/

- After it times out, you'll see the following message:
« Firefox can't find the server at 
www.google.xn--comaccountsservicelogin-5j9pia.f.ijjk.cn. »


- Without that change you would have seen :
« Firefox can't find the server at 
www.google.com╱accounts╱servicelogin.f.ijjk.cn »


PS: Marlinspike refers to a character visually similar to ? in his 
presentation. I haven't found which one it is; I've only found ‽. You 
can repeat the process above with ‽.
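To illustrate what the blacklist catches, here is a small sketch that flags hostname labels containing slash lookalikes. The character set below is only an example, far shorter than the real network.IDN.blacklist_chars value, and the function is purely illustrative:

```python
# A few slash lookalikes; the real network.IDN.blacklist_chars value
# contains many more entries (this set is only an example).
CONFUSABLES = {"\u2571", "\u2215", "\u2044", "\u0283"}

def suspicious_labels(hostname: str) -> list[str]:
    """Return the hostname labels that contain a blacklisted lookalike."""
    return [label for label in hostname.split(".")
            if any(ch in CONFUSABLES for ch in label)]
```

Run against the demo hostname above, it flags the single third-level label that merely *looks* like a google.com path, while leaving ordinary hostnames alone.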



Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Jean-Marc Desperrier

Paul Hoffman wrote:

At 7:09 AM +0100 2/24/09, Kaspar Brand wrote:

Kyle Hamilton wrote:

Removal of support for wildcards can't be done without PKIX action, if
one wants to claim conformance to RFC 3280/5280.

Huh? Both these RFCs completely step out of the way when it comes to
wildcard certificates - just read the last paragraph of section
4.2.1.7/4.2.1.6. PKIX never did wildcards in its RFCs.


Which says:
Finally, the semantics of subject alternative names that include
wildcard characters (e.g., as a placeholder for a set of names) are
not addressed by this specification.  Applications with specific
requirements MAY use such names, but they must define the semantics.

At 10:50 PM -0800 2/23/09, Kyle Hamilton wrote:

RFC 2818 (HTTP Over TLS), section 3.1.


RFC 2818 is Informational, not Standards Track. Having said that, it is also 
widely implemented, and is the main reason that the paragraph above is in the 
PKIX spec.


Just one thing: the use of a wildcard certificate was a red herring in 
the implementation of the attack.


What's truly broken is that the current i18n attack protection relies on 
the checking done by the registrar/IDN, and that the registrar/IDN can 
only check the second-level domain name component.


Once they have obtained their domain name, attackers can freely use the 
third-level domain name component to implement any i18n attack they 
want, even if no wildcard certificate is authorized.


This is not to say that wildcard certificates are not bad, evil, or 
anything else, but that nothing truly new has been brought about by 
this attack.


So talk about wildcard certificates all you want, but that is a separate 
discussion from the one about the solution for this new i18n attack.
And the solution will not be wildcard-certificate related, will not be 
easy or obvious, and so needs to be discussed as widely as possible.
Also, there will be no crypto involved in the solution, as it's not 
acceptable to just leave ordinary DNS users out in the cold with regard 
to the attack. So it needs to be discussed in the security group, not 
crypto.



Re: Return of i18n attacks with the help of wildcard certificates

2009-02-26 Thread Jean-Marc Desperrier

Gervase Markham wrote:

On 26/02/09 11:49, Jean-Marc Desperrier wrote:

What's truly broken is that the current i18n attack protection relies on
the checking done by the registrar/IDN, and that the registrar/IDN can
only check the second-level domain name component.


Actually, our protection had a bug (that is, there were some characters
not on our blacklist which should have been). But it's not true that
there was no protection.


Which blacklist ? There's a blacklist inside the browser ?

The opposite seems clearly stated here:
http://www.mozilla.org/projects/security/tld-idn-policy-list.html
« it does not [] require multiple DNS lookups, large character tables in 
the browser [] »


A blacklist at the registrar level cannot protect from attacks on the 
third-level domain name (or fourth, or more).


But this being said, I'm coming to think it would be better to take a 
wider perspective and consider that making security rely on the user 
being able to *validate* the content of the URL bar is not realistic.


You know, you can exclude ╱.

But then you start wondering how many users will *really* notice if 
there's a ∕ or a ⁄, or ʃ, or Ɉ, or ͵ʹ, or ٪, or ޙ ,ހ, 
৴, ૮, ८, །, ༼, ᚋ, ᤣ, ⁒, ⅟, ∠ instead of /.


And then you begin to think that maybe just having . would work very 
often, that most users take only the most cursory look at the url bar, 
so that making security depend on the url bar is just bad.
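As a rough illustration of how crude such filtering has to be, here is a naive look-alike check; the confusable set below is tiny and illustrative, while real protection would need something like the full Unicode TS #39 confusable tables:

```python
# A handful of code points that render close to '/' in many fonts.
# This list is illustrative only, nowhere near exhaustive.
SLASH_LOOKALIKES = {
    "\u2044",  # FRACTION SLASH
    "\u2215",  # DIVISION SLASH
    "\u29f8",  # BIG SOLIDUS
    "\u0338",  # COMBINING LONG SOLIDUS OVERLAY
}

def contains_slash_lookalike(text):
    return any(ch in SLASH_LOOKALIKES for ch in text)

assert contains_slash_lookalike("login\u2044bank")   # renders much like 'login/bank'
assert not contains_slash_lookalike("login/bank")    # a real '/', U+002F
```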



Re: Return of i18n attacks with the help of wildcard certificates

2009-02-23 Thread Jean-Marc Desperrier

Nelson Bolyard wrote:

[...]
Wildcards are not an essential part of this attack.  They merely were a
convenience for this demonstration, but the attack could have been done
without using a wildcard cert. Even eliminating wildcard certs altogether
would not stop this attack.


  This being said : Is there already a bug open for this ? The only thing
  that stops me opening it myself is that it might already exist but be
  security restricted.


Yes, there is, and yes, it is.


So why is it still security-restricted when the problem is out in the 
open?


Yes, the way of exploiting the failure without a wildcard cert is 
apparently not yet out in the open. But:

- it's either a matter of days or hours
- CAs are still issuing wildcard certificates, so attackers don't need to 
know a wildcard is not really required to exploit the failure
- I don't expect there will be any effort to try to stop CAs from issuing 
dangerous wildcard certificates, since that won't solve the problem at large.



Re: Return of i18n attacks with the help of wildcard certificates

2009-02-20 Thread Jean-Marc Desperrier

Eddy Nigg wrote:

On 02/19/2009 03:30 PM, Jean-Marc Desperrier:

Moxie Marlinspike in Black Hat has just demonstrated a very serious i18n
attack using a *.ijjk.cn certificate.
http://www.blackhat.com/presentations/bh-dc-09/Marlinspike/BlackHat-DC-09-Marlinspike-Defeating-SSL.pdf

.cn is authorized for i18n, and the * will match anything, allowing all
the classic i18n based attacks.


This was striking:

Get a domain-validated SSL wildcard cert for *.ijjk.cn


Yes, it's surprising how some of these attacks seem obvious *after* they 
have been done, but it takes so long to realize they can be done.


The MD5 collision between a normal and a *CA* certificate was similar 
for me: how the fuck did we not think earlier, when it was already 
obvious someone would soon create a collision between two real MD5 
certs, that doing just that would make the attack really effective?


This being said: is there already a bug open for this? The only thing 
that stops me opening it myself is that it might already exist but be 
security-restricted.


PS: I think this discussion should be on mozilla.dev.security since 
it's about a security vulnerability, not crypto and not security.policy.

Does everyone share my opinion? (I'm setting the follow-up there)


Re: TLS, if available in Thunderbird

2008-09-18 Thread Jean-Marc Desperrier
Johnathan Nightingale wrote:
 [...]
 - We should turn it ON by default on non-secure connections, because
 even though we know full well that the connection is subject to
 subversion, we have a nearly-free way to marginally reduce the attack
 surface in the background.
 - And yes, there should be some way to turn it off in case you have an
 ancient or broken server that's confused by STARTTLS requests

I'm happy we can agree on this point; it makes me hope someday you'll 
see the light about why the current handling of SSL errors in Fx3 is far 
from perfect (first by realizing that there are *not* only proponents 
of self-signed certs in that camp).

The options should read :
[ ] require STARTTLS
[ ] disable STARTTLS

With none of the two enabled by default.

Getting "require STARTTLS" automatically enabled if the initial 
connection was successfully made in STARTTLS mode would be good.

Maybe "require secure mode (STARTTLS)" and "disable secure mode 
(STARTTLS)" would be even better labels for the average user?
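The behaviors discussed in this thread can be sketched as a small decision function, to make the semantics of each option explicit; the mode names are mine, not Thunderbird's:

```python
def decide_starttls(mode, server_offers_starttls):
    """Decide how to proceed after EHLO, given the user's preference.

    mode: 'require', 'disable', or 'opportunistic' (the proposed
    default: upgrade when the server offers STARTTLS, else stay plain).
    Returns 'upgrade' or 'plain'; raises if 'require' cannot be honored.
    """
    if mode == "require":
        if not server_offers_starttls:
            raise RuntimeError("STARTTLS required but not offered")
        return "upgrade"
    if mode == "disable":
        # for ancient or broken servers confused by STARTTLS requests
        return "plain"
    return "upgrade" if server_offers_starttls else "plain"

assert decide_starttls("opportunistic", True) == "upgrade"
assert decide_starttls("opportunistic", False) == "plain"
assert decide_starttls("disable", True) == "plain"
```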


Re: Decline in firefox usage due to lacking CA certificates

2008-07-24 Thread Jean-Marc Desperrier
Thorsten Becker wrote:
 Nelson B Bolyard schrieb:
 I think the solution that Jean-Marc outlined above would make some
 sense: It would make it a bit easier to visit certain sites, but
 disturb permanently if someone visits a site that has no trust anchor
 in firefox.

 There's a great deal of evidence, and consensus in the UI and security
 community, that UI error/warning dialogs that are easily dismissed
 condition
 users to dismiss them without thinking. Users who do it often enough
 actually reach a point where they are no longer consciously aware that
 they're experiencing the dialog, nor that they're actively dismissing it.
 When that happens, the error dialog loses all value. It might as well
 not exist, because it has no effect.

 Please compare the warning that you receive when you go to
 http://www.mozilla.com/firefox/its-a-trap.html
 with phishing protection enabled with the warning you get if you go to a
 site with a certificate mozilla does not trust. What I do like about the
 phishing warning is that it stays on screen even if you ignore the
 warning and visit the site.

This is exactly the kind of thing I would like to see for SSL, and there 
is no reason why the strategy for bad SSL should be different from the 
strategy for malware/phishing. I hope the non-existing PSM team ;-) can 
take that into consideration. Well, I'll copy this message to 
mozilla.dev.security because the people who implemented the new SSL page 
might be there (as well as more people who have the power to reconsider 
this decision).

Now if we go into some more detail: in the phishing/malware protection 
feature, the initial screen comes back for every link on the site, 
which I think is a bit too much.

Try going to the page below and following some links at the top to see 
that (you cannot test this with its-a-trap.html, because there's no link 
inside the page leading to another malware-flagged page):
http://www.km-jsw.gov.cn/new/html/Gov/zcfg/

It also seems that there is a bug that makes some pages not display 
the warning bar after going through the warning. Try this one to see it:
http://www.km-jsw.gov.cn/new

It just happens that in my initial tests with the malware protection, I 
had met these two behaviors, which made me think that my idea was 
different from the malware protection mechanism currently in place.
But after all, it's really almost exactly the same, with the difference 
of suppressing the possibility of easily removing the warning bar.

PS: I strongly suspect www.km-jsw.gov.cn has been flagged in error (or 
else we need to talk with the Chinese government), which makes it a great 
test site. But I don't know for sure, so access it at your own risk. If 
you need other malware site addresses for testing, 
http://www.malware.com.br/#blocklist has a useful list.


Re: Debian Weak Key Problem

2008-06-06 Thread Jean-Marc Desperrier
Eddy Nigg (StartCom Ltd.) wrote:
 Boris Zbarsky:
 Could maybe try to brute-force the old key until they come up with a
 forged
 certificate that an SSL library accepts?

 No, not really. It requires the possession of the certificate with the
 weak key signed by a CA.

I really don't think that "they will need to have accessed the site 
before it changed its certificate" is a significant mitigating factor 
for such a high risk.

I like the blacklist approach. It would be good to web-crawl to refine 
the estimate, but I think around 99% of sites are using standard key 
sizes. And the people who were knowledgeable enough to have used values 
different from the standard ones in the scripts have certainly already 
changed their cert.
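A sketch of the blacklist check itself; the fingerprints below are placeholders, and the scheme only loosely follows what the real openssl-blacklist package does (truncated SHA-1 fingerprints of the key material, enumerable because the broken generator's output depended only on the process ID):

```python
import hashlib

# Placeholder entries, not real weak-key data.
WEAK_KEY_FINGERPRINTS = {
    "0123456789abcdef0123",
    "fedcba9876543210fedc",
}

def fingerprint(key_bytes):
    # Truncated SHA-1 of the key material, in the spirit of openssl-blacklist.
    return hashlib.sha1(key_bytes).hexdigest()[:20]

def is_weak(key_bytes):
    return fingerprint(key_bytes) in WEAK_KEY_FINGERPRINTS

assert not is_weak(b"some strong key material")
```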




Re: Problem in using javascript in subdirectories

2008-01-13 Thread Jean-Marc Desperrier
Dan Veditz wrote:
 If you change the security.fileuri.origin_policy pref to a traditional
 value does it start working again?
 http://bonsai.mozilla.org/cvsblame.cgi?file=/mozilla/modules/libpref/src/init/all.js&rev=3.717&mark=477-478#477
 
 Try '3' first, and if that's still not working try '4'.
 
 Is there a way to download a small example of the problem? The maps I found
 at the link below were all on-line.
 
 The fixes should not prevent local pages from opening other pages, it just
 prevents reading or writing into them. What sorts of actions are you doing?
 What errors are you getting on the error console? (XBL and XSLT seem to
 have problems due to this change).

Dan, I've seen your comment #73 about allowing by default access to 
foo_files-named subdirectories and the l10n problems with it.
I think allowing the file to access subdirs with the same name and some 
extension appended is really the right direction; it's in fact better 
than allowing access to all other files in the same directory.

Windows seems to recognize both the default _files suffix and the 
localized version. But what if I save the web page on a French Windows, 
and then copy it to an English version? Or if I run a French Firefox on 
an English Windows?

So I think the best would be to implement this by allowing foo to access 
the subdirectory foo_bar whatever the value of bar is (but maybe with 
some reasonable restrictions on the content of bar).
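That rule could be sketched with a hypothetical may_access check (POSIX-style paths, details simplified):

```python
import posixpath

def may_access(page_path, target_path):
    """Allow 'foo.html' to read inside any sibling directory named
    'foo_<anything>' ('foo_files', 'foo_fichiers', ...), which
    sidesteps the localized-suffix problem entirely."""
    page_dir, page_file = posixpath.split(page_path)
    base, _ext = posixpath.splitext(page_file)
    first_component = posixpath.relpath(target_path, page_dir).split("/")[0]
    return first_component.startswith(base + "_")

assert may_access("/tmp/foo.html", "/tmp/foo_files/img.png")
assert may_access("/tmp/foo.html", "/tmp/foo_fichiers/img.png")
assert not may_access("/tmp/foo.html", "/tmp/bar_files/img.png")
```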


Re: New Input type proposal

2008-01-10 Thread Jean-Marc Desperrier
Boris Zbarsky wrote:
 Alexander Mueller wrote:
 However HTTPS does not prevent that the Administrator of the 
 destination server is acquiring the actual plain text data.
 
 So the .value of this input would already be hashed? Otherwise, this 
 argument fails: the page can just grab the value and do whatever it 
 wants with it.

It could be nice to do that, so there would be no way from javascript to 
get the original value the user has typed.

But if you do not consider the content of your page as trusted, that 
means the attacker can just as well replace the 'hash' input field with 
a normal 'password' input field, get its value, and hash it before it's 
sent to the server, making the change completely transparent to the user.
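A sketch of why the substitution is undetectable server-side, assuming a hypothetical hashed input type and SHA-256 (neither of which is specified in the proposal):

```python
import hashlib

def sha256_hex(s):
    return hashlib.sha256(s.encode()).hexdigest()

# Honest page: the hypothetical 'hash' input hashes before submitting.
def honest_submit(password):
    return {"pw": sha256_hex(password)}

# Malicious page: swaps in a plain 'password' field, reads the clear
# text (now stolen), then hashes it itself before submitting.
def malicious_submit(password):
    stolen = password  # the attacker has the plain text at this point
    return {"pw": sha256_hex(stolen)}

# The server-visible request is byte-identical in both cases, so a
# hashed input type cannot protect against an untrusted page.
assert honest_submit("hunter2") == malicious_submit("hunter2")
```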



Re: Problem in using javascript in subdirectories

2008-01-10 Thread Jean-Marc Desperrier
Armin Mueller wrote:
 First i want to say that i am new in this group and that i am not very 
 versed in security questions.
 And, my english not very good, sorry.

My German is certainly worse ;-)

 [...]. These applications 
 should be run locally and on internet. To have a better overview all 
 files are sorted in different subdirectories the index.html is on top.
 This works since several years on all browsers (Firefox, Opera, IE, 
 safari 3). Now with the new FF 3 Beta 2 our application brings security 
 errors. We have the information that this is because of 
 https://bugzilla.mozilla.org/show_bug.cgi?id=230606.
 Now my question. Why does our applications run with all browsers but not 
 with FF 3. Have other browsers a lower security level? 

In a sense, they have a lower security level. IE uses the notion of 
"security zones" and a "mark of the web" on downloaded files, which is 
not seen as an effective concept by the Mozilla team. If Opera and 
Safari 3 do nothing, they are letting js files that run from your hard 
drive do anything with the data that's on your hard drive.

 Is there a 
 possibility to organize the files without running in errors with FF3. 
 Why are subdirectories not allowed?

My install seems to use the value security.fileuri.origin_policy=3 by 
default, which allows subdirectories in an asymmetric manner:
javascript can access files in the subdirectories, but files in the 
subdirectories cannot access files in a higher directory.

Can you make available an example that lets us see what exactly your 
problem is? I tried to download your "schlatterbach" example locally, 
but it doesn't work properly from local disk even with Fx 2; I get a 
"Browser does not support MapViewSVG functionality" error.
Fx3 doesn't even show me the content of the map, and Fx2 does, but 
changing the value of security.fileuri.origin_policy doesn't improve that.
